Tuesday, March 3, 2020

*Security: Linux application level firewall


There are some iptables rules required to make this work.  The core rule permits outbound traffic only for processes running under a dedicated group:
iptables -A OUTPUT -o eth0 -m owner --gid-owner allownet -j ACCEPT

Now the Linux iptables firewall is configured to only allow network access from applications that you have specifically started using the allownet group id.  Since this is not your primary group, you will need to manually start programs and switch the group ID if you want to allow network access.  This process basically means that only applications that you trust and have started correctly will have network access.
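For the above to hold, the ACCEPT rule needs a default-deny companion for everything else.  Here is a minimal sketch of the complete setup (the interface name eth0 and the group-creation step are assumptions - adjust them to your system):
# One-time: create the dedicated group
groupadd allownet

# Always allow loopback so local services keep working
iptables -A OUTPUT -o lo -j ACCEPT

# Allow outbound traffic only for processes running as group allownet
iptables -A OUTPUT -o eth0 -m owner --gid-owner allownet -j ACCEPT

# Reject everything else leaving eth0
iptables -A OUTPUT -o eth0 -j REJECT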

The easiest way to start a process as a different group id is to use the sg command.  The syntax is:
sg <group> "<command>"
Please be aware that the quotes are important, otherwise the sg command will only receive <command> up to the first space character.
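For example, with the host name here being illustrative, the quoted form passes the entire command:
sg allownet "ssh user@host.visideas.com"
Without the quotes, sg would receive only ssh as its command and ignore the rest.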

If you wish to make this a bit easier to remember, you may want to create a script that you can call to start a trusted application with network access.  Personally, I call my script allownet and it looks like this:
#!/bin/bash
bash -c "sg allownet $(printf " %q" "$*")"
This is a very simple script that I have placed in /usr/local/bin - so my default PATH finds it.  Basically it takes any parameters that it receives and wraps them to look like:
sg allownet "<parameters passed to allownet>"
Now, if I want to execute an ssh command, I can simply enter:
allownet ssh user@host.visideas.com
and everything should work perfectly.
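A quick sanity check that the firewall is actually enforcing this (the URL is just an example):
# Run without the allownet group - this should hang or be rejected
curl --max-time 5 https://www.visideas.com

# Run through the wrapper - this should succeed
allownet curl --max-time 5 https://www.visideas.com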

We are now more protected from applications on our Linux system accessing the network without our knowledge.

Wednesday, October 2, 2019

*Limiting access to your site to Cloudflare IP addresses only


NOTE: This page has moved to https://datamakes.com/2019/10/02/limiting-access-to-your-web-site-to-cloudflare-ip-addresses-only/

For those of you that know me, you know that I am very paranoid about security.  While I know how to build a secure system, it is very easy to get it wrong and very hard to get it right.  One of the things that always concerns me is having a lot of people randomly "poking" at my servers and possibly finding errors in my system.

I have turned to Cloudflare to help reduce my exposed attack surface.  They have some great services, many of which are available for free.  This particular post will simply be talking about using their Web Application Firewall (WAF).

Cloudflare acts as a Content Distribution Network (CDN) that actually helps to speed up your web-site.  The short version of this is that a visitor connects to an IP address on the Cloudflare network which acts as a caching proxy to connect back to your actual server.  This is how Cloudflare can provide both security and caching.

What happens if someone has your server's actual IP address instead of the one hosted by Cloudflare?  It allows them to circumvent all of the Cloudflare-provided security and attack your server directly.

If you are running a Linux server, it is actually easy to restrict incoming connections to your server to only be from trusted Cloudflare addresses.  By closing this bypass you are guaranteeing that you always have the protection of Cloudflare in front of your server.  This also means that you can use additional Cloudflare capabilities to minimize the number of unauthorized requests that come to your server, which also reduces server load.

NOTE:  This script should work with any CDN that provides capabilities similar to Cloudflare's - but I have not tested any others.

Here is the script that I use to close this Cloudflare bypass.  Please look for the placeholder values (anything in angle brackets or of the form www.xxx.yyy.zzz) that you need to customize:
#!/bin/sh
# Script taken and modified from https://github.com/Paul-Reed/cloudflare-ufw/blob/master/cloudflare-ufw.sh

# Safety - make sure you are authorized before we do anything
if [ "$(whoami)" != "root" ]; then
   echo ABORT: You must be root to run this script
   exit 1
fi

# Clear out all firewall rules
echo y | ufw reset

# Flush and delete all iptables chains in both the filter and nat tables
/sbin/iptables -F
/sbin/iptables -X
/sbin/iptables -F -t nat
/sbin/iptables -X -t nat

# Safety to make sure that everything is really removed
echo y | /usr/sbin/ufw reset

# Remove backup copies that the reset command generates
rm -f /etc/ufw/*20??????_*

# Disable the firewall until rules are set and assign default policies
/usr/sbin/ufw disable
/usr/sbin/ufw default deny incoming
/usr/sbin/ufw default allow outgoing

# Check to see if OpenVPN rules have been added to UFW already
#   If the rules are not already there, add the rules below to the before.rules file
# NOTE: If you are not using OpenVPN, this block is not needed
if [ $(grep -c "OpenVPN routing" /etc/ufw/before.rules) -eq 0 ];
then
cat << EOF > /etc/ufw/before.rules.nat
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s ww.xx.yy.zz/8 -o <interface> -m comment --comment "OpenVPN routing" -j MASQUERADE
COMMIT
EOF
# Merge the new before.rules.nat into the existing before.rules
cat /etc/ufw/before.rules > /etc/ufw/before.rules.orig
cat /etc/ufw/before.rules.nat /etc/ufw/before.rules.orig > /etc/ufw/before.rules
fi
# NOTE: If you are not using OpenVPN, this block is not needed

# Disable the firewall again to make sure all rules are really purged
/usr/sbin/ufw disable
# Enable the new firewall
echo y | /usr/sbin/ufw enable

# Put in some safety rules so you do not get locked out accidentally
/usr/sbin/ufw allow from www.xxx.yyy.zzz/32  to any port 22,80,443 proto tcp comment "Safety - home computers"
/usr/sbin/ufw allow from 127.0.0.1/32      comment "Safety - localhost"
/usr/sbin/ufw allow from www.xxx.yyy.zzz/32  to any port 22,80,443 proto tcp comment "Safety - home router"
/usr/sbin/ufw allow from www.xxx.yyy.zzz/32  to any port 22,80,443 proto tcp comment "Allow VPN users"
/usr/sbin/ufw allow from any   to any port 1194 proto udp  comment "OpenVPN via UDP"
/usr/sbin/ufw deny  from www.xxx.yyy.zzz to 224.0.0.1   comment "Block multi-cast"
echo Deny applied

# Determine working directory
DIR="$(dirname $(readlink -f $0))"
cd $DIR

# Get the authoritative lists of Cloudflare IP addresses
wget https://www.cloudflare.com/ips-v4 -O ips-v4.tmp
wget https://www.cloudflare.com/ips-v6 -O ips-v6.tmp
mv ips-v4.tmp ips-v4
mv ips-v6.tmp ips-v6

# Loop through all of the Cloudflare IP addresses and authorize them
for cfip in `cat ips-v4`; do /usr/sbin/ufw allow from $cfip to any port 443 proto tcp comment "Allow Cloudflare via TCP"; done
for cfip in `cat ips-v6`; do /usr/sbin/ufw allow from $cfip to any port 443 proto tcp comment "Allow Cloudflare via TCP"; done

#NOTE: You can repeat the above lines to add other rules or change the allowed ports

# Enable the firewall rules
echo y | ufw enable

# Display the nat table to ensure rules are properly added
/sbin/iptables -t nat -L -n
After that, I simply added this script to the root user's crontab - since it must run as root.  I do not know how often the Cloudflare IP ranges change, so I set this to run once per day and on every system reboot.
@reboot               /path/to/script.sh | mail -s "resetUFW results - reboot" me@some_domain.com
   0  4   *   *   *   /path/to/script.sh | mail -s "resetUFW results" me@some_domain.com
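If you want to verify that the Cloudflare rules actually loaded after a run, counting the rules by the comment text set in the script above is a quick check:
sudo ufw status | grep -c "Cloudflare"
The count should roughly match the number of entries in the downloaded ips-v4 and ips-v6 lists.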
Everything has been working great ever since.

Tuesday, February 27, 2018

*High intensity port multiplexing using haproxy


NOTE: This page has moved to https://datamakes.com/2018/02/17/high-intensity-port-sharing-with-haproxy/

As I am sure you already know, IPv4 addresses are in limited supply right now.  The solution to this is IPv6 which greatly enlarges the available address space.  The problem is that IPv6 is not yet deployed everywhere, so there is still a need to figure out how to maximize the usage of your existing IPv4 addresses.

I have a VPS on the Internet which only provides 1 IPv4 address.  Of course, I want to run multiple services on this VPS.  I also want to use well-known ports to decrease the chance of being blocked from accessing my VPS.

There are several tools that can handle port multiplexing.  Probably the most widely used are haproxy and sslh.  Both of these tools are probably available in your Linux package manager.

SSLH is very easy to use but it only multiplexes SSL and SSH sessions.  If you want more than two services on the same port then this tool is not for you.

HAPROXY is a bit more complicated to set up but it is also a lot more configurable.  This post will describe the way that I have haproxy configured to host multiple services.  I will post the full configuration file at the bottom of this post for easy copying and pasting.

NOTE: When you are reading the code below, any text in angle brackets needs to be replaced with values that are appropriate to your installation.

The first step in configuring haproxy is to set up the "frontend".  This is the portion of haproxy that listens for incoming connections.  Your "frontend" might look like this:
frontend ssl
        mode tcp
        bind <ipaddress>:<port>
        tcp-request inspect-delay 3s
        tcp-request content accept if { req.ssl_hello_type 1 }
This basically tells haproxy which IP address and port to listen on for incoming connections.  If you have multiple IP addresses, you can also use 0.0.0.0 to listen on all of them.

The "inspect-delay" tells haproxy how long it should wait to receive data from the client before making a decision about what to do with the incoming connection.  This is required due to the difference in the way that HTTPS and SSH sessions are negotiated.  This is also the way that we distinguish the traffic type.

Once you have this front-end configured, you next need to configure your access control lists which connect your front-end to your backend(s).

The ACL for an SSH session looks like this:
        acl     <ssh label>             payload(0,7)    -m bin 5353482d322e30
This will detect SSH sessions and mark them with <ssh_label>  This is an arbitrary label and you can pick any name you want.  The only requirement is that it matches the rules that connect to the SSH backend.

Your "use_backend" statement for SSH would then look like:
        use_backend <ssh backend name>                     if <ssh label>
As before, the <ssh backend name> is an arbitrary label you can pick.  The only requirement again is that the backend name must match the backend definition.

Since we are now talking about the backend, here is what an SSH backend would look like:

backend openssh
        mode tcp
        timeout server 3h
        server openssh <ip address>:<port>
Typically you would use an IP address of 127.0.0.1 to mean localhost or the local machine.  The default port for SSH is 22.  It is possible to use any IP address and port you want in this definition.  That would be useful if the SSH server is on a different machine on a network behind your haproxy system.
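With the frontend, the SSH ACL, and this backend in place, an SSH client simply connects to port 443 (host name illustrative):
ssh -p 443 user@host.visideas.com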

Now we can add additional services.  It is common for a single web-server to host multiple web-sites, identified by their DNS names.  In the TLS handshake, the client indicates which site it wants via SNI, or Server Name Indication.

Let's start by setting up an ACL for server1.visideas.com:
        acl     <server one acl>               req.ssl_sni             -i server1.visideas.com
Then the matching use_backend rule would look like:
        use_backend <server 1 backend> if <server one acl> { req.ssl_hello_type 1 }
Finally, your matching backend might look like:
backend <server 1 backend>
        mode tcp
        server webserver <server 1 IP>:<server 1 port>
There are also some powerful matching criteria that you can use in your ACLs.  For example, both of these are valid:
        acl     <some acl>            req.ssl_sni             -m end .visideas.com
        acl     <different acl>            req.ssl_sni             -m found
The first line matches any domain name that ends in .visideas.com and marks it with <some acl>.  The second line matches any name and tags it with <different acl>.  Neither line will match a request that was made directly to an IP address, since such a request carries no SNI name.

Another use_backend that is useful is:
        use_backend <another backend>                      if { req.ssl_hello_type 1 }
The ssl_hello_type of 1 indicates an HTTPS request.  Since there is no ACL name after the "if", this rule catches any remaining HTTPS request - including one sent to this haproxy server by IP address.  This means that you can route traffic which arrived by IP address to an alternate service.

The final rule that I will discuss is:
        use_backend <shadowsocks>                 if !{ req.ssl_hello_type 1 } !{ req.len 0 }
This ACL can detect traffic that is meant to be sent to a Shadowsocks server.  This traffic is identified because it does not contain an ssl_hello_type of 1 and it sends traffic immediately without waiting - i.e. the request length is not 0.

There are probably other protocols that this statement would match as well but I am using it for Shadowsocks.

Now, as promised, here is my complete haproxy.conf.  Again, please remember to change everything in angle brackets to match your specific settings.

This configuration allows me to access the following services on port 443:
  1. An nginx server when accessed as https://s.visideas.com/
  2. An Apache2 server when accessed as https://k.visideas.com/ or https://*.visideas.com/ or https://<any DNS name>
  3. A Monit server when accessed as https://monit.visideas.com/
  4. An OpenConnect SSL VPN server when accessed as https://<ip address>/
  5. A Shadowsocks server when accessed using a Shadowsocks client
  6. An SSH server
global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log    global
    mode    tcp
    option  tcplog   
    option    dontlognull
    maxconn  2000
    timeout connect  5000
    timeout client 500000
    timeout server 500000

frontend ssl
    mode tcp
    bind <host IP>:443
    tcp-request inspect-delay 3s
    tcp-request content accept if { req.ssl_hello_type 1 }

    acl    ssh_payload        payload(0,7)    -m bin 5353482d322e30

    acl     www-monit        req.ssl_sni        -i monit.visideas.com
    acl     www-s        req.ssl_sni        -i s.visideas.com
    acl     www-r        req.ssl_sni        -i r.visideas.com
    acl     www-k        req.ssl_sni        -m end .visideas.com
    acl     www-k        req.ssl_sni        -m found

    use_backend www-monit            if www-monit { req.ssl_hello_type 1 }
    use_backend nginx-s        if www-s { req.ssl_hello_type 1 }
    use_backend apache2-k        if www-k { req.ssl_hello_type 1 }
    use_backend ocserv            if { req.ssl_hello_type 1 }
    use_backend openssh            if ssh_payload
    use_backend openssh            if !{ req.ssl_hello_type 1 } { req.len 0 }
    use_backend shadowsocks            if !{ req.ssl_hello_type 1 } !{ req.len 0 }

backend openssh
    mode tcp
    timeout server 3h
    server openssh 127.0.0.1:22

backend ocserv
    mode tcp
    timeout server 24h
    server sslvpn 127.0.0.1:4443

backend nginx-s
    mode tcp
    server webserver 127.0.0.1:8443

backend apache2-k
    mode tcp
    server webserver 127.0.0.1:10443

backend www-monit
    mode tcp
    server webserver 127.0.0.1:2812

backend shadowsocks
    mode tcp
    server socks 127.0.0.1:8530
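Once this configuration is loaded, each routing path can be spot-checked from a remote machine.  A quick sketch, using the host names from my setup and the <host IP> placeholder from the config:
# SNI routing: ask for a specific site on port 443 and check which certificate comes back
openssl s_client -connect <host IP>:443 -servername s.visideas.com < /dev/null

# SSH through the same port
ssh -p 443 user@<host IP>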
I hope this helps you with maximizing the value of your shared IPv4 addresses with haproxy.

Friday, March 17, 2017

*Monitoring Google Contacts for changes - are you losing contacts?


NOTE: This page has moved to https://datamakes.com/2017/03/17/monitoring-google-contacts-for-changes-are-you-losing-contacts/

Have you ever thought you are losing contacts stored in Google?  That wonderful moment when you are trying to dial your phone - and the person you want is not in your address book.  Then you think about it and realize that you definitely had them in there before... how frustrating.

I believe that your contact list is one of the most important pieces of personal information you keep in your phone.  Contacts accumulate over time, and you don't always notice when one disappears until it is too late.

To help combat this problem, I have written a set of bash scripts which run on Linux to help you recognize a problem and provide you a way to correct it.

This solution is specifically for backing up Google Contacts - but the concepts would work for any contact storage engine where you can get vcards.

To make this work, you will need to get vdirsyncer installed properly.  There are very complete instructions at the vdirsyncer web-site: https://vdirsyncer.pimutils.org/en/stable/installation.html

Please pay specific attention to the "Google" section in https://vdirsyncer.pimutils.org/en/stable/config.html - specifically, you will need to create an API key (client_id and client_secret) and install an additional python module to access Google.  All of this is fully documented so you should be able to follow those instructions.

Once you have vdirsyncer installed, let's speed things up and jump right to the configuration.  Here is my vdirsyncer config file:
[general]
status_path = "<path>/status"

[storage googlecontacts]
type = "google_contacts"
token_file = "<path>/google.token"
client_id = "<client_id from the Google API console>"
client_secret = "<client_secret from the Google API console>"
read_only = "true"

[storage vcf]
type = "filesystem"
path = "<path>/contacts"
fileext = ".vcf"

[pair google]
a = "googlecontacts"
b = "vcf"
collections = ["from a"]
conflict_resolution = "a wins"
Now for a discussion of the important points in this config file:
  • You must substitute everything in < > with proper values
  • vdirsyncer seems to really require all of the quotation marks (") above - leave them in
  • The file <path>/google.token provides access to your Google account via an OAuth token - protect this file
  • The read_only parameter in the googlecontacts storage configuration means that no changes from your local PC will ever appear on Google.  NOTE: There should never be changes on your local system unless something goes horribly wrong - this is just a safety measure
  • The "a wins" setting likewise specifies that Google Contacts is the authoritative source of information
Once you create this configuration file, you will need to perform the one-time step of running:
vdirsyncer -c <config file> discover
This step will either automatically start a browser for you to authenticate with Google or - if a browser cannot be started - provide a URL that you must browse to.  Once you authenticate, you will be given a long complex string to paste into your vdirsyncer window.  This is used to generate your OAuth token, which is stored in the token file specified above.

At this point, you probably want to run
vdirsyncer -c <config file> sync
just to make sure everything is working.  If all goes well, you will end up with a bunch of vcard (.vcf) files in your <path>/contacts directory.

Now that everything is working, let's automate this.  The script below will notify you when:
  • A contact is deleted - the deleted contact is stored for your review
  • A contact is added
  • A contact is changed - both the old and new versions are stored and dated for your review
#!/bin/bash

BASEDIR=<path>
TODAYDIR=$BASEDIR/contacts/default
YESTERDAYDIR=$BASEDIR/yesterday
CONFIG=$BASEDIR/<config file>.conf
CHANGEDIR=$BASEDIR/changes
YESTERDAY=`date +%Y-%m-%d -d "yesterday"`
TODAY=`date +%Y-%m-%d`
CHANGED=0

/usr/local/bin/vdirsyncer -c $CONFIG sync | egrep -v "Syncing "

#Search for deletions
for vcf in `ls $YESTERDAYDIR/*.vcf`
do
   card=`echo $vcf | xargs -n 1 basename`
   NAME=`cat $YESTERDAYDIR/$card | egrep ^FN: | cut -f2 -d: | sed -e 's/ /_/g'`
   if [ ! -f "$TODAYDIR/$card" ]; then
      echo DELETED: $NAME \($card\)
      mv $YESTERDAYDIR/$card $CHANGEDIR/$NAME.$card.DELETED.$TODAY
      CHANGED=1
   fi
done

#Search for additions
for vcf in `ls $TODAYDIR/*.vcf`
do
   card=`echo $vcf | xargs -n 1 basename`
   if [ ! -f "$YESTERDAYDIR/$card" ]; then
      NAME=`cat $TODAYDIR/$card | egrep ^FN: | cut -f2 -d: | sed -e 's/ /_/g'`
      echo ADDED: $NAME \($card\)
      CHANGED=1
   fi
done

#Search for changes
for vcf in `ls $TODAYDIR/*.vcf`
do
   card=`echo $vcf | xargs -n 1 basename`
   if [ -f $YESTERDAYDIR/$card ]; then
      if [ `stat --printf="%s" $TODAYDIR/$card` -ne `stat --printf="%s" $YESTERDAYDIR/$card` ]; then
         NAME=`cat $YESTERDAYDIR/$card | egrep ^FN: | cut -f2 -d: | sed -e 's/ /_/g'`
         echo CHANGED: $NAME \($card\)
         cp $TODAYDIR/$card $CHANGEDIR/$NAME.$card.CHANGE.$TODAY
         cp $YESTERDAYDIR/$card $CHANGEDIR/$NAME.$card.CHANGE.$YESTERDAY
         CHANGED=1
      fi
   fi
done

# Copy all of today's entries into yesterday's directory - for comparison tomorrow
cp $TODAYDIR/*.vcf $YESTERDAYDIR

if [ "$CHANGED" == "1" ]; then
   <any code you want to specifically execute to notify you of a change - remember cron will automatically E-Mail you a log of this session, if any changes were found.  This is for any additional notification options.  For example, I use an API to send myself a text message>
fi
You should only have to change the items in angle brackets, make sure all of the directories exist, and add this script to your crontab.
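For example (the path and schedule are just illustrative), a crontab entry like this runs the comparison every morning and lets cron mail you the output:
30 5 * * * /path/to/contact-check.sh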

Now you will automatically be notified when your contacts change.  If you see any unexpected changes, you will have all of the necessary information to restore the missing data.

Rest easy knowing that your contacts are safe.

Sunday, October 2, 2016

*Securing CloudFlare's FlexibleSSL even further using UFW


NOTE: This page has moved to https://datamakes.com/2016/10/02/securing-cloudflares-flexiblessl-even-farther-with-ufw/

In previous posts, I have mentioned how I am using CloudFlare's Flexible SSL to help secure this site.  From those posts you will remember that Flexible SSL means that your browsing session is encrypted between your browser and CloudFlare but possibly not encrypted between CloudFlare and the actual server which holds the data.  This causes the data flow to look like:

Browser --HTTPS--> CloudFlare --HTTP--> Apache server
In the case of this web-site, for example, Blogger does not support HTTPS on custom domains, so the HTTP connection shown above exists here as well.
NOTE: As mentioned earlier, this site does not contain any personal information or allow anyone to log in.  Therefore the underlying HTTP connection is of no security consequence.  If you can hack into this site, I encourage you to submit a report to the Google Bug Bounty program and get paid for your discovery.
Those of us that understand networking can see from the diagram above that it should be possible to bypass CloudFlare and get directly to the unencrypted HTTP port on the Apache server.  This is indeed true - if you can determine the actual IP address of the Apache server.

This is potentially a security hole that needs to be patched.  Fortunately, through the magic of scripting and the Uncomplicated Firewall (UFW) - or any other firewall - we can close this hole.

If you are running on a Linux server, take a look at this little script I have put together.  The basic flow of this script is to:
1) Download a list of known CloudFlare IP addresses - provided by CloudFlare
2) Parse each entry into a UFW command to permit access from CloudFlare to a specific port

The results of this script are that the Apache server will only accept connections coming from the CloudFlare network.  This does not encrypt the connection between CloudFlare and your Apache server, but it does prevent anyone from bypassing CloudFlare.

Here is the script:
#!/bin/bash

function to_int {
    local -i num="10#${1}"
    echo "${num}"
}

function port_is_ok {
    local port="$1"
    local -i port_num=$(to_int "${port}" 2>/dev/null)

    # Valid ports are 1-65535; anything else fails the check
    if (( port_num < 1 || port_num > 65535 )) ; then
        echo "*** ${port} is not a valid port" 1>&2
        return 1
    fi

    return 0
}

function addRules {
    for a in `curl -s https://www.cloudflare.com/ips-v4`
    do
       #echo ufw allow to any port $PORT proto tcp from $a
       ufw allow to any port $PORT proto tcp from $a
    done
}

function removeRules {
    for a in `ufw status numbered | grep $PORT/tcp | cut -c45-`
    do
       #echo ufw --force delete allow to any port $PORT proto tcp from $a
       ufw --force delete allow to any port $PORT proto tcp from $a
    done
}

if [ "`whoami`" != "root" ]; then
   echo ABORT: This script must be run as root
   exit 1
fi

if ! port_is_ok "$2"; then
   echo "Usage: $0 <add|remove|refresh> <port number>"
   exit 0
fi

PORT=$2
case "$1" in
   "add")
      addRules
      ;;
   "remove")
      removeRules
      ;;
   "refresh")
      removeRules
      addRules
      ;;
   *)
      echo "ABORT: Usage $0 <add|remove|refresh> <port>"
      exit 1
      ;;
esac
You can see from the "usage" line, that the command format for this script is:
Usage: <script> <add|remove|refresh> <port number>
Here is what each of the command means:
1) add is to allow access from CloudFlare to a port
2) remove is to remove access from CloudFlare to a port
3) refresh is a combination of remove then add - to make sure that you have all of the current CloudFlare IP addresses
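For example, a typical invocation (script name assumed) to protect a web server looks like:
sudo ./cloudflare-ufw.sh add 443       # allow CloudFlare to reach port 443
sudo ./cloudflare-ufw.sh refresh 443   # re-sync the allowed ranges later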

A few warnings about this script:
1) It must be run as root - but could be modified to allow anyone who can issue ufw or iptables commands via sudo
2) It only processes CloudFlare IPv4 addresses - but can be modified to handle IPv6 as well (the URL for CloudFlare's IPv6 addresses is https://www.cloudflare.com/ips-v6 - see the sketch below)
3) It currently issues ufw commands but could be easily modified to support iptables (or any other firewall) commands
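As a sketch of warning 2, the same loop pattern used in the addRules function above works for the IPv6 list:
for a in `curl -s https://www.cloudflare.com/ips-v6`
do
   ufw allow to any port $PORT proto tcp from $a
done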

Good luck - and enjoy your more secure CloudFlare network.

Tuesday, April 26, 2016

*Preventing file changes on Linux


NOTE: This page has moved to https://datamakes.com/2016/04/26/preventing-file-changes-on-linux/

Today's tip will be short - but it can be very useful.  Simply put, if you want to prevent a file from being changed on a Linux file system, there is an immutable option.  All you have to do is type (as root):
chattr +i <filename>
Now, of course, you can undo this by using
chattr -i <filename>
So, you may be asking, why would I want to make a file unchangeable?

I will answer that by describing the specific case that caused me to look for this.  I was in the process of trying to enable DNSSEC on my Linux computer.  To handle DNSSEC validation, I installed the unbound DNS resolver (a topic for a different post).

I tried to make configuration changes to both dhclient and resolvconf to ensure I was always using unbound.  Neither change forced the VPN client I was using from Private Internet Access to use 127.0.0.1 as the DNS server.  This leads me to believe that the Private Internet Access client writes /etc/resolv.conf directly - completely bypassing unbound.

The solution - immutable files.  Basically, I locked /etc/resolv.conf so that it can't be changed!  Now, I just have to remember to unlock it if I ever run a VPN application where I really do want to honor the DNS servers of the VPN provider - such as for a corporate network.
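A minimal sketch of how that looks in practice, assuming unbound is listening on 127.0.0.1 as described above:
# Point DNS at the local unbound resolver, then lock the file
echo "nameserver 127.0.0.1" > /etc/resolv.conf
chattr +i /etc/resolv.conf

# Verify the attribute - look for the 'i' flag in the output
lsattr /etc/resolv.conf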


Saturday, April 2, 2016

*Web Knocking - an HTTP(S) based equivalent of Port Knocking


NOTE: This page has moved to https://datamakes.com/2016/03/03/web-knocking-an-https-based-equivalent-of-port-knocking/

A few weeks ago, I was trying to figure out a way that I could remotely trigger a computer in my home to perform an automated task.  For those that know me, you already know that I am extremely paranoid about providing remote access to anything, since it is very easy to misconfigure remote access and create large security holes.

I thought about trying to use port knocking as the trigger.  For those not familiar with port knocking, the basic idea is that you can detect incoming packets (either TCP or UDP) to specific ports.  If you receive the correct sequence of port connection requests, even if the packets do not ever get received by a server, then you can trigger automated tasks.

Port knocking might be used to open a port on a firewall, for example.  The idea is that if you are away from your network and want to access a server, you send packets to ports 1234/udp, 5678/tcp, 8442/udp (or whatever sequence you like) and then a script allows access to port 443/tcp for your remote IP address.  The theory is that since only you know the correct sequence of ports, you should be the only one able to gain access to your server.

But I ran into a problem with port knocking.  I quickly found out that depending on the network I was on, I could not always send packets destined for the ports in my port knocking sequence.  This could be due to the airport proxy system, a corporate network restriction, or a host of other network limiting techniques.

So I found a work-around to my problem which I am calling "web knocking".  The basic idea is the same, except I am using mod_security within an Apache server to be the receiver of my incoming requests.  In mod_security, I wrote a rule that looks like:
SecRule REQUEST_FILENAME "^/trigger.php" "phase:1,id:'32100',drop,msg:'Automation triggered',exec:/home/my/automatic/script.sh"
Let's take a quick look at this rule to see exactly what it does.
  1. The REQUEST_FILENAME clause tells us to match on just the filename portion of the URL requested
  2. The "^trigger.php" is a regex which is matched against the filename requested.  For those not familiar with regex's, the ^ means "beginning of the string" - so this would match /trigger.php but not /my_trigger.php
  3. The "phase:1" portion tells mod_security that we want this rule executed in the early stages of the HTTP connection
  4. The ID number is whatever you choose to appear in the logs
  5. The "drop" tells mod_security to immediately drop the connection with no further reason provided to the client.  This is key.  Nobody will receive confirmation that the file does or does not exist or that any action was taken based on their connection attempt.
  6. The "exec" section tells mod_security to execute a specific script to take whatever action you desired.
Now, an interesting trick with this rule is that /trigger.php does not have to exist on your server to make this work.  In fact, it is probably better if the file does not exist, so you don't accidentally run anything you weren't expecting.

It is also important to recognize that the script will be run under the ID used to run the web server.  This could be www, nobody, or something else depending on your configuration.  You need to make sure that the web server has proper permissions to run the script and whatever is inside the script.
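As a sketch, assuming a Debian-style Apache running as www-data and the script path from the rule above, the ownership and mode need to line up:
chown www-data:www-data /home/my/automatic/script.sh
chmod 750 /home/my/automatic/script.sh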

You will also have access to portions of the HTTP request data as environment variables.  To determine exactly what your web server provides to the script, I would suggest adding "env > /dev/shm/vars" (or something similar) so you can see all of the environment variables that exist for your use.

You can make the scripts as complex as you want, including chaining them together.  For example, you could have /trigger1 run a script that creates a temp file.  The script that triggers when you request /trigger2 could check for the existence of that temporary file and not run if the temp file does not exist.
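A minimal sketch of that chaining idea (paths and file names are arbitrary choices):

The script behind /trigger1:
#!/bin/bash
# First knock - leave a marker for the second stage
touch /dev/shm/knock1

The script behind /trigger2:
#!/bin/bash
# Second knock - only act if the first knock already happened
if [ -f /dev/shm/knock1 ]; then
   rm /dev/shm/knock1
   /home/my/automatic/real-action.sh
fi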

Just remember that even a successful request from you will result in mod_security dropping the connection.  Therefore, you won't get confirmation that your request was received - but you could have your script send your cell phone an SMS (or take any other action) so you know that your automation triggered properly.

Happy web knocking!