High intensity port multiplexing using haproxy

Tuesday, February 27, 2018
As I am sure you already know, IPv4 addresses are in limited supply.  The solution to this is IPv6, which greatly enlarges the available address space.  The problem is that IPv6 is not yet deployed everywhere, so there is still a need to figure out how to maximize the usage of your existing IPv4 addresses.

I have a VPS on the Internet which only provides 1 IPv4 address.  Of course, I want to run multiple services on this VPS.  I also want to use well-known ports to decrease the chance of being blocked from accessing my VPS.

There are several tools that can handle port multiplexing.  Probably among the most widely used are haproxy and sslh.  Both of these tools are probably available in your Linux package manager.

SSLH is very easy to use but it only multiplexes SSL and SSH sessions.  If you want more than 2 services on the same port then this tool is not for you.
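
For comparison, a typical sslh invocation looks something like this (treat it as a sketch - flag names vary a bit between sslh versions, and newer releases spell --ssl as --tls, so check your man page):
sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --ssl 127.0.0.1:8443
Anything that looks like TLS is forwarded to 127.0.0.1:8443 and everything else is treated as SSH - there is simply no room for a third or fourth service on the same port.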

HAPROXY is a bit more complicated to set up but it is also a lot more configurable.  This post will describe the way that I have haproxy configured to host multiple services.  I will post the full configuration file at the bottom of this post for easy copying and pasting.

NOTE: When you are reading the code below, any text in angle brackets (like <this>) needs to be replaced with values that are appropriate to your installation.

The first step in configuring haproxy is to set up the "frontend".  This is the portion of haproxy that listens for incoming connections.  Your "frontend" might look like this:
frontend ssl
        mode tcp
        bind <ipaddress>:<port>
        tcp-request inspect-delay 3s
        tcp-request content accept if { req.ssl_hello_type 1 }
This basically tells haproxy which IP address and port to listen on for incoming connections.  You can also bind to 0.0.0.0 to listen on every available IP address, if you have more than one.

The "inspect-delay" tells haproxy how long it should wait to receive data from the client before making a decision about what to do with the incoming connection.  This is required due to the difference in the way that HTTPS and SSH sessions are negotiated.  This is also the way that we distinguish the traffic type.

Once you have this front-end configured, you next need to configure your access control lists which connect your front-end to your backend(s).

The ACL for an SSH session looks like this:
        acl     <ssh label>             payload(0,7)    -m bin 5353482d322e30
This will detect SSH sessions and mark them with <ssh label>.  This is an arbitrary label and you can pick any name you want.  The only requirement is that it matches the rules that connect to the SSH backend.  (The hex string 5353482d322e30 is simply "SSH-2.0" in ASCII.)
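
If you are curious where that hex string comes from, you can reproduce it yourself (assuming the xxd utility is installed):
echo -n "SSH-2.0" | xxd -p
which prints 5353482d322e30 - the banner that an SSH client sends at the start of a session.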

Your "use_backend" statement for SSH would then look like:
        use_backend <ssh backend name>                     if <ssh label>
As before, the <ssh backend name> is an arbitrary label you can pick.  The only requirement again is that the backend name must match the backend definition.

Since we are now talking about the backend, here is what an SSH backend would look like:

backend openssh
        mode tcp
        timeout server 3h
        server openssh <ip address>:<port>
Typically you would use the IP address 127.0.0.1, meaning localhost (the local machine).  The default port for SSH is 22.  You can use any IP address and port you want in this definition, which is useful if the SSH server is on a different machine on a network behind your haproxy system.
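
For example, a backend that forwards SSH to a different machine on an internal network might look like this (the 192.168.1.50 address is purely illustrative):
backend openssh-internal
        mode tcp
        timeout server 3h
        server openssh 192.168.1.50:22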

Now we can add additional services.  It is common for a single web-server to host multiple web-sites.  These web-sites are identified by their DNS name.  With HTTPS, the client indicates which site it wants using SNI (Server Name Indication), which haproxy can read without decrypting the traffic.

Let's start by setting up an ACL for server1.visideas.com:
        acl     <server one>               req.ssl_sni             -i server1.visideas.com
Then the matching use_backend rule would look like:
        use_backend <server 1 backend> if <server one> { req.ssl_hello_type 1 }
Finally, your matching backend might look like:
backend <server 1 backend>
        mode tcp
        server webserver <server 1 IP>:<server 1 port>
There are also some powerful matching criteria that you can use in your ACLs.  For example, both of these are valid:
        acl     <some acl>            req.ssl_sni             -m end .visideas.com
        acl     <different acl>            req.ssl_sni             -m found
The first line matches any domain name that ends in .visideas.com and marks it with <some acl>.  The second line matches any request that contains an SNI name at all and tags it with <different acl>.  Neither line will match requests made directly to an IP address, since those requests carry no SNI.
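
A convenient way to test these rules is to hand-craft the SNI value with OpenSSL's built-in client (assuming the openssl command is available):
openssl s_client -connect <host IP>:443 -servername server1.visideas.com
The -servername option controls the SNI field, so you can try each of your hostnames and confirm that the expected backend answers.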

Another use_backend that is useful is:
        use_backend <another backend>                      if { req.ssl_hello_type 1 }
An ssl_hello_type of 1 indicates a TLS client hello, i.e. an HTTPS request.  Since there is no ACL name after the "if", this rule matches any remaining HTTPS request - in practice, requests that were sent to this haproxy server by IP address, as long as the rule appears after the more specific SNI rules.  This means that you can route traffic which came in by IP address to an alternate service.

The final routing rule that I will discuss is:
        use_backend <shadowsocks>                 if !{ req.ssl_hello_type 1 } !{ req.len 0 }
This rule detects traffic that is meant for a Shadowsocks server.  The traffic is identified because it does not contain an ssl_hello_type of 1 and the client sends data immediately without waiting - i.e. the request length is not 0.

There are probably other protocols that this statement would match as well but I am using it for Shadowsocks.

Now, as promised, here is my complete haproxy.conf.  Again, please remember to change everything in angle brackets to match your specific settings.

This configuration allows me to access the following services on port 443:
  1. An nginx server when accessed as https://s.visideas.com/
  2. An Apache2 server when accessed as https://k.visideas.com/ or https://*.visideas.com/ or https://<any DNS name>
  3. A Monit server when accessed as https://monit.visideas.com/
  4. An OpenConnect SSL VPN server when accessed as https://<ip address>/
  5. A Shadowsocks server when accessed using a Shadowsocks client
  6. An SSH server
global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon

defaults
    log    global
    mode    tcp
    option  tcplog   
    option    dontlognull
    maxconn  2000
    timeout connect  5000
    timeout client 500000
    timeout server 500000

frontend ssl
    mode tcp
    bind <host IP>:443
    tcp-request inspect-delay 3s
    tcp-request content accept if { req.ssl_hello_type 1 }

    acl    ssh_payload        payload(0,7)    -m bin 5353482d322e30

    acl     www-monit        req.ssl_sni        -i monit.visideas.com
    acl     www-s        req.ssl_sni        -i s.visideas.com
    acl     www-r        req.ssl_sni        -i r.visideas.com
    acl     www-k        req.ssl_sni        -m end .visideas.com
    acl     www-k        req.ssl_sni        -m found

    use_backend www-monit            if www-monit { req.ssl_hello_type 1 }
    use_backend nginx-s        if www-s { req.ssl_hello_type 1 }
    use_backend apache2-k        if www-k { req.ssl_hello_type 1 }
    use_backend ocserv            if { req.ssl_hello_type 1 }
    use_backend openssh            if ssh_payload
    use_backend openssh            if !{ req.ssl_hello_type 1 } { req.len 0 }
    use_backend shadowsocks            if !{ req.ssl_hello_type 1 } !{ req.len 0 }

backend openssh
    mode tcp
    timeout server 3h
    server openssh 127.0.0.1:22

backend ocserv
    mode tcp
    timeout server 24h
    server sslvpn 127.0.0.1:4443

backend nginx-s
    mode tcp
    server webserver 127.0.0.1:8443

backend apache2-k
    mode tcp
    server webserver 127.0.0.1:10443

backend www-monit
    mode tcp
    server webserver 127.0.0.1:2812

backend shadowsocks
    mode tcp
    server socks 127.0.0.1:8530
I hope this helps you maximize the value of your scarce IPv4 addresses with haproxy.


Monitoring Google Contacts for changes - are you losing contacts?

Friday, March 17, 2017
Have you ever thought you are losing contacts stored in Google?  That wonderful moment when you are trying to dial your phone - and the person you want is not in your address book.  Then you think about it and realize that you definitely had them in there before... how frustrating.

I believe that your contact list is probably one of the most important pieces of personal information you keep in your phone.  Contacts last over time and you don't always notice when they disappear until it is too late.

To help combat this problem, I have written a set of bash scripts which run on Linux to help you recognize a problem and provide you a way to correct it.

This solution is specifically for backing up Google Contacts - but the concepts would work for any contact storage engine where you can get vcards.

To make this work, you will need to get vdirsyncer installed properly.  There are very complete instructions at the vdirsyncer web-site at https://vdirsyncer.pimutils.org/en/stable/installation.html  

Please pay specific attention to the "Google" section in https://vdirsyncer.pimutils.org/en/stable/config.html  Specifically, you will need to create an API key (client_id and client_secret) and install an additional Python module to access Google.  All of this is fully documented, so you should be able to follow those instructions.

Once you have vdirsyncer installed, let's speed things up and jump right to the configuration.  Here is my vdirsyncer config file:
[general]
status_path = "<path>/status"

[storage googlecontacts]
type = "google_contacts"
token_file = "<path>/google.token"
client_id = "<client_id from the Google API console>"
client_secret = "<client_secret from the Google API console>"
read_only = "true"

[storage vcf]
type = "filesystem"
path = "<path>/contacts"
fileext = ".vcf"

[pair google]
a = "googlecontacts"
b = "vcf"
collections = ["from a"]
conflict_resolution = "a wins"
Now for a discussion of the important points in this config file:
  • You must substitute everything in < > with proper values
  • vdirsyncer seems to really require all of the quotation marks (") above - leave them in
  • The file <path>/google.token provides access to your Google account via an OAuth token - protect this file
  • The read_only parameter in the googlecontacts storage configuration means that no changes from your local PC will ever appear on Google.  NOTE: There should never be changes on your local system unless something goes horribly wrong... this is just a safety measure
  • The "a wins" also specifies that Google Contacts is the authoritative source of information
Once you create this configuration file, you will need to perform a one-time only step of running
vdirsyncer -c <config file> discover
This step will either automatically start a browser for you to authenticate with Google - or if a browser can not be started - a URL will be provided that you must browse to.  Once you authenticate, you will be given a long complex string which you will paste into your vdirsyncer window.  This is used to generate your OAuth token, which is stored in the token file specified above.

At this point in time, you probably want to run
vdirsyncer -c <config file> sync
just to make sure everything is working.  If everything goes well, you will end up with a bunch of vcard (.vcf) files in your <path>/contacts directory.
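
If you want a quick sanity check on how many contacts were pulled down (assuming they land in the "default" collection, as they do in the script below), you can count the files:
ls <path>/contacts/default/*.vcf | wc -l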

Now that everything is working, let's automate this.  The script below will notify you when:
  • A contact is deleted - the deleted contact is stored for your review
  • A contact is added
  • A contact is changed - both the old and new versions are stored and dated for your review
#!/bin/bash

BASEDIR=<path>
TODAYDIR=$BASEDIR/contacts/default
YESTERDAYDIR=$BASEDIR/yesterday
CONFIG=$BASEDIR/<config file>.conf
CHANGEDIR=$BASEDIR/changes
YESTERDAY=`date +%Y-%m-%d -d "yesterday"`
TODAY=`date +%Y-%m-%d`
CHANGED=0

/usr/local/bin/vdirsyncer -c $CONFIG sync | egrep -v "Syncing "

#Search for deletions
for vcf in `ls $YESTERDAYDIR/*.vcf`
do
   card=`echo $vcf | xargs -n 1 basename`
   NAME=`cat $YESTERDAYDIR/$card | egrep ^FN: | cut -f2 -d: | sed -e 's/ /_/g'`
   if [ ! -f "$TODAYDIR/$card" ]; then
      echo DELETED: $NAME \($card\)
      mv $YESTERDAYDIR/$card $CHANGEDIR/$NAME.$card.DELETED.$TODAY
      CHANGED=1
   fi
done

#Search for additions
for vcf in `ls $TODAYDIR/*.vcf`
do
   card=`echo $vcf | xargs -n 1 basename`
   if [ ! -f "$YESTERDAYDIR/$card" ]; then
      NAME=`cat $TODAYDIR/$card | egrep ^FN: | cut -f2 -d: | sed -e 's/ /_/g'`
      echo ADDED: $NAME \($card\)
      CHANGED=1
   fi
done

#Search for changes
for vcf in `ls $TODAYDIR/*.vcf`
do
   card=`echo $vcf | xargs -n 1 basename`
   if [ -f $YESTERDAYDIR/$card ]; then
      if [ `stat --printf="%s" $TODAYDIR/$card` -ne `stat --printf="%s" $YESTERDAYDIR/$card` ]; then
         NAME=`cat $YESTERDAYDIR/$card | egrep ^FN: | cut -f2 -d: | sed -e 's/ /_/g'`
         echo CHANGED: $NAME \($card\)
         cp $TODAYDIR/$card $CHANGEDIR/$NAME.$card.CHANGE.$TODAY
         cp $YESTERDAYDIR/$card $CHANGEDIR/$NAME.$card.CHANGE.$YESTERDAY
         CHANGED=1
      fi
   fi
done

# Copy all of todays entries into yesterdays directory - for comparison tomorrow
cp $TODAYDIR/*.vcf $YESTERDAYDIR

if [ "$CHANGED" == "1" ]; then
   <any code you want to specifically execute to notify you of a change - remember cron will automatically E-Mail you a log of this session, if any changes were found.  This is for any additional notification options.  For example, I use an API to send myself a text message>
fi
You should only have to change the items in angle brackets, make sure all of the directories exist, and add this script to your crontab.
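
As a sketch, a crontab entry that runs the check every morning at 6:30 might look like this (the path and script name are placeholders for wherever you saved the script):
30 6 * * * /home/<user>/contacts/check-contacts.sh
As noted in the script, cron will automatically E-Mail you the output whenever changes are detected.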

Now you will automatically be notified when your contacts change.  If you see any unexpected changes, you will have all of the necessary information to restore the missing data.

Rest easy knowing that your contacts are safe.

Securing CloudFlare's FlexibleSSL even further using UFW

Sunday, October 2, 2016
In previous posts, I have mentioned how I am using CloudFlare's Flexible SSL to help secure this site.  From those posts you will remember that Flexible SSL means that your browsing session is encrypted between your browser and CloudFlare but possibly not encrypted between CloudFlare and the actual server which holds the data.  This causes the data flow to look like:

Browser <--HTTPS--> CloudFlare <--HTTP--> Origin web server

In the case of this web-site, for example, Blogger does not support HTTPS on custom domains, so the HTTP connection shown above exists here as well.
NOTE: As mentioned earlier, this site does not contain any personal information or allow anyone to login.  Therefore the underlying HTTP connection is of no security consequence.  If you can hack into this site, I encourage you to submit a report to the Google Bug Bounty program and get paid for your discovery.
Those of us who understand networking can see from the flow above that it should be possible to bypass CloudFlare and get directly to the unencrypted HTTP port on the Apache server.  This is indeed true if you can determine the actual IP address of the Apache server.

This could potentially be a security hole that needs to be patched.  Fortunately, through the magic of scripting and the Uncomplicated Firewall (UFW), or any other firewall, we can close this hole.

If you are running on a Linux server, take a look at this little script I have put together.  The basic flow of this script is to:
1) Download a list of known CloudFlare IP addresses - provided by CloudFlare
2) Parse each entry into a UFW command to permit access from CloudFlare to a specific port

The results of this script are that the Apache server will only accept connections coming from the CloudFlare network.  This does not encrypt the connection between CloudFlare and your Apache server, but it does prevent anyone from bypassing CloudFlare.

Here is the script:
#!/bin/bash

function to_int {
    local -i num="10#${1}"
    echo "${num}"
}

function port_is_ok {
    local port="$1"
    local -i port_num=$(to_int "${port}" 2>/dev/null)

    if (( $port_num < 1 || $port_num > 65535 )) ; then
        echo "*** ${port} is not a valid port" 1>&2
        port_is_ok=0
        return 0
    fi

    #echo 'ok'
    port_is_ok=1
    return 1
}

function addRules {
    for a in `curl -s https://www.cloudflare.com/ips-v4`
    do
       #echo ufw allow to any port $PORT proto tcp from $a
       ufw allow to any port $PORT proto tcp from $a
    done
}

function removeRules {
    for a in `ufw status numbered | grep $PORT/tcp | cut -c45-`
    do
       #echo ufw --force delete allow to any port $PORT proto tcp from $a
       ufw --force delete allow to any port $PORT proto tcp from $a
    done
}

if [ "`whoami`" != "root" ]; then
   echo ABORT: This script must be run as root
   exit 1
fi

port_is_ok $2
#echo $port_is_ok
if [ $port_is_ok -eq 0 ]; then
   echo "Usage: $0 <add|remove|refresh> <port number>"
   exit 0
fi

PORT=$2
case "$1" in
   "add")
      addRules
      ;;
   "remove")
      removeRules
      ;;
   "refresh")
      removeRules
      addRules
      ;;
   *)
      echo "ABORT: Usage $0 <add|remove|refresh> <port>"
      exit 1
      ;;
esac
You can see from the "usage" line that the command format for this script is:
Usage: <script> <add|remove|refresh> <port number>
Here is what each of the commands means:
1) add is to allow access from CloudFlare to a port
2) remove is to remove access from CloudFlare to a port
3) refresh is a combination of remove then add - to make sure that you have all of the current CloudFlare IP addresses
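
Since CloudFlare's published IP ranges change from time to time, it is worth scheduling a refresh.  A weekly root crontab entry might look like this (the script path and port number are just examples):
0 3 * * 0 /usr/local/sbin/cloudflare-ufw.sh refresh 80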

A few warnings about this script:
1) It must be run as root - but could be modified to allow anyone who can issue ufw or iptables commands via sudo
2) It only processes CloudFlare IPv4 addresses - but it can be modified to handle IPv6 as well (the URL for CloudFlare's IPv6 addresses is https://www.cloudflare.com/ips-v6; see the sketch after this list)
3) It currently issues ufw commands but could be easily modified to support iptables (or any other firewall) commands
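
As a sketch of warning 2, an IPv6 counterpart of the addRules function might look like this (it assumes IPv6 is enabled in /etc/default/ufw):
function addRulesV6 {
    for a in `curl -s https://www.cloudflare.com/ips-v6`
    do
       # same rule as addRules, just fed from CloudFlare's IPv6 list
       ufw allow to any port $PORT proto tcp from $a
    done
}
You would then call addRulesV6 alongside addRules in the "add" and "refresh" cases, and mirror the change in removeRules.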

Good luck - and enjoy your more secure CloudFlare network.

Making Siri reminders appear in GQueues - or other task management tools - with IFTTT

Wednesday, September 21, 2016
I would imagine that, like me, you probably have about 100,000,000 things to do on any given day.  Organization in this type of environment is key to your success.  To help me remain organized, I have chosen to use GQueues, which I cannot recommend strongly enough.  (If you are going to sign up, please use the link https://goo.gl/JJhdlr because I will get a referral credit - same price to you and a bonus to me - win/win)

Now, if you are a road warrior like I am, you will also find yourself traveling in a car, plane, or train and you want to make a quick reminder.  Then you say to yourself, I have this really cool assistant Siri on my phone, maybe she can help.  You can even say things like "Remind me to sign up for GQueues" and a reminder is created.

Then you realize that you now have 2 task management systems... which is BAD.  So you look for a way to integrate both systems.

Here comes IFTTT to the rescue.  Install the IF by IFTTT application on your phone (available for both iOS and Android) and the magic can begin.

To make this work, you need to know what E-Mail address you can send tasks to.  On GQueues, you can find this on your Settings page, General tab, and it will look something like name_abcdef@gqueues.appspotmail.com

On the IFTTT side, you need 2 channels connected.  They are:
1) iOS reminders
2) GMail (or any other E-Mail sending action)

Your recipe will look like this:

IF (iOS Reminders: Any new reminder) THEN (Gmail: Send an email)

The Trigger is "Any new reminder".

The action is "Send an E-Mail to <your special E-Mail address>"

Make the Subject of the Message equal to {{Title}} (keep the curly braces) and this will set the subject line to the Title of the reminder Siri created for you.

You can now experiment with different fields and formats for the subject and body to make it work well with your task management tool.  Please note:  IFTTT does not remove the actual reminder from the iOS reminder application, so you may want to clean up periodically.

Now, when you are driving down I-95 you can simply say to Siri "Remind me to post a blog article about how I made Siri smarter" and that exact task will end up in GQueues.

Why is SSL so important? Your ISP may be watching you more closely than you think

Saturday, September 10, 2016
As you know from my previous blog entries, I have been focused on security and privacy.  I actually spent a lot of time trying to determine if SSL enabling this particular site was important (please see my previous blog entry at https://blog.flexency.com/2016/08/enabling-ssl-on-blogger-with-custom.html for a longer discussion on this).

I had originally decided that SSL was not important for a site that:
1) Does not require you to login
2) Does not have any information specific to an individual or group
3) Was on a platform that would be difficult to hack (notice that I did not say "secure")

Since I am using Blogger as my hosting environment, I thought it was better to have a fully managed platform that would be free from code defects.  On a non-SSL site, what can people in the middle really see?  Your IP address, browser information, the number of times you visit a site?  Is this information really sensitive enough to care about encrypting?

Then the epiphany hit - ISP level tracking.  There are ISPs out there that are using something called an X-UIDH header to track your activity across all HTTP sites you visit.  If you are not familiar with the X-UIDH header, you should read the extremely informative Electronic Frontier Foundation (EFF) posting on this topic at https://www.eff.org/deeplinks/2014/11/verizon-x-uidh

The very short summary is that ISPs can (and do) change your browser requests to include a unique tracking ID.  You cannot stop this, you cannot prevent it, and you *may* be able to opt out (if you believe an opt-out will work).  It is also unclear who gets to purchase this information and how it is used.

The good news is that SSL requests cannot be modified without much more sophisticated techniques.

VPNs also protect your traffic against modification.  There is a catch with VPNs, like everything else.  Any VPN encrypts your traffic to their VPN server.  Once your traffic reaches the VPN server, the VPN's encryption is removed and the normal traffic flows out.  Therefore, if you are sending HTTP requests over a VPN, you have changed who can see your traffic but you have not fixed the underlying problem... *someone* can still see it.  You have to trust your VPN to be honest about what they do with your data.

Enabling SSL on Blogger with a custom domain name

Tuesday, August 16, 2016
I was on a mission for the past few days to try and enable SSL for this web-site.  Not that there is anything confidential on the site but I just wanted to see if I could get that nice little green padlock to appear.

It turns out that Blogger supports SSL encryption for everyone on blogs hosted on their blogspot.com domain.  Therefore, I could have this blog hosted at https://flexency.blogspot.com/ (which redirects right back here, now) and I would get my encryption... but I would lose my own domain name.  Once you switch to an alternate URL, like http://blog.flexency.com/, you lose the ability to enable encryption.

Enter CloudFlare to solve this problem with Flexible SSL.  Flexible SSL works by creating an SSL connection between your web browser and the CloudFlare network and a regular HTTP connection between CloudFlare and the back-end - Blogger in my case.  For a full description of this process, please see a very detailed posting on the CloudFlare web-site at https://blog.cloudflare.com/ssl-on-tumblr-wordpress-blogger-appengine-pos/

The benefit of FlexibleSSL is that the connection between your browser and the CloudFlare platform is protected by SSL.  This means that your ISP can not easily see what pages you are viewing.  They can still track DNS lookups (if you use their DNS) and the number of connections and/or bytes sent to the CloudFlare IP address.

Remember:  This site does not require you to login, store any personal information or ask you for any information.  Therefore, the fact that the connection between CloudFlare and Blogger is over HTTP is not a security concern.  Using a managed platform that is always patched against security vulnerabilities easily outweighs any *perceived* risk of this HTTP connection.  Please see http://blog.flexency.com/p/site-security.html for a full discussion of the trade-offs between a managed platform and an HTTPS connection.

So by following these instructions, I started to enable encryption.  After getting all of the proper settings in place, I opened my browser to test the connection.

And....

Mixed content error!!!  This means some elements of the page were served via HTTP when the entire page was supposed to be HTTPS.

So I started moving all of my images to servers with SSL enabled.  I was actually able to move everything except the favicon.ico file.  This URL was automatically generated by Blogger and I was unable to figure out how to change it.  This one single (relatively unimportant) file was now causing a Mixed content error for every page on my blog.

After looking around, I found out that by editing my blog template and disabling this line (a BIG Thank You to whoever posted this Stack Overflow answer --> http://stackoverflow.com/questions/33960856/delete-favicon-ico-file-from-blogger ):
<b:include data='blog' name='all-head-content'/>

the pestering favicon.ico reference disappeared.  Apparently this is a Blogger macro that adds several headers - and the favicon.ico reference is hard-coded to be HTTP and not a relative URL.

Upon further investigation and editing my template, I found out that the single line above ended up generating the following HTML on every page of my blog:
<meta content='text/html; charset=UTF-8' http-equiv='Content-Type'/>
<meta content='blogger' name='generator'/>
<link href='http://blog.flexency.com/favicon.ico' rel='icon' type='image/x-icon'/>
<link href='http://blog.flexency.com/' rel='canonical'/>
<link rel="alternate" type="application/atom+xml" title="Flexency - Atom" href="http://blog.flexency.com/feeds/posts/default" />
<link rel="alternate" type="application/rss+xml" title="Flexency - RSS" href="http://blog.flexency.com/feeds/posts/default?alt=rss" />
<link rel="service.post" type="application/atom+xml" title="Flexency - Atom" href="https://www.blogger.com/feeds/3885535862729175922/posts/default" />
<link rel="openid.server" href="https://www.blogger.com/openid-server.g" />
<link rel="openid.delegate" href="http://blog.flexency.com/" />
<!--[if IE]><script type="text/javascript" src="https://www.blogger.com/static/v1/jsbin/4044097237-ieretrofit.js"></script>
<![endif]-->
<meta content='http://blog.flexency.com/' property='og:url'/>
<!--[if IE]> <script> (function() { var html5 = ("abbr,article,aside,audio,canvas,datalist,details," + "figure,footer,header,hgroup,mark,menu,meter,nav,output," + "progress,section,time,video").split(','); for (var i = 0; i < html5.length; i++) { document.createElement(html5[i]); } try { document.execCommand('BackgroundImageCache', false, true); } catch(e) {} })(); </script> <![endif]-->

The exact HTML this generates on your blog may be different based on your template.  Just make sure to manually add back any important lines that also get removed when you get rid of favicon.ico.

To find your exact headers, you can simply insert an HTML comment such as:
<!-- BEGIN headers-->

before the macro and another one directly after the macro.  The next time you reload your blog, you will be able to easily determine which lines are generated for your blog.
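
For example, wrapping the macro like this makes the generated block easy to spot in the page source:
<!-- BEGIN headers -->
<b:include data='blog' name='all-head-content'/>
<!-- END headers -->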

Happy blogging on Blogger with SSL. 

Does your IT shop operate like a monopoly? Maybe it is time to be more innovative with IT4IT

Saturday, July 16, 2016
Have you ever tried to work with an internally run and managed IT organization in a large firm?  If you have, you might get the feeling that you are trying to deal with a major utility who has a monopoly.  What do I mean by this statement?

As a homeowner, I would like to lower my power bill.  Unfortunately, I imagine that if I call my power company and ask to negotiate better pricing, I probably would not get very far.  Then I could try to ask for better service, priority power outage restoration, etc.  Again, I do not think I would get very far.  This is because the power company is a regulated monopoly.  They have no incentive to deal with me as a direct customer.  I also do not have the option of selecting a different power company (yes, I know, for delivery only... I can choose my energy supplier within limits now).

Are internally run IT organizations much different from a regulated monopoly?  They do (generally) set their own chargeback rates, they may or may not negotiate better levels of service, and you usually are not allowed to go out to the open market and pick another provider.

But wait - isn't cloud changing all of that?  The answer to that simple question is a resounding YES.  There are now competitors that internal IT organizations are competing against for the business.  Can a regulated monopoly compete (in most cases) with an innovative capitalistic company?  The answer to this question is (typically) a resounding NO.

This is where IT4IT comes into play.  IT4IT by The Open Group describes a community developed operating model which enables existing IT organizations to transform into the type of service provider that users of IT demand.

Keep watching this space for new developments and discussions around IT4IT.  The Open Group has quarterly meetings, so expect new announcements at least 4 times a year... but probably more often.