Feb 23 2011
 

So far this year we have had:

  • In Australia, record flooding in three states on the eastern seaboard
  • Also in Australia, and at about the same time, bushfires around Perth
  • Flooding in Brazil
  • Cyclone Yasi clobbering North Queensland (So bad in fact, that CNN thought Queensland had been sucked up and dumped the other side of Bass Strait.  Guess that’s Chicken Noodle News for you.)
  • The uprising and subsequent overthrowing of the government in Egypt
  • Numerous uprisings in Bahrain
  • Libya practically booting out its ruler, although Gadhafi still appears to be under the delusion that he’s in charge
  • Christchurch, New Zealand, levelled by earthquakes
  • Cyclone Carlos continuing to make a prick of himself off the Western Australia coastline
  • IANA’s pool of IPv4 addresses exhausted

And it’s not even March.

Feb 20 2011
 

That was one of the comments made following my piece in this week’s WIA national news.

We’d better start thinking up a better protocol then, if that’s the truth. And apparently we’ve only got 5-10 years to do it and migrate everyone.  The IPng working group started their work in the early 90s.  It took them 5 years just to come up with the protocol, and a further 5 years before consumer operating systems included support for it.

My tip: IPv6 migration will be the easy route.  For starters, operating systems already support it.  Much software already works with it.

Mythbuster:

IPv6 is completely incompatible with IPv4

Addressing-wise, maybe… but TCP and UDP still work the same way.  The only catch is that you now need 16 bytes to store an address, instead of 4.  If your application passes IP addresses around in the upper layers, you just need to find room for the extra 12 bytes.  Not impossible, and not a show-stopper if your protocol was designed right in the first place.

There’s plenty of /8s allocated to companies, we can use those for the next 5-10 years!

Mmm hmm, you think they’ll just graciously give us that space?  And that it’ll last forever?  China alone, if it gave one address to each of its citizens, could fill up a whole /2 on its own.  How big’s a /2? 2³⁰ ≈ 1 073 741 824 addresses.  And they’re growing.

Fact is, this may delay the ultimate IPocalypse, but whatever we do, it will probably take 5 years to migrate.  So our best move is to start moving now.  Not wait until the crunch happens.

Feb 17 2011
 

Hi all,

As promised I’ve put up some of the ebuilds needed to use the YubiKey in Gentoo.   This includes a PAM module for stand-alone authentication with the YubiKey, which I have patched to support concatenated two-factor authentication.  These are in a new overlay:

  • git://git.longlandclan.yi.org/overlays/yubikey.git
  • http://git.longlandclan.yi.org/git/overlays/yubikey.git

Stand-alone two factor authentication: Password + YubiKey with YubiPAM

The procedure for setting this up is pretty simple.  First, grab the overlay:

stuartl@beast /home/portage/overlays $ git clone git://git.longlandclan.yi.org/overlays/yubikey.git
Cloning into yubikey...
remote: Counting objects: 16, done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 16 (delta 2), reused 16 (delta 2)
Receiving objects: 100% (16/16), 4.15 KiB, done.
Resolving deltas: 100% (2/2), done.

Now add it to your make.conf as per usual procedure, then unmask and install YubiPAM and ykpers ebuilds.  At last check, you need the live git ebuild for ykpers and libyubikey if you use the latest revision (2.2) keys like those handed out at linux.conf.au.  For the PAM module, I recommend using the non-live version, although a live ebuild is there for the adventurous (I had buffer overflow glitches).
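If you cloned the overlay to the same spot I did above, the make.conf addition would look something like this (a sketch only — adjust the path to wherever you actually cloned it):

# /etc/make.conf
PORTDIR_OVERLAY="${PORTDIR_OVERLAY} /home/portage/overlays/yubikey"

With the overlay visible to Portage, the unmasking and installation then goes like this: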

# ( echo =sys-auth/yubipam-1.1_beta1
> echo =sys-auth/ykpers-99999999
> echo =sys-auth/libyubikey-99999999 ) >> /etc/portage/package.keywords
# emerge ykpers yubipam

This will install some tools to personalise the YubiKey, along with the PAM module.  Now it’s time to program the YubiKey.  This will break its ability to be used on the Yubico servers until you upload a copy of your new AES key to their site (see below).

Choose your public and private IDs, then program with the following command.  The fixed parameter should be 6-8 bytes long in hexadecimal.  If you intend to use this on Yubico servers later, it must be 6 bytes long, and must begin with ff.

# ykpersonalize -ofixed=$( modhex -h ffeeddccbbaa ) -ouid=112233445566
Firmware version 2.2.2 Touch level 1283 Program sequence 3
Passphrase to create AES key:
     Type in a long string of gobbledegook with lots of random letters,
     numbers and punctuation (not like this!) to keep people guessing.
     This will seed the AES keygen.
fixed: m:vvuuttrrnnll
uid: h:112233445566
key: h:afaaaa6021303d90740579cd7fc4e87f
acc_code: h:000000000000
ticket_flags: APPEND_CR
config_flags:
extended_flags:

At the bottom it asks whether you wish to program the key.  It didn’t ask here because I had snuck in a little parameter which I haven’t shown.  You’ll want to make a note of the parameters it reports though, especially the generated key.

Once complete, you then need to tell YubiPAM about it:

# ykpasswd -a -c -f ffeeddccbbaa -k afaaaa6021303d90740579cd7fc4e87f  -p 112233445566 \
        --user stuartl -o vvuuttrrnnlllrgjglvnicujhnhhecfeitjureidlcer
Adding Yubikey entry for stuartl
Using public UID: ff ee dd cc bb aa
Using private UID: 11 22 33 44 55 66
Passcode: Completed successfully.

The last argument (-o) should be generated from the YubiKey itself: type the rest of the command in, then press the button, and it should add you after asking for a passcode (which can be different to your regular system password).  The final step is to set it up with PAM.  This I’m not 100% sure of, but I achieved working authentication by configuring my /etc/pam.d/system-auth as follows:

vk4mslp2 yubikey # cat /etc/pam.d/system-auth
auth            required        pam_env.so
auth            sufficient      pam_yubikey.so concat_two_factor
auth            required        pam_unix.so try_first_pass likeauth nullok

It should be noted: test the configuration in a new console session, and do not log out, or else you may lock yourself out.  There is a ykvalidate tool for testing, but it doesn’t seem to work properly with two-factor authentication.

The concat_two_factor parameter to pam_yubikey relies on a patch present in the ebuild, which I have sent up-stream; it is a work-around for (some would say broken) PAM clients such as KDM that do not support multiple password fields.

To log-in, type in your username.  In the password field, enter the passcode followed by a space, then tap the YubiKey to enter the OTP.  It should log you in.

Uploading the key to Yubico

First, a warning

The whole point of one-time password generators such as this is to prevent someone intercepting your password and logging in to your systems.  If a one-time password is captured, it’s useless, because in the ideal case all systems know it has already been used.  I say in the ideal case.  In this scenario, if you use the key on a public server, it is possible for someone to capture that OTP and re-play it to the stand-alone system using YubiPAM to gain access.  They can of course only do this once, but that may be enough for them to make themselves at home.

With some care, you can reduce the risk of this… for instance, making a point of gratuitously logging in to each system using the key immediately after using it on any of the systems is one way to try and manually synchronise the OTPs.  I’m giving this problem some thought; for my needs it isn’t such a big deal as it’s mostly for fun anyway, but this is a factor that must be considered when using the YubiKey (or any OTP device) in this manner.

How to

Go to https://upload.yubico.com/.  Enter the data as follows.

  • Your email address (needs no explanation)
  • The YubiKey’s serial number: printed on the back, 6 digit decimal code
  • YubiKey Prefix: This is also called the “fixed UID” and appears as the first 12 characters of the OTP.  It should be in modhex format.  In the above example, the prefix would be vvuuttrrnnll.
  • Internal identity: This is also called the “private UID”.  It should be in hexadecimal format.  In the above example, the internal identity would be 112233445566
  • AES Key: Again in hexadecimal… the key used in this example was afaaaa6021303d90740579cd7fc4e87f
  • Finally, an OTP from the device. Don’t fill this in yet.

Below this form there is a Captcha field to stop spambots.  Fill in the challenge and click the “I’m a human” button, and copy the text into the other box as it asks.  Now go back to the other form, click on the OTP field and press the YubiKey button.  You should then be able to test it on their demo server and use the key simultaneously on the web and your local systems.

Feb 15 2011
 

Amongst playing with the YubiKey, I also had a look at DHCPv6.  As people well know, IPv4’s days are numbered, and given we’re all going to have to jump across to IPv6 fairly soon, I figured I had better get acquainted with the newer protocols that come with it.

I’ve had my network running dual-stack for some time now.  This has been achieved using stateless autoconfiguration and router advertisements, which work fairly well.  Today though I decided I’d give DHCPv6 a crack.  For this, you will need the latest ISC dhcp package, net-misc/dhcp-4.1.0, which is hard-masked.

I hope to get something more mature going, but here are some notes for those who may wish to try this at home.

Setting up DHCPv6

Start by installing net-misc/dhcp-4.1.0 on both server and clients… you will need to unmask it first:

# echo =net-misc/dhcp-4.1.0 | tee -a /etc/portage/package.unmask >> /etc/portage/package.keywords
# emerge -a dhcp

The -4 and -6 flags

Now, that will install the DHCP server and client.  The catch that initially tripped me up is that ISC dhcpd cannot run on IPv6 and IPv4 simultaneously.  Neither can the client, but we’ll get to that.  Both server and client are put into IPv4 mode by running with -4 in the options, or -6 for IPv6 mode.  Documentation says it defaults to IPv6 mode, but my experience has been the opposite (maybe a Gentoo patch does this).

Server set-up

Needless to say, if you’ve got your network running IPv4, at most you might have to edit /etc/conf.d/dhcpd to add -4 to the start-up options (DHCPD_OPTS to be exact).  Easy.  It’ll work as before. If you want IPv6, okay, make that -6 in DHCPD_OPTS, no sweat. But what if you want both? Ohh fun, we need a second dhcpd instance.

My solution: copy /etc/init.d/dhcpd and /etc/conf.d/dhcpd to /etc/init.d/dhcpd-v6 and /etc/conf.d/dhcpd-v6 respectively, then make the necessary changes to both.  So that the two don’t clash, I thought it wise to substitute DHCPD with DHCPDV6 using a text editor (replace all).

You’ll also want to rename the leases file (dhcpd.leases; I chose dhcpd-v6.leases) and the PID file.  In addition, the init script calls dhcpd to check the configuration in checkconfig(), so add -6 there too.  To save you going back and adding it in /etc/conf.d/dhcpd-v6, you can also add the -6 flag to the start-stop-daemon call in start().  Do similar manipulations to /etc/conf.d/dhcpd-v6.
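A rough sketch of what /etc/conf.d/dhcpd-v6 might end up looking like, assuming your stock file uses the usual DHCPD_CONF/DHCPD_IFACE/DHCPD_OPTS variables and you did the DHCPD to DHCPDV6 rename described above:

# /etc/conf.d/dhcpd-v6 -- rough sketch only, adjust to taste
DHCPDV6_CONF="/etc/dhcp/dhcpd-v6.conf"
DHCPDV6_IFACE="eth0"
DHCPDV6_OPTS="-6"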

As for the server configuration file itself, I called my IPv6 config file /etc/dhcp/dhcpd-v6.conf to differentiate it from v4.  The two will need separate configuration files.  At the top of the v6 configuration file, you’ll want to point it to new PID and leases files:

pid-file-name "/var/run/dhcp/dhcpd-v6.pid";
lease-file-name "/var/lib/dhcp/dhcpd-v6.leases";

Adding that to the top of dhcpd-v6.conf will take care of this.  If you’ve done everything right, you should be able to start the DHCPv6 daemon, and add it to your runlevels as per normal.  DHCPv6 listens on port 547/UDP — look for it in netstat.
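As I understand it, dhcpd in -6 mode also wants at least one subnet6 declaration matching a local interface before it will serve anything.  A bare-bones example might look like this (the prefix here is a documentation one — substitute your own):

# /etc/dhcp/dhcpd-v6.conf -- minimal sketch, after the PID and lease file lines
default-lease-time 2592000;
preferred-lifetime 604800;

subnet6 2001:db8:1234:5678::/64 {
  range6 2001:db8:1234:5678::1000 2001:db8:1234:5678::ffff;
}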

Client set-up

Client set-up isn’t too difficult; the fun bit is integrating dhclient into the init scripts.  OpenRC knows how to drive dhclient in v4 mode, but not v6.  It, too, cannot run in both v4 and v6 mode simultaneously.  The solution: a new net module.  Copy /lib/rc/net/dhclient.sh to /lib/rc/net/dhclientv6.sh.

Rename all the functions, changing dhclient to dhclientv6 (don’t use replace-all this time), and change the “provide” line in dhclientv6_depend to dhcpv6.  In dhclientv6_expose, do likewise and rename the variables to dhclientv6 and dhcpv6.  Finally, in each call to the dhclient binary itself, add -6 to put it in IPv6 mode, and rename the PID file to add -v6 to the file name.

Save the new file.  Now in /etc/conf.d/net, use the following:

config_eth0=( "dhcp" "dhcpv6" )

Things that I have not yet figured out

Dynamic DNS

This is one of the reasons why I wanted DHCPv6 in the first place.  Remembering IPv4 addresses is bad enough; IPv6 is a pain.  I have more success reciting Pi than remembering the IPv6 addresses of all my computers.  The statically assigned ones aren’t too bad since the prefixes are all the same; it’s the autoconfigured ones that are a nuisance.

It should be doable, but I haven’t yet worked out how to make dhcp update the nameserver with AAAA records.  This is still in its infancy though.  Lots of rough edges.  I note dhclient doesn’t seem to be passing on the hostname of the computer, which could be part of the problem.

Address pools and class-based assignment

Address pools are handy things.  In ISC dhcpd, I can classify each of the computers I have by their MAC address and assign each class an address pool.  Or at least I could when it’s in IPv4 mode.

I have a nice set-up on IPv4 where, if the DHCP server knows the MAC address, it’ll put that computer on the right subnet.  We have three IPv4 subnets here: one for my computers, one for my father’s, and a “de-militarised zone” where any foreign computers get put (along with the web server — well, actually it exists on all three).  Below is an example:

subnet 192.168.64.0 netmask 255.255.255.0 {
  pool {
    range dynamic-bootp 192.168.64.32 192.168.64.254;
    allow members of "stuartslan";
  }

  ddns-domainname "redhatters.yi.org.";
  ddns-rev-domainname "in-addr.arpa.";
  ddns-updates on;
  update-conflict-detection off;
  allow client-updates;

  /* ... */
}
/* ... */
subclass "stuartslan" 1:6c:f0:49:ef:84:7c; # beast eth0
subclass "stuartslan" 1:6c:f0:49:ef:84:7e; # beast eth1
subclass "stuartslan" 1:08:00:27:ab:7c:b9; # Win2K VirtualBox
subclass "stuartslan" 1:08:00:27:27:bf:55; # uClibc VirtualBox
subclass "stuartslan" 1:00:08:0d:5c:08:51; # Laptop "vk4mslp2" ethernet
subclass "stuartslan" 1:00:12:f0:bd:de:06; # Laptop "vk4mslp2" wireless
/* ... */

I haven’t figured out how to replicate this in DHCPv6.  The following does not work:

subnet6 2001:388:d000:1153::/64 {
  pool {
    allow members of "stuartslan";
    range6 2001:388:d000:1153::1000 2001:388:d000:1153::ffff:ffff;
  }

  ddns-domainname "redhatters.yi.org.";
  ddns-rev-domainname "ip6.arpa.";
  ddns-updates on;
  allow client-updates;
  /* ... */
}

I plan to keep researching these things, and I’ll see what I can do about getting the updates into Gentoo’s init scripts so that DHCPv6 is handled.  A lot of what I did today were quick hacks that will likely make people shudder, but it’s working for now, we’ll see how it goes.

Feb 15 2011
 

At linux.conf.au, we all got given a YubiKey each.  These are proprietary one-time-password generator devices which plug into USB and emulate a USB HID keyboard.  Full documentation on how the algorithm works is provided by Yubico, and they have also provided a lot of software for interfacing to the keys under a quite liberal BSD license.  The device itself, being a USB HID device, needs no drivers other than what the operating system provides.  Plug it in, press the button, and you get:

vvhuhvhlhrhlnniidvbtvjhcfthvgkubiltfbccilbch

And don’t bother trying to use that, I have deliberately mangled some of the characters so it isn’t valid.  (I’ve also used it in a few places since, so it’s old anyway.)  The first 12-16 characters form the public ID, and are always the same, but unique for each key.  The remaining 32 characters form the OTP data, which is encrypted internally using a 128-bit AES key.  The data is a variant of hexadecimal called modhex — the digits have been mapped to keycodes that should be the same on every model of keyboard.  This means the key will still work whether the computer is configured for QWERTY, QWERTZ, AZERTY, etc.  Not sure if it handles Dvorak though.
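You can see the mapping for yourself with the modhex tool that ships with ykpers — for example, converting the hex string ffeeddccbbaa:

$ modhex -h ffeeddccbbaa
vvuuttrrnnll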

I’ve been doing a bit of tinkering with mine.  They can be used out-of-the-box with Yubico’s authentication servers for things such as OpenID.  The programming tool however, lets you define your own parameters, and use the key completely stand-alone.  Yubico have a facility for uploading the key’s new AES key when you do this.  The bonus with doing this is that you can use the same key for both stand-alone services you might set up and for web-based services (with the caveat that it does open you up to replay attacks).

By the second day of the conference, I had my Yeeloong authenticating me using YubiPAM, a stand-alone PAM module. I’ve since configured my other laptop the same way, although I notice I get a buffer overflow when the authentication succeeds — not sure why as the Yeeloong works fine. I’m looking into what’s needed for Gentoo. I haven’t figured out how to get two-factor authentication to work there with KDM. I’m thinking maybe pam_python, and a homebrew solution may give me the flexibility I’m after.

Today, I had another look at it. This time, I was looking at what services I use that could make use of it. The obvious candidates: this blog, and OpenID.

On the OpenID front, I initially toyed with a copy of Yubico’s OpenID server, which is a very crude thing intended as a demo. I thought maybe I could extend it, but couldn’t figure it out. Figuring there must be a better solution, I went hunting, and found Community-ID. I managed to get Community-ID installed on my server without too much sweat, I had single-factor authentication using either a password or the YubiKey working in minutes. My instance is here, and now my devspace homepage functions as an OpenID, as does my blog.

As for two-factor authentication, I went digging for how it processed the password.  Community-ID has a very strict model-view-controller structure that made things very easy.  I wasn’t sure how to go about adding a new field, but I figured I didn’t have to.  The database stores the prefix so that it can identify the person logging in, and from that, I know the OTP will be the length of that prefix plus 32 characters.  I was able to modify Community-ID to take the last strlen(prefix)+32 characters, check that using Yubico’s servers, then process the remainder and compare that against the stored password.  Bingo: two-factor authentication with one password field.  The patch is already upstream.
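In shell terms (Community-ID itself is PHP — this is purely to illustrate the arithmetic), the split looks something like this, where $input is the submitted password field and $prefix is the stored public ID:

otp_len=$(( ${#prefix} + 32 ))
otp="${input: -${otp_len}}"                       # last prefix-length+32 characters
password="${input:0:$(( ${#input} - otp_len ))}"  # whatever is left over is the password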

Now if I can make YubiPAM do this, I’ll be very happy.

For the blog, I ended up doing both.  I found a WordPress plug-in that does OpenID authentication.  At first I couldn’t figure out how to link my new OpenID identity to my existing account, so I then turned to the YubiKey and installed a plug-in that performs that task.  No sooner had I got that going than I spotted where the fields were for associating OpenIDs with accounts, so I’ve configured that.  My blog will now accept any of the three, although using single-factor authentication (unless I use OpenID).

Guess that’s enough for a blog, it’s not like someone can lock me out of it given I have database access anyway and the password is stored as a hash.

Needless to say though, you can expect some further improvements for things using these keys.  I’ve got some other places in mind for the thing.

Feb 10 2011
 

Yes, bloody Microsoft yet again.  I want to know who I should make my invoice out to.

We had a situation with one of my father’s laptops.  The DVD drive mysteriously stopped working a week or so ago.  Well, it could have been longer, but we noticed this then.   The machine is a Toshiba Satellite L300D running Windows XP Professional (it came with Windows Vista Home Premium, but after figuring out how to slipstream service pack 3 and SATA drivers, I soon fixed that).  It had been running great, except now, all of a sudden, no drive letter was being allocated for the DVD drive.

All we could glean from the system was the very nondescript error number 41.  What’s 41 now, one less than the meaning of life?  On to the oracle^W^WGoogle… and apparently Microsoft blame cabling or hardware.  Ohh wonderful.  Okay, off we trundle to buy an external USB DVD burner.  Actually, we decided to buy two; the Lemote systems here do not have CD-ROM drives, and it would be handy for them.  (And I’ve already tested both on Linux reading DVDs, and burned a CD on one of them using K3B.  Ergo, they both work.)

Plug it into the affected laptop, lo and behold, it’s apparently “not working” either for the same reason.  We plug the drive into another Windows XP laptop, working no worries.  Okay, the drive is brand new out of the box!  The only thing common to the two drives is the PCI bus (or is it PCIe, not sure), which would mean a dying laptop.  Okay, let’s prove that it’s not the hardware.

I rummage around for a LiveCD I can boot up that will reasonably test the system.  Ideally I wanted something with a full desktop as it’d put more stress on the DVD drive.  Normally I download minimal Gentoo LiveCDs, but late last year I had downloaded Fedora Core 13 AMD64 for work purposes (I was putting together a firmware build kit, initially using Gentoo/Prefix, and needed to test it on the same OS that they were using).  The L300D runs an AMD Turion X2 CPU (AMD64 architecture).  You beauty, that’ll do.

We stick it in, hit F12 at the BIOS prompt, and select the DVD drive (it sees it).  A minute passes, and I’m staring at the KDE desktop.  The DVD drive works.  Open up a shell, and sure enough, /dev/sr1 is there lurking on USB, and it too works.  So it’s not the hardware.

Okay, so it’s something common to both, but it’s not the hardware.  Disk controller drivers?  Nope, one’s USB storage, the other is either SATA or IDE (can’t remember which).  CD-ROM device driver?  Maybe.  On we search…

Microsoft put out a few tools for “fixing” these problems that cropped up.  One is the automated tool on KB982116.  I run it, no luck, the problem still persists.  I try booting up the Windows XP CD and entering the Recovery console.  In Windows 2000 you could tell the setup tool to go and copy over the original OS files again.  No such luck with Windows XP.  Aside from re-loading the boot sector, it can’t do much at all, so no help.

I had already spent 4 hours fixing this… AU$128 down the drain in labour alone.  My father continued the battle, trying yet more tools.  There’s big money in fixing the shite that goes on with this proprietary mess, and I fear if Microsoft ever gets their act together a big portion of the IT industry will come crashing down as a result.

Today, my father, doing further searching, managed to find this excerpt on the KB982116 page:

Windows XP

  1. Click Start, and then click Run.
  2. In the Open box, type regedit, and then click OK.
  3. In the navigation pane, locate and then click the following registry subkey:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}
  4. In the right pane, click UpperFilters.  Note: You may also see an UpperFilters.bak registry entry. You do not have to remove that entry. Click UpperFilters only. If you do not see the UpperFilters registry entry, you still might have to remove the LowerFilters registry entry. To do this, go to step 7.
  5. On the Edit menu, click Delete.
  6. When you are prompted to confirm the deletion, click Yes.
  7. In the right pane, click LowerFilters.  Note: If you do not see the LowerFilters registry entry, unfortunately this content cannot help you any further. Go to the “Next Steps” section for information about how you can find more solutions or more help on the Microsoft Web site.
  8. On the Edit menu, click Delete.
  9. When you are prompted to confirm the deletion, click Yes.
  10. Exit Registry Editor.
  11. Restart the computer.

Of course, how stupid of me!  Yes, of course it’s HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}.  Every computer user knows this… even the beginners!  That’s why the automated tool didn’t bother to even tell us about it, let alone do the above steps, because every computer user knows about this instinctively!

The computer came very bloody close to getting a lesson in the order of the penguin with a liberal dosage of virtualisation.  I think that still may be on the cards, because we are both getting fed up with it.

Feb 06 2011
 

Well, I did mention that the spam problem would get worse once we’re stuck with carrier NAT, didn’t I?  A bit of amusement as to how desperate they are getting… I have edited the links so that they do not point to the sites they wanted to advertise.

Hi, I left you a DOFOLLOW backlink on my website. This isnt a spam message, i actually did leave you a backlink on my site. If you check the top of the page you will see “Sites we like” and there will be a link to this site. Would you be kind enough to leave me a backlink? If so my website is http://… please use the anchor text “…” for the link and add it to a post or as a widget. Then please send me a email at backlink@… – If you want me to change your links anchor text let me know. Thanks

Sorry mate, backlink or no backlink, I consider your site spam if it isn’t on topic.

RE: It’s so hard to get backlinks these days, honestly i need a backlink by comments on your blog / forums or guestbook to make my website appear in search engine. I am getting desperate Now! I know you’ll laugh while reading this comment !!! Here is my website I know my comments do not relate to the topic, but PLEASE HELP ME!! APPROVING MY COMMENT!

So what is the problem my friends, I’m collecting backlinks to make my website appear in the search engines!! whether are the comments look like a crap!

Why the bloody hell should I? Go pay an advertising company like every other commercial entity.

Feb 05 2011
 

The following was a news article that I intended to record and have included in this week’s WIA National News service; however, I had problems cutting it down to the 1:30 required.  So, I’ve put in additional information that there wasn’t time for, and I intend to put in a short piece for next week’s news.

For the technically minded, I do apologise if it seems a bit dumbed down, but not all the target audience are computer-savvy.


The IPocalypse is upon us.  No, I’m not talking about some new Apple product; I am talking about the Internet Protocol, specifically version 4.  IPv4 has been with us since 1980 and has come to dominate all aspects of computer networking.  In fact, so popular is this networking protocol that earlier this week the Internet Assigned Numbers Authority ran out of addresses.

At the recently held linux.conf.au conference in Brisbane, Google Vice President Dr. Vinton Cerf and APNIC Chief Scientist Geoff Huston both gave talks covering this very issue.  For those who want an in-depth overview of the problem, I recommend viewing both these videos:

Back in 1973, when the beginnings of what became IPv4 were being conceived, it was decided that an address space of 2³² addresses (32 bits, about 4 billion) would be sufficient for what was considered, back then, an experiment.  The “Internet” (then known as ARPAnet) barely spanned 5 computers.  Computers occupied rooms and were not portable, nor was there any significant wireless telephony infrastructure at the time.  The problem is, the experiment never ended, and now IPv4, in this modern age of handheld computers and wireless Internet, is being pushed to its absolute limits.

Most people are familiar with using a telephone.  You need to know the number of the person you want to contact (or the phone number for directory assistance and quoting a name).  Only then can you place the call, and get in touch.  Now unlike a telephone network, where the call is established and a bi-directional connection exists for the duration of the contact, on the Internet it’s more like dialling a voice mail service and leaving a message.  I need to leave that person my phone number so that they can get back in touch with me (or rather, leave a message in my voice mail box).

Extending the metaphor a bit, it is common for computers to have multiple connections going on at a time.  Servers also often run multiple services on the same system.  Thus, each system uses separate ports, akin to individual mailboxes.  Each computer has 65536 of them¹.  On the sending side, a free port is usually allocated at random and used for the duration of the connection.  At the server end, a fixed port is used to “listen” for incoming requests.  When sending data from one computer to another, the sender needs to tell the receiver which mailbox (or port) the data came from, and which it belongs in, so that data goes to the right place, and any replies can be correctly addressed.

The problem now is that the address space on this global network is in the hands of the regional registries.  These regional centres look after Internet address allocations for a given geographic region.  Once those registries run out, it’s game over.  Internet service providers are forced into deciding between one of four actions:

  1. Turning away new users (the infamous “No Vacancy” sign)
  2. Implementing Carrier-wide Network Address Translators
  3. Becoming a walled garden
  4. Moving over to something new

I can see option 1 is not going to be popular, so I’m not even going to discuss it.

Option 2 is already happening in parts of Asia.  Rather than giving everyone a number that is recognised world-wide, they give you and fellow customers private ones.  They then employ an intermediate server, a Network Address Translator, to re-write the addresses on the IP packets so that they appear to be sent from that server.  NATs of course are not just things that exist in ISPs; home Internet routers often do exactly this.  Another example of NAT is Microsoft’s Internet Connection Sharing.

When a computer sitting behind the NAT wishes to contact a server outside, the NAT instead picks one of its ports, and places the outgoing message there.  It then replaces the source address and port with its publicly visible address, and the port number it chose, and forwards that on to the outside world.  When the reply comes back, it re-writes the destination on the reply to point to the original address and port number of the originating computer.
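On a Linux box doing the home-router job, that whole rewriting exercise boils down to enabling forwarding and a single netfilter rule — a sketch, assuming eth0 is the interface facing the ISP:

# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE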

There isn’t a theoretical limit to the number of computers that can exist behind a NAT.  The limitation is the number of ports.  Ports may not be shared by two applications: if a program or service is already using a given port number, it is essentially unavailable to others until that program or service is finished.

That means that for any one public address, there can be a maximum of 65536 connections at any one time.  NATs are not magical devices, and this limit applies to them too.  In this modern age of parallel computing, even web browsers will frequently launch multiple connections in parallel.  Some of these connections are short lived (such as the time taken to download the text off this page), some take a while (such as the time taken to download one of the keynote speeches linked to earlier).  The resource demand will change over time with user habits.

The first big problem with NATs though, comes when you have an application that needs to be contactable from the outside world.  The application for all intents and purposes is like a server, and is listening for connections.  The trouble is, this computer is behind a NAT, and its actual address is a private network address.  Even if an outside computer knew what it was, it wouldn’t know how to get there, and quite likely, wouldn’t be allowed even if it did.  So the only way to be contacted, is via this NAT box.

Now suppose you tell someone (or the application does on your behalf) your NAT box’s IP address and the port number your application is listening on, and an outsider tries to make contact.  The NAT box hears the request, but where does it send it?  It knows nothing about this port!  The NAT box has to be told to reserve one of its ports (which again must be unique), and to forward any packets sent there to the right port on your computer.
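Where the NAT box is a Linux machine you control, telling it about that port is one more netfilter rule — a sketch, forwarding TCP port 8080 arriving on eth0 through to a machine on the inside:

# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
        -j DNAT --to-destination 192.168.0.10:8080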

The hardest bit here is that not all NAT devices work the same way in this regard; there is no de jure standard for configuring a port-forward.  Microsoft’s UPnP is one of many de facto standards that exist, and not all NAT devices or applications support it.  A lot of these also have lots of problems of their own.  In some cases, you have to set this up yourself.  Doable if the NAT device is under your control, but in the future we may be faced with NAT devices that are controlled by ISPs.

The applications that will be hardest hit by this will be any that rely on peer-to-peer communications.  This includes, amongst other things, the file-sharing services in instant messenger clients, peer-to-peer file sharing services such as BitTorrent, and Voice-over-Internet Protocol applications such as Skype and EchoLink.  IRLP, which relies on nodes having a static public IP address, will be hit particularly hard; many ISPs already charge extra for the privilege of a static IP.

Hardware devices that use the Internet are not immune from this either — in fact the situation there may be made worse, since in a lot of cases the port numbers used are hard coded in the device’s firmware.  You may ring up to get that special port forwarded, only to discover that another customer of the same ISP rang up 5 minutes ago and claimed it before you.

Ignoring these niggles, NATs don’t sound too bad if everyone is playing by the rules.  But what if someone decides to set up an Internet marketing company and starts filling up everyone’s email boxes with yet more “Discount Viagra” offers.  The way things are here in Australia, the ISP gives each customer a public IP address (which may be static, or it may change on a regular basis), and that is used as the public address on a NAT device owned by the customer.  If a customer were to do that, the IP address of that NAT device is visible in the emails sent — an ISP can simply look up who had that IP address at that time, and can immediately take action.

Now, suppose that instead, the ISP relied on NAT.  The IP address would be that of the ISP’s NAT box.  The culprit could be any one of the many users sitting behind it.  “Just log each connection on the NAT box” you say.  Deary me, could you imagine how slow that would be?  Not to mention the disk space used!

Now what happens if, at the same time, other users were legitimately sending emails to that same network?  The logs point to a dozen users; which one was it?  If the complainant told you the source port used for the connection when the email was sent, maybe you could look that up, but I’m yet to see that sort of information recorded in system logs, and email headers certainly don’t carry it.

Clearly, this is not a solution.  It’ll make address space stretch a little further, but not without causing a world of pain for software developers who have to make their software compatible with differing standards, and causing the rest of us grief as we drown in a mountain of malware and spam.  If you think spam today is bad, you ain’t seen nothin’ yet!

The other way ISPs can go, is to close off from the world, and becoming a walled garden.  That is, you need to be a member of their network, to be in contact with other users that happen to also use their network.  Or if they provide connectivity to neighbours, it’s costly, and/or heavily controlled.  Anyone remember CompuServe, America Online, The Microsoft Network?  Ring any bells?  Those long-ago isolated bulletin board systems?  If they do, I apologise for stirring up bad memories.  If they don’t, count yourself lucky, and hope like hell ISPs don’t go back there!

I did say there was a fourth solution didn’t I?  Something new?  The Internet Engineering Task Force weren’t naïve enough to assume 32 bits would be enough.  They recognised that this would be a problem way back in the early 90s.  They formed the Internet Protocol Next Generation working group, which in 1998 produced RFC 2460², Internet Protocol, Version 6.  IPv6 extends the address space to 128 bits, a big improvement on IPv4.  It also addresses a number of other bug-bears that people had with IPv4.

Some notable ones include: Mobile IPv6 extensions to allow a portable computer (such as a smart phone) to remain contactable at the same address as it roams between multiple networks, improved quality-of-service handling for real-time streaming and multimedia, automatic addressing and simplified headers to make routing easier.

The biggest feature though is the address space.  NAT is not implemented in IPv6; it is not necessary, as there’s enough space to move around.  Rather than being given a single IPv4 address which you must share with all your computers, in IPv6 you get given a whole network address prefix.  Typically this prefix is 64 bits long, leaving you the remaining 64 bits of space to allocate to each of your computers.  How many addresses is that?  Remember the 4-billion (approximate) number I quoted for IPv4?  Square it!  If you have a computer network bigger than that, I do not want to see your power bill!

Modern computer operating systems can function on IPv6 already.  Microsoft Windows XP includes support, which can be enabled by following a few easy steps.  Windows Vista and 7 come with it enabled out-of-the-box, as do Mac OS X, Linux and the BSDs (FreeBSD, OpenBSD, NetBSD, etc…).  Hardware devices can be made to support IPv6 by a simple firmware upgrade, if one is available.  If a manufacturer has not published a firmware upgrade for a device you own to support IPv6, contact them now!
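On Windows XP, for instance, it amounts to one command from an administrator’s command prompt (from memory — Vista and 7 need nothing at all):

C:\> netsh interface ipv6 install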

ISPs world-wide are dragging the chain on IPv6 take-up.  There are some notable exceptions; here in Australia, for instance, Internode offer native IPv6 for their customers.  I’m unaware of others in Australia.  If your ISP is one of the IPv4 sheep, it’s now time to contact them and ask what they are doing about IPv6.  In the meantime, you can get an IPv6-in-4 tunnel from a tunnel broker such as AARnet, Hurricane Electric or Sixxs.

Many online services are slowly making the move over to IPv6.  Google can be accessed via ipv6.google.com for instance.  This blog is accessible via IPv6 (thanks to AARnet).  Sixxs have a big list of sites that are IPv6 enabled.  In June (the 8th to be exact) this year, there will be a world-wide test of IPv6.  Google (as in their entire site), FaceBook and Microsoft’s Bing search engine among many other sites will be going IPv6-enabled on World IPv6 day.  If you’re not already on IPv6, it’d be great if you could join us.
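A quick way to see whether you’re actually reaching these sites over IPv6 (assuming you have a tunnel or native connectivity up) is a simple ping:

$ ping6 -c 3 ipv6.google.com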

Openness is one of the things that made the Internet popular.   There is a very real threat that this openness or freedom we currently experience will be lost.  If you’re a software developer, we need you to ensure your software works with IPv6 for it to keep working into the future.  If you’re a network administrator, you need to ensure your network is IPv6 compatible.  If you’re a consumer, we need you to start pestering the help desks of these software companies, device manufacturers and ISPs to ensure the commercial world sees the user demand for this!

To quote Mark Pesce, “a resource shared is a resource squared”.  We need to ensure the Internet remains open and free, for all people into the future.


1. To be more accurate, there are 65536 TCP ports, and 65536 UDP ports. However, a UDP port cannot be used for TCP traffic, or vice versa.

2. RFC = Request for Comments

Feb 05 2011
 

A thought just occurred to me…

With addressing in IPv6, there are enough addresses to cover every square metre of the earth’s surface with something like 100 addresses or so (if anything, that’s an understatement).  Not sure if a standard exists for mapping geographic co-ordinates to addresses, but one just occurred to me that I might try some day.

The Maidenhead locator system divides the world up into a series of squares.  At its coarsest level, it divides into zones which are each 10° latitude and 20° longitude.  These form an 18×18 grid, and are usually denoted by a letter.

(Image — Maidenhead Locator zones.  Wikipedia: The world is divided into 324 (18²) Maidenhead fields.)

These are divided further into grid squares, measuring 1° × 2° in size.  They form a 10×10 grid, and are usually addressed by a number…

(Image — Maidenhead grid squares.  Wikipedia: Fields are divided into 100 squares each.)

Within this, there are subsquares, representing 2.5′ × 5′ (that’s minutes, not feet), forming a 24×24 grid, addressed again by letter.  The grid square where I’m located, QG62LN, represents an area that covers the suburbs of The Gap, the southwest bit of Enoggera, the northwest bit of Bardon, and the western end of Ashgrove.

Suppose we were to encode this Maidenhead locator into the address.  It’s probably less useful in traditional IP networks, but maybe it will have a use.  In Amateur Radio it may be useful for the purpose of routing between mobile stations.  In fact, it’s this mobile context where I see it being most useful.  Let’s first consider how many bits we’d need to store each component:

  • Zone level, 18×18 grid: 5 bits for latitude, 5 bits for longitude, or alternatively for 324 zones, 9 bits.
  • Square level, 10×10 grid: 4 bits for latitude, 4 bits for longitude, or alternatively for 100 squares, 7 bits.
  • Subsquare level, 24×24 grid: 5 bits for latitude, 5 bits for longitude or alternatively for 576 subsquares, 10 bits.

Logically you’d be using numbers starting at zero for the addresses in all fields, so A would be translated to 0, etc.  My QTH locator (QG62LN) would be translated as follows: Q→16, G→6, 6→6, 2→2, L→11, N→13.

You can either address latitude and longitude individually, packing them as separate fields, or you can lump them together to possibly save one bit of space.  For instance, I can concatenate the two 5-bit values representing the zone QG into a 10-bit value: 10,0000,0110₂ = 0x206.  Or I can save some space by realising there are only 324 zones, which can be represented with 9 bits like so: ((16×18) + 6) = zone 294 → 1,0010,0110₂ = 0x126.  The grid square can be similarly encoded (0110,0010₂ = 0x62 or 011,1110₂ = 0x3e), and likewise the subsquare.

How would you pack these into an IP address?  I was thinking something along the lines of one of these two:

   Zone      Square   Subsquare
 Lat   Lng   La   Ln   Lat   Lng
.---. .---. .--. .--. .---. .---.
10000 00110 0110 0010 01011 01101 = 28 bits

  Zone    Square  Subsquare
.-------. .-----. .--------.
100100110 0111110 0100010101      = 26 bits

Presumably these would form the lower 28 or 26 bits of your prefix.
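To make the idea concrete, here’s a quick-and-dirty shell sketch of the 26-bit packing, following the worked example above (it assumes a well-formed 6-character locator and does no error checking):

#!/bin/bash
# Pack a 6-character Maidenhead locator into the 26-bit combined form
# described above.  Illustrative only.
loc=$(echo "${1:-QG62LN}" | tr '[:lower:]' '[:upper:]')

ord() { printf '%d' "'$1"; }          # ASCII code of a character

z1=$(( $(ord "${loc:0:1}") - 65 ))    # Q -> 16
z2=$(( $(ord "${loc:1:1}") - 65 ))    # G -> 6
s1=${loc:2:1}                         # 6
s2=${loc:3:1}                         # 2
u1=$(( $(ord "${loc:4:1}") - 65 ))    # L -> 11
u2=$(( $(ord "${loc:5:1}") - 65 ))    # N -> 13

zone=$(( z1 * 18 + z2 ))              # 0..323  (9 bits)
square=$(( s1 * 10 + s2 ))            # 0..99   (7 bits)
subsq=$(( u1 * 24 + u2 ))             # 0..575 (10 bits)

packed=$(( (zone << 17) | (square << 10) | subsq ))
printf '%s -> 0x%07x\n' "$loc" "$packed"   # QG62LN -> 0x24cf915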

Feb 05 2011
 

Well, I’m not sure where to ask this.  I did ask on the netdev mailing list, and while I don’t think it’ll get ignored indefinitely, I’m not sure that was the right place.  A stab in the dark if you will.  In the hope of netting more answers though, I cast this query into the blogosphere…

I’ve been toying with the idea of a small multicast VoIP/digital comms protocol for use over wireless radio links. The typical use case might be to replace UHF FM radio transceivers with modern smart phones, using multicast IPv6 networking over 802.11b. (It will have other modes too, transmission over amateur radio bands for instance.)

In some commercial settings, or over the Internet, it’d be great for traffic to be authenticated using HMAC-SHA1 or even encrypted.  Looking at IPsec, I see it provides exactly this.  My thought: why re-invent the wheel when a solution may already exist?

The question though: Is it possible for a userspace application (non-privileged) to request that the UDP packets it generates/receives from/to a particular address be encrypted or hashed against a specified key?

i.e. if I decide to communicate with someone on the same wireless link, and by means of asymmetric crypto at higher layers we establish a shared AES key, can I configure the stack for traffic between these two hosts on-the-fly and without root privileges?
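For context, the privileged way to set this up by hand looks roughly like the following iproute2 incantation (addresses and keys are invented for the example, and the return direction and the “in” policies are omitted).  The question is whether there’s a per-socket, unprivileged equivalent:

# ip xfrm state add src 2001:db8::1 dst 2001:db8::2 \
        proto esp spi 0x201 mode transport \
        auth 'hmac(sha1)' 0x0011223344556677889900112233445566778899 \
        enc 'cbc(aes)' 0x00112233445566778899001122334455
# ip xfrm policy add src 2001:db8::1 dst 2001:db8::2 dir out \
        tmpl src 2001:db8::1 dst 2001:db8::2 proto esp mode transport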