Oct 10 2017

So, over the last few years, computing power has gotten us to the point where remotely operated aerial vehicles are not only a thing, but are cheap and widely available.

There are, of course, lots of good points about these toys, and lots of tasks in which they can be useful. No, I don’t think Amazon Prime is one of them.

They come with their risks though, and there’s a big list of dos and don’ts regarding their use. For recreational use, CASA, for example, has this list of rules, which includes, amongst other things, staying below 120m altitude and keeping 30m away from any person.

For a building, that might as well be 30m from the top of the roof, as you cannot tell whether there are people within that building, where in that building those people are, or from which entrance they may exit.

In principle, I have no problem with people playing around with them. I draw the line where such vehicles enter a person’s property.

The laws are rather lax about what is considered trespass with regard to such vehicles, in large part because the legal system often trails technological advancement. The no-brainer is if the vehicle enters any building or lands (controlled or otherwise) on any surface within the property.

This does not mean it is valid to fly over someone’s property.  For one thing, you had better ensure there is absolutely no chance that your device might malfunction and cause damage or injury to any person or possession on that property.

Moreover, without speaking to the owner of said property, you make it impossible for that person to take any kind of preventative action that might reduce the risk of malfunction, or alert you to any risks posed on the property.

In my case, I operate an amateur radio station.  My transmitting equipment is capable of 100W transmit power between 1.8MHz and 54MHz, 50W transmit power between 144MHz and 148MHz, and 20W transmit power between 420MHz and 450MHz, using FM, SSB, AM and CW, and digital modes built on these analogue modulation schemes.

Most of my antennas are dipoles (about 2.2dBi gain), though I do have some higher-gain whips, and may of course choose to use yagis or even dish antennas. The stations I might choose to work are mostly terrestrial in nature; however, airborne stations such as satellites, or indeed bouncing signals off objects such as the Moon, are also possibilities.

Beyond the paperwork that was submitted when applying for my radio license (which for this callsign, was filed about 9 years ago now, or for my original callsign was filed back in December 2007), there is no paperwork required to be submitted or filled out prior to me commencing transmissions.  Not to the ACMA, not to CASA, not to registered drone operators in the local area, not anybody.

While I’ve successfully operated this station with no complaints from my neighbours for nearly 10 years… it is worth pointing out that the said neighbours are a good distance away from my transmitting equipment.  Far enough away that the electromagnetic fields generated are sufficiently diminished to pose no danger to themselves or their property.
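To put the inverse-square fall-off into numbers: in the far field, power flux density is roughly S = P·G / (4πd²), where G is the linear antenna gain. A quick sketch using the 100W / 2.2dBi dipole figures above; this ignores near-field effects and ground reflections, so it is a rough guide only, not a compliance calculation:

```python
import math

def power_density(tx_power_w, gain_dbi, distance_m):
    """Far-field power flux density estimate, W/m^2:
    S = P * G / (4 * pi * d^2), with G the linear antenna gain."""
    gain_linear = 10 ** (gain_dbi / 10.0)
    return tx_power_w * gain_linear / (4 * math.pi * distance_m ** 2)

# 100W into a 2.2dBi dipole (the station described above):
for d in (1, 5, 10, 30):
    print(f"{d:>3} m: {power_density(100, 2.2, d) * 1000:8.1f} mW/m^2")
```

The same transmitter is roughly 900 times “hotter” at 1m than at 30m, which is exactly why keeping a distance matters.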

Any drone that enters the property is at risk of malfunction if it strays too close to transmitting antennas. If you think I will cease activity because you are in the area, think again. There is no expectation on my part that I should alter my activities due to the presence of a drone. It is highly probable that, whilst inside, I am completely unaware of your device’s presence. I cannot, and will not, take responsibility for your device’s electromagnetic immunity, or lack thereof.

In the event that it does malfunction though… it will be deemed to have trespassed if it falls within the property, and may be confiscated.  If it causes damage to any person or possession within the property, it will be confiscated, and the owner will be expected to pay damages prior to the device’s return.

In short, until such time as the laws are clarified on the matter, I implore all operators of these devices, to not fly over any property without the express permission of the owner of that property.  At least then, we can all be on the same page, we can avoid problems, and make the operation safer for all.

Mar 25 2017

So, there’s been a bit of discussion lately about our communications infrastructure. I’ve been doing quite a bit of thinking about the topic.

The situation today

Here in Australia, a lot of people are being moved over to the National Broadband Network… with the analogue fixed line phone (if it hasn’t disappeared already) being replaced with a digital service.

For many, their cellular “mobile” phone is their only means of contact. More than the over-glorified two-way radios that were the pre-cellular car phones used by the social elite in the early 70s, or the slightly more sophisticated and tennis-elbow-inducing AMPS hand-held mobile phones we saw in the 80s, mobile phones today are truly versatile and powerful hand-held computers.

In fact, they are more powerful than the teen-aged computer I am typing this on. (And yes, I have upgraded it; 1GB RAM, 250GB mSATA SSD, Linux kernel 4.0… this 2GHz P4 still runs, and yes I’ll update that kernel in a moment. Now, how’s that iPhone 3G going, still running well?)

All of these devices are able to provide data communications throughput in the order of millions of bits per second, and outside of emergencies they are generally very reliable.

It is easy to forget just how much needs to work properly in order for you to receive that funny cat picture.

Mobile networks

One thing that is not clear about the NBN is what happens when the power is lost. The electricity grid is not infallible, and requires regular maintenance, so while reliability is good, it is not guaranteed.

For FTTP users, battery backup is an optional extra. If you haven’t opted in, then your “land line” goes down when the power goes out.

This is not a fact that people think about. Most will say, “that’s fine, I’ve got my mobile” … but do you? The typical mobile phone cell tower has several hours of battery back-up, and can be overwhelmed by traffic even in non-emergencies. They are fundamentally engineered to a cost, thus compromises are made on how long they can run without mains power, and how much call capacity they carry.

In the 2008 storms that hit The Gap, I had no mobile telephone coverage for 2 days. My Nokia 3310 would occasionally pick up a signal from a tower in a neighbouring suburb such as Keperra, Red Hill or Bardon, and would thus occasionally receive the odd text message… but rarely could it muster the effective radiated power to reply or make calls. (Yes, and Nokia did tell me that internal antennas surpassed the need for external ones. An 850MHz yagi might’ve worked!)

Emergency Services

Now, you tell yourself, “Well, the emergency services have their own radios…”, and this is correct. They do have their own radio networks, and they too are generally quite reliable, but they have their problems. The Emergency Alerting System employed in Victoria was having capacity problems as far back as 2006 (emphasis mine):

A high-priority project under the Statewide Integrated Public Safety Communications Strategy was establishing a reliable statewide paging system; the emergency alerting system. The EAS became operational in 2006 at a cost of $212 million. It provides coverage to about 96 per cent of Victoria through more than 220 remote transmitter sites. The system is managed by the Emergency Services Telecommunications Agency on behalf of the State and is used by the CFA, VICSES and Ambulance Victoria (rural) to alert approximately 37,400 personnel, mostly volunteers, to an incident. It has recently been extended to a small number of DSE and MFB staff.

Under the EAS there are three levels of message priority: emergency, non-emergency, and administrative. Within each category the system sends messages on a first-in, first-out basis. This means queued emergency messages are sent before any other message type and non-emergency messages have priority over administrative messages.

A problem with the transmission speed and coverage of messages was identified in 2006. The CFA expressed concern that areas already experiencing marginal coverage would suffer additional message loss when the system reached its limits during peak events.

To ensure statewide coverage for all pagers, in November 2006 EAS users decided to restrict transmission speed and respond to the capacity problems by upgrading the system. An additional problem with the EAS was caused by linking. The EAS can be configured to link messages by automatically sending a copy of a message to another pager address. If multiple copies of a message are sent the overall load on the system increases.

By February 2008 linking had increased by 25 per cent.

During the 2008 windstorm in Victoria the EAS was significantly short of delivery targets for non-emergency and administrative messages. The Emergency Services Telecommunications Agency subsequently reviewed how different agencies were using the system, including their message type selection and message linking. It recommended that the agencies establish business rules about the use of linking and processes for authorising and monitoring de-linking.

The planned upgrade was designed to ensure the EAS could cope better with more messages without the use of linking.

The upgrade was delayed several times and rescheduled for February 2009; it had not been rolled out by the time of Black Saturday. Unfortunately this affected the system on that day, after which the upgrade was postponed indefinitely.

I can find mention of this upgrade taking place around 2013. From what I gather, it did eventually happen, but it took a roasting from Mother Nature to make it happen. The lesson here is that even purpose-built networks can fall over, and thus, particularly in major incidents, it is prudent to have a back-up plan.

Alternatives

For the lay person, CB radio can be a useful tool for short-range (longer-than-yelling-range) voice communications. UHF CB will cover a few kilometres in urban environments and can achieve quite long distances if good line-of-sight is maintained. They require no apparatus license, and are relatively inexpensive.

It is worth having a couple of cheap ones, a small torch and a packet of AAA batteries (stored separately) in the car or in a bag you take with you. You can’t use them if they’re in a cupboard at home and you’re not there.

The downside with the hand-helds, particularly the low end ones, is effective radiated power. They will have small “rubber ducky” antennas, optimised for size, and will typically have limited transmit power, some can do the 5W limit, but most will be 1W or less.

If you need a bit more grunt, a mobile UHF CB set and magnetic mount antenna could be assembled and fitted to most cars, and will provide 5W transmit power, capable of about 5-10km in good conditions.

HF (27MHz) CB can go further, and with 12W peak envelope power, it is possible to get across town with one, even interstate or overseas when conditions permit. These too are worth looking at, and many can be had cheaply second-hand. They require a larger antenna to be effective, however, and are less common today.

Beware of fakes though… A CB radio must meet “type approval”, just being technically able to transmit in that band doesn’t automatically make it a CB, it must meet all aspects of the Citizens Band Radio Service Class License to be classified a CB.

If it does more than 5W on UHF, it is not a UHF CB. If it mentions a transmit range outside of 476-478MHz, it is not a UHF CB.  Programming it to do UHF channels doesn’t change this.

Similarly, if your HF CB radio can do 26MHz (NZ CB, not Australia), uses FM instead of SSB/AM (UK CB, again not Australia), does more than 12W, or can do 28-30MHz (10m amateur), it doesn’t qualify as being a CB under the class license.
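Those tests are mechanical enough to express in code. A rough sketch of a checker follows; the 5W/12W limits and the 476–478MHz range are from the rules described above, but the 27MHz band edges are my own approximation, and this is an illustration, not legal advice:

```python
def looks_like_uhf_cb(max_power_w, tx_range_mhz):
    """Rough test of the UHF CB criteria described above: no more than
    5W, and a transmit range confined to 476-478MHz."""
    lo, hi = tx_range_mhz
    return max_power_w <= 5 and lo >= 476.0 and hi <= 478.0

def looks_like_hf_cb(max_pep_w, tx_range_mhz, modes):
    """Rough test of the 27MHz CB criteria described above: no more than
    12W PEP, SSB/AM only, and no 26MHz or 28-30MHz coverage.  The band
    edges used here (26.9-27.5MHz) are my approximation."""
    lo, hi = tx_range_mhz
    if max_pep_w > 12:
        return False
    if not set(modes) <= {"SSB", "AM"}:   # FM-only sets are UK CB
        return False
    return 26.9 <= lo and hi <= 27.5

print(looks_like_uhf_cb(5, (476.425, 477.4125)))   # a compliant set
print(looks_like_uhf_cb(8, (400.0, 480.0)))        # a "fake": wrong power, wrong range
```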

Amateur radio licensing

If you’ve got a good understanding of high-school mathematics and physics, then a Foundation amateur radio license is well within reach.  In fact, I’d strongly recommend it for anyone doing first year Electrical Engineering … as it will give you a good practical grounding in electrical theories.

Doing so, you get to use up to 10W of power (double what UHF CB gives you; 3dB can matter!) and access to four HF, one VHF and one UHF band using analogue voice or hand-keyed Morse code.

You can then use those “CB radios” that sell on eBay/DealExtreme/BangGood/AliExpress…etc, without issue, as being un-modified “commercial off-the-shelf”, they are acceptable for use under the Foundation license.

Beyond Voice: amateur radio digital modes

Now, it’s all well and good being able to get voice traffic across a couple of suburban blocks. In a large-scale disaster, it is often necessary to co-ordinate recovery efforts, which often means listings of inventory and requirements, welfare information, etc., need to be broadcast.

You can broadcast this by voice over radio… very slowly!

You can put a spreadsheet on a USB stick and drive it there. You can deliver photos that way too. During an emergency event, roads may be impassable, or they may be congested. If the regular communications channels are down, how does one get such files across town quickly?

Amateur radio requires operators who have undergone training and hold current apparatus licenses, but this service does permit the transmission of digital data (for standard and advanced licensees), with encryption if needed (“intercommunications when participating in emergency services operations or related training exercises”).

Amateur radio is by its nature, experimental. Lots of different mechanisms have been developed through experiment for intercommunication over amateur radio bands using digital techniques.

Morse code

The oldest by far is commonly known as “Morse code”, and while it is slower than voice, it requires simpler transmitting and receiving equipment, and concentrates the transmitted power over a very narrow bandwidth, meaning it can be heard reliably at times when more sophisticated modes cannot. However, not everybody can send or receive it (yours truly included).

I won’t dwell on it here, as there are more practical mechanisms for transmitting lots of data, but have included it here for completeness. I will point out though, due to its simplicity, it has practically no latency, thus it can be faster than SMS.
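For the curious, Morse timing is easy to quantify. Using the standard element weights (dot = 1 unit, dash = 3, with gaps of 1, 3 and 7 units within characters, between characters and between words respectively) and the common convention that one unit lasts 1.2/WPM seconds, a sketch:

```python
# Morse timing sketch.  The table below is deliberately abbreviated to
# just the letters needed for the demo.
MORSE = {"E": ".", "T": "-", "S": "...", "O": "---", "A": ".-",
         "R": ".-.", "I": "..", "P": ".--.", "N": "-."}

def units(text):
    """Total timing units for a message (letters in the table above)."""
    total = 0
    for wi, word in enumerate(text.upper().split()):
        if wi:
            total += 7                      # inter-word gap
        for ci, ch in enumerate(word):
            if ci:
                total += 3                  # inter-character gap
            symbols = MORSE[ch]
            total += sum(1 if s == "." else 3 for s in symbols)
            total += len(symbols) - 1       # gaps between dots and dashes
    return total

def seconds(text, wpm):
    """One unit lasts 1.2/WPM seconds (the PARIS convention)."""
    return units(text) * 1.2 / wpm

print(f"'PARIS' at 20 WPM: {seconds('PARIS', 20):.2f} s")
```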

Radio Teletype

Okay, there are actually quite a few modes that can be described in this manner, and I’ll use this term to refer to the family of modes. Basically, you can think of it as two dumb terminals linked via a radio channel. When you type text into one, that text appears on the other in near real-time. The latency is thus very low, on par with Morse code.

The earliest of these is the RTTY mode, but more modern incarnations of the same idea include PSK31.

These are normally used as-is. With some manual copying and pasting of pieces of text at each end, it is possible to encode other forms of data as short runs of text and send files in short hand-crafted “packets”, which are then hand-deconstructed and decoded at the far end.

This can be automated to remove the human error component.
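A sketch of how that automation might look: split the file into short numbered lines of base64 text, each with a checksum, so the far end can detect corruption and ask for a resend. The framing here is entirely made up for illustration; real systems have their own formats.

```python
import base64
import binascii

def to_packets(data: bytes, chunk_size: int = 48):
    """Split a file into short text 'packets' safe to paste into a
    radioteletype channel: sequence number, CRC32 of the chunk, then
    the chunk itself base64-encoded.  Purely illustrative framing."""
    packets = []
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        crc = binascii.crc32(chunk) & 0xFFFFFFFF
        packets.append(f"{offset // chunk_size:04d} {crc:08X} "
                       f"{base64.b64encode(chunk).decode()}")
    return packets

def from_packets(lines):
    """Reassemble, raising if any packet fails its checksum."""
    out = []
    for line in lines:
        seq, crc, payload = line.split()
        chunk = base64.b64decode(payload)
        if binascii.crc32(chunk) & 0xFFFFFFFF != int(crc, 16):
            raise ValueError(f"packet {seq} corrupted, ask for a resend")
        out.append(chunk)
    return b"".join(out)

msg = b"INVENTORY: 40x blankets, 12x generators, 300L water"
for p in to_packets(msg):
    print(p)
assert from_packets(to_packets(msg)) == msg
```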

The method is slow, but these radioteletype modes are known for being able to “punch through” poor signal conditions.

When I was studying web design back in 2001, we were instructed to keep all photos below 30kB in size. At the time, dial-up Internet was common, and loading times were a prime consideration.

Thus instead of posting photos like this, we had to shrink them down, like this. Yes, some detail is lost, but it is good enough to get an “idea” of the situation.

The former photo is 2.8MB, the latter is 28kB. Via the above contrived transmission system, it would take about 20 minutes to transmit.

The method would work well for anything that is text, particularly simple spreadsheets, which could be converted to Comma Separated Values to strip all but the most essential information, bringing file sizes down into realms that would allow transmission times in the order of 5 minutes. Text also compresses well, so in some cases transmission time can be reduced further.
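The arithmetic behind these estimates is straightforward: time = size ÷ effective throughput. A sketch, assuming an effective rate of about 23 bytes per second (roughly what the 20-minutes-for-28kB figure above implies; real throughput varies with mode, overhead and conditions):

```python
def tx_minutes(size_bytes, throughput_Bps):
    """Minutes to move size_bytes at a given effective rate in bytes/sec."""
    return size_bytes / throughput_Bps / 60

# ~23 bytes/sec is an assumed effective rate, chosen to match the
# 28kB-in-20-minutes figure above; real rates vary widely.
print(f"28kB photo: ~{tx_minutes(28_000, 23):.0f} min")
print(f"7kB CSV:    ~{tx_minutes(7_000, 23):.0f} min")
```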

To put this into perspective, a drive from The Gap where that photo was taken, into the Brisbane CBD, takes about 20 minutes in non-peak-hour normal circumstances. It can take an hour at peak times. In cases of natural disaster, the roads available to you may be more congested than usual, thus you can expect peak-hour-like trip times.

Radio Faximile and Slow Scan Television

This covers a wide variety of modes, ranging from the ancient, like Hellschreiber (which has its origins in the German military back in World War II), through various analogue slow-scan television modes, to the modern digital slow-scan television.

This allows the transmission of photos and visual information over radio. Some systems like EasyPAL and its ilk (based on HamDRM, a variant of Digital Radio Mondiale) are in fact general-purpose modems for transmitting files, and thus can transmit non-graphical data too.

Transmit times can vary, but the analogue modes take between 30 seconds and two minutes depending on quality. For the HamDRM-based systems, transmit speeds vary between 86Bps and 795Bps depending on the settings used.

Packet Radio

Packet radio is the concept of implementing packet-switched networks over radio links. There are various forms of this, the most common in amateur radio being PACTOR, WINMOR, and the 300-baud AFSK, 1200-baud AFSK and 9600-baud FSK packet modes.

300-baud AFSK is normally used on HF links, and hails from experiments using surplus Bell 103 modems modified to work with radio. Similarly, on VHF and UHF FM radio, experiments were done with surplus Bell 202 modems, giving rise to the 1200-baud AFSK mode.

The 9600-baud FSK mode was the invention of James Miller G3RUH, and was one of the first packet radio modes actually developed by radio amateur operators for use on radio.

These are all general-purpose data modems, and while they can be used for radioteletype applications, they are designed with computer networking in mind.

These feature facilities like automatic repeating of lost messages, and in some cases support forward error correction. PACTOR and WINMOR are actually used with the Winlink radio network, which provides email services.

The 300-baud, 1200-baud and 9600-baud versions generally use a networking protocol called AX.25, and by configuring stations with multiple such “terminal node controllers” (modems) connected and appropriate software, a station can operate as a router, relaying traffic received via one radio channel to a station that’s connected via another, or to non-AX.25 stations on Winlink or the Internet.

It is well suited to automatic stations, operating without human intervention.
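For a flavour of what an AX.25 frame looks like on the wire, here is a minimal encoder for an unconnected “UI” frame: each callsign is padded to six characters and shifted left one bit, the SSID goes in a seventh byte, and the frame check sequence is omitted because a KISS TNC appends that itself. The callsign is a made-up example:

```python
def ax25_address(callsign: str, ssid: int, last: bool = False) -> bytes:
    """Encode one AX.25 address: six characters shifted left one bit,
    then an SSID byte.  The low bit of the final SSID byte marks the
    end of the address field."""
    field = bytes(ord(c) << 1 for c in callsign.ljust(6)[:6].upper())
    ssid_byte = 0b0110_0000 | ((ssid & 0x0F) << 1) | (1 if last else 0)
    return field + bytes([ssid_byte])

def ax25_ui_frame(dest: str, src: str, info: bytes,
                  dest_ssid: int = 0, src_ssid: int = 0) -> bytes:
    """Build an unconnected (UI) AX.25 frame: addresses, control 0x03,
    PID 0xF0 (no layer 3), then the payload.  The frame check sequence
    is left off, as a KISS TNC adds it when transmitting."""
    return (ax25_address(dest, dest_ssid)
            + ax25_address(src, src_ssid, last=True)
            + b"\x03\xf0" + info)

# VK4XYZ is a hypothetical callsign, used purely for illustration.
frame = ax25_ui_frame("CQ", "VK4XYZ", b"Hello from packet radio")
print(frame.hex())
```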

AX.25 packet and PACTOR I are open standards, the later PACTOR modems are proprietary devices produced by SCS in Germany.

AX.25 packet is capable of transmit speeds between 15Bps (300 baud) and 1kBps (9600 baud). PACTOR varies between 5Bps and 650Bps.

In theory, it is possible to develop new modems for transmitting AX.25, the HamDRM modem used for slow-scan television and the FDMDV modem used in FreeDV being good starting points as both are proven modems with good performance.

These simply require an analogue interface between the computer sound card and radio, and appropriate software.  Such an interface made to link a 1200-baud TNC to a radio could be converted to link to a low-cost USB audio dongle for connection to a computer.

If someone is set up for 1200-baud packet, setting up for these other modes is not difficult.

High speed data

Going beyond standard radios, amateur radio also has some very high-speed data links available. D-Star Digital Data operates on the 23cm microwave band and can potentially transmit files at up to 16KBps, which approaches ADSL-lite speeds. Transceivers such as the Icom ID-1 provide this via an Ethernet interface for direct connection to a computer.

General Electric have a similar offering for industrial applications that operates on various commercial bands, some of which can reach amateur frequencies, thus would be usable on amateur bands. These devices offer transmit speeds up to 8KBps.

A recent experiment by amateurs using off-the-shelf 50mW 433MHz FSK modules and Realtek-based digital TV tuner receivers produced a high-speed data link capable of delivering data at up to 14KBps using a wideband (~230kHz) radio channel on the 70cm band. They used it to send high-definition photos from a high-altitude balloon.

The point?

We’ve got a lot of tools at our disposal for getting a message through, and collectively, 140 years of experience. In an emergency situation, that means we have a lot of different options: if one doesn’t work, we can try another.

No, a 1200-baud VHF packet link won’t stream 4k HD video, but it has minimal latency and will take less than 20 minutes to transmit a 100kB file over distances of 10km or more.

A 1kB email will be at the other end before you can reach for your car keys.  Further experimentation and development means we can only improve.  Amateur radio is far from obsolete.

Jul 22 2016

Seems spying on citizens is the new black these days; most government “intelligence” agencies are at it in one form or another. Then the big software companies feel left out, so they join in the fun as well, funnelling as much telemetry into their walled gardens as possible. (Yes, I’m looking at you, Microsoft.)

This is something I came up with this morning. It’s incomplete, but maybe I can finish it off at some point. I wonder if Cortana has a singing voice?

Partial lyrics for the ASIO/GCHQ/NSA song book

Oct 31 2015

Well, it seems the updates to Microsoft’s latest aren’t going as its maker planned. A few people have asked me about my personal opinion of this OS, and I’ll admit, I have no direct experience with it.  I also haven’t had much contact with Windows 8 either.

That said, I do keep up with the news, and a few things do concern me.

The good news

It’s not all bad of course.  Windows 8 saw a big shrink in the footprint of a typical Windows install, and Windows 10 continues to be fairly lightweight.  The UI disaster from Windows 8 has been somewhat pared back to provide a more traditional desktop with a start menu that combines features from the start screen.

There are some limitations with the new start menu, but from what I understand, it behaves mostly like the one from Windows 7.  The tiled section still has some rough edges though, something that is likely to be addressed in future updates of Windows 10.

If this is all that had changed though, I’d be happily accepting it.  Sadly, this is not the case.

Rolling-release updates

Windows has, since day one, been on a long-term support release model.  That is, they bring out a release, then they support it for X years.  Windows XP was released in 2001 and was supported until last year, for example.  Windows Vista is still on extended support, and Windows 7 will enter extended support soon.

Now, in the Linux world, we’ve had both long-term support releases and rolling release distributions for years.  Most of the current Linux users know about it, and the distribution makers have had many years to get it right.  Ubuntu have been doing this since 2004, Debian since 1998 and Red Hat since 1994.  Rolling releases can be a bumpy ride if not managed correctly, which is why the long-term support releases exist.  The community has recognised the need, and meets it accordingly.

Ubuntu are even predictable with their releases.  They release on a schedule; anything not ready for release is pushed back to the next one.  They do a release every 6 months, in April and October, and every 2 years the April release is a long-term support release.  That is, 8.04, 10.04, 12.04 and 14.04 are all LTS releases.  The LTS releases are supported for five years (the earliest got three), the regular releases for nine to eighteen months.
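The scheme is regular enough that the version numbers can be derived mechanically: YY.MM, with April releases in even years being the LTS ones. A trivial sketch:

```python
def ubuntu_release(year, month):
    """Ubuntu version strings are just YY.MM; April releases in even
    years are the LTS ones (8.04, 10.04, 12.04, ...)."""
    version = f"{year % 100}.{month:02d}"
    is_lts = (month == 4 and year % 2 == 0)
    return version, is_lts

print(ubuntu_release(2014, 4))    # ('14.04', True)
print(ubuntu_release(2014, 10))   # ('14.10', False)
```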

Debian releases are basically LTS, unless you run Debian Testing or Debian Unstable.  Then you’re running rolling-release.

Some distributions like Gentoo are always rolling-release.  I’ve been running Gentoo for more than 10 years now, and I find the rolling releases rarely give me problems.  We’ve had our hiccups, but these days, things are smooth.  Updating an older Gentoo box to the latest release used to be a fight, but these days, is comparatively painless.

It took most of that 10 years to get to that point, and this is where I worry about Microsoft forcing the vast majority of Windows users onto a rolling-release model, as they will be doing this for the first time.  As I understand it, there will be four branches:

  1. Windows Insiders programme is like Debian Unstable.  The very latest features are pushed out to them first.  They are effectively running a beta version of Windows, and can expect many updates, many breakages, lots of things changing.  For some users, this will be fine; for others it’ll be a headache.  There’s no option to skip updates, but you probably will have the option of resigning from the Windows Insiders programme.
  2. Home users basically get something like Debian Testing.  After updates have been thrashed out by the insiders, it gets force-fed to the general public.  The Home version of Windows 10 will not have an option to defer an update.
  3. Professional users get something more like the standard releases of Debian.  They’ll have the option of deferring an update for up to 30 days, so things can change less frequently.  It’s still rolling-release, but they can at least plan their updates to take place once a month, hopefully without disrupting too much.
  4. Enterprise users get something like the old-stable release of Debian.  Security updates, and they have the option to defer updates for a year.

Enterprise isn’t available unless you’re a large company buying lots of licenses.  If people must buy a Windows 10 machine, my recommendation would be to go for the Professional version; then you have some right of veto, as not all the updates are purely security-related: some will change the UI and add or remove features.

I can see this being a major headache for anyone who has to support hardware or software on Windows 10, however, since it’s essentially the build number that becomes important: different release builds will behave differently, possibly different enough that things need much more testing and maintenance than vendors are used to.

Some are very poor at supporting Linux right now due to the rolling-release model of things like the Linux kernel, so I can see Windows 10 being a nightmare for some.

Privacy concerns

One of the big issues to be raised with Windows 10 is the inclusion of telemetry to “improve the user experience” and other features that are seen as an invasion of privacy.  Many things can be turned off, but it will take someone who’s familiar with the OS or good at researching the problem to turn them off.

Probably the biggest concern from my perspective as a network administrator is the WiFi Sense feature.  This is a feature in Windows 10 (and Windows Phone 8), turned on by default, that allows you to share WiFi passwords with other contacts.

If one of that person’s contacts then comes into range of your AP, their device contacts Microsoft’s servers which have the password on file, and can provide it to that person’s device (hopefully in a secured manner).  The password is never shown to the user themselves, but I believe it’s only a matter of time before someone figures out how to retrieve that password from WiFi Sense.  (A rogue AP would probably do the trick.)

We have discussed this at work where we have two WiFi networks: one WPA2 enterprise one for staff, and a WPA2 Personal one for guests.  Since we cannot control whether the users have this feature turned on or not, or whether they might accidentally “share” the password with world + dog, we’re considering two options:

  1. Banning the use of Windows 10 (and Windows Phone 8) devices on our guest WiFi network.
  2. Implementing a cron job to regularly change the guest WiFi password.  (The Cisco AP we have can be hit with SSH; automating this shouldn’t be difficult.)
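Option 2 might look something like the following sketch. The host name, SSID and the AP commands are placeholders (check your AP’s actual CLI syntax); only the passphrase generation is portable:

```python
import secrets
import string
import subprocess

# Sketch of option 2: generate a fresh WPA2 passphrase and push it to
# the AP over SSH from a cron job.  The host name, SSID and the AP
# commands below are placeholders, not real values.
AP_HOST = "guest-ap.example.com"   # hypothetical
GUEST_SSID = "GuestNet"            # hypothetical

def new_passphrase(length=16):
    """Random passphrase drawn from letters and digits."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def rotate():
    """Push a new passphrase to the AP and return it for publishing."""
    psk = new_passphrase()
    commands = "\n".join([
        "configure terminal",
        f"dot11 ssid {GUEST_SSID}",   # command syntax is an assumption
        f"wpa-psk ascii {psk}",
        "end",
        "write memory",
    ])
    subprocess.run(["ssh", AP_HOST], input=commands.encode(), check=True)
    return psk

print(new_passphrase())   # rotate() is what the cron job would call
```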

There are some nasty points in the end user license agreement too that seem to give Microsoft free rein to make copies of any of the data on the system.  They say personal information will be removed, but even with the best of intentions, it is likely that some personal information will get caught in the net cast by telemetry software.

Forced “upgrades” to Windows 10

This is the bit about Windows 10 that really bugs me.  Okay, Microsoft is pushing a deal where they’ll provide it to you for free for a year.  Free upgrades, yaay!  But wait: how do you know if your hardware and software is compatible?  Maybe you’re not ready to jump on the bandwagon just yet, or maybe you’ve heard news about the privacy issues or rolling release updates and decided to hold back.

Many users of Windows 7, 8 and 8.1 are now being force-fed the new release, whether they asked for it or not.

Now the problem with this is it completely ignores the fact that some do not run with an always-on Internet connection with a large quota.  I know people who only have a 3G connection, with a very small (1GB) quota.  Windows 10 weighs in at nearly 3GB, so for them, they’ll be paying for 2GB worth of overuse charges just for the OS, never mind what web browsing, emailing and other things they might have actually bought their Internet connection for.

Microsoft employees have been outed for showing such contempt before.  It seems so many there are used to the idea of an Internet connection that is always there and has a big enough quota to be considered “unlimited” that they have forgotten that some parts of the world do not have such luxuries.  The computer and the Internet are just tools: we do not buy an Internet connection just for the sake of having one.

Stopping updates

There are a couple of tools that exist for managing this.  I have not tested any of them, and cannot vouch for their safety or reliability.

  • BlockWindows (github link) is a set of scripts that, when executed, uninstall and disable most of the Windows 10-related updates on Windows 7 and 8/8.1.
  • GWX Control Panel is a (proprietary?) tool for controlling the GWX process.  The download is here.

My recommendation is to keep good backups.  Find a tool that will do a raw partition back-up of your Windows partition, and keep your personal files on a separate partition.  Then, if Microsoft does come a-knocking, you can easily roll back.  Hopefully after the “free upgrade” offer has expired (about this time next year), they will cease and desist from this practice.

Apr 6 2015

I’ve been a long-time user of PGP, having had a keypair since about 2003.  OpenPGP has some nice advantages: it’s a more social arrangement, in that verification is done by physically meeting people.  I think it is more personal that way.

However, you can still get isolated islands.  My old key was a branch of the strong set, having been signed by one person who did a lot of key-signing, but sadly, thanks to Heartbleed, I could no longer trust it.  So I’ve had to start anew.

The alternative way to secure communications is to use some third party, like a certificate authority, and use S/MIME.  This is the other side of the coin, where a company verifies who you are, and that company is then entrusted to do its job properly.  If you trust the company’s certificate in your web browser or email client, you implicitly trust every non-revoked valid certificate that company has signed.  As such, there is a proliferation of companies that act as CAs, and a typical web browser will come with a list as long as your arm/leg/whatever.

I’ve just set up one such certificate for myself, using StartCOM’s CA as the authority.  If you trust StartCOM, and want my GPG key, you’ll find a S/MIME signed email with my key here.  If you instead trust my GPG signature and want my S/MIME public key, you can get that here.  If you want to throw caution to the wind, you can get the bare GPG key or S/MIME public key instead.

Update: I noticed GnuPG 2.1 has been released, so I now have an ECDSA key; fingerprint B8AA 34BA 25C7 9416 8FAE  F315 A024 04BC 5865 0CF9.  You may use it or my existing RSA key if your software doesn’t support ECDSA.

Jun 27 2012

I think our telecommunications supplier has some explaining to do in regards to this issue.

Now, I’m not overly concerned that my usage is being tracked internally by Telstra. A lot of this recording is for tracking abuse of their network, and for billing purposes. This is fine; I have no qualms with that.

However, the above linked article, which I initially heard about on the radio this morning, discusses a more sinister form of tracking.

Here, I have keyed in a special URL… observe the access logs:

www.longlandclan.yi.org 149.135.145.110 - - [27/Jun/2012:09:57:28 +1000] "GET /~stuartl/test.htm HTTP/1.1" 200 102
www.longlandclan.yi.org 50.56.58.47 - - [27/Jun/2012:09:57:28 +1000] "GET /~stuartl/test.htm HTTP/1.0" 200 102
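Spotting this kind of shadow fetch can be automated: group hits by URL and timestamp, and flag any group served to more than one client IP. A sketch against the two log lines above:

```python
import re
from collections import defaultdict

# Apache combined-log prefix: vhost, client IP, ident, user, [timestamp],
# then the request line.
LOG_RE = re.compile(r'^(?P<vhost>\S+) (?P<ip>\S+) \S+ \S+ '
                    r'\[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+)')

def shadow_fetches(lines):
    """Group hits by (timestamp, URL); any group with more than one
    client IP is a candidate 'mystery second fetch'."""
    hits = defaultdict(set)
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            hits[(m["time"], m["url"])].add(m["ip"])
    return {key: ips for key, ips in hits.items() if len(ips) > 1}

log = [
    'www.longlandclan.yi.org 149.135.145.110 - - [27/Jun/2012:09:57:28 +1000] '
    '"GET /~stuartl/test.htm HTTP/1.1" 200 102',
    'www.longlandclan.yi.org 50.56.58.47 - - [27/Jun/2012:09:57:28 +1000] '
    '"GET /~stuartl/test.htm HTTP/1.0" 200 102',
]
for (when, url), ips in shadow_fetches(log).items():
    print(f"{url} at {when}: fetched by {sorted(ips)}")
```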

Now, you’ll note there wasn’t one hit, but two. Why? One is clearly from the phone I’m using, as it so happens my phone is hiding behind 149.135.145.110, one of Telstra’s many carrier-grade NAT gateways (and shame on you, Telstra, for using carrier NAT).

Who’s this other one? Someone on Rackspace, a US hosting company. What business is my Internet traffic to this other party?

The saving grace for me is that most of my traffic is to the APRS-IS network, with some HTTP traffic checking that my tracker has my location up-to-date, and the odd query here and there. Maybe a gratuitous download of an ISO or system updates towards the end of the billing period. They’ll get pretty bored with my NextG usage; there’d be hardly anything of commercial value there.

Others however, may have more reason to feel violated. Telstra have some explaining to do.