security

HTML Email ought to be considered harmful

Some people make fun of my plain-text emails, but really, I think it’s time we reconsidered our desire for colours, hyperlinks and inline images in email messages, especially for those who use web-based email clients as their primary email interface.

The problem basically boils down to this: HTML gives too much opportunity for mischief by a malicious party. In most cases, HTML isn’t even necessary to convey the information required. Tables are about the only real “feature” that is hard to replicate in plain text; for everything else, reasonable de facto standards already exist.

Misleading hyperlinks

HTML has a feature where a link to a remote page can take on any descriptive text the author desires, including images and other valid URIs. For example, the following piece of HTML code is perfectly valid:

<a href="http://www.google.com.malicious.website.example.com/">
   http://www.google.com
</a>

There are many cases where this feature is “useful”, however in an email it can be used to disguise phishing attempts. In the above example, the link claims to point to Google’s search website, but would actually redirect the user to some other, likely malicious, website.
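As a rough sketch of a countermeasure (Python, standard library only; the heuristic and class names here are my own, not any mail client’s actual code), a client could flag anchors whose visible label looks like a URL but names a different host than the real target:

from html.parser import HTMLParser
from urllib.parse import urlsplit

class MisleadingLinkDetector(HTMLParser):
    """Flags <a> elements whose visible text looks like a URL
    but names a different host than the actual href target."""
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            label = self.text.strip()
            # A label that is itself a URL should point at the same host.
            if label.startswith(("http://", "https://")) and \
                    urlsplit(label).netloc != urlsplit(self.href).netloc:
                self.suspicious.append((label, self.href))
            self.href = None

detector = MisleadingLinkDetector()
detector.feed('<a href="http://www.google.com.malicious.website.example.com/">'
              'http://www.google.com</a>')
print(detector.suspicious)
# [('http://www.google.com',
#   'http://www.google.com.malicious.website.example.com/')]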

Granted, not every user can read a URI to determine whether it is safe. There are adults who “grew up with the Internet” and have never typed a URI into an address bar, relying instead on tools like search engines to locate websites of interest.

However, it would seem disingenuous to say that because a proportion of the community cannot read a URI, we should hide all links from everybody. For that small portion, showing the links won’t make a difference; for everyone else, it at least makes such traps easier to avoid.

Media exploits

Media decoders are written by humans, and humans are imperfect, thus it is fair to say there are media decoders that contain bugs, some of which could be disastrous for computer security.

Microsoft had such a problem in their GDI+ JPEG decoder back in 2004. More recently, there was a kernel-level security vulnerability in their TrueType font parser.

Modern HTML allows embedding of all this, and more. Most email clients will also let you “preview” an email without opening it. If an email embeds inline media exploiting vulnerabilities such as those above, merely previewing it can be sufficient for an attacker to gain access.

Details are scarce, but it would appear it was a vulnerability along these lines that allowed unauthorised access into the Australian National University back in 2018.

Scripting

Modern web standards allow all kinds of means for embedding scripts, that is, small pieces of interpreted code which run client-side in the HTML renderer. ECMAScript (JavaScript) can be embedded:

  • in <script> tags (the traditional way)
  • inside a hyperlink using a javascript: URI
  • via HTC and XBL features in Internet Explorer and Mozilla Firefox, respectively

There are probably plenty more ways I haven’t thought of.

Web-based email clients

Now, a stand-alone email client such as Microsoft Outlook, Eudora or Mozilla Thunderbird can simply decline to implement the scripting features. The problem is most acute where web-based email clients are used.

Here, you’re viewing an email in an HTML engine that has complete media and scripting capabilities. There are dozens of ways to embed both forms of content in a blob of HTML, and you are entirely at the mercy of your web-based email client’s ability to sanitise that HTML before it is dumped into the DOM tree that represents your email client.

As far as the web browser is concerned, the “email” is just another web page, and will not hesitate to execute embedded scripts or render inline media, whether the user wishes it to or not.
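To illustrate just how aggressive that sanitisation has to be, here’s a minimal allow-list sketch (Python, standard library; the tag choices are my own): keep a handful of harmless tags, drop every attribute, and strip scripts along with their contents. Real sanitisers must handle far more (CSS, URI schemes, nesting tricks), which is exactly why they so often get it wrong.

from html.parser import HTMLParser

ALLOWED = {"p", "br", "b", "i", "u", "blockquote"}   # no scripts, no media
DROP_WITH_CONTENT = {"script", "style"}              # strip tag AND its body

class AllowListSanitiser(HTMLParser):
    """Rebuilds HTML keeping a tiny allow-list of tags and NO attributes,
    so onclick=, style= and javascript: URIs cannot survive."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in DROP_WITH_CONTENT:
            self.skip += 1
        elif tag in ALLOWED and not self.skip:
            self.out.append(f"<{tag}>")   # attributes deliberately dropped

    def handle_endtag(self, tag):
        if tag in DROP_WITH_CONTENT:
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED and not self.skip:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

s = AllowListSanitiser()
s.feed('<p onclick="alert(1)">Hello<script>document.location="//evil"</script></p>')
print("".join(s.out))   # <p>Hello</p>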

It’s not known what ANU uses for their email infrastructure, but many universities are big fans of web-based email since it means they don’t have to explain to end users how to configure their email clients, and provides portability for their users.

Putting users at risk

Despite the above, it would appear there are lots of organisations that are completely oblivious to this problem, and insist on forcing people to render their emails as HTML, putting their customers and users at risk of a security breach.

The purpose of multiple formats in the same email is to provide alternate formats of the same content. Not to provide totally different emails because you can’t be stuffed!

For example, my workplace’s hosting provider recently sent us an email which, when viewed as plain text, read as follows:

Hello Client,
 
Unfortunately your email client is outdated and does not support HTML emails, our system uses HTML emails as standard. You will NOT be able to read this email.
 
HOW DO I READ THIS EMAIL?
 
To read this email please login to your domain manager https://hostingprovider.example.com/login/ and click on Notifications to see a list of all sent emails.
 
Thank You
 
Customer Support

The suggestion that an email client configured to read emails as plain text is “outdated” is naïve in the extreme, and I’d expect a hosting provider to know better. I’m thankful I personally don’t purchase services from them!

Then there’s financial service providers. One share registry’s handling of the situation is downright abusive:

(Screenshot: one of the numerous emails Link Market Services sent, rendering as a completely blank message.)

Yeah, rather than just omitting the text/plain component and letting the email client at this end try to render the HTML as plain text (which works reasonably well in many cases), they just sent an empty text/plain body:

From: …redacted… <comms@linkmarketservices.com.au>
To: …redacted…
Reply-To: donotreply@linkmarketservices.com.au
Date: Wed, 29 Jul 2020 04:42:30 +0000
Subject: …redacted… Funds Attribution Managed Investment Trust Member Annual
 Statement
Content-Type: multipart/alternative;
 boundary=--boundary_55327_8fbc43bd-48f3-4aa1-9ab7-9046df02b853
ZMID: 9f485235-9848-49cf-9a66-62c215ea86ba-1
Message-ID: <0100017398e10334-d45a0a83-ff4d-4b83-8d19-146b100017f6-000000@us-east-1.amazonses.com>
X-SES-Outgoing: 2020.07.29-54.240.9.110
Feedback-ID: 1.us-east-1.oB/l4dCmGdzC38jjMLirCbscajeK9vK6xBjWQPPJrkA=:AmazonSES


----boundary_55327_8fbc43bd-48f3-4aa1-9ab7-9046df02b853
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"


----boundary_55327_8fbc43bd-48f3-4aa1-9ab7-9046df02b853
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Content-Type: text/html; charset="utf-8"

PCFET0NUWVBFIGh0bWw+Cgo8aHRtbCBsYW5nPSJlbiI+CjxoZWFkPgo8bWV0YSBjaGFyc2V0PSJ1
dGYtOCI+CjxtZXRhIGNvbnRlbnQ9IndpZHRoPWRldmljZS13aWR0aCwgaW5pdGlhbC1zY2FsZT0x
IiBuYW1lPSJ2aWV3cG9ydCI+CjxtZXRhIGNvbnRlbnQ9IklFPWVkZ2UiIGh0dHAtZXF1aXY9Ilgt

Ohh yeah, I’m so fluent in BASE64! I’ve since told them to send it via snail mail. Ensuring they don’t burn down forests posting blank sheets of A4 paper will be their problem.
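Detecting this sort of abuse programmatically is trivial, for what it’s worth. Here’s a sketch using Python’s standard email library, run against a cut-down message of the same shape as the one above:

import email
from email import policy

raw_message = """\
Content-Type: multipart/alternative; boundary=BOUND

--BOUND
Content-Type: text/plain; charset="utf-8"


--BOUND
Content-Type: text/html; charset="utf-8"

<p>All the actual content lives here.</p>
--BOUND--
"""

msg = email.message_from_string(raw_message, policy=policy.default)
for part in msg.walk():
    if part.get_content_type() == "text/plain" and not part.get_content().strip():
        print("text/plain alternative present but empty; naughty sender!")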

The alternative: wiki-style mark-up

So, our biggest problem is that HTML does “too much”: so much that it becomes a liability from a security perspective. HTML wasn’t the first attempt at rich text in email… some older email clients used enriched text.

Before enriched text and HTML, we made do with various formatting marks: for instance, *bold text might be surrounded by asterisks*, /italics indicated with forward slashes/, and _underlining_ done with underscores.

No, there were no colour or font-size options, but then again this was from a time when teletype terminals were not uncommon. The terminals of that era understood only one fixed-size font, and most did not do colour.

More recently, wikis have built on this to allow some basic mark-up features, whilst still keeping the plain text human-readable. Modern takes on this include reStructuredText and Markdown, the latter being the native format for wikis on GitHub.

Both these formats allow for embedding images inline and for tables (Markdown itself doesn’t do tables, but extensions of it do).

In email clients such images should be replaced with place-holders until the user clicks a button to reveal the images (which they should only click if they trust the sender).

Likewise, hyperlinks should be rendered in full. In a web-based client, the link [a cool website](http://example.com/) might be rendered as a clickable link whose visible text shows both the label and the full target, i.e. “a cool website (http://example.com/)”, thus allowing malicious links to be more easily detected. (It also makes them printable.) Only plain text should be permitted as the “label” for a hyperlink.
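As a sketch of that rendering rule (a naive regex transform for illustration only; a real client would use a proper Markdown parser):

import re

LINK = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def render_link(match: re.Match) -> str:
    label, uri = match.group(1), match.group(2)
    # Keep the full target visible alongside the label,
    # so spoofed links stand out.
    return f'<a href="{uri}">{label} ({uri})</a>'

text = "See [a cool website](http://example.com/) for details."
print(LINK.sub(render_link, text))
# See <a href="http://example.com/">a cool website (http://example.com/)</a> for details.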

Use of such a mark-up format would have a number of benefits:

  • the format is minimal, meaning a much reduced attack surface for security vulnerabilities
  • whilst minimal, it would cover the vast majority of people’s use cases for HTML email
  • the mark-up is light-weight, reducing bandwidth for those on low-speed links or using lower-power devices

The downside might be for businesses, which rely on more advanced HTML features essentially to make an email look like their letterhead. The business community might wish to consider the differences between a printed letter sent via the post and an email sent electronically. They are different beasts, and trying to treat one as a substitute for the other will end in tears.

In any case, a simple letterhead could be embedded as an inline image quite safely, if such a feature were indeed required.

It is in our interests to curtail the features used in email communications if we intend to ensure communications remain safe and reliable.

Cryptography back doors: 21st century’s worst idea

Yeah, so our illustrious Home Affairs minister, Peter Dutton, has come out pushing his agenda for a “back door” into encrypted messaging applications.  How someone so naïve got to be in such a position of power, I have no idea.  Perhaps “Yes, Minister” is more of a documentary than a comedy, much as I’d like to imagine otherwise.

It’s not the first time a politician has suggested the idea, and each time, I wonder how much training they’ve had in things like mathematics (particularly on prime numbers, exponentiation, remainders from division: these are the building blocks for algorithms like RSA, Diffie-Hellman, etc).

Now, they’ll tell us, “We’re not banning encryption, we just want access to ${MESSAGING_APPLICATION}”.  Sure, fine… but ${MESSAGING_APPLICATION} isn’t the only one, and these days it isn’t impossible to imagine someone with the appropriate skills writing their own secure messaging application; the necessary components are integral to every modern web browser.  Internet routers and IP cameras, many of which have poor security and are rarely patched, provide an easy way to host the server-side component of such a system for free, as does the abundance of cheap VPS hosting.  And as for ways of “obscuring the meaning” of communications, we’re spoiled for choice!

So, shut one application down, they’ll just move to another.

Then there’s the slippery slope.  After compromising maybe a dozen applications by legal force, it’s likely there’ll be laws passed to ban all encryption.  Maybe our government should talk to the International Telegraph Union and ask how its 1880s ban on codewords worked out?

The thing is, for such surveillance to work, they have to catch each and every message and scrutinise it for alternate meanings, and such meanings may not be obvious to third parties.  Hell, my choice of words and punctuation on this very website may be a “signal” that tells someone to dress up as Big Bird and do the Chicken Dance in the Queen Street Mall.

This post (ignoring the delivery mechanism) isn’t encrypted, but could have hidden meanings with agreed parties.  That, and modern technology provides all kinds of ways to hide data in plain sight.

Is this a photo of a funny sign, or does it have a message buried within?

Digital cameras often rely on SD cards that are formatted with the FAT file system.  This is a file system which stores files as a linked list of clusters.  These clusters can wind up being stored out-of-order, a problem known as fragmentation.  Defragmentation tools were big business in the 90s.

FAT is used because it’s simple to implement and widely supported, and on SD cards, seek times aren’t a problem so fragmentation has less of an effect on performance.

It’s not hard to conceive of a steganography technique for sharing a one-time pad which exploits this property: store some innocuous photos on an SD card, arranged so that the distribution of their 4kB clusters encodes the pad.  The one-time pad could be shared right under the noses of postal workers, unnoticed: when they plug the SD card into their computer, it will just show photos that look “normal”.  The one-time pad would reach its destination, then could be used for secret communications that could not be broken.
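To make the idea concrete, here’s a toy sketch in Python.  It doesn’t touch a real FAT file system (driving an allocator to realise a chosen cluster order is left entirely hypothetical); it merely shows that an ordering of n clusters can carry log2(n!) bits of secret, via the factorial number system (Lehmer code):

import math

def hide_in_cluster_order(secret: bytes, n: int) -> list[int]:
    """Encode a secret as a permutation of cluster indices 0..n-1.
    Toy only: a real scheme would make the file system lay a file's
    clusters out in this order."""
    value = int.from_bytes(secret, "big")
    assert value < math.factorial(n), "not enough clusters for this secret"
    pool, order = list(range(n)), []
    for i in range(n, 0, -1):
        digit, value = divmod(value, math.factorial(i - 1))
        order.append(pool.pop(digit))
    return order

def recover_from_cluster_order(order: list[int], length: int) -> bytes:
    pool, value = sorted(order), 0
    for i, cluster in enumerate(order):
        digit = pool.index(cluster)
        pool.pop(digit)
        value += digit * math.factorial(len(order) - i - 1)
    return value.to_bytes(length, "big")

pad = b"\x13\x37\xc0\xde"                 # 4 bytes of "one-time pad"
order = hide_in_cluster_order(pad, 32)    # 32 clusters: log2(32!) is ~117 bits
assert recover_from_cluster_order(order, len(pad)) == pad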

So, the upshot is that banning encryption will be useless, because such messages can easily be hidden from view even without encryption.

Then there’s the impact of these back doors.  The private keys to these back doors had better be very very VERY secure, because everyone’s privacy depends on them.  I mean EVERYONE.  Mr. Dutton included.

Bear in mind that the movie industry tried a similar approach to securing DVDs and Blu-ray discs.  It failed miserably.  The CSS encryption keys used on DVDs were discovered, and CSS was then found to be so weak it could be trivially brute-forced.  The HDCP master key protecting high-definition video links has likewise been discovered.

See, suppose a ban was imposed.  Things like this blog?  Okay, you’ll be hitting it over clear-text, the way it had been for a number of years… and for me to log in, I’d have to do so over plain-text HTTP.  I’d probably just update it from home, where I can use wired Ethernet to connect to the blog, so no real security issue there.  There is a code-injection problem for my few visitors though; it’d be nice to be able to digitally “sign” the page without encrypting it, to avoid that.  I guess if this became the reality, we’d be looking into it.

Internet banking and other “sensitive” activities would be a problem though.  I do have Internet banking now, but it’s thankfully on a separate account to my main savings, so if that got compromised, you wouldn’t get a lot of cash; however, identity theft is a very real risk.

Then there’s our workplaces.  My workplace happens to do work for Defence from time to time.  They look after the energy management systems on a few SE Queensland bases: Enoggera (Gallipoli Barracks), Amberley (yours truly interrogated the Ethernet switches to draw a map of that network, which I still have a few old copies of), Canungra, Oakey, … to name a few.

We rely on encryption to keep our remote access to those sites secure.  Take that away, and we either have to do all that work “in the clear”, or send people on site.  The latter is expensive, and in some cases, the people who have clearance to step on site don’t have all the domain knowledge, so they’ll be bringing others who are not cleared and “supervising” them.

Johnny Jihadist doesn’t have to break into a defence base, they just have to look on as a contractor “logs in”.  If the electrical and water meters on a site indicate minimal usage, then maybe the barracks are empty and they can strike.  You can actually infer a lot of information from the sorts of data collected by an EMS.  A scary amount.

So our national security actually depends on civilian encryption being as strong as government encryption.  Setting up 256-bit AES with 4096-bit RSA key agreement and authentication is a few clicks and is nearly impenetrable: back-door it, and it’s worthless.

Even if you break the encryption, there’s no guarantee that you’ll be able to find the message that you’re looking for.  Or you might just wind up harassing some poor teenager that uploaded a cute but grainy kitten photo because you thought the background noise in the JPEG was some sort of coded message.

I think if we’re going to get on top of national security issues, the answer is not to spy on each other, it’s to openly talk to each other.  Get to know those around you, and accept each other’s differences.  Colonel Klink didn’t have any luck with the iron fist approach, what makes today’s ministers think they are different?

Mitigating BREACH: a “what if”

So, recently there was a task at my work to review enabling gzip compression on our nginx HTTP servers.

Now, in principle it seemed like a good idea, but having been exposed to the security world a little, I was familiar with some of the issues with this, notably CRIME, BEAST and BREACH.  Of these, only BREACH is unmitigated at the browser end.

The suggested mitigations, in order of effectiveness are:

  1. Disabling HTTP compression
  2. Separating secrets from user input
  3. Randomizing secrets per request
  4. Masking secrets (effectively randomizing by XORing with a random secret per request; sketched below this list)
  5. Protecting vulnerable pages with CSRF protection
  6. Length hiding (by adding random number of bytes to the responses)
  7. Rate-limiting the requests
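Mitigation (4) is worth a sketch, since it’s the approach some web frameworks took for their CSRF tokens after BREACH was published.  The idea: never let the secret appear on the wire in a stable form, so the compression oracle has nothing constant to converge on.  A minimal illustration in Python:

import os

def mask_secret(secret: bytes) -> bytes:
    """Per-request masking: send pad || (pad XOR secret)."""
    pad = os.urandom(len(secret))
    return pad + bytes(a ^ b for a, b in zip(pad, secret))

def unmask_secret(masked: bytes) -> bytes:
    half = len(masked) // 2
    pad, xored = masked[:half], masked[half:]
    return bytes(a ^ b for a, b in zip(pad, xored))

token = b"csrf-token-123456"
assert unmask_secret(mask_secret(token)) == token
# Different wire bytes every request (with overwhelming probability),
# even though the underlying secret never changes:
assert mask_secret(token) != mask_secret(token)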

Now, we’ve effectively been doing (1) by default… but (2), (3) and (4) make me wonder how protocols like OAuth2 are supposed to work.  That got me thinking about a little toy I was given for attending the 2011 linux.conf.au… it’s a YubiKey, one of the early model ones.  The way it operates is that Yubico’s servers and your key share a secret AES key (I think it’s AES-128), some static data, and a counter.  Each time you generate a one-time password with the key, it increments its counter, encrypts the counter along with the static data, then encodes the output as a hexdump-like string using a keyboard-layout-agnostic encoding scheme (ModHex) to be “typed” into the computer.

Yubico receive this token, decrypt it, then compare the counter value.  If it checks out, and is greater than the existing counter value at their end, they accept it, and store that new counter value.

The same idea made me wonder if that could work for requests from a browser… that is, you agree on a shared secret over HTTPS, or using Diffie-Hellman.  You synchronise counters (either using your new shared secret, or over HTTPS at the same time as you establish the shared key), and from then on, each request the browser makes to your API is accompanied by a one-time token, generated by encrypting the counter value with the static data and sent in the HTTP headers.
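As a concrete sketch of that scheme (using the pycryptodome library; the 16-byte field layout and names here are my own assumptions, not Yubico’s actual OTP format):

import os
import struct

from Crypto.Cipher import AES  # pycryptodome

SHARED_KEY = os.urandom(16)    # agreed over HTTPS or Diffie-Hellman
STATIC_ID = b"BROWSER1"        # 8 bytes of static data (an assumption)

def make_token(counter: int) -> bytes:
    """Encrypt static data + counter into an opaque 16-byte token."""
    block = STATIC_ID + struct.pack(">Q", counter)  # 8 + 8 = 16 bytes
    return AES.new(SHARED_KEY, AES.MODE_ECB).encrypt(block)

def verify_token(token: bytes, last_counter: int):
    """Accept only if the static data matches and the counter advanced."""
    block = AES.new(SHARED_KEY, AES.MODE_ECB).decrypt(token)
    ident, counter = block[:8], struct.unpack(">Q", block[8:])[0]
    return counter if ident == STATIC_ID and counter > last_counter else None

token = make_token(42)
assert verify_token(token, last_counter=41) == 42    # fresh: accepted
assert verify_token(token, last_counter=42) is None  # replay: rejected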

There are a few libraries that do AES in the browser, such as JSAES (GPLv3) and aes-js (MIT).

This is going to be expensive to do, so a compromise might be to use this every N requests, where N is small enough that BREACH doesn’t have a sufficient number of requests from which it can derive a secret.  By the time it figures out that secret, the token is expired.  Or they could be bulk-generated at the browser end in the background so there’s a ready supply.

I haven’t gone through the full ins and outs of this, and I’m no security expert, but that’s just some initial thinking.

A few gripes with AliExpress

So, recently I bit the bullet and decided to sign up for an account with AliExpress.

So far, what I’ve bought from there has been clothing (unbranded stuff, not counterfeit) … while there’s some very cheap electronics there, I’m leery about the quality of some of it, preferring instead to spend a little more to buy through a more reliable supplier.

Basically, it’s a supplier of last resort: if I can’t buy something anywhere else, I’ll look here.

So far the experience has been okay.  The sellers so far have been genuine, and while the slow boat from China takes a while, it’s not that big a deal.

That said, it would appear the people who actually develop its back-end are a little clueless when it comes to matters of the Internet.

Naïve email address validation rules

Yes, they’re far from the first culprits, but it would seem perfectly compliant email addresses, such as foo+bar@gmail.com, are rejected as “invalid”.

News for you, AliExpress, and anyone else: You Can Put Plus Signs In Your Email Address!

Lots of SMTP servers and webmail providers support it, to quote Wikipedia:

Addresses of this form, using various separators between the base name and the tag, are supported by several email services, including Runbox (plus), Gmail (plus),[11] Yahoo! Mail Plus (hyphen),[12] Apple’s iCloud (plus), Outlook.com (plus),[13] ProtonMail (plus),[14] FastMail (plus and Subdomain Addressing),[15] MMDF (equals), Qmail and Courier Mail Server (hyphen).[16][17] Postfix allows configuring an arbitrary separator from the legal character set.[18]

You’ll note the ones that use other characters (e.g. MMDF, Yahoo, Qmail and Courier) are in the minority.  Postfix will let you pick nearly anything (within reason), all the others use the plus symbol.

Doing this means instead of using my regular email address, I can use user+secret@example.com — if I see a spoof email pretending to be from you sent to user@example.com, I know it is fake.  On the other hand, if I see someone else use user+secret@example.com, I know they got that email address from you.
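For illustration, a validation pattern that at least doesn’t reject legitimate plus-tagged addresses might look like the following sketch.  (The honest answer is that pattern-matching is the wrong tool; if you must check an address, attempt delivery.)

import re

# Deliberately permissive: RFC 5322 local parts allow far more than
# letters and digits, including "+" tags. Still a simplification of
# the full grammar.
LOCAL = r"[A-Za-z0-9!#$%&'*+/=?^_`{|}~.-]+"
DOMAIN = r"(?:[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?\.)+[A-Za-z]{2,}"
EMAIL_RE = re.compile("^" + LOCAL + "@" + DOMAIN + "$")

def is_plausible_email(addr: str) -> bool:
    return EMAIL_RE.match(addr) is not None

assert is_plausible_email("foo+bar@gmail.com")
assert is_plausible_email("user%innerhost@outerhost.net")
assert not is_plausible_email("not an address")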

Email validation is actually a lot more complex than most people realise… it’s gotten simpler with the advent of SMTP, but years ago, …server1!server2!server3!me was legitimate in the days of UUCP.  During the transition, server1!server2!server3!user@somesmtpserver.example.com was not unheard of either.  Or maybe user%innerhost@outerhost.net?  Again, all within the standards.

Protocol-relative URIs don’t work outside web browsers

This, I’ve reported to them before, but basically the crux of the issue is their message notification emails.  The following is a screenshot of an actual email received from AliExpress.

Now, it would not matter what the email client was.  In this case, it’s Thunderbird, but the same problem would exist for Eudora, Outlook, Windows Mail, Apple Mail, The Bat!, Pegasus Mail … or any other email client you care to name.  If it runs outside the browser, that URI is invalid.  Protocol-relative means you use the same protocol as the page the hyperlink exists on.

In this case, the “protocol” used to retrieve that “page” was IMAP; imap://msg.aliexpress.com is wrong.  So is pop3://msg.aliexpress.com.  The only place I can see this working is on webmail sites.
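Python’s urllib demonstrates the failure nicely: a protocol-relative reference simply inherits whatever scheme the base “page” context supplies.

from urllib.parse import urljoin

# Inside a mail client, the "page" context is an IMAP (or POP3) resource:
print(urljoin("imap://msg.aliexpress.com/INBOX", "//msg.aliexpress.com/login"))
# imap://msg.aliexpress.com/login  <- no browser can open this

# Only in webmail does the inherited scheme make sense:
print(urljoin("https://webmail.example.com/inbox", "//msg.aliexpress.com/login"))
# https://msg.aliexpress.com/login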

Clearly, someone needs a clue-by-four to realise that not everybody uses a web browser to browse email.

Weak password requirements

When I signed up, boy were they fussy about the password.  My standard passwords are gibberish with punctuation… something AliExpress did not like.  They do not allow anything except digits and letters, and you must choose between 6 and 20 characters.  Not even XKCD standards work here!

Again, they aren’t the only ones… Suncorp are another mob that come to mind (in fact, they’re even more “strict”: they only allow 8… this is for their Internet banking… in 2018).  Thankfully the one bank account I have Internet banking on is a no-fee account with bugger all cash in it… the one with my savings in it is a passbook account, and completely separate.  (To their credit though, they do allow + in an email address.  They at least got that right.)

I can understand the field having some limit… you don’t want to receive two Blu-ray discs’ worth of “password” every time a user authenticates… but geez… would it kill you to allow 50 characters?  Does your salted hashing algorithm (you are using salted hashes, aren’t you?) really care what characters you use?  Should you be using it if it does?  Once hashed, the output is a fixed width, ideal for a database, and Bobby Tables is going to be hard pushed to pick a password that will hash to “'; drop table users; --”.
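Case in point, a sketch with Python’s standard hashlib (scrypt needs a reasonably recent OpenSSL underneath): the digest is the same fixed width no matter how long or weird the password is.

import hashlib
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Salted scrypt hash: arbitrary password in, 64-byte digest out."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                            n=2**14, r=8, p=1, dklen=64)
    return salt, digest

for pw in ["hunter2",
           "correct horse battery staple",
           "'; drop table users; --",
           "pässwörd with ünïcode and punctuation!"]:
    salt, digest = hash_password(pw)
    print(len(digest))  # always 64, whatever went in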

By requiring these silly rules, they’ve actually forced me to use a weaker password.  The passwords I would have used on each site, had I been given the opportunity to pick my own, would have featured a much richer choice of characters, and thus been harder to break.  Instead, they’ve hobbled their own security.  Go team!

Reporting website issues is more difficult than it needs to be

Reporting a website issue is nigh-on impossible; hence the reason for this post.  Plenty of options exist if I want to pick a fight with a seller (I don’t), or if I think there’s an intellectual property issue (there isn’t).  I eventually did find a form, and maybe they’ll do something about it, but I’m not holding my breath.

Forget to whitelist a script, and you get sworn at, in Mandarin

This is a matter of “unhappy code paths” not receiving the attention they need.  In fact, there are a few places where they haven’t really debugged their l10n support properly, and so untranslated Alibaba text pops up.

Yeah, the way China is going with global domination, we might some day find ourselves having to brush up on our Mandarin, and maybe Cantonese too… but that day is not today.

Anyway, I think that more or less settles it for now.  I’ll probably find more to groan about, but I do need to get some sleep tonight and go to work tomorrow.

Solar Cluster: Beware, I’m ARMed

Last night, I got home, having made a detour on my way into work past Jaycar Wooloongabba to replace the faulty PSU.
It was a pretty open-and-shut case, we took it out of the box, plugged it in, and sure enough, no fan.  After the saleswoman asked the advice of a co-worker, it was confirmed that the fan should be running.
It took some digging, but they found a replacement, and so it was boxed up (in the box I supplied, they didn’t have one), and I walked out the door with PSU No. 3.
I had to go straight to work, so took the PSU with me, and that evening, I loaded it into the top box to transport home on the bicycle.
I get home, and it’s first thing on my mind.  I unlock the top box, get it out, and still decked out in my cycling gear, helmet and all (needed the headlight to see down the back of the rack anyway), I get to work.
I put the ring lugs on, plug it into the wall socket and flick the switch.
Nothing.
Toggle the switch on the front, still nothing.
Tried the other socket on the outlet, unplugging the load, still nothing.  Did the 10km trip from Milton to The Gap kill it?
Frustrated, I figure I’ll switch a light on.  Funny… no lights.
I wander into the study… sure enough, the router, modem and switch are dead as doornails.  Wander out to the MDB outside, saw the main breaker was still on, and tried hitting the test button.  Nothing.
I wander back inside, switching the bike helmet for my old hard hat, since it looks as if I’ll need the headlight a bit longer, then take a sticky beak down the road to see if anyone else is facing the same issue.
Sure enough, I look down the street, everyone’s out.
So there goes my second attempt at bootstrapping Gentoo, and my old server’s uptime.
The power did return about an hour or so later.  The PSU was fine; you just don’t think of the mains being out as the cause of your problems.
I’ll re-start my build, but I’m not going to lose another build to failing power.  Nope, had enough of that for a joke.
I could have rigged up a UPS to the TS-7670, but I already have one, and it’s in the very rack where it’ll get installed anyway.  Thus, no time like the present to install it.
I’ll have to configure the switch to present the right VLANs to the TS-7670, but once I do that, it’ll be able to take over the role of routing between the management VLAN and the main network.
I didn’t want to do this in a VM because that means exposing the hosts and the VMs to the management VLAN, meaning anyone who managed to compromise a host would have direct access to the BMCs on the other nodes.
This is not a network with high bandwidth demands, and so the TS-7670 with its 100Mbps Ethernet (built into the SoC; not via USB) is an ideal machine for this task.
Having done this, all that’s left to do is to create a 2GB dual-core VM which will receive the contents of the old server, then that server can be shut down, after 8 years of good service.  I’ll keep it around for storing the on-site backups, but now I can keep it asleep and just wake it up with Wake-on-LAN when I want to make a back-up.
This should make a dint in our electricity bill!
Other changes…

  • Looks like we’ll be upgrading the solar with the addition of another 120W panel.
  • I will be hooking up my other network switches, the ADSL router and ADSL modem up to the battery bank on the cluster, just got to get some suitable cable for doing so.
  • I have no faith in this third PSU, so already, I have a MeanWell HEP-600C coming.  We’ll wire up a suicide lead to it, and that can replace the Powertech MP-3089 + Redarc BCDC1225, as the MeanWell has a remote on/off feature I can use to control it.

Modern web design, WTF is going on?

So, over the last few years we’ve seen a big shift in the way websites operate.

Once upon a time, JavaScript was a nice-to-have, and you as a web developer had better be prepared for it to not be functional; the DOM was non-existent, and we were ooohing and ahhing over the de facto standard in Internet multimedia: Macromedia Flash.  The engine we now call WebKit was still a primitive and quite basic renderer called KHTML in a little-known browser called Konqueror.  Mozilla didn’t exist as an open-source project yet; it was Netscape and Microsoft duelling it out.

Back then, XMLHttpRequest was so new it wasn’t a standard yet; Microsoft had implemented the idea as an ActiveX control in IE5, and no one else had it.  So if you wanted to update a page, you had to re-load the whole lot and render it server-side.  We had just shaken off our FONT tags for CSS (thank god!), but if you wanted to make an image change as the mouse cursor hovered over it, you still needed those onmouseover/onmouseout event handlers to swap the image.  Ohh, and scalable graphics?  Forget it.  Render as a GIF or JPEG and hope you picked the resolution right.

And bear in mind, the expectation was that, a user running an 800×600 pixel screen resolution, and connected via a 28.8kbps dial-up modem, should be able to load your page up within about 30 seconds, and navigate without needing to resort to horizontal scroll bars.  That meant images had to be compressed to be no bigger than 30kB.

That was 17 years ago.  Man I feel old!

This gets me thinking… today, the expectation is that your Internet connection is at least 256kbps.  Why then do websites take so long to load?

It seems our modern web designers have forgotten the art of how to pack down a website to minimise the amount of data needed to be transmitted so that the page is functional.  In this modern age of “pretty” web design, we’ve forgotten how to make a page practical.

Today, if you want to show an icon on a page and have it fill the entire browser window, you can fire up Inkscape or Adobe Illustrator, let the creative juices flow and voilà, out pops a scalable vector graphic which can be dropped straight into your HTML.  Turn on gzip compression on the web server, and that graphic will be on that 28.8kbps user’s screen in under 3 seconds, and can still be as big as they want.
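The arithmetic checks out: 28.8kbps is roughly 3.6kB/s, so “under 3 seconds” allows around 10kB of compressed data, and repetitive SVG mark-up is exactly what DEFLATE eats for breakfast.  A quick sketch (the icon itself is hypothetical):

import gzip

# A deliberately repetitive SVG "icon":
svg = ('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1000 1000">'
       + ''.join(f'<circle cx="{i}" cy="{i}" r="40" fill="#09c"/>'
                 for i in range(0, 1000, 10))
       + '</svg>')
packed = gzip.compress(svg.encode())
print(f"{len(svg)} bytes raw, {len(packed)} bytes compressed")
print(f"{len(packed) / (28_800 / 8):.2f} s at 28.8kbps")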

If you want to make a page interactive, there’s no need to reload the entire page; XMLHttpRequest is now a W3C standard, and implemented in all the major browsers.  WebSockets mean an end to any kind of polling; you can get updates as they happen.

It seems silly, but in spite of all the advancements, website page loads are not getting faster, they’re getting slower.  The “everybody has broadband” and “everybody has full-HD screens” argument is being used as an excuse for bloat and sloppy design practices.

More than once I’ve had to point someone to the horizontal scroll bar because the web designer failed to test their website at the rather common 1366×768 screen resolution of a typical laptop.  If I had a dollar for every time that’s happened in the last 12 months, I’d be able to buy the offending companies out and sack the web designers responsible!

One of the most annoying, from a security perspective, is the proliferation of “content distribution networks”.  It seems they’ve realised these big bulky blobs of JavaScript take a long time to load even on fast links.  So, what do the bright sparks do?  “I know… instead of loading it from one server, I’ll put it on 10 and increase my upload capacity 10-fold!”  Yes, they might have 1Gbps on each host.  1Gbps × 10 = 10Gbps, so the page will load at 10Gbps, right?

Cue sad tuba sound effect.

At my workplace, we have a 20Mbps Ethernet (not ADSL2+, fibre or cable; Ethernet) link to the Internet.  On that link, I’ve been watching the web get slower and slower… and I do not think our ISP is entirely to blame, as I see the same issue at home too.  One place where we feel the pain a lot is Atlassian’s software, particularly Jira and Confluence.  To give you an idea of how deeply they drink the CDN Kool-Aid, check out the number of sites I have to whitelist just to get the page functional:

Atlassian’s JIRA… failing in spite of a crapton of scripts being loaded.

That’s 17 different hosts my web browser must make contact with, and download content from, before the page will function.  17 separate HTTP connections, which must fight with all other IP traffic on that 20Mbps Ethernet link for bandwidth.  20Mbps is the maximum that any one connection will do, and I can guarantee it will not reach even half that!

Interestingly, despite allowing all those scripts to load, they still failed to come up with the goods after a pregnant pause.  So the extra thrashing of the link was for naught.  Then there are the security implications.

At least 3 of those are hosts that Atlassian do not control.  If someone compromised ravenjs.com, for example, they could inject any JavaScript they want into the JIRA site and take control of a user’s account.  Atlassian are relying on these third parties’ promises and security practices to ensure their site stays secure, and stays in their (the third parties’) control.  Suppose someone forgets to renew a domain registration; the result could be highly embarrassing!

So, I’m left wondering what they teach these days.  For a multitude of reasons, sites should be blazingly quick to load: modern techniques ought to permit vastly improved efficiency of content representation and delivery, and network link speeds are steadily improving.  Yet the reverse seems true… why are we failing so badly?

Digital Emergency comms ideas

Yesterday’s post was rather long, but was intended for mostly technical audiences outside of amateur radio.  This post serves as a brain dump of volatile memory before I go to sleep for the night.  (Human conscious memory is more like D-RAM than one might realise.)

Radio interface

So, many in our group use packet radio TNCs already, with a good number using the venerable Kantronics KPC3.  These have a DB9 port that connects to the radio, and a second DB25 RS-232 port that connects to the computer.

My proposal: we make an audio interface that either plugs into that DB9 port and re-uses the interface cables we already have, or directly into the radio’s data port.

This should connect to an audio interface on the computer.

For EMI’s sake, I’d recommend a USB sound dongle like this, or these, or this as that audio interface.  I looked on Jaycar and did see this one, which would also work (and burn a hole in your wallet!).

If you walk in and the asking price is more than $30, I’d seriously consider these other options.  Of those options, U-Mart are here in Brisbane; go to their site, order a dongle then tell the site you’ll come and pick it up.  They’ll send you an email with an order number when it’s ready, you just need to roll up to the store, punch that number into a terminal in the shop, then they’ll call your name out for you to collect and pay for it.

Scorptec are in Melbourne, so you’ll have to have items shipped, but are also worth talking to.  (They helped me source some bits for my server cluster when U-Mart wouldn’t.)

USB works over two copper pairs; one delivers +5V and 0V, the other is a differential pair for data.  In short, the USB link should be pretty immune from EMI issues.

At worst, you should be able to deal with it with judicious application of ferrite beads to knock down the common mode current and using a combination of low-ESR electrolytic and ceramic capacitors across the power rails.

If you then keep the analogue cables as short as absolutely possible, you should have little opportunity for RF to get in.

I don’t recommend the Tigertronics SignaLink interfaces; they use cheap and nasty isolation transformers that lead to serious performance issues.

Receive audio

For the receive audio, we take the audio from the radio and feed it via a potentiometer to the tip of a 3.5mm TRS (“phono”) plug, with the sleeve going to common.  This plugs into the Line-In or Microphone input on the sound device.

Push to Talk and Transmit audio

I’ve bundled these together for a good reason.  The conventional way for computers to drive PTT is via an RS-232 serial port.

We can do that, but we won’t unless we have to.

Unless you’re running an original Sound Blaster card, your audio interface is likely stereo.  We can get PTT control via an envelope detector forming a minimal-latency VOX control.

Another 3.5mm TRS plug connects to the “headphone” or “line-out” jack on our sound device and breaks out the left and right channels.

The left and right channels from the sound device should be fed into the “throw” contacts on two single-pole double-throw toggle switches.

The select pin on each switch (mechanically operated by the toggle handle) is thus used to select the left or right channel.

One switch’s select pin feeds into a potentiometer, then to the radio’s input.  We will call that the “modulator” switch; it selects which channel “modulates” our audio.  We can again adjust the gain with the potentiometer.

The other switch first feeds through a small Schottky diode then across a small electrolytic capacitor (to 0V) then through a small resistor before finally into the base of a small NPN signal transistor (e.g. BC547).  The emitter goes to 0V, the collector is our PTT signal.

This is the envelope detector we all know and love from our old experiments with crystal sets.  In theory, we could hook a speaker and a power source up to the collector and listen to AM radio stations, but in this case, we’ll be sending a tone down this channel to turn the transistor, and thus our PTT, on.

The switch feeding this arrangement we’ll call the “PTT” switch.

By using this arrangement, we can use either audio channel for modulation or PTT control, or we can use one channel for both.  1200-baud AFSK, FreeDV, etc, should work fine with both on the one channel.

If we just want to pass through analogue audio, then we probably want modulation separate, so we can hold the PTT open during speech breaks without having an annoying tone superimposed on our signal.
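One way to test the arrangement is to generate a stereo file where the left channel carries test audio and the right carries the keying tone.  A sketch using Python’s standard library (both tone frequencies are arbitrary assumptions):

import math
import struct
import wave

RATE = 48_000
TONE_HZ = 1_000      # PTT keying tone (assumed)
TEST_HZ = 700        # test audio on the modulating channel (assumed)

frames = bytearray()
for n in range(RATE * 2):                                      # two seconds
    t = n / RATE
    left = int(12_000 * math.sin(2 * math.pi * TEST_HZ * t))   # modulator
    right = int(20_000 * math.sin(2 * math.pi * TONE_HZ * t))  # PTT tone
    frames += struct.pack("<hh", left, right)

with wave.open("ptt_test.wav", "wb") as w:
    w.setnchannels(2)      # left = audio, right = PTT
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(bytes(frames))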

It may be prudent to feed a second resistor into the base of that NPN, running off to the RTS pin on an RS-232 interface.  This will let us use software that relies on RS-232 PTT control, which can be added by way of a USB-RS232 dongle.

The cheap Prolific PL-2303 ones sold by a few places (including Jaycar) will work for this.  (If your software expects a 16550 UART interface on port 0x3f8 or similar, consider running it in a virtual machine.)

Ideally though, this should not be needed, and if added, can be left disconnected without harm.

Software

There are a few “off-the-shelf” packages that should work fine with this arrangement.

AX.25 software

AGWPE on Windows provides a software TNC.  On Linux, there’s soundmodem (which I have used, and presently mirror) and Direwolf.

These shouldn’t need a separate PTT channel; it should be sufficient to make the preamble long enough to engage the PTT, relying on the envelope detector recognising the packet.

Digital Voice

FreeDV provides an open-source digital voice platform for Windows, Linux and MacOS X.

This tool also lets us send analogue voice.  Digital voice should be fine: the first frame might get lost, but as a frame is 40ms, we just wait a moment before we start talking, like we would on regular analogue radio.

For the analogue side of things, we would want tone-driven PTT.  I’m not sure if that’s supported, but hey, we’ve got the source code, and yours truly has worked with it; it shouldn’t be hard to add.

Slow-scan television

The two to watch here would be QSSTV (Linux) and EasyPal (Windows).  QSSTV is open-source, so if we need to make modifications, we can.

I’m not sure who maintains EasyPal these days; it’s no longer Eric VK4AES, as he’s sadly no longer with us (RIP, and thank you).  Here, we might need an RS-232 PTT interface, which, as discussed, is not a hard modification.

Radioteletype

Most of this is covered by FLDigi.  Modes with a fairly consistent duty cycle will work fine with the VOX PTT, and once again, we have the source, so we can make others work.

Custom software ideas

So we can use a few off-the-shelf packages to do basic comms.

  • We need auditability of our messaging system.  For analogue FM, we can just use a VOX-like function on the computer to record individual received messages, and to record outgoing traffic.  Text messages and files can be logged.
  • Ideally, we should have some digital signing of logs to make them tamper-resistant.  Then we can mathematically prove what was sent.
  • In a true emergency, it may be necessary to encrypt what we transmit.  This is fine: we’re allowed to do this in such cases, and we can always turn over our audited logs to the authorities anyway.
  • Files will be sent as blocks which are forward-error-corrected (or forward-erasure-coded).  We can use a block cipher such as AES-256 to encrypt these blocks before FEC.  OpenPGP would work well here rather than doing it from scratch; just send the OpenPGP output using FEC blocks.  It should be possible to pick out the symmetric key at the receiving end for auditing, which would be done if asked for by Government.  DIY is not necessary; the building blocks are there.
  • Digital voice is a stream; we could use block ciphers, but that introduces latency and there’s always the issue of bit errors.  Stream ciphers, on the other hand, work by generating a key stream, then XOR-ing that with the data.  So long as we can keep sync in the face of bit errors, use of a stream cipher should not impair noise immunity (see the sketch after this list).
  • Signal fade is a worse problem; I suggest a cleartext (3-bit? 4-bit?) Gray-code sync field for synchronisation.  The receiver can time the length of a fade, estimate the number of lost frames, then use the field to re-sync.
  • There are more than a dozen stream ciphers to choose from.  Some promising ones are ACHTERBAHN-128, Grain 128a, HC-256, Phelix, Py, the Salsa20 family, SNOW 2/3G, SOBER-128, Scream, Turing, MUGI, Panama, ISAAC and Pike.
  • Most (all?) stream ciphers are symmetric.  We would have to negotiate/distribute a key somehow, either use Diffie-Hellman or send a generated key as an encrypted file transfer (see above).  The key and both encrypted + decrypted streams could be made available to Government if needed.
  • The software should be capable of:
    • Real-time digital voice (encrypted and clear; the latter being compatible with FreeDV)
    • File transfer (again, clear and encrypted using OpenPGP, and using good FEC, files will be cryptographically signed by sender)
    • Voice mail and SSTV, implemented using file transfer.
    • Radioteletype modes (perhaps PSK31, Olivia, etc), with logs made.
    • Analogue voice pass-through, with recordings made.
    • All messages logged and time-stamped, received messages/files hashed, hashes cryptographically signed (OpenPGP signature)
    • Operation over packet networks (AX.25, TCP/IP)
    • Standard message forms with some basic input validation.
    • Ad-hoc routing between interfaces (e.g. SSB to AX.25, AX.25 to TCP/IP, etc) should be possible.
  • The above stack should ideally work on low-cost single-board computers that are readily available and are low-power.  Linux support will be highest priority, Windows/MacOS X/BSD is a nice-to-have.
  • GNU Radio has building blocks that should let us do most of the above.
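On the stream-cipher point above, a quick sketch (using the Salsa20 implementation in the pycryptodome library) shows the property we care about: a single bit error in the ciphertext damages exactly one bit of the recovered frame, so noise immunity is preserved as long as the keystream stays in sync.

import os

from Crypto.Cipher import Salsa20  # pycryptodome

key, nonce = os.urandom(32), os.urandom(8)
frame = b"one 40ms digital voice frame...."

ciphertext = Salsa20.new(key=key, nonce=nonce).encrypt(frame)

# Simulate a single bit error on the radio channel:
corrupted = bytearray(ciphertext)
corrupted[5] ^= 0x01

recovered = Salsa20.new(key=key, nonce=nonce).decrypt(bytes(corrupted))
damaged = [i for i in range(len(frame)) if frame[i] != recovered[i]]
print(damaged)  # [5] -- the error does not propagate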

Emergency communications considerations

So, there’s been a bit of discussion lately about our communications infrastructure. I’ve been doing quite a bit of thinking about the topic.

The situation today

Here in Australia, a lot of people are being moved over to the National Broadband Network… with the analogue fixed line phone (if it hasn’t disappeared already) being replaced with a digital service.

For many, their cellular “mobile” phone is their only means of contact. Far more than the over-glorified two-way radios that were the pre-cellular car phones used by the social elite in the early 70s, or the slightly more sophisticated and tennis-elbow-inducing AMPS hand-held mobile phones we saw in the 80s, mobile phones today are truly versatile and powerful hand-held computers.

In fact, they are more powerful than the teen-aged computer I am typing this on. (And yes, I have upgraded it; 1GB RAM, 250GB mSATA SSD, Linux kernel 4.0… this 2GHz P4 still runs, and yes I’ll update that kernel in a moment. Now, how’s that iPhone 3G going, still running well?)

All of these devices are able to provide data communications throughput in the order of millions of bits per second, and outside of emergencies, are generally, very reliable.

It is easy to forget just how much needs to work properly in order for you to receive that funny cat picture.

Mobile networks

One thing that is not clear about the NBN is what happens when the power is lost. The electricity grid is not infallible, and requires regular maintenance, so while reliability is good, it is not guaranteed.

For FTTP users, battery backup is an optional extra. If you haven’t opted in, then your “land line” goes down when the power goes out.

This is not a fact that people think about. Most will say, “that’s fine, I’ve got my mobile” … but do you? The typical mobile phone cell tower has several hours of battery back-up, and can be overwhelmed by traffic even in non-emergencies. They are fundamentally engineered to a cost, so compromises are made on how long they can run without back-up power, and on how much call capacity they carry.

In the 2008 storms that hit The Gap, I had no mobile telephone coverage for 2 days. My Nokia 3310 would occasionally pick up a signal from a tower in a neighbouring suburb such as Keperra, Red Hill or Bardon, and would thus occasionally receive the odd text message… but rarely could muster the effective radiated power to be able to reply back or make calls. (Yes, and Nokia did tell me that internal antennas surpassed the need for external ones. A 850MHz yagi might’ve worked!)

Emergency Services

Now, you tell yourself, “Well, the emergency services have their own radios…”, and this is correct. They do have their own radio networks, and these too are generally quite reliable, but they have their problems. The Emergency Alerting System employed in Victoria was having capacity problems as far back as 2006 (emphasis mine):

A high-priority project under the Statewide Integrated Public Safety Communications Strategy was establishing a reliable statewide paging system; the emergency alerting system. The EAS became operational in 2006 at a cost of $212 million. It provides coverage to about 96 per cent of Victoria through more than 220 remote transmitter sites. The system is managed by the Emergency Services Telecommunications Agency on behalf of the State and is used by the CFA, VICSES and Ambulance Victoria (rural) to alert approximately 37,400 personnel, mostly volunteers, to an incident. It has recently been extended to a small number of DSE and MFB staff.

Under the EAS there are three levels of message priority: emergency, non-emergency, and administrative. Within each category the system sends messages on a first-in, first-out basis. This means queued emergency messages are sent before any other message type and non-emergency messages have priority over administrative messages.

A problem with the transmission speed and coverage of messages was identified in 2006. The CFA expressed concern that areas already experiencing marginal coverage would suffer additional message loss when the system reached its limits during peak events.

To ensure statewide coverage for all pagers, in November 2006 EAS users decided to restrict transmission speed and respond to the capacity problems by upgrading the system. An additional problem with the EAS was caused by linking. The EAS can be configured to link messages by automatically sending a copy of a message to another pager address. If multiple copies of a message are sent the overall load on the system increases.

By February 2008 linking had increased by 25 per cent.

During the 2008 windstorm in Victoria the EAS was significantly short of delivery targets for non-emergency and administrative messages. The Emergency Services Telecommunications Agency subsequently reviewed how different agencies were using the system, including their message type selection and message linking. It recommended that the agencies establish business rules about the use of linking and processes for authorising and monitoring de-linking.

The planned upgrade was designed to ensure the EAS could cope better with more messages without the use of linking.

The upgrade was delayed several times and rescheduled for February 2009; it had not been rolled out by the time of Black Saturday. Unfortunately this affected the system on that day, after which the upgrade was postponed indefinitely.

I can find mention of this upgrade taking place around 2013. From what I gather, it did eventually happen, but it took a roasting from Mother Nature to make it happen. The lesson here is that even purpose-built networks can fall over, thus particularly in major incidents, it is prudent to have a back-up plan.

Alternatives

For the lay person, CB radio can be a useful tool for short-range (longer-than-yelling-range) voice communications. UHF CB will cover a few kilometres in urban environments and can achieve quite long distances if good line-of-sight is maintained. They require no apparatus license, and are relatively inexpensive.

It is worth having a couple of cheap ones, a small torch and a packet of AAA batteries (stored separately) in the car or in a bag you take with you. You can’t use them if they’re in a cupboard at home and you’re not there.

The downside with the hand-helds, particularly the low end ones, is effective radiated power. They will have small “rubber ducky” antennas, optimised for size, and will typically have limited transmit power, some can do the 5W limit, but most will be 1W or less.

If you need a bit more grunt, a mobile UHF CB set and magnetic mount antenna could be assembled and fitted to most cars, and will provide 5W transmit power, capable of about 5-10km in good conditions.

HF (27MHz) CB can go further, and with 12W peak envelope power, it is possible to get across town with one, even interstate or overseas when conditions permit. These too, are worth looking at, and many can be had cheaply second-hand. They require a larger antenna however to be effective, and are less common today.

Beware of fakes though… A CB radio must meet “type approval”, just being technically able to transmit in that band doesn’t automatically make it a CB, it must meet all aspects of the Citizens Band Radio Service Class License to be classified a CB.

If it does more than 5W on UHF, it is not a UHF CB. If it mentions a transmit range outside of 476-478MHz, it is not a UHF CB.  Programming it to do UHF channels doesn’t change this.

Similarly, if your HF CB radio can do 26MHz (NZ CB, not Australia), uses FM instead of SSB/AM (UK CB, again not Australia), does more than 12W, or can do 28-30MHz (10m amateur), it doesn’t qualify as being a CB under the class license.

Amateur radio licensing

If you’ve got a good understanding of high-school mathematics and physics, then a Foundation amateur radio license is well within reach.  In fact, I’d strongly recommend it for anyone doing first-year Electrical Engineering, as it will give you a good practical grounding in electrical theory.

Doing so, you get to use up to 10W of power (double what UHF CB gives you; 3dB can matter!) and access to four HF, one VHF and one UHF band using analogue voice or hand-keyed Morse code.

You can then use those “CB radios” that sell on eBay/DealExtreme/BangGood/AliExpress…etc, without issue, as being un-modified “commercial off-the-shelf”, they are acceptable for use under the Foundation license.

Beyond Voice: amateur radio digital modes

Now, all good and well being able to get voice traffic across a couple of suburban blocks. In a large-scale disaster, it is often necessary to co-ordinate recovery efforts, which often means listings of inventory and requirements, welfare information, etc, needs to be broadcast.

You can broadcast this by voice over radio… very slowly!

You can put a spreadsheet on a USB stick and drive it there. You can deliver photos that way too. But during an emergency event, roads may be impassable, or they may be congested. If the regular communications channels are down, how does one get such files across town quickly?

Amateur radio requires operators who have undergone training and hold current apparatus licenses, but this service does permit the transmission of digital data (for standard and advanced licensees), with encryption if needed (“intercommunications when participating in emergency services operations or related training exercises”).

Amateur radio is by its nature, experimental. Lots of different mechanisms have been developed through experiment for intercommunication over amateur radio bands using digital techniques.

Morse code

The oldest by far is commonly known as “Morse code”, and while it is slower than voice, it requires simpler transmitting and receiving equipment, and concentrates the transmitted power over a very narrow bandwidth, meaning it can be heard reliably at times when more sophisticated modes cannot. However, not everybody can send or receive it (yours truly included).

I won’t dwell on it here, as there are more practical mechanisms for transmitting lots of data, but have included it here for completeness. I will point out though, due to its simplicity, it has practically no latency, thus it can be faster than SMS.

Radio Teletype

Okay, there are actually quite a few modes that can be described in this manner, and I’ll use this term to refer to the family of modes. Basically, you can think of it as two dumb terminals linked via a radio channel. When you type text into one, that text appears on the other in near real-time. The latency is thus very low, on par with Morse code.

The earliest of these is the RTTY mode, but more modern incarnations of the same idea include PSK31.

These are normally used as-is. With some manual copying and pasting of pieces of text at each end, it is possible to encode other forms of data as short runs of text and send files in short hand-crafted “packets”, which are then hand-deconstructed and decoded at the far end.

This can be automated to remove the human error component.

The method is slow, but these radioteletype modes are known for being able to “punch through” poor signal conditions.

When I was studying web design back in 2001, we were instructed to keep all photos below 30kB in size. At the time, dial-up Internet was common, and loading times were a prime consideration.

Thus instead of posting photos like this, we had to shrink them down, like this. Yes, some detail is lost, but it is good enough to get an “idea” of the situation.

The former photo is 2.8MB, the latter is 28kB. Via the above contrived transmission system, it would take about 20 minutes to transmit.
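As a sanity check on that figure (the effective throughput here is my own assumption, in the ballpark of the faster radioteletype modes):

size_bytes = 28 * 1024            # the 28kB photo
rate_bytes_per_sec = 24           # assumed effective throughput
minutes = size_bytes / rate_bytes_per_sec / 60
print(f"{minutes:.1f} minutes")   # ~19.9 minutes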

The method would work well for anything that is text, particularly simple spread sheets, which could be converted to Comma Separated Values to strip all but the most essential information, bringing file sizes down into realms that would allow transmission times in the order of 5 minutes. Text also compresses well, thus in some cases, transmission time can be reduced.

To put this into perspective, a drive from The Gap where that photo was taken, into the Brisbane CBD, takes about 20 minutes in non-peak-hour normal circumstances. It can take an hour at peak times. In cases of natural disaster, the roads available to you may be more congested than usual, thus you can expect peak-hour-like trip times.

Radio Faximile and Slow Scan Television

This covers a wide variety of modes, ranging from the ancient, like Hellschreiber (which saw use by the German military back in World War II), through various analogue slow-scan television modes, to modern digital slow-scan television.

This allows the transmission of photos and visual information over radio. Some systems like EasyPAL and its ilk (based on HamDRM, a variant of Digital Radio Mondiale) are, in fact, general-purpose modems for transmitting files, and thus can transmit non-graphical data too.

Transmit times can vary, but the analogue modes take between 30 seconds and two minutes depending on quality. For the HamDRM-based systems, transmit speeds vary between 86Bps and 795Bps depending on the settings used.

Packet Radio

Packet radio is the concept of implementing packet-switched networks over radio links. There are various forms of this, the most common in amateur radio being PACTOR, WINMOR, and the 300-baud AFSK, 1200-baud AFSK and 9600-baud FSK packet modes.

300-baud AFSK is normally used on HF links, and hails from experiments using surplus Bell 103 modems modified to work with radio. Similarly, on VHF and UHF FM radio, experiments were done with surplus Bell 202 modems, giving rise to the 1200-baud AFSK mode.

The 9600-baud FSK mode was the invention of James Miller G3RUH, and was one of the first packet radio modes actually developed by radio amateur operators for use on radio.

These are all general-purpose data modems, and while they can be used for radioteletype applications, they are designed with computer networking in mind.

These feature facilities like automatic retransmission of lost messages, and in some cases support forward error correction. PACTOR and WINMOR are actually used with the Winlink radio network, which provides email services.

The 300-baud, 1200-baud and 9600-baud versions generally use a networking protocol called AX.25. By configuring a station with multiple such “terminal node controllers” (modems) connected and appropriate software, it can operate as a router, relaying traffic received via one radio channel to a station connected via another, or to non-AX.25 stations on Winlink or the Internet.

It is well suited to automatic stations, operating without human intervention.
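For the curious, the AX.25 framing itself is simple enough to sketch in a few lines of Python. The encoding below (callsigns shifted left one bit, control byte 0x03 for a UI frame, PID 0xF0 for “no layer 3”) follows the published AX.25 specification; the callsigns are placeholders, and in practice the TNC adds the HDLC flags and frame checksum:

    def ax25_address(callsign, ssid=0, last=False):
        """Encode one address field: six characters, each shifted left one
        bit, then an SSID byte whose low bit marks the end of the field."""
        field = bytearray((ord(c) << 1) for c in callsign.ljust(6)[:6].upper())
        field.append(0x60 | ((ssid & 0x0F) << 1) | (1 if last else 0))
        return bytes(field)

    def ax25_ui_frame(dest, src, info, digis=()):
        """Build an AX.25 UI (unnumbered information) frame body."""
        addrs = [ax25_address(dest), ax25_address(src, last=not digis)]
        for i, digi in enumerate(digis):
            addrs.append(ax25_address(digi, last=(i == len(digis) - 1)))
        return b"".join(addrs) + b"\x03\xf0" + info

    # A beacon from a placeholder station, relayed via one digipeater.
    frame = ax25_ui_frame("CQ", "N0CALL", b"Test message", digis=("WIDE1",))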

AX.25 packet and PACTOR I are open standards; the later PACTOR modems are proprietary devices produced by SCS in Germany.

AX.25 packet is capable of transmit speeds between 15Bps (300 baud) and 1kBps (9600 baud). PACTOR varies between 5Bps and 650Bps.

In theory, it is possible to develop new modems for transmitting AX.25, the HamDRM modem used for slow-scan television and the FDMDV modem used in FreeDV being good starting points as both are proven modems with good performance.

These simply require an analogue interface between the computer sound card and radio, and appropriate software.  Such an interface made to link a 1200-baud TNC to a radio could be converted to link to a low-cost USB audio dongle for connection to a computer.

If someone is set up for 1200-baud packet, setting up for these other modes is not difficult.

High speed data

Going beyond standard radios, amateur radio also has some very high-speed data links available. D-Star Digital Data operates on the 23cm microwave band and can potentially transmit files at up to 16KBps, which approaches ADSL-lite speeds. Transceivers such as the Icom ID-1 provide this via an Ethernet interface for direct connection to a computer.

General Electric have a similar offering for industrial applications that operates on various commercial bands, some of which can reach amateur frequencies, thus would be usable on amateur bands. These devices offer transmit speeds up to 8KBps.

A recent experiment by amateurs using off-the-shelf 50mW 433MHz FSK modules and Realtek-based digital TV tuner receivers produced a high-speed data link capable of delivering data at up to 14KBps using a wideband (~230kHz) radio channel on the 70cm band.  They used it to send high-definition photos from a high-altitude balloon.

The point?

We’ve got a lot of tools for getting a message through, and collectively, 140 years of experience at our disposal. In an emergency situation, that means we have a lot of different options: if one doesn’t work, we can try another.

No, a 1200-baud VHF packet link won’t stream 4k HD video, but it has minimal latency and will take less than 20 minutes to transmit a 100kB file over distances of 10km or more.
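The arithmetic behind that claim is easy to check; the 70% efficiency figure below is an assumption of mine to account for AX.25 framing overhead and channel turnaround:

    FILE_SIZE = 100 * 1024   # bytes
    RAW_RATE = 1200 / 8.0    # 1200 bits/second -> 150 bytes/second
    EFFICIENCY = 0.7         # assumed overhead factor

    minutes = FILE_SIZE / (RAW_RATE * EFFICIENCY) / 60.0
    print("~%.0f minutes" % minutes)  # ~16 minutes, under the 20-minute figure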

A 1kB email will be at the other end before you can reach for your car keys.  Further experimentation and development means we can only improve.  Amateur radio is far from obsolete.

It is time we stopped trusting OEM-bundled operating systems

Some time back, Lenovo made the news with the Superfish fiasco.  Superfish was a piece of software that intercepted HTTPS connections by way of a trusted root certificate installed on the machine.  When the software detected a browser attempting to make a HTTPS connection, it would intercept it and connect on that software’s behalf.

When Superfish negotiated the connection, it would then generate on-the-fly a certificate for that website which it would then present to the browser.  This allowed it to spy on the web page content for the purpose of advertising.

Now Dell have been caught shipping an eDellRoot certificate on some of its systems.  Both laptops and desktops are affected.  This morning I checked the two newest computers in our office, both Dell XPS 8700 desktops running Windows 7.  Both had been built on the 13th of October, and shipped to us.  They both arrived on the 23rd of October, and they were both taken out of their boxes, plugged in, and duly configured.

I pretty much had two monitors and two keyboards in front of me, performing the same actions on both simultaneously.

Following configuration, one was deployed to a user, the other was put back in its box as a spare.  This morning I checked both for this certificate.  The one in the box was clean, the deployed machine had the certificate present.

[Screenshot: Dell’s dodgy certificate in action]

How do you check on a Dell machine?

A quick way is to hit Logo+R (Logo = “Windows Key”, “Command Key” on Mac, or whatever it is on your keyboard, some have a penguin), then type certmgr.msc and press ENTER. Under “Trusted Root Certification Authorities”, look for “eDellRoot”.
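If you’d rather script the check, the Python standard library on Windows can enumerate the system’s trusted root store. This is a crude sketch: it simply looks for the “eDellRoot” subject string in each DER-encoded certificate rather than parsing the subject properly:

    import ssl

    # ssl.enum_certificates() is Windows-only; "ROOT" is the trusted root
    # certificate store that certmgr.msc displays.
    found = any(b"eDellRoot" in der
                for der, encoding, trust in ssl.enum_certificates("ROOT"))
    print("eDellRoot present!" if found else "eDellRoot not found.")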

Another way, using IE or Chrome, is to visit one of the test websites set up to check for the certificate.

(Don’t use Firefox: it has its own certificate store, thus isn’t affected.)

Removal

Apparently just deleting the certificate causes it to be re-installed after reboot.  qasimchadhar posted some instructions for removal; I’ll be trying these shortly:

You get rid of the certificate by performing the following actions:

  1. Stop and disable the Dell Foundation Services service
  2. Delete eDellRoot CA registry key here
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\98A04E4163357790C4A79E6D713FF0AF51FE6927
  3. Then reboot and test.
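Those steps could be scripted along these lines in Python (run as Administrator). Note the internal service name here is an assumption on my part and may differ from the display name, so check it in services.msc first:

    import subprocess
    import winreg

    SERVICE = "DellFoundationService"  # assumed; verify the actual service name
    KEY = (r"SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates"
           "\\98A04E4163357790C4A79E6D713FF0AF51FE6927")

    # 1. Stop and disable the service that re-installs the certificate.
    subprocess.call(["sc", "stop", SERVICE])
    subprocess.call(["sc", "config", SERVICE, "start=", "disabled"])

    # 2. Delete the eDellRoot certificate's registry key.
    winreg.DeleteKey(winreg.HKEY_LOCAL_MACHINE, KEY)

    # 3. Reboot, then re-check certmgr.msc to confirm it hasn't returned.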

Future recommendations

It is clear that the manufacturers do not have their users’ interests at heart when they ship Windows with new computers.  Microsoft has recognised this and now promotes Signature Edition computers, which is a move I happen to support.  However, this should be standard, not an option.

There are two reasons why third-party software should not be bundled with computers:

  1. The user may not have a need or use for the software in question, either not requiring its functionality or preferring an alternative.
  2. All non-trivial software is a potential security attack vector and must be kept up to date.  The version released on the OEM image is guaranteed to be at least months old by the time your machine arrives at your door, and will almost certainly be out-of-date when you come to re-install.

So we wind up either spending hours uninstalling unwanted or out-of-date crap, or we spend hours obtaining a fresh clean non-OEM installation disc, installing the bare OS, then chasing up drivers, etc.

This assumes the OEM image is otherwise clean.  It is apparent though that more than just demo software is being loaded on these machines: malware is being shipped.

With Dell and Lenovo both in on this act, it’s now a question of whether we can trust OEM installs.  Evidence seems to suggest that no, we can no longer trust such images, and we have to consider all OS installations not done by the end user as suspect.

The manufacturers have abused our trust.  As far as convenience goes, we have been had.  It is clear that an OEM-supplied operating system does not offer any greater convenience to the end user, and instead, puts them at greater risk of malware attack.  I think it is time for this practice to end.

If manufacturers are unwilling to provide machines with images that would comply with Microsoft’s signature edition requirements, then they should ship the computer with a completely blank hard drive (or SSD) and unmodified installation media for a technically competent person (of the user’s choosing) to install.