Jun 06 2018

Recently, a stoush erupted between NBN chief executive Bill Morrow and the gaming community over whether “gamers” were “causing” the congestion issues experienced on fixed-wireless broadband links.

The ABC published this chart, comparing the average transfer rate of various games to the average transfer rate seen when watching various movies.  It’s an interesting chart, but I think it completely misses the point.

One thing that raw download speeds miss is latency.

Multimedia is hard real-time; however, unless you’re doing a two-way video or voice call, a few seconds of latency is not going to bother you. Your playback device can buffer several seconds’ worth of movie to feed to your video and sound devices and keep their buffers fed. No problem.

If those buffers aren’t kept topped up, you get break-up in your audio and the video “freezes” momentarily, losing the illusion of animation. So long as the data is received over the Internet link, passed to the decoder to be converted to raw video frames and audio samples, and stuffed into the relevant buffers in time, it all runs smoothly. Pre-recorded material makes this dead easy (by comparison). Uni-directional live streams are a bit more tricky, but again you can put up with quite a bit of latency.

Radio stations often have about 300-500ms of latency … just listen to the echo effect when a caller rings up with a radio on in the background; if it were truly live, it would howl like a PA microphone!

It’s two-way traffic that’s the challenge.

Imagine if, when typing an email… it was 5 seconds before the letters you just typed showed up. Or if you moved the mouse, it took 3 seconds before it registered that you had moved. If someone were just observing the screen (unaware of when the keystrokes/mouse clicks had been entered), they’d think the user was drunk!

And yes, I have personally experienced such links… type something, then go wait 30 seconds before hitting the ENTER key, or if you spot a mistake, count up the number of backspaces or cursor movements you need to type, then wait for the cursor to reach that spot before you make your correction. It’s frustrating!

Now consider online gaming, where reaction time requirements are akin to driving a race car. One false move, and suddenly your opposition has shot you, or they’ve successfully dodged your virtual bullet.

Carrier pigeons carrying MicroSD cards (which reach 128GB capacity these days) could actually outperform NBN in many places for raw data throughput. However, if the results from the Bergen Linux User’s Group experiments are anything to go by, you can expect a latency measured in hours. (Their ping log shows the round-trip-time to be about 53 minutes in the best case.)
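The pigeon comparison is easy to sanity-check with some back-of-envelope arithmetic (the 53-minute figure comes from the Bergen ping log; assuming one way is half the round trip):

```python
# Back-of-envelope for the pigeon comparison: a 128GB MicroSD card
# delivered in half the Bergen LUG's ~53 minute best-case round trip.
CARD_BYTES = 128e9             # 128GB payload
ONE_WAY_SECONDS = 53 * 60 / 2  # assume one way is half the measured RTT

throughput_mbps = CARD_BYTES * 8 / ONE_WAY_SECONDS / 1e6
print(f"{throughput_mbps:.0f} Mbit/s")  # ~644 Mbit/s -- but hours of latency
```

Several hundred megabits per second of raw throughput, which comfortably beats most NBN plans … but with a latency measured in tens of minutes at best.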

The movie stream will be sending many large packets at a mostly regular rate. The video game will be sending lots of tiny packets that Must Be Delivered Right Now!

I think it naïve to directly compare the two in the manner these graphs do, simply due to the nature of the types of traffic involved.  Video/VoIP calling would be a better metric, since a 100ms delay in a telephone conversation will have both parties verbally tripping over each other.

Tele-medicine is touted as one of the up-and-coming technologies, but for a surgeon to remotely operate on a patient, they need that robotic arm to respond right now, not in 30 seconds’ time.  It may not be a lot of data to say “rotate 2°” or “move forward 500µm”, but it needs to get there quickly, and the feedback from said movement must arrive back quickly if the patient is going to live.

The sooner we stop ignoring this elephant in the room, the better off we’ll all be.

May 31 2018

So, recently I bit the bullet and decided to sign up for an account with AliExpress.

So far, what I’ve bought from there has been clothing (unbranded stuff, not counterfeit) … while there’s some very cheap electronics there, I’m leery about the quality of some of it, preferring instead to spend a little more to buy through a more reliable supplier.

Basically, it’s a supplier of last resort, if I can’t buy something anywhere else, I’ll look here.

So far the experience has been okay.  The sellers so far have been genuine, and while the slow boat from China takes a while, it’s not that big a deal.

That said, it would appear the people who actually develop its back-end are a little clueless when it comes to matters on the Internet.

Naïve email address validation rules

Yes, they’re far from the first culprits, but it would seem perfectly compliant email addresses, such as foo+bar@gmail.com, are rejected as “invalid”.

News for you, AliExpress, and for anyone else: You Can Put Plus Signs In Your Email Address!

Lots of SMTP servers and webmail providers support it, to quote Wikipedia:

Addresses of this form, using various separators between the base name and the tag, are supported by several email services, including Runbox (plus), Gmail (plus),[11] Yahoo! Mail Plus (hyphen),[12] Apple’s iCloud (plus), Outlook.com (plus),[13] ProtonMail (plus),[14] FastMail (plus and Subdomain Addressing),[15] MMDF (equals), Qmail and Courier Mail Server (hyphen).[16][17] Postfix allows configuring an arbitrary separator from the legal character set.[18]

You’ll note the ones that use other characters (e.g. MMDF, Yahoo, Qmail and Courier) are in the minority.  Postfix will let you pick nearly anything (within reason), all the others use the plus symbol.

Doing this means instead of using my regular email address, I can use user+secret@example.com — if I see a spoof email pretending to be from you sent to user@example.com, I know it is fake.  On the other hand, if I see someone else use user+secret@example.com, I know they got that email address from you.

Email validation is actually a lot more complex than most people realise… it’s gotten simpler with the advent of SMTP, but years ago …server1!server2!server3!me was legitimate in the days of UUCP.  During the transition, server1!server2!server3!user@somesmtpserver.example.com was not unheard of either.  Or maybe user%innerhost@outerhost.net?  Again, within standards.
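For what it’s worth, the gap between a naive sign-up-form check and one that admits the RFC 5322 local-part characters is tiny.  A rough sketch in Python (both patterns are simplified for illustration; neither is a full RFC 5322 validator):

```python
import re

# A deliberately naive pattern of the sort many sign-up forms use:
naive = re.compile(r'^[A-Za-z0-9._]+@[A-Za-z0-9.-]+$')

# Still simplified, but admitting the RFC 5322 "atext" characters in
# the local part -- including '+':
atext = r"[A-Za-z0-9!#$%&'*+/=?^_`{|}~.-]+"
permissive = re.compile(rf'^{atext}@[A-Za-z0-9.-]+$')

addr = 'foo+bar@gmail.com'
print(bool(naive.match(addr)))       # False -- the naive rule rejects it
print(bool(permissive.match(addr)))  # True
```

Even the permissive pattern above doesn’t handle the UUCP bang paths or %-routing mentioned earlier, which is rather the point: if you must validate, err on the side of accepting.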

Protocol-relative URIs don’t work outside web browsers

This, I’ve reported to them before, but basically the crux of the issue is their message notification emails.  The following is a screenshot of an actual email received from AliExpress.

Now, it would not matter what the email client was.  In this case, it’s Thunderbird, but the same problem would exist for Eudora, Outlook, Windows Mail, Apple Mail, The Bat!, Pegasus Mail … or any other email client you care to name.  If it runs outside the browser, that URI is invalid.  Protocol-relative means you use the same protocol as the page the hyperlink exists on.

In this case, the “protocol” used to retrieve that “page” was imap; imap://msg.aliexpress.com is wrong.  So is pop3://msg.aliexpress.com.  The only place I see this working, is on webmail sites.

Clearly, someone needs a clue-by-four to realise that not everybody uses a web browser to browse email.

Weak password requirements

When I signed up, boy were they fussy about the password.  My standard passwords are gibberish with punctuation… something AliExpress did not like.  They do not allow anything except digits and letters, and you must choose between 6 and 20 characters.  Not even XKCD standards work here!

Again, they aren’t the only ones… Suncorp are another mob that come to mind (in fact, they’re even more “strict”; they only allow 8 characters… this is for their Internet banking… in 2018).  Thankfully the one bank account I have Internet banking on is a no-fee account that has bugger all cash in it… the one with my savings in it is a passbook account, and completely separate.  (To their credit though, they do allow + in an email address.  They at least got that right.)

I can understand the field having some limit… you don’t want to receive two blu-ray discs worth of “password” every time a user authenticates themselves… but geez… would it kill you to allow 50 characters?  Does your salted hashing algorithm (you are using salted hashes, aren’t you?) really care what characters you use?  Should you be using it if it does?  Once hashed, the output is going to be a fixed width, ideal for a database, and Bobby Tables is going to be hard pushed to pick a password that will hash to “'; drop table users; --”.
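To illustrate the point, here’s a minimal sketch using PBKDF2 from Python’s standard library (the iteration count and salt size are illustrative, not recommendations): whatever the password contains, the stored value comes out the same fixed width.

```python
import hashlib
import os

# Minimal sketch of salted password hashing.  Whatever the password
# holds -- punctuation, SQL, a kilobyte of 'x' -- the stored value is
# the same fixed width, which is all the database needs to care about.
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                 salt, 100_000)
    return salt + digest  # 16-byte salt + 32-byte digest = 48 bytes

for pw in ('hunter2', 'correct horse battery staple',
           "'; drop table users; --", 'x' * 1000):
    print(len(hash_password(pw)))  # 48 every time
```

Note that Bobby Tables’ password never touches the database as text; only the 48 opaque bytes do.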

By requiring these silly rules, they’ve actually forced me to use a weaker password.  The passwords I would have used on each site, had I been given the opportunity to pick my own, would have featured a much richer choice of characters, and thus been harder to break.  Instead, they’ve hobbled their own security.  Go team!

Reporting website issues is more difficult than it needs to be

Reporting a website issue is nigh on impossible.  Hence the reason for this post.  Plenty is there if I want to pick a fight with a seller (I don’t), or if I think there’s an intellectual property issue (this isn’t).  I eventually did find a form, and maybe they’ll do something about it, but I’m not holding my breath.

Forget to whitelist a script, and you get sworn at, in Mandarin

This is a matter of “unhappy code paths” not receiving the attention that they need.  In fact, there are a few places where they haven’t really debugged their l10n support properly and so the untranslated Alibaba pops up.

Yeah, the way China is going with global domination, we might some day find ourselves having to brush up on our Mandarin, and maybe Cantonese too… but that day is not today.

Anyway, I think that more or less settles it for now.  I’ll probably find more to groan about, but I do need to get some sleep tonight and go to work tomorrow.

Mar 19 2018

So, on Friday, I had a job to update some documentation.  Specifically, I had to update the code examples on a Confluence document.

No problem… or so I thought.  The issue I faced was that the Confluence application seems to be getting too clever for its own good.  Honestly, I’d be happier with a plain textarea which took some Wiki syntax such as Markdown… or heck… plain HTML!  I use WordPress on this blog here, and while the editor here isn’t bad, I’m thankful that going to the source editor is just a click away, as there are some things the WYSIWYG editor can’t do well (inline code), or even at all (tables).

The editor in Confluence is much less polished.  Navigating with the arrow keys is an unpredictable experience, sometimes it moves by single lines, sometimes it jumps a page.  Sometimes, starting several lines deep in a code block, a single up-arrow will move you to the line above, sometimes it moves you to some line in a paragraph above the code block.  It’s an exercise in frustration.

Fine, I thought, I’ll just copy and paste the code into qvim.  Highlight… copy… paste… ohh brilliant, it’s now all stuffed onto one line!  Thankfully what I was editing was JSON, so it’s easy to re-format: vim makes it trivial to pipe the buffer contents through an arbitrary external program such as python -m json.tool.  This lacked the flexibility to auto-format the JSON the way the code examples were formatted though, so I made a work-alike that made use of Python’s OrderedDict to sort the keys a bit more logically, and told json.dump to indent the code with 2-space indentation (this is how the existing examples were formatted).
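A rough sketch of that work-alike (the preferred key order here is made up for illustration; the real list matched the documented examples):

```python
import json
from collections import OrderedDict

# Hypothetical preferred ordering -- the real list matched the docs.
PREFERRED = ['id', 'name', 'type', 'value']

def sort_key(k):
    # Known keys first, in PREFERRED order; unknown keys alphabetically after.
    return (PREFERRED.index(k), '') if k in PREFERRED else (len(PREFERRED), k)

def reorder(obj):
    """Recursively re-order dict keys; leave lists and scalars alone."""
    if isinstance(obj, dict):
        return OrderedDict((k, reorder(obj[k]))
                           for k in sorted(obj, key=sort_key))
    if isinstance(obj, list):
        return [reorder(v) for v in obj]
    return obj

raw = '{"value": 42, "id": 1, "name": "example"}'
print(json.dumps(reorder(json.loads(raw)), indent=2))
```

In vim, you pipe the buffer through it the same way as json.tool, with something like :%!python3 reformat.py (the script name here is hypothetical).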

Having done this, I thought I’d make mention to Atlassian about the issues with their editor.  I hit the Feedback link up the top of the page.  I pointed out the issues I was having.  In closing I also pointed out how sluggish their system was.  The desktop PC at work is an 8-core AMD Ryzen 7 1700 with 16GB of DDR4.  Not a slow machine.  Maybe it’s rose-coloured glasses, but I recall having a smoother editing experience with Microsoft Word for Windows 6.0 on my 33MHz 486/DX, which sported a whopping 8MB RAM.  Hot stuff back in 1994.  My present desktop does fine with LibreOffice, and this WordPress blog works fine in it, so I know it’s not my browser or hardware.  Yet Confluence struggles, on a PC that has 8 times the CPU cores, each running at nearly 10 times the clock speed, and with 2048 times the amount of RAM to boot.

I composed my feedback and sent it Friday afternoon.  I left the browser window open while I submitted the feedback, and went home.  This morning, I get in, enter my password to unlock the workstation, and see this:

Atlassian feedback … *still* sending after a whole week-end!

Yep, about 2kB of plain text has taken more than 50 hours to make its way from my desktop to their back-end servers.  Did a feral cat interrupt their RFC-1149 based Internet link?

Feb 13 2018

So, over the last few years we’ve seen a big shift in the way websites operate.

Once upon a time, JavaScript was a nice-to-have, and you as a web developer better be prepared for it to not be functional; the DOM was non-existent, and we were ooohing and ahhing over the de facto standard in Internet multimedia: Macromedia Flash.  The engine we now call WebKit was still a primitive and quite basic renderer called KHTML in a little-known browser called Konqueror.  Mozilla didn’t exist as an open-source project yet; it was Netscape and Microsoft duelling it out together.

Back then, XMLHTTPRequest was so new it wasn’t a standard yet; Microsoft had implemented the idea as an ActiveX control in IE5, and no one else had it.  So if you wanted to update a page, you had to re-load the whole lot and render it server-side.  We had just shaken off our FONT tags for CSS (thank god!), but if you wanted to make an image change as the mouse cursor hovered over it, you still needed those onmouseover/onmouseout event handlers to swap the image.  Ohh, and scalable graphics?  Forget it.  Render as a GIF or JPEG and hope you picked the resolution right.

And bear in mind, the expectation was that a user running an 800×600 pixel screen resolution, and connected via a 28.8kbps dial-up modem, should be able to load your page up within about 30 seconds, and navigate without needing to resort to horizontal scroll bars.  That meant images had to be compressed to be no bigger than 30kB.
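Worked through, that budget looks like this:

```python
# The old rule of thumb, worked through: a 28.8kbps modem moves 3,600
# bytes per second, so a 30 second page load gives a total budget of
# about 108kB -- hence images compressed to 30kB or less, leaving room
# for the HTML and everything else.
link_bps = 28_800
budget_bytes = link_bps / 8 * 30
print(budget_bytes)  # 108000.0
```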

That was 17 years ago.  Man I feel old!

This gets me thinking… today, the expectation is that your Internet connection is at least 256kbps.  Why then do websites take so long to load?

It seems our modern web designers have forgotten the art of how to pack down a website to minimise the amount of data needed to be transmitted so that the page is functional.  In this modern age of “pretty” web design, we’ve forgotten how to make a page practical.

Today, if you want to show an icon on a page, and have it fill the entire browser window, you can fire up Inkscape or Adobe Illustrator, let the creative juices flow and voilà, out pops a scalable vector graphic, which can be dropped straight into your HTML.  Turn on gzip compression on the web server, and that graphic will be on that 28.8kbps user’s screen in under 3 seconds, and can still be as big as they want.

If you want to make a page interactive, there’s no need to reload the entire page; XMLHTTPRequest is now a W3C standard, and implemented in all the major browsers.  Websockets means an end to any kind of polling; you can get updates as they happen.

It seems silly, but in spite of all the advancements, website page loads are not getting faster, they’re getting slower.  The “everybody has broadband” and “everybody has full-HD screens” argument is being used as an excuse for bloat and sloppy design practices.

More than once I’ve had to point someone to the horizontal scroll bar because the web designer failed to test their website at the rather common 1366×768 screen resolution of a typical laptop.  If I had a dollar for every time that’s happened in the last 12 months, I’d be able to buy the offending companies out and sack the web designers responsible!

One of the most annoying, from a security perspective, is the proliferation of “content distribution networks”.  It seems they’ve realised these big bulky blobs of JavaScript take a long time to load even on fast links.  So, what do the bright sparks do?  “I know… instead of loading it from one server, I’ll put it on 10 and increase my upload capacity 10-fold!”  Yes, they might have 1Gbps on each host.  1Gbps × 10 = 10Gbps, so the page will load at 10Gbps, right?

Cue sad tuba sound effect.

At my workplace, we have a 20Mbps Ethernet (not ADSL[2], fibre or cable; Ethernet) link to the Internet.  On that link, I’ve been watching the web get slower and slower… and I do not think our ISP is completely to blame, as I see the same issue at home too.  One where we feel the pain a lot is Atlassian’s system, particularly Jira and Confluence.  To give you an idea of how deeply they drink the CDN Kool-Aid, check out the number of sites I have to whitelist in order to get the page functional:

Atlassian’s JIRA… failing in spite of a crapton of scripts being loaded.

That’s 17 different hosts my web browser must make contact with, and download content from, before the page will function.  17 separate HTTP connections, which must fight with all other IP traffic on that 20Mbps Ethernet link for bandwidth.  20Mbps is the maximum that any one connection will do, and I can guarantee it will not reach even half that!
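The arithmetic of why the CDN trick can’t help a 20Mbps client is trivial:

```python
# Why ten CDN hosts don't make a 20Mbps office link any faster: the
# delivered rate is the minimum of what the servers offer and what the
# client's access link can carry.  (Figures from the text.)
server_gbps = 1.0
n_hosts = 10
client_mbps = 20.0

offered_mbps = server_gbps * 1000 * n_hosts    # 10,000 Mbit/s offered
delivered_mbps = min(offered_mbps, client_mbps)  # ...but only 20 delivered
print(delivered_mbps)  # 20.0
```

And that 20Mbps ceiling is before the 17 connections start fighting each other, and everything else on the link, for a share of it.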

Interestingly, despite allowing all those scripts to load, they still failed to come up with the goods after a pregnant pause.  So the extra thrashing of the link was for naught.  Then there’s the security implications.

At least 3 of those are pages that Atlassian do not control.  If someone compromised ravenjs.com, for example, they could inject any JavaScript they want on the JIRA site, and take control of a user’s account.  Atlassian are relying on these third parties’ promises and security practices to ensure their site stays secure, and stays in their (the third parties’) control.  Suppose someone forgets to renew a domain registration; the result could be highly embarrassing!

So, I’m left wondering what they teach these days.  For a multitude of reasons, sites should be blazingly quick to load: modern techniques ought to permit vastly improved efficiency of content representation and delivery, and network link speeds are steadily improving.  However, the reverse seems true… why are we failing so badly?

Jan 13 2018

Part of my day job involves being the technical contact for the company’s website, which means we get lots of offers from people offering to put us on the “first page of Google”.

Hmm, last time I checked, the first page of Google was, strangely, Google.  Somehow, I don’t think they outsource their SEO strategy to get there… they wrote the bloody code!

These emails generally go straight to Spamcop, which sends nastygrams to the people hosting the email servers they used.  In some cases, I’ve taken the extraordinary step of blocking frequently abused hosts.

# Block Centrilogic and SmartMailer because they don't act on spam reports.
-A INPUT -s 173.240.14.0/24 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 199.43.203.0/24 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
# Block OVH because they don't act on spam reports.
# List taken from https://mxtoolbox.com/SuperTool.aspx?action=asn%3aAS16276&run=toolpage
-A INPUT -s 5.39.0.0/17 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 5.135.0.0/16 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 5.196.0.0/16 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.7.244.0/24 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.18.128.0/24 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.18.136.0/21 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.18.172.0/24 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.20.110.0/24 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.21.41.0/24 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.24.8.0/21 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.26.94.0/24 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.29.224.0/24 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.30.208.0/21 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
-A INPUT -s 8.33.96.0/21 -p tcp --dport 25 -j REJECT --reject-with icmp-host-prohibited
…

That is not an exhaustive list.  Sorry to people who use OVH for hosting and were trying to contact VRT/CETA legitimately, but OVH have shown themselves to be grossly incompetent with regard to management of network abuse.  Centrilogic/SmartMailer are more recent additions.

Of course, they keep trying, and thankfully, it takes longer for them to write the email than it does for me to deal with it. This doesn’t stop them claiming little gems like this:

Note: We are not spammers and are against spamming of any kind. If you are not interested then you can reply with a simple “NO”.

Errm, hate to disagree (actually no, in this case, I love disagreement)… but a few points:

  1. You’re sending me unsolicited content…
  2. … without my consent… (no, a listing in a domain registration or an address scraped from a website is not consent)
  3. … that is advertising a paid-for service or otherwise something you’re hoping to make money from…
  4. … by electronic messaging.

That by definition is an Unsolicited Commercial Email… aka SPAM.  If you claim to be an Australian business, you better have a look at this.  If your ISP is complaining that you are abusing their services by sending spam, then perhaps you need to realise the people you are contacting are not interested!  You have your NO.

Sep 10 2017

… Come now, Microsoft… are you telling me your operating system just makes up its own error codes?  How can the error code be “unknown”?  The computer is doing what you told it to do!

Moreover, why can’t you fix your broken links?  Clearly the error I’m getting is not any of the ones you’ve listed, so why even offer them as suggestions?

Aug 13 2016

Sometimes I wonder.  Take this evening for example.

I recently purchased some microcontrollers to evaluate for a project, some Atmel ATTiny85s, because they have a rather nice PLL function which means they can do VHF-speed PWM, and some NXP LPC810s, because they happen to be the only DIP-package ARM chip on the market I know of.

The project I’m looking at is a re-work of my bicycle horn… the ATMega32U4 works well, but the LeoStick boards are expensive compared to a bare DIP MCU, and the wiring inside the original prototype is a mess.  I also never got USB working on them, so there’s no point in a USB-capable MCU.

I initially got ATMega1284s owing to the flash storage, but these, being 40-pin DIPs, are bigger than anticipated, and given they’ve got dual USARTs, lots of GPIOs and plenty of storage space, I figured I’d put them aside for another project.

What to use?  Well I have some AT89C2051s from way back (but no programmer for them), some ATTiny24As which I bought for my solar cluster project, an ATMega8L from another project, a LeoStick (Arduino Leonardo clone).  The LeoStick I’m in the process of turning into a debugWire debugger so that I can figure out what the ADCs are doing in my cluster’s power controller (ATTiny24A).

I started building a programmer for the ‘2051s using my ATMega8L last weekend.  The MAX232 IC I grabbed for serial I/O was giving me gibberish, and today I confirmed it was misbehaving.  The board in general is misbehaving in that after flashing the MCU, it seems to stay in reset, so I’ve got more work to do.  If I got that going, I was thinking I could have PCM recordings in an I²C EEPROM and use port 1 on the ‘2051 with an R2R ladder DAC to play sound.  (These chips do not feature PWM.)

Thinking this morning, I thought the LPC810 might be worth a shot.  It only has 4kB of flash, half that of the ATTiny85, and doesn’t have as impressive PWM capabilities, but is good enough.  I really need about 16kB to store the waveforms in flash.  I do have some I²C EEPROMs, mostly <2kB ones that are sourced off old motherboards, but also a handful of 32kB ones that I had just bought especially for this… but then left behind on my desk at work.

I considered audio compression, and experimenting with ADPCM-style techniques, came to the conclusion that I didn’t like the reduced audio quality.  It really sounded harsh.  (Okay, I realise 4-bits per sample is never going to win over the audiophiles!)

Maybe instead of PCM, I could do a crude polyphonic synthesizer?  My horn effect is in fact synthesized using a Python script: the same can be done in C, and the chip probably has the CPU grunt to do it.  It’d save the flash space as I’d be basically doing “poor man’s MIDI” on the thing.  Similar has been done before on lesser hardware.

I did some rough design of data structures.  I figured out a data structure that would allow me to store the state of a “voice” in 8 bytes, and could describe note and timing events in 8-byte blocks.  So in a 2kB EEPROM, I’d store 256 notes, and could easily accommodate 8 or 16 voices in RAM, provided the CPU could keep up at 30MHz.
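For illustration, one possible 8-byte packing sketched with Python’s struct module (the field names and widths here are my guesses, not the actual layout I settled on):

```python
import struct

# Hypothetical layout for one 8-byte note event -- field names and
# widths are guesses for illustration, not the final format:
#   delta : ticks until the event fires  (uint16)
#   voice : which of up to 16 voices     (uint8)
#   note  : MIDI-style note number       (uint8)
#   dur   : duration in ticks            (uint16)
#   vel   : velocity/amplitude           (uint8)
#   flags : articulation bits            (uint8)
EVENT = struct.Struct('<HBBHBB')
assert EVENT.size == 8

packed = EVENT.pack(120, 0, 69, 240, 100, 0)  # note 69 (A440) on voice 0
print(len(packed), 2048 // EVENT.size)  # 8-byte events, 256 to a 2kB EEPROM
```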

So, I pull a chip out, slap it in my breadboard, and start hooking it up to power, and to my shiny new USB-TTL serial cable.  Fire up lpc21isp and, nothing, no response from the chip.  Huh?  Check wiring, probe around, still nothing.  Tried different baud rates, etc.  No dice.

This stubborn chip was not going to talk to lpc21isp.  Okay, let’s see if it’ll do SWD.  I dig out my STLink/V2 and hook that up.

OpenOCD reports no response from the device.

Great, maybe a dud chip.  After a good hour or so of fruitless poking and prodding, I pull it out of the breadboard and go to get another from the tube it came from when I notice “Atmel” written on the tube.

I look closer at the chip: it was an ATTiny85!  Different pin-out, different ISP procedure, and even if the .hex file had uploaded, it almost certainly would not have executed.

Swap the chip for an actual LPC810, and OpenOCD reports:

Open On-Chip Debugger 0.10.0-dev-00120-g7a8915f (2015-11-25-18:49)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select '.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 10 kHz
adapter_nsrst_delay: 200
Info : Unable to match requested speed 10 kHz, using 5 kHz
Info : Unable to match requested speed 10 kHz, using 5 kHz
Info : clock speed 5 kHz
Info : STLINK v2 JTAG v23 API v2 SWIM v4 VID 0x0483 PID 0x3748
Info : using stlink api v2
Info : Target voltage: 2.979527
Warn : UNEXPECTED idcode: 0x0bc11477
Error: expected 1 of 1: 0x0bb11477
in procedure 'init'
in procedure 'ocd_bouncer'

I haven’t figured out the cause of this yet; perhaps the ST programmer doesn’t like talking to a competitor’s part. It’d be nice to get SWD going, since single-stepping code and peering into memory really spoils a developer like myself. I try lpc21isp again.

Success!  I see a LED blinking, consistent with the demo .hex file I loaded.  Of course now the next step is to try building my own, but at least I can load code onto the device now.

Apr 27 2016

It seems good old “common courtesy” is absent without leave, as is “common sense”. Some would say it’s been absent for most of my lifetime, but to me it seems particularly so of late.

In particular, where it comes to the safety of one’s self, and to others, people don’t seem to actually think or care about what they are doing, and how that might affect others. To say it annoys me is putting it mildly.

In February, I lost a close work colleague in a bicycle accident. I won’t mention his name, as I do not have his family’s permission to do so.

I remember arriving at my workplace early on Friday the 12th before 6AM, having my shower, and about 6:15 wandering upstairs to begin my work day. Reaching my desk, I recall looking down at an open TS-7670 industrial computer and saying out aloud, “It’s just you and me, no distractions, we’re going to get U-Boot working”, before sitting down and beginning my battle with the machine.

So much for the “no distractions” however. At 6:34AM, the office phone rings. I’m the only one there and so I answer. It was a social worker looking for “next of kin” details for a colleague of mine. Seems they found our office details via a Cab Charge card they happened to find in his wallet.

Well, first thing I do is start scrabbling for the office directory to get his home number so I can pass the bad news onto his wife, only to find he’s listed just his mobile number. Great. After getting in contact with our HR person, we later discover there aren’t any contact details in the employee records either. He was around before such paperwork existed in our company.

Common sense would have dictated that one carry an “in case of emergency” number on a card in one’s wallet! At the very least let your boss know!

We find out later that morning that the crash happened on a particularly sharp bend of the Go Between Bridge, where the offramp sweeps left to join the Bicentennial bikeway. It’s a rather sharp bend that narrows suddenly, with handlebar-height handrails running along its length and “Bicycle Only” signs clearly signposted at each end.

Common sense and common courtesy would suggest you slow down on that bridge as a cyclist. Common sense and common courtesy would suggest you use the other side as a pedestrian. Common sense would question the utility of hand rails on a cycle path.

In the meantime our colleague is still fighting for his life, and we’re all holding out hope for him as he’s one of our key members. As for me, I had a network to migrate that weekend. Two of us worked the Saturday and Sunday.

Sunday evening, emotions hit me like a freight train as I realised I was in denial, and realised the true horror of the situation.

We later find out on the Tuesday, our colleague is in a very bad way with worst-case scenario brain damage as a result of the crash. From shining light to vegetable, he’d never work for us again.

Wednesday I took a walk down to the crash site to try and understand what happened. I took a number of photographs, and managed to speak to a gentleman who saw our colleague being scraped off the pavement. Even today, some months later, the marks on the railings (possibly from handlebar grips) and a large blood smear on the path itself, can still be seen.

It was apparent that our colleague had hit this railing at some significant speed. He wasn’t obese, but he certainly wasn’t small, and a fully grown adult does not ricochet off a metal railing and slide face-first for over a metre without some serious kinetic energy involved.

Common sense seems to suggest the average cyclist goes much faster than the 20km/hr collision speed the typical bicycle helmet is designed for under AS/NZS 2063:2008.

I took the Thursday and Friday off as time-in-lieu for the previous weekend, as I was an emotional wreck. The following Tuesday I resumed cycling to work, and that morning I tried an experiment to reproduce the crash conditions. The bicycle I ride wasn’t that much different to his, both bikes having 29″ wheels.

From what I could gather that morning, it seemed he veered right just prior to the bend then lost control, listing to the right at what I estimated to be about a 30° angle. What caused that? We don’t know. It’s consistent with him dodging someone or something on the path — but this is pure speculation on my part.

Mechanical failure? The police apparently have ruled that out. There’s not much in the way of CCTV cameras in the area, plenty on the pedestrian side, not so much on the cycle side of the bridge.

Common sense would suggest relying on a cyclist to remember what happened to them in a crash is not a good plan.

In any case, common sense did not win out that day. Our colleague passed away from his injuries a little over a fortnight after his crash, aged 46. He is sadly missed.

I’ve since made a point of taking my breakfast down to that point where the bridge joins the cycleway. It’s the point where my colleague had his last conscious thoughts.

Over the course of the last few months, I’ve noticed a number of things.

Most cyclists sensibly slow down on that bend, but a few race past at ludicrous speed. One morning, it nearly became an encore performance as two construction workers on CityCycle bikes, sans helmets, came careening around the corner, one almost losing it.

Then I see the pedestrians. There’s a well-lit, covered walkway on the opposite side of the bridge for pedestrian use. It has bench seats, drinking fountains, good lighting, everything you’d want as a pedestrian. Yet some feel it is not worth the personal exertion to walk the extra 100 m to make use of it.

Instead, they show a lack of courtesy by using the bicycle path. Walking on a bicycle path isn’t just dangerous to the pedestrian, like stepping out onto a road would be; it’s dangerous for the cyclist too!

If a car hits a pedestrian or cyclist, the damage to the occupants of the car is going to be minimal to nonexistent compared to what happens to the cyclist or pedestrian. If a cyclist or motorcyclist hits a pedestrian, however, the rider surrounds the frame rather than being surrounded by it, and so hits the ground first, possibly at significant speed.

Yet, pedestrians think it is acceptable to play Russian roulette with their own lives and the lives of every cycle user by continuing to walk where it is not safe for them to go. They’d never do it on a motorway, but somehow a bicycle path is considered fair game.

Most pedestrians are understanding; I’ve politely asked a number not to walk on the bikeway, and most oblige after I point out how to get to the pedestrian walkway.

Common sense would suggest some signage on where the pedestrian can walk would be prudent.

However, I have had at least two ignore me, one this morning telling me to “mind my own shit”. Yes mate, I am minding “my own shit”, as you put it: I’m trying to stop the hypothetical me from crashing into the hypothetical you!

It’s this sort of reaction that seems symbolic of the whole “lack of common courtesy” that abounds these days.

It’s the same attitude that seems to suggest to people that it’s okay to park a car so that it blocks the footpath: newsflash, it’s not! One friend of mine frequently runs into this problem. He’s in a wheelchair, a vehicle not known for its off-road capabilities or for squeezing through the narrow gap left by a car.

It seems the drivers think it’s acceptable to force footpath users of all types, including the elderly, the young and the disabled, to “step out” onto the road to avoid the car that they so arrogantly parked there. It makes me wonder how many people subsequently become disabled as a result of a collision caused by them having to step around such obstacles. Would the owner of the parked car be liable?

I don’t know, I’m no lawyer, but I should think they’d carry some responsibility!

In Queensland, pedestrians have right-of-way on the footpath. That includes cyclists: cyclists of all ages are allowed there subject to council laws and signage — but once again, they need to give way. In other words, don’t charge down the path like a lunatic, and don’t block it!

No doubt the people I’m trying to convince are too arrogant to care about the above, or about what effect their actions might have on others. Still, I needed to get it off my chest!

Nothing will bring my colleague back, a fact that truly pains me, and I’ve learned some valuable lessons about the sort of encouragement I give people. I regret not telling him to slow down; five minutes longer wouldn’t have killed him, and I certainly did not want a race! Was he trying to race me so he could keep an eye on me? I’ll never know.

He was a bright person, though; proof that even the intelligent among us are prone to doing stupid things. With thrills come spills, and one might question whether the commute to work is the appropriate venue for such thrills, or whether they can wait for another time.

I for one have learned that it does not pay to be the hare, so I intend to just enjoy the ride for what it is. No need to rush: common sense tells me it just isn’t worth it!

Nov 24 2015
 

Some time back, Lenovo made the news with the Superfish fiasco.  Superfish was a piece of software that intercepted HTTPS connections by way of a trusted root certificate installed on the machine.  When the software detected a browser attempting to make an HTTPS connection, it would intercept it and connect to the remote site on the browser’s behalf.

When Superfish negotiated the connection, it would then generate on-the-fly a certificate for that website which it would then present to the browser.  This allowed it to spy on the web page content for the purpose of advertising.
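The reason this interception works is that the browser trusts anything that chains to an installed root certificate. One way to detect it (my own illustration, not something from the Superfish coverage) is certificate pinning: record the fingerprint of the genuine certificate out-of-band, then compare it against whatever is presented. A minimal sketch in Python, using stand-in byte strings rather than real DER certificates:

```python
import hashlib

def fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate in DER encoding."""
    return hashlib.sha256(der_bytes).hexdigest()

# Hypothetical: a fingerprint recorded out-of-band on a known-clean machine
PINNED = fingerprint(b"genuine-certificate-der-bytes")

def is_intercepted(presented_der: bytes) -> bool:
    """True if the certificate presented to the browser differs from the pin."""
    return fingerprint(presented_der) != PINNED
```

An intercepting proxy can mint a certificate the machine will trust, but it cannot mint one that matches a fingerprint pinned from a clean connection.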

Now Dell have been caught shipping an eDellRoot certificate on some of its systems.  Both laptops and desktops are affected.  This morning I checked the two newest computers in our office, both Dell XPS 8700 desktops running Windows 7.  Both had been built on the 13th of October, and shipped to us.  They both arrived on the 23rd of October, and they were both taken out of their boxes, plugged in, and duly configured.

I pretty much had two monitors and two keyboards in front of me, performing the same actions on both simultaneously.

Following configuration, one was deployed to a user, the other was put back in its box as a spare.  This morning I checked both for this certificate.  The one in the box was clean, the deployed machine had the certificate present.

Dell’s dodgy certificate in action

How do you check on a Dell machine?

A quick way is to hit Logo+R (Logo = “Windows Key”, “Command Key” on Mac, or whatever it is on your keyboard, some have a penguin), then type certmgr.msc and press ENTER. Under “Trusted Root Certification Authorities”, look for “eDellRoot”.

Another way is, using IE or Chrome, try one of the following websites:

(Don’t use Firefox: it has its own certificate store, thus isn’t affected.)
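As a cross-platform alternative (my own sketch, not from the original advisories), you can inspect who issued the certificate a site actually presents to you. On an affected machine, an issuer of “eDellRoot” on a site that should be signed by a public CA is the tell-tale:

```python
import socket
import ssl

def issuer_common_name(cert: dict) -> str:
    """Extract the issuer's commonName from a dict as returned by getpeercert()."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return ""

def presented_issuer(host: str, port: int = 443) -> str:
    """Connect to a site and report who issued the certificate it presents."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return issuer_common_name(tls.getpeercert())
```

On a compromised box, `presented_issuer("example.com")` would name the injected root rather than the site’s real CA, because the interceptor re-signs everything with it.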

Removal

Apparently just deleting the certificate causes it to be re-installed after a reboot.  qasimchadhar posted some instructions for removal; I’ll be trying these shortly:

You get rid of the certificate by performing following actions:

  1. Stop and disable the Dell Foundation Services service
  2. Delete eDellRoot CA registry key here
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\98A04E4163357790C4A79E6D713FF0AF51FE6927
  3. Then reboot and test.

Future recommendations

It is clear that the manufacturers do not have their users’ interests at heart when they ship Windows with new computers.  Microsoft has recognised this and now promotes Signature Edition computers, a move I happen to support.  However, this should be the standard, not an option.

There are two reasons why third-party software should not be bundled with computers:

  1. The user may have no need or use for the said software, either not requiring its functionality or preferring an alternative.
  2. All non-trivial software is a potential security attack vector and must be kept up to date.  The version released on the OEM image is guaranteed to be at least months old by the time your machine arrives at your door, and will almost certainly be out-of-date when you come to re-install.

So we wind up either spending hours uninstalling unwanted or out-of-date crap, or we spend hours obtaining a fresh clean non-OEM installation disc, installing the bare OS, then chasing up drivers, etc.

This assumes the OEM image is otherwise clean.  It is apparent, though, that more than just demo software is being loaded onto these machines: malware is being shipped.

With Dell and Lenovo both in on this act, it’s now a question of whether we can trust OEM installs.  Evidence seems to suggest that no, we can no longer trust such images, and we have to consider all OS installations not done by the end user as suspect.

The manufacturers have abused our trust.  As far as convenience goes, we have been had.  It is clear that an OEM-supplied operating system does not offer any greater convenience to the end user, and instead, puts them at greater risk of malware attack.  I think it is time for this practice to end.

If manufacturers are unwilling to provide machines with images that would comply with Microsoft’s signature edition requirements, then they should ship the computer with a completely blank hard drive (or SSD) and unmodified installation media for a technically competent person (of the user’s choosing) to install.

Oct 31 2015
 

Well, it seems the updates to Microsoft’s latest aren’t going as its maker planned. A few people have asked me about my personal opinion of this OS, and I’ll admit, I have no direct experience with it.  I also haven’t had much contact with Windows 8 either.

That said, I do keep up with the news, and a few things do concern me.

The good news

It’s not all bad of course.  Windows 8 saw a big shrink in the footprint of a typical Windows install, and Windows 10 continues to be fairly lightweight.  The UI disaster from Windows 8 has been somewhat pared back to provide a more traditional desktop with a start menu that combines features from the start screen.

There are some limitations with the new start menu, but from what I understand, it behaves mostly like the one from Windows 7.  The tiled section still has some rough edges though, something that is likely to be addressed in future updates of Windows 10.

If this is all that had changed though, I’d be happily accepting it.  Sadly, this is not the case.

Rolling-release updates

Windows has, since day one, been on a long-term support release model.  That is, they bring out a release, then support it for X years.  Windows XP, for example, was released in 2001 and supported until last year.  Windows Vista is still on extended support, and Windows 7 will enter extended support soon.

Now, in the Linux world, we’ve had both long-term support releases and rolling release distributions for years.  Most of the current Linux users know about it, and the distribution makers have had many years to get it right.  Ubuntu have been doing this since 2004, Debian since 1998 and Red Hat since 1994.  Rolling releases can be a bumpy ride if not managed correctly, which is why the long-term support releases exist.  The community has recognised the need, and meets it accordingly.

Ubuntu are even predictable with their releases.  They release on a schedule, and anything not ready in time is pushed back to the next release.  They do a release every 6 months, in April and October, and every 2 years the April release is a long-term support release.  That is: 8.04, 10.04, 12.04 and 14.04 are all LTS releases.  The LTS releases are supported for about 5 years (the earlier ones for 3 years on the desktop), the regular releases for 9 months (18 months prior to 13.04).
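That cadence is regular enough to capture as a rule of thumb (a toy sketch of my own; it ignores oddities such as 6.06, which slipped to June):

```python
def is_lts(version: str) -> bool:
    """Ubuntu LTS releases are the April (.04) releases of even-numbered years."""
    year, month = (int(part) for part in version.split("."))
    return month == 4 and year % 2 == 0
```

So `is_lts("14.04")` holds, while the interim releases like 14.10 and 13.04 do not.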

Debian releases are basically LTS, unless you run Debian Testing or Debian Unstable.  Then you’re running rolling-release.

Some distributions like Gentoo are always rolling-release.  I’ve been running Gentoo for more than 10 years now, and I find the rolling releases rarely give me problems.  We’ve had our hiccups, but these days, things are smooth.  Updating an older Gentoo box to the latest release used to be a fight, but these days, is comparatively painless.

It took most of that 10 years to get to that point, and this is where I worry about Microsoft forcing the vast majority of Windows users onto a rolling-release model, as they will be doing this for the first time.  As I understand it, there will be four branches:

  1. Windows Insiders programme is like Debian Unstable.  The very latest features are pushed out to them first.  They are effectively running a beta version of Windows, and can expect many updates, many breakages, lots of things changing.  For some users, this will be fine, others it’ll be a headache.  There’s no option to skip updates, but you probably will have the option of resigning from the Windows Insiders programme.
  2. Home users basically get something like Debian Testing.  After updates have been thrashed out by the insiders, it gets force-fed to the general public.  The Home version of Windows 10 will not have an option to defer an update.
  3. Professional users get something more like the standard releases of Debian.  They’ll have the option of deferring an update for up to 30 days, so things can change less frequently.  It’s still rolling-release, but they can at least plan their updates to take place once a month, hopefully without disrupting too much.
  4. Enterprise users get something like the old-stable release of Debian.  Security updates, and they have the option to defer updates for a year.

Enterprise isn’t available unless you’re a large company buying lots of licenses.  If people must buy a Windows 10 machine, my recommendation would be to go for the Professional version; then you have some right of veto, as not all the updates are purely security-related: some will change the UI and add or remove features.

I can see this being a major headache, though, for anyone who has to support hardware or software on Windows 10, since it’s essentially the build number that becomes important: different release builds will behave differently, possibly differently enough that things need much more testing and maintenance than vendors are used to.

Some are very poor at supporting Linux right now due to the rolling-release model of things like the Linux kernel, so I can see Windows 10 being a nightmare for some.

Privacy concerns

One of the big issues to be raised with Windows 10 is the inclusion of telemetry to “improve the user experience” and other features that are seen as an invasion of privacy.  Many things can be turned off, but it will take someone who’s familiar with the OS or good at researching the problem to turn them off.

Probably the biggest concern from my perspective as a network administrator is the WiFi Sense feature.  This is a feature in Windows 10 (and Windows Phone 8.1), turned on by default, that allows you to share WiFi passwords with your contacts.

If one of your contacts then comes into range of your AP, their device contacts Microsoft’s servers, which have the password on file and can provide it to that device (hopefully in a secured manner).  The password is never shown to the user themselves, but I believe it’s only a matter of time before someone figures out how to retrieve a password from WiFi Sense.  (A rogue AP would probably do the trick.)

We have discussed this at work where we have two WiFi networks: one WPA2 enterprise one for staff, and a WPA2 Personal one for guests.  Since we cannot control whether the users have this feature turned on or not, or whether they might accidentally “share” the password with world + dog, we’re considering two options:

  1. Banning Windows 10 (and Windows Phone 8.1) devices from our guest WiFi network.
  2. Implementing a cron job to regularly change the guest WiFi password.  (The Cisco AP we have can be hit with SSH; automating this shouldn’t be difficult.)
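The second option is easy enough to sketch. The passphrase generation below is straightforward Python; the command sequence is a hypothetical IOS-style example, and the real syntax would depend on the AP model and how the SSH session is driven:

```python
import secrets
import string

def new_guest_password(length: int = 16) -> str:
    """Generate a random WPA2 passphrase (must be 8-63 printable ASCII chars)."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = new_guest_password()

# Hypothetical command sequence to push over SSH; adjust for the actual AP
commands = [
    "configure terminal",
    "dot11 ssid Guest",
    f"wpa-psk ascii {password}",
    "end",
    "write memory",
]
```

Dropped into a cron job, this would rotate the guest passphrase on a schedule, limiting how long any password leaked via WiFi Sense remains useful.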

There are some nasty points in the end-user license agreement too that seem to give Microsoft free rein to make copies of any of the data on the system.  They say personal information will be removed, but even with the best of intentions, it is likely that some personal information will get caught in the net cast by the telemetry software.

Forced “upgrades” to Windows 10

This is the bit about Windows 10 that really bugs me.  Okay, Microsoft is pushing a deal where they’ll provide it to you for free for a year.  Free upgrades, yaay!  But wait: how do you know if your hardware and software is compatible?  Maybe you’re not ready to jump on the bandwagon just yet, or maybe you’ve heard news about the privacy issues or rolling release updates and decided to hold back.

Many users of Windows 7, 8 and 8.1 are now being force-fed the new release, whether we asked for it or not.

Now the problem with this is that it completely ignores the fact that not everyone runs an always-on Internet connection with a large quota. I know people who only have a 3G connection with a very small (1GB) quota. Windows 10 weighs in at nearly 3GB, so they’d be paying for 2GB worth of overuse charges just for the OS, never mind the web browsing, email and other things they actually bought their Internet connection for.
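The arithmetic here is simple but worth spelling out:

```python
def overuse_gb(download_gb: float, quota_gb: float) -> float:
    """Gigabytes billed at overuse rates once the monthly quota is exhausted."""
    return max(0.0, download_gb - quota_gb)
```

For the 3G user above, `overuse_gb(3.0, 1.0)` is 2.0: two full gigabytes at overuse rates before they have browsed a single page.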

Microsoft employees have been outed for showing such contempt before.  It seems so many there are used to the idea of an Internet connection that is always there and has a big enough quota to be considered “unlimited” that they have forgotten that some parts of the world do not have such luxuries.  The computer and the Internet are just tools: we do not buy an Internet connection just for the sake of having one.

Stopping updates

There are a couple of tools that exist for managing this.  I have not tested any of them, and cannot vouch for their safety or reliability.

  • BlockWindows (github link) is a set of scripts that, when executed, uninstall and disable most of the Windows 10-related updates on Windows 7 and 8/8.1.
  • GWX Control Panel is a (proprietary?) tool for controlling the GWX process.  The download is here.

My recommendation is to keep good backups.  Find a tool that will do a raw partition back-up of your Windows partition, and keep your personal files on a separate partition.  Then, if Microsoft does come a-knocking, you can easily roll back.  Hopefully after the “free upgrade” offer has expired (about this time next year), they will cease and desist from this practice.