Nov 24, 2015

Some time back, Lenovo made the news with the Superfish fiasco.  Superfish was a piece of software that intercepted HTTPS connections by way of a trusted root certificate installed on the machine.  When the software detected a browser attempting to make a HTTPS connection, it would intercept it and connect on the browser’s behalf.

When Superfish negotiated the connection, it would then generate on-the-fly a certificate for that website which it would then present to the browser.  This allowed it to spy on the web page content for the purpose of advertising.

Now Dell have been caught shipping an eDellRoot certificate on some of their systems.  Both laptops and desktops are affected.  This morning I checked the two newest computers in our office, both Dell XPS 8700 desktops running Windows 7.  Both had been built on the 13th of October and shipped to us.  They arrived on the 23rd of October, and they were both taken out of their boxes, plugged in, and duly configured.

I pretty much had two monitors and two keyboards in front of me, performing the same actions on both simultaneously.

Following configuration, one was deployed to a user, the other was put back in its box as a spare.  This morning I checked both for this certificate.  The one in the box was clean, the deployed machine had the certificate present.

Dell’s dodgy certificate in action

How do you check on a Dell machine?

A quick way is to hit Logo+R (Logo = “Windows Key”, “Command Key” on Mac, or whatever it is on your keyboard, some have a penguin), then type certmgr.msc and press ENTER. Under “Trusted Root Certification Authorities”, look for “eDellRoot”.
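Alternatively, a quick command-prompt check should turn up any copies in the machine’s root store.  I believe the following works on a stock Windows install (certutil ships with Windows), though I haven’t tried it on every edition:

certutil -store Root | findstr /i eDellRoot

If it prints nothing, the certificate isn’t in the machine’s trusted root store.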

Another way: using IE or Chrome, try one of the following websites:

(Don’t use Firefox: it has its own certificate store, thus isn’t affected.)


Apparently just deleting the certificate causes it to be re-installed after reboot.  qasimchadhar posted some instructions for removal; I’ll be trying these shortly:

You get rid of the certificate by performing the following actions:

  1. Stop and disable Dell Foundation Services
  2. Delete eDellRoot CA registry key here
  3. Then reboot and test.

Future recommendations

It is clear that the manufacturers do not have their users’ interests at heart when they ship Windows with new computers.  Microsoft has recognised this and now promotes signature edition computers, a move I happen to support.  HOWEVER, this should be standard, not an option.

There are two reasons why third-party software should not be bundled with computers:

  1. The user may not have a need or use for the said software, either not requiring its functionality or preferring an alternative.
  2. All non-trivial software is a potential security attack vector and must be kept up to date.  The version released on the OEM image is guaranteed to be at least months old by the time your machine arrives at your door, and will almost certainly be out-of-date when you come to re-install.

So we wind up either spending hours uninstalling unwanted or out-of-date crap, or we spend hours obtaining a fresh clean non-OEM installation disc, installing the bare OS, then chasing up drivers, etc.

This assumes the OEM image is otherwise clean.  It is apparent though that more than just demo software is being loaded on these machines: malware is being shipped.

With Dell and Lenovo now both in on this act, it’s now a question of whether we can trust OEM installs.  Evidence seems to suggest that no, we can no longer trust such images, and have to consider all OS installations not done by the end user as suspect.

The manufacturers have abused our trust.  As far as convenience goes, we have been had.  It is clear that an OEM-supplied operating system does not offer any greater convenience to the end user, and instead, puts them at greater risk of malware attack.  I think it is time for this practice to end.

If manufacturers are unwilling to provide machines with images that would comply with Microsoft’s signature edition requirements, then they should ship the computer with a completely blank hard drive (or SSD) and unmodified installation media for a technically competent person (of the user’s choosing) to install.

Nov 21, 2015

Well, in the last post I started to consider the thought of building my own computer from a spare 386 CPU I had liberated from an old motherboard.

One of the issues I face is implementing the bus protocol that the 386 uses, and decoding of interrupts.  The 386 expects an 8-bit interrupt request number that corresponds to the interrupting device.  I’m used to microcontrollers where you use a single GPIO line, but in this case, the interrupts are multiplexed.

For basic needs, you could do it with a priority encoder IC, which will work for a small number of interrupt lines.  Suppose I wanted more though?  How feasible is it to support many interrupt lines without tying up lots of GPIO lines?

CANBus has an interesting way of handling arbitration.  The “zeros” are dominant, and thus overrule “ones”.  The CAN transceiver has independent transmit and receive paths, so as a station is transmitting, it listens to the state of the bus.  When some nodes want to talk (they are, of course, oblivious to each other’s intentions), they each send a start bit (a zero), which synchronises all nodes, then begin sending an address.

While every contending node is sending the same bit value, the receiving nodes see that value.  The moment a node tries sending a recessive 1 while another is sending a dominant 0, it sees the disparity and concludes that it has lost arbitration.  Eventually you’re left with a single node, which then proceeds to send its CANBus frame.

Now, we don’t need the complexity of CANBus to do what we’re after.  We can keep synchronisation by simple virtue of the fact that we can distribute a common clock (the one the CPU runs at).  Dominant and recessive bits can be implemented with transistors pulling down on a pull-up resistor, or with a diode-OR; the latter gives us a system where ‘1’s are dominant.  Good enough.

So I fired up Logisim to have a fiddle, and came up with this:

Interrupt controller using logic gates

interrupt.circ is the actual Logisim circuit if you want to have a fiddle; decompress it first.  Please excuse the mess in the schematic.

On the left is the host side of the interrupt controller.  This would ultimately interface with the 386.  On the right are two “devices”, one on IRQ channel 0x01, the other on 0x05.  The controller handles two types of interrupts: “DMA interrupts”, where the device just wants to tell the DMA controller to put data into memory, and “IRQs”, where we want to interrupt the CPU.

The devices are provided with the following control signals from the interrupt controller:

Signal       Controlled by  Description
DMA          Devices        Informs the IRQ controller whether we’re interrupting for DMA purposes (high) or need to tell the CPU something (low).
IRQ          Devices        Informs the IRQ controller that we want its attention.
ISYNC        Controller     Informs the devices that they have the controller’s attention and should start transmitting address bits.
IRQBIT[2…0]  Controller     Instructs the devices which bit of their IRQ address to send (0 = MSB, 7 = LSB).
IDA          Devices        The inverted address bit value corresponding to the bit pointed to by IRQBIT.
IACK         Devices        Asserted by the device that wins arbitration.

Due to the dominant/recessive nature of the bits, the highest numbered device wins over lesser devices. IRQ requests also dominate over DMA requests.

In the schematic, the devices each have two D flip-flops that are not driven by any control signals.  These are my “switches” for toggling the state of the device as a user.  The ones feeding into the XOR gate control the DMA signal; the others control the IRQ line.

Down the bottom, I’ve wired up a counter to count how long between the ISYNC signal going high and the controller determining a result.  This controller manages to determine which device requested its attention within 10 cycles.  If clocked at the same 20MHz rate as the CPU core, this would be good enough for getting a decoded IRQ channel number to the data lines of the 386 CPU by the end of its second IRQ acknowledge cycle, and can handle up to 256 devices.
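If you’d rather see the arbitration as code than as gates, here’s a little C model of the same idea.  It mirrors the Logisim circuit only loosely, and all the names are mine:

#include <stdio.h>
#include <stdint.h>

#define NUM_DEV 4

static int arbitrate(const uint8_t addr[], const int req[], int n)
{
    int alive[NUM_DEV];
    int i, bit, winner = -1;

    for (i = 0; i < n; i++)
        alive[i] = req[i];

    for (bit = 7; bit >= 0; bit--) {    /* IRQBIT counts 0 = MSB … 7 = LSB */
        int bus = 0;

        for (i = 0; i < n; i++)         /* wired-OR of all senders: '1' is dominant */
            if (alive[i])
                bus |= (addr[i] >> bit) & 1;

        for (i = 0; i < n; i++)         /* senders of a recessive 0 drop out */
            if (alive[i] && ((addr[i] >> bit) & 1) != bus)
                alive[i] = 0;
    }

    for (i = 0; i < n; i++)
        if (alive[i])
            winner = i;                 /* exactly one device should remain */

    return winner;
}

int main(void)
{
    const uint8_t addr[NUM_DEV] = { 0x01, 0x05, 0x0a, 0x40 };
    const int     req[NUM_DEV]  = { 1, 1, 0, 0 };

    /* devices 0 (0x01) and 1 (0x05) contend; the higher address wins */
    printf("winner: device %d\n", arbitrate(addr, req, NUM_DEV));
    return 0;
}

Running it, devices 0 and 1 contend and device 1 wins, matching the dominant/recessive behaviour described above.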

A logical next step would be to look at writing this in Verilog and trying it out on an FPGA.  Thanks to the excellent work of Clifford Wolf in producing the IceStorm project, it is now possible to do this with completely open tools.  So, I’ve got a Lattice iCE40HX-8K FPGA board coming.  This should make a pretty mean SDRAM controller, interrupt controller and address decoder all in one chip, and should be a great introduction into configuring FPGAs.

Nov 072015

Well, I’ve been thinking a lot lately about single board computers. There’s a big market out there. Since the Raspberry Pi, there’s been a real explosion of options available to the small end of town, the individual. Prior to this, development boards were mostly in the 4-figure price range.

So we’re now rather spoiled for choice. I have a Raspberry Pi. There’s also the BeagleBone Black, Banana Pi, and several others. One gripe I have with the Raspberry Pi is the complete absence of any kind of analogue input. There’s an analogue line out, you can interface some USB audio devices (although I hear two is problematic), or you can get an I2S module.

There’s a GPU in there that’s capable of some DSP work, and a CLKOUT pin that can generate a wide range of frequencies. That sounds like the beginnings of a decent SDR; however, there’s one glitch: while I can use the CLKOUT pin to drive a mixer and the GPIOs to do band selection, there’s nothing that will take that analogue signal and sample it.

If I want something wider than audio frequencies (and even a 192kHz audio CODEC is not guaranteed above ~20kHz), I have to interface to SPI, and the pickings are somewhat slim. Then I read this article on a DIY single board computer.

That got me thinking about whether I could do my own. At work we use the Technologic Systems TS-7670 single-board computers, and as nice as those machines are, they’re a little slow and RAM-limited. Something that could work as a credible replacement there too would be nice; key needs there being RS-485, Ethernet and an 85 degree temperature rating.

Form factor is a consideration here, and I figured something modular, using either header pins or edge connectors would work. That would make the module easily embeddable in hobby projects.

Since all the really nice SoCs are BGA packages, I figured I’d first need to know how easily I could work with them. We’ve got a stack of old motherboards sitting in a cupboard that I figured I could raid for BGAs to play with, just to see first-hand how fine the pins were. A crazy thought came to me: maybe, for prototyping, I could do it dead-bug style?

The key thing here is being able to solder directly to a ball securely, then route the wire to its destination.  I may need to glue it to a bit of grounded foil to keep the capacitance in check.  So the first step, I figured, would be to try removing some components from the boards I had laying around to see this first-hand.

In amongst the boards I came across was one old 386 motherboard that I initially mistook for a 286 minus the CPU. The empty (PLCC) socket is for an 80387 math co-processor. The board was in the cupboard for a good reason, corrosion from the CMOS battery had pretty much destroyed key traces on one corner of the board.

Corrosion on a motherboard caused by a CMOS battery

I decided to take to it with the heat gun first.  The above picture was taken post-heatgun, but you can see just how bad the corrosion was.  The ISA slots were okay, and so were a stack of other useful IC sockets, ICs, passive components, etc.

With the heat gun at full blast, I’d just wave it over an area of interest until the board started to de-laminate, then with needle-nose pliers, pull the socket or component from the board. Sometimes the component simply dropped out.

At one point I heard a loud “plop”. Looking under the board, one of the larger surface-mounted chips had fallen off. That gave me an idea, could the 386 chip be de-soldered? I aimed the heat-gun directly at the area underneath. A few seconds later and it too hit the deck.

All in all, it was a successful haul.

Parts off the 386 motherboard

I also took apart an 8-bit ISA joystick card. It had some nice looking logic chips that I figured could be re-purposed. The real star though was the CPU itself:

Intel NG80386SX-20

The question comes up, what does one do with a crusty old 386 that’s nearly as old as I am? A quick search turned up this scanned copy of the Intel 80386SX datasheet. The chip has a 16-bit bus with 23 bits worth of address lines (bit 0 is assumed to be zero). It requires a clock that is double the chip’s operating frequency (there’s an internal divide-by-two). This particular chip runs internally at 20MHz. Nothing jumped out as being scary. Could I use this as a practice run for making an ARM computer module?

A dig around dug up some more parts:

More parts

In this pile we have…

I also have some SIMMs laying around, but the SDRAM modules look easier to handle since the controllers on board synchronise with what would otherwise be the front-side bus.  The datasheet does not give a minimum clock (although clearly this is not DC; DRAM does need to be refreshed) and mentions a clock frequency of 33MHz when set to run at a CAS latency of 1.  It just so happens that I have a 33MHz oscillator.  There’s a couple of nits in this plan though:

  • the SDRAM modules are 3.3V, the CPU is 5V: no problem, there are level conversion chips out there.
  • the SDRAM modules are 64-bits wide.  We’ll have to buffer the output into eight 8-bit registers.  Writes do a read-modify-write cycle, and we use a 2-to-4 decoder, driven by address bits 1 and 2 from the CPU, to select the CE pins on two of the registers.
  • Each SDRAM module holds 32MB.  We have a 23-bit address bus, which with 16-bit words gives us a total address space of 16MB.  Solution: the old 8-bit computers of yesteryear used bank-switching to address more RAM/ROM than they had address lines for; we can interface an 8-bit register at I/O address 0x0000 (easily decoded with a stack of Schottky diodes and a NOT gate) which can hold the remaining address bits, mapping the memory to the lower 8MB of physical memory.  We then hijack the 386’s MMU to map the 8MB chunks and use the page faults to switch memory banks.  (If we put the SRAM and ROM up in the top 1MB, this gives us ~7MB of memory-mapped I/O to play with.  A sketch of this bank-select register follows this list.)
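Here’s a rough sketch of how that bank-select register might look from the software side.  It is purely hypothetical: the port number comes from the plan above, everything else is illustrative, and it assumes a GCC-style compiler targeting the 386:

#include <stdint.h>

#define BANK_PORT 0x0000u  /* 8-bit latch, decoded with diodes + a NOT gate */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Select which 8MB bank of the SDRAM appears in low physical memory;
 * the page-fault handler would call this before retrying the access. */
static void select_bank(uint8_t bank)
{
    outb(BANK_PORT, bank);
}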

So, no show stoppers.  There’s an example circuit showing the interfacing of an ATMega8515 to a single SDRAM chip for driving a VGA interface, and some example code, with comments in German. Unfortunately you’d learn more German in an episode of Hogan’s Heroes than I know, but I can sort-of figure out the sequence used to read from and write to the SDRAM chip. Nothing looks scary there either.  This SDRAM tutorial seems to be a goldmine.

Thus, it looks like I’ve got enough bits to have a crack at it.  I can run the 386 from that 33MHz brick, which will give me a chip running at 16.5MHz.  Somewhere I’ve got the 40MHz brick from that motherboard laying around (I liberated it some time ago), but that can wait.

A first step would be to try interfacing the 386 chip to an AVR, feeding it instructions one step at a time and checking that it’s still alive.  Then, the next steps should become clear.

Oct 31, 2015

Well, it seems the updates to Microsoft’s latest aren’t going as its maker planned. A few people have asked me about my personal opinion of this OS, and I’ll admit, I have no direct experience with it.  I also haven’t had much contact with Windows 8 either.

That said, I do keep up with the news, and a few things do concern me.

The good news

It’s not all bad of course.  Windows 8 saw a big shrink in the footprint of a typical Windows install, and Windows 10 continues to be fairly lightweight.  The UI disaster from Windows 8 has been somewhat pared back to provide a more traditional desktop with a start menu that combines features from the start screen.

There are some limitations with the new start menu, but from what I understand, it behaves mostly like the one from Windows 7.  The tiled section still has some rough edges though, something that is likely to be addressed in future updates of Windows 10.

If this is all that had changed though, I’d be happily accepting it.  Sadly, this is not the case.

Rolling-release updates

Windows has, since day one, been on a long-term support release model.  That is, they bring out a release, then they support it for X years.  Windows XP was released in 2001 and was supported until last year, for example.  Windows Vista is still on extended support, and Windows 7 will enter extended support soon.

Now, in the Linux world, we’ve had both long-term support releases and rolling release distributions for years.  Most of the current Linux users know about it, and the distribution makers have had many years to get it right.  Ubuntu have been doing this since 2004, Debian since 1998 and Red Hat since 1994.  Rolling releases can be a bumpy ride if not managed correctly, which is why the long-term support releases exist.  The community has recognised the need, and meets it accordingly.

Ubuntu are even predictable with their releases.  They release on a schedule: anything not ready for release is pushed back to the next one.  They do a release every 6 months, in April and October, and every 2 years the April release is a long-term support release.  That is, 8.04, 10.04, 12.04 and 14.04 are all LTS releases.  These days the LTS releases get supported for about 5 years, the regular releases for about 9 months.

Debian releases are basically LTS, unless you run Debian Testing or Debian Unstable.  Then you’re running rolling-release.

Some distributions like Gentoo are always rolling-release.  I’ve been running Gentoo for more than 10 years now, and I find the rolling releases rarely give me problems.  We’ve had our hiccups, but these days, things are smooth.  Updating an older Gentoo box to the latest release used to be a fight, but these days, is comparatively painless.

It took most of that 10 years to get to that point, and this is where I worry about Microsoft forcing the vast majority of Windows users onto a rolling-release model, as they will be doing this for the first time.  As I understand it, there will be four branches:

  1. The Windows Insiders programme is like Debian Unstable.  The very latest features are pushed out to its members first.  They are effectively running a beta version of Windows, and can expect many updates, many breakages, and lots of things changing.  For some users this will be fine; for others it’ll be a headache.  There’s no option to skip updates, but you probably will have the option of resigning from the Windows Insiders programme.
  2. Home users basically get something like Debian Testing.  After updates have been thrashed out by the insiders, they get force-fed to the general public.  The Home edition of Windows 10 will not have an option to defer an update.
  3. Professional users get something more like the standard releases of Debian.  They’ll have the option of deferring an update for up to 30 days, so things can change less frequently.  It’s still rolling-release, but they can at least plan their updates to take place once a month, hopefully without disrupting too much.
  4. Enterprise users get something like the old-stable release of Debian.  Security updates, and they have the option to defer updates for a year.

Enterprise isn’t available unless you’re a large company buying lots of licenses.  If people must buy a Windows 10 machine, my recommendation would be to go for the Professional version; then you have some right of veto, as not all the updates are purely security-related: some will be changing the UI and adding/removing features.

I can see this being a major headache for anyone who has to support hardware or software on Windows 10, however, since it’s essentially the build number that becomes important: different release builds will behave differently.  Possibly different enough that things need much more testing and maintenance than vendors are used to.

Some are very poor at supporting Linux right now due to the rolling-release model of things like the Linux kernel, so I can see Windows 10 being a nightmare for some.

Privacy concerns

One of the big issues to be raised with Windows 10 is the inclusion of telemetry to “improve the user experience” and other features that are seen as an invasion of privacy.  Many things can be turned off, but it will take someone who’s familiar with the OS or good at researching the problem to turn them off.

Probably the biggest concern from my perspective as a network administrator is the WiFi Sense feature.  This is a feature in Windows 10 (and Windows Phone 8.1), turned on by default, that allows you to share WiFi passwords with your contacts.

If one of your contacts then comes into range of your AP, their device contacts Microsoft’s servers, which have the password on file and can provide it to that person’s device (hopefully in a secured manner).  The password is never shown to the user themselves, but I believe it’s only a matter of time before someone figures out how to retrieve a password from WiFi Sense.  (A rogue AP would probably do the trick.)

We have discussed this at work where we have two WiFi networks: one WPA2 enterprise one for staff, and a WPA2 Personal one for guests.  Since we cannot control whether the users have this feature turned on or not, or whether they might accidentally “share” the password with world + dog, we’re considering two options:

  1. Banning Windows 10 (and Windows Phone 8.1) devices from being used on our guest WiFi network.
  2. Implementing a cron job to regularly change the guest WiFi password.  (The Cisco AP we have can be hit with SSH; automating this shouldn’t be difficult.)

There are some nasty points in the end user license agreement too that seem to give Microsoft free rein to make copies of any of the data on the system.  They say personal information will be removed, but even with the best of intentions, it is likely that some personal information will get caught in the net cast by the telemetry software.

Forced “upgrades” to Windows 10

This is the bit about Windows 10 that really bugs me.  Okay, Microsoft is pushing a deal where they’ll provide it to you for free for a year.  Free upgrades, yaay!  But wait: how do you know if your hardware and software is compatible?  Maybe you’re not ready to jump on the bandwagon just yet, or maybe you’ve heard news about the privacy issues or rolling release updates and decided to hold back.

Many users of Windows 7, 8 and 8.1 are now being force-fed the new release, whether they asked for it or not.

Now the problem with this is that it completely ignores the fact that some people do not run an always-on Internet connection with a large quota.  I know people who only have a 3G connection, with a very small (1GB) quota.  Windows 10 weighs in at nearly 3GB, so for them, that’s 2GB worth of overuse charges just for the OS, never mind the web browsing, emailing and other things they actually bought their Internet connection for.

Microsoft employees have been outed for showing such contempt before.  It seems so many there are used to the idea of an Internet connection that is always there and has a big enough quota to be considered “unlimited” that they have forgotten that some parts of the world do not have such luxuries.  The computer and the Internet are just tools: we do not buy an Internet connection just for the sake of having one.

Stopping updates

There are a couple of tools that exist for managing this.  I have not tested any of them, and cannot vouch for their safety or reliability.

  • BlockWindows (github link) is a set of scripts that, when executed, uninstall and disable most of the Windows 10-related updates on Windows 7 and 8/8.1.
  • GWX Control Panel is a (proprietary?) tool for controlling the GWX process.

My recommendation is to keep good backups.  Find a tool that will do a raw partition back-up of your Windows partition, and keep your personal files on a separate partition.  Then, if Microsoft does come a-knocking, you can easily roll back.  Hopefully after the “free upgrade” offer has expired (about this time next year), they will cease and desist from this practice.

Sep 28, 2015

I’ve been doing a bit of thinking on my ride this morning.  A few weeks ago, a school student in the US decided to use his interest in electronics and his knowledge to make a digital clock.  Having gotten the circuit working, he decided to bring the device to school to show his physics teacher.

The physics teacher might not have been alarmed, but another teacher certainly was, and the student was frogmarched to the Principal’s office; the Principal then called the police.  It was a major uproar, and it shows just how paranoid a society (globally) we’ve become.

Back in 2001, I used to have a portable CD player which I’d listen to on my commutes to and from school.  It was a basic affair, that took 4 AA cells that were forever going flat.  I tried rechargeable cells, but wasn’t satisfied with their life either.  Having gotten fed up with that, I looked to alternatives.  The alternative I went for was a small 12V 7Ah SLA battery about the size of a house brick.

Yes, heavy, but I carried it in the backpack with my textbooks, and it worked well.  I could go a week on a single charge.  In addition, I could run not just a CD player, but any 12V device, including a small fan, which made me the envy of a lot of fellow students in the middle of summer.  (Our classrooms were not air conditioned.)  I still use a cigarette lighter extension lead/4-way adapter that I made back then to give me extra sockets on the bicycle.

If I tried it today, I half expect I’d be explaining this to the AFP.

It raises a really serious question about what our future generations are meant to do with their lives.  Yes, there’s clearly a danger in experimenting with these things.  That SLA battery, if it ruptured, could leak highly dangerous sulphuric acid.  If I charged or discharged it too fast (e.g. by shorting the terminals), the internal resistance would build up heat inside the cells, which would start boiling the water in the electrolyte; gas would build up, possibly causing the cell walls to fail.

But, I had contingency plans.  The battery was set up with fuses to cut power in the event of a short.  Cables were well insulated.  Terminals were protected.  It never caused an issue.  These days, I use LiFePO4s, which, while forgiving, also have their dangers.  I steer clear of LiPol cells since they are very volatile.

The point being, I had been experimenting with electronics from a very young age.  I also learned about computer programming from a very young age.  I learned about how they worked, and learned how to control them.  You could compare it to learning to ride a horse.

One [way] is to get on him and learn by actual practice how each motion and trick may be best met; the other is to sit on a fence and watch the beast a while, and then retire to the house and at leisure figure out the best way of overcoming his jumps and kicks.  The latter system is the safest, but the former, on the whole, turns out the larger proportion of good riders.

— Wilbur Wright in his speech “Some Aeronautical Experiments”, 18th September, 1901.
(source: David McCullough, “The Wright Brothers: The Dramatic Story-Behind-the-Story”)

I learned to ride a couple of “horses”.  One in particular, was the computer.  Understanding the electronics behind it greatly helped here.  I was already familiar with the concept of DC current by the time I hit university and I was well advanced in my understanding of how to control a computer.  What University specifically taught me was some discipline in how to structure code and the specifics of particular languages.  The bulk of my study was done long before I applied for any degrees.

There seems to be a thinking in today’s society that “task X is difficult, leave it to the professionals”.  There are some fields, where one would do well to heed that advice.  Anything involving gas ducting or mains electricity being two examples.

You can possibly get quite a bit of plumbing work done yourself, however some professional oversight is usually a good idea.  You have a right to DIY in most cases, but rights come with responsibilities, and one of those is taking responsibility if something goes wrong.  They go together.

(Extra) Low voltage, at low current levels, there’s very little you can actually do that would result in serious harm.  If you go about things carefully in a controlled manner, this experimentation can be a great vehicle for serious study in a chosen field.  Computers, unless you’re doing something really risky like flashing boot firmware, are not easily “bricked” and can be recovered.  Playing with a second-hand old desktop (not a production machine) or a cheap machine like the plethora of ARM-based and AVR-based single-board computers available today is not likely to result in life-threatening injury.

Banning experimentation in such fields is not going to serve our community in the long term.  This is a typical knee-jerk reaction when someone’s experimentation is seen to be doing harm, even if, as in the US student’s case, the experimentation is completely benign.  Following this road over time is only going to lead to a nation of cave-dwelling hermits that shun technology as black magic.

Technology is mankind’s genie, you cannot simply stuff it back in the bottle.  The genie here is quite willing to grant us many wishes, but unlike the ones of myths and legends, this one expects some effort to be done in return.  It is not simply going to vanish just because we’ve decided that it’s too dangerous.  We as individuals either need to study how the genie operates, or we pay someone else that has studied how it operates.  If everyone chooses to do the latter, who is left to do the former?

Sep 27, 2015

Well, lately I’ve been doing a bit of work hacking the firmware on the Rowetel SM1000 digital microphone.  For those who don’t know it, this is a hardware (microcontroller) implementation of the FreeDV digital voice mode: it’s a modem that plugs into the microphone/headphone ports of any SSB-capable transceiver and converts FreeDV modem tones to analogue voice.

I plan to set this unit of mine up on the bicycle, but there are a few nits I have:

  • There’s no time-out timer
  • The unit is half-duplex

With no time-out timer, I really need to be able to hear the tones coming from the radio to tell me it has timed out.  Others might find a VOX feature useful, and there’s active experimentation with the FreeDV 700B mode (the SM1000 currently only supports FreeDV 1600), which has been very promising to date.

Long story short, the unit needed a more capable UI, and importantly, it also needed to be able to remember settings across power cycles.  There’s no EEPROM chip on these things, and while the STM32F405VG has a pin for providing backup-battery power, there’s no battery or supercapacitor, so the SM1000 forgets everything on shut down.

ST do have an application note on their website on precisely this topic.  AN3969 (and its software sources) discuss a method for using a portion of the STM32’s flash for this task.  However, I found their “license” confusing.  So I decided to have a crack myself.  How hard can it be, right?

There are 5 things that a virtual EEPROM driver needs to bear in mind:

  • The flash is organised into sectors.
  • These sectors when erased contain nothing but ones.
  • We store data by programming zeros.
  • The only way to change a zero back to a one is to do an erase of the entire sector.
  • The sector may be erased a limited number of times.

So on this note, a virtual EEPROM should aim to do the following:

  • It should keep tabs on what parts of the sector are in use.  For simplicity, we’ll divide this into fixed-size blocks.
  • When a block of data is to be changed, if the change can’t be done by changing ones to zeros, a copy of the entire block should be written to a new location, and a flag set (by writing zeros) on the old block to mark it as obsolete.
  • When a sector is full of obsolete blocks, we may erase it.
  • We try to put off doing the erase until such time as the space is needed.

Step 1: making room

The first step is to make room for the flash variables.  They will be directly accessible in the same manner as variables in RAM, however from the application point of view, they will be constant.  In many microcontroller projects, there’ll be several regions of memory, defined by memory address.  This comes from the datasheet of your MCU.

An example, taken from the SM1000 firmware, prior to my hacking (stm32_flash.ld at r2389):

/* Specify the memory areas */
MEMORY
{
  FLASH (rx)      : ORIGIN = 0x08000000, LENGTH = 1024K
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

The MCU here is the STM32F405VG, which has 1MB of flash starting at address 0x08000000. This 1MB is divided into (in order):

  • Sectors 0…3: 16kB each, starting at 0x08000000
  • Sector 4: 64kB, starting at 0x08010000
  • Sectors 5 onwards: 128kB each, starting at 0x08020000

We need at least two sectors, as when one fills up, we will swap over to the other. Now it would have been nice if the arrangement were reversed, with the smaller sectors at the end of the device.

The Cortex M4 CPU is basically hard-wired to boot from address 0; the BOOT pins on the STM32F4 decide how that address gets mapped. The very first few words are the interrupt vector table, and it MUST be the thing the CPU sees first. Unless told to boot from system memory or external memory, address 0 is aliased to 0x08000000, i.e. flash sector 0. Thus if you are booting from internal flash, you have no choice: the vector table MUST reside in sector 0.

Normally code and interrupt vector table live together as one happy family. We could use a couple of 128k sectors, but 256k is rather a lot for just an EEPROM storing maybe 1kB of data tops. Two 16kB sectors is just dandy, in fact, we’ll throw in the third one for free since we’ve got plenty to go around.

However, the first one will have to be reserved for the interrupt vector table, which will have the space to itself.

So here’s what my new memory regions look like (stm32_flash.ld at r2390):

/* Specify the memory areas */
MEMORY
{
  /* ISR vectors *must* be placed here as they get mapped to address 0 */
  VECTOR (rx)     : ORIGIN = 0x08000000, LENGTH = 16K
  /* Virtual EEPROM area, we use the remaining 16kB blocks for this. */
  EEPROM (rx)     : ORIGIN = 0x08004000, LENGTH = 48K
  /* The rest of flash is used for program data */
  FLASH (rx)      : ORIGIN = 0x08010000, LENGTH = 960K
  /* Memory area */
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  /* Core Coupled Memory */
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

This is only half the story, we also need to create the section that will be emitted in the ELF binary:

  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector)) /* Startup code */
    . = ALIGN(4);
  } >FLASH

  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */
    *(.rodata)         /* .rodata sections (constants, strings, etc.) */
    *(.rodata*)        /* .rodata* sections (constants, strings, etc.) */
    *(.glue_7)         /* glue arm to thumb code */
    *(.glue_7t)        /* glue thumb to arm code */

    KEEP (*(.init))
    KEEP (*(.fini))

    . = ALIGN(4);
    _etext = .;        /* define a global symbol at end of code */
    _exit = .;
  } >FLASH…

There’s rather a lot here, and so I haven’t reproduced all of it, but this is the same file as before at revision 2389, but a little further down. You’ll note the .isr_vector is pointed at the region called FLASH which is most definitely NOT what we want. The image will not boot with the vectors down there. We need to change it to put the vectors in the VECTOR region.

Whilst we’re here, we’ll create a small region for the EEPROM.

  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector)) /* Startup code */
    . = ALIGN(4);
  } >VECTOR

  .eeprom :
  {
    . = ALIGN(4);
    *(.eeprom)         /* special section for persistent data */
    . = ALIGN(4);
  } >EEPROM

  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */

THAT’s better! Things will boot now. However, there is still a subtle problem that initially caught me out here. Sure, the shiny new .eeprom section is unpopulated, BUT the linker has helpfully filled it with zeros. We cannot program zeroes back into ones! Either we have to erase it in the program, or we tell the linker to fill it with ones for us. Thankfully, the latter is easy (stm32_flash.ld at 2395):

  .eeprom :
  {
    . = ALIGN(4);
    KEEP(*(.eeprom))   /* special section for persistent data */
    . = ALIGN(4);
  } >EEPROM = 0xff

Credit: Erich Styger

We have to do two things. One, is we need to tell it that we want the region filled with the pattern 0xff. Two, we need to make sure it gets filled with ones by telling the linker to write one as the very last byte. Otherwise, it’ll think, “Huh? There’s nothing here, I won’t bother!” and leave it as a string of zeros.

Step 2: Organising the space

Having made room, we now need to decide how to break this data up.  We know the following:

  • We have 3 sectors, each 16kB
  • The sectors have an endurance of 10000 program-erase cycles

Give some thought as to what data you’ll be storing.  This will decide how big to make the blocks.  If you’re storing only tiny bits of data, more blocks makes more sense.  If however you’ve got some fairly big lumps of data, you might want bigger blocks to reduce overheads.

I ended up dividing the sectors into 256-byte blocks.  I figured that was a nice round (binary sense) figure to work with.  At the moment, we have 16 bytes of configuration data, so I can do with a lot less, but I expect this to grow.  The blocks will need a header to tell you whether or not the block is being used.  Some checksumming is usually not a bad idea either, since that will clue you in to when the sector has worn out prematurely.  So some data in each block will be header data for our virtual EEPROM.

If we didn’t care about erase cycles, we could just make all blocks data blocks; however it’d be wise to track wear, and avoid erasing and attempting to use a depleted sector, so we need somewhere to record this.  256 bytes gives us enough space to stash an erase counter and a map of which blocks are in use within that sector.

So we’ll reserve the first block in the sector to act as this index for the entire sector.  This gives us enough room to have 16-bits worth of flags for each block stored in the index.  That gives us 63 blocks per sector for data use.

It’d be handy to be able to use this flash region for a few virtual EEPROMs, so we’ll allocate some space to give us a virtual ROM ID.  It is prudent to do some checksumming, and the STM32F4 has a CRC32 module, so in that goes, and we might choose to not use all of a block, so we should throw in a size field (8 bits, since the size can’t be bigger than 255).  If we pad this out a bit to give us a byte for reserved data, we get a header with the following structure:

        Bits 15…8       Bits 7…0
  +0    CRC32 Checksum (32 bits, spanning offsets +0…+3)
  +4    ROM ID          Block Index
  +6    Block Size      Reserved

So that subtracts 8 bytes from the 256 bytes, leaving us 248 for actual program data. If we want to store 320 bytes, we use two blocks, block index 0 stores bytes 0…247 and has a size of 248, and block index 1 stores bytes 248…319 and has a size of 72.
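In C, that block header might look something like the following.  This is a sketch only: the field names are mine, and the exact byte order within each 16-bit word is a guess the table above doesn’t pin down.

#include <stdint.h>

#define VROM_BLOCK_SZ 256

/* 8-byte header at the start of each 256-byte block. */
struct vrom_block_hdr {
    uint32_t crc32;    /* CRC32 of the block, computed with this field zeroed */
    uint8_t  rom_id;   /* which virtual ROM this block belongs to */
    uint8_t  idx;      /* block index within that ROM image */
    uint8_t  size;     /* payload bytes in use, up to 248 */
    uint8_t  reserved; /* spare; left erased (0xff) */
} __attribute__((packed));

#define VROM_DATA_SZ (VROM_BLOCK_SZ - sizeof(struct vrom_block_hdr)) /* 248 */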

I mentioned there being a sector header, it looks like this:

        Bits 15…8       Bits 7…0
  +0    Program Cycles Remaining (64 bits, spanning offsets +0…+7)
  +8    Block 0 flags
  +10   Block 1 flags
  +12   Block 2 flags
  …     …

No checksums here, because it’s constantly changing.  We can’t re-write a CRC without erasing the entire sector, we don’t want to do that unless we have to.  The flags for each block are currently allocated accordingly:

        Bits 15…1       Bit 0
  +0    Reserved        In use

When the sector is erased, all blocks show up as having all flags set as ones, so the flags are considered “inverted”.  When we come to use a block, we clear the “in use” bit to zero, leaving the rest as ones.  When a block is obsoleted, we mark its entire flags word as zeros.  We can clear other bits here as we need for accounting purposes.
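Carrying on the sketch from earlier, the sector header and the flag conventions might render as follows (again, names and exact field widths are my guesses from the tables above):

#include <stdint.h>

#define VROM_SECT_SZ         16384
#define VROM_BLOCKS_PER_SECT (VROM_SECT_SZ / VROM_BLOCK_SZ)  /* 64 */

/* Index block at the start of each sector. */
struct vrom_sect_hdr {
    uint64_t cycles_remaining;                /* erase budget left, +0…+7 */
    uint16_t flags[VROM_BLOCKS_PER_SECT - 1]; /* one word per data block */
};

/* Flag conventions (inverted: erased flash reads as all-ones). */
#define VROM_FLAGS_FREE     0xffff /* fresh from erase, available for use */
#define VROM_FLAG_IN_USE    0x0001 /* cleared to zero when block allocated */
#define VROM_FLAGS_OBSOLETE 0x0000 /* retired, awaiting a sector erase */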

Thus we have now a format for our flash sector header, and for our block headers.  We can move onto the algorithm.

Step 3: The Code

This is the implementation of the above ideas.  Our code needs to worry about 3 basic operations:

  • reading
  • writing
  • erasing

This is good enough if the size of a ROM image doesn’t change (normal case).  For flexibility, I made my code so that it works crudely like a file, you can seek to any point in the ROM image and start reading/writing, or you can blow the whole thing away.


It is bad taste to leave magic numbers everywhere, so constants should be used to represent some quantities:

  • VROM_SECT_SZ=16384:
    The virtual ROM sector size in bytes.  (Those watching Codec2 Subversion will note I cocked this one up at first.)
  • The number of sectors.
  • VROM_BLOCK_SZ=256:
    The size of a block.
  • VROM_START_ADDR=0x08004000:
    The address where the virtual ROM starts in Flash.
  • The base sector number where our ROM starts.
  • VROM_MAX_CYCLES=10000:
    Our maximum number of program-erase cycles.

Our programming environment may also define some, for example UINTx_MAX.

Derived constants

From the above, we can determine:

  • VROM_DATA_SZ = VROM_BLOCK_SZ – sizeof(block_header):
    The amount of data per block.
  • The number of blocks per sector, including the index block.
  • The number of application blocks per sector (i.e. total minus the index block).

CRC32 computation

I decided to use the STM32’s CRC module for this, which takes its data in 32-bit words.  There’s also the complexity of checking the contents of a structure that includes its own CRC.  I played around with Python’s crcmod module, but couldn’t find an arithmetic trick that would let the CRC field stay in place during the computation.

So I copy the entire block, headers and all to a temporary copy (on the stack), set the CRC field to zero in the header, then compute the CRC. Since I need to read it in 32-bit words, I pack 4 bytes into a word, big-endian style. In cases where I have less than 4 bytes, the least-significant bits are left at zero.
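As a sketch of that dance, carrying on from the structures above, with crc32_words() standing in for whatever wraps the hardware CRC unit (reset it, feed it words, read the result; it’s not a real API):

#include <stdint.h>
#include <string.h>

extern uint32_t crc32_words(const uint32_t *words, int count);

uint32_t vrom_block_crc(const uint8_t *block)
{
    uint8_t  tmp[VROM_BLOCK_SZ];
    uint32_t words[VROM_BLOCK_SZ / 4];
    int i;

    memcpy(tmp, block, sizeof(tmp));  /* temporary copy on the stack */
    memset(tmp, 0, sizeof(uint32_t)); /* zero out the stored CRC field */

    /* Pack 4 bytes per 32-bit word, big-endian style; a short tail
     * would be left-aligned with the least-significant bits zero. */
    for (i = 0; i < (int)(VROM_BLOCK_SZ / 4); i++)
        words[i] = ((uint32_t)tmp[4*i]   << 24)
                 | ((uint32_t)tmp[4*i+1] << 16)
                 | ((uint32_t)tmp[4*i+2] <<  8)
                 |  (uint32_t)tmp[4*i+3];

    return crc32_words(words, VROM_BLOCK_SZ / 4);
}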

Locating blocks

We identify each block in an image by the ROM ID and the block index.  We need to search for these when requested, as they can be located literally anywhere in flash.  There are probably cleverer ways to do this, but I chose the brute force method.  We cycle through each sector and block, see if the block is allocated (in the index), see if the checksum is correct, see if it belongs to the ROM we’re looking for, then look and see if it’s the right index.
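In the same hypothetical vocabulary as the sketches above, the brute-force search might look like this; vrom_block_addr() and vrom_block_in_use() stand in for the address arithmetic and the index-block lookup:

#include <stdint.h>
#include <stddef.h>

#define VROM_SECT_CNT 3 /* three 16kB sectors, per the constants above */

extern const struct vrom_block_hdr *vrom_block_addr(int sect, int blk);
extern int vrom_block_in_use(int sect, int blk);

const uint8_t *vrom_find(uint8_t rom, uint32_t idx, uint8_t *size)
{
    int s, b;

    for (s = 0; s < VROM_SECT_CNT; s++) {
        for (b = 1; b < VROM_BLOCKS_PER_SECT; b++) { /* block 0 = index */
            const struct vrom_block_hdr *h = vrom_block_addr(s, b);

            if (!vrom_block_in_use(s, b))
                continue; /* not allocated */
            if (vrom_block_crc((const uint8_t *)h) != h->crc32)
                continue; /* corrupt, or a worn-out sector */
            if (h->rom_id != rom || h->idx != idx)
                continue; /* some other ROM or block */

            *size = h->size;
            return (const uint8_t *)h + sizeof(*h); /* payload follows */
        }
    }
    return NULL;
}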

Reading data

To read from the above scheme, having been told a ROM ID (rom), a start offset and a size (the latter two being in bytes), and given a buffer we’ll call out, we first need to translate the start offset to a sector, block index and block offset.  This is simple integer division and modulus.

The first and last blocks of our read, we’ll probably only read part of.  The rest, we’ll read entire blocks in.  The block offset is only relevant for this first block.

So we start at the block we calculate to have the start of our data range.  If we can’t find it, or it’s too small, then we stop there, otherwise, we proceed to read out the data.  Until we run out of data to read, we increment the block index, try to locate the block, and if found, copy its data out.
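Put together, the read path comes out roughly like this (same caveats as the sketches above):

#include <stdint.h>
#include <string.h>

#define VROM_DATA_SZ 248    /* from the earlier block header sketch */

extern const uint8_t *vrom_find(uint8_t rom, uint32_t idx, uint8_t *size);

int vrom_read(uint8_t rom, uint32_t offset, uint8_t *out, uint32_t len)
{
    uint32_t idx  = offset / VROM_DATA_SZ;  /* first block of the range */
    uint32_t off  = offset % VROM_DATA_SZ;  /* offset within that block */
    uint32_t done = 0;

    while (done < len) {
        uint8_t size;
        const uint8_t *data = vrom_find(rom, idx, &size);
        uint32_t take;

        if (data == NULL || size <= off)
            break;                /* block missing, or too small: stop */

        take = size - off;
        if (take > len - done)
            take = len - done;
        memcpy(out + done, data + off, take);

        done += take;
        off = 0;                  /* later blocks are read from offset 0 */
        idx++;
    }
    return (int)done;             /* bytes actually copied out */
}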

Writing and Erasing

Writing is a similar affair.  We look for each block; if we find one, we overwrite it by copying the old data to a temporary buffer, copying our new data in over the top, then marking the old block as obsolete before writing the new one out with a new checksum.

The trickery is in invoking the wear levelling algorithm on an as-needed basis.  We mark a block obsolete by setting its header fields to zero, but when we run out of free blocks, we go looking for sectors that are full of obsolete blocks waiting to be erased.  When we encounter a sector that has been erased, we write a new header at the start and proceed to use its first data block.

In the case of erasing, we don’t bother writing anything out, we just mark the blocks as obsolete.
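And a sketch of the write path, in the same made-up vocabulary.  Here vrom_retire() zeroes a block’s header and flags, while vrom_commit() allocates a free block (erasing a spent sector first if it has to), builds the header complete with CRC, and programs the result:

#include <stdint.h>
#include <string.h>

#define VROM_DATA_SZ 248    /* from the earlier block header sketch */

extern const uint8_t *vrom_find(uint8_t rom, uint32_t idx, uint8_t *size);
extern void vrom_retire(const uint8_t *payload);
extern int vrom_commit(uint8_t rom, uint32_t idx,
                       const uint8_t *payload, uint8_t size);

/* Caller ensures off + len <= VROM_DATA_SZ. */
int vrom_write_block(uint8_t rom, uint32_t idx, uint32_t off,
                     const uint8_t *in, uint8_t len)
{
    uint8_t tmp[VROM_DATA_SZ];
    uint8_t size = 0;
    const uint8_t *old = vrom_find(rom, idx, &size);

    memset(tmp, 0xff, sizeof(tmp));   /* unused bytes stay erased */
    if (old != NULL)
        memcpy(tmp, old, size);       /* start from the old contents */
    memcpy(tmp + off, in, len);       /* merge the new bytes over the top */
    if (off + len > size)
        size = (uint8_t)(off + len);

    if (old != NULL)
        vrom_retire(old);             /* flag the old copy obsolete */

    return vrom_commit(rom, idx, tmp, size);
}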


The full C code is in the Codec2 Subversion repository.  For those who prefer Git, I have a git-svn mirror (yes, I really should move it off that domain).  The code is available under the Lesser GNU General Public License v2.1 and may be ported to run on any CPU you like, not just ST’s.

Sep 26, 2015

Well, here’s a little nit I have to pick with chip manufacturers. On this occasion, it’s with ST, but they all do it: Freescale, TI, Atmel…

I’m talking about the assumptions they make about who uses their site.

Yes, I work as a “systems engineer” (really, programmer and network administrator; my role is more IT than Engineering).  However, when I’m looking at chip designs and application notes, that is usually recreation, done in my own time.

This morning, I had occasion to ask ST a question about one of their application notes.  Specifically AN3969, which deals with emulating an EEPROM using the in-built flash on a STM32F4 microcontroller.  Their “license” states:


      The enclosed firmware and all the related documentation are not covered
      by a License Agreement, if you need such License you can contact your
      local STMicroelectronics office.


Hmm, not licensed, but under a heading called “license”. Does that mean it’s public domain? Probably not. Do I treat this like MIT/BSD license? I’m looking to embed this into LGPLed firmware that will be publicly distributed: I really need an answer to this.  So over to the ST website I trundle.

I did have an account, but couldn’t think of the password.  They’ve revamped their site and I also have a new email address, so I figure, time for a new account.  I click their register link, and get this form:

ST Website registration

Now, here’s where I have a gripe. Why do they always assume I am doing this for work purposes? This is something pretty much all the manufacturers do. The assumption is WRONG. My account on their website has absolutely nothing to do with my employer. I am doing this for recreation! Therefore, it should not mention them in any way.

Yet, they’re mandatory fields. I guess ST get a lot of employees of the “individual – not a company” company.

I filled out the form, got an email with a confirmation link which I clicked, and now comes something a lot of companies, not just chip makers, get wrong. Apart from the “wish it was two-factor” security question (you can tell my answer was bogus), they dictate some minimum requirements, but then enforce undisclosed maximum requirements on the password.

ST Website password

WTF? “Special” characters? You mean like printable-ASCII characters? Or did a vertical tab slip in there somehow?  Password security, done properly, should not care how long, or how complex, you choose to make your password, so long as it meets a minimum standard.  A maximum length in the order of 64 bytes or more might be reasonable, as might a restriction to what can be typed on a “standard” US-style keyboard layout.

In this case, the password had some punctuation characters.  Apparently these are “special”.  If they restrict them because of possible SQL injection, then I’m afraid ST, you are doing it wrong!  A base64 or hex encoded hash from something like bcrypt, PBKDF2 or the like should make such things impossible.

Obviously preventing abuse by preventing someone from using the dd-dump of a full-length Blu-ray movie as a password is perfectly acceptable, but once hashed, all passwords will be the same size and will contain no “special” characters that could upset middleware.

Sure, enforce a large maximum length (not 20 characters like eBay, but closer to 100) so that any reasonably long password won’t overflow a buffer.  Sure, enforce that some mixed character classes be used.  But don’t go telling people off for using a properly secure password!

Sep 12, 2015

Well, I just had a “fun” afternoon.  For the past few weeks, the free DNS provider I was using has been unresponsive.  I had sent numerous emails to the administrator of the site, but heard nothing.  Fearing the worst, I decided it was time to move.  I looked around, and found I could get a domain cheaply, so here I am.

I’d like to thank Tyler MacDonald for providing the service for the last 10 years.  It helped a great deal, and until recently, was a real great service.  I’d still recommend it to people if the site was up.

So, I put the order in on a Saturday, and the domain was brought online on Monday evening.  I slowly moved my Internet estates across to it, and so I had my old URLs redirecting to new ones, the old email address became an alias of the new one, moving mailing list subscriptions over, etc.  Most of the migration would take place this weekend, when I’d set things up proper.

One of the things I thought I’d tackle was DNSSEC.  There are a number of guides, and I followed this one.


Before doing anything, I installed dnssec-tools as well as the dependencies, bind-utils and bind. I had to edit some things in /etc/dnssec-tools/dnssec-tools.conf to adjust some paths on Gentoo, and to set preferred signature options (I opted for RSASHA512 signatures, 4096-bit key-signing keys and 2048-bit zone-signing keys).

Getting the zone file

I constructed a zone file from what I could extract with dig:

The following is a dump of more or less what I got. Obviously the nameservers were for my domain registrar initially and not the ones listed here.

$ dig any 
;; Truncated, retrying in TCP mode.

; <<>> DiG 9.10.2-P2 <<>> any
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60996
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 22, AUTHORITY: 0, ADDITIONAL: 10

; EDNS: version: 0, flags:; udp: 4096
;            IN      ANY

;; ANSWER SECTION:     86400   IN      SOA 2015091231 10800 3600 604800 3600     86400   IN      NS     86400   IN      NS     86400   IN      NS     86400   IN      NS     3600    IN      A     3600    IN      MX      10     3600    IN      TXT     "v=spf1 a ip6:2001:44b8:21ac:7000::/56 ip4: ~all"     3600    IN      AAAA    2001:44b8:21ac:7000::1

;; ADDITIONAL SECTION:       8439    IN      A       8439    IN      A       170395  IN      AAAA    2401:1400:1:1201:0:1:7853:1a5  3600    IN      A  3600    IN      AAAA    2001:44b8:21ac:7000::1 86400 IN    A 86400 IN    AAAA    2001:44b8:21ac:7000::1 3600   IN      A 3600   IN      AAAA    2001:44b8:21ac:7000::1

;; Query time: 3 msec
;; WHEN: Sat Sep 12 16:40:38 EST 2015
;; MSG SIZE  rcvd: 4715

I needed to translate this into a zone file. If there’s any secret sauce missing, now’s the time to add it. I wound up with a zone file that looked like this:

$TTL 3600
@	86400	IN	SOA (2015091231 10800 3600 604800 3600 )
@	86400   IN      NS
@	86400   IN      NS
@	86400   IN      NS
@	86400   IN      NS
@	3600	IN	MX	10
@	3600	IN	TXT	"v=spf1 a ip6:2001:44b8:21ac:7000::/56 ip4: ~all"
@	3600	IN	A
@	3600	IN	AAAA	2001:44b8:21ac:7000::1
atomos	3600	IN	A
atomos	3600	IN	AAAA	2001:44b8:21ac:7000::1
mail	3600	IN	A
mail	3600	IN	AAAA	2001:44b8:21ac:7000::1
ns	3600	IN	A
ns	3600	IN	AAAA	2001:44b8:21ac:7000::1
*	3600	IN	A
*	3600	IN	AAAA	2001:44b8:21ac:7000::1

Signing the zone

Next step is to create domain keys and sign the zone.

$ zonesigner -genkeys

This generates a heap of files. Apart from the keys themselves, two are important as far as your DNS server is concerned: the file of DS records, and the signed zone itself. The former you’ll need to give to your registrar; the latter is what your DNS server needs to serve up.

Updating DNS

I figured the safest bet was to add the domain records first, then come back and do the DS keys since there’s a warning that messing with those can break the domain. At this time I had Zuver (my registrar) hosting my DNS, so over I trundle to add a record to the zone, except I discover that there aren’t any options there to add the needed records.

Okay, maybe they’ll appear when I add the DS keys“, I think. Their DS key form looks like this:

Zuver's DS Key Data form

Zuver’s DS Key Data form for me looked like this:

     IN DS 12345 10 1 7AB4...
     IN DS 12345 10 2 DE02...

Turns out, the 12345 goes by a number of names, such as “key ID”, or in the Zuver interface, “key tag”.  So in they went.  The record literally is in the form:


The digest, if it has spaces, is to be entered without spaces.

Oops, I broke it!

So having added these keys, I note (as I thought might happen), the domain stopped working. I found I still couldn’t add the records, so I had to now move (quickly) my DNS over to another DNS server. One that permitted these kinds of records. I figured I’d do it myself, and get someone to act as a secondary.

First step was to take that file and throw it into the bind server’s data directory and point named.conf at it.  To make sure you can hook a slave to it, create an ACL rule that will match the IP addresses of your possible slaves, and add that to the allow-transfer option for the zone:

acl buddyns { ...; ...; };
acl stuartslan { ... };

zone "" IN {
        type master;
        file "pri/";
        allow-transfer { buddyns; localhost; stuartslan; };
        allow-query { any; };
        allow-update { localhost; stuartslan; };
        notify no;
};

Make sure that from another machine in your network, you can run dig +tcp axfr @${DNS_IP} ${DOMAIN} and get a full listing of your domain’s contents.

I really needed a slave DNS server and so went looking around, found one in BuddyNS. I then spent the next few hours arguing with bind as to whether it was authoritative for the domain or not. Long story short, make sure when you re-start bind, that you re-start ALL instances of it. In my case I found there was a rogue instance running with the old configuration.

BuddyNS was fairly simple to set up (once BIND worked). You basically sign up, pick out two of their DNS servers and submit those to your registrar as the authoritative servers for your domain. I ended up picking two DNS servers, one in the US and one in Adelaide. I also added in an alias to my host using my old domain.

Adding nameservers

Working again

After doing that, my domain worked again, and DNSSEC seemed to be working. There are a few tools you can use to test it.

Updating the zone later

If for whatever reason you wish to update the zone, you need to sign it again. In fact, you’ll need to sign it periodically as the signatures expire. To do this:

$ zonesigner

Note the lack of -genkeys.

My advice to people trying DNSSEC

Before proceeding, make sure you know how to set up a DNS server so you can pull yourself out of the crap if it comes your way. Setting this up with some registrars is a one-way street, once you’ve added keys, there’s no removing them or going back, you’re committed.

Once domain signing keys are submitted, the only way to make that domain work will be to publish the signed record sets (RRSIG records) in your domain data, and that will need a DNS server that can host them.

Aug 23, 2015

Something got me thinking tonight.  We were out visiting a friend of ours and it was decided we’d go out for dinner.  Nothing unusual there, and there were a few places we could have gone for a decent meal.

As it happens, we went to a bowls club for dinner.  I won’t mention which one.

Now, I’d admit that I do have a bit of a rebel streak in me.  Let’s face it, if nobody challenged the status quo, we’d still be in the trees; instead, someone decided they liked the caves better, and so modern man developed.

In my case, I’m not one to make a scene, but the more uptight the venue, the more uncomfortable I am being there.  If a place feels it necessary to employ a bouncer, or feels it necessary to place a big plaque out front listing rules in addition to what ought to be common sense, that starts to get the alarm bells ringing in my head.

Some rules are necessary, most of these are covered by the laws that maintain order on our streets.  In a club or restaurant, okay, you want to put some limits: someone turning up near-starkers is definitely not on.  Nobody would appreciate someone covered in grease or other muck leaving a trail throughout the place everywhere they go, nor should others be subjected to some T-shirt with text or imagery that is in any way “offensive” to the average person.

(I’ll ignore the quagmire of what people might consider offensive.  I’m sure someone would take exception to me wearing largely blank clothing.  I, for one, abhor branding or slogans on my clothing.)

Now, something that obstructs your ability to identify the said person, such as a full-face balaclava, burka (not sure how that’s spelled) or a full-face helmet: there are quite reasonable grounds for excluding those.

As for me, I never used to wear anything on my head until later in high school when I noted how much less distracted I was from overhead lighting.  I’m now so used to it, I consider myself partially undressed if I’m not wearing something.  Something just doesn’t feel right.  I don’t do it to obscure identity, if anything, it’d make me easier to identify.  (Coolie hats aren’t common in Brisbane, nor are spitfire or gatsby caps.)

It’s worth pointing out that the receptionist at this club not only had us sign in with full name and address, but also checked ID on entry.  So misbehaviour would be a pointless exercise: they already had our details, and CCTV would have shown us walking through the door.

The bit that got me with this club, was in amongst the lengthy list of things they didn’t permit, they listed “mens headwear”.  It seemed a sexist policy to me.  Apparently women’s headwear was fine, and indeed, I did see some teens wearing baseball caps as I left, no one seemed to challenge them.

In “western society”, many moons ago, it was considered “rude” for a man to wear a hat indoors.  I do not know what the rationale behind that was.  Women were exempt then from the rule, as their headwear was generally more elaborate and required greater preparation and care to put on and take off.

I have no idea whether a man would be exempt if his headgear was as difficult to remove in that time.  I certainly consider it a nuisance having to carry something that could otherwise just sit on my head and generally stay out of my way.

Today, people of both sexes, if they have anything on their head at all, it’s mostly of a unisex nature, and generally not complicated to put on or remove.  So the reasoning behind the exemption would appear to be largely moot now.

Then there’s the gender equality movement to consider.  Women for years, fought to have the same rights as men.  Today, there’s some inequality, but the general consensus seems to be that things have improved in that regard.

This said, if doing something is not acceptable for men, I don’t see how being female makes it better or worse.

Perhaps then, in the interests of equal rights, we should reconsider some of our old customs and their exemptions in the context of modern life.

Jun 14, 2015

What is it? Well, the term “quaxing” originated with Auckland councillor Dick Quax, who stated:

@lukechristensen @BenRoss_AKL @Brycepearce no one in the entire western world uses the train for their shopping trips

@Brycepearce @lukechristensen @BenRoss_AKL the very idea that people lug home their weekly supermarket shopping on the train is fanciful

@Brycepearce @lukechristensen @BenRoss_AKL sounds like that would make great Tui ad. “I ride my bike to get my weekly shopping – yeah right”

While I’ve never described it as “quaxing”, and likely will not describe it that way, this is how I’ve been shopping for the past 5 years.

This is my shopping trolley, literally… it gets unhitched and taken into the shop with me.

Shopping on the bike, “quaxing” to the twitter-crowd.

Now true, strictly speaking, Australia is not the “western world”, geographically. Neither is NZ; any further east and you hit the International Date Line. However this is a “westernised” country, as is NZ. Isn’t it funny how people assume cycling is merely a third-world phenomenon?