Sep 27, 2015

Well, lately I’ve been doing a bit of work hacking the firmware on the Rowetel SM1000 digital microphone.  For those who don’t know it, this is a hardware (microcontroller) implementation of the FreeDV digital voice mode: it’s a modem that plugs into the microphone/headphone ports of any SSB-capable transceiver and converts FreeDV modem tones to analogue voice.

I plan to set this unit of mine up on the bicycle, but there are a few nits that I had.

  • There’s no time-out timer
  • The unit is half-duplex

With no time-out timer, I really need to hear the tones coming from the radio to tell me the radio has timed out.  Others might find a VOX feature useful, and there’s active experimentation with the FreeDV 700B mode (the SM1000 currently only supports FreeDV 1600), which has been very promising to date.

Long story short, the unit needed a more capable UI, and importantly, it also needed to be able to remember settings across power cycles.  There’s no EEPROM chip on these things, and while the STM32F405VG has a pin for providing backup-battery power, there’s no battery or supercapacitor, so the SM1000 forgets everything on shut down.

ST do have an application note on their website on precisely this topic.  AN3969 (and its software sources) discuss a method for using a portion of the STM32’s flash for this task.  However, I found their “license” confusing.  So I decided to have a crack myself.  How hard can it be, right?

There are five things that a virtual EEPROM driver needs to bear in mind:

  • The flash is organised into sectors.
  • These sectors when erased contain nothing but ones.
  • We store data by programming zeros.
  • The only way to change a zero back to a one is to do an erase of the entire sector.
  • The sector may be erased a limited number of times.

So on this note, a virtual EEPROM should aim to do the following:

  • It should keep tabs on what parts of the sector are in use.  For simplicity, we’ll divide this into fixed-size blocks.
  • When a block of data is to be changed, if the change can’t be done by changing ones to zeros, a copy of the entire block should be written to a new location, and a flag set (by writing zeros) on the old block to mark it as obsolete.
  • When a sector is full of obsolete blocks, we may erase it.
  • We try to put off doing the erase until such time as the space is needed.
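To make those rules concrete, here is a toy model of NOR-flash behaviour (my own sketch, not the actual SM1000 sources): programming can only turn ones into zeros, and only a whole-sector erase brings the ones back.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Toy model of a NOR-flash sector.
struct FlashSector {
    std::vector<uint8_t> bytes;
    explicit FlashSector(std::size_t sz) : bytes(sz, 0xff) {}  // erased = all ones

    // Programming ANDs the new value in: a 0 bit never comes back.
    void program(std::size_t off, uint8_t value) { bytes[off] &= value; }

    // The only way back to 0xff is a whole-sector erase.
    void erase() { for (auto &b : bytes) b = 0xff; }
};

// An update can be done in place only if it never turns a 0 into a 1;
// otherwise the block must be rewritten elsewhere and the old copy
// flagged obsolete.
inline bool can_update_in_place(uint8_t oldv, uint8_t newv) {
    return (oldv & newv) == newv;
}
```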

Step 1: making room

The first step is to make room for the flash variables.  They will be directly accessible in the same manner as variables in RAM, however from the application point of view, they will be constant.  In many microcontroller projects, there’ll be several regions of memory, defined by memory address.  This comes from the datasheet of your MCU.

An example, taken from the SM1000 firmware, prior to my hacking (stm32_flash.ld at r2389):

/* Specify the memory areas */
MEMORY
{
  FLASH (rx)      : ORIGIN = 0x08000000, LENGTH = 1024K
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

The MCU here is the STM32F405VG, which has 1MB of flash starting at address 0x08000000. This 1MB is divided into (in order):

  • Sectors 0…3: 16kB each, starting at 0x08000000
  • Sector 4: 64kB, starting at 0x08010000
  • Sector 5 onwards: 128kB each, starting at 0x08020000

We need at least two sectors, as when one fills up, we will swap over to the other. Now it would have been nice if the arrangement were reversed, with the smaller sectors at the end of the device.

The Cortex-M4 CPU is basically hard-wired to boot from address 0; the BOOT pins on the STM32F4 decide how that address gets mapped.  The very first few words there are the interrupt vector table, and it MUST be the thing the CPU sees first.  Unless told to boot from external memory or system memory, address 0 is aliased to 0x08000000, i.e. flash sector 0.  Thus if you are booting from internal flash, you have no choice: the vector table MUST reside in sector 0.

Normally code and interrupt vector table live together as one happy family. We could use a couple of 128k sectors, but 256k is rather a lot for just an EEPROM storing maybe 1kB of data tops. Two 16kB sectors is just dandy, in fact, we’ll throw in the third one for free since we’ve got plenty to go around.

However, the first of those sectors will have to be reserved for the interrupt vector table, which gets that space to itself.

So here’s what my new memory regions look like (stm32_flash.ld at r2390):

/* Specify the memory areas */
MEMORY
{
  /* ISR vectors *must* be placed here as they get mapped to address 0 */
  VECTOR (rx)     : ORIGIN = 0x08000000, LENGTH = 16K
  /* Virtual EEPROM area, we use the remaining 16kB blocks for this. */
  EEPROM (rx)     : ORIGIN = 0x08004000, LENGTH = 48K
  /* The rest of flash is used for program data */
  FLASH (rx)      : ORIGIN = 0x08010000, LENGTH = 960K
  /* Memory area */
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  /* Core Coupled Memory */
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

This is only half the story, we also need to create the section that will be emitted in the ELF binary:

  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >FLASH

  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */
    *(.rodata)         /* .rodata sections (constants, strings, etc.) */
    *(.rodata*)        /* .rodata* sections (constants, strings, etc.) */
    *(.glue_7)         /* glue arm to thumb code */
    *(.glue_7t)        /* glue thumb to arm code */

    KEEP (*(.init))
    KEEP (*(.fini))

    . = ALIGN(4);
    _etext = .;        /* define a global symbol at end of code */
    _exit = .;
  } >FLASH…

There’s rather a lot here, and so I haven’t reproduced all of it, but this is the same file as before at revision 2389, but a little further down. You’ll note the .isr_vector is pointed at the region called FLASH which is most definitely NOT what we want. The image will not boot with the vectors down there. We need to change it to put the vectors in the VECTOR region.

Whilst we’re here, we’ll create a small region for the EEPROM.

  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >VECTOR

  .eeprom :
  {
    . = ALIGN(4);
    *(.eeprom)         /* special section for persistent data */
    . = ALIGN(4);
  } >EEPROM

  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */

THAT’s better! Things will boot now. However, there is still a subtle problem that initially caught me out here. Sure, the shiny new .eeprom section is unpopulated, BUT the linker has helpfully filled it with zeros. We cannot program zeros back into ones! Either we have to erase it in the program, or we tell the linker to fill it with ones for us. Thankfully, the latter is easy (stm32_flash.ld at r2395):

  .eeprom :
  {
    . = ALIGN(4);
    KEEP(*(.eeprom))   /* special section for persistent data */
    . = ALIGN(4);
  } >EEPROM = 0xff

Credit: Erich Styger

We have to do two things.  One is to tell the linker that we want the region filled with the pattern 0xff.  Two is to make sure the section actually gets emitted, by having the linker write a byte at its very end.  Otherwise it’ll think, “Huh? There’s nothing here, I won’t bother!” and leave it as a string of zeros.

Step 2: Organising the space

Having made room, we now need to decide how to break this data up.  We know the following:

  • We have 3 sectors, each 16kB
  • The sectors have an endurance of 10000 program-erase cycles

Give some thought as to what data you’ll be storing.  This will decide how big to make the blocks.  If you’re storing only tiny bits of data, more blocks makes more sense.  If however you’ve got some fairly big lumps of data, you might want bigger blocks to reduce overheads.

I ended up dividing the sectors into 256-byte blocks.  I figured that was a nice round (binary sense) figure to work with.  At the moment, we have 16 bytes of configuration data, so I can do with a lot less, but I expect this to grow.  The blocks will need a header to tell you whether or not the block is being used.  Some checksumming is usually not a bad idea either, since that will clue you in to when the sector has worn out prematurely.  So some data in each block will be header data for our virtual EEPROM.

If we didn’t care about erase cycles, this would be fine: we could just make all blocks data blocks.  However, it’d be wise to track wear, so we can avoid erasing and attempting to use a depleted sector, and that means we need somewhere to record the count.  256 bytes gives us enough space to stash an erase counter and a map of which blocks are in use within that sector.

So we’ll reserve the first block in the sector to act as this index for the entire sector.  This gives us enough room to have 16-bits worth of flags for each block stored in the index.  That gives us 63 blocks per sector for data use.

It’d be handy to be able to use this flash region for a few virtual EEPROMs, so we’ll allocate some space to give us a virtual ROM ID.  It is prudent to do some checksumming, and the STM32F4 has a CRC32 module, so in that goes, and we might choose to not use all of a block, so we should throw in a size field (8 bits, since the size can’t be bigger than 255).  If we pad this out a bit to give us a byte for reserved data, we get a header with the following structure:

Offset  Bits 15…8       Bits 7…0
+0      CRC32 Checksum (spans +0…+3)
+4      ROM ID          Block Index
+6      Block Size      Reserved

So that subtracts 8 bytes from the 256 bytes, leaving us 248 for actual program data. If we want to store 320 bytes, we use two blocks, block index 0 stores bytes 0…247 and has a size of 248, and block index 1 stores bytes 248…319 and has a size of 72.
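A sketch of that 8-byte header as a C-style struct, together with the two-block split arithmetic from the example. The field and function names are mine; the layout follows the table above:

```cpp
#include <cstdint>

// 8-byte block header: CRC32, ROM ID, block index, size, reserved.
struct BlockHeader {
    uint32_t crc32;      // +0: checksum over the block, CRC field zeroed
    uint8_t  rom_id;     // +4: which virtual ROM this block belongs to
    uint8_t  block_idx;  // +5: index of this block within the ROM image
    uint8_t  size;       // +6: payload bytes actually used (<= 248)
    uint8_t  reserved;   // +7: padding
};
static_assert(sizeof(BlockHeader) == 8, "header must be 8 bytes");

constexpr unsigned BLOCK_SZ = 256;
constexpr unsigned DATA_SZ  = BLOCK_SZ - sizeof(BlockHeader);  // 248 payload bytes

// How many blocks an image of `len` bytes needs (round up).
constexpr unsigned blocks_for(unsigned len) {
    return (len + DATA_SZ - 1) / DATA_SZ;
}

// Payload size recorded in the final block of a `len`-byte image.
constexpr unsigned last_block_size(unsigned len) {
    return len - (blocks_for(len) - 1) * DATA_SZ;
}
```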

I mentioned there being a sector header, it looks like this:

Offset  Bits 15…0
+0      Program Cycles Remaining (spans +0…+7)
+8      Block 0 flags
+10     Block 1 flags
+12     Block 2 flags
…

No checksums here, because it’s constantly changing.  We can’t re-write a CRC without erasing the entire sector, we don’t want to do that unless we have to.  The flags for each block are currently allocated accordingly:

Bits 15…1       Bit 0
Reserved        In use

When the sector is erased, all blocks show up with all their flags set to ones, so the flags are considered “inverted”.  When we come to use a block, we clear the “in use” bit to zero, leaving the rest as ones.  When we obsolete a block, we write its entire flags field as zeros.  We can clear other bits here as we need for accounting purposes.
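The flag lifecycle can be sketched as bit operations on one 16-bit flag word. This is my own illustration of the convention above; the constant and function names are mine, not the driver's:

```cpp
#include <cstdint>

// Inverted-flag convention: erased flash reads as all ones.
constexpr uint16_t FLAGS_ERASED   = 0xffff;  // fresh from erase
constexpr uint16_t FLAG_IN_USE    = 1u << 0; // cleared (programmed to 0) when taken
constexpr uint16_t FLAGS_OBSOLETE = 0x0000;  // whole word zeroed on retirement

// Claim a block: program just the in-use bit to zero.
constexpr uint16_t claim(uint16_t f)  { return f & ~FLAG_IN_USE; }
// Retire a block: program every bit to zero.
constexpr uint16_t retire(uint16_t)   { return FLAGS_OBSOLETE; }

constexpr bool is_free(uint16_t f)     { return f == FLAGS_ERASED; }
constexpr bool is_in_use(uint16_t f)   { return !(f & FLAG_IN_USE) && f != FLAGS_OBSOLETE; }
constexpr bool is_obsolete(uint16_t f) { return f == FLAGS_OBSOLETE; }
```

Every transition here only turns ones into zeros, so each one is a plain flash program operation, with no erase needed.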

Thus we have now a format for our flash sector header, and for our block headers.  We can move onto the algorithm.

Step 3: The Code

This is the implementation of the above ideas.  Our code needs to worry about 3 basic operations:

  • reading
  • writing
  • erasing

This is good enough if the size of a ROM image doesn’t change (normal case).  For flexibility, I made my code so that it works crudely like a file, you can seek to any point in the ROM image and start reading/writing, or you can blow the whole thing away.


It is bad taste to leave magic numbers everywhere, so constants should be used to represent some quantities:

  • VROM_SECT_SZ=16384:
    The virtual ROM sector size in bytes.  (Those watching Codec2 Subversion will note I cocked this one up at first.)
  • VROM_SECT_CNT=3:
    The number of sectors.
  • VROM_BLOCK_SZ=256:
    The size of a block.
  • VROM_START_ADDR=0x08004000:
    The address where the virtual ROM starts in Flash.
  • VROM_START_SECT=1:
    The base sector number where our ROM starts.
  • VROM_MAX_CYCLES=10000:
    Our maximum number of program-erase cycles.

Our programming environment may also define some, for example UINTx_MAX.

Derived constants

From the above, we can determine:

  • VROM_DATA_SZ = VROM_BLOCK_SZ - sizeof(block_header):
    The amount of data per block.
  • VROM_SECT_SZ / VROM_BLOCK_SZ:
    The number of blocks per sector, including the index block.
  • (VROM_SECT_SZ / VROM_BLOCK_SZ) - 1:
    The number of application blocks per sector (i.e. total minus the index block).
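These derived figures are easy to sanity-check at compile time. A sketch follows; the derived constant names here are mine, not necessarily those in the Codec2 sources:

```cpp
#include <cstdint>

// Constants from the text.
constexpr uint32_t VROM_SECT_SZ  = 16384;
constexpr uint32_t VROM_SECT_CNT = 3;
constexpr uint32_t VROM_BLOCK_SZ = 256;
constexpr uint32_t HEADER_SZ     = 8;   // CRC32 + ROM ID + block index + size + reserved

// Derived figures.
constexpr uint32_t VROM_DATA_SZ    = VROM_BLOCK_SZ - HEADER_SZ;      // payload per block
constexpr uint32_t SECT_BLOCKS     = VROM_SECT_SZ / VROM_BLOCK_SZ;   // blocks/sector incl. index
constexpr uint32_t SECT_APP_BLOCKS = SECT_BLOCKS - 1;                // minus the index block
constexpr uint32_t CAPACITY        = VROM_SECT_CNT * SECT_APP_BLOCKS * VROM_DATA_SZ;

static_assert(VROM_DATA_SZ == 248, "248 payload bytes per block");
static_assert(SECT_BLOCKS == 64, "64 blocks per 16kB sector");
static_assert(SECT_APP_BLOCKS == 63, "63 data blocks per sector");
```

So the three 16kB sectors give a worst-case usable capacity of 3 × 63 × 248 = 46872 bytes, far more than the 16 bytes of configuration currently stored.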

CRC32 computation

I decided to use the STM32’s CRC module for this, which takes its data in 32-bit words.  There’s also the complexity of checking the contents of a structure that includes its own CRC.  I played around with Python’s crcmod module, but couldn’t find any arithmetic that would let the CRC field stay in place during the computation.

So I copy the entire block, headers and all to a temporary copy (on the stack), set the CRC field to zero in the header, then compute the CRC. Since I need to read it in 32-bit words, I pack 4 bytes into a word, big-endian style. In cases where I have less than 4 bytes, the least-significant bits are left at zero.
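Here is a software model of that checksum step; it's my own sketch (the real driver uses the hardware CRC unit), but it follows the same recipe: copy the block, zero the CRC field, then feed big-endian-packed 32-bit words to a bitwise CRC32 using the STM32 peripheral's polynomial (0x04C11DB7) and initial value (0xFFFFFFFF), with no reflection or final XOR:

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>

// One 32-bit word through the CRC, MSB first, polynomial 0x04C11DB7.
static uint32_t crc32_word(uint32_t crc, uint32_t word) {
    crc ^= word;
    for (int i = 0; i < 32; i++)
        crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : (crc << 1);
    return crc;
}

// Pack bytes big-endian into words; a short tail leaves the
// least-significant bits at zero, as described in the text.
static uint32_t crc32_block(const uint8_t *data, size_t len) {
    uint32_t crc = 0xffffffffu;
    for (size_t i = 0; i < len; i += 4) {
        uint32_t w = 0;
        for (size_t j = 0; j < 4 && i + j < len; j++)
            w |= uint32_t(data[i + j]) << (24 - 8 * j);
        crc = crc32_word(crc, w);
    }
    return crc;
}

// The CRC occupies the first 4 bytes of the block header, so make a
// temporary copy, zero that field, then hash the copy.
static uint32_t block_crc(const uint8_t *block, size_t len) {
    uint8_t tmp[256];              // temporary copy on the stack (len <= 256)
    memcpy(tmp, block, len);
    memset(tmp, 0, 4);             // zero the CRC field before hashing
    return crc32_block(tmp, len);
}
```

Because the CRC field is zeroed before hashing, the checksum you compute before storing it matches the checksum you recompute afterwards, which is exactly the property needed to verify a block in place.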

Locating blocks

We identify each block in an image by the ROM ID and the block index.  We need to search for these when requested, as they can be located literally anywhere in flash.  There are probably cleverer ways to do this, but I chose the brute force method.  We cycle through each sector and block, see if the block is allocated (in the index), see if the checksum is correct, see if it belongs to the ROM we’re looking for, then look and see if it’s the right index.
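The brute-force search can be sketched over an in-memory model like this; the types and names are mine, for illustration only:

```cpp
#include <cstdint>
#include <vector>

// Simplified view of one block's bookkeeping, as seen by the search.
struct Block {
    bool    allocated;  // per the sector index flags
    bool    crc_ok;     // header checksum verified
    uint8_t rom_id;     // which virtual ROM image it belongs to
    uint8_t block_idx;  // index of the block within that image
};

// Scan every block of every sector until we find an allocated,
// checksum-valid block matching (rom, idx); a block can live
// literally anywhere in the flash region.
static const Block *find_block(const std::vector<std::vector<Block>> &sectors,
                               uint8_t rom, uint8_t idx) {
    for (const auto &sector : sectors)
        for (const auto &b : sector)
            if (b.allocated && b.crc_ok && b.rom_id == rom && b.block_idx == idx)
                return &b;
    return nullptr;  // not found anywhere in flash
}
```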

Reading data

To read from the above scheme, having been told a ROM ID (rom), a start offset and a size, the latter two being in bytes, and given a buffer we’ll call out, we first need to translate the start offset to a block index and block offset.  This is simple integer division and modulus.

The first and last blocks of our read, we’ll probably only read part of.  The rest, we’ll read entire blocks in.  The block offset is only relevant for this first block.

So we start at the block we calculate to have the start of our data range.  If we can’t find it, or it’s too small, then we stop there, otherwise, we proceed to read out the data.  Until we run out of data to read, we increment the block index, try to locate the block, and if found, copy its data out.
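The offset arithmetic at the start of that process looks like this, assuming the 248-byte payload figure from earlier (the names are mine):

```cpp
#include <cstdint>

constexpr uint32_t DATA_SZ = 248;  // payload bytes per 256-byte block

struct Pos {
    uint32_t block;   // block index within the ROM image
    uint32_t offset;  // byte offset within that block's payload
};

// Plain integer division and modulus, as the text says.
constexpr Pos locate(uint32_t byte_offset) {
    return { byte_offset / DATA_SZ, byte_offset % DATA_SZ };
}
```

For instance, a read starting at byte 300 of the image begins 52 bytes into block 1, and the block offset only matters for that first block.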

Writing and Erasing

Writing is a similar affair.  We look for each block, if we find one, we overwrite it by copying the old data to a temporary buffer, copy our new data in over the top then mark the old block as obsolete before writing the new one out with a new checksum.

Trickery is in invoking the wear levelling algorithm on an as-needed basis.  We mark a block obsolete by setting its header fields to zero, but when we run out of free blocks, then we go looking for sectors that are full of obsolete blocks waiting to be erased.  When we encounter a sector that has been erased, we write a new header at the start and proceed to use its first data block.

In the case of erasing, we don’t bother writing anything out, we just mark the blocks as obsolete.
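The lazy-erase policy above can be sketched like so. This is a toy model of the idea, not the SM1000 driver: only when no free block exists anywhere do we go hunting for a fully-obsolete sector to erase, spending one of its program-erase cycles:

```cpp
#include <cstdint>
#include <vector>

enum class BlockState { Free, InUse, Obsolete };

struct Sector {
    std::vector<BlockState> blocks;
    uint32_t cycles_left;  // program-erase cycles remaining
};

// Returns true if a sector was reclaimed (erased back to Free).
static bool reclaim_if_needed(std::vector<Sector> &sectors) {
    // Any free block anywhere? Then put off the erase.
    for (auto &s : sectors)
        for (auto b : s.blocks)
            if (b == BlockState::Free) return false;

    // Otherwise erase the first fully-obsolete sector with life left.
    for (auto &s : sectors) {
        bool all_obsolete = true;
        for (auto b : s.blocks)
            if (b != BlockState::Obsolete) { all_obsolete = false; break; }
        if (all_obsolete && s.cycles_left > 0) {
            for (auto &b : s.blocks) b = BlockState::Free;
            s.cycles_left--;
            return true;
        }
    }
    return false;  // nothing reclaimable: flash is full or worn out
}
```

Deferring the erase this way means a sector full of obsolete blocks costs nothing until the space is actually wanted, and depleted sectors are simply never chosen.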


The full C code is in the Codec2 Subversion repository.  For those who prefer Git, I have a git-svn mirror (yes, I really should move it off that domain).  The code is available under the Lesser GNU General Public License v2.1 and may be ported to run on any CPU you like, not just ST’s.

Sep 12, 2015

Well, I just had a “fun” afternoon.  For the past few weeks, the free DNS provider I was using has been unresponsive.  I had sent numerous emails to the administrator of the site, but heard nothing.  Fearing the worst, I decided it was time to move.  I looked around, found I could get a domain of my own cheaply, so here I am.

I’d like to thank Tyler MacDonald for providing the service for the last 10 years.  It helped a great deal and, until recently, was a really great service.  I’d still recommend it to people if the site were up.

So, I put the order in on a Saturday, and the domain was brought online on the Monday evening.  I slowly moved my Internet estate across to it: I had my old URLs redirecting to new ones, the old email address became an alias of the new one, I moved mailing list subscriptions over, etc.  Most of the migration would take place this weekend, when I’d set things up properly.

One of the things I thought I’d tackle was DNSSEC.  There are a number of guides, and I followed this one.


Before doing anything, I installed dnssec-tools as well as the dependencies, bind-utils and bind. I had to edit some things in /etc/dnssec-tools/dnssec-tools.conf to adjust some paths on Gentoo, and to set preferred signature options (I opted for RSASHA512 signatures, 4096-bit key-signing keys and 2048-bit zone-signing keys).

Getting the zone file

I constructed a zone file from what I could extract with dig:

The following is a dump of more or less what I got. Obviously the nameservers were for my domain registrar initially and not the ones listed here.

$ dig any 
;; Truncated, retrying in TCP mode.

; <<>> DiG 9.10.2-P2 <<>> any
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60996
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 22, AUTHORITY: 0, ADDITIONAL: 10

; EDNS: version: 0, flags:; udp: 4096
;            IN      ANY

;; ANSWER SECTION:
	86400	IN	SOA	2015091231 10800 3600 604800 3600
	86400	IN	NS
	86400	IN	NS
	86400	IN	NS
	86400	IN	NS
	3600	IN	A
	3600	IN	MX	10
	3600	IN	TXT	"v=spf1 a ip6:2001:44b8:21ac:7000::/56 ip4: ~all"
	3600	IN	AAAA	2001:44b8:21ac:7000::1

;; ADDITIONAL SECTION:
	8439	IN	A
	8439	IN	A
	170395	IN	AAAA	2401:1400:1:1201:0:1:7853:1a5
	3600	IN	A
	3600	IN	AAAA	2001:44b8:21ac:7000::1
	86400	IN	A
	86400	IN	AAAA	2001:44b8:21ac:7000::1
	3600	IN	A
	3600	IN	AAAA	2001:44b8:21ac:7000::1

;; Query time: 3 msec
;; WHEN: Sat Sep 12 16:40:38 EST 2015
;; MSG SIZE  rcvd: 4715

I needed to translate this into a zone file. If there’s any secret sauce missing, now’s the time to add it. I wound up with a zone file that looked like this:

$TTL 3600
@	86400	IN	SOA (2015091231 10800 3600 604800 3600 )
@	86400   IN      NS
@	86400   IN      NS
@	86400   IN      NS
@	86400   IN      NS
@	3600	IN	MX	10
@	3600	IN	TXT	"v=spf1 a ip6:2001:44b8:21ac:7000::/56 ip4: ~all"
@	3600	IN	A
@	3600	IN	AAAA	2001:44b8:21ac:7000::1
atomos	3600	IN	A
atomos	3600	IN	AAAA	2001:44b8:21ac:7000::1
mail	3600	IN	A
mail	3600	IN	AAAA	2001:44b8:21ac:7000::1
ns	3600	IN	A
ns	3600	IN	AAAA	2001:44b8:21ac:7000::1
*	3600	IN	A
*	3600	IN	AAAA	2001:44b8:21ac:7000::1

Signing the zone

Next step, is to create domain keys and sign it.

$ zonesigner -genkeys

This generates a heap of files. Apart from the keys themselves, two are important as far as your DNS server is concerned: the file containing the DS records, and the signed zone file. The former holds the DS keys that you’ll need to give to your registrar; the latter is what your DNS server needs to serve up.

Updating DNS

I figured the safest bet was to add the domain records first, then come back and do the DS keys since there’s a warning that messing with those can break the domain. At this time I had Zuver (my registrar) hosting my DNS, so over I trundle to add a record to the zone, except I discover that there aren’t any options there to add the needed records.

“Okay, maybe they’ll appear when I add the DS keys,” I think. Their DS key form looks like this:

Zuver's DS Key Data form

Zuver’s DS Key Data form for me looked like this:

    IN DS 12345 10 1 7AB4...
    IN DS 12345 10 2 DE02...

Turns out, the 12345 goes by a number of names, such as key ID, and in the Zuver interface, key tag.  So in they went.  The record is literally in the form:


The digest, if it has spaces, is to be entered without spaces.

Oops, I broke it!

So having added these keys, I note (as I thought might happen), the domain stopped working. I found I still couldn’t add the records, so I had to now move (quickly) my DNS over to another DNS server. One that permitted these kinds of records. I figured I’d do it myself, and get someone to act as a secondary.

First step was to take that file and throw it into the bind server’s data directory and point named.conf at it. To make sure you can hook a slave to it, create an ACL rule that will match the IP addresses of your possible slaves, and add that to the allow-transfer option for the zone:

acl buddyns {;; };
acl stuartslan { ... };

zone "" IN {
        type master;
        file "pri/";
        allow-transfer { buddyns; localhost; stuartslan; };
        allow-query { any; };
        allow-update { localhost; stuartslan; };
        notify no;
};
Make sure that from another machine in your network, you can run dig +tcp axfr @${DNS_IP} ${DOMAIN} and get a full listing of your domain’s contents.

I really needed a slave DNS server and so went looking around, found one in BuddyNS. I then spent the next few hours arguing with bind as to whether it was authoritative for the domain or not. Long story short, make sure when you re-start bind, that you re-start ALL instances of it. In my case I found there was a rogue instance running with the old configuration.

BuddyNS was fairly simple to set up (once BIND worked). You basically sign up, pick out two of their DNS servers and submit those to your registrar as the authoritative servers for your domain. I ended up picking two DNS servers, one in the US and one in Adelaide. I also added in an alias to my host using my old domain.

Adding nameservers

Working again

After doing that, my domain worked again, and DNSSEC seemed to be working. There are a few tools you can use to test it.

Updating the zone later

If for whatever reason you wish to update the zone, you need to sign it again. In fact, you’ll need to sign it periodically as the signatures expire. To do this:

$ zonesigner

Note the lack of -genkeys.

My advice to people trying DNSSEC

Before proceeding, make sure you know how to set up a DNS server so you can pull yourself out of the crap if it comes your way. Setting this up with some registrars is a one-way street, once you’ve added keys, there’s no removing them or going back, you’re committed.

Once domain signing keys are submitted, the only way to make that domain work will be to publish the signed record sets (RRSIG records) in your domain data, and that will need a DNS server that can host them.

May 21, 2015

This is more a quick dump of some proof-of-concept code.  We’re in the process of writing communications drivers for an energy management system, many of which need to communicate with devices like Modbus energy meters.

Traditionally I’ve just used the excellent pymodbus library with its synchronous interface for batch-processing scripts, but this time I need real-time and I need to do things asynchronously.  I can either run the synchronous client in a thread, or, use the Twisted interface.

We’re actually using Tornado for our core library, and thankfully there’s an adaptor module to allow you to use Twisted applications.  But how do you do it?  Twisted code requires quite a bit of getting used to, and I’ve still not got my head around it.  I haven’t got my head fully around Tornado either.

So how does one combine these?

The following code pulls out the first couple of registers out of a CET PMC330A energy meter that’s monitoring a few circuits in our office. It is a stripped down copy of this script.

#!/usr/bin/env python
"""
Pymodbus Asynchronous Client Examples -- using Tornado

The following is an example of how to use the asynchronous modbus
client implementation from pymodbus.
"""
# import needed libraries
import tornado
import tornado.ioloop
import tornado.platform.twisted

# install the Tornado-backed Twisted reactor; this must happen
# before the reactor itself is imported
tornado.platform.twisted.install()
from twisted.internet import reactor, protocol
from pymodbus.constants import Defaults

# choose the requested modbus protocol
from pymodbus.client.async import ModbusClientProtocol
#from pymodbus.client.async import ModbusUdpClientProtocol

# configure the client logging
import logging
log = logging.getLogger()

# example requests
# simply call the methods that you would like to use. An example session
# is displayed below along with some assert checks. Note that unlike the
# synchronous version of the client, the asynchronous version returns
# deferreds which can be thought of as a handle to the callback to send
# the result of the operation.
def beginAsynchronousTest(client):
    io_loop = tornado.ioloop.IOLoop.current()

    def _dump(result):'Register values: %s', result.registers)
    def _err(result):
        logging.error('Error: %s', result)

    rq = client.read_holding_registers(0, 4, unit=1)
    rq.addCallback(_dump)
    rq.addErrback(_err)

    # close the client at some time later
    io_loop.add_timeout(io_loop.time() + 1, client.transport.loseConnection)
    io_loop.add_timeout(io_loop.time() + 2, io_loop.stop)

# choose the client you want
# make sure to start an implementation to hit against. For this
# you can use an existing device, the reference implementation in the tools
# directory, or start a pymodbus server.
defer = protocol.ClientCreator(reactor, ModbusClientProtocol
        ).connectTCP("", Defaults.Port)
defer.addCallback(beginAsynchronousTest)
tornado.ioloop.IOLoop.current().start()
May 15, 2015

… or how to emulate Red Hat’s RPM dependency hell in Debian with Python.

There are times I love open source systems and times when it’s a real love-hate relationship. No more is this true than trying to build Python module packages for Debian.

On Gentoo this is easy: in the past we had g-pypi. I note that’s gone now, replaced with a g-sorcery plug-in called gs-pypi. Both work. The latter is nice because it gives you an overlay with potentially every Python module.

Building packages for Debian in general is fiddly, but not difficult, but most Python packages follow the same structure: a script,, calls on distutils and provides a package builder and installer. You call this with some arguments, it builds the package, plops it in the right place for dpkg-buildpackage and the output gets bundled up in a .deb.

Easy. There’s even a helper script: stdeb that plugs into distutils and will do the Debian packaging all for you. However, stdeb will not source dependencies for you. You must do that yourself.

So quickly, building a package for Debian becomes reminiscent of re-living the bad old days with early releases of Red Hat Linux prior to yum/apt4rpm and finding the RPM you just obtained needs another that you’ll have to hunt down from somewhere.

Then you get the people who take the view: why have just one package builder when you can have two? fysom needs pybuilder to compile. No problem, I’ll just grab that. Checked it out of GitHub; uh oh, it uses itself to build, and it needs other dependencies.

Lovely. It gets better though, those dependencies need pybuilder to build. I just love circular dependencies!

So as it turns out, in order to build this, you’ll need to enlist pip to install these behind Debian’s back (I just love doing that!) then you’ll have the dependencies needed to actually build pybuilder and ultimately fysom.

Your way out of this maze is to do the following:

  • Ensure you’ve got the python-stdeb, dh-python and python-pip packages installed.
  • Use pip to install the dependencies for pybuilder and its dependencies: pip install fluentmock pybuilder pyassert pyfix pybuilder-external-plugin-demo pybuilder_header_plugin pybuilder_release_plugin
  • Now you should be able to build pybuilder: do pyb publish in the directory, then look under target/dist/pybuilder-${VERSION}; you should see the Python sources with a you can use with stdeb.

Any other dependencies are either in Debian repositories, or you can download the sources yourself and use the stdeb technique to build them.

Apr 11, 2015

To whom it may concern,

There have been reports of web browser sessions from people outside China to websites inside China being hijacked and having malware injected.  Dubbed “Great Cannon”, this malware has the sole purpose of carrying out distributed denial of service attacks on websites that the Chinese Government attempts to censor from its people.  Whether it is the Government there doing this deliberately, or someone hijacking major routing equipment, is fundamentally irrelevant here; either way, the owner of the said equipment needs to be found, and a stop put to this malware.

I can understand you wish to prevent people within your borders from accessing certain websites, but let me make one thing abundantly clear.


I will not accept my web browser which is OUTSIDE China being hijacked and used as a mule for carrying out your attacks.  It is illegal for me to carry out these attacks, and I do not authorise the use of my hardware or Internet connection for this purpose.  If this persists, I will be blocking any and all Chinese-owned websites’ executable code in my browser.

This will hurt Chinese business more than it hurts me.  If you want to ruin yourselves economically, go ahead, it’ll be like old times before the Opium Wars.

Apr 10, 2015

This afternoon, whilst waiting for a build job to complete I thought I’d do some further analysis on my annual mileage.

Now I don’t record my odometer readings daily (perhaps I should), but I do capture them every Sunday morning.  So I can possibly assume that the distance done for each day of a “run” is the total distance divided by the number of days.  I’m using a SQLite3 database to track this; the question is, how do I extract that information?

This turned out to be the key to the answer.  I needed to enumerate all the days between two points.  SQLite3 has a julianday function, and with that I have been able to extract the information I need.

My database schema is simple. There are two tables:
CREATE TABLE bikes (id integer primary key not null, description varchar(64));
CREATE TABLE odometer (timestamp datetime not null default current_timestamp, action char(8) not null, bike_id integer not null, odometer real not null, constraint duplicate_log unique (timestamp, action, bike_id) on conflict replace);

Then there are the views.
CREATE VIEW run_id as select s.rowid as start_id, (select rowid from odometer where bike_id=s.bike_id and timestamp > s.timestamp and action='stop' order by timestamp asc limit 1) as stop_id from odometer as s where s.action='start';
CREATE VIEW "run" AS select start.timestamp as start_timestamp, stop.timestamp as stop_timestamp, start.bike_id as bike_id, start.odometer as start_odometer, stop.odometer as stop_odometer, stop.odometer-start.odometer as distance,julianday(start.timestamp) as start_day, julianday(stop.timestamp) as stop_day from (run_id join odometer as start on run_id.start_id=start.rowid) join odometer as stop on run_id.stop_id=stop.rowid;

The first view breaks up the start and stop events, and gives me row IDs for where each “run” starts and stops. I then use that in my run view to calculate distances and timestamps.

Here’s where the real voodoo lies, to enumerate days, I start at the very first timestamp in my dataset, find the Julian Day for that, then keep adding one day on until I get to the last timestamp. That gives me a list of Julian days that I can marry up to the data in the run view.

CREATE VIEW distance_by_day AS
SELECT day_of_year, avg_distance FROM (
    SELECT - julianday(date(, 'start of year')) AS day_of_year,
           sum(run.distance / max((run.stop_day - run.start_day), 1)) / count(*) AS avg_distance
    FROM run, (
        WITH RECURSIVE days(day) AS (
            SELECT julianday((SELECT min(timestamp) FROM odometer))
            UNION ALL
            SELECT day + 1 FROM days
            LIMIT cast(round(julianday((SELECT max(timestamp) FROM odometer))
                             - julianday((SELECT min(timestamp) FROM odometer))) AS int)
        )
        SELECT day FROM days
    ) AS days
    WHERE run.start_day <= AND run.stop_day >=
    GROUP BY day_of_year
) dist_by_doy;

This is the result.

Distance by Day Of Year

Apr 6, 2015

I’ve been a long-time user of PGP, and have had a keypair since about 2003.  OpenPGP has some nice advantages: it’s a more social arrangement, in that verification is done by physically meeting people.  I think it is more personal that way.

However, you can still get isolated islands.  My old key was a branch of the strong set, having been signed by one person who did a lot of key-signing, but sadly, thanks to Heartbleed, I couldn’t trust it anymore.  So I’ve had to start anew.

The alternate way to secure communications is to use some third party like a certificate authority and use S/MIME.  This is the other side of the coin, where a company verifies who you are.  The company is then entrusted to do their job properly.  If you trust the company’s certificate in your web browser or email client, you implicitly trust every non-revoked valid certificate that company has signed.  As such, there is a proliferation of companies that act as a CA, and a typical web browser will come with a list as long as your arm/leg/whatever.

I’ve just set up one such certificate for myself, using StartCOM’s CA as the authority.  If you trust StartCOM, and want my GPG key, you’ll find a S/MIME-signed email with my key here.  If you instead trust my GPG signature and want my S/MIME public key, you can get that here.  If you want to throw caution to the wind, you can get the bare GPG key or S/MIME public key instead.

Update: I noticed GnuPG 2.1 has been released, so I now have an ECDSA key; fingerprint B8AA 34BA 25C7 9416 8FAE  F315 A024 04BC 5865 0CF9.  You may use it or my existing RSA key if your software doesn’t support ECDSA.

Mar 19 2015

Hi all,

This is more a note to self for future reference.  Qt has a nice handy reference counting memory management system by means of QSharedPointer and QWeakPointer.  The system is apparently thread-safe and seems to be totally transparent.

One gotcha, though, is that two QSharedPointer objects cannot share the same raw pointer unless one is cloned from the other (either directly or via QWeakPointer).  The other is that you must leave deletion of the object to QSharedPointer.  You’ve given it your precious pointer; it has adopted it, and while you may still use the object, it is no longer yours, so don’t go deleting it.

So you create an object and want to pass a reference to yourself to some other object.  How?  Like this?

QSharedPointer<MyClass> MyClass::ref() {
    return QSharedPointer<MyClass>(this); /* NO! */
}

No, not like that!  That will create independent QSharedPointer instances left, right and centre, which is not what you want at all.  What you need to do is create the initial reference, but then store a weak reference to it.  On all future calls, you simply call the toStrongRef method of the weak reference to get a QSharedPointer that’s linked to the first one you handed out.

Then, having done this, when you create your new object, you create it with the new keyword as normal, take a QSharedPointer reference to it, then forget all about the original pointer! You can get it back by calling the data method of the pointer object.

To make it simple, here’s a base class you can inherit to do this for you.

    #include <QWeakPointer>
    #include <QSharedPointer>

    /*!
     * Self-Reference helper.  This allows for objects to maintain
     * references to "this" via the QSharedPointer reference-counting
     * smart pointers.
     */
    template<typename T>
    class SelfRef {
        public:
            /*! Get a strong reference to this object. */
            QSharedPointer<T>    ref()
            {
                QSharedPointer<T> this_ref(this->this_weak);
                if (this_ref.isNull()) {
                    this_ref = QSharedPointer<T>((T*)this);
                    this->this_weak = this_ref.toWeakRef();
                }
                return this_ref;
            }

            /*! Get a weak reference to this object. */
            QWeakPointer<T>        weakRef() const
            {
                return this->this_weak;
            }

        private:
            /*! A weak reference to this object */
            QWeakPointer<T>        this_weak;
    };

Example usage:

#include <iostream>
#include <stdexcept>
#include "SelfRef.h"

class Test : public SelfRef<Test> {
        public:
                Test()
                {
                        std::cout << __func__ << std::endl;
                        this->freed = false;
                }

                ~Test()
                {
                        std::cout << __func__ << std::endl;
                        this->freed = true;
                }

                void test() {
                        if (this->freed)
                                throw std::runtime_error("Already freed!");
                        std::cout
                                << "Test object is at "
                                << (void*)this
                                << std::endl;
                }

                bool                    freed;
                QSharedPointer<Test>    another;
};

int main(void) {
        Test* a = new Test();
        if (a != NULL) {
                QSharedPointer<Test> ref1 = a->ref();
                if (!ref1.isNull()) {
                        QSharedPointer<Test> ref2 = a->ref();
                        ref2->test();
                }
                ref1->test();
        }
        a->test();      /* deliberate use-after-free */
        return 0;
}
Note that the line before the return is a deliberate use-after-free bug to prove the pointer really was freed.  Also note that the idea of setting a boolean flag to indicate the destructor has been called only works here because nothing happens to that memory between the destructor being called and the use-after-free attempt.  Don’t rely on this to see if your object is being called after destruction.  This is what the output session from gdb looks like:

RC=0 stuartl@rikishi /tmp/qtsp $ make CXXFLAGS=-g
g++ -c -g -I/usr/share/qt4/mkspecs/linux-g++ -I. -I/usr/include/qt4/QtCore -I/usr/include/qt4/QtGui -I/usr/include/qt4 -I. -I. -o test.o test.cpp
g++ -Wl,-O1 -o qtsp test.o    -L/usr/lib64/qt4 -lQtGui -L/usr/lib64 -L/usr/lib64/qt4 -L/usr/X11R6/lib -lQtCore -lgthread-2.0 -lglib-2.0 -lpthread 
RC=0 stuartl@rikishi /tmp/qtsp $ gdb ./qtsp 
GNU gdb (Gentoo 7.7.1 p1) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
Find the GDB manual and other documentation resources online at:
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./qtsp...done.
(gdb) r
Starting program: /tmp/qtsp/qtsp 
warning: Could not load shared library symbols for
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".
Test object is at 0x555555759c90
Test object is at 0x555555759c90
terminate called after throwing an instance of 'std::runtime_error'
  what():  Already freed!

Program received signal SIGABRT, Aborted.
0x00007ffff5820775 in raise () from /lib64/
(gdb) bt
#0  0x00007ffff5820775 in raise () from /lib64/
#1  0x00007ffff5821bf8 in abort () from /lib64/
#2  0x00007ffff610cd75 in __gnu_cxx::__verbose_terminate_handler() ()
   from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/
#3  0x00007ffff6109ec8 in ?? () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/
#4  0x00007ffff6109f15 in std::terminate() () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/
#5  0x00007ffff610a2e9 in __cxa_throw () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/
#6  0x0000555555555cea in Test::test (this=0x555555759c90) at test.cpp:20
#7  0x0000555555555315 in main () at test.cpp:41
(gdb) up
#1  0x00007ffff5821bf8 in abort () from /lib64/
(gdb) up
#2  0x00007ffff610cd75 in __gnu_cxx::__verbose_terminate_handler() ()
   from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/
(gdb) up
#3  0x00007ffff6109ec8 in ?? () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/
(gdb) up
#4  0x00007ffff6109f15 in std::terminate() () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/
(gdb) up
#5  0x00007ffff610a2e9 in __cxa_throw () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/
(gdb) up
#6  0x0000555555555cea in Test::test (this=0x555555759c90) at test.cpp:20
20                                      throw std::runtime_error("Already freed!");
(gdb) up
#7  0x0000555555555315 in main () at test.cpp:41
41              a->test();
(gdb) quit
A debugging session is active.

        Inferior 1 [process 17906] will be killed.

Quit anyway? (y or n) y

You’ll notice it fails right on that second-last line, because the last QSharedPointer went out of scope before it.  This is why you forget all about the pointer once you create the first QSharedPointer.

To remove the temptation to use the pointer directly, you can make all your constructors protected (or private) and use a factory that returns a QSharedPointer to your new object.

A useful macro for doing this:

/*!
 * Create an instance of ClassName with the given arguments
 * and immediately return a reference to it.
 * @returns	QSharedPointer<ClassName> object
 */
#define newRef(ClassName, args ...)	\
	((new ClassName(args))->ref().dynamicCast<ClassName>())

Feb 01 2015

How do software companies get things so wrong?  I aim this at both Google and Apple here, as both are equally guilty of this.  Maybe Microsoft can learn from this?

So you see something on an “app store” that might be useful on the device you’re using.  Ohh, a free download, great!  Let’s download.  You click the link, and immediately get asked for a login and password.  There’s no option to proceed without one.  They insist you create an account with them even if it’s the one and only thing you’re interested in from them.

In the past my gripe has been with the Google Play store.  Even the “free” apps require you to log in.  Ohh, and to add insult to injury, the Google Play store doesn’t just expect any Google account, it has to be one of their Gmail accounts.  Back in the late 90s I had an email address with most providers, as the average quota was about 5MB.  I’ve had a mailbox of my own with a multi-gigabyte (actually limited only by disk capacity) “quota” since 2002; I have no use for Gmail, and only keep my old Yahoo! address (created in 1999) for historic reasons.

I have an Android phone (release 4.1: thanks to ZTE’s backward thinking and short attention span), and thankfully there’s the F-Droid store which has sufficient applications for my use.  So I can work around the Google Play store most of the time and so far, haven’t needed anything from there.

Today, my gripe is at Apple, and the “app” in question is MacOS X, which cannot be obtained anywhere else.

With all the high-profile attacks on websites that store user accounts, one has to ask, why?  It’s one extra username and password, which given the frequency I’m likely to use it, will have to be written down and stored somewhere secure as it won’t get sufficient use to commit it to memory.  Before people point out password managers, I’d like to point out one thing: it’s still writing it down!

There’s absolutely no need for an “app store” to know your email address, usernames, passwords, or any details.  If you are actually purchasing an application, they only need enough information to process a payment.

Usually this is by a debit/credit card, so they need to know the details on the card.  An alternative might be direct deposit through a bank, at which point they need to supply you with details on how to make the payment — details that include the information they need to match your payment in their ledger to your store purchase.  At no point do they need anything else.

For convenience an email address might be supplied so they can confirm your order or contact you if there’s a problem, however for debit/credit cards, this happens so quickly that it can be achieved via the web browser.

Despite this, they insist on you providing just about everything.

I’m no stranger to the “app store” concept.  Linux and BSD distributions have had this sort of concept for years.  BSD has had ports for as long as I can remember.  Debian has had apt since 1998, Gentoo has had portage since its inception in 2003, and RPM-based distributions have had yum for some time too.

None of these actually need to know who you are in order to download a package.  Admittedly none of these are geared toward commercial sales of software, and so lack the ability to prompt for credentials or payment information.

Since both Google Play and the Apple App store have solved the former problem, I see no reason why they couldn’t solve the latter.  I don’t want to post anything to the site, I don’t want to leave feedback as I can hardly comment on something I haven’t received yet, and I don’t know when I’ll next visit the site.

If I was going to be back repeatedly, sure, I’ll make an account.  It’ll make everyone’s lives easier. (Including the blackhats!)  But I’m not.  I have a late-2008 model MacBook, probably the oldest machine that Apple support for their latest OS.  The machine dual-boots MacOS X 10.6 and Gentoo Linux, and spends 99% of its time in the latter OS.

Given the age of the machine and the frequency at which I use its native OS, it is not worth me spending a lot of time or expense updating it.  A 2GHz Core 2 Duo with 8GB RAM and a 750GB HDD is good enough for many tasks under Linux, but is the bare minimum to run OS X 10.10.  The only reason this machine doesn’t grace my desk at work anymore is the fact the lack of ports (USB in particular) proved to be a right pain.

Why update?  Well, applications these days seem to expect at least MacOS X 10.7 now.  I either have to build everything myself or update the OS, so I’m investigating the possibility of updating the OS to see if it’s feasible.  Apparently it’s a free download, so why not?

Well, why not indeed!  Instead of having a simple http, https or ftp link to the file in question (maybe a .dmg image) for software they’re not actually selling to me in the traditional sense, they instead insist on making me jump through hoops like requiring their “app store” client — so I can’t just grab the link, tell the web server here to download the file then grab it from there when I’m ready.

Since I can’t do the download any other way than via their “app store” client, I have to remain booted in MacOS X in order to download it regardless of what I might otherwise wish to do the machine and what OS that requires.

However, before I can even think about starting the download, I’ve got to register an account, supplying a username and password for something that will probably be used exactly once.  Details that they have to pay people big money to store securely.

Instead of spending some money paying someone to add an extra one-off button and form to their “app store” clients, they instead spend significantly more on infrastructure designed to meet the privacy requirements of various laws to store user information that simply is not necessary for the transaction to proceed.

In light of the sophistication of the modern cracker and the cut-throat nature of the mobile market, is this such a wise use of company funds?

Dec 04 2014

Just recently I’ve been looking into asynchronous programming.

Previously I had an aversion to asynchronous code due to the ugly twisted web of callback functions that it can turn into. However, after finding that having a large number of threads blocking on locks and semaphores still manages to thrash a machine, I’ve come to the conclusion that I should put aside my feelings and try it anyway.

Our codebase is written in Python 2.7, sadly, not new enough to have asyncio. However we do plan to eventually move to Python 3.x when things are a bit more stable in the Debian/Ubuntu department (Ubuntu 12.04 didn’t support it and there are a few sites that still run it, one or two still run 10.04).

That said, there’s thankfully a port of what became asyncio in the form of Trollius.

Reading through the examples though still had me lost, and the documentation is not exactly extensive.  In particular, coroutines and yielding.  The yield keyword is not new; it’s been in Python for some time, but until now I never really understood it or how it was useful in co-operative programming.

Thankfully, Sahand Saba has written a guide on how this all works:

I might put some more notes up as I learn more, but that guide explained a lot of the fundamentals behind a lot of event loop frameworks including asyncio.
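For my own notes, here’s about the smallest illustration of the underlying trick I can come up with: a generator suspends at each yield, and a scheduler resumes it later.  This is just a toy round-robin loop of my own devising (worker and run_round_robin are made-up names), not Trollius or asyncio itself:

```python
def worker(name, steps):
    """A co-operative task: each yield hands control back to the loop."""
    for n in range(steps):
        yield "%s step %d" % (name, n)

def run_round_robin(tasks):
    """A toy event loop: resume each generator in turn until all finish."""
    log = []
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))
            tasks.append(task)   # not finished yet; requeue it
        except StopIteration:
            pass                 # this task ran to completion
    return log

log = run_round_robin([worker("a", 2), worker("b", 1)])
# The tasks interleave: "a step 0", then "b step 0", then "a step 1"
```

Real event loops replace the simple requeue with waiting on I/O readiness or timers, but the suspend-and-resume mechanics are the same.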