Apr 27, 2016

It seems good old “common courtesy” is absent without leave, as is “common sense”. Some would say it’s been absent for most of my lifetime, but to me it seems particularly so of late.

In particular, when it comes to one's own safety, and that of others, people don't seem to actually think or care about what they are doing and how it might affect others. To say it annoys me is putting it mildly.

In February, I lost a close work colleague in a bicycle accident. I won’t mention his name, as I do not have his family’s permission to do so.

I remember arriving at my workplace early on Friday the 12th before 6AM, having my shower, and about 6:15 wandering upstairs to begin my work day. Reaching my desk, I recall looking down at an open TS-7670 industrial computer and saying out aloud, “It’s just you and me, no distractions, we’re going to get U-Boot working”, before sitting down and beginning my battle with the machine.

So much for "no distractions", however. At 6:34AM, the office phone rings. I'm the only one there, so I answer. It was a social worker looking for "next of kin" details for a colleague of mine. It seems they found our office details via a Cabcharge card they happened to find in his wallet.

Well, the first thing I do is start scrabbling for the office directory to get his home number so I can pass the bad news on to his wife, only to find he'd listed only his mobile number. Great. After getting in contact with our HR person, we later discover there are no contact details in the employee records either. He was around before such paperwork existed in our company.

Common sense would have dictated that one carry an “in case of emergency” number on a card in one’s wallet! At the very least let your boss know!

We find out later that morning that the crash happened on a particularly sharp bend of the Go Between Bridge, where the offramp sweeps left to join the Bicentennial bikeway. It’s a rather sharp bend that narrows suddenly, with handlebar-height handrails running along its length and “Bicycle Only” signs clearly signposted at each end.

Common sense and common courtesy would suggest you slow down on that bridge as a cyclist. Common sense and common courtesy would suggest you use the other side as a pedestrian. Common sense would question the utility of hand rails on a cycle path.

In the meantime our colleague is still fighting for his life, and we’re all holding out hope for him as he’s one of our key members. As for me, I had a network to migrate that weekend. Two of us worked the Saturday and Sunday.

Sunday evening, emotions hit me like a freight train as I realised I was in denial, and realised the true horror of the situation.

We later find out on the Tuesday, our colleague is in a very bad way with worst-case scenario brain damage as a result of the crash. From shining light to vegetable, he’d never work for us again.

Wednesday I took a walk down to the crash site to try and understand what happened. I took a number of photographs, and managed to speak to a gentleman who saw our colleague being scraped off the pavement. Even today, some months later, the marks on the railings (possibly from handlebar grips) and a large blood smear on the path itself, can still be seen.

It was apparent that our colleague had hit this railing at some significant speed. He wasn’t obese, but he certainly wasn’t small, and a fully grown adult does not ricochet off a metal railing and slide face-first for over a metre without some serious kinetic energy involved.

Common sense seems to suggest the average cyclist goes much faster than the 20km/hr impact that the typical bicycle helmet is designed for under AS/NZS 2063:2008.

I took the Thursday and Friday off as time-in-lieu for the previous weekend, as I was an emotional wreck. The following Tuesday I resumed cycling to work, and that morning I tried an experiment to reproduce the crash conditions. The bicycle I ride wasn’t that much different to his, both bikes having 29″ wheels.

From what I could gather that morning, it seemed he veered right just prior to the bend then lost control, listing to the right at what I estimated to be about a 30° angle. What caused that? We don’t know. It’s consistent with him dodging someone or something on the path — but this is pure speculation on my part.

Mechanical failure? The police apparently have ruled that out. There's not much in the way of CCTV in the area: plenty on the pedestrian side, not so much on the cycle side of the bridge.

Common sense would suggest relying on a cyclist to remember what happened to them in a crash is not a good plan.

In any case, common sense did not win out that day. Our colleague passed away from his injuries a little over a fortnight after his crash, aged 46. He is sadly missed.

I’ve since made a point of taking my breakfast down to that point where the bridge joins the cycleway. It’s the point where my colleague had his last conscious thoughts.

Over the course of the last few months, I’ve noticed a number of things.

Most cyclists sensibly slow down on that bend, but a few race past at ludicrous speed. One morning I thought there'd nearly be an encore performance, as two construction workers on CityCycle bikes, sans helmets, came careening around the corner, one almost losing it.

Then I see the pedestrians. There's a well-lit, covered walkway on the opposite side of the bridge for pedestrian use. It has bench seats, drinking fountains, good lighting, everything you'd want as a pedestrian. Yet some feel it is not worth the personal exertion to walk the 100m extra distance to make use of it.

Instead, they show a lack of courtesy by using the bicycle path. Walking on a bicycle path isn’t just dangerous to the pedestrian like stepping out onto a road, it’s dangerous for the cyclist too!

If a car hits a pedestrian or cyclist, the damage to the occupants of the car is going to be minimal to nonexistent compared to what happens to the cyclist or pedestrian. If a cyclist or motorcyclist hits a pedestrian, however, it is the rider who surrounds the frame, and thus hits the ground first, possibly at significant speed.

Yet, pedestrians think it is acceptable to play Russian roulette with their own lives and the lives of every cycle user by continuing to walk where it is not safe for them to go. They’d never do it on a motorway, but somehow a bicycle path is considered fair game.

Most pedestrians are understanding: I've politely asked a number of them not to walk on the bikeway, and most oblige after I point out how to get to the pedestrian walkway.

Common sense would suggest some signage on where the pedestrian can walk would be prudent.

However, I have had at least two that ignored me, one this morning telling me to “mind my own shit”. Yes mate, I am minding “my own shit” as you put it: I’m trying to stop the hypothetical me from possibly crashing into the hypothetical you!

It’s this sort of reaction that seems symbolic of the whole “lack of common courtesy” that abounds these days.

It’s the same attitude that seems to hint to people that it’s okay to park a car so that it blocks the footpath: newsflash, it’s not! I know of one friend of mine who frequently runs into this problem. He’s in a wheelchair — a vehicle not known for its off-road capabilities or ability to squeeze past the narrow gap left by a car.

It seems the drivers think it’s acceptable to force footpath users of all types, including the elderly, the young and the disabled, to “step out” onto the road to avoid the car that they so arrogantly parked there. It makes me wonder how many people subsequently become disabled as a result of a collision caused by them having to step around such obstacles. Would the owner of the parked car be liable?

I don’t know, I’m no lawyer, but I should think they should carry some responsibility!

In Queensland, pedestrians have right-of-way on the footpath. That includes cyclists: cyclists of all ages are allowed there subject to council laws and signage — but once again, they need to give way. In other words, don’t charge down the path like a lunatic, and don’t block it!

No doubt the people I'm trying to convince are too arrogant to care about the above, and about what effect their actions might have on others. Still, I needed to get it off my chest!

Nothing will bring my colleague back, a fact that truly pains me, and I've learned some valuable lessons about the sort of encouragement I give people. I regret not telling him to slow down; 5 minutes longer wouldn't have killed him, and I certainly did not want a race! Was he trying to race me so he could keep an eye on me? I'll never know.

He was a bright person, though it is proof that even the intelligent among us are prone to doing stupid things. With thrills come spills, and one might question whether one's commute to work is the appropriate venue for such thrills, or whether those can wait for another time.

I for one have learned that it does not pay to be the hare, thus I intend to just enjoy the ride for what it is. No need to rush, common sense tells me it just isn’t worth it!

Dec 06, 2015

Recently, I learned about the IceStorm project, which is an effort to reverse engineer the Lattice iCE40-series of FPGAs.  I had run across FPGAs in my time before, but never really got to understand them.  This is for a few reasons:

  • The tools tended to be proprietary, with highly (unnecessarily?) restrictive licensing
  • FPGA boards were hellishly expensive

I wasn’t interested in doing the proprietary toolchain dance, did enough of that with some TI stuff years ago.  There, it was the MSP430, and one of their DSPs.  The former I could use gcc, but still needed a proprietary build of gdbproxy to program and debug the device, and that needed Windows.  The latter could only be programmed using TI’s Code Composer studio.

FPGAs were ten times worse.  Not only was the toolchain huge, occupying gigabytes, but the license was locked to the hardware.  The one project I did with anything FPGA-related used an Altera FPGA, and getting Quartus II to work was nothing short of a nightmare.  I gave up, and vowed never to touch FPGAs.

Fast forward 6 years, and things have changed.  We now have a Verilog synthesiser.  We now have a place-and-route tool.  We have tools for generating a bitstream for the iCE40 FPGAs.  We can now buy FPGA boards for well under $100.  Heck, you can buy them for $5.

Lattice can do one of three things at this point:

  • They can actively try to stomp it out (discontinuing the iCE40 family, filing lawsuits, etc.)
  • They can pretend it doesn’t exist
  • They can partner with us and help build a hobby market for their FPGAs

Time will tell as to what they choose.  I'm hoping it's the last, but ignoring us is workable too.

So recently I bought an iCE40-HX8K breakout board.  This $80 board is pretty minimal: you get 8 LEDs, an FTDI serial-USB controller (which serves as the programmer), a small serial flash EEPROM (for configuration), a linear regulator, a 12MHz oscillator and four 40-pin headers for GPIOs.

The FPGA on this board is the iCE40HX8K-CT256.  At the time of writing, that’s the top of that particular series with 7680 look-up tables, two PLLs, and some integrated SPI/I²C smarts.

There’s not a lot in the way of tutorials for this particular board, most focus on the iCEStick, which uses the lesser iCE40HX1K-TQ144, has only a small handful of GPIOs exposed and has no configuration EEPROM (it’s one-time programmable).

Through some trial-and-error, and poring over the schematics, I managed to port Al Williams' tutorial on Hackaday, at least in part, to the iCE40-HX8K board.  The code for this is on GitHub.

Pretty much everything works on this board, even PLLs and block RAM.  There’s an example using the PLL on the iCEstick in this VGA demo project.

Some things I’ve learned:

  • If you open jumper J7, and rotate the jumpers on J6 to run horizontally (strapping pins 1-2 and 3-4), specifying -S to iceprog will program the CRAM without touching the SPI flash chip.
  • The PLL ceases to lock when REFCLK/(1+DIV_R) drops to 10MHz or below.

FILTER_RANGE is a mystery though; I haven't figured out what its values correspond to.
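The lock-range observation above can be folded into a quick sanity check. Here's a minimal Python sketch (the helper names are mine), using the SB_PLL40 "simple" feedback-mode formula from Lattice's PLL documentation; the 10MHz floor is purely the empirical observation above:

```python
# Sanity-check iCE40 PLL settings.  The formula is the SB_PLL40
# "simple" feedback mode from Lattice's PLL documentation:
#   fout = fref * (DIVF + 1) / ((DIVR + 1) * 2**DIVQ)
# The 10 MHz floor is just the empirical observation above.

def pll_output_mhz(fref_mhz, divr, divf, divq):
    """Return (fout, pfd) in MHz for the given divider settings."""
    pfd = fref_mhz / (divr + 1)            # phase-detector input frequency
    fout = pfd * (divf + 1) / (2 ** divq)
    return fout, pfd

def pll_locks(fref_mhz, divr):
    """Empirical rule: lock is lost once the PFD input is 10 MHz or below."""
    return fref_mhz / (divr + 1) > 10.0

# With the breakout board's 12 MHz oscillator, DIVR=0 keeps the PFD at
# 12 MHz (fine); DIVR=1 drops it to 6 MHz, below the observed floor.
fout, pfd = pll_output_mhz(12, divr=0, divf=66, divq=3)
print(fout, pfd, pll_locks(12, 0), pll_locks(12, 1))
```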

It’s likely this particular board is destined to become a DRAM/Interrupt/DMA controller for my upcoming 386, but we’ll see.  In the meantime, I’m playing with a new toy. 🙂

Nov 21, 2015

Well, in the last post I started considering building my own computer from a spare 386 CPU I had liberated from an old motherboard.

One of the issues I face is implementing the bus protocol that the 386 uses, and decoding of interrupts.  The 386 expects an 8-bit interrupt request number that corresponds to the interrupting device.  I’m used to microcontrollers where you use a single GPIO line, but in this case, the interrupts are multiplexed.

For basic needs, you could do it with a demux IC.  That will work for a small number of interrupt lines.  Suppose I wanted more though?  How feasible is it to support many interrupt lines without tying up lots of GPIO lines?

CANBus has an interesting way of handling arbitration.  The “zeros” are dominant, and thus overrule “ones”.  The CAN transceiver is a full-duplex device, so as the station is transmitting, it listens to the state of the bus.  When some nodes want to talk (they are, of course, oblivious to each-others’ intentions), they start sending a start-bit (a zero) which synchronises all nodes, then begin sending an address.

While every node is sending the same bit value, the receiving nodes see that value.  The moment a node tries sending a 1 while the others are sending 0s, it sees the disparity, and concludes that it has lost arbitration.  Eventually, you're left with a single node, which then proceeds to send its CANBus frame.

Now, we don’t need the complexity of CANBus to do what we’re after.  We can keep synchronisation by simple virtue that we can distribute a common clock (the one the CPU runs at).  Dominant and recessive bits can be implemented with transistors pulling down on a pull-up resistor, or a diode-OR: this will give us a system where ‘1’s are dominant.  Good enough.

So I fired up Logisim to have a fiddle, and came up with this:

Interrupt controller using logic gates


interrupt.circ is the actual Logisim circuit if you want to have a fiddle; decompress it first.  Please excuse the messy schematic.

On the left is the host-side of the interrupt controller.  This would ultimately interface with the 386.  On the right, are two “devices”, one on IRQ channel 0x01, the other on 0x05.  The controller handles two types of interrupts: “DMA interrupts”, where the device just wants to tell the DMA controller to put data into memory, or “IRQ”s, where we want to interrupt the CPU.

The devices are provided with the following control signals from the interrupt controller:

  • DMA (driven by the devices): informs the IRQ controller whether we're interrupting for DMA purposes (high) or need to tell the CPU something (low).
  • IRQ (driven by the devices): informs the IRQ controller that we want its attention.
  • ISYNC (driven by the controller): informs the devices that they have the controller's attention and should start transmitting address bits.
  • IRQBIT[2…0] (driven by the controller): instructs the devices which bit of their IRQ address to send (0 = MSB, 7 = LSB).
  • IDA (driven by the devices): the inverted address bit value corresponding to the bit pointed to by IRQBIT.
  • IACK (driven by the devices): asserted by the device that wins arbitration.

Due to the dominant/recessive nature of the bits, the highest numbered device wins over lesser devices. IRQ requests also dominate over DMA requests.
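This arbitration can be modelled in a few lines. Here's a behavioural sketch in Python (the function names are mine), working at the logical level where '1' is dominant; the physical inversion on the IDA line is abstracted away, and only the eight address cycles are counted (ISYNC and the final acknowledge are not modelled):

```python
# Behavioural sketch of the wired-OR arbitration described above.
# We work at the logical level, where '1' is dominant; the physical
# inversion on the IDA line is abstracted away.  Only the eight address
# cycles are counted (ISYNC and the final acknowledge are not modelled).

def arbitrate(requesters):
    """Return (winner, cycles) among a set of 8-bit IRQ addresses."""
    in_race = set(requesters)
    cycles = 0
    for bitpos in range(7, -1, -1):        # IRQBIT=0 selects the MSB
        sent = {dev: (dev >> bitpos) & 1 for dev in in_race}
        bus = max(sent.values())           # wired-OR: any '1' dominates
        # A device that sent a recessive '0' while the bus reads '1'
        # has lost arbitration and drops out.
        in_race = {dev for dev in in_race if sent[dev] == bus}
        cycles += 1
    (winner,) = in_race                    # exactly one device remains
    return winner, cycles

print(arbitrate({0x01, 0x05}))             # device 0x05 wins after 8 cycles
```

As in the Logisim circuit, the higher-numbered device wins, and the eight address cycles fit comfortably inside the 10-cycle budget measured below.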

In the schematic, the devices each have two D-flip-flops that are not driven by any control signals.  These are my “switches” for toggling the state of the device as a user.  The ones feeding into the XOR gate control the DMA signal, the others control the IRQ line.

Down the bottom, I’ve wired up a counter to count how long between the ISYNC signal going high and the controller determining a result.  This controller manages to determine which device requested its attention within 10 cycles.  If clocked at the same 20MHz rate as the CPU core, this would be good enough for getting a decoded IRQ channel number to the data lines of the 386 CPU by the end of its second IRQ acknowledge cycle, and can handle up to 256 devices.

A logical next step would be to look at writing this in Verilog and trying it out on an FPGA.  Thanks to the excellent work of Clifford Wolf in producing the IceStorm project, it is now possible to do this with completely open tools.  So, I’ve got a Lattice iCE40HX-8K FPGA board coming.  This should make a pretty mean SDRAM controller, interrupt controller and address decoder all in one chip, and should be a great introduction into configuring FPGAs.

Nov 07, 2015

Well, I’ve been thinking a lot lately about single board computers. There’s a big market out there. Since the Raspberry Pi, there’s been a real explosion available to the small-end of town, the individual. Prior to this, development boards were mostly in the 4-figures sort of price range.

So we’re now rather spoiled for choice. I have a Raspberry Pi. There’s also the BeagleBone Black, Banana Pi, and several others. One gripe I have with the Raspberry Pi is the complete absence of any kind of analogue input. There’s an analogue line out, you can interface some USB audio devices (although I hear two is problematic), or you can get an I2S module.

There’s a GPU in there that’s capable of some DSP work and a CLKOUT pin that can generate a wide range of frequencies. That sounds like the beginnings of a decent SDR, however one glitch, while I can use the CLKOUT pin to drive a mixer and the GPIOs to do band selection, there’s nothing that will take that analogue signal and sample it.

If I want something wider than audio frequencies (and even a 192kHz audio CODEC is not guaranteed above ~20kHz) I have to interface to SPI, and the pickings are somewhat slim. Then I read this article on a DIY single board computer.

That got me thinking about whether I could do my own. At work we use the Technologic Systems TS-7670 single-board computers, and as nice as those machines are, they're a little slow and RAM-limited. Something that could work as a credible replacement there too would be nice, key needs there being RS-485, Ethernet and an 85°C temperature rating.

Form factor is a consideration here, and I figured something modular, using either header pins or edge connectors would work. That would make the module easily embeddable in hobby projects.

Since all the really nice SoCs are BGA packages, I figured I'd first need to know how easily I could work with them. We've got a stack of old motherboards sitting in a cupboard that I figured I could raid for BGAs to play with, just to see first-hand how fine the pins were. A crazy thought came to me: maybe, for prototyping, I could do it dead-bug style?

The key thing here is being able to solder directly to a ball securely, then route the wire to its destination. I may need to glue it to a bit of grounded foil to keep the capacitance in check. So, the first step, I figured, would be to try removing some components from the boards I had laying around to see this first-hand.

In amongst the boards I came across was one old 386 motherboard that I initially mistook for a 286 minus the CPU. The empty (PLCC) socket is for an 80387 math co-processor. The board was in the cupboard for a good reason, corrosion from the CMOS battery had pretty much destroyed key traces on one corner of the board.

Corrosion on a motherboard caused by a CMOS battery


I decided to take to it with the heat gun first. The above picture was taken post-heat-gun, but you can see just how bad the corrosion was. The ISA slots were okay, as were a stack of other useful IC sockets, ICs, passive components, etc.

With the heat gun at full blast, I’d just wave it over an area of interest until the board started to de-laminate, then with needle-nose pliers, pull the socket or component from the board. Sometimes the component simply dropped out.

At one point I heard a loud “plop”. Looking under the board, one of the larger surface-mounted chips had fallen off. That gave me an idea, could the 386 chip be de-soldered? I aimed the heat-gun directly at the area underneath. A few seconds later and it too hit the deck.

All in all, it was a successful haul.

Parts off the 386 motherboard


I also took apart an 8-bit ISA joystick card. It had some nice looking logic chips that I figured could be re-purposed. The real star though was the CPU itself:

Intel NG80386SX-20


The question comes up, what does one do with a crusty old 386 that’s nearly as old as I am? A quick search turned up this scanned copy of the Intel 80386SX datasheet. The chip has a 16-bit bus with 23 bits worth of address lines (bit 0 is assumed to be zero). It requires a clock that is double the chip’s operating frequency (there’s an internal divide-by-two). This particular chip runs internally at 20MHz. Nothing jumped out as being scary. Could I use this as a practice run for making an ARM computer module?
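The datasheet numbers above are easy to sanity-check with a few lines of arithmetic:

```python
# Quick check of the datasheet numbers: 23 address lines (A1..A23, with
# bit 0 implied zero) addressing 16-bit words, and a CLK2 input at twice
# the core frequency.

ADDRESS_LINES = 23
words = 1 << ADDRESS_LINES             # addressable 16-bit words
total_bytes = words * 2
print(total_bytes // 2**20, "MB")      # 16 MB physical address space

core_mhz = 20                          # this particular part's rating
clk2_mhz = core_mhz * 2                # internal divide-by-two
print(clk2_mhz, "MHz on the CLK2 pin")
```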

A dig around dug up some more parts:

More parts


In this pile we have…

I also have some SIMMs laying around, but the SDRAM modules look easier to handle since the controllers on board synchronise with what would otherwise be the front-side bus.  The datasheet does not give a minimum clock (although clearly this is not DC; DRAM does need to be refreshed) and mentions a clock frequency of 33MHz when set to run at a CAS latency of 1.  It just so happens that I have a 33MHz oscillator.  There are a couple of nits in this plan though:

  • the SDRAM modules are 3.3V, the CPU is 5V: no problem, there are level-conversion chips out there.
  • the SDRAM modules are 64 bits wide.  We'll have to buffer the output into eight 8-bit registers.  Writes do a read-modify-write cycle, and we use a 2-to-4 decoder to select the CE pins on two of the registers from address bits 1 and 2 from the CPU.
  • Each SDRAM module holds 32MB.  We have a 23-bit address bus, which with 16-bit words gives us a total address space of 16MB.  Solution: the old 8-bit computers of yesteryear used bank-switching to address more RAM/ROM than they had address lines for, we can interface an 8-bit register at I/O address 0x0000 (easily decoded with a stack of Schottky diodes and a NOT gate) which can hold the remaining address bits mapping the memory to the lower 8MB of physical memory.  We then hijack the 386’s MMU to map the 8MB chunks and use the page faults to switch memory banks.  (If we put the SRAM and ROM up in the top 1MB, this gives us ~7MB of memory-mapped I/O to play with.)

So, not show stoppers.  There's an example circuit showing interfacing an ATMega8515 to a single SDRAM chip for driving a VGA interface, and some example code, with comments in German. Unfortunately you'd learn more German in an episode of Hogan's Heroes than I know, but I can sort-of figure out the sequence used to read from and write to the SDRAM chip. Nothing looks scary there either.  This SDRAM tutorial seems to be a goldmine.

Thus, it looks like I've got enough bits to have a crack at it.  I can run the 386 from that 33MHz brick, which will give me a chip running at 16.5MHz.  Somewhere I've got the 40MHz brick from the motherboard laying around (I liberated that some time ago), but that can wait.

A first step would be to try interfacing the 386 chip to an AVR and feeding it instructions one step at a time, to check that it's still alive.  Then, the next steps should become clear.

Oct 31, 2015

Well, it seems the updates to Microsoft’s latest aren’t going as its maker planned. A few people have asked me about my personal opinion of this OS, and I’ll admit, I have no direct experience with it.  I also haven’t had much contact with Windows 8 either.

That said, I do keep up with the news, and a few things do concern me.

The good news

It’s not all bad of course.  Windows 8 saw a big shrink in the footprint of a typical Windows install, and Windows 10 continues to be fairly lightweight.  The UI disaster from Windows 8 has been somewhat pared back to provide a more traditional desktop with a start menu that combines features from the start screen.

There are some limitations with the new start menu, but from what I understand, it behaves mostly like the one from Windows 7.  The tiled section still has some rough edges though, something that is likely to be addressed in future updates of Windows 10.

If this is all that had changed though, I’d be happily accepting it.  Sadly, this is not the case.

Rolling-release updates

Windows has, since day one, been on a long-term support release model.  That is, they bring out a release, then they support it for X years.  Windows XP was released in 2001 and was supported until last year, for example.  Windows Vista is still on extended support, and Windows 7 will enter extended support soon.

Now, in the Linux world, we’ve had both long-term support releases and rolling release distributions for years.  Most of the current Linux users know about it, and the distribution makers have had many years to get it right.  Ubuntu have been doing this since 2004, Debian since 1998 and Red Hat since 1994.  Rolling releases can be a bumpy ride if not managed correctly, which is why the long-term support releases exist.  The community has recognised the need, and meets it accordingly.

Ubuntu are even predictable with their releases.  They release on a schedule.  Anything not ready for release is pushed back to the next release.  They do a release every 6 months, in April and October, and every 2 years the April release is a long-term support release.  That is, 8.04, 10.04, 12.04 and 14.04 are all LTS releases.  The LTS releases get supported for about 5 years, the regular releases for about 9 months.
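That cadence reduces to a one-line rule, trivially illustrated (ignoring the one-off 6.06 release, which slipped from April to June):

```python
# Ubuntu versions are numbered YY.MM, ship in April and October, and
# the April (MM == 04) release of every even year is an LTS.
# (The one-off 6.06, which slipped from April to June, is ignored.)

def is_lts(version):
    """True if an Ubuntu version string like '14.04' is an LTS release."""
    year, month = (int(part) for part in version.split("."))
    return month == 4 and year % 2 == 0

print([v for v in ("8.04", "8.10", "10.04", "13.04", "14.04") if is_lts(v)])
# ['8.04', '10.04', '14.04']
```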

Debian releases are basically LTS, unless you run Debian Testing or Debian Unstable.  Then you’re running rolling-release.

Some distributions like Gentoo are always rolling-release.  I’ve been running Gentoo for more than 10 years now, and I find the rolling releases rarely give me problems.  We’ve had our hiccups, but these days, things are smooth.  Updating an older Gentoo box to the latest release used to be a fight, but these days, is comparatively painless.

It took most of that 10 years to get to that point, and this is where I worry about Microsoft forcing the vast majority of Windows users onto a rolling-release model, as they will be doing this for the first time.  As I understand it, there will be four branches:

  1. Windows Insiders programme is like Debian Unstable.  The very latest features are pushed out to them first.  They are effectively running a beta version of Windows, and can expect many updates, many breakages, lots of things changing.  For some users, this will be fine, others it’ll be a headache.  There’s no option to skip updates, but you probably will have the option of resigning from the Windows Insiders programme.
  2. Home users basically get something like Debian Testing.  After updates have been thrashed out by the insiders, it gets force-fed to the general public.  The Home version of Windows 10 will not have an option to defer an update.
  3. Professional users get something more like the standard releases of Debian.  They’ll have the option of deferring an update for up to 30 days, so things can change less frequently.  It’s still rolling-release, but they can at least plan their updates to take place once a month, hopefully without disrupting too much.
  4. Enterprise users get something like the old-stable release of Debian.  Security updates, and they have the option to defer updates for a year.

Enterprise isn’t available unless you’re a large company buying lots of licenses.  If people must buy a Windows 10 machine, my recommendation would be to go for the professional version, then you have some right of veto, as not all the updates a purely security-related, some will be changing the UI and adding/removing features.

I can see this being a major headache for anyone who has to support hardware or software on Windows 10, however, since it's essentially the build number that becomes important: different release builds will behave differently.  Possibly different enough that things need much more testing and maintenance than vendors are used to.

Some are very poor at supporting Linux right now due to the rolling-release model of things like the Linux kernel, so I can see Windows 10 being a nightmare for some.

Privacy concerns

One of the big issues to be raised with Windows 10 is the inclusion of telemetry to “improve the user experience” and other features that are seen as an invasion of privacy.  Many things can be turned off, but it will take someone who’s familiar with the OS or good at researching the problem to turn them off.

Probably the biggest concern from my perspective as a network administrator is the WiFi Sense feature.  This is a feature in Windows 10 (and Windows Phone 8), turned on by default, that allows you to share WiFi passwords with your contacts.

If one of that person’s contacts then comes into range of your AP, their device contacts Microsoft’s servers which have the password on file, and can provide it to that person’s device (hopefully in a secured manner).  The password is never shown to the user themselves, but I believe it’s only a matter of time before someone figures out how to retrieve that password from WiFi Sense.  (A rogue AP would probably do the trick.)

We have discussed this at work where we have two WiFi networks: one WPA2 enterprise one for staff, and a WPA2 Personal one for guests.  Since we cannot control whether the users have this feature turned on or not, or whether they might accidentally “share” the password with world + dog, we’re considering two options:

  1. Banning the use of Windows 10 (and Windows Phone 8) devices on our guest WiFi network.
  2. Implementing a cron job to regularly change the guest WiFi password.  (The Cisco AP we have can be hit with SSH; automating this shouldn’t be difficult.)
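For option 2, the portable part is generating the fresh passphrase; the sketch below (all names are mine) does that with Python's standard secrets module, and leaves the site-specific SSH push to the AP as a hypothetical comment:

```python
# Sketch of option 2 (all names are mine).  The portable part is
# generating a fresh passphrase; pushing it to the AP over SSH is
# site-specific, so it is left as a hypothetical comment below.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def new_guest_passphrase(length=16):
    """Random guest passphrase; WPA2 allows 8-63 printable characters."""
    if not 8 <= length <= 63:
        raise ValueError("WPA2 passphrases must be 8-63 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

passphrase = new_guest_passphrase()
print(len(passphrase))                 # 16
# A cron job would then push it to the AP, e.g. (hypothetical command;
# the exact incantation depends on the AP's CLI):
#   ssh admin@guest-ap '...set the new WPA2 passphrase here...'
```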

There are some nasty points in the end-user license agreement too that seem to give Microsoft free rein to make copies of any of the data on the system.  They say personal information will be removed, but even with the best of intentions, it is likely that some personal information will get caught in the net cast by telemetry software.

Forced “upgrades” to Windows 10

This is the bit about Windows 10 that really bugs me.  Okay, Microsoft is pushing a deal where they’ll provide it to you for free for a year.  Free upgrades, yaay!  But wait: how do you know if your hardware and software is compatible?  Maybe you’re not ready to jump on the bandwagon just yet, or maybe you’ve heard news about the privacy issues or rolling release updates and decided to hold back.

Many users of Windows 7, 8 and 8.1 are now being force-fed the new release, whether they asked for it or not.

Now the problem with this is it completely ignores the fact that some do not run with an always-on Internet connection with a large quota.  I know people who only have a 3G connection, with a very small (1GB) quota.  Windows 10 weighs in at nearly 3GB, so for them, they’ll be paying for 2GB worth of overuse charges just for the OS, never mind what web browsing, emailing and other things they might have actually bought their Internet connection for.

Microsoft employees have been outed for showing such contempt before.  It seems so many there are used to the idea of an Internet connection that is always there and has a big enough quota to be considered “unlimited” that they have forgotten that some parts of the world do not have such luxuries.  The computer and the Internet are just tools: we do not buy an Internet connection just for the sake of having one.

Stopping updates

There are a couple of tools that exist for managing this.  I have not tested any of them, and cannot vouch for their safety or reliability.

  • BlockWindows (github link) is a set of scripts that, when executed, uninstall and disable most of the Windows 10-related updates on Windows 7 and 8/8.1.
  • GWX Control Panel is a (proprietary?) tool for controlling the GWX process.  The download is here.

My recommendation is to keep good backups.  Find a tool that will do a raw partition back-up of your Windows partition, and keep your personal files on a separate partition.  Then, if Microsoft does come a-knocking, you can easily roll back.  Hopefully after the “free upgrade” offer has expired (about this time next year), they will cease and desist from this practice.

Sep 28 2015
 

I’ve been doing a bit of thinking on my ride this morning.  A few weeks ago, a school student in the US decided to use his interest in electronics and his knowledge to make a digital clock.  Having gotten the circuit working, he decided to bring the device to school to show his physics teacher.

The physics teacher might not have been alarmed, but another teacher certainly was, and the student was frogmarched to the Principal’s office; the Principal then called the police.  It was a major uproar, and it shows just how paranoid a society (globally) we’ve become.

Back in 2001, I used to have a portable CD player which I’d listen to on my commutes to and from school.  It was a basic affair that took 4 AA cells, which were forever going flat.  I tried rechargeable cells, but wasn’t satisfied with their life either.  Having gotten fed up with that, I looked to alternatives.  The alternative I went for was a small 12V 7Ah SLA battery about the size of a house brick.

Yes, heavy, but I carried it in the backpack with my textbooks, and it worked well.  I could go a week on a single charge.  In addition, I could run not just a CD player, but any 12V device, including a small fan, which made me the envy of a lot of fellow students in the middle of summer.  (Our classrooms were not air conditioned.)  I still use a cigarette lighter extension lead/4-way adapter that I made back then to give me extra sockets on the bicycle.

If I tried it today, I half expect I’d be explaining this to the AFP.

It raises a real serious question about what our future generations are meant to do with their lives.  Yes, there’s clearly a danger in experimenting with these things.  That SLA battery, if it ruptured, could leak highly dangerous sulphuric acid.  If I charge it or discharge it too fast (e.g. by shorting the terminals), the internal resistance would build up heat inside the cells which would then start boiling the water in the electrolyte and gas would build up, possibly triggering the cell walls to fail.

But, I had contingency plans.  The battery was set up with fuses to cut power in the event of a short.  Cables were well insulated.  Terminals were protected.  It never caused an issue.  These days, I use LiFePO4s, which, while forgiving, also have their dangers.  I steer clear of LiPol cells since they are very volatile.

The point being, I had been experimenting with electronics from a very young age.  I also learned about computer programming from a very young age.  I learned about how they worked, and learned how to control them.  You could compare it to learning to ride a horse.

One [way] is to get on him and learn by actual practice how each motion and trick may be best met; the other is to sit on a fence and watch the beast a while, and then retire to the house and at leisure figure out the best way of overcoming his jumps and kicks.  The latter system is the safest, but the former, on the whole, turns out the larger proportion of good riders.

— Wilbur Wright in his speech “Some Aeronautical Experiments”, 18th September, 1901.
(source: David McCullough, “The Wright Brothers: The Dramatic Story-Behind-the-Story”)

I learned to ride a couple of “horses”.  One in particular, was the computer.  Understanding the electronics behind it greatly helped here.  I was already familiar with the concept of DC current by the time I hit university and I was well advanced in my understanding of how to control a computer.  What University specifically taught me was some discipline in how to structure code and the specifics of particular languages.  The bulk of my study was done long before I applied for any degrees.

There seems to be a thinking in today’s society that “task X is difficult, leave it to the professionals”.  There are some fields where one would do well to heed that advice: anything involving gas ducting or mains electricity, for example.

You can possibly get quite a bit of plumbing work done yourself; however, some professional oversight is usually a good idea.  You have a right to DIY in most cases, but rights come with responsibilities, and one of those is taking responsibility if something goes wrong.  They go together.

At (extra) low voltage and low current levels, there’s very little you can actually do that would result in serious harm.  If you go about things carefully in a controlled manner, this experimentation can be a great vehicle for serious study in a chosen field.  Computers, unless you’re doing something really risky like flashing boot firmware, are not easily “bricked” and can be recovered.  Playing with a second-hand old desktop (not a production machine) or a cheap machine like the plethora of ARM-based and AVR-based single-board computers available today is not likely to result in life-threatening injury.

Banning experimentation in such fields is not going to serve our community in the long term.  This is a typical knee-jerk reaction when someone’s experimentation is seen to be doing harm, even if, as in the US student’s case, the experimentation is completely benign.  Following this road over time is only going to lead to a nation of cave-dwelling hermits that shun technology as black magic.

Technology is mankind’s genie; you cannot simply stuff it back in the bottle.  The genie here is quite willing to grant us many wishes, but unlike the ones of myths and legends, this one expects some effort in return.  It is not simply going to vanish just because we’ve decided that it’s too dangerous.  We as individuals either need to study how the genie operates, or pay someone else who has studied how it operates.  If everyone chooses to do the latter, who is left to do the former?

Sep 27 2015
 

Well, lately I’ve been doing a bit of work hacking the firmware on the Rowetel SM1000 digital microphone.  For those who don’t know it, this is a hardware (microcontroller) implementation of the FreeDV digital voice mode: it’s a modem that plugs into the microphone/headphone ports of any SSB-capable transceiver and converts between analogue voice and FreeDV modem tones.

I plan to set this unit of mine up on the bicycle, but there are a few nits that I had.

  • There’s no time-out timer
  • The unit is half-duplex

If there’s no timeout timer, I really need to hear the tones coming from the radio to tell me it has timed out.  Others might find a VOX feature useful, and there’s active experimentation in the FreeDV 700B mode (the SM1000 currently only supports FreeDV 1600) which has been very promising to date.

Long story short, the unit needed a more capable UI, and importantly, it also needed to be able to remember settings across power cycles.  There’s no EEPROM chip on these things, and while the STM32F405VG has a pin for providing backup-battery power, there’s no battery or supercapacitor, so the SM1000 forgets everything on shut down.

ST do have an application note on their website on precisely this topic.  AN3969 (and its software sources) discuss a method for using a portion of the STM32’s flash for this task.  However, I found their “license” confusing.  So I decided to have a crack myself.  How hard can it be, right?

There’s 5 things that a virtual EEPROM driver needs to bear in mind:

  • The flash is organised into sectors.
  • These sectors when erased contain nothing but ones.
  • We store data by programming zeros.
  • The only way to change a zero back to a one is to do an erase of the entire sector.
  • The sector may be erased a limited number of times.

So on this note, a virtual EEPROM should aim to do the following:

  • It should keep tabs on what parts of the sector are in use.  For simplicity, we’ll divide this into fixed-size blocks.
  • When a block of data is to be changed, if the change can’t be done by changing ones to zeros, a copy of the entire block should be written to a new location, and a flag set (by writing zeros) on the old block to mark it as obsolete.
  • When a sector is full of obsolete blocks, we may erase it.
  • We try to put off doing the erase until such time as the space is needed.
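The first three flash properties above boil down to one line of arithmetic: programming flash is effectively an AND with what’s already there.  A minimal sketch of my own (an illustration, not code from the SM1000 firmware):

```c
#include <stdint.h>

/* Flash programming can only clear bits (1 -> 0): the result of a write
   is the AND of the old contents and the new value.  Only a full sector
   erase brings bits back to 1. */
static uint8_t flash_program_byte(uint8_t current, uint8_t value)
{
    return current & value;
}
```

An erased byte (0xFF) takes any value on the first write; once a bit is zero it stays zero until erase, which is why marking a block obsolete is done by writing zeros over its flags.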

Step 1: making room

The first step is to make room for the flash variables.  They will be directly accessible in the same manner as variables in RAM, however from the application point of view, they will be constant.  In many microcontroller projects, there’ll be several regions of memory, defined by memory address.  This comes from the datasheet of your MCU.

An example, taken from the SM1000 firmware, prior to my hacking (stm32_flash.ld at r2389):

/* Specify the memory areas */
MEMORY
{
  FLASH (rx)      : ORIGIN = 0x08000000, LENGTH = 1024K
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

The MCU here is the STM32F405VG, which has 1MB of flash starting at address 0x08000000. This 1MB is divided into (in order):

  • Sectors 0…3: 16kB each, starting at 0x08000000
  • Sector 4: 64kB, starting at 0x08010000
  • Sectors 5 onwards: 128kB each, starting at 0x08020000

We need at least two sectors, as when one fills up, we will swap over to the other. Now it would have been nice if the arrangement were reversed, with the smaller sectors at the end of the device.

The Cortex M4 CPU is basically hard-wired to boot from address 0; the BOOT pins on the STM32F4 decide how that address gets mapped.  The very first few words there form the interrupt vector table, and it MUST be the first thing the CPU sees.  Unless told to boot from external memory or system memory, address 0 is aliased to 0x08000000, i.e. flash sector 0.  Thus, if you are booting from internal flash, you have no choice: the vector table MUST reside in sector 0.

Normally, code and the interrupt vector table live together as one happy family.  We could use a couple of 128kB sectors, but 256kB is rather a lot for an EEPROM storing maybe 1kB of data tops.  Two 16kB sectors are just dandy; in fact, we’ll throw in a third one for free since we’ve got plenty to go around.

However, the first 16kB sector has to be reserved for the interrupt vector table, which will have that space to itself.

So here’s what my new memory regions look like (stm32_flash.ld at r2390):

/* Specify the memory areas */
MEMORY
{
  /* ISR vectors *must* be placed here as they get mapped to address 0 */
  VECTOR (rx)     : ORIGIN = 0x08000000, LENGTH = 16K
  /* Virtual EEPROM area, we use the remaining 16kB blocks for this. */
  EEPROM (rx)     : ORIGIN = 0x08004000, LENGTH = 48K
  /* The rest of flash is used for program data */
  FLASH (rx)      : ORIGIN = 0x08010000, LENGTH = 960K
  /* Memory area */
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  /* Core Coupled Memory */
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

This is only half the story, we also need to create the section that will be emitted in the ELF binary:

SECTIONS
{
  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >FLASH

  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */
    *(.rodata)         /* .rodata sections (constants, strings, etc.) */
    *(.rodata*)        /* .rodata* sections (constants, strings, etc.) */
    *(.glue_7)         /* glue arm to thumb code */
    *(.glue_7t)        /* glue thumb to arm code */
    *(.eh_frame)

    KEEP (*(.init))
    KEEP (*(.fini))

    . = ALIGN(4);
    _etext = .;        /* define a global symbols at end of code */
    _exit = .;
  } >FLASH…

There’s rather a lot here, and so I haven’t reproduced all of it, but this is the same file as before at revision 2389, but a little further down. You’ll note the .isr_vector is pointed at the region called FLASH which is most definitely NOT what we want. The image will not boot with the vectors down there. We need to change it to put the vectors in the VECTOR region.

Whilst we’re here, we’ll create a small region for the EEPROM.

SECTIONS
{
  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >VECTOR


  .eeprom :
  {
    . = ALIGN(4);
    *(.eeprom)         /* special section for persistent data */
    . = ALIGN(4);
  } >EEPROM


  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */

THAT’s better! Things will boot now. However, there is still a subtle problem that initially caught me out here. Sure, the shiny new .eeprom section is unpopulated, BUT the linker has helpfully filled it with zeros. We cannot program zeros back into ones! Either we have to erase it in the program, or we tell the linker to fill it with ones for us. Thankfully, the latter is easy (stm32_flash.ld at r2395):

  .eeprom :
  {
    . = ALIGN(4);
    KEEP(*(.eeprom))   /* special section for persistent data */
    . = ORIGIN(EEPROM) + LENGTH(EEPROM) - 1;
    BYTE(0xFF)
    . = ALIGN(4);
  } >EEPROM = 0xff

Credit: Erich Styger

We have to do two things.  One, we tell the linker we want the region filled with the pattern 0xff (the “= 0xff” after the region name).  Two, we make sure the fill actually happens by explicitly emitting the very last byte (the BYTE(0xFF) statement).  Otherwise, the linker will think, “Huh? There’s nothing here, I won’t bother!” and leave it as a string of zeros.

Step 2: Organising the space

Having made room, we now need to decide how to break this data up.  We know the following:

  • We have 3 sectors, each 16kB
  • The sectors have an endurance of 10000 program-erase cycles

Give some thought as to what data you’ll be storing.  This will decide how big to make the blocks.  If you’re storing only tiny bits of data, more blocks makes more sense.  If however you’ve got some fairly big lumps of data, you might want bigger blocks to reduce overheads.

I ended up dividing the sectors into 256-byte blocks.  I figured that was a nice round (binary sense) figure to work with.  At the moment, we have 16 bytes of configuration data, so I can do with a lot less, but I expect this to grow.  The blocks will need a header to tell you whether or not the block is being used.  Some checksumming is usually not a bad idea either, since that will clue you in to when the sector has worn out prematurely.  So some data in each block will be header data for our virtual EEPROM.

If we didn’t care about erase cycles, this would be fine: we could just make all blocks data blocks.  However, it’d be wise to track wear, and avoid erasing and attempting to use a depleted sector, so we need somewhere to record this.  256 bytes gives us enough space to stash an erase counter and a map of which blocks are in use within that sector.

So we’ll reserve the first block in the sector to act as this index for the entire sector.  This gives us enough room to have 16-bits worth of flags for each block stored in the index.  That gives us 63 blocks per sector for data use.

It’d be handy to be able to use this flash region for a few virtual EEPROMs, so we’ll allocate some space to give us a virtual ROM ID.  It is prudent to do some checksumming, and the STM32F4 has a CRC32 module, so in that goes, and we might choose to not use all of a block, so we should throw in a size field (8 bits, since the size can’t be bigger than 255).  If we pad this out a bit to give us a byte for reserved data, we get a header with the following structure:

        15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
 +0  |                  CRC32 Checksum                 |
 +2  |                   (continued)                   |
 +4  |       ROM ID        |       Block Index         |
 +6  |      Block Size     |        Reserved           |

So that subtracts 8 bytes from the 256 bytes, leaving us 248 for actual program data. If we want to store 320 bytes, we use two blocks, block index 0 stores bytes 0…247 and has a size of 248, and block index 1 stores bytes 248…319 and has a size of 72.
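Sketched as a C struct, the header and the offset arithmetic look something like this (the field names are my own guesses, not necessarily those used in the Codec2 sources):

```c
#include <stdint.h>

/* Hypothetical layout of the 8-byte block header described above. */
struct vrom_block_header {
    uint32_t crc32;      /* +0: CRC32, computed with this field zeroed */
    uint8_t  rom_id;     /* +4: which virtual ROM the block belongs to */
    uint8_t  block_idx;  /* +5: index of the block within the ROM image */
    uint8_t  size;       /* +6: payload bytes actually used (max 248) */
    uint8_t  reserved;   /* +7: padding for future use */
};

#define VROM_BLOCK_SZ 256u
#define VROM_DATA_SZ  (VROM_BLOCK_SZ - sizeof(struct vrom_block_header))

/* Which block holds a given byte offset, and where within its payload? */
static inline uint32_t vrom_block_of(uint32_t offset)  { return offset / VROM_DATA_SZ; }
static inline uint32_t vrom_offset_in(uint32_t offset) { return offset % VROM_DATA_SZ; }
```

For the 320-byte example, byte 319 lands in block 1 at payload offset 71, matching the arithmetic above.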

I mentioned there being a sector header, it looks like this:

        15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
 +0  |            Program Cycles Remaining             |
 +2  |                   (continued)                   |
 +4  |                   (continued)                   |
 +6  |                   (continued)                   |
 +8  |                  Block 0 flags                  |
+10  |                  Block 1 flags                  |
+12  |                  Block 2 flags                  |
  …

No checksums here, because it’s constantly changing.  We can’t re-write a CRC without erasing the entire sector, we don’t want to do that unless we have to.  The flags for each block are currently allocated accordingly:

        15 14 13 12 11 10  9  8  7  6  5  4  3  2  1     0
 +0  |                    Reserved                   | In use |

When the sector is erased, all blocks show up as having all flags set to ones, so the flags are considered “inverted”.  When we come to use a block, we mark the “in use” bit with a zero, leaving the rest as ones.  When we obsolete a block, we write its entire flags field as zeros.  We can set other bits here as we need for accounting purposes.

Thus we have now a format for our flash sector header, and for our block headers.  We can move onto the algorithm.
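The sector header and the inverted flag logic can be sketched like so (again, the names are illustrative rather than lifted from the sources):

```c
#include <stdint.h>

#define VROM_SECT_SZ   16384u
#define VROM_BLOCK_SZ  256u
#define VROM_BLOCK_CNT (VROM_SECT_SZ / VROM_BLOCK_SZ)  /* 64, incl. index */

#define VROM_FLAG_IN_USE (1u << 0)

/* Hypothetical layout of the index block at the start of each sector. */
struct vrom_sector_header {
    uint64_t cycles_remaining;             /* +0: program-erase cycles left */
    uint16_t block_flags[VROM_BLOCK_CNT];  /* +8: 16 bits of flags per block */
};

/* Erased flash reads as all ones, so the flags work "inverted":
   0xffff = free, in-use bit cleared = allocated, all zeros = obsolete. */
static inline int vrom_block_free(uint16_t f)     { return f == 0xffff; }
static inline int vrom_block_in_use(uint16_t f)   { return f != 0 && !(f & VROM_FLAG_IN_USE); }
static inline int vrom_block_obsolete(uint16_t f) { return f == 0; }
```

Note every state transition (free → in use → obsolete) only ever clears bits, so no erase is needed until the whole sector is retired.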

Step 3: The Code

This is the implementation of the above ideas.  Our code needs to worry about 3 basic operations:

  • reading
  • writing
  • erasing

This is good enough if the size of a ROM image doesn’t change (normal case).  For flexibility, I made my code so that it works crudely like a file, you can seek to any point in the ROM image and start reading/writing, or you can blow the whole thing away.

Constants

It is bad taste to leave magic numbers everywhere, so constants should be used to represent some quantities:

  • VROM_SECT_SZ=16384:
    The virtual ROM sector size in bytes.  (Those watching Codec2 Subversion will note I cocked this one up at first.)
  • VROM_SECT_CNT=3:
    The number of sectors.
  • VROM_BLOCK_SZ=256:
    The size of a block
  • VROM_START_ADDR=0x08004000:
    The address where the virtual ROM starts in Flash
  • VROM_START_SECT=1:
    The base sector number where our ROM starts
  • VROM_MAX_CYCLES=10000:
    Our maximum number of program-erase cycles

Our programming environment may also define some, for example UINTx_MAX.

Derived constants

From the above, we can determine:

  • VROM_DATA_SZ = VROM_BLOCK_SZ – sizeof(block_header):
    The amount of data per block.
  • VROM_BLOCK_CNT = VROM_SECT_SZ / VROM_BLOCK_SZ:
    The number of blocks per sector, including the index block
  • VROM_SECT_APP_BLOCK_CNT = VROM_BLOCK_CNT – 1
    The number of application blocks per sector (i.e. total minus the index block)
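The derived figures check out; as a quick sanity sketch (using the 8-byte header size implied by the block header structure above):

```c
/* Sanity-check the derived constants against the figures given above. */
#define VROM_SECT_SZ            16384
#define VROM_SECT_CNT           3
#define VROM_BLOCK_SZ           256
#define VROM_BLOCK_HDR_SZ       8    /* CRC32 + ROM ID + index + size + reserved */

#define VROM_DATA_SZ            (VROM_BLOCK_SZ - VROM_BLOCK_HDR_SZ)
#define VROM_BLOCK_CNT          (VROM_SECT_SZ / VROM_BLOCK_SZ)
#define VROM_SECT_APP_BLOCK_CNT (VROM_BLOCK_CNT - 1)
```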

CRC32 computation

I decided to use the STM32’s CRC module for this, which takes its data in 32-bit words.  There’s also the complexity of checking the contents of a structure that includes its own CRC.  I played around with Python’s crcmod module, but couldn’t find an arithmetic trick that would let the CRC field remain in place during the computation.

So I copy the entire block, headers and all to a temporary copy (on the stack), set the CRC field to zero in the header, then compute the CRC. Since I need to read it in 32-bit words, I pack 4 bytes into a word, big-endian style. In cases where I have less than 4 bytes, the least-significant bits are left at zero.
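The packing step can be sketched as a small helper (my own stand-in for feeding the STM32’s CRC data register, not the actual firmware code):

```c
#include <stddef.h>
#include <stdint.h>

/* Pack up to 4 bytes into a 32-bit word, big-endian style: the first
   byte lands in the most significant position.  When fewer than 4 bytes
   remain (avail < 4), the least significant bits are left at zero.  On
   the target, each word would be written to the CRC unit's data register. */
static uint32_t vrom_pack_word(const uint8_t *buf, size_t avail)
{
    uint32_t word = 0;
    size_t j;

    for (j = 0; j < 4; j++) {
        word <<= 8;
        if (j < avail)
            word |= buf[j];
    }
    return word;
}
```

A full block would be fed through this in steps of 4, after first taking a stack copy with the header’s CRC field zeroed.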

Locating blocks

We identify each block in an image by the ROM ID and the block index.  We need to search for these when requested, as they can be located literally anywhere in flash.  There are probably cleverer ways to do this, but I chose the brute force method.  We cycle through each sector and block, see if the block is allocated (in the index), see if the checksum is correct, see if it belongs to the ROM we’re looking for, then look and see if it’s the right index.

Reading data

To read from the above scheme, having been told a ROM ID (rom), a start offset and a size (the latter two in bytes), and given a buffer we’ll call out, we first need to translate the start offset to a sector, a block index and a block offset.  This is simple integer division and modulus.

The first and last blocks of our read, we’ll probably only read part of.  The rest, we’ll read entire blocks in.  The block offset is only relevant for this first block.

So we start at the block we calculate to have the start of our data range.  If we can’t find it, or it’s too small, then we stop there, otherwise, we proceed to read out the data.  Until we run out of data to read, we increment the block index, try to locate the block, and if found, copy its data out.
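The translation of a (start, size) byte range into per-block reads can be sketched like this (a simplification of my own that leaves out the locate step; 248 is the per-block payload from earlier):

```c
#include <stdint.h>

#define VROM_DATA_SZ 248u  /* payload bytes per 256-byte block */

struct vrom_span {
    uint32_t block;   /* block index within the ROM image */
    uint32_t offset;  /* offset into that block's payload */
    uint32_t len;     /* bytes taken from this block */
};

/* Break [start, start+size) into at most `max` block-sized spans.
   Returns the number of spans written to `out`. */
static unsigned vrom_plan_read(uint32_t start, uint32_t size,
                               struct vrom_span *out, unsigned max)
{
    unsigned n = 0;

    while (size > 0 && n < max) {
        uint32_t off  = start % VROM_DATA_SZ;
        uint32_t take = VROM_DATA_SZ - off;   /* only the first span has off > 0 */

        if (take > size)
            take = size;                      /* the last span may be partial too */

        out[n].block  = start / VROM_DATA_SZ;
        out[n].offset = off;
        out[n].len    = take;
        start += take;
        size  -= take;
        n++;
    }
    return n;
}
```

For the 320-byte example given earlier, this yields two spans: all 248 bytes of block 0, then the first 72 bytes of block 1.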

Writing and Erasing

Writing is a similar affair.  We look for each block; if we find one, we overwrite it by copying the old data to a temporary buffer, copying our new data in over the top, then marking the old block as obsolete before writing the new one out with a new checksum.

The trickery is in invoking the wear-levelling algorithm on an as-needed basis.  We mark a block obsolete by setting its header fields to zero, but when we run out of free blocks, we go looking for sectors that are full of obsolete blocks waiting to be erased.  When we encounter a sector that has been erased, we write a new header at the start and proceed to use its first data block.

In the case of erasing, we don’t bother writing anything out, we just mark the blocks as obsolete.

Implementation

The full C code is in the Codec2 Subversion repository.  For those who prefer Git, I have a git-svn mirror (yes, I really should move it off that domain).  The code is available under the Lesser GNU General Public License v2.1 and may be ported to run on any CPU you like, not just ST’s.

Aug 23 2015
 

Something got me thinking tonight.  We were out visiting a friend of ours and it was decided we’d go out for dinner.  Nothing unusual there, and there were a few places we could have gone for a decent meal.

As it happens, we went to a bowls club for dinner.  I won’t mention which one.

Now, I’d admit that I do have a bit of a rebel streak in me.  Let’s face it, if nobody challenged the status quo, we’d still be in the trees, instead someone decided they liked the caves better and so developed modern man.

In my case, I’m not one to make a scene, but the more uptight the venue, the more uncomfortable I am being there.  If a place feels it necessary to employ a bouncer, or feels it necessary to place a big plaque out front listing rules in addition to what ought to be common sense, that starts to get the alarm bells ringing in my head.

Some rules are necessary; most of these are covered by the laws that maintain order on our streets.  In a club or restaurant, okay, you want to put some limits: someone turning up near-starkers is definitely not on.  Nobody would appreciate someone covered in grease or other muck leaving a trail throughout the place everywhere they go, nor should others be subjected to some T-shirt with text or imagery that is in any way “offensive” to the average person.

(I’ll ignore the quagmire of what people might consider offensive.  I’m sure someone would take exception to me wearing largely blank clothing.  I, for one, abhor branding or slogans on my clothing.)

Now, for something that obstructs your ability to identify the said person, such as a full-face balaclava, burqa or full-face helmet: there are quite reasonable grounds.

As for me, I never used to wear anything on my head until later in high school when I noted how much less distracted I was from overhead lighting.  I’m now so used to it, I consider myself partially undressed if I’m not wearing something.  Something just doesn’t feel right.  I don’t do it to obscure identity, if anything, it’d make me easier to identify.  (Coolie hats aren’t common in Brisbane, nor are spitfire or gatsby caps.)

It’s worth pointing out that the receptionist at this club not only had us sign in with full name and address, but also checked ID on entry.  So misbehaviour would be a pointless exercise: they already had our details, and CCTV would have shown us walking through the door.

The bit that got me with this club was that, amongst the lengthy list of things they didn’t permit, they listed “mens headwear”.  It seemed a sexist policy to me.  Apparently women’s headwear was fine, and indeed, I did see some teens wearing baseball caps as I left; no one seemed to challenge them.

In “western society”, many moons ago, it was considered “rude” for a man to wear a hat indoors.  I do not know what the rationale behind that was.  Women were exempt then from the rule, as their headwear was generally more elaborate and required greater preparation and care to put on and take off.

I have no idea whether a man would be exempt if his headgear was as difficult to remove in that time.  I certainly consider it a nuisance having to carry something that could otherwise just sit on my head and generally stay out of my way.

Today, what people of either sex wear on their heads, if they wear anything at all, is mostly of a unisex nature, and generally not complicated to put on or remove.  So the reasoning behind the exemption would appear to be largely moot now.

Then there’s the gender equality movement to consider.  Women for years, fought to have the same rights as men.  Today, there’s some inequality, but the general consensus seems to be that things have improved in that regard.

This said, if doing something is not acceptable for men, I don’t see how being female makes it better or worse.

Perhaps then, in the interests of equal rights, we should reconsider some of our old customs and their exemptions in the context of modern life.

Feb 28 2015
 

Well, it’s been about 7½ years since I bought my first bike and started riding, and really, about 5 years since I started riding seriously as a means of transportation.

In late 2011, my father and I went halves in a pair of GPS/CB radio units; it was a 2-for-1 deal, so we bought these two units at about $400 each (normally they’d be about $700 individually). So there I started logging the distance I covered. I just used the in-built odometer on the GPS, resetting it when the bike went in for service.

When I got the mountain bike, I realised I needed to track the distance covered by each bike to ensure they all went in at their 1000km service on-time. So being a programmer by trade, I coded up a crude CGI/Perl script that used a SQLite back-end to log the odometer readings. It was a simple HTML form where I could enter the distance at regular intervals.  Crucially, it worked with the “feature phone” I used at the time.

The SQL views (no such thing as stored procedures in standard SQLite3) took care of actually calculating the differentials and so I used that to track my progress. So far so good. I’ve now had this in place since mid-2012 and I’ve brought in some of my data from early 2012, thus I’m now starting to see some trends.

Distances by year

Year Distance (km)
2012 5594.9
2013 4837.78
2014 4593.42

Am I getting lazy? Well, hard to say there. I go out less on the weekends and have also optimised my routes to reduce distances somewhat.  Some of this is weather-dependent, in the heat one does not feel like going outdoors.

Distance by month-of-year

Month Distance (km)
01 282.59
02 406.20
03 409.10
04 377.42
05 511.29
06 493.36
07 330.01
08 532.05
09 494.21
10 470.14
11 370.27
12 394.13

I’m not sure why there’s a lull in activity around July, but the most active months seem to be May and August.  The lull in January can be somewhat attributed to the end of the Christmas break.  I guess if anything, I should aim to be more active in July when the weather is the coolest.

Guess I’ll be keeping an eye on what happens over time with these stats and see if I can get them up a bit.

The following graph will continuously update as I pump data in. We’ll see what happens.

Distance by month-of-year

Jul 19 2014
 

My only mode of transport these days is a bicycle.  I might get lifts from other people on occasion, but normally I ride everywhere.

It’s a great way to get around, a good form of exercise, cheap, and whilst I won’t be breaking any speed records, it’s not overly time consuming.  I spend more time waiting for buses and trains than I do getting places on the bike.  The downside is deciding what to wear whilst cycling.  Car drivers have a hard enough time seeing a cyclist as it is, so I feel safer if I’m wearing, at the very least, light-coloured clothing, ideally day/night high-visibility gear compliant with AS/NZS 4602:1999.  I’ve been cycling as my main mode of transport now for nearly 5 years, and over this time I’ve tried a number of things for clothing.

Regular clothing

“Normal” clothing, was naturally what I started out with.  What I find is that it quickly wears out, particularly trousers, when subjected to this sort of treatment.  The cycling movement puts a lot of stress in the crutch and thus, I find they give out within a year or two.

Cycling is also very physical, so one will sweat a lot.  So at the very least you’ll want a shirt to wear cycling, and another to change into when you get to your destination.  The high-visibility polo shirts work well for this: they’re cheap and lightweight, and keep the sun off well without being too hot.

Work clothing

By this I mean industrial work clothing.  After finding that my trousers were wearing out at an alarming rate, I decided I’d go for more industrial type clothing.

I hate wearing belts, so I looked around and bought some overalls.  My preference is for ones that have a front zip.  A bloody pain in the arse to find in this country!  The likes of King Gee, Bisley, Worksense and many others tend to make those sorts for markets like NZ, but over here they tend to sell only stud-fastening ones, which I find more time consuming to fasten.  A zip: you’re done in about 2 seconds; studs, you’ll be clipping them together for about 10.  But I digress…

The ones I found were medium-weight ones, 290gsm or something like that.  In the winter, they’re okay, but once the fabric gets soaked with sweat one’s body temperature then becomes rather uneven.  In summer they’re often too hot to consider.

Lighter-weight ones might fare better in the sweat stakes, not sure about durability.  Given the high cost ($70~$120 a pair) I’ll just have to keep looking.

Ones made out of the same material as the high-visibility polo shirts could work well, no idea where to find them though if they exist.

Seeking the all-weather cycling suit

Some at this point would be screaming at me “why not lycra”?  Well, I’ve never been a fan of lycra and have no intention of becoming a MAMIL.

One evening coming home a few weeks ago, we had some very windy weather. It’s mid-winter right now, and this wind was going right through me. My clothes were wet with sweat, and with the wind, made the cold weather that much worse.

This got me thinking: what have I got or can I get, that will block the wind, without making me sweat ridiculous amounts?  It’s presently winter, and so now’s a good time to go try an experiment, and see how they fare as the weather patterns shift towards the more humid summer weather.  If I’m still wearing this clothing in July 2015, I’ll be onto something.

Breathalon spray coveralls

I had some Breathalon coveralls lying around, previously I had worn these in wet weather, and found they are not bad.

I bought this pair for about $15 off eBay, but they’re as rare as hen’s teeth. One company sells them for about the AU$150 mark. So not the cheapest. Amongst my gripes: they’re not the most comfortable fit, and they have a one-way zip, which is an annoyance when nature calls. Apart from that though, they’re a bright yellow, and they’re breathable.

The other gripe I have is no pockets: this particular pair I tried cutting access slits in to gain access to the pockets in my trousers. This proved to be unwise, they now leak in wet weather, so I’ll have to look at sealing those slits somehow.

I tried them one week and found I sweated less than I did wearing other clothing. With just a lycra stinger suit underneath, I got to work mostly dry and comfortable. This was in dry weather; summer humidity might be another matter, but in bright sunny winter weather they were fine. However, they're very hard to get hold of, and still quite expensive.

That said, they’re probably 60% of the way there.

Disposable clothing

With the above experiment being largely successful, I considered what else would make the grade. The Breathalon coveralls were okay, but they lacked some features. Could I find some material and make my own?

Will Rietveld provided the inspiration for a cheap alternative: Tyvek coveralls. These are about AU$10 a pair, are generally white in colour (okay, not strictly daytime high-vis, but at least not black like motorcycle rain suits), very lightweight, and apparently not much different from the old Gore-Tex for breathability.

Before doing this, I did some research.  I had seen these before but dismissed the idea, thinking: they're disposable, surely they won't last!  Looking around, I found Barefoot Jake's article giving them the thumbs up, and Ken K's forum post giving them the thumbs down.  In the forum post, the reported failure was in the seams; the other two articles mention taping the seams to prevent this problem.

For the cost, I thought it worth giving a go. There are a few different fabrics used in this sort of clothing, Tyvek being just one.  They're usually described in terms of protection classes.

Class 6 coveralls tend to be very flimsy, made from single-layered polypropylene, and are by far the cheapest at ~AU$5 a pair.  You can just about see through them; wind and water pass right through.  Maybe you can get some in a bright colour, in which case they're about as good as a high-vis vest.  For keeping wind and water out: useless.

Class 5 coveralls are made from slightly heavier material such as SMS fabric and are more expensive (~AU$8 a pair).  They’re more opaque (although you can still see clothing through these), will repel water and light spray and block a small amount of wind.  If you’re like me, and a bit self-conscious, you could wear these over the top of more conventional cycle clothing.

I found that water will pool on the fabric, and they are a bit more breathable.  However, the slight transparency is a little disconcerting.  They’re worth a look.

Class 4 coveralls are used for things like asbestos removal.  Materials vary, but amongst these are the Tyvek ones recommended in Will's article.  They can be had for about AU$10 a pair.

I decided to start with these, buying three pairs.  I noted that the seams were taped in bright orange; the taping suggested someone had noticed this particular failure mode and paid close attention to the problem.  These ones I think are made of Hazguard MP4, a material similar to Tyvek but with a plastic-like coating.

As I'm after a single-piece suit, I dispensed with the scissors.  When I got home, I grabbed a pair, turned a tap on, and ran the water over them to see what the waterproofing was like.  The water pooled, and running my hand under the pool did not reveal any leaks, so from that perspective they should do exactly what I'm after.

Things were getting draughty outside, so I put the pair on.  After wearing them for a few hours, basically just pottering around the house, I hadn't broken out into a ball of sweat, so the breathability was there; a PVC suit would have had me sweating like a pig by then.  I wore them on my way into work to try them out.

First experiments with Class 4 coveralls

The first thing that became apparent: as I cycled, the back part ballooned out.  Not necessarily a bad thing, as it made me very obvious to drivers by enlarging my apparent size.  Pedalling appeared to act like a pump, pushing air into the suit, where it seemed to get trapped.  Like in Will's experiment, I found I was starting to sweat after about 20 minutes, and when I got to work, I was noticeably more sweaty.  However, it was just humidity: I didn't feel like I was overheating, nor did I feel cold when the wind blew.

So not quite there, but close.  I can buy Tyvek material on a roll cheap enough, so maybe with some work, we can improve on this.

Class 5 coveralls experiment

Since the humidity really did build up quickly, I thought maybe there was something a little more breathable.  I bought a pair of coveralls in an SMS-type fabric.  The seams are not taped, so I suspect these will probably blow out at some point.  I did the same waterproofness test and found the water pooled there also; however, they're only considered splash resistant, so I suspect the water would seep through eventually.

It was at this point I noticed they were slightly more transparent.  So the following Monday I cycled in them, with one of my lycra stinger suits underneath.  I got to work not quite as sweaty as the previous week, but still with a noticeable amount of moisture.

One hypothesis: with the Breathalon suit, I also had my stinger suit underneath.  Maybe that was helping by soaking up the sweat rather than letting it bead on my skin, allowing it to evaporate more efficiently?

Class 4 + stinger suit

I tried the stinger suit underneath the class 4 coveralls and found the amount of sweat hadn't changed.  In fact, doing this made things worse: the moist air didn't dissipate fast enough, and once I cooled down, the cold sweat kept me a little too cool.  Without the stinger suit, I'd eventually dry out inside the coveralls after about 15 minutes; with it, I was still damp after 30.

Alternative options

So I hit the web again.  Was the answer to buy another pair of spray coveralls like the Breathalon pair?  There aren't too many options here in Australia.  Elliots did make some out of their Zetel material, but they've stopped making those (a pity, as they had pockets!).  Castle Clothing over in the UK make something that looks ideal; alas, I emailed them to see if they had an Australian distributor, and I'm yet to hear back.

Neither of these options are meant for cycling.  Looking around I saw the BikeSuit.  Clearly Olaf Wit had a similar idea, and actually got his to production.  A few comments:

  • The BikeSuit comes in one colour: black.  There are some reflective stripes, so I guess that's kinda class N (night-time, i.e. reflective) high visibility, but I'd like class D (daytime, i.e. bright colour) too.  In fact, if I had to choose between them, I'd take class D over class N.
  • The idea of using ventilation to prevent sweat build-up looks like just what the doctor ordered.  That said, it's worn over regular clothes, and I sweat in regular clothes without any waterproof gear over the top; surely this will not improve the situation?
  • The suit packs up into a bag about the volume of two soccer balls.
  • Watching the video, it appeared cumbersome to put on: there are zips everywhere.  The fellow takes it out of its bag at time 0:20; at 0:50 he's still adjusting things, and 10 seconds later he's ready to start cycling.
  • They cost over US$340.  Sure, breathable and durable fabric can be expensive, but ouch!

As for the class 4 coveralls: I timed myself, and it took about 50 seconds to be fully zipped up, wearing work boots which I did not remove.  About the only things the BikeSuit has over the disposable coveralls are ventilation, durability and built-in shoe covers.  It loses on price, availability and visibility.

Poor man’s “bike suit”?

That got me thinking, could I turn these coveralls into a poor man’s bike suit?  I observed how the back of my coveralls ballooned out, what if I made some ventilation holes?

I tried making 10 small holes just below the line of elastic at the back.  I covered the area with plastic tape first to give the material some reinforcement, then punched the holes.  The next day I got to work not quite sweat-free, but certainly much drier than before: about on par with my experiment in the Breathalon suit.

I'm thinking that if I cut a horizontal slit about 30cm long, then glue (sewing doesn't work well with Tyvek) a triangular patch of mesh fabric, maybe 40cm wide and 60cm tall, to the inside, that would allow the coveralls to vent.  Fold the material over at the bottom so the bottom of the slit is covered by a layer of material, or use some sheet Tyvek to make a flap, and I think I might be onto a low-cost alternative.  Tier Gear sell sheet Tyvek, so a metre or two of that would suffice for the extra flaps needed.

As for day/night high visibility: such coveralls exist.  More expensive, obviously, but they do exist.

The only real question is durability.  Thankfully these things pack up small and are lightweight enough that I can keep a spare pair on the bike for wardrobe-malfunction emergencies.  They should be good for WICEN events too: often I'm out on a checkpoint in the wind and rain.  Time will be the ultimate test; we shall see.