Sep 13 2017
 

So it seems that the Same Sex Marriage postal votes are finally being sent around.  This is good news in a way: we get to have a say, and hopefully we can put the matter to bed one way or the other.

No more umming and ahhing, which I’m frankly sick and tired of, as I feel there are more pressing needs.  Yes, it’s important, but we have two nuclear-armed, crazy-haired nutters at opposite sides of the Pacific ready to light the planet up like a neon sign!

I’m in support of the legislation changing, by the way.  I think same-sex couples are entitled to the same rights, and it wasn’t that long ago that marriage was restricted not just to couples of the opposite sex, but to couples of the same “race” and religion.

To quote a song by John Williamson: “They’d chain you up to a boab tree, for kissing an Aborigine!”

So to my way of thinking, society changes.  What was taboo yesterday, we don’t think twice about today.  An Anglican family sending their children to a Catholic school would have been heresy years ago… but for my sister and me, that is exactly what happened.  The world doesn’t seem to have imploded as a result.

The status quo regarding marriage is a hang-over from the days when the Church was the only place you could get married, and ruled with far greater weight than it does today.  That is no longer the case, so it no longer makes sense to hang onto this concept.

Anyway… my opinions on this are beside the point.  In spite of the good intentions, it looks as if the postal vote envelopes suffer one serious flaw: with sufficient light, they are see-through!

So my proposal: Put a thin piece of card in with the postal vote to block the light.  Not thick enough that it might cause the envelope to jam or interfere with sorting equipment, just opaque enough to prevent the contents being visible.  A small piece of black paper would likely do the job nicely.

Sure, the ABS will have a little more paper to dispose of, but at least our votes will be secure, and people won’t be able to “manipulate” the vote by snooping on sealed envelopes and discarding the ones that disagree with their opinions.  At least then we won’t be wasting $122M.

Mar 26 2017
 

Yesterday’s post was rather long, but was intended mostly for technical audiences outside of amateur radio.  This post serves as a brain dump of volatile memory before I go to sleep for the night.  (Human conscious memory is more like D-RAM than one might realise.)

Radio interface

So, many in our group use packet radio TNCs already, with a good number using the venerable Kantronics KPC3.  These have a DB9 port that connects to the radio and a second DB25 RS-232 port that connects to the computer.

My proposal: we make an audio interface that either plugs into that DB9 port and re-uses the interface cables we already have, or directly into the radio’s data port.

This should connect to an audio interface on the computer.

For EMI’s sake, I’d recommend a USB sound dongle like this, or these, or this as that audio interface.  I looked on Jaycar and did see this one, which would also work (and burn a hole in your wallet!).

If you walk in and the asking price is more than $30, I’d seriously consider these other options.  Of those options, U-Mart are here in Brisbane; go to their site, order a dongle then tell the site you’ll come and pick it up.  They’ll send you an email with an order number when it’s ready, you just need to roll up to the store, punch that number into a terminal in the shop, then they’ll call your name out for you to collect and pay for it.

Scorptec are in Melbourne, so you’ll have to have items shipped, but are also worth talking to.  (They helped me source some bits for my server cluster when U-Mart wouldn’t.)

USB works over two copper pairs; one delivers +5V and 0V, the other is a differential pair for data.  In short, the USB link should be pretty immune from EMI issues.

At worst, you should be able to deal with it with judicious application of ferrite beads to knock down the common mode current and using a combination of low-ESR electrolytic and ceramic capacitors across the power rails.

If you then keep the analogue cables as short as absolutely possible, you should have little opportunity for RF to get in.

I don’t recommend the Tigertronics SignaLink interfaces: they use cheap and nasty isolation transformers that lead to serious performance issues.

Receive audio

For the receive audio, we take the audio from the radio and feed it via a potentiometer to the tip of a 3.5mm TRS (“phone”) plug, with the sleeve going to common.  This plugs into the Line-In or Microphone input on the sound device.

Push to Talk and Transmit audio

I’ve bundled these together for a good reason.  The conventional way for computers to drive PTT is via an RS-232 serial port.

We can do that, but we won’t unless we have to.

Unless you’re running an original SoundBLASTER card, your audio interface is likely stereo.  We can get PTT control via an envelope detector forming a minimal-latency VOX control.

Another 3.5mm TRS plug connects to the “headphone” or “line-out” jack on our sound device and breaks out the left and right channels.

The left and right channels from the sound device should be fed into the “throw” contacts on two single-pole double-throw toggle switches.

The select pin (mechanically operated by the toggle handle) on each switch is thus used to select the left or right channel.

One switch’s select pin feeds into a potentiometer, then to the radio’s input.  We will call that the “modulator” switch; it selects which channel “modulates” our audio.  We can again adjust the gain with the potentiometer.

The other switch first feeds through a small Schottky diode then across a small electrolytic capacitor (to 0V) then through a small resistor before finally into the base of a small NPN signal transistor (e.g. BC547).  The emitter goes to 0V, the collector is our PTT signal.

This is the envelope detector we all know and love from our old experiments with crystal sets.  In theory, we could hook a speaker between the collector and a power source and listen to AM radio stations, but in this case, we’ll be sending a tone down this channel to turn the transistor, and thus our PTT, on.

The switch feeding this arrangement we’ll call the “PTT” switch.

By using this arrangement, we can use either audio channel for modulation or PTT control, or we can use one channel for both.  1200-baud AFSK, FreeDV, etc, should work fine with both on the one channel.

If we just want to pass through analogue audio, then we probably want modulation separate, so we can hold the PTT open during speech breaks without having an annoying tone superimposed on our signal.
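Before committing solder, the behaviour of that diode-RC-transistor chain is easy to sanity-check numerically.  Below is a minimal simulation sketch of the envelope detector; the component values, tone frequency and diode/transistor thresholds are assumptions for illustration, not measured values from a real build:

```python
# Quick simulation of the diode/RC envelope detector driving the PTT
# transistor.  All component values here are illustrative assumptions.
import numpy as np

FS = 48000                  # sound card sample rate (Hz)
TONE = 1000                 # PTT tone frequency (Hz)
R, C = 10e3, 2.2e-6         # assumed discharge resistor and capacitor
V_DIODE, V_BE = 0.3, 0.6    # Schottky drop and BC547 turn-on, roughly

t = np.arange(0, 0.2, 1.0 / FS)
burst = (t > 0.05) & (t < 0.15)                   # tone on for 100ms
audio = burst * np.sin(2 * np.pi * TONE * t)      # 1.0 = full scale

rect = np.clip(audio - V_DIODE, 0.0, None)        # half-wave rectified

decay = np.exp(-1.0 / (R * C * FS))               # cap discharges via R
env = np.zeros_like(rect)
for i in range(1, len(env)):
    env[i] = max(rect[i], env[i - 1] * decay)     # diode charges the cap

ptt = env > V_BE                                  # transistor conducts
print("PTT asserted %.2f ms after tone start" %
      (1000 * (np.argmax(ptt) / FS - 0.05)))
```

The attack time comes out at a fraction of a millisecond, which is where the minimal-latency claim comes from; the release time is set by the RC product.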

It may be prudent to feed the base of that NPN via a second resistor, running off to the RTS pin on an RS-232 interface.  This would let us use software that relies on RS-232 PTT control, which can be added by way of a USB-RS232 dongle.

The cheap Prolific PL-2303 ones sold by a few places (including Jaycar) will work for this.  (If your software expects a 16550 UART interface on port 0x3f8 or similar, consider running it in a virtual machine.)

Ideally though, this should not be needed, and if added, can be left disconnected without harm.
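For what it’s worth, driving that RTS line from software is trivial.  A sketch using pyserial; the device path is an assumption, adjust for your dongle:

```python
# Keying PTT via the RTS line of a USB-RS232 dongle using pyserial.
# /dev/ttyUSB0 is an assumed device path; adjust for your system.
import time
import serial

ser = serial.Serial("/dev/ttyUSB0")   # e.g. a PL-2303 dongle
ser.rts = True                        # assert RTS: NPN conducts, PTT on
time.sleep(5)                         # keyed up for 5 seconds
ser.rts = False                       # release PTT
ser.close()
```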

Software

There are a few “off-the-shelf” packages that should work fine with this arrangement.

AX.25 software

AGWPE on Windows provides a software TNC.  On Linux, there’s soundmodem (which I have used, and presently mirror) and Direwolf.

These shouldn’t need a separate PTT channel; it should be sufficient to make the preamble long enough to engage PTT and rely on the envelope detector recognising the packet.

Digital Voice

FreeDV provides an open-source digital voice platform for Windows, Linux and MacOS X.

This tool also lets us send analogue voice.  Digital voice should be fine: the first frame might get lost, but as a frame is 40ms, we just wait before we start talking, as we would on regular analogue radio.

For the analogue side of things, we would want tone-driven PTT.  I’m not sure if that’s supported, but hey, we’ve got the source code, and yours truly has worked with it; it shouldn’t be hard to add.

Slow-scan television

The two to watch here would be QSSTV (Linux) and EasyPal (Windows).  QSSTV is open-source, so if we need to make modifications, we can.

I’m not sure who maintains EasyPal these days; it’s not Eric VK4AES, as he’s no longer with us (RIP, and thank-you).  Here, we might need an RS-232 PTT interface, which as discussed, is not a hard modification.

Radioteletype

Most of this is covered by FLDigi.  Modes with a fairly consistent duty cycle will work fine with the VOX PTT, and once again, we have the source, so we can make the others work.

Custom software ideas

So we can use a few off-the-shelf packages to do basic comms.

  • We need auditability of our messaging system.  For analogue FM, we can just use a VOX-like function on the computer to record individual received messages, and to record outgoing traffic.  Text messages and files can be logged.
  • Ideally, we should have some digital signing of logs to make them tamper-resistant.  Then we can mathematically prove what was sent.
  • In a true emergency, it may be necessary to encrypt what we transmit.  This is fine: we’re allowed to do this in such cases, and we can always turn our audited logs over to the authorities anyway.
  • Files will be sent as blocks which are forward-error corrected (or forward-erasure coded).  We can use a block cipher such as AES-256 to encrypt these blocks before FEC.  OpenPGP would work well here rather than doing it from scratch; just send the OpenPGP output using FEC blocks.  It should be possible to pick out the symmetric key at the receiving end for auditing, if asked for by Government.  DIY is not necessary; the building blocks are there.
  • Digital voice is a stream; we can use block ciphers, but that introduces latency, and there’s always the issue of bit errors.  Stream ciphers, on the other hand, work by generating a key stream, then XOR-ing that with the data.  So long as we can keep sync in the face of bit errors, use of a stream cipher should not impair noise immunity.  (See the sketch after this list.)
  • Signal fade is a worse problem.  I suggest a cleartext (3-bit? 4-bit?) Gray-code sync field: the receiver can time the length of a fade, estimate the number of lost frames, then use the field to re-sync.
  • There are more than a dozen stream ciphers to choose from.  Some promising ones are ACHTERBAHN-128, Grain 128a, HC-256, Phelix, Py, the Salsa20 family, SNOW 2/3G, SOBER-128, Scream, Turing, MUGI, Panama, ISAAC and Pike.
  • Most (all?) stream ciphers are symmetric.  We would have to negotiate/distribute a key somehow: either use Diffie-Hellman, or send a generated key as an encrypted file transfer (see above).  The key and both the encrypted and decrypted streams could be made available to Government if needed.
  • The software should be capable of:
    • Real-time digital voice (encrypted and clear; the latter being compatible with FreeDV)
    • File transfer (again, clear or encrypted using OpenPGP, with good FEC; files will be cryptographically signed by the sender)
    • Voice mail and SSTV, implemented using file transfer.
    • Radioteletype modes (perhaps PSK31, Olivia, etc), with logs made.
    • Analogue voice pass-through, with recordings made.
    • All messages logged and time-stamped, received messages/files hashed, hashes cryptographically signed (OpenPGP signature)
    • Operation over packet networks (AX.25, TCP/IP)
    • Standard message forms with some basic input validation.
    • Ad-hoc routing between interfaces (e.g. SSB to AX.25, AX.25 to TCP/IP, etc) should be possible.
  • The above stack should ideally work on low-cost single-board computers that are readily available and low-power.  Linux support will be the highest priority; Windows, MacOS X and BSD are nice-to-haves.
  • GNU Radio has building blocks that should let us do most of the above.
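To make the stream-cipher idea concrete (the sketch promised above), the following toy illustrates per-frame keystream generation, with SHAKE-256 from the Python standard library standing in for whichever real cipher gets chosen.  The 96-byte frame and the all-zero key are made-up examples.  Because the keystream depends only on the key and the frame number, a receiver that re-syncs after a fade can resume decrypting at any frame:

```python
# Toy illustration of per-frame keystream generation for digital voice.
# SHAKE-256 stands in for a real stream cipher; the frame size and key
# below are made-up examples.
import hashlib

KEY = bytes(32)        # placeholder; negotiate via DH or file transfer
FRAME_BYTES = 96       # assumed payload of one 40ms voice frame

def keystream(frame_no: int) -> bytes:
    """Keystream for one frame, a function of key + frame number only."""
    seed = KEY + frame_no.to_bytes(8, "big")
    return hashlib.shake_256(seed).digest(FRAME_BYTES)

def crypt(frame_no: int, payload: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(payload, keystream(frame_no)))

# A bit error corrupts only that bit; a fade loses whole frames.  The
# receiver estimates how many frames were lost, resumes at that frame
# number, and the keystream is back in sync.
ct = crypt(1234, b"\x00" * FRAME_BYTES)
assert crypt(1234, ct) == b"\x00" * FRAME_BYTES
```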
Mar 25 2017
 

So, there’s been a bit of discussion lately about our communications infrastructure. I’ve been doing quite a bit of thinking about the topic.

The situation today

Here in Australia, a lot of people are being moved over to the National Broadband Network… with the analogue fixed line phone (if it hasn’t disappeared already) being replaced with a digital service.

For many, their cellular “mobile” phone is their only means of contact. Much more than the over-glorified two-way radios that were the pre-cellular car phones used by the social elites in the early 70s, or the slightly more sophisticated and tennis-elbow-inducing AMPS hand-held mobile phones we saw in the 80s, mobile phones today are truly versatile and powerful hand-held computers.

In fact, they are more powerful than the teen-aged computer I am typing this on. (And yes, I have upgraded it; 1GB RAM, 250GB mSATA SSD, Linux kernel 4.0… this 2GHz P4 still runs, and yes I’ll update that kernel in a moment. Now, how’s that iPhone 3G going, still running well?)

All of these devices are able to provide data communications throughput in the order of millions of bits per second, and outside of emergencies, are generally very reliable.

It is easy to forget just how much needs to work properly in order for you to receive that funny cat picture.

Mobile networks

One thing that is not clear about the NBN is what happens when the power is lost. The electricity grid is not infallible, and requires regular maintenance; while reliability is good, it is not guaranteed.

For FTTP users, battery backup is an optional extra. If you haven’t opted in, then your “land line” goes down when the power goes out.

This is not a fact that people think about. Most will say, “that’s fine, I’ve got my mobile” … but do you? The typical mobile phone cell tower has several hours of battery back-up, and can be overwhelmed by traffic even in non-emergencies. They are fundamentally engineered to a cost, thus compromises are made on how long they can run without back-up power, and how much call capacity they carry.

In the 2008 storms that hit The Gap, I had no mobile telephone coverage for 2 days. My Nokia 3310 would occasionally pick up a signal from a tower in a neighbouring suburb such as Keperra, Red Hill or Bardon, and would thus receive the odd text message… but rarely could it muster the effective radiated power to reply or make calls. (Yes, and Nokia did tell me that internal antennas surpassed the need for external ones. An 850MHz Yagi might’ve worked!)

Emergency Services

Now, you tell yourself, “Well, the emergency services have their own radios…”, and this is correct. They do have their own radio networks, and they too are generally quite reliable. But they have their problems. The Emergency Alerting System employed in Victoria was having capacity problems as far back as 2006 (emphasis mine):

A high-priority project under the Statewide Integrated Public Safety Communications Strategy was establishing a reliable statewide paging system; the emergency alerting system. The EAS became operational in 2006 at a cost of $212 million. It provides coverage to about 96 per cent of Victoria through more than 220 remote transmitter sites. The system is managed by the Emergency Services Telecommunications Agency on behalf of the State and is used by the CFA, VICSES and Ambulance Victoria (rural) to alert approximately 37,400 personnel, mostly volunteers, to an incident. It has recently been extended to a small number of DSE and MFB staff.

Under the EAS there are three levels of message priority: emergency, non-emergency, and administrative. Within each category the system sends messages on a first-in, first-out basis. This means queued emergency messages are sent before any other message type and non-emergency messages have priority over administrative messages.

A problem with the transmission speed and coverage of messages was identified in 2006. The CFA expressed concern that areas already experiencing marginal coverage would suffer additional message loss when the system reached its limits during peak events.

To ensure statewide coverage for all pagers, in November 2006 EAS users decided to restrict transmission speed and respond to the capacity problems by upgrading the system. An additional problem with the EAS was caused by linking. The EAS can be configured to link messages by automatically sending a copy of a message to another pager address. If multiple copies of a message are sent the overall load on the system increases.

By February 2008 linking had increased by 25 per cent.

During the 2008 windstorm in Victoria the EAS was significantly short of delivery targets for non-emergency and administrative messages. The Emergency Services Telecommunications Agency subsequently reviewed how different agencies were using the system, including their message type selection and message linking. It recommended that the agencies establish business rules about the use of linking and processes for authorising and monitoring de-linking.

The planned upgrade was designed to ensure the EAS could cope better with more messages without the use of linking.

The upgrade was delayed several times and rescheduled for February 2009; it had not been rolled out by the time of Black Saturday. Unfortunately this affected the system on that day, after which the upgrade was postponed indefinitely.

I can find mention of this upgrade taking place around 2013. From what I gather, it did eventually happen, but it took a roasting from Mother Nature to make it happen. The lesson here is that even purpose-built networks can fall over, thus particularly in major incidents, it is prudent to have a back-up plan.

Alternatives

For the lay person, CB radio can be a useful tool for short-range (longer-than-yelling-range) voice communications. UHF CB will cover a few kilometres in urban environments and can achieve quite long distances if good line-of-sight is maintained. They require no apparatus license, and are relatively inexpensive.

It is worth having a couple of cheap ones, a small torch and a packet of AAA batteries (stored separately) in the car or in a bag you take with you. You can’t use them if they’re in a cupboard at home and you’re not there.

The downside with the hand-helds, particularly the low-end ones, is effective radiated power. They have small “rubber ducky” antennas, optimised for size, and typically limited transmit power; some can do the 5W limit, but most will be 1W or less.

If you need a bit more grunt, a mobile UHF CB set and magnetic mount antenna could be assembled and fitted to most cars, and will provide 5W transmit power, capable of about 5-10km in good conditions.

HF (27MHz) CB can go further; with 12W peak envelope power, it is possible to get across town with one, even interstate or overseas when conditions permit. These too are worth looking at, and many can be had cheaply second-hand. They require a larger antenna to be effective, however, and are less common today.

Beware of fakes though… A CB radio must meet “type approval”; just being technically able to transmit in that band doesn’t automatically make it a CB. It must meet all aspects of the Citizens Band Radio Service Class License to be classified as a CB.

If it does more than 5W on UHF, it is not a UHF CB. If it mentions a transmit range outside of 476-478MHz, it is not a UHF CB.  Programming it to do UHF channels doesn’t change this.

Similarly, if your HF CB radio can do 26MHz (NZ CB, not Australia), uses FM instead of SSB/AM (UK CB, again not Australia), does more than 12W, or can do 28-30MHz (10m amateur), it doesn’t qualify as being a CB under the class license.

Amateur radio licensing

If you’ve got a good understanding of high-school mathematics and physics, then a Foundation amateur radio license is well within reach.  In fact, I’d strongly recommend it for anyone doing first-year Electrical Engineering, as it will give you a good practical grounding in electrical theory.

With that license, you get to use up to 10W of power (double what UHF CB gives you; 3dB can matter!) and gain access to four HF, one VHF and one UHF band using analogue voice or hand-keyed Morse code.

You can then use those “CB radios” sold on eBay/DealExtreme/BangGood/AliExpress…etc without issue, as being un-modified “commercial off-the-shelf” equipment, they are acceptable for use under the Foundation license.

Beyond Voice: amateur radio digital modes

Now, it’s all good and well being able to get voice traffic across a couple of suburban blocks. In a large-scale disaster, it is often necessary to co-ordinate recovery efforts, which often means listings of inventory and requirements, welfare information, etc, need to be broadcast.

You can broadcast this by voice over radio… very slowly!

You can put a spreadsheet on a USB stick and drive it there. You can deliver photos that way too. During an emergency event, however, roads may be impassable, or they may be congested. If the regular communications channels are down, how does one get such files across town quickly?

Amateur radio requires operators who have undergone training and hold current apparatus licenses, but this service does permit the transmission of digital data (for standard and advanced licensees), with encryption if needed (“intercommunications when participating in emergency services operations or related training exercises”).

Amateur radio is by its nature, experimental. Lots of different mechanisms have been developed through experiment for intercommunication over amateur radio bands using digital techniques.

Morse code

The oldest by far is commonly known as “Morse code”, and while it is slower than voice, it requires simpler transmitting and receiving equipment, and concentrates the transmitted power over a very narrow bandwidth, meaning it can be heard reliably at times when more sophisticated modes cannot. However, not everybody can send or receive it (yours truly included).

I won’t dwell on it here, as there are more practical mechanisms for transmitting lots of data, but have included it here for completeness. I will point out though, due to its simplicity, it has practically no latency, thus it can be faster than SMS.

Radio Teletype

Okay, there are actually quite a few modes that can be described in this manner, and I’ll use this term to refer to the family of modes. Basically, you can think of it as two dumb terminals linked via a radio channel. When you type text into one, that text appears on the other in near real-time. The latency is thus very low, on par with Morse code.

The earliest of these is the RTTY mode, but more modern incarnations of the same idea include PSK31.

These are normally used as-is. With some manual copying and pasting of text at each end, it is possible to encode other forms of data as short runs of text, and send files in short hand-crafted “packets”, which are then hand-deconstructed and decoded at the far end.

This can be automated to remove the human error component.

The method is slow, but these radioteletype modes are known for being able to “punch through” poor signal conditions.

When I was studying web design back in 2001, we were instructed to keep all photos below 30kB in size. At the time, dial-up Internet was common, and loading times were a prime consideration.

Thus instead of posting photos like this, we had to shrink them down, like this. Yes, some detail is lost, but it is good enough to get an “idea” of the situation.

The former photo is 2.8MB, the latter is 28kB. Via the above contrived transmission system, it would take about 20 minutes to transmit.

The method would work well for anything that is text, particularly simple spreadsheets, which could be converted to Comma Separated Values to strip all but the most essential information, bringing file sizes down into realms that allow transmission times in the order of 5 minutes. Text also compresses well, so in some cases transmission time can be reduced further.
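The arithmetic behind those figures is worth making explicit; the ~23 bytes/sec effective rate below is simply what reproduces the “28kB in about 20 minutes” figure above, and can be swapped for a real mode’s measured throughput:

```python
# Back-of-envelope transfer times over a radioteletype-style link.
# 23 bytes/sec is the effective rate implied by "28kB in ~20 minutes";
# substitute the measured throughput of your actual mode.
def transfer_minutes(size_bytes, rate_bytes_per_sec=23.0):
    return size_bytes / rate_bytes_per_sec / 60.0

for name, size in [("CSV inventory list", 6_000),
                   ("28kB photo", 28_000),
                   ("full 2.8MB photo", 2_800_000)]:
    print("%20s: %8.1f minutes" % (name, transfer_minutes(size)))
```

The full-size photo comes out at well over a day, which is exactly why shrinking it to 28kB matters.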

To put this into perspective, a drive from The Gap where that photo was taken, into the Brisbane CBD, takes about 20 minutes in non-peak-hour normal circumstances. It can take an hour at peak times. In cases of natural disaster, the roads available to you may be more congested than usual, thus you can expect peak-hour-like trip times.

Radio Facsimile and Slow Scan Television

This covers a wide variety of modes, ranging from ancient ones like Hellschreiber, which has its origins in the German military back in World War II, through various analogue slow-scan television modes, to modern digital slow-scan television.

This allows the transmission of photos and visual information over radio. Some systems like EasyPAL and its ilk (based on HamDRM, a variant of Digital Radio Mondiale) are in fact general-purpose modems for transmitting files, and thus can transmit non-graphical data too.

Transmit times can vary, but the analogue modes take between 30 seconds and two minutes depending on quality. For the HamDRM-based systems, transmit speeds vary between 86Bps and 795Bps depending on the settings used.

Packet Radio

Packet radio is the concept of implementing packet-switched networks over radio links. There are various forms of this, the most common in amateur radio being PACTOR, WINMOR, and the 300-baud AFSK, 1200-baud AFSK and 9600-baud FSK packet modes.

300-baud AFSK is normally used on HF links, and hails from experiments using surplus Bell 103 modems modified to work with radio. Similarly, on VHF and UHF FM radio, experiments were done with surplus Bell 202 modems, giving rise to the 1200-baud AFSK mode.

The 9600-baud FSK mode was the invention of James Miller G3RUH, and was one of the first packet radio modes actually developed by radio amateur operators for use on radio.

These are all general-purpose data modems, and while they can be used for radioteletype applications, they are designed with computer networking in mind.

These feature facilities like automatic repeating of lost messages, and in some cases support forward error correction. PACTOR/WINMOR is actually used with the Winlink radio network, which provides email services.

The 300-baud, 1200-baud and 9600-baud versions generally use a networking protocol called AX.25, and by configuring stations with multiple such “terminal node controllers” (modems) connected and appropriate software, a station can operate as a router, relaying traffic received via one radio channel to a station that’s connected via another, or to non-AX.25 stations on Winlink or the Internet.

It is well suited to automatic stations, operating without human intervention.
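The framing involved is simple enough to build by hand, which is part of AX.25’s charm. Here’s a minimal sketch that constructs an AX.25 UI frame and wraps it in KISS for a TNC (hardware, or software like Direwolf); the callsigns and serial device are example values, and the TNC appends the frame check sequence itself:

```python
# Build an AX.25 UI frame and wrap it in KISS framing for a TNC.
# Callsigns and the serial device below are example values; the TNC
# computes and appends the frame check sequence itself.
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD

def ax25_addr(call, ssid, last=False):
    """Callsign field: 6 chars shifted left one bit, then the SSID byte."""
    addr = bytes((ord(c) << 1) for c in call.ljust(6))
    return addr + bytes([0x60 | (ssid << 1) | (1 if last else 0)])

def ui_frame(dest, src, payload):
    return (ax25_addr(dest, 0) + ax25_addr(src, 0, last=True)
            + b"\x03\xf0" + payload)          # control=UI, PID=no layer 3

def kiss_wrap(frame):
    """Escape FESC/FEND bytes and delimit the frame for TNC port 0."""
    body = frame.replace(bytes([FESC]), bytes([FESC, TFESC]))
    body = body.replace(bytes([FEND]), bytes([FESC, TFEND]))
    return bytes([FEND, 0x00]) + body + bytes([FEND])

pkt = kiss_wrap(ui_frame("CQ", "N0CALL", b"Hand-rolled test frame"))
# with open("/dev/ttyUSB0", "wb") as tnc:
#     tnc.write(pkt)
```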

AX.25 packet and PACTOR I are open standards; the later PACTOR modems are proprietary devices produced by SCS in Germany.

AX.25 packet is capable of transmit speeds between 15Bps (300 baud) and 1kBps (9600 baud). PACTOR varies between 5Bps and 650Bps.

In theory, it is possible to develop new modems for transmitting AX.25, the HamDRM modem used for slow-scan television and the FDMDV modem used in FreeDV being good starting points as both are proven modems with good performance.

These simply require an analogue interface between the computer sound card and radio, and appropriate software.  Such an interface made to link a 1200-baud TNC to a radio could be converted to link to a low-cost USB audio dongle for connection to a computer.

If someone is set up for 1200-baud packet, setting up for these other modes is not difficult.

High speed data

Going beyond standard radios, amateur radio also has some very high-speed data links available. D-Star Digital Data operates on the 23cm microwave band and can potentially transmit files at up to 16KBps, which approaches ADSL-lite speeds. Transceivers such as the Icom ID-1 provide this via an Ethernet interface for direct connection to a computer.

General Electric have a similar offering for industrial applications that operates on various commercial bands, some of which can reach amateur frequencies, thus would be usable on amateur bands. These devices offer transmit speeds up to 8KBps.

A recent experiment by amateurs using off-the-shelf 50mW 433MHz FSK modules and Realtek-based digital TV tuner receivers produced a high-speed data link capable of delivering data at up to 14KBps using a wideband (~230kHz) radio channel on the 70cm band.  They used it to send high-definition photos from a high-altitude balloon.

The point?

We’ve got a lot of tools for getting a message through, and collectively, 140 years of experience at our disposal.  In an emergency situation, that means we have a lot of different options; if one doesn’t work, we can try another.

No, a 1200-baud VHF packet link won’t stream 4k HD video, but it has minimal latency and will take less than 20 minutes to transmit a 100kB file over distances of 10km or more.

A 1kB email will be at the other end before you can reach for your car keys.  Further experimentation and development means we can only improve.  Amateur radio is far from obsolete.

Apr 27 2016
 

It seems good old “common courtesy” is absent without leave, as is “common sense”. Some would say it’s been absent for most of my lifetime, but to me it seems particularly so of late.

In particular, where it comes to the safety of one’s self, and to others, people don’t seem to actually think or care about what they are doing, and how that might affect others. To say it annoys me is putting it mildly.

In February, I lost a close work colleague in a bicycle accident. I won’t mention his name, as I do not have his family’s permission to do so.

I remember arriving at my workplace early on Friday the 12th before 6AM, having my shower, and about 6:15 wandering upstairs to begin my work day. Reaching my desk, I recall looking down at an open TS-7670 industrial computer and saying out aloud, “It’s just you and me, no distractions, we’re going to get U-Boot working”, before sitting down and beginning my battle with the machine.

So much for the “no distractions” however. At 6:34AM, the office phone rings. I’m the only one there and so I answer. It was a social worker looking for “next of kin” details for a colleague of mine. Seems they found our office details via a Cab Charge card they happened to find in his wallet.

Well, the first thing I do is start scrabbling for the office directory to get his home number so I can pass the bad news on to his wife, only to find he’d only listed his mobile number. Great. After getting in contact with our HR person, we later discover there aren’t any contact details in the employee records either. He was around before such paperwork existed in our company.

Common sense would have dictated that one carry an “in case of emergency” number on a card in one’s wallet! At the very least let your boss know!

We find out later that morning that the crash happened on a particularly sharp bend of the Go Between Bridge, where the offramp sweeps left to join the Bicentennial bikeway. The bend narrows suddenly, with handlebar-height handrails running along its length and “Bicycle Only” clearly signposted at each end.

Common sense and common courtesy would suggest you slow down on that bridge as a cyclist. Common sense and common courtesy would suggest you use the other side as a pedestrian. Common sense would question the utility of hand rails on a cycle path.

In the meantime our colleague is still fighting for his life, and we’re all holding out hope for him as he’s one of our key members. As for me, I had a network to migrate that weekend. Two of us worked the Saturday and Sunday.

Sunday evening, emotions hit me like a freight train as I realised I was in denial, and realised the true horror of the situation.

We later find out on the Tuesday, our colleague is in a very bad way with worst-case scenario brain damage as a result of the crash. From shining light to vegetable, he’d never work for us again.

Wednesday I took a walk down to the crash site to try and understand what happened. I took a number of photographs, and managed to speak to a gentleman who saw our colleague being scraped off the pavement. Even today, some months later, the marks on the railings (possibly from handlebar grips) and a large blood smear on the path itself, can still be seen.

It was apparent that our colleague had hit this railing at some significant speed. He wasn’t obese, but he certainly wasn’t small, and a fully grown adult does not ricochet off a metal railing and slide face-first for over a metre without some serious kinetic energy involved.

Common sense seems to suggest the average cyclist goes much faster than the 20km/hr collision the typical bicycle helmet is designed for under AS/NZS 2063:2008.

I took the Thursday and Friday off as time-in-lieu for the previous weekend, as I was an emotional wreck. The following Tuesday I resumed cycling to work, and that morning I tried an experiment to reproduce the crash conditions. The bicycle I ride wasn’t that much different to his, both bikes having 29″ wheels.

From what I could gather that morning, it seemed he veered right just prior to the bend then lost control, listing to the right at what I estimated to be about a 30° angle. What caused that? We don’t know. It’s consistent with him dodging someone or something on the path — but this is pure speculation on my part.

Mechanical failure? The police apparently have ruled that out. There’s not much in the way of CCTV cameras in the area, plenty on the pedestrian side, not so much on the cycle side of the bridge.

Common sense would suggest relying on a cyclist to remember what happened to them in a crash is not a good plan.

In any case, common sense did not win out that day. Our colleague passed away from his injuries a little over a fortnight after his crash, aged 46. He is sadly missed.

I’ve since made a point of taking my breakfast down to that point where the bridge joins the cycleway. It’s the point where my colleague had his last conscious thoughts.

Over the course of the last few months, I’ve noticed a number of things.

Most cyclists sensibly slow down on that bend, but a few race past at ludicrous speed. One morning, I thought there’d nearly be an encore performance as two construction workers on CityCycle bikes, sans helmets, came careening around the corner, one almost losing it.

Then I see the pedestrians. There’s a well lit, covered walkway, on the opposite side of the bridge for pedestrian use. It has bench seats, drinking fountains, good lighting, everything you’d want as a pedestrian. Yet, some feel it is not worth the personal exertion to take the 100m extra distance to make use of it.

Instead, they show a lack of courtesy by using the bicycle path. Walking on a bicycle path isn’t just dangerous to the pedestrian like stepping out onto a road, it’s dangerous for the cyclist too!

If a car hits a pedestrian or cyclist, the damage to the occupants of the car is going to be minimal to nonexistent, compared to what happens to the cyclist or pedestrian. If a cyclist or motorcyclist hits a pedestrian however, the rider surrounds the frame rather than the frame surrounding them, and thus hits the ground first. Possibly at significant speed.

Yet, pedestrians think it is acceptable to play Russian roulette with their own lives and the lives of every cycle user by continuing to walk where it is not safe for them to go. They’d never do it on a motorway, but somehow a bicycle path is considered fair game.

Most pedestrians are understanding, I’ve politely asked a number to not walk on the bikeway, and most oblige after I point out how they get to the pedestrian walkway.

Common sense would suggest some signage on where the pedestrian can walk would be prudent.

However, I have had at least two that ignored me, one this morning telling me to “mind my own shit”. Yes mate, I am minding “my own shit” as you put it: I’m trying to stop the hypothetical me from possibly crashing into the hypothetical you!

It’s this sort of reaction that seems symbolic of the whole “lack of common courtesy” that abounds these days.

It’s the same attitude that seems to hint to people that it’s okay to park a car so that it blocks the footpath: newsflash, it’s not! I know of one friend of mine who frequently runs into this problem. He’s in a wheelchair — a vehicle not known for its off-road capabilities or ability to squeeze past the narrow gap left by a car.

It seems the drivers think it’s acceptable to force footpath users of all types, including the elderly, the young and the disabled, to “step out” onto the road to avoid the car that they so arrogantly parked there. It makes me wonder how many people subsequently become disabled as a result of a collision caused by them having to step around such obstacles. Would the owner of the parked car be liable?

I don’t know, I’m no lawyer, but I should think they should carry some responsibility!

In Queensland, pedestrians have right-of-way on the footpath. That includes cyclists: cyclists of all ages are allowed there subject to council laws and signage — but once again, they need to give way. In other words, don’t charge down the path like a lunatic, and don’t block it!

No doubt, the people who I’m trying to convince are too arrogant to care about the above, and what their actions might have on others. Still, I needed to get the above off my chest!

Nothing will bring my colleague back, a fact that truly pains me, and I’ve learned some valuable lessons about the sort of encouragement I give people. I regret not telling him to slow down, 5 minutes longer wouldn’t have killed him, and I certainly did not want a race! Was he trying to race me so he could keep an eye on me? I’ll never know.

He was a bright person though, and this is proof that even the intelligent among us are prone to doing stupid things. With thrills come spills, and one might question whether one’s commute to work is the appropriate venue for such thrills, or whether those can wait for another time.

I for one have learned that it does not pay to be the hare, thus I intend to just enjoy the ride for what it is. No need to rush, common sense tells me it just isn’t worth it!

Dec 06 2015
 

Recently, I learned about the IceStorm project, which is an effort to reverse engineer the Lattice iCE40-series of FPGAs.  I had run across FPGAs in my time before, but never really got to understand them.  This is for a few reasons:

  • The tools tended to be proprietary, with highly (unnecessarily?) restrictive licensing
  • FPGA boards were hellishly expensive

I wasn’t interested in doing the proprietary toolchain dance; I did enough of that with some TI stuff years ago.  There, it was the MSP430, and one of their DSPs.  For the former I could use gcc, but I still needed a proprietary build of gdbproxy to program and debug the device, and that needed Windows.  The latter could only be programmed using TI’s Code Composer Studio.

FPGAs were ten times worse.  Not only was the toolchain huge, occupying gigabytes, but the license was locked to the hardware.  The one project I did involving an FPGA used an Altera part, and getting Quartus II to work was nothing short of a nightmare.  I gave up, and vowed never to touch FPGAs.

Fast forward 6 years, and things have changed.  We now have a Verilog synthesiser.  We now have a place-and-route tool.  We have tools for generating a bitstream for the iCE40 FPGAs.  We can now buy FPGA boards for well under $100.  Heck, you can buy them for $5.

Lattice can do one of three things at this point:

  • They can actively try to stomp it out (discontinuing the iCE40 family, filing law suits, …etc)
  • They can pretend it doesn’t exist
  • They can partner with us and help build a hobby market for their FPGAs

Time will tell as to what they choose.  I’m hoping it’s the latter, but ignoring us is workable too.

So recently I bought an iCE40-HX8K breakout board.  This $80 board is pretty minimal: you get 8 LEDs, an FTDI serial-USB controller (which serves as the programmer), a small serial flash EEPROM (for configuration), a linear regulator, a 12MHz oscillator and four 40-pin headers for GPIOs.

The FPGA on this board is the iCE40HX8K-CT256.  At the time of writing, that’s the top of that particular series with 7680 look-up tables, two PLLs, and some integrated SPI/I²C smarts.

There’s not a lot in the way of tutorials for this particular board, most focus on the iCEStick, which uses the lesser iCE40HX1K-TQ144, has only a small handful of GPIOs exposed and has no configuration EEPROM (it’s one-time programmable).

Through some trial-and-error, and poring over the schematics, I managed to port Al Williams’ tutorial on Hackaday, at least in part, to the iCE40-HX8K board.  The code for this is on Github.

Pretty much everything works on this board, even PLLs and block RAM.  There’s an example using the PLL on the iCEstick in this VGA demo project.

Some things I’ve learned:

  • If you open jumper J7, and rotate the jumpers on J6 to run horizontally (strapping pins 1-2 and 3-4), specifying -S to iceprog will program the CRAM without touching the SPI flash chip.
  • The PLL ceases to lock in when REFCLK/(1+DIV_R) drops to 10MHz or below.

FILTER_RANGE is a mystery though; I haven’t figured out what the values correspond to.
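The other PLL parameters do follow the published maths, though.  Here’s a quick search script using the divider relationships from the Lattice sysCLOCK PLL documentation for the simple feedback path, plus the 10MHz phase-detector floor observed above; treat it as a sketch, since icepll from the IceStorm suite does this properly:

```python
# Parameter search for the iCE40 PLL (simple feedback path), using the
# divider relationships from the Lattice sysCLOCK documentation and the
# >=10MHz phase-detector floor noted above.  icepll does this properly;
# this is just the maths laid bare.
def pll_params(f_in, f_target):
    best = None
    for divr in range(16):                    # reference divider
        f_pfd = f_in / (divr + 1)
        if not 10.0 <= f_pfd <= 133.0:        # lock is lost below 10MHz
            continue
        for divf in range(128):               # feedback divider
            f_vco = f_pfd * (divf + 1)
            if not 533.0 <= f_vco <= 1066.0:  # VCO operating range
                continue
            for divq in range(1, 7):          # output divider, 2**divq
                f_out = f_vco / (1 << divq)
                if best is None or abs(f_out - f_target) < abs(best[3] - f_target):
                    best = (divr, divf, divq, f_out)
    return best

divr, divf, divq, f_out = pll_params(12.0, 25.175)  # VGA clock from 12MHz
print("DIVR=%d DIVF=%d DIVQ=%d -> %.3f MHz" % (divr, divf, divq, f_out))
```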

It’s likely this particular board is destined to become a DRAM/Interrupt/DMA controller for my upcoming 386, but we’ll see.  In the meantime, I’m playing with a new toy. 🙂

Nov 21 2015
 

Well, in the last post I started to consider the thoughts of building my own computer from a spare 386 CPU I had liberated from an old motherboard.

One of the issues I face is implementing the bus protocol that the 386 uses, and decoding of interrupts.  The 386 expects an 8-bit interrupt request number that corresponds to the interrupting device.  I’m used to microcontrollers where you use a single GPIO line, but in this case, the interrupts are multiplexed.

For basic needs, you could do it with a priority-encoder IC.  That will work for a small number of interrupt lines.  Suppose I wanted more though?  How feasible is it to support many interrupt lines without tying up lots of GPIO lines?

CANBus has an interesting way of handling arbitration.  The “zeros” are dominant, and thus overrule “ones”.  The CAN transceiver is a full-duplex device, so as a station transmits, it listens to the state of the bus.  When some nodes want to talk (they are, of course, oblivious to each others’ intentions), they each send a start bit (a zero), which synchronises all nodes, then begin sending an address.

While every contending node is sending the same bit value, the receiving nodes see that value.  When a node tries sending a 1 while the others are sending 0s, it sees the disparity and concludes that it has lost arbitration.  Eventually, you’re left with a single node, which then proceeds to send its CANBus frame.

Now, we don’t need the complexity of CANBus to do what we’re after.  We can keep synchronisation by simple virtue of the fact that we can distribute a common clock (the one the CPU runs at).  Dominant and recessive bits can be implemented with transistors pulling down on a pull-up resistor, or a diode-OR: this gives us a system where ‘1’s are dominant.  Good enough.

So I fired up Logisim to have a fiddle, and came up with this:

Interrupt controller using logic gates

interrupt.circ is the actual Logisim circuit if you want to have a fiddle; decompress it first.  Please excuse the mess in the schematic.

On the left is the host-side of the interrupt controller.  This would ultimately interface with the 386.  On the right are two “devices”, one on IRQ channel 0x01, the other on 0x05.  The controller handles two types of interrupts: “DMA interrupts”, where the device just wants to tell the DMA controller to put data into memory, or “IRQ”s, where we want to interrupt the CPU.

The devices are provided with the following control signals from the interrupt controller:

Signal | Controlled by | Description
DMA | Devices | Informs the IRQ controller whether we’re interrupting for DMA purposes (high) or need to tell the CPU something (low).
IRQ | Devices | Informs the IRQ controller that we want its attention.
ISYNC | Controller | Informs the devices that they have the controller’s attention and should start transmitting address bits.
IRQBIT[2…0] | Controller | Instructs the devices which bit of their IRQ address to send (0 = MSB, 7 = LSB).
IDA | Devices | The inverted address bit value corresponding to the bit pointed to by IRQBIT.
IACK | Devices | Asserted by the device that wins arbitration.

Due to the dominant/recessive nature of the bits, the highest numbered device wins over lesser devices. IRQ requests also dominate over DMA requests.

In the schematic, the devices each have two D-flip-flops that are not driven by any control signals.  These are my “switches” for toggling the state of the device as a user.  The ones feeding into the XOR gate control the DMA signal, the others control the IRQ line.

Down the bottom, I’ve wired up a counter to count how long it takes between the ISYNC signal going high and the controller determining a result.  This controller manages to determine which device requested its attention within 10 cycles.  If clocked at the same 20MHz rate as the CPU core, this would be good enough for getting a decoded IRQ channel number to the data lines of the 386 CPU by the end of its second IRQ acknowledge cycle, and can handle up to 256 devices.
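The arbitration scheme itself is easy to sanity-check in software before committing it to logic.  A little model, ignoring the electrical inversion on the IDA line and just treating ‘1’ as dominant:

```python
# Software model of the wired arbitration: each requesting device shifts
# out its 8-bit channel number MSB-first, with '1' bits dominant.  Any
# device that sends a 0 but sees a 1 on the bus drops out of the race.
def arbitrate(requesters):
    """requesters: IRQ channel numbers currently wanting attention."""
    in_race = set(requesters)
    for bit in range(7, -1, -1):                       # MSB first, as per IRQBIT
        bus = max((ch >> bit) & 1 for ch in in_race)   # dominant '1' wins
        in_race = {ch for ch in in_race if (ch >> bit) & 1 == bus}
    (winner,) = in_race
    return winner

# The devices on channels 0x01 and 0x05 request together: 0x05 wins,
# as the highest-numbered device dominates.
assert arbitrate([0x01, 0x05]) == 0x05
```

Eight bit-cycles plus the sync overhead lines up with the 10-cycle figure from the Logisim counter.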

A logical next step would be to look at writing this in Verilog and trying it out on an FPGA.  Thanks to the excellent work of Clifford Wolf in producing the IceStorm project, it is now possible to do this with completely open tools.  So, I’ve got a Lattice iCE40HX-8K FPGA board coming.  This should make a pretty mean SDRAM controller, interrupt controller and address decoder all in one chip, and should be a great introduction into configuring FPGAs.

Nov 07 2015
 

Well, I’ve been thinking a lot lately about single board computers. There’s a big market out there. Since the Raspberry Pi, there’s been a real explosion available to the small-end of town, the individual. Prior to this, development boards were mostly in the 4-figures sort of price range.

So we’re now rather spoiled for choice. I have a Raspberry Pi. There’s also the BeagleBone Black, Banana Pi, and several others. One gripe I have with the Raspberry Pi is the complete absence of any kind of analogue input. There’s an analogue line out; you can interface some USB audio devices (although I hear two is problematic), or you can get an I2S module.

There’s a GPU in there that’s capable of some DSP work, and a CLKOUT pin that can generate a wide range of frequencies. That sounds like the beginnings of a decent SDR. However, there’s one glitch: while I can use the CLKOUT pin to drive a mixer and the GPIOs to do band selection, there’s nothing that will take that analogue signal and sample it.

If I want something wider than audio frequencies (and even a 192kHz audio CODEC is not guaranteed above ~20kHz), I have to interface to SPI, and the pickings are somewhat slim. Then I read this article on a DIY single board computer.

That got me thinking about whether I could do my own. At work we use the Technologic Systems TS-7670 single-board computers, and as nice as those machines are, they’re a little slow and RAM-limited. Something that could work as a credible replacement there would be nice too; the key needs being RS-485, Ethernet and an 85 degree temperature rating.

Form factor is a consideration here, and I figured something modular, using either header pins or edge connectors would work. That would make the module easily embeddable in hobby projects.

Since all the really nice SoCs are BGA packages, I figured I’d first need to know how easily I could work with them. We’ve got a stack of old motherboards sitting in a cupboard that I figured I could raid for BGAs to play with, just to see first-hand how fine the pins were. A crazy thought came to me: maybe, for prototyping, I could do it dead-bug style?

The key thing here is being able to solder directly to a ball securely, then route the wire to its destination. I may need to glue it to a bit of grounded foil to keep the capacitance in check. So, the first step, I figured, would be to try removing some components from the boards I had laying around to see this first-hand.

In amongst the boards I came across was one old 386 motherboard that I initially mistook for a 286 minus the CPU. The empty (PLCC) socket is for an 80387 math co-processor. The board was in the cupboard for a good reason, corrosion from the CMOS battery had pretty much destroyed key traces on one corner of the board.

Corrosion on a motherboard caused by a CMOS battery

I decided to take to it with the heat gun first. The above picture was taken post-heatgun, but you can see just how bad the corrosion was. The ISA slots were okay, and so were a stack of other useful IC sockets, ICs, passive components, etc.

With the heat gun at full blast, I’d just wave it over an area of interest until the board started to de-laminate, then with needle-nose pliers, pull the socket or component from the board. Sometimes the component simply dropped out.

At one point I heard a loud “plop”. Looking under the board, one of the larger surface-mounted chips had fallen off. That gave me an idea, could the 386 chip be de-soldered? I aimed the heat-gun directly at the area underneath. A few seconds later and it too hit the deck.

All in all, it was a successful haul.

Parts off the 386 motherboard

I also took apart an 8-bit ISA joystick card. It had some nice looking logic chips that I figured could be re-purposed. The real star though was the CPU itself:

Intel NG80386SX-20

The question comes up, what does one do with a crusty old 386 that’s nearly as old as I am? A quick search turned up this scanned copy of the Intel 80386SX datasheet. The chip has a 16-bit bus with 23 bits worth of address lines (bit 0 is assumed to be zero). It requires a clock that is double the chip’s operating frequency (there’s an internal divide-by-two). This particular chip runs internally at 20MHz. Nothing jumped out as being scary. Could I use this as a practice run for making an ARM computer module?

A dig around dug up some more parts:

More parts

In this pile we have…

I also have some SIMMs laying around, but the SDRAM modules look easier to handle, since the controllers on board synchronise with what would otherwise be the front-side bus.  The datasheet does not give a minimum clock (although clearly this is not DC; DRAM does need to be refreshed) and mentions a clock frequency of 33MHz when set to run at a CAS latency of 1.  It just so happens that I have a 33MHz oscillator.  There are a couple of nits in this plan though:

  • the SDRAM modules are 3.3V, the CPU is 5V: no problem, there are level-conversion chips out there.
  • the SDRAM modules are 64 bits wide.  We’ll have to buffer the output with eight 8-bit registers.  Writes do a read-modify-write cycle, and we use a 2-to-4 decoder, driven by address bits 1 and 2 from the CPU, to select the CE pins on a pair of those registers.
  • Each SDRAM module holds 32MB.  We have a 23-bit address bus, which with 16-bit words gives us a total address space of 16MB.  Solution: the old 8-bit computers of yesteryear used bank-switching to address more RAM/ROM than they had address lines for.  We can interface an 8-bit register at I/O address 0x0000 (easily decoded with a stack of Schottky diodes and a NOT gate) to hold the remaining address bits, mapping the selected bank into the lower 8MB of physical memory.  We then hijack the 386’s MMU to map the 8MB chunks, and use page faults to switch memory banks.  (If we put the SRAM and ROM up in the top 1MB, this gives us ~7MB of memory-mapped I/O to play with.  A sketch of the address arithmetic follows this list.)
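That banking arithmetic, sketched out (32MB per module over 8MB windows gives four banks to choose between):

```python
# Bank-switching arithmetic: a bank register at I/O 0x0000 supplies the
# missing upper address bits, selecting which 8MB window of the 32MB
# module appears in the lower half of the 386SX's 16MB physical space.
BANK_SIZE = 8 * 1024 * 1024
MODULE_SIZE = 32 * 1024 * 1024
NUM_BANKS = MODULE_SIZE // BANK_SIZE          # = 4, so 2 register bits used

def sdram_address(bank, phys_addr):
    """Translate a CPU physical address to a byte address in the module."""
    assert phys_addr < BANK_SIZE, "upper 8MB is SRAM/ROM and memory-mapped I/O"
    assert bank < NUM_BANKS
    return bank * BANK_SIZE + phys_addr

print(hex(sdram_address(3, 0x1234)))          # 0x1801234: 24MB + 0x1234
```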

So, no show-stoppers.  There’s an example circuit showing how to interface an ATMega8515 to a single SDRAM chip to drive a VGA interface, with some example code, commented in German. Unfortunately you’d learn more German in an episode of Hogan’s Heroes than I know, but I can sort-of figure out the sequence used to read from and write to the SDRAM chip. Nothing looks scary there either.  This SDRAM tutorial seems to be a goldmine.

Thus, it looks like I’ve got enough bits to have a crack at it.  I can run the 386 from that 33MHz brick, which will give me a chip running at 16.5MHz.  Somewhere I’ve got the 40MHz brick I liberated from the motherboard some time ago, but that can wait.

A first step would be to try interfacing the 386 chip to an AVR, feeding it instructions one step at a time and checking that it’s still alive.  Then, the next steps should become clear.

Oct 31 2015
 

Well, it seems the updates to Microsoft’s latest aren’t going as its maker planned. A few people have asked me for my personal opinion of this OS, and I’ll admit, I have no direct experience with it.  I haven’t had much contact with Windows 8 either.

That said, I do keep up with the news, and a few things do concern me.

The good news

It’s not all bad of course.  Windows 8 saw a big shrink in the footprint of a typical Windows install, and Windows 10 continues to be fairly lightweight.  The UI disaster from Windows 8 has been somewhat pared back to provide a more traditional desktop with a start menu that combines features from the start screen.

There are some limitations with the new start menu, but from what I understand, it behaves mostly like the one from Windows 7.  The tiled section still has some rough edges though, something that is likely to be addressed in future updates of Windows 10.

If this is all that had changed though, I’d be happily accepting it.  Sadly, this is not the case.

Rolling-release updates

Windows has, since day one, been on a long-term support release model.  That is, they bring out a release, then they support it for X years.  Windows XP was released in 2001 and was supported until last year, for example.  Windows Vista is still on extended support, and Windows 7 will enter extended support soon.

Now, in the Linux world, we’ve had both long-term support releases and rolling release distributions for years.  Most of the current Linux users know about it, and the distribution makers have had many years to get it right.  Ubuntu have been doing this since 2004, Debian since 1998 and Red Hat since 1994.  Rolling releases can be a bumpy ride if not managed correctly, which is why the long-term support releases exist.  The community has recognised the need, and meets it accordingly.

Ubuntu are even predictable with their releases.  They release on a schedule.  Anything not ready for release is pushed back to the next release.  They do a release every 6 months, in April and October, and every 2 years, the April release is a long-term support release.  That is: 8.04, 10.04, 12.04 and 14.04 are all LTS releases.  The LTS releases get supported for about 3 years, the regular releases about 18 months.

Debian releases are basically LTS, unless you run Debian Testing or Debian Unstable.  Then you’re running rolling-release.

Some distributions like Gentoo are always rolling-release.  I’ve been running Gentoo for more than 10 years now, and I find the rolling releases rarely give me problems.  We’ve had our hiccups, but these days, things are smooth.  Updating an older Gentoo box to the latest release used to be a fight, but these days, is comparatively painless.

It took most of that 10 years to get to that point, and this is where I worry about Microsoft forcing the vast majority of Windows users onto a rolling-release model, as they will be doing this for the first time.  As I understand it, there will be four branches:

  1. Windows Insiders programme is like Debian Unstable.  The very latest features are pushed out to them first.  They are effectively running a beta version of Windows, and can expect many updates, many breakages, lots of things changing.  For some users, this will be fine, others it’ll be a headache.  There’s no option to skip updates, but you probably will have the option of resigning from the Windows Insiders programme.
  2. Home users basically get something like Debian Testing.  After updates have been thrashed out by the insiders, it gets force-fed to the general public.  The Home version of Windows 10 will not have an option to defer an update.
  3. Professional users get something more like the standard releases of Debian.  They’ll have the option of deferring an update for up to 30 days, so things can change less frequently.  It’s still rolling-release, but they can at least plan their updates to take place once a month, hopefully without disrupting too much.
  4. Enterprise users get something like the old-stable release of Debian.  Security updates, and they have the option to defer updates for a year.

Enterprise isn’t available unless you’re a large company buying lots of licenses.  If people must buy a Windows 10 machine, my recommendation would be to go for the Professional version; then you have some right of veto, as not all updates are purely security-related: some will change the UI or add and remove features.

I can see this being a major headache for anyone who has to support hardware or software on Windows 10, since it’s essentially the build number that becomes important: different release builds will behave differently, possibly differently enough that things need much more testing and maintenance than vendors are used to.

Some vendors are very poor at supporting Linux right now due to the rolling-release model of things like the Linux kernel, so I can see Windows 10 being a nightmare for some.

Privacy concerns

One of the big issues raised with Windows 10 is the inclusion of telemetry to “improve the user experience”, along with other features that are seen as an invasion of privacy.  Many of these can be turned off, but doing so takes someone who’s familiar with the OS, or good at researching the problem.

Probably the biggest concern from my perspective as a network administrator is the WiFi Sense feature.  This is a feature in Windows 10 (and Windows Phone 8.1), turned on by default, that allows you to share WiFi passwords with your contacts.

If one of that user’s contacts then comes into range of your AP, their device contacts Microsoft’s servers, which have the password on file and can provide it to that device (hopefully in a secured manner).  The password is never shown to the user themselves, but I believe it’s only a matter of time before someone figures out how to retrieve a password from WiFi Sense.  (A rogue AP would probably do the trick.)

We have discussed this at work where we have two WiFi networks: one WPA2 enterprise one for staff, and a WPA2 Personal one for guests.  Since we cannot control whether the users have this feature turned on or not, or whether they might accidentally “share” the password with world + dog, we’re considering two options:

  1. Banning Windows 10 (and Windows Phone 8.1) devices from our guest WiFi network.
  2. Implementing a cron job to regularly change the guest WiFi password.  (The Cisco AP we have can be hit with SSH; automating this shouldn’t be difficult.)

There are some nasty points in the end user license agreement too that seem to give Microsoft free rein to make copies of any of the data on the system.  They say personal information will be removed, but even with the best of intentions, it is likely that some personal information will get caught in the net cast by the telemetry software.

Forced “upgrades” to Windows 10

This is the bit about Windows 10 that really bugs me.  Okay, Microsoft is pushing a deal where they’ll provide it to you for free for a year.  Free upgrades, yaay!  But wait: how do you know if your hardware and software are compatible?  Maybe you’re not ready to jump on the bandwagon just yet, or maybe you’ve heard the news about the privacy issues or rolling-release updates and decided to hold back.

Many users of Windows 7, 8 and 8.1 are now being force-fed the new release, whether we asked for it or not.

Now the problem with this is that it completely ignores the fact that not everyone runs an always-on Internet connection with a large quota.  I know people who only have a 3G connection, with a very small (1GB) quota.  Windows 10 weighs in at nearly 3GB, so for them, that’s 2GB worth of overuse charges just for the OS, never mind the web browsing, emailing and other things they actually bought their Internet connection for.

Microsoft employees have been outed for showing such contempt before.  It seems so many there are used to the idea of an Internet connection that is always there and has a big enough quota to be considered “unlimited” that they have forgotten that some parts of the world do not have such luxuries.  The computer and the Internet are just tools: we do not buy an Internet connection just for the sake of having one.

Stopping updates

A couple of tools exist for managing this.  I have not tested any of them, and cannot vouch for their safety or reliability.

  • BlockWindows (github link) is a set of scripts that, when executed, uninstall and disable most of the Windows 10-related updates on Windows 7 and 8/8.1.
  • GWX Control Panel is a (proprietary?) tool for controlling the GWX process.  The download is here.

My recommendation is to keep good backups.  Find a tool that will do a raw partition back-up of your Windows partition, and keep your personal files on a separate partition.  Then, if Microsoft does come a-knocking, you can easily roll back.  Hopefully after the “free upgrade” offer has expired (about this time next year), they will cease and desist from this practice.

Sep 282015
 

I’ve been doing a bit of thinking on my ride this morning.  A few weeks ago, a school student in the US decided to put his interest in, and knowledge of, electronics to use by making a digital clock.  Having gotten the circuit working, he decided to bring the device to school to show his physics teacher.

The physics teacher might not have been alarmed, but another teacher certainly was, and the student was frogmarched to the Principal’s office; the Principal then called the police.  It caused a major uproar, and shows just how paranoid a society we’ve become, globally.

Back in 2001, I used to have a portable CD player which I’d listen to on my commutes to and from school.  It was a basic affair that took 4 AA cells, which were forever going flat.  I tried rechargeable cells, but wasn’t satisfied with their life either.  Having gotten fed up with all that, I looked to alternatives.  The alternative I went for was a small 12V 7Ah SLA battery about the size of a house brick.

Yes, heavy, but I carried it in the backpack with my textbooks, and it worked well.  I could go a week on a single charge.  In addition, I could run not just a CD player, but any 12V device, including a small fan, which made me the envy of a lot of fellow students in the middle of summer.  (Our classrooms were not air conditioned.)  I still use a cigarette lighter extension lead/4-way adapter that I made back then to give me extra sockets on the bicycle.

If I tried it today, I half expect I’d be explaining this to the AFP.

It raises a really serious question about what our future generations are meant to do with their lives.  Yes, there’s clearly a danger in experimenting with these things.  That SLA battery, if it ruptured, could leak highly corrosive sulphuric acid.  If I charged or discharged it too fast (e.g. by shorting the terminals), the internal resistance would build up heat inside the cells, which would start boiling the water in the electrolyte; gas would build up, possibly causing the cell walls to fail.

But, I had contingency plans.  The battery was set up with fuses to cut power in the event of a short.  Cables were well insulated.  Terminals were protected.  It never caused an issue.  These days, I use LiFePO4s, which, while forgiving, also have their dangers.  I steer clear of LiPol cells since they are very volatile.

The point being, I had been experimenting with electronics from a very young age.  I also learned about computer programming from a very young age.  I learned about how they worked, and learned how to control them.  You could compare it to learning to ride a horse.

One [way] is to get on him and learn by actual practice how each motion and trick may be best met; the other is to sit on a fence and watch the beast a while, and then retire to the house and at leisure figure out the best way of overcoming his jumps and kicks.  The latter system is the safest, but the former, on the whole, turns out the larger proportion of good riders.

— Wilbur Wright in his speech “Some Aeronautical Experiments”, 18th September, 1901.
(source: David McCullough, “The Wright Brothers: The Dramatic Story-Behind-the-Story”)

I learned to ride a couple of “horses”.  One in particular, was the computer.  Understanding the electronics behind it greatly helped here.  I was already familiar with the concept of DC current by the time I hit university and I was well advanced in my understanding of how to control a computer.  What University specifically taught me was some discipline in how to structure code and the specifics of particular languages.  The bulk of my study was done long before I applied for any degrees.

There seems to be a thinking in today’s society that “task X is difficult, leave it to the professionals”.  There are some fields, where one would do well to heed that advice.  Anything involving gas ducting or mains electricity being two examples.

You can possibly get quite a bit of plumbing work done yourself, though some professional oversight is usually a good idea.  You have a right to DIY in most cases, but rights come with responsibilities, and one of those is taking responsibility if something goes wrong.  They go together.

At (extra) low voltages and low current levels, there’s very little you can actually do that would result in serious harm.  If you go about things carefully and in a controlled manner, this experimentation can be a great vehicle for serious study in a chosen field.  Computers, unless you’re doing something really risky like flashing boot firmware, are not easily “bricked” and can be recovered.  Playing with a second-hand old desktop (not a production machine), or a cheap machine like the plethora of ARM-based and AVR-based single-board computers available today, is not likely to result in life-threatening injury.

Banning experimentation in such fields is not going to serve our community in the long term.  This is a typical knee-jerk reaction when someone’s experimentation is seen to be doing harm, even when, as in the US student’s case, the experimentation is completely benign.  Following this road over time is only going to lead to a nation of cave-dwelling hermits that shun technology as black magic.

Technology is mankind’s genie, you cannot simply stuff it back in the bottle.  The genie here is quite willing to grant us many wishes, but unlike the ones of myths and legends, this one expects some effort to be done in return.  It is not simply going to vanish just because we’ve decided that it’s too dangerous.  We as individuals either need to study how the genie operates, or we pay someone else that has studied how it operates.  If everyone chooses to do the latter, who is left to do the former?

Sep 272015
 

Well, lately I’ve been doing a bit of work hacking the firmware on the Rowetel SM1000 digital microphone.  For those who don’t know it, this is a hardware (microcontroller) implementation of the FreeDV digital voice mode: it’s a modem that plugs into the microphone/headphone ports of any SSB-capable transceiver and converts between analogue voice and FreeDV modem tones.

I plan to set this unit of mine up on the bicycle, but there were a few nits that I had:

  • There’s no time-out timer
  • The unit is half-duplex

With no time-out timer, I need to be able to hear the tones coming from the radio to tell me it has timed out.  Others might find a VOX feature useful, and there’s active experimentation with the FreeDV 700B mode (the SM1000 currently only supports FreeDV 1600), which has been very promising to date.

Long story short, the unit needed a more capable UI and, importantly, it also needed to be able to remember settings across power cycles.  There’s no EEPROM chip on these things, and while the STM32F405VG has a pin for providing backup-battery power, there’s no battery or supercapacitor fitted, so the SM1000 forgets everything at shutdown.

ST do have an application note on their website on precisely this topic.  AN3969 (and its software sources) discuss a method for using a portion of the STM32’s flash for this task.  However, I found their “license” confusing.  So I decided to have a crack myself.  How hard can it be, right?

There are five things that a virtual EEPROM driver needs to bear in mind:

  • The flash is organised into sectors.
  • These sectors when erased contain nothing but ones.
  • We store data by programming zeros.
  • The only way to change a zero back to a one is to do an erase of the entire sector.
  • The sector may be erased a limited number of times.

So on this note, a virtual EEPROM should aim to do the following:

  • It should keep tabs on what parts of the sector are in use.  For simplicity, we’ll divide this into fixed-size blocks.
  • When a block of data is to be changed, if the change can’t be done by changing ones to zeros, a copy of the entire block should be written to a new location, and a flag set (by writing zeros) on the old block to mark it as obsolete.
  • When a sector is full of obsolete blocks, we may erase it.
  • We try to put off doing the erase until such time as the space is needed.

Step 1: making room

The first step is to make room for the flash variables.  They will be directly accessible in the same manner as variables in RAM, however from the application point of view, they will be constant.  In many microcontroller projects, there’ll be several regions of memory, defined by memory address.  This comes from the datasheet of your MCU.

An example, taken from the SM1000 firmware, prior to my hacking (stm32_flash.ld at r2389):

/* Specify the memory areas */
MEMORY
{
  FLASH (rx)      : ORIGIN = 0x08000000, LENGTH = 1024K
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

The MCU here is the STM32F405VG, which has 1MB of flash starting at address 0x08000000. This 1MB is divided into (in order):

  • Sectors 0…3: 16kB each, starting at 0x08000000
  • Sector 4: 64kB, starting at 0x08010000
  • Sector 5 onwards: 128kB each, starting at 0x08020000
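
As an aside, that layout is simple enough to capture in code.  Here’s a little helper of my own (a sketch, not from the SM1000 sources) that returns the base address of a given sector:

#include <stdint.h>

/* Sketch: base address of an STM32F405 flash sector, matching the
   layout listed above.  My own helper, not from the SM1000 sources. */
static uint32_t flash_sector_base(uint8_t sector)
{
    if (sector < 4)         /* sectors 0…3: 16kB each */
        return 0x08000000u + ((uint32_t)sector * 0x4000u);
    if (sector == 4)        /* sector 4: 64kB */
        return 0x08010000u;
    /* sectors 5 onwards: 128kB each */
    return 0x08020000u + ((uint32_t)(sector - 5) * 0x20000u);
}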

We need at least two sectors, as when one fills up, we will swap over to the other. Now it would have been nice if the arrangement were reversed, with the smaller sectors at the end of the device.

The Cortex M4 CPU is basically hard-wired to boot from address 0; the BOOT pins on the STM32F4 decide how that address gets mapped.  The very first thing there is the interrupt vector table, and it MUST be the thing the CPU sees first.  Unless told to boot from external memory or system memory, address 0 is aliased to 0x08000000, i.e. flash sector 0.  Thus, if you are booting from internal flash, you have no choice: the vector table MUST reside in sector 0.

Normally, code and interrupt vector table live together as one happy family.  We could use a couple of 128kB sectors, but 256kB is rather a lot for just an EEPROM storing maybe 1kB of data tops.  Two 16kB sectors are just dandy; in fact, we’ll throw in a third one for free since we’ve got plenty to go around.

However, the first 16kB sector will have to be reserved for the interrupt vector table, which will have that space to itself.

So here’s what my new memory regions look like (stm32_flash.ld at r2390):

/* Specify the memory areas */
MEMORY
{
  /* ISR vectors *must* be placed here as they get mapped to address 0 */
  VECTOR (rx)     : ORIGIN = 0x08000000, LENGTH = 16K
  /* Virtual EEPROM area, we use the remaining 16kB blocks for this. */
  EEPROM (rx)     : ORIGIN = 0x08004000, LENGTH = 48K
  /* The rest of flash is used for program data */
  FLASH (rx)      : ORIGIN = 0x08010000, LENGTH = 960K
  /* Memory area */
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  /* Core Coupled Memory */
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

This is only half the story, we also need to create the section that will be emitted in the ELF binary:

SECTIONS
{
  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >FLASH

  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */
    *(.rodata)         /* .rodata sections (constants, strings, etc.) */
    *(.rodata*)        /* .rodata* sections (constants, strings, etc.) */
    *(.glue_7)         /* glue arm to thumb code */
    *(.glue_7t)        /* glue thumb to arm code */
	*(.eh_frame)

    KEEP (*(.init))
    KEEP (*(.fini))

    . = ALIGN(4);
    _etext = .;        /* define a global symbols at end of code */
    _exit = .;
  } >FLASH…

There’s rather a lot here, so I haven’t reproduced all of it, but this is the same file as before (at r2389), just a little further down.  You’ll note the .isr_vector section is pointed at the region called FLASH, which is most definitely NOT what we want.  The image will not boot with the vectors down there.  We need to change it to put the vectors in the VECTOR region.

Whilst we’re here, we’ll create a small region for the EEPROM.

SECTIONS
{
  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >VECTOR


  .eeprom :
  {
    . = ALIGN(4);
    *(.eeprom)         /* special section for persistent data */
    . = ALIGN(4);
  } >EEPROM


  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */

THAT’s better!  Things will boot now.  However, there is still a subtle problem that initially caught me out here.  Sure, the shiny new .eeprom section is unpopulated, BUT the linker has helpfully filled it with zeros.  We cannot program zeroes back into ones!  Either we have to erase it in the program, or we tell the linker to fill it with ones for us.  Thankfully, the latter is easy (stm32_flash.ld at r2395):

  .eeprom :
  {
    . = ALIGN(4);
    KEEP(*(.eeprom))   /* special section for persistent data */
    . = ORIGIN(EEPROM) + LENGTH(EEPROM) - 1;
    BYTE(0xFF)
    . = ALIGN(4);
  } >EEPROM = 0xff

Credit: Erich Styger

We have to do two things.  One is to tell the linker that we want the region filled with the pattern 0xff.  Two, we need to make sure it actually gets filled with ones by telling the linker to write one byte at the very end of the region.  Otherwise, it’ll think, “Huh?  There’s nothing here, I won’t bother!” and leave it as a string of zeros.
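
For completeness, objects can then be pinned into this region from C using GCC’s section attribute.  A hypothetical example (the symbol name is mine; in practice the driver can simply work from the region’s base address):

/* Sketch: reserving the 48kB EEPROM region as a C object with GCC. */
static const unsigned char vrom_area[48 * 1024]
    __attribute__((section(".eeprom"), aligned(4), used));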

Step 2: Organising the space

Having made room, we now need to decide how to break this data up.  We know the following:

  • We have 3 sectors, each 16kB
  • The sectors have an endurance of 10000 program-erase cycles

Give some thought as to what data you’ll be storing.  This will decide how big to make the blocks.  If you’re storing only tiny bits of data, more blocks makes more sense.  If however you’ve got some fairly big lumps of data, you might want bigger blocks to reduce overheads.

I ended up dividing the sectors into 256-byte blocks.  I figured that was a nice round figure (in the binary sense) to work with.  At the moment we have 16 bytes of configuration data, so I could do with a lot less, but I expect this to grow.  The blocks will need a header to tell you whether or not the block is being used.  Some checksumming is usually not a bad idea either, since that will clue you in to when the sector has worn out prematurely.  So some of the data in each block will be header data for our virtual EEPROM.

If we didn’t care about erase cycles, we could simply make every block a data block.  However, it’d be wise to track the number of erases and avoid erasing, then attempting to use, a depleted sector, so we need somewhere to keep that count.  256 bytes gives us enough space to stash an erase counter and a map of which blocks are in use within that sector.

So we’ll reserve the first block in the sector to act as this index for the entire sector.  This gives us enough room to have 16-bits worth of flags for each block stored in the index.  That gives us 63 blocks per sector for data use.

It’d be handy to be able to use this flash region for a few virtual EEPROMs, so we’ll allocate some space to give us a virtual ROM ID.  It is prudent to do some checksumming, and the STM32F4 has a CRC32 module, so in that goes, and we might choose to not use all of a block, so we should throw in a size field (8 bits, since the size can’t be bigger than 255).  If we pad this out a bit to give us a byte for reserved data, we get a header with the following structure:

        Bits 15…8          Bits 7…0
  +0    CRC32 Checksum
  +2    (CRC32 Checksum, continued)
  +4    ROM ID             Block Index
  +6    Block Size         Reserved

So that subtracts 8 bytes from the 256 bytes, leaving us 248 for actual program data. If we want to store 320 bytes, we use two blocks, block index 0 stores bytes 0…247 and has a size of 248, and block index 1 stores bytes 248…319 and has a size of 72.
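
Expressed as a C structure, the block header might look like the following sketch.  The field names are mine, not necessarily those used in the Codec2 sources, and the byte order within each 16-bit word of the table above depends on how you serialise it:

#include <stdint.h>

/* Sketch of the 8-byte block header described above. */
struct vrom_block_header {
    uint32_t crc32;     /* CRC32 of the block, computed with this field
                           set to zero */
    uint8_t  rom;       /* virtual ROM ID */
    uint8_t  idx;       /* block index within that ROM image */
    uint8_t  size;      /* bytes of valid data in this block (up to 248) */
    uint8_t  reserved;  /* padding, left as 0xff */
} __attribute__((packed));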

I mentioned there being a sector header, it looks like this:

        Bits 15…0
  +0    Program Cycles Remaining
  +2    (Program Cycles Remaining, continued)
  +4    (continued)
  +6    (continued)
  +8    Block 0 flags
  +10   Block 1 flags
  +12   Block 2 flags
  …     (one flags word per block)
No checksums here, because this block is constantly changing.  We can’t re-write a CRC without erasing the entire sector, and we don’t want to do that unless we have to.  The flags for each block are currently allocated accordingly:

        Bits 15…1          Bit 0
  +0    Reserved           In use

When the sector is erased, all blocks show up as having all flags set to ones, so the flags are considered “inverted”.  When we come to use a block, we mark the “in use” bit with a zero, leaving the rest as ones.  When we erase, we mark the entire flags word as zeros.  We can set other bits here as needed for accounting purposes.
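
A matching C sketch of the sector header and its flags (again, the names are mine):

#include <stdint.h>

/* Sketch of the sector header, which lives in block 0 of each sector. */
#define VROM_FLAG_IN_USE (1u << 0)  /* inverted sense: 1 = free, 0 = in use */

struct vrom_sector_header {
    uint64_t cycles_remaining;  /* +0 through +6: program-erase cycles left */
    uint16_t flags[63];         /* +8 onwards: one flags word per data block */
} __attribute__((packed));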

Thus we have now a format for our flash sector header, and for our block headers.  We can move onto the algorithm.

Step 3: The Code

This is the implementation of the above ideas.  Our code needs to worry about 3 basic operations:

  • reading
  • writing
  • erasing

This is good enough if the size of a ROM image doesn’t change (the normal case).  For flexibility, I made my code work crudely like a file: you can seek to any point in the ROM image and start reading or writing, or you can blow the whole thing away.
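
So the driver’s public face ends up being a handful of file-like operations.  These prototypes are my paraphrase of that interface, not the exact Codec2 API:

#include <stdint.h>

/* Hypothetical API sketch.  Each call operates on 'size' bytes of ROM
   image 'rom' starting at byte 'offset', and returns the number of
   bytes handled or a negative error code. */
int vrom_read(uint8_t rom, uint16_t offset, uint16_t size, uint8_t *out);
int vrom_write(uint8_t rom, uint16_t offset, uint16_t size, const uint8_t *in);
int vrom_erase(uint8_t rom);    /* blow the whole ROM image away */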

Constants

It is bad taste to leave magic numbers everywhere, so constants should be used to represent some quantities:

  • VROM_SECT_SZ=16384:
    The virtual ROM sector size in bytes.  (Those watching Codec2 Subversion will note I cocked this one up at first.)
  • VROM_SECT_CNT=3:
    The number of sectors.
  • VROM_BLOCK_SZ=256:
    The size of a block.
  • VROM_START_ADDR=0x08004000:
    The address where the virtual ROM starts in flash.
  • VROM_START_SECT=1:
    The base sector number where our ROM starts.
  • VROM_MAX_CYCLES=10000:
    Our maximum number of program-erase cycles.

Our programming environment may also define some, for example UINTx_MAX.

Derived constants

From the above, we can determine:

  • VROM_DATA_SZ = VROM_BLOCK_SZ - sizeof(block_header):
    The amount of data per block.
  • VROM_BLOCK_CNT = VROM_SECT_SZ / VROM_BLOCK_SZ:
    The number of blocks per sector, including the index block.
  • VROM_SECT_APP_BLOCK_CNT = VROM_BLOCK_CNT - 1:
    The number of application blocks per sector (i.e. total minus the index block).
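
Pulled together as C preprocessor definitions, the lot looks something like this (a sketch, using the names from the lists above):

/* Base constants. */
#define VROM_SECT_SZ     16384        /* sector size in bytes */
#define VROM_SECT_CNT    3            /* number of sectors */
#define VROM_BLOCK_SZ    256          /* block size in bytes */
#define VROM_START_ADDR  0x08004000u  /* where the virtual ROM starts */
#define VROM_START_SECT  1            /* base physical sector number */
#define VROM_MAX_CYCLES  10000        /* program-erase endurance */

/* Derived constants. */
#define VROM_DATA_SZ  (VROM_BLOCK_SZ - sizeof(struct vrom_block_header))
#define VROM_BLOCK_CNT  (VROM_SECT_SZ / VROM_BLOCK_SZ)
#define VROM_SECT_APP_BLOCK_CNT  (VROM_BLOCK_CNT - 1)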

CRC32 computation

I decided to use the STM32’s CRC module for this, which takes its data in 32-bit words.  There’s also the complexity of checking the contents of a structure that includes its own CRC.  I played around with Python’s crcmod module, but couldn’t find an arrangement that would allow the CRC to remain in place while being verified.

So I copy the entire block, headers and all to a temporary copy (on the stack), set the CRC field to zero in the header, then compute the CRC. Since I need to read it in 32-bit words, I pack 4 bytes into a word, big-endian style. In cases where I have less than 4 bytes, the least-significant bits are left at zero.
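
As a sketch of that, assuming the CMSIS register definitions for the STM32F4 (CRC->CR and CRC->DR) and a caller that has already zeroed the CRC field in its temporary copy of the block:

#include <stdint.h>
#include "stm32f4xx.h"      /* CMSIS definitions for the CRC peripheral */

/* Sketch: feed a buffer through the STM32F4 CRC unit, packing bytes
   big-endian into 32-bit words; short tails are zero-padded in the
   least significant bits.  The function name is mine. */
static uint32_t vrom_crc32(const uint8_t *data, uint32_t len)
{
    CRC->CR = CRC_CR_RESET;             /* reset the accumulator */
    while (len) {
        uint32_t word = 0;
        uint32_t n = (len < 4) ? len : 4;
        for (uint32_t i = 0; i < n; i++)
            word |= (uint32_t)data[i] << (24 - 8 * i);
        CRC->DR = word;                 /* accumulate one word */
        data += n;
        len  -= n;
    }
    return CRC->DR;                     /* read back the result */
}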

Locating blocks

We identify each block in an image by its ROM ID and block index.  We need to search for these when requested, as they can be located literally anywhere in flash.  There are probably cleverer ways to do this, but I chose the brute-force method: cycle through each sector and block, check whether the block is allocated (in the index), check the checksum, check that it belongs to the ROM we’re looking for, then check that it’s the right index.
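
In C, that brute-force search might look like this sketch, where vrom_block(), vrom_block_in_use() and vrom_crc_ok() are hypothetical stand-ins for “get a pointer to block b of sector s”, “check its flag in the index block” and “verify the checksum”:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helpers, declared so the sketch stands alone. */
extern const struct vrom_block_header *vrom_block(uint8_t sect, uint8_t blk);
extern int vrom_block_in_use(uint8_t sect, uint8_t blk);
extern int vrom_crc_ok(const struct vrom_block_header *hdr);

/* Sketch: locate block 'idx' of ROM image 'rom', wherever it lives. */
static const struct vrom_block_header *vrom_find(uint8_t rom, uint8_t idx)
{
    for (uint8_t s = 0; s < VROM_SECT_CNT; s++) {
        for (uint8_t b = 1; b < VROM_BLOCK_CNT; b++) {  /* block 0 = index */
            const struct vrom_block_header *hdr = vrom_block(s, b);
            if (vrom_block_in_use(s, b) && vrom_crc_ok(hdr)
                    && hdr->rom == rom && hdr->idx == idx)
                return hdr;
        }
    }
    return NULL;        /* no such block anywhere in flash */
}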

Reading data

To read from the above scheme, having been told a ROM ID (rom), a start offset and a size (the latter two in bytes), and given a buffer we’ll call out, we first need to translate the start offset to a block index and an offset within that block.  This is simple integer division and modulus.
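
In C terms, that translation is just the following (the sector itself falls out of the block search, since blocks can live anywhere):

#include <stdint.h>

/* Sketch: map a byte offset within the ROM image onto a logical block
   index and an offset within that block. */
static void vrom_offset_to_block(uint16_t offset, uint8_t *blk, uint16_t *off)
{
    *blk = offset / VROM_DATA_SZ;   /* which logical block */
    *off = offset % VROM_DATA_SZ;   /* where within that block */
}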

The first and last blocks of our read we’ll probably only read part of; the rest we’ll read in as whole blocks.  The block offset is only relevant for the first block.

So we start at the block we calculate to have the start of our data range.  If we can’t find it, or it’s too small, then we stop there, otherwise, we proceed to read out the data.  Until we run out of data to read, we increment the block index, try to locate the block, and if found, copy its data out.

Writing and Erasing

Writing is a similar affair.  We look for each block; if we find one, we “overwrite” it by copying the old data to a temporary buffer, copying our new data in over the top, then marking the old block as obsolete before writing the new one out with a new checksum.

The trickery is in invoking the wear-levelling algorithm on an as-needed basis.  We mark a block obsolete by setting its header fields to zero, but when we run out of free blocks, we go looking for sectors that are full of obsolete blocks waiting to be erased.  When we encounter a sector that has been erased, we write a new sector header at the start and proceed to use its first data block.

In the case of erasing, we don’t bother writing anything out; we just mark the blocks as obsolete.
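
Marking a block obsolete is itself just another program operation, since it only ever clears bits.  A sketch, with flash_program() standing in (hypothetically) for the usual STM32 unlock/program/lock sequence:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical flash programming routine: unlock, program, re-lock. */
extern void flash_program(uint32_t addr, const uint8_t *data, size_t len);

/* Sketch: obsolete a block by programming its 8-byte header to zeros. */
static void vrom_obsolete(const struct vrom_block_header *hdr)
{
    static const uint8_t zeros[8] = { 0 };
    flash_program((uint32_t)(uintptr_t)hdr, zeros, sizeof(zeros));
}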

Implementation

The full C code is in the Codec2 Subversion repository.  For those who prefer Git, I have a git-svn mirror (yes, I really should move it off that domain).  The code is available under the Lesser GNU General Public License v2.1 and may be ported to run on any CPU you like, not just ST’s.