May 08 2022

So, this is quite sad news… I learned on Friday morning that one of Brisbane’s longest-serving radio stations will be taken over by new management and will change its format from being a “classic hits” music station to being a 24/7 sports coverage station.

It had been operated by the Australian Radio Network, who had recently merged with a rival network, Grant Broadcasters, picking up a portfolio that included a number of other Greater Brisbane region stations. This tipped them over the regulatory limit on station ownership, so they had to let one go; the unlucky victim was their oldest: 4KQ.

Now, you’re thinking: big deal, there are lots of radio stations out there, including Internet radio. Here’s why this matters. Back in the 90s, pretty much all of the stations here in Brisbane were locally run. They might’ve been part of a wider network, but generally, decisions about shows and music were made by people in this area. Lots of songs were hits only in Brisbane; some never made the music charts anywhere else world-wide. But here in Brisbane, we requested those songs.

Sometimes the artists knew about this, sometimes not.

Over time, other stations have adjusted their format, and in many cases, abandoned local programming, doing everything from Sydney and Melbourne. Southern Cross Austereo tried this with Triple M years ago, and in the end they had to reverse the decision as their ratings tanked and complaints inundated the station.

4KQ was one of the last stations to keep local programming. I’m not sure how many still do, but this station in particular was unique amongst the offerings in this area due to its wide coverage of popular music spanning 1960–1995, and in particular, its focus on the Brisbane top-40 charts.

Some of the radio programs too were great: Brent James in particular had an art for painting a picture of Brisbane at that time, for people who were there to experience it, those who missed out because they lived someplace else, and people like myself who were either too young to remember or not alive at the time in the first place. A lot of their other staff had a wealth of music knowledge and trivia — yes, you can reproduce the play lists with your own music collection, but the stories behind the hits are harder to replicate. Laurel Edwards is due to celebrate her 30th year with the station — that’s a long commitment, and it’s sad to think that this will be her last, through no fault or decision of her own.

Its loss as a music station is a major blow to the history of this city. To paraphrase Joni Mitchell: they’ve torn down Festival Hall to put up an apartment block!

A new normal

The question is, where to now? The real sad bit is that this was a successful station that was only culled because of a regulatory compliance issue: ARN now had too many stations in the Greater Brisbane area, and had to let one go. They reluctantly put it up for sale, and sure enough, a buyer took it, but that buyer was not interested in preserving anything other than the frequency, license and broadcast equipment.

In some ways, AM is a better fit for the yap-fest that is SEN-Q. They presently broadcast on DAB+ at 24kbps in essentially AM-radio quality. 4KQ has always been a MW station, originally transmitting at 650kHz back in 1947, moving to 690kHz a year later… then getting shuffled up 3kHz to its present-day 693kHz in 1978 when the authorities (in their wisdom at the time) decided to “make room” by moving all stations to a 9kHz spacing.

Music has never been a particularly good fit for AM radio, but back in 1947 that was the only viable option. FM did exist thanks to the work of Edwin Armstrong, but his patents were still active back then and the more complicated system was less favourable to radio manufacturers at a time when few could afford a radio (or the receiver license to operate it). So AM it was for most broadcasters of that time. “FM radio” as we know it today, wouldn’t come into existence in Brisbane until around 1980, by which time 4KQ was well-and-truly established.

The question remains though… ratings were pretty good, clearly there is demand for such a station. They had a winning formula. Could an independent station carry forward their legacy?

The options

So, in July we’ll have to get used to a new status-quo. It’s not known how long this will last. I am not advocating vigilante action against the new owners. The question will be, is there enough support for a phoenix to rise out of the ashes, and if so, how?

Existing station adopting 4KQ’s old format?

This might happen. I’m not sure who would be willing to throw out what they have now to try this out, but it may be an option. There are a few stations that might be “close enough” to absorb such a change:

  • 4BH (1116kHz AM) does specialise in the “older” music, but it tends to be the softer “easy listening” stuff; they don’t do the heavier stuff that 4KQ and others do (e.g. you won’t hear AC/DC).
  • KIIS 97.3 (97.3MHz FM) was 4KQ’s sister station; at present they only do music from the 80s onwards.
  • Triple M (104.5MHz FM) would be their closest competitor. They still do some 60s-80s stuff, but they’re more focused on today’s music. There’s a sister station, Triple M Classic Rock (202.928MHz DAB+), but that is an interstate station with no regional focus.
  • Outside of Brisbane, River 94.9 (94.9MHz FM) in Ipswich would be the closest to 4KQ. They make frequent mentions of 4IP and its charts. Alas, they are likely beaming west, as they are not receivable in this part of Brisbane at least. (VK4RAI on the other hand, located on the same tower, can be received, and worked, from here… so maybe it’s just a case of more transmit power and a new antenna to service Brisbane?)

I did a tune-around the other day and didn’t hear anything comparable beyond those.

An interesting aside: 4IP of course was the hit station of its day. These days, if you look up that call-sign, you get directed to RadioTAB… another sports radio station network. Ironic that its old rival now meets the same fate at the hands of another sports radio network.

A new station?

Could enough of us band together and start afresh? Well, this will be tough. It’d be a nice thing if we could, and maybe provide work for those who started the year thinking their job was mostly secure only to find they’ve got two more months left… but the tricky bit is we’re starting from scratch.

FM station?

A new FM station might be ideal in terms of suiting the format, and I did look into this. Alas, it’s not going to happen unless there’s a sacrifice of some sort. I did a search on the ACMA license database, putting in Mt. Coot-tha as the location (the likely position of a hypothetical transmitter; I think I chose the Ch 9 site, but any on that hill will do), giving a radius of 200km and a frequency range of 87-109MHz.

Broadcast FM radio stations are typically spaced out every 800kHz: 87.7MHz, 88.5MHz, 89.3MHz, … etc. Every such frequency was either directly taken, or had a station within 400kHz of it. Even if the frequency “sounded” clear, it was likely being used by a station I could not receive. A good number of them are operated by churches and community centres, likely low-power narrowcast stations.

The FM broadcast band, as seen from a roof-top 2m “flower pot” in The Gap.
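
Out of curiosity, here’s a rough sketch of that raster check in Python, assuming the 800kHz spacing and 400kHz guard described above (the “taken” list here is illustrative, not the actual ACMA data):

taken_mhz = [97.3, 104.5, 105.3]  # illustrative; the real list is far longer

f = 87.7
while f < 109.0:
    clear = all(abs(f - t) >= 0.4 for t in taken_mhz)
    print('%5.1f MHz: %s' % (f, 'possibly clear' if clear else 'taken or adjacent'))
    f = round(f + 0.8, 1)

With the real license data plugged in, every candidate comes up “taken or adjacent”, which is exactly the result described above.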

There are only two ways a new station can spring up on FM in the Brisbane area:

  • an existing station closes down, relinquishing the frequency
  • all the existing stations reduce their deviation, allowing for new stations to be inserted in between the existing ones

The first is not likely to happen. Let’s consider the latter option though. FM bandwidth is largely decided by the deviation: the modulating signal, as it swings from its minimum trough to its maximum peak, causes the carrier of the transmitter to deviate above or below its nominal frequency in proportion to the input signal amplitude. Sometimes the deviation is almost identical to the bandwidth of the modulating signal (narrowband FM), sometimes it’s much greater (wideband FM).
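
Carson’s rule puts rough numbers on this: occupied bandwidth is approximately 2 × (peak deviation + highest modulating frequency). A quick sketch, assuming the commonly-cited ±75kHz peak deviation for broadcast FM and a stereo multiplex extending to about 53kHz:

def carson_bandwidth(deviation_hz, max_mod_hz):
    # Carson's rule: BW ~= 2 * (peak deviation + highest modulating frequency)
    return 2 * (deviation_hz + max_mod_hz)

print(carson_bandwidth(75e3, 53e3))    # ~256 kHz at full deviation
print(carson_bandwidth(37.5e3, 53e3))  # ~181 kHz with deviation halved

Note that halving the deviation does not halve the bandwidth: the multiplex term stays fixed, which hints at why simply squeezing stations closer together is not free.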

UHF CB radios, for example, deviate either 2.5kHz or 5kHz, depending on whether the radio is a newer “80-channel” device or an older “40-channel” device. This is narrowband FM. When the ACMA decided to “make room” on UHF CB, they did so by “grandfathering” the old 40-channel class license, and decreeing that new “80-channel” sets are to use a 2.5kHz deviation instead of 5kHz. This halved the “size” of each channel. In between each 40-channel frequency, they inserted a new 80-channel frequency.

This is simple enough with a narrowband FM signal like UHF CB. There’s no sub-carriers to worry about, and it’s not high-fidelity, just plain old analogue voice.

Analogue television used FM for its audio, and in later years, did so in stereo. I’m not sure what the deviation is for broadcast FM radio or television, but I do know that the deviation used for television audio is narrower than that used for FM radio. So evidently, FM stereo stations could possibly have their deviation reduced, and still transmit a stereo signal. I’m not sure what the trade-off of that would be though. TV stations didn’t have to worry about mobile receivers, and most viewers were using dedicated, directional antennas which better handled multi-path propagation (which would otherwise cause ghosting).

Also, to my knowledge, while TV stations did transmit sub-carriers for stereo audio, they didn’t transmit RDS like FM radio stations do. Reducing the deviation may have implications on signal robustness for mobile users and for over-the-air services like RDS. I don’t know.

That said, let’s suppose it could be done, and say Triple M (104.5MHz) and B105 (105.3MHz) decided to drop their deviation by half: we could then maybe squeeze a new station in at 104.1MHz. The apparent “volume” of the other two stations would drop by maybe 3dB, so people would need to turn their volume knobs up higher, but it might work.

I do not know whether this is technically possible, though. In short, I think we can consider a new FM station a pipe dream that is unlikely to happen.

New AM station?

A new AM station might be more doable. A cursory look at the same database, putting in much the same parameters but this time with a 300km radius and a frequency range of 500kHz-1.7MHz, seems to suggest there are lots of seemingly “unallocated” 9kHz slots. I don’t know what the frequency allocation strategy is for AM stations within a geographic area. I used a wider radius because MW stations do propagate quite far at night: I can pick up 4BU in Bundaberg and ABC Radio Emerald from my home.

The tricky bit is physically setting up the transmitter. MW transmitters are big, and use lots of power. 4KQ for example transmitted 10kW during daylight hours. Given it’s a linear PA in that transmitter, that means it’s consuming 20kW, and when it hits a “peak” it will want that power now!

The antennas are necessarily large; 693kHz has a wavelength of 432m, so a ¼-wave groundplane is going to be in the order of 100m tall. You can compromise that a bit with some clever engineering (e.g. see 4QR’s transmitter site off the Bruce Highway at Bald Hills — guess what the capacitance hat on the top is for!) but nothing will shrink that antenna into something that will fit a suburban back yard.
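
The arithmetic behind that ballpark figure:

# Quarter-wave vertical for 4KQ's frequency
c = 299_792_458.0      # speed of light, m/s
f_hz = 693e3           # 693 kHz
wavelength = c / f_hz  # ~433 m
print('wavelength %.0f m, quarter-wave %.0f m' % (wavelength, wavelength / 4))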

You will need a big open area to erect the antenna, and that antenna will need an extensive groundplane installed in the ground. The stay-wires holding the mast up will also need a big clearance from the fence as they will be live! Then you’ve got to keep the transmitter fed with the power it demands.

Finding a place is going to be a challenge. It doesn’t have to be elevated for MW like it does for VHF services (FM broadcast, DAB+), but the sheer size of the area needed will make purchasing the land expensive.

And you’ve got to consider your potential neighbours too, some of whom may have valid concerns about the transmitter: not liking the appearance of a big tower “in their back yard”, concerns about interference, concerns about “health effects”… etc.

DAB+?

This could be more doable. I don’t know what the costs would be, and the big downsides are that DAB+ radios are more expensive, and the DAB+ signal is more fragile (particularly when mobile). Audio quality would be much better than AM, but not quite as good as FM (in my opinion).

It’d basically be a case of opening an account with Digital Radio Broadcasting Pty Ltd, who operate the Channel 9A (202.928MHz) and Channel 9B (204.64MHz) transmitters. Then presumably, we’d have to encode our audio stream as HE-AAC and stream it to them somehow, possibly over the Internet.

The prevalence of “pop-up” stations seems to suggest this method may be comparatively cost-effective for larger audiences compared to commissioning and running our own dedicated transmitter, since the price does not change whether we have 10 listeners or 10000: it’s one stream going to the transmitter, then from there, the same signal is radiated out to all.

Internet streaming?

Well, this really isn’t radio, it’s an audio stream on a website at this point. The listener will need an Internet connection of their own, and you, the station operator, will be paying for each listener that connects. The listener pays too: their ISP will bill them for data usage.

A 64kbps audio stream will consume around 230MB every 8 hours. If you stream it during your typical 8-hour work day, think a CD landing on your desk every 3 days. That’s the data you’re consuming. That data needs to be paid for, because each listener will have their own stream. If there’s only a dozen or so listeners, Internet radio wins … but if things get big (and 4KQ’s listenership was big), it’ll get expensive fast.
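
Putting rough numbers on that, and on how it scales with audience size (assuming each listener pulls their own unicast stream):

bitrate_bps = 64_000                    # 64 kbps stream
day_bytes = bitrate_bps / 8 * 8 * 3600  # one 8-hour work day: ~230 MB
print('%.0f MB per listener per day' % (day_bytes / 1e6))

for listeners in (12, 1000, 10000):
    print('%5d listeners: %7.1f GB/day served' % (listeners, listeners * day_bytes / 1e9))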

The other downside is that some listeners may not have an Internet connection, or the technical know-how to stream a radio station. I for example, do not have Internet access when riding the bicycle, so Internet radio is a no-go in that situation. I also refuse to stream Internet radio at work as I do not believe I should be using a workplace Internet connection for personal entertainment.

Staff?

The elephant in the room is staffing… there’s a workforce that kept 4KQ going who would soon be out of work; would they still be around if such a station were to materialise in the near future? I don’t know. Some of the announcers may want a new position in the field, others may be willing to go back to other vocations, and some are of an age where they may decide hanging up the headphones sounds tempting.

I guess that will be a decision for each person involved. For the listeners though, we’ve come to know these people, and will miss hearing from them if they do wind up not returning to the air.

In the meantime

What am I doing now? Well, not saving up for a broadcast radio license (as much as my 5-year-old self would be disgusted at me passing up such an opportunity). I am expanding my music collection… and I guess over the next two months, I’ll be taking special note of songs I listen to that aren’t in my collection so I can chase down copies: ideally CDs or FLAC recordings (legally purchased of course!)… or LPs if CDs are too difficult.

Record companies and artists could help here — there are services like ZDigital that allow people to purchase and download individual songs or full albums in FLAC format. There are also lots of albums that were released decades ago, that have not been re-released by record companies. Sometimes record companies don’t release particular songs because they seemingly “weren’t popular”, or were popular in only a few specific geographic areas (like Brisbane).

People like us do not want to pirate music. We want to support the artists. Their songs did get played on radio, and still do; but may not be for much longer. Not everything is on Spotify, and sometimes that big yellow taxi has a habit of taking those hits away that you previously purchased. They could help themselves, and the artists they represent, by releasing some of these “less popular” songs as FLAC recordings for people to purchase. (Or MP3 if they really insist… but some of us prefer FLAC for archival copies.)

The songs have been produced, the recordings already exist; it seems it’s little skin off their nose to just release them as digital-only singles on these purchase-for-download platforms. I can understand not wanting to spend money pressing discs and having to market and ship them, but a file? Some emails, a few signed agreements and one file transfer, and it’s done. Not complicated or expensive.

Please, help us help you.

Anyway… I guess I have a shopping list to compile.

Jan 22 2022

Well, some might recall a few years ago I was trying ideas for cycle clothing, and later followed up with some findings.

My situation has changed a bit… the death of a former work colleague shook me up quite a bit, and while I have been riding, I haven’t been doing it nearly as much. Then, COVID-19 reared its ugly head.

Suffice to say, my commute is now one side of the bedroom to the other. Right at this moment, I’m in self-imposed lockdown until I can get my booster shot: I had my second AstraZeneca shot on the 4th November, and the Queensland Government has moved the booster shots to being 3 months after the second shot, so for me, that means I’m due on the 4th February. I’m already booked in with a local chemist here in The Gap; I did that weeks ago so that the appointment would be nailed to the floor, and thus I’m currently doing everything in my power to ensure that appointment goes ahead on time.

I haven’t been on the bike much at all. That doesn’t mean though that I stop thinking about how I can make my ride more comfortable.

Castle Clothing Coveralls

Yes, I’m the one clad in yellow far left.

They had quite a few positives:

  • They were great in wet weather
  • They were great in ambient temperatures below 20°C
  • The pocket was handy for storing keys/a phone/a wallet
  • They had good visibility day and night
  • They kept the wind out well. (On the Main Range, Thredbo Top Station was reporting 87km/hr wind gusts that day.)

But, they weren’t without their issues:

  • They’re (unsurprisingly) no good on a sunny summer’s day (on the day that photo was taken, it was borderline too hot; the weather prediction was for showers, and those didn’t happen)
  • They’re knackered after about 30 washes or so: the outer waterproof layer peels off the lining
  • In intermittent rain / sunshine, they’d keep you dry during the rainy bit, but when the sun came out, you’d get steamed

To cap it off, they’re no longer being manufactured. Castle Clothing have basically canned them. They’ve got a plain yellow version with no stripes, but otherwise, nothing like their old product. I wound up buying 4 of them in the end… the first two had to be chucked because of the aforementioned peeling problem; the other two are in good condition now, but eventually they’ll need replacement.

Mammoth Workwear do have some alternatives. The “Supertouch” ones I have tried; they’re even shorter-lived than the Castle ones, and feel like wearing a plastic bag. The others are either not night-time visible, or they’re lined for winter use.

So, back to research again.

Zentai suits?

Now, I know I’ve said previously I’m no MAMIL… and for the most part I stand by this. I did try wearing a stinger suit on the bike once… on the plus side they are very breathable, so quite comfortable to ride in. BUT, three negatives with stinger suits: the attached gloves are mittens rather than proper multi-finger gloves, the colour range is very limited, and they’re usually plastered with logos.

That got me thinking, what’s the difference between a stinger suit and an open-face zentai suit? Not a lot. The zentai suit, if it has gloves, can be bought as a “mitten” or (more commonly) a proper multi-finger glove version. They come in a lot more colours than a stinger suit does. They’re about the same price. And there’s no logos, just plain colours (or you can do various patterns/designs if that’s your thing).

A downside is that the zipper is at the back, which means answering calls from nature is more difficult. But then again, some stinger suits and most wetsuits also feature a back-entry.

I’ve got two coming to try the idea out. I suspect they’ll get worn over other clothing, I’ll just duck into a loo, take my shirt off, put the zentai suit on, then jump on the bike to ride to my destination… that way my shirt isn’t soaked with sweat. We’ll see.

One is a black one, which was primarily bought to replace one of the stinger suits for swimming activities, but it also lets me evaluate the fabric (it is the usual lycra material).

The other is a silver one (thus a lycra/latex blend), to try out the visibility — it’ll be interesting to see whether it’s somewhat water-repellent due to the latex mix in the material, and see what effect this has on sweat.

Both of these are open-face! You should never try swimming with a full-face zentai suit. I can’t imagine getting caught in the rain ending well either, and the ability to see where you’re going is paramount when operating any vehicle (especially a bicycle)!

They’ll turn up in a week or two, I can try them out then. Maybe won’t be the final solution, but it may answer a few questions.

Heavy Wet/Cold weather gear

So, with the lighter-weight class out of the way, that turns my attention to what to do in truly foul weather, or just bitterly cold weather.

Now, let me define the latter: low single digits °C. Possibly with a westerly breeze carrying it. For some reading this, this will feel like a hot summer’s day, but for those of us in Brisbane, temperatures this low are what we see in the middle of winter.

The waterproof overalls I was wearing before worked well in dry-but-cold weather; however, I did note my hands copped the cold… I needed gloves. The ends of the legs also could get tangled with the chain if I wasn’t careful, and my shoes would still get wet. Rigger boots work okay for this, but they’re hard to come by.

I happened to stumble on Sujuvat ratkaisut Oy, who do specialist wet-weather clothing meant for Europe. Meeko (who runs the site) has a commercial relationship with a few manufacturers, notably AJGroup who supply the material for a lot of Meeko’s “extreme” range.

The suits are a variant of PVC, which will mean they’re less breathable than what I have now, but should also mean they’re a lot more durable. There’s a decent range of colours available, with many options having the possibility of reflective bands, attached gloves and attached wellington boots. It’s worth noting the BikeSuit (no longer available) I was looking at 8 years ago was also a PVC outfit.

In the winter time, the big problem is not so much sweat, but rather, sweat being hit by wind-chill. Thus I’m ordering one of the Extreme Drainage Coveralls to try them out.

I’ve seen something similar out of AliExpress, however the options there are often built for the Chinese market… so rarely feature size options that fit someone like myself. Most of the Chinese ones are dark colours, with one “tan”-coloured option listed, and a couple of rubber ones that were lighter colours (a dark “pink”, and a yellow). Some of the rubber ones also had a strange opening arrangement: a tube opening in the stomach, which you pulled yourself through, then clamped shut with a peg. Innovative, but looks very untidy and just begging to get caught in something! I’ll stick with something a bit more conventional.

The coverall I’m ordering will be a 500g/m² white fabric… so about one and a half times the weight of my current Castle workwear overalls (which are about 330g/m²), and will have the gloves and boots attached. I’m curious to see how that’s done up close, and see how it works out in my use case.

Being white rather than yellow/orange will make them less visible in the day time, but I suspect this won’t be much of an issue as it’s night-time visibility I’m particularly after. Also, being white instead of a “strong” fluro colour will likely be better at horse endurance rides, as horses tend to react to fluro colours.

The zip arrangement intrigues me as well… it’s been placed up high so that you can pretty much wade into water up to your chest and not get wet. There’s a lighter-weight option of the same suit, however with fewer options for colours. If the extreme version doesn’t work out for cycling, I might look at this alternative (the bike doesn’t react to strong colours like a horse does).

There’s about a 2-month lead-time on this gear because it’s made-to-order, a reasonable trade-off given you more-or-less get it made exactly how you want it. Looking around, I’m seeing off-the-shelf non-customisable outfits at AU$400 a pop, so €160 (~AU$252) is looking like a good option.

The fact that this is being run as a small side-hustle is commendable. I look forward to seeing the product.

Jan 03 2022

So, this year I had a new-year’s resolution of sorts… when we first started this “work from home” journey due to China’s “gift”, I just temporarily set up on the dinner table, which was, of course, meant to be for just another few months.

Well, nearly 2 years later, we’re still working from home, and work has expanded to the point that a move to the office, on any permanent basis, is pretty much impossible now unless the business moves to a bigger building. With this in mind, I decided I’d clear off the dinner table, and clean up my room sufficiently to set up my workstation in there.

That meant re-arranging some things, but for the most part, I had the space already. So some stuff I wasn’t using got thrown into boxes to be moved into the garage. My CD collection similarly got moved into the garage (I have it on the computer, but need to retain the physical discs as they represent my “personal use license”), and lo and behold, I could set up my workstation.

The new workspace

One of my colleagues spotted the Indy and commented about the old classic SGI logo. Some might notice there’s also an O2 lurking in the shadows. Those who have known me for a while, will remember I did help maintain a Linux distribution for these classic machines, among others, and had a reasonable collection of my own:

My Indy, O2 and Indigo2 R10000
The Octane, booting up

These machines were all eBay purchases, as is the Sun monitor pictured (it came with the Octane). Sadly, fast forward a number of years, and these machines are mostly door stops and paperweights now.

The Octane’s and Indigo2’s demises

The Octane died when I tried to clean it out with a vacuum cleaner, without realising the effect of static electricity generated by the vacuum cleaner itself. I might add mine was a particularly old unit: it had a 175MHz R10000 CPU, and I remember the Linux kernel didn’t recognise the power management circuitry in the PSU without me manually patching it.

The Indigo2 mysteriously stopped working without any clear reason why, I’ve never actually tried to diagnose the issue.

That left the Indy and the O2 as working machines. I hadn’t fired them up in a long time, until today. I figured, what the hell, do they still run?

Trying the Indy and O2 out

Plug in the Indy, hit the power button… nothing. Dead as a doornail. Okay, put it aside… what about the O2?

I plug it in, shuffle it closer to the monitor so I can connect it. ‘Lo and behold:

The O2 lives!

Of course, the machine was set up to use a serial console as its primary interface, and boot-up was running very slowly.

Booting up… very slowly…

It sat there like that for a while. Figuring the action was happening on a serial port, I went to get my null modem cable, only to find a log-in prompt by the time I got back.

Next was remembering what password I was using when I last ran this machine. The OpenSSL Heartbleed vulnerability happened since then, and at about that time I revoked all OpenPGP keys and changed all passwords, so it isn’t what I use today. I couldn’t get in as root, but my regular user account worked, and I was able to change the root password via sudo.

Remembering my old log-in credentials, from 22 years ago it seems

The machine soon crashed after that. I tried rebooting; this time I tweaked some PROM settings (and yes, I was rusty remembering how to do it) to be able to see what was going on. (I had the null modem cable in hand, but didn’t feel like trying to blindly find the serial port at the back of my desktop.)

Changing PROM settings
The subsequent boot, and crash

Evidently, I had a dud disk. This did not surprise me in the slightest. I also noticed the PSU fan was not spinning, possibly seized after years of non-use.

Okay, there were two disks installed in this machine, both 80-pin SCA SCSI drives. Which one was it? I took a punt and tried the one furthest away from the I/O ports.

Success, she boots now

I managed to reset the root password, before the machine powered itself off (possibly because of overheating). I suspect the machine will need the dust blown out of it (safely! — not using the method that killed the Octane!), and the HDDs will need replacements. The guilty culprit was this one (which I guessed correctly first go):

a 4GB HDD was a big drive back in 1998!

The computer I’m typing this on has a HDD that could store 1000 of these drives. Today, there are modern alternatives, such as SCSI2SD, that could get this machine running fully if needed. The tricky bit would be handling the 80-pin hot-swap interface. There’d be some hardware hacking needed to connect the two, but AU$145 plus an adaptor seems like a safer bet than trying some random used HDD.

So, a replacement for the HDDs, a clean-out, and possibly a new fan or two, and that machine will be back to “working” state. Of course the Linux landscape has moved on since then: Debian no longer supports the MIPS4 ISA that the RM5200 CPU understands, though Gentoo could still work on this, and maybe OpenBSD still supports it too. In short, this machine is enough of a “go-er” that it should not be sent to land-fill… yet.

Turning my attention back to the Indy

So the Indy was my first SGI machine. Bought to better understand the MIPS processor architecture, and perhaps gain enough understanding to try and breathe life into a Cobalt Qube II server appliance (remember those?), it did teach me a lot about Linux and how things vary between platforms.

I figured I might as well pop the cover and see if there’s anything “obviously” wrong. I was rusty on the procedure, but I recalled there was a little catch on the back of the case that needed to be released before the cover slid off. So I lugged the 20″ CRT off the top of the machine, pulled the non-functioning Indy out, and put it on the table to inspect further.

Upon trying to pop the cover (gently!), the top of the case just exploded. Two pieces of the top cover go flying, and the RF shield parts company with the cover’s underside.

The RF shield parted company with the underside of the lid

I was left with a handful of small plastic fragments that were the heat-set posts holding the RF shield to the inside of the lid.

Some of the fragments that once held the RF shield in place

Clearly, the plastic has become brittle over the years. These machines were released in 1993; I think this might be a 1994 model as it has a slightly upgraded R4600 CPU in it.

As to the machine itself, I had a quick sticky-beak; there didn’t seem to be anything immediately obvious, but to be honest, I didn’t do a very thorough check. Maybe there’s some corrosion under the motherboard I didn’t spot, maybe it’s just a blown fuse in the PSU, who knows?

The inside of the Indy

This particular machine had 256MB RAM (a lot for its day), 8-bit Newport XL graphics, the “Indy Presenter” LCD interface (somewhere, we have the 15″ monitor it came with — sadly the connecting cable has some damaged conductors), and the HDD is a 9.1GB HDD I added some time back.

Where to now?

I was hanging on to these machines with the thinking that someone who was interested in experimenting with RISC machines might want them — find them a new home rather than sending them to landfill. I guess that’s still an option for the O2, as it still boots: so long as its remaining HDD doesn’t die it’ll be fine.

For the others, there’s the possibility of combining bits to make a functional frankenmachine from lots of parts. The Indy will need a new PROM battery if someone does manage to breathe life into it.

The Octane had two SCSI interfaces, one of which was dead — a problem that was known-of before I even acquired it. The PROM would bitch and moan about the dead SCSI interface for a good two minutes before giving up and dumping you in the boot menu. Press 1, and it’d hand over to arcload, which would boot a Linux kernel from a disk on the working controller. Linux would see the dead SCSI controller, and proceed to ignore it, booting just fine as if nothing had happened.

The Indigo2 R10000 was always the red-headed stepchild: an artefact of the machine’s design. The IP22 design (Indy and Indigo2) was never designed with the intent of being used with an R10000 CPU, and the speculative execution features played havoc with caching on this design. The Octane worked fine because it was designed from the outset to run this CPU. The O2 could be made to work because of a quirk of its hardware design, but the Indigo2 was not so flexible, so kernel-space code had to hack around the problem in software.

I guess I’d still like to see the machines go to a good home, but no idea who that’d be in this day and age. Somewhere, I have a partial disc set of Irix 6.5, and there’s also a 20″ SGI GDM5410 monitor (not the Sun monitor pictured above) that, at last check, did still work.

It’ll be a sad day when these go to the tip.

Sep 19 2021

I stumbled across this article regarding the use of TCP over sensor networks. Now, TCP has been done with AX.25 before, and generally suffers greatly from packet collisions. Apparently (I haven’t read more than the first few paragraphs of this article), implementations of TCP can be tuned to improve performance in such networks, which may mean TCP can be made more practical on packet radio networks.

Prior to seeing this, I had thought 6LoWHAM would “tunnel” TCP over a conventional AX.25 connection using I-frames and S-frames to carry TCP segments with some header prepended so that multiple TCP connections between two peers can share the same AX.25 connection.
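
As a rough sketch of what I had in mind there, the prepended header needs little more than a stream identifier and a length, so that the receiving end can hand each segment to the right TCP connection. The 4-byte layout below is entirely hypothetical:

import struct

MUX_HEADER = struct.Struct('>HH')  # (stream ID, payload length), big-endian

def wrap_segment(stream_id, tcp_segment):
    # Prefix a TCP segment so several TCP connections can share one
    # AX.25 connection.
    return MUX_HEADER.pack(stream_id, len(tcp_segment)) + tcp_segment

def unwrap_segment(payload):
    stream_id, length = MUX_HEADER.unpack_from(payload)
    return stream_id, payload[MUX_HEADER.size:MUX_HEADER.size + length]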

I’ve printed it out, and made a note of it here… when I get a moment I may give this a closer look. Ultimately I still think multicast communications is the way forward here: radio inherently favours one-to-many communications due to it being a shared medium, but there are definitely situations in which being able to do one-to-one communications applies; and for those, TCP isn’t a bad solution.

Comments having read the article

So, I had a read through it. The take-aways seem to be this:

  • TCP was historically seen as “too heavy” because the MCUs of the day (circa 2002) lacked the RAM needed for TCP data structures. More modern MCUs have orders of magnitude more RAM (32KiB vs 512B) today, and so this is less of an issue.
    • For 6LoWHAM, intended for single-board computers running Linux, this will not be an issue.
  • A lot of early experiments with TCP over sensor networks tried to set a conservative MSS based on the actual link MTU, leading to TCP headers dominating the lower-level frame. Leaning on 6LoWPAN’s ability to fragment IP datagrams led to much improved performance.
    • 6LoWHAM uses AX.25 which can support 256-byte frames; vs 128-byte 802.15.4 frames on 6LoWPAN. Maybe gains can be made this way, but we’re already a bit ahead on this.
  • Much of the document considered battery-powered nodes, in which the radio transceiver was powered down completely for periods of time to save power, and the effects this had on TCP communications. Optimisations were able to be made that reduced the impact of such power-down events.
    • 6LoWHAM will likely be using conventional VHF/UHF transceivers. Hand-helds often implement a “battery saver” mode — often this is configured inside the device with no external control possible (thus it will not be possible for us to control, or even detect, when the receiver is powered down). Mobile sets often do not implement this, and you do not want to frequently power-cycle a modern mobile transceiver at the sorts of rates that 802.15.4 radios get power-cycled!
  • Performance in ideal conditions favoured TCP, with the article’s authors managing to achieve 30% of the raw link bandwidth (75kbps of a theoretical 250kbps maximum); the underlying hardware was fingered as a possible cause of the performance issues.
    • Assuming we could manage the same percentage; that would equate to ~360bps on 1200-baud networks, or 2.88kbps on 9600-baud networks.
  • With up to 15% packet loss, TCP and CoAP (its nearest contender) can perform about the same in terms of reliability.
  • A significant factor in effective data rate is CSMA/CA. aioax25 effectively does CSMA/CA too.

It’s interesting to note they didn’t try to do anything special with the TCP headers (e.g. Van Jacobson compression). I’ll have to have a look at TCP and see just how much overhead there is in a typical segment, and whether the roughly double MTU of AX.25 will help or not: the article recommends using an MSS of approximately 3× the link MTU for “fair” conditions (so ~384 bytes), and 5× in “good” conditions (~640 bytes).

It’s worth noting a 256-byte AX.25 frame takes ~2 seconds to transmit on a 1200-baud link. You really don’t want to make that a habit! So smaller transmissions using UDP-based protocols may still be worthwhile in our application.
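
Putting some back-of-the-envelope numbers on both points, assuming uncompressed IPv6 + TCP headers (40 + 20 bytes; 6LoWPAN header compression would improve on this):

IPV6_HEADER = 40  # bytes
TCP_HEADER = 20   # bytes, assuming no TCP options

for mss in (256, 384, 640):
    overhead = IPV6_HEADER + TCP_HEADER
    print('MSS %3d B: %4.1f%% of each segment is headers'
          % (mss, 100.0 * overhead / (mss + overhead)))

# Raw air-time of a 256-byte payload at 1200 baud, ignoring AX.25
# flags, addressing, FCS and bit-stuffing (hence a little under the
# ~2 seconds quoted above):
print('%.2f s' % (256 * 8 / 1200.0))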

Sep 16 2021

So, one evening I was having difficulty sleeping, so, like some people count sheep, I turned to a different problem… 6LoWPAN relies on all nodes sharing a common “context”. This is used as a short-hand to “compress” the rather lengthy IPv6 addresses, allowing two nodes to communicate with one another by substituting particular IPv6 address subnets with a “context number” which can be represented in 4 bits.

Fundamentally, this identifier is a stand-in for the subnet address. This was a sticking-point with earlier thoughts on 6LoWHAM: how do we agree on what the context should be? My thought was, each network should be assigned a 3-bit network ID. Why 3-bit? Well, this means we can reserve some context IDs for other uses. We use SCI/DCI values 0-7 and leave 8-15 reserved; I’ll think of a use for the other half of the contexts.
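
For reference, 6LoWPAN (RFC-6282) carries the two context identifiers as 4-bit fields sharing a single extension octet, which is where the 0-15 range comes from. A trivial sketch of packing and unpacking that octet:

def pack_cid(sci, dci):
    # SCI goes in the high nibble, DCI in the low nibble (RFC-6282).
    assert 0 <= sci <= 15 and 0 <= dci <= 15
    return (sci << 4) | dci

def unpack_cid(octet):
    return (octet >> 4) & 0x0F, octet & 0x0F

Under the scheme above, network IDs 0-7 would map straight onto SCI/DCI values 0-7.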

The nodes in a “group” also share an SSID: the “group” SSID. This is an SSID that receives all multicast traffic for the nodes on the immediate network. This might be just a generic MCAST-n SSID, where n is the network ID; or it could be a call-sign for a local network coordinator, e.g. I might decide my network will use VK4MSL-0 for my group SSID (network 0). Nodes that are listening on a custom SSID should probably still listen for MCAST-n traffic, in case a node is attempting to join without knowing the group SSID.

AX.25 allows for 16 SSIDs per call-sign, so what about the other 8? Well, if we have a convention that we reserve SSIDs 0-7 for groups, that leaves 8-15 for stations. This can be adjusted for local requirements where needed, and would not be enforced by the protocol.

Joining a network

How does a new joining node “discover” this network? Firstly, the first node in an area is responsible for “forming” the network — a node which “forms” a network must be manually programmed with the local subnet, group SSID and other details. Ensuring all nodes with “formation” capability for a given network are configured consistently is beyond the scope of 6LoWHAM.

When a node joins, at first it only knows how to talk to immediate nodes. It can use MCAST-n to talk to immediate neighbours using the fe80::/64 subnet. Anyone in earshot can potentially reply. Nodes simply need to be listening for traffic on a reserved UDP port (maybe 61631; there’s an optimisation in 6LoWPAN for ports 61616-61631). The joining node can ask for the network context, and maybe authenticate itself if needed (using asymmetric cryptography – digital signatures, no encryption).

The other nodes presumably already know the answer, but for all nodes to reply simultaneously would lead to a pile-up. Nodes should wait a randomised delay, and if nothing is heard in that period, they then transmit what they know of the context for the given network ID.

The context information sent back should include:

  • Group SSID
  • Subnet prefix
  • (Optional) Authentication data:
    • Public key of the forming network (joining node will need to maintain its own “trust” database)
    • Hash of all earlier data items
    • Digital signature signed with included public key

Once a node knows the context for its chosen network, it is officially “joined”.
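
A minimal sketch of what that context record and the randomised reply might look like (the field names, delay range and callbacks are placeholders of my own, and the hash field is left out for brevity):

import random
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetworkContext:
    network_id: int     # 3-bit network ID, 0-7
    group_ssid: str     # e.g. 'MCAST-0' or 'VK4MSL-0'
    subnet_prefix: str  # e.g. '2001:db8:1234::/64'
    public_key: Optional[bytes] = None  # forming node's public key
    signature: Optional[bytes] = None   # signature over the fields above

def answer_context_request(context, reply_heard, transmit, max_delay=5.0):
    # Wait a randomised delay; if nobody else has answered the joining
    # node in the meantime, transmit what we know of the context.
    time.sleep(random.uniform(0, max_delay))
    if not reply_heard():
        transmit(context)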

Routing to non-local endpoints

So, a node may wish to send a message to another node that’s not directly reachable. This is, after all, the whole point of using a routing protocol atop AX.25. If we knew a route, we could encode it in the digipeater path, and use conventional AX.25 source routing. Nodes that know a reliable route are encouraged to do exactly that. But what if you don’t know your way around?

APRS uses WIDEn-N to solve this problem: it’s a dumb broadcast, but it achieves this aim beautifully. The N just stands for the number of hops remaining, and it gets decremented with each hop. Each digipeater inserts itself into the path as it sends the frame on. APRS specs normally call for everyone to broadcast all at once, pile-up be damned. FM capture effect might help here, but I’m not sure it’s a good policy. Simple, but in our case, we can do a little better.

We only need to broadcast far enough to reach a node that knows a route. We’ll use ROUTE-n to stand for a digipeater that is no more than n hops away from the station listed in the AX.25 destination field. n must be greater than 0 for a message to be relayed. AX.25 2.0 limits the number of digipeaters to 8 (and 2.2 to 2!), so naturally n cannot be greater than 8.

So we’ll have a two-tier approach.

Routing from a node that knows a viable route

If a node that receives a ROUTE-n destination message knows it has a good route that is n or fewer hops away from the target, it picks a randomised delay (maybe in the 0-5 second range), and if no reply is heard from another node, it relays the message: the ROUTE-n is replaced by its own SSID, followed by the required digipeater path to reach the target node.

Routing from a node that does not know a viable route

In the case where a node receives this same ROUTE-n destination message, does not know a route, and hasn’t heard anyone else relay that same message, it should pick a randomised delay (in the 5-10 second range), and if it hasn’t heard the message relayed via a specific path in that time, should do one of the following:

If n is greater than 1:

Substitute ROUTE-n in the digipeater path with its own SSID followed by ROUTE-(n-1) then transmit the message.

If n is 1 (or 0):

Substitute ROUTE-n with its own SSID (do not append ROUTE-0) then transmit the message.
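
Pulling the two cases together, the relay logic might look something like this sketch (frame, find_route, heard_relay and transmit are placeholders for whatever the real stack provides):

import random
import re
import time

def route_hops(alias):
    # Parse 'ROUTE-n' into n; returns None for any other SSID.
    m = re.match(r'ROUTE-(\d+)$', alias)
    return int(m.group(1)) if m else None

def maybe_relay(frame, my_ssid, find_route, heard_relay, transmit):
    n = route_hops(frame.path[-1])
    if n is None or n < 1:
        return                      # not a ROUTE-n frame, or hops exhausted
    route = find_route(frame.dest)  # list of digipeater SSIDs, or None
    if route is not None and len(route) <= n:
        # We know a viable route: short back-off, then substitute
        # ROUTE-n with ourselves plus the explicit path to the target.
        time.sleep(random.uniform(0, 5))
        if not heard_relay(frame):
            frame.path[-1:] = [my_ssid] + route
            transmit(frame)
    else:
        # No route known: longer back-off, then flood onward with the
        # hop count decremented (dropping ROUTE-0 entirely).
        time.sleep(random.uniform(5, 10))
        if not heard_relay(frame):
            tail = ['ROUTE-%d' % (n - 1)] if n > 1 else []
            frame.path[-1:] = [my_ssid] + tail
            transmit(frame)

Here heard_relay does the same duplicate-suppression job as in the join procedure: if someone else relays the message first, we stay quiet.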

Routing multicast traffic

Discovering multicast listeners

I’ll have to research MLD (RFC-3810 / RFC-4604), but that seems the sensible way forward from here.

Relaying multicast traffic

If a node knows of downstream nodes that ordinarily rely on it to contact the sender of a multicast message, and it knows the downstream nodes are subscribers to the destination multicast group, it should wait a randomised period, and forward the message on (appending its SSID in the digipeater path) to the downstream nodes.

Application thoughts

I think I have already written some thoughts on what the applications for this system may be, but the other day I was looking around for “prior art” regarding one-to-many file transfer applications.

One such system that could be employed is UFTP. Yes, it mentions encryption, but that is an optional feature (and could be useful in emcomm situations). That would enable SSTV-style file sharing to all participants within the mesh network. Its ability to be proxied also lends itself to bridging to other networks like AMPRnet, D-Star packet, DMR and other systems.

Jun 20 2021

So, today on the radio I heard that from this Friday, our state government was “expanding” the use of their Check-in Queensland program. Now, since my last post on the topic, I have since procured a new tablet. The tablet was purchased for completely unrelated reasons, namely:

  1. to provide navigation assistance, current speed monitoring and positional logging whilst on the bicycle (basically, what my Garmin Rino-650 does)
  2. to act as a media player (basically what my little AGPTek R2 is doing — a device I’ve now outgrown)
  3. to provide a front-end for a SDR receiver I’m working on
  4. run Slack for monitoring operations at work

Since it’s a modern Android device, it happens to be able to run the COVID-19 check-in programs. So I have COVIDSafe and Check-in Queensland installed. For those to work though, I have to run my existing phone’s WiFi hotspot. A little cumbersome, but it works, and I get the best of both worlds: modern Android + my phone’s excellent cell tower reception capability.

The snag though comes when these programs need to access the Internet at times when using my phone is illegal. Queensland laws around mobile phone use changed a while back, long before COVID-19. The upshot was that, while people who hold “open” driver’s licenses may “use” a mobile phone (provided that they do not need to handle it to do so), anybody else may not “use” a phone for “any purpose”. So…

  • using it for talking to people? Banned. Even using “hands-free”? Yep, still banned.
  • using it for GPS navigation? Banned.
  • using it for playing music? Banned.

It’s a $1000 fine if you’re caught. I’m glad I don’t use a wheelchair: such mobility aids are classed as a “vehicle” under the Queensland traffic act, and you can be fined for “drink driving” if you operate one whilst drunk. So traffic laws that apply to “motor vehicles” also apply to non-“motor vehicles”.

I don’t have a driver’s license of any kind, and have no interest in getting one; my primary mode of private transport is by bicycle. I can’t see how I’d be granted permission to do something that someone on a learner’s permit or P1 provisional license is forbidden from doing. The fact that I’m not operating a “motor vehicle” does not save me; the drink-driving-in-a-wheelchair example above tells me that I too would be fined for riding my bicycle whilst drunk. Likely, the mobile phone rules apply to me too. Given this, I made the decision to not “use” a mobile phone on the bicycle “for any purpose”, with “any purpose” being anything that requires the device to be powered on.

If I’m going to be spending a few hours at the destination, and in a situation that may permit me to use the phone, I might carry it in the top-box turned off (not certain if this is permitted, but kinda hard to police), but if it’s a quick trip to the shops, I leave the mobile phone at home.

What’s this got to do with the Check-in Queensland application or my new shiny-shiny you ask? Glad you did.

The new tablet is a WiFi-only device… specifically because of the above restrictions on using a “mobile phone”. The day those restrictions get expanded to include the tablet, you can bet the tablet will be ditched when travelling as well. Thus, it receives its Internet connection via a WiFi access point. At home, that’s one of two Cisco APs that provide my home Internet service. No issue there.

If I’m travelling on foot, or as a passenger on someone else’s vehicle, I use the WiFi hot-spot function on my phone to provide this Internet service… but this obviously won’t work if I just ducked up the road on my bike to go get some grocery shopping done, as I leave the phone at home for legal reasons.

Now, the Check-in Queensland application does not work without an Internet connection, and bringing my own in this situation is legally problematic.

I can also think of situations where an Internet connection is likely to be problematic.

  • If your phone doesn’t have a reliable cell tower link, it won’t reliably connect to the Internet, Check-in Queensland will fail.
  • If your phone is on a pre-paid service and you run out of credit, your carrier will deny you an Internet service, Check-in Queensland will fail.
  • If your carrier has a nation-wide whoopsie (Telstra had one a couple of years back, Optus and Vodafone have had them too), you can find yourself with a very pretty but very useless brick in your hand. Check-in Queensland will fail.

What can be done about this?

  1. The venues could provide a WiFi service so people can log in to that, and be provided with limited Internet access to allow the check-in program to work whilst at the venue. I do not see this happening for most places.
  2. The Check-in Queensland application could simply record the QR code it saw, date/time and co-visitors, and store it on the device to be uploaded later when the device has a reliable Internet link.
  3. For those who have older phones (and can legally carry them), the requirement of an “application” seems completely unnecessary:
    1. Most devices made post-2010 can run a web browser capable of running an in-browser QR code scanner, and storage of the customer’s details can be achieved either through window.localStorage or through RFC-6265 HTTP cookies. In the latter case, you’d store the details server-side, and generate an “opaque” token which would be stored on the device as a cookie (see the sketch just after this list). A dedicated program is not required to do the function that Check-in Queensland is performing.
    2. For older devices, pretty much anything that can access the 3G network can send and receive SMS messages. (Indeed, most 2G devices can… the only exception I know to this would be the Motorola MicroTAC 5200 which could receive but not send SMSes. The lack of a 2G network will stop you though.) Telephone carriers are required to capture and verify contact details when provisioning pre-paid and post-paid cellular services, so already have a record of “who” has been assigned which telephone number. So why not get people to text the 6-digit code that Check-In Queensland uses, to a dedicated telephone number? If there’s an outbreak, they simply contact the carrier (or the spooks in Canberra) to get the contact details.
  4. The Check-in Queensland application has a “business profile” which can be used for manual entry of a visitor’s details… hypothetically, why not turn this around? Scan a QR code that the visitor carries and provides. Such QR codes could be generated by the Check-in Queensland website, printed out on paper, then cut out to make a business-card sized code which visitors can simply carry in their wallets and present as needed. No mobile phone required! For the record, the Electoral Commission of Queensland has been doing this for our state and council elections for years.
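
To illustrate the cookie-based variant mentioned in point 3 above: a bare-bones sketch, using Flask purely for brevity. The contact details live server-side; the device only ever holds an opaque token:

import secrets
from flask import Flask, request, make_response

app = Flask(__name__)
VISITORS = {}  # token -> contact details; a real system would use a database

def record_visit(venue_code, token):
    # A real deployment would log this to the contact-tracing database.
    print('check-in:', venue_code, token)

@app.route('/checkin/<venue_code>', methods=['GET', 'POST'])
def checkin(venue_code):
    token = request.cookies.get('token')
    if request.method == 'POST':
        # First visit: capture contact details once, hand back a token.
        token = secrets.token_urlsafe(16)
        VISITORS[token] = {'name': request.form['name'],
                           'phone': request.form['phone']}
    if token not in VISITORS:
        return 'Please register your contact details first', 400
    record_visit(venue_code, token)
    resp = make_response('Checked in. Thank you!')
    resp.set_cookie('token', token, max_age=365 * 24 * 3600)
    return resp

The token on its own reveals nothing; only the server can map it back to a person, which is roughly the property the old paper forms had.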

It seems the Queensland Government is doing this fancy “app” thing “because we can”. Whilst I respect the need to effectively contact-trace, the truth is there’s no technical reason why “this” must be the implementation. We just seem to be playing a game of “follow the shepherd”. They keep trying to advertise how “smart” we are, why not prove it?

Apr 11 2021

So, for the past 12 months we’ve basically had a whirlwind of different “solutions” to the problem of contact tracing. The common theme amongst them seems to be they’re all technical-based, and they all assume people carry a smartphone, registered with one of the two major app stores, and made in the last few years.

Quite simply, if you’re carrying an old 3G brick from 2010, you don’t exist to these “apps”. Our own federal government tried its hand in this space by taking OpenTrace (developed by the Singapore Government and released as GPLv3 open-source) and rebadging that (and re-licensing it!) as COVIDSafe.

This had very mild success, to say the least, with contact tracers telling us that this fancy “app” wasn’t telling them anything new. So instead, much focus has been put on signing into and out of venues.

To be honest, I’m fine with this until such time as we get this gift from China under control. The concept is not what irks me, it’s its implementation.

At first, it was done on paper. Good old fashioned pen and paper. Simple, nearly foolproof, didn’t crash, didn’t need credit, didn’t need recharging, didn’t need network coverage… except for two problems:

  1. people who can’t successfully operate a pen (Hmm, what went wrong, Education Queensland?)
  2. people who can’t take the process seriously (and an app solves this how?)

So they demanded that all venues use an electronic system. Fine, so we had a myriad of different electronic web-based systems, a little messy, but it worked, and for the most part, the venue’s system didn’t care what your phone was.

A couple could even take check-in by SMS. Still rocking a Nokia 3210 from 1998? Assuming you’ve found a 2G cell tower in range, you can still check in. Anything that can do at least 3G will be fine.

An advantage of this solution is that they have your correct mobile phone number, and it’s then a simple matter for Queensland Health to talk to Telstra/Optus/Vodafone/whoever to get your name and address from that… as a bonus, the cell sites may even have logs of your device’s IMEI roaming, so there’s more for the contact tracing kitty.

I only struck one venue out of dozens, whose system would not talk to my phone. Basically some JavaScript library didn’t load, and so it fell in a heap.

Until yesterday.

The Queensland Government has decided to foist its latest effort on everybody, the “Check-in Queensland” app. It is available on Google Play Store and Apple App Store, and their QR codes are useless without it. I can’t speak about the Apple version of the software, but for the Android one, it requires Android 5.0 or above.

Got an old reliable clunker that you keep using because it pulls the weakest signals and has a stand-by time that can be measured in days? Too bad. For me, my Android 4.1 device is not welcome. There are people out there for whom, even that, is a modern device.

Why not buy a newer phone? Well, when I bought this particular phone, back in 2015… I was looking for 3 key features:

  1. Make and receive (voice) telephone calls
  2. Send and receive short text messages
  3. Provide an Internet link for my laptop via USB/WiFi

Anything else is a bonus. It has a passable camera. It can (and does) play music. There’s a functional web browser (Firefox). There’s a selection of software I can download (via F-Droid). It Does What I Need It To Do. The battery still lasts 2-3 days between charges on stand-by. I’ve seen it outperform nearly every contemporary device on the market in areas with weak mobile coverage, and I can connect an external antenna to boost that if needed.

About the only thing I could wish for is open-source firmware and a replaceable battery. (Well, it sort-of is replaceable. Just a lot of frigging around to get at it. I managed to replace a GPS battery, so this should be doable.)

So, given this new check-in requirement, what is someone like me to do? Whilst the Queensland Government is urging people to install their application, they recognise that there are those of us who cannot because we lack anything that will run it. So they ask that venues have a device on hand that can be used to check visitors in if this situation arises.

My little “hack” simply exploits this:

# This file is part of pylabels, a Python library to create PDFs for printing
# labels.
# Copyright (C) 2012, 2013, 2014 Blair Bonnett
#
# pylabels is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# pylabels is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# pylabels.  If not, see <http://www.gnu.org/licenses/>.

import argparse
import labels
import time
from reportlab.lib.units import mm
from reportlab.graphics import shapes
from reportlab.lib import colors
from reportlab.graphics.barcode import qr

rows = 4
cols = 2
# Specifications for Avery C32028 2×4 85×54mm
specs = labels.Specification(210, 297, cols, rows, 85, 54, corner_radius=0,
        left_margin=17, right_margin=17, top_margin=31, bottom_margin=32)

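# Draw one business-card-sized check-in card: heading, an explanatory note,
# pre-printed contact details, hand-written date/time boxes and a QR code.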
def draw_label(label, width, height, checkin_id):
    label.add(shapes.String(
        42.5*mm, 50*mm,
        'COVID-19 Check-in Card',
        fontName="Helvetica", fontSize=12, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 46*mm,
        'The Queensland Government has chosen to make the',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 43*mm,
        'CheckIn QLD application incompatible with my device.',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 40*mm,
        'Please enter my contact details into your system',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 37*mm,
        'at your convenience.',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))

    label.add(shapes.String(
        5*mm, 32*mm,
        'Name: Joe Citizen',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        5*mm, 28*mm,
        'Phone: 0432 109 876',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        5*mm, 24*mm,
        'Email address:',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        84*mm, 20*mm,
        'myaddress+c%o@example.com' % checkin_id,
        fontName="Courier", fontSize=12, textAnchor='end'
    ))
    label.add(shapes.String(
        5*mm, 16*mm,
        'Home address:',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        15*mm, 12*mm,
        '12 SomeDusty Rd',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        15*mm, 8*mm,
        'BORING SUBURB, QLD, 4321',
        fontName="Helvetica", fontSize=12
    ))

    label.add(shapes.String(
        2, 2, 'Date: ',
        fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        10*mm, 2, 12*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        22.5*mm, 2, '-', fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        24*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        30.5*mm, 2, '-', fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        32*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        40*mm, 2, 'Time: ',
        fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        50*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        56.5*mm, 2, ':', fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        58*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))

    label.add(shapes.String(
        10*mm, 5*mm, 'Year',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        24*mm, 5*mm, 'Month',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        32*mm, 5*mm, 'Day',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        50*mm, 5*mm, 'Hour',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        58*mm, 5*mm, 'Minute',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))

    label.add(qr.QrCodeWidget(
            '%o' % checkin_id,
            barHeight=12*mm, barWidth=12*mm, barBorder=1,
            x=73*mm, y=0
    ))

# Grab the arguments
OCTAL_T = lambda x : int(x, 8)
parser = argparse.ArgumentParser()
parser.add_argument(
        '--base', type=OCTAL_T,
        default=(int(time.time() / 86400.0) << 8)
)
parser.add_argument('--offset', type=OCTAL_T, default=0)
parser.add_argument('pages', type=int, nargs='?', default=1)  # optional; defaults to 1 page
args = parser.parse_args()

# Figure out cards per sheet (max of 256 cards per day)
cards = min(rows * cols * args.pages, 256)

# Figure out check-in IDs
start_id = args.base + args.offset
end_id = start_id + cards
print ('Generating cards from %o to %o' % (start_id, end_id))

# Create the sheet.
sheet = labels.Sheet(specs, draw_label, border=True)

sheet.add_labels(range(start_id, end_id))

# Save the file and we are done.
sheet.save('checkin-cards.pdf')
print("{0:d} cards(s) output on {1:d} page(s).".format(sheet.label_count, sheet.page_count))

That script (which may look familiar) generates up to 256 check-in cards. The check-in cards are business-card sized and look like this:

That card has:

  1. the person’s full name
  2. a contact telephone number
  3. an email address with a unique sub-address component for verification purposes (compatible with services that use + for sub-addressing like Gmail)
  4. home address
  5. date and time of check-in (using ISO-8601 date format)
  6. a QR code containing a “check-in number” (which also appears in the email sub-address)

Each card has a unique check-in number (seen above in the email address and as the content of the QR code), which is derived from the number of days since 1st January 1970 and an 8-bit sequence number; so we can generate up to 256 cards a day. The number is just meant to be unique to the person generating them; two people using this script can, and likely will, generate cards with the same check-in ID.
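
If you’re curious, unpacking an ID is straightforward. Here’s a small illustrative helper (my own, not part of the scripts above) that recovers the generation date and sequence number from the octal check-in number printed on a card:

import datetime

def decode_checkin_id(octal_id):
    # The printed ID is octal: the upper bits count days since 1970-01-01
    # (matching the --base default), the low 8 bits are the card's
    # sequence number for that day.
    value = int(octal_id, 8)
    days = value >> 8
    sequence = value & 0xFF
    return datetime.date(1970, 1, 1) + datetime.timedelta(days=days), sequence

# A card generated on 1st January 2021 (day 18628) with sequence number 5:
print(decode_checkin_id('22142005'))   # (datetime.date(2021, 1, 1), 5)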

I actually added the QR code after I printed off a batch (I thought of the idea too late); maybe the next batch will have it. The QR code can be used with a phone app of your choosing (e.g. use BarcodeScanner to copy the check-in number to the clipboard, then paste it into a spreadsheet, or make your own tool) to add other data. In my case, I’ll use a paper system:

The script that generates those is here:

# This file is part of pylabels, a Python library to create PDFs for printing
# labels.
# Copyright (C) 2012, 2013, 2014 Blair Bonnett
#
# pylabels is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# pylabels is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# pylabels.  If not, see <http://www.gnu.org/licenses/>.

import argparse
import labels
import time
from reportlab.lib.units import mm
from reportlab.graphics import shapes
from reportlab.lib import colors

rows = 4
cols = 2
# Specifications for Avery C32028 2×4 85×54mm
specs = labels.Specification(210, 297, cols, rows, 85, 54, corner_radius=0,
        left_margin=17, right_margin=17, top_margin=31, bottom_margin=32)

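# Draw one log card: a heading plus a ruled grid for recording check-in
# number, place, and in/out times.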
def draw_label(label, width, height, checkin_id):
    label.add(shapes.String(
        42.5*mm, 50*mm,
        'COVID-19 Check-in Log',
        fontName="Helvetica", fontSize=12, textAnchor='middle'
    ))

    label.add(shapes.Rect(
        1*mm, 3*mm, 20*mm, 45*mm,
        fillColor=colors.lightgrey,
        strokeColor=None
    ))
    label.add(shapes.Rect(
        41*mm, 3*mm, 28*mm, 45*mm,
        fillColor=colors.lightgrey,
        strokeColor=None
    ))

    for row in range(3, 49, 5):
        label.add(shapes.Line(1*mm, row*mm, 84*mm, row*mm, strokeWidth=0.5))
    for col in (1, 21, 41, 69, 84):
        label.add(shapes.Line(col*mm, 48*mm, col*mm, 3*mm, strokeWidth=0.5))

    label.add(shapes.String(
        2*mm, 44*mm,
        'In',
        fontName="Helvetica", fontSize=8
    ))

    label.add(shapes.String(
        22*mm, 44*mm,
        'Check-In #',
        fontName="Helvetica", fontSize=8
    ))

    label.add(shapes.String(
        42*mm, 44*mm,
        'Place',
        fontName="Helvetica", fontSize=8
    ))

    label.add(shapes.String(
        83*mm, 44*mm,
        'Out',
        fontName="Helvetica", fontSize=8, textAnchor='end'
    ))

# Grab the arguments
parser = argparse.ArgumentParser()
parser.add_argument('pages', type=int, nargs='?', default=1)  # optional; defaults to 1 page
args = parser.parse_args()

cards = rows * cols * args.pages

# Create the sheet.
sheet = labels.Sheet(specs, draw_label, border=True)

sheet.add_labels(range(cards))

# Save the file and we are done.
sheet.save('checkin-log-cards.pdf')
print("{0:d} cards(s) output on {1:d} page(s).".format(sheet.label_count, sheet.page_count))

When I see one of these Check-in Queensland QR codes, I simply pull out the log card, a blank check-in card, and a pen. I write the check-in number from the blank card (visible in the email address) into my log along with the date/time and place, write the same date/time on the blank card, and hand that card to the person collecting the details.

They can enter that into their device at their leisure, and it saves time not having to spell it all out. As for me, I just have to remember to write the exit time. If Queensland Health come a-ringing, I have a record of where I’ve been on hand… or if I receive an email, I can use the check-in number to validate that it is legitimate, or even to tell whether a venue has on-sold my personal details to an advertiser.

I guess it’d be nice if the Queensland Government could at least add a form to the fancy pages their flashy QR codes send people to, so that those who do not have the application can still check in without it, but that’d be too much to ask.

In the meantime, this at least meets them half-way, and hopefully does so in a way that ensures minimal contact and increases efficiency.

Dec 31 2020
 

So, these last two years, I’ve been trying to keep multiple projects on the go, and then others come along and pile their own projects on top. It kinda makes a mess of one’s free time, including for things like keeping on top of where things have been put.

COVID-19 has not helped here, as it’s meant I’ve lugged home a lot of gear that belongs to my workplace, or belongs at my workplace, to use while working from home. This all needs tracking to ensure nothing is lost.

Years ago, I threw together a crude parts catalogue system. It was built on Django, django-mptt and PostgreSQL, and basically abused the admin part of Django to manage electronic parts storage.

I later re-purposed some of its code for an estate database for my late grandmother: I just wrote a front-end so that members of the family could be given login accounts, and “claim” certain items of the estate. In that sense, the concept was extremely powerful.

The overarching principle of how both these systems worked is that you had “items” stored within “locations”. Locations were in a tree structure (hence django-mptt) where a location could contain further locations… e.g. a root-level location might be a bedroom, within that might be a couple of wardrobes and drawers, and there might be containers within those.

You could nest locations as deeply as you liked. In my parts database I didn’t consider rooms, but I’d have labelled boxes like “IC Parts 1” and “IC Parts 2”; these were Plano StowAway 932 boxes… which work okay, although I’ve since discovered you shouldn’t leave the inner boxes exposed to UV light: the plastic becomes brittle and falls apart.

The inner boxes themselves were labelled by their position within the outer box (row, column), and each “bin” inside the inner box was labelled by row and column.

IC tubes themselves were also labelled, so if I had several sitting in a box, I could identify them and their location. Some were small enough to fit inside these boxes, others were stored in large storage tubs (I have two).

If I wanted to know where I had put some LM311 comparators, I might look up the database and it’d tell me that there were 3 of them in IC Box 1/Row 2/Row 3/Column 5. If luck was on my side, I’d go to that box, pull out the inner box, open it up and find what I was looking for plugged into some anti-static foam or stashed in a small IC tube.

The parts themselves were fairly basic, just a description, a link to a data sheet, and some other particulars. I’d then have a separate table that recorded how many of each part was present, and in which location.
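
For the curious, a minimal sketch of that schema in Django terms might look like the following (model and field names are illustrative, not my original code), with django-mptt providing the location tree:

from django.db import models
from mptt.models import MPTTModel, TreeForeignKey

class Location(MPTTModel):
    # A place that can contain parts and/or further locations.
    name = models.CharField(max_length=100)
    parent = TreeForeignKey('self', null=True, blank=True,
                            related_name='children',
                            on_delete=models.CASCADE)

class Part(models.Model):
    # A kind of part: a description, a link to a data sheet, etc.
    description = models.CharField(max_length=200)
    datasheet_url = models.URLField(blank=True)

class Stock(models.Model):
    # How many of a given part sit in a given location.
    part = models.ForeignKey(Part, on_delete=models.CASCADE)
    location = TreeForeignKey(Location, on_delete=models.CASCADE)
    quantity = models.PositiveIntegerField(default=0)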

So from the locations perspective, it did everything I wanted, but parametric search was out of the question.

The place here looks like a tip now, so I really do need to get on top of what I have, so much so I’m telling people no more projects until I get on top of what I have now.

Other solutions exist. OpenERP had a warehouse inventory module, and I suspect Odoo continues this, but it’s a bit of a beast to try and figure out and it seems customisation has been significantly curtailed from the OpenERP days.

PartKeepr (if you can tolerate deliberate bad spelling) is another option. It seems to have very good parametric search of parts, but one downside is that it has a flat view of locations. There’s a proposal to enhance this, but it’s been languishing for 4 years now.

VRT used to have a semi-active track-and-trace business built on a tracking software package called P-Trak. P-Trak had some nice ideas (including a surprisingly modern message-passing back-end, even if it was a proprietary one), but is overkill for my needs, and it’s a pain to try and deploy, even if I was licensed to do so.

That doesn’t mean though I can’t borrow some ideas from it. It integrated barcode scanners as part of the user interface, something these open-source part inventory packages seem to overlook. I don’t have a dedicated barcode scanner, but I do have a phone with a camera, and a webcam on my netbook. Libraries exist to do this from a web browser, such as this one for QR codes.

My big problem right now is the need to do a stock-take to see what I’ve still got, and what I’ve added since then, along with where it has gone. I’ve got a lot of “random boxes” now which are unlabelled, with random items thrown in due to lack of time. It’s likely those items won’t remain there either. I need some frictionless way to record where things are getting put. It doesn’t matter exactly where something gets put, so long as I record that information for later use. If something is going to move to a new location, I want to be able to record that with as little fuss as possible.

So the thinking is this:

  • Print labels for all my storage locations with UUIDs stored as barcodes
  • Enter those storage locations into a database using the UUIDs allocated
  • Expand (or re-write) my parts catalogue database to handle these UUIDs:
    • adding new locations (e.g. when a consignment comes in)
    • recording movement of containers between parent locations
    • sub-dividing locations (e.g. recording the content of a consignment)
    • (partial and complete) merging locations (e.g. picking parts from stock into a project-specific container)

The first step on this journey is to catalogue the storage containers I have now. Some are already entered into the old system, so I’ve grabbed a snapshot of that and can pick through it. Others are new boxes that have arrived since, and had additional things thrown in.

I looked at ways I could label the boxes. Previously that was a spirit pen hand-writing a label, but this does not scale. If I’m to do things efficiently, then a barcode seems the logical way to go since it uses what I already have.

Something new comes in? Put a barcode on the box, scan it, enter it into the system as a new location, then mark where that box is being stored by scanning the location barcode where I’ll put the box. Later, I’ll grab the box, open it up, and I might repeat the process with any IC tubes or packets of parts inside, marking them as being present inside that box.

Need something? Look up where it is, then “check it out” into my work area… now, ideally when I’m finished, it should go back there, but if I’m in a hurry, I just throw it in a box, somewhere, then record that I put it there. Next time I need it, I can look up where it is. Logical order isn’t needed up front, and can come later.

So, step 1 is to label all the locations. Since I’m doing this before the database is fully worked out and I want to avoid ID clashes, I’m using UUIDs to label the locations. Initially I thought of QR codes, but then realised some of the “locations” are DIP IC storage tubes, which do not permit large square labels. I did some experiments with Code-128, but found it was near impossible to reliably encode a UUID that way; my phone had difficulty recognising the entire barcode.

I returned to the idea of QR codes, and found that my phone will scan a 10mm×10mm QR code printed on a page. That’s about the right height for the side of an IC tube. We had some inkjet labels kicking around, small 38.1×21.2mm labels arranged in a 5×13 grid (Avery J8651/L7651 layout). Could I make a script that generated a page full of QR codes?

Turns out, pylabels will do this. It is built on reportlab, which, amongst other things, embeds a barcode generator supporting various symbologies, including QR codes. @hugohadfield had contributed a pull request which demonstrated using this tool with QR codes. I just had to tweak this for my needs.

# This file is part of pylabels, a Python library to create PDFs for printing
# labels.
# Copyright (C) 2012, 2013, 2014 Blair Bonnett
#
# pylabels is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# pylabels is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# pylabels.  If not, see <http://www.gnu.org/licenses/>.

import uuid

import labels
from reportlab.graphics.barcode import qr
from reportlab.lib.units import mm

# Create an A4 portrait (210mm x 297mm) sheet with 5 columns and 13 rows of
# labels. Each label is 38.1mm x 21.2mm with a 2mm rounded corner. The margins
# are automatically calculated.
specs = labels.Specification(210, 297, 5, 13, 38.1, 21.2, corner_radius=2,
        left_margin=6.7, right_margin=3, top_margin=10.7, bottom_margin=10.7)

def draw_label(label, width, height, obj):
    size = 12 * mm
    label.add(qr.QrCodeWidget(
            str(uuid.uuid4()),
            barHeight=height, barWidth=size, barBorder=2))

# Create the sheet.
sheet = labels.Sheet(specs, draw_label, border=True)

sheet.add_labels(range(1, 66))

# Save the file and we are done.
sheet.save('basic.pdf')
print("{0:d} label(s) output on {1:d} page(s).".format(sheet.label_count, sheet.page_count))

The alignment is slightly off, but not severely. I’ll fine-tune it later. I’m already through about 30 of those labels; it’s enough to get me started.

For the larger J8165 2×4 sheets, the following specs work. (I can see this being a database table!)

# Specifications for Avery J8165 2×4 99.1×67.7mm
specs = labels.Specification(210, 297, 2, 4, 99.1, 67.7, corner_radius=3,
        left_margin=5.5, right_margin=4.5, top_margin=13.5, bottom_margin=12.5)

Later, when I get the database ready (standing up a new VM to host it and writing the code), I can enter this information and get back on top of my inventory once again.

Dec 28 2020
 

So, a while back I tore apart an old Logitech wireless headset with the intention of using its bits to make a wireless USB audio interface. I was undecided whether the headset circuitry would “live” in a new headset, or whether it’d be a separate unit to which I could attach any headset.

I ended up doing the latter. I found through Mouser a suitable enclosure for the original circuitry and have fitted it with cable glands and sockets for the charger input (which now sports a standard barrel jack) and a DIN-5 connector for the earpiece/microphone connections.

The first thing to do was to get rid of that proprietary power connector. The two outer contacts are the +6V and 0V pins, shown here with orange and orange/white coloured cable respectively. I used a blob of hot-melt glue to secure the joint so I didn’t rip pads off.

Replacing the power connector. +6V is orange, 0V is orange/white.

The socket is “illuminated” by an LED on the PCB. Maybe I’ll look at some sort of light-pipe arrangement to bring that outside; we’ll see.

The other end just got wired to a plain barrel jack. A future improvement might be to put in a 6V DC-DC converter, allowing me to plug in any old 12V source, but for now, this’ll do. I just have to remember to watch which lead I grab. Whilst I was there, I also put in a cable gland for the audio interface connection.

Power socket and audio connections mounted in case.

One challenge with the board design is that there is not one antenna, but two, plus some rather lumpy tantalum capacitors near the second antenna. I suspect the two antennas are for handling polarisation, which will shift as you move your head and as the signal propagates. Either way, they meant the PCB wouldn’t sit “flat”. No problem, I had some old cardboard boxes which provided the solution:

PCB spacer, with cut-out for high-clearance parts.

The cardboard is a good option since it’s readily available and won’t attenuate the 2.4GHz signal much. It was also easy to work with.

I haven’t exposed the three push-buttons on that side of the PCB at this stage. I suppose drilling a hole and making a small “poker” to hit the buttons isn’t out of the question. This isn’t much different to what Logitech’s original case did. I’ll tackle that later. I need a similar solution for the slide-switch used for power.

One issue I faced was wrangling the now over-length FFC that linked the two sides. Previously, this spanned the headband, but now it only needed to reach a few centimetres at most. Eyeballing the original cable, I found this short replacement. I’ll have to figure out how to mount that floating PCB somehow, but at least it’s a clean solution.

Replacement FFC.

At this point, it was a case of finishing wiring everything up. I haven’t tried any audio as yet; that will come in time. It still powers up and sees the transceiver, so there’s still “life” in this.

Powering up post-surgery.

I plugged it into its charger and let it run for a while, just to top up the LiPo cell.

Charging for the first time since mounting.

One thing I’m not happy with is the angle the battery is sitting at, since it’s just a bit wider than the space between the mounting posts. I might try shaving some material off the posts to see if I can get the battery to sit “flat”. I only need about 1mm, which should still allow enough clearance for the screwdriver and screw to pass the cell safely.

The polarity of the speakers is a guess on my part. Neither end seemed to be grounded, hopefully the drivers don’t mind being “common-ed”, otherwise I might need to cram some small isolation transformers in there.

Dec 13 2020
 

So, in the last 12 months or so, I’ve grown my music collection in a big way. Basically, over the Christmas – New Year break, I was stuck at home, coughing and spluttering due to the bushfire smoke in the area (and yes, I realise it was nowhere near as bad in Brisbane as it was in other parts of the country).

I spent a lot of time listening to the radio, and one of the local radio stations was doing a “25 years in 25 days” feature, covering many iconic tracks from the latter part of last century. Now, I’ve always been a big music listener. Admittedly, I’m very much a music luddite, with the vast majority of my music spanning 1965~1995… with some spill-over as far back as 1955 and as far forward as 2005 (maybe slightly further).

Trouble is, I’m not overly familiar with the names, and the moment I walk into a music shop, I’m like the hungry patron walking into a food court: I want to eat something, but what? My mind goes blank as it is bombarded with all kinds of possibilities.

So when this count-down appeared on the radio, naturally I found myself looking up the play list, and I came away with a long “shopping list” of songs to look for. Since then, a decent amount has been obtained on CD from the likes of Amazon and Sanity… however, for some songs, I found it was easiest to obtain them as a digital download in FLAC format.

Now, for me, my music is a long-term investment. An investment that transcends changes in media formats. I do agree with ensuring that the “creators” of these works are suitably compensated for their efforts, but I do not agree with paying for the same thing multiple times.

A few people have had to perform in a studio (or on stage), someone’s had to collect the recordings, mix them, work with the creators to assemble those into an album, work with other creative people to come up with cover art, marketing… all that costs money, and I’m happy to contribute to that. The rest is simply an act of duplication: and yes, that has a cost, but it’s minimal and highly automated compared to the process of creating the initial work in the first place.

To me, the physical media represents one “license”, to perform that work, in private, on one device. Even if I make a few million copies myself, so long as I only play one of those copies at a time, I am keeping in the spirit of that license.

Thus, I work on the principle of keeping an “archival” copy, from which I can derive working copies that get day-to-day playback. The day-to-day copy will be in some lossy format for convenience.

A decade ago that was MP3, but due to licensing issues that became awkward, so I switched over to Ogg/Vorbis, which also reduced the storage requirements by 40% whilst not having much audible impact on the sound quality (if anything, it improved). Since I also had to ditch the illegally downloaded MP3s in the process, that also had a “cleaning” effect: I insisted from then on that I have a “license” for each song, whether that be wax cylinder, tape reel, 8-track, cassette tape, vinyl record, CD, whatever.

This year saw the first time I returned to music downloads, but this time, downloading legally purchased FLAC files. This leads to an interesting problem, how do you store these files in a manner that will last?

Audio archiving and CDs

I’m far from the first person with this problem, and the problem isn’t specific to audio. The archiving business is big money, and sometimes it does go wrong, whether it be old media being re-purposed (e.g. old tapes of “The Goon Show” being re-recorded with other material by the BBC), destruction (e.g. the Universal Studios fire), or just old-fashioned media degradation.

The procedure for film-based and magnetic media usually involves temperature and humidity control, along with periodic inspection. Time-consuming, expensive, error-prone.

CDs are reasonably resilient, particularly proper audio CDs made to the Red Book audio disc standard. In CD-DA, uncompressed PCM audio is Reed-Solomon encoded to provide forward error correction of the PCM data. Thus, if a minor surface defect develops on the media, there is hopefully enough intact data to recover the audio samples and play on as if nothing had happened.

The fact that one can take a disc purchased decades ago, and still play it, is testament to this design feature.

I’m not sure what features exist in DVDs along the same lines. While there is the “video object” container format, its purpose seems to be more about copyright protection than resilience of the content.

Much of the above applies to pressed media. Recordable media (CD-Rs) sadly isn’t as resilient. In particular, the quality of blanks varies, with some able to withstand years of abuse and others degrading after 18 months. Notably, the dye fades, and you start to experience data loss beginning at the edge of the disc.

This works great for stuff I’ve purchased on CD. Vinyl records, if looked after, will also age well, although it’d be nice to have a CD back-up in case my record player packs it in. However, this presents a problem for my digital downloads.

At the moment, my strategy is to download the files to a directory, save a copy of the email receipt with them, place my GPG public key alongside, take SHA-256 hashes of all of the files, then digitally sign the hashes. I then place a copy on an old 1TB HDD, and burn a copy to CD-R or DVD-R. This will get me by for the next few years, but I’ve been “burned” by recordable media failing, and HDDs are not infallible either.
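
That hashing-and-signing step is easy to script. A rough sketch (the directory name is illustrative), assuming GnuPG is installed with a default secret key:

import hashlib
import pathlib
import subprocess

download_dir = pathlib.Path('album-download')
manifest = download_dir / 'SHA256SUMS'

# Hash every file in the download directory and write a manifest.
with manifest.open('w') as f:
    for path in sorted(download_dir.iterdir()):
        if path == manifest or not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        f.write('%s  %s\n' % (digest, path.name))

# Detach-sign the manifest; produces SHA256SUMS.asc, which can be checked
# later with `gpg --verify SHA256SUMS.asc SHA256SUMS`.
subprocess.run(['gpg', '--armor', '--detach-sign', str(manifest)], check=True)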

Getting discs pressed only makes sense when you need thousands of copies; I just need one or two. So I need some media that will last the distance, but can be produced in small quantities at home from readily available blanks.

Archiving formats

So, there are a few options out there for archival storage. Let’s consider a few:

Magnetic tape

Professional outfits seem to work on tape storage. Magnetic media, with all the overheads that implies. The newest drive in the house is a DDS-2 DAT drive, the media for which has not been produced in years, so that’s a lame duck. LTO is the new kid on the block, and LTO-6 drives are pricey!

Magneto-Optical

MO drives are another option from the past… we do have a 5¼” SCSI MO drive sitting in the cupboard, which takes 2GB cartridges, but where do you get the media from? Moreover, what do I do when this unit croaks (if it hasn’t already)?

Flash

Flash media sounds tempting, but then one must remember how flash works. It’s a capacitor on the gate of a MOSFET, storing a charge. The dielectric material around this capacitor has a finite resistance, which will cause “leakage” of the charge, meaning over time, your data “rots” much like it does on magnetic media. No one is quite sure what the retention truly is. NOR flash is better for endurance than NAND, but if it’s a recent device with more than about 32MB of storage, it’ll likely be NAND.

PROM

I did consider whether PROMs could be used for this, the idea being you’d work out what you wanted to store, burn a PROM with the data as ISO9660, then package it up with a small MCU that presents it as CD-ROM. The concept could work since it worked great for game consoles from the 80s. In practice they don’t make PROMs big enough. Best I can do is about 1 floppy’s worth: maybe 8 seconds of audio.

Hard drives

HDDs are an option, and for now that’s half my present interim solution. I have a 1TB drive formatted UDF which I store my downloads on. The drive is one of the old object storage drives from the server cluster after I upgraded to 2TB drives. So not a long-term solution. I am presently also recovering data from an old 500GB drive (PATA!), and observing what age does to these disks when they’re not exercised frequently. In short, I can’t rely on this alone.

CDs, DVDs and Bluray

So, we’re back to optical media. All three of these are available as blank record-able media, and even Bluray drives can read CDs. (Unlike LTO: where an LTO-$X drive might be backward compatible with LTO-$(X-2) but no further.)

There are blanks out there designed for archival use; notably, M-Disc DVD media are allegedly capable of lasting 1000 years.

I don’t plan to wait that long to see if their claims stack up.

All of these formats normally use the same file systems: either ISO-9660 or UDF. Neither of these file systems offers any kind of forward error correction of data, so if the dye fades, or the disc gets scratched, you can potentially lose data.

Right now, my other mechanism is to use CDs and DVDs burned with the same material I put on the aforementioned 1TB HDD. The optical media is formatted ISO-9660 with Joliet and Rock Ridge extensions. It works for now, but I know from hard experience that CD-Rs and DVD-Rs aren’t forever. The question is: can they be improved?

File system thoughts

Obviously, genuinely better-quality media will help in this archiving endeavour, but the thought is: can I improve the odds? Can I sacrifice some storage capacity to achieve data resilience?

Audio CDs, as I mentioned, use Reed-Solomon encoding: specifically, Cross-Interleaved Reed-Solomon Coding (CIRC). ISO-9660 is a file system that supports extensions on the base standard.

I mentioned two before: Rock Ridge and Joliet. On top of Rock Ridge, there’s also zisofs, which adds transparent compression to a Rock Ridge file system. What if I could make an RS-encoded copy of each file’s blocks, placed around the disc surface, so that if the original file was unreadable, we could turn to the forward-error-corrected copy?

There is some precedent for such a proposal: on Usenet, the “parchive” format was popularised as a way of adding FEC to files distributed there. That, at least, embodies the concept of what I’m wishing to achieve.

The other area of research is how I can make the ISO-9660 file system metadata more resilient. It’s no good the files surviving if the file system metadata that records where they are is dead.

Video DVDs are often dual UDF/ISO-9660 file systems, the so-called “UDF Bridge” format. Thus, it must be possible for a foreign file system to live amongst the blocks of an ISO-9660 file system. Conceptually, if we could take a copy of the ISO-9660 file system metadata, FEC-encode those blocks, and map them around the disc, we could make the file system resilient too.

FEC algorithms are another consideration. RS is a tempting prospect, not least because it’s the same family of codes already proven on audio CDs.

zfec, used in Tahoe-LAFS, is another option, as are Golay codes and many others. They’ll need to be assessed on their merits.
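
To give a feel for what k-of-m erasure coding buys you, here’s a quick sketch using zfec (as I understand its low-level Encoder/Decoder interface): any k of the m blocks are enough to rebuild the data, so a disc could lose up to m − k blocks before anything becomes unrecoverable.

import zfec

k, m = 4, 6                        # any 4 of the 6 blocks rebuild the data
data = b'A' * 4096                 # toy payload; must split evenly into k blocks
block_size = len(data) // k
blocks = [data[i*block_size:(i+1)*block_size] for i in range(k)]

encoder = zfec.Encoder(k, m)
shares = encoder.encode(blocks, list(range(m)))   # k originals + m-k parity

# Simulate losing blocks 1 and 3 (say, to a scratch), then recover from the rest.
survivors = [shares[i] for i in (0, 2, 4, 5)]
decoder = zfec.Decoder(k, m)
recovered = decoder.decode(survivors, [0, 2, 4, 5])
assert b''.join(recovered) == data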

Anyway, there are some ideas… I’ll ponder further details in another post.