Computing

New laptop: StarBook Mk VI

I rarely replace computers… I’ll replace something when it is no longer able to perform its usual duty, or if I feel it might decide to abruptly resign anyway. For the last 10 years, I’ve been running a Panasonic CF-53 MkII as my workhorse, and it continues to be a reliable machine.

I just replaced the battery in it, so I now have two: the original, which now has about 1.5-2 hours of capacity, and a new one which gives me about 6 hours. A nice thing about that particular machine is that it still implements legacy interfaces like RS-232 and CardBus/PCMCIA. I’ve upgraded the internal storage to a 2TB SSD and replaced the DVD burner with a Blu-ray burner. There is one thing it does lack, though, which didn’t matter much prior to 2020: an internal microphone. I can plug a headset in, and that works okay for joining work meetings, but if there’s a group of us, that doesn’t work so well.

The machine is also a hefty lump to lug around due to being a “semi-rugged”. There’s also no webcam, not a deal breaker, but again, a reflection of how we communicate in 2023 vs what was typical in 2013.

Given I figured it “didn’t owe me anything”… it was time to look at a replacement and get that up and running before the old faithful decided to quit working and leave me stranded. This time around, I wanted something designed from the ground up for open-source software. The Panasonic worked great for that because it was quite conservative on specs: despite being purchased new in 2013, it sported an Intel Ivy Bridge-class Core i5 when the latest and greatest was the Haswell generation. Linux worked well, and still does, but it did so because of conservatism rather than explicit design.

Enter the StarBook Mk VI. This machine was built for Linux first and foremost; Windows is an option that you pay extra for on this system. You can also choose your preferred CPU, and even your preferred boot firmware, with AMI UEFI and coreboot (Intel models only, for now) available.

Figuring I’ll probably be using this for the better part of 10 years from now… I aimed for the stars:

  • CPU: AMD Ryzen 7 5800U 8-core CPU with SMT (16 threads)
  • RAM: 64GiB DDR4
  • SSD: 1.8TB NVMe
  • Boot firmware: coreboot
  • OS: Ubuntu 22.04 LTS (used to test the machine then install Gentoo)
  • Keyboard Layout: US
  • Power adapter: AU with 2m USB-C cable
         -/oyddmdhs+:.                stuartl@vk4msl-sb 
     -odNMMMMMMMMNNmhy+-`             ----------------- 
   -yNMMMMMMMMMMMNNNmmdhy+-           OS: Gentoo Linux x86_64 
 `omMMMMMMMMMMMMNmdmmmmddhhy/`        Host: StarBook Version 1.0 
 omMMMMMMMMMMMNhhyyyohmdddhhhdo`      Kernel: 6.5.7-vk4msl-sb-… 
.ydMMMMMMMMMMdhs++so/smdddhhhhdm+`    Uptime: 1 hour, 15 mins 
 oyhdmNMMMMMMMNdyooydmddddhhhhyhNd.   Packages: 2497 (emerge) 
  :oyhhdNNMMMMMMMNNNmmdddhhhhhyymMh   Shell: bash 5.1.16 
    .:+sydNMMMMMNNNmmmdddhhhhhhmMmy   Resolution: 1920x1080 
       /mMMMMMMNNNmmmdddhhhhhmMNhs:   WM: fvwm3 
    `oNMMMMMMMNNNmmmddddhhdmMNhs+`    Theme: Adwaita [GTK2/3] 
  `sNMMMMMMMMNNNmmmdddddmNMmhs/.      Icons: oxygen [GTK2/3] 
 /NMMMMMMMMNNNNmmmdddmNMNdso:`        Terminal: konsole 
+MMMMMMMNNNNNmmmmdmNMNdso/-           Terminal Font: Terminus (TTF) 16 
yMMNNNNNNNmmmmmNNMmhs+/-`             CPU: AMD Ryzen 7 5800U (16) @ 4.507GHz 
/hMMNNNNNNNNMNdhs++/-`                GPU: AMD ATI Radeon Vega Series / Radeon Vega Mobile Series 
`/ohdmmddhys+++/:.`                   Memory: 4685MiB / 63703MiB 
  `-//////:--.

First impressions

The machine arrived on Thursday, and I’ve spent much of the last few days setting it up. I first checked it out with the stock Ubuntu install: the machine boots up into an installer of sorts, which is good as it means you set up the user account yourself, so there are no credentials loose in the box. The downside is you don’t get to pick the partition layout.

The machine, despite being ordered with coreboot boot firmware, actually arrived with AMI boot firmware instead. Apparently the port of coreboot for AMD systems is still under active development, and I’m told there will be a guide published describing the procedure for installing coreboot. Minor irritation, I was looking forward to trying out coreboot on this machine — but not a show-stopper… I look forward to trying the guide when it becomes available.

The machine itself felt quite zippy… but then again, when you’re used to a ~12-year-old CPU, 8GB RAM and a 2TB SATA-II SSD for storage, it isn’t much of a surprise that the performance would be a big jump.

Installing Gentoo

After trying the machine out, I booted up a SysRescueCD USB stick and used gparted to shove the Ubuntu install over into the last 32GiB of the disk, then proceeded to create a set of partitions: Gentoo’s root, an 80GiB swap partition (seems a lot, but it’s 64GiB for suspend-to-disk plus 16GiB for contingencies), some space for a /home partition, some LVM space for VMs, and my Ubuntu install right at the end.

I booted back into Ubuntu and used it as my environment for bootstrapping Gentoo; that way I could experience how the machine behaved under a heavy load. Firefox was not bad, under the circumstances. My only gripe was the tug-o-war between Ubuntu insisting that I use their Snap package and me preferring a native install, due to the former’s inability to respect my existing profile settings. This is a weekly battle I have with the office workstation.

In discussions with Starlabs Systems, they mentioned two possible gremlins to watch out for: WiFi (important since this machine has no Ethernet) and the touch pad.

I used a self-built Gentoo stage 3; unwittingly, I used one built against the still-experimental 23.0 profiles, which meant it used a merged /usr base layout… but we’ll see how that goes, since it’s the direction that Debian and others are going anyway. So far, the only issue has been the inability to install openrc and minicom together, since both install a runscript binary in the same place.

Once I had enough installed to be able to boot the Gentoo install, including building a kernel, I got the boot-loader installed, re-configured UEFI to boot that in preference to Ubuntu, then booted the new OS.

First boot under Gentoo

OS boot-up was near instantaneous. I’m used to about 10-15 seconds spent, but this took no time at all.

WiFi worked out-of-the-box with kernel 6.5.7, but the touch pad was not detected. Actually, under X11 the keyboard was unresponsive too, because I had forgotten to install the various drivers for X.org. Oops! I sorted out the drivers easily enough, but the touch pad was still an issue.

Troubleshooting the touch pad

To get the touch pad working, I ended up taking the Ubuntu kernel config, setting NVMe and btrfs to be built-in, and re-building the whole thing again… it took a long time, but success: I had the touch pad working.

The tricky bit is that the touch pad is an I²C device connected via the AMD chipset and described in the ACPI tables. Not quite sure how this will work under coreboot, but we’ll cross that bridge later. I spent a little time today refining the kernel down from the everything-kernel that Ubuntu uses… to something a little more specific. Notably, I took out things you can’t directly plug into this machine (like ISA/PCI/PCIe cards, CardBus/PCMCIA, etc.) and interfaces the machine does not have (e.g. floppy drive, 8250 serial). Things that could conceivably be plugged in, like USB devices, were left in.

It took several tries, but I got something that’s workable for this hardware in the end.

Final kernel configuration

The end result is this kernel config. Intel StarBook users might be better off starting with the Ubuntu kernel config like I did and pare it back, but that file may still give you some clues.

Thoughts

Whilst compiling, this machine does not muck around… being an 8-core SMT machine it builds things quite rapidly, although on this occasion I gave the machine a helping hand on some bigger packages like Chromium by using a pre-built binary built for my other machines.

Everything I use seems to work just fine under Gentoo… and of course, having copied my /home from the Panasonic (you never realise how much crap you’ve got until you move house!), it was just a little tweaking of settings to suit the newer software installed.

I’m yet to try it out a full day running on the battery to see how that fares. Going flat-chat doing builds it only lasted about 2 hours, but that’s to be expected when you’ve got the machine under a heavy load.

Zoom sees the webcam and can pick up the microphone just fine. I expect Slack will too, but I’ll find that out when I return to work (ugh!) in a fortnight.

My only gripe right now is that my right pinkie finger keeps finding the SysRq/PrintScreen button when moving around with the arrow keys… I’ve been used to that arrow cluster being on the far right of the keyboard, not one row back like this one. Other than that, the keyboard seems reasonable for typing on. The touch pad, not being recessed, sometimes picks up stray movements when typing, but one can disable/enable it pretty easily via Fn+F10 (yes, I have Fn-lock enabled). The keyboard backlight is a nice touch too.

The lack of an Ethernet port is my other gripe, but not hard to work-around, I have a USB-C “dock” that I bought to use with my tablet that gives me 3×USB-3A, full-size SD, microSD, 2×HDMI, Ethernet and audio out and pass-through USB-C for charging. The Ethernet port on that works and the laptop happily charges through it, so that works well enough.

The power supply for this thing is tiny: 65W with USB-A and USB-C ports. I also tried charging this laptop with a conventional USB-A charger, but it did not want to know (USB-A can’t do the USB PD negotiation the laptop presumably requires). It should be possible to find a 12V-powered USB-C charger that will work, though.

The Toughbook will likely be my go-to on camping trips and WICEN events, despite being a heavier and bigger unit: usually I’m not lugging the thing around, it’s better ruggedised for outdoor activities, and it also looks about 10 years older than it really is, so it’s not attractive to steal.

Leave at last

So… I’ve been busy at work lately, and that for the last few months has been my primary focus. A big reason why I’ve been keeping my head low is that a few years ago, it was pointed out that I had been physically with my current employer for about 10 years.

Here in Australia, that milestone grants long-service leave: a bonus 8⅔ weeks of leave. This is automatic for full-time employees of a company, and harks back to the days when people travelled from England to Australia by ship to work; the leave gave them an opportunity to travel back and visit family “back home”.

But at the time, I wasn’t there yet! See, for the first few years I was a contractor, so not a full-time employee. I didn’t become full-time until 2013, meaning my 10 years would not tick up until this year.

While the milestone is 99% symbolic, the thing is at my age (nearing 40), I’m unlikely to ever see that milestone come up again. If I did something that blew it or put it in jeopardy in any way, it’d be up in smoke.

There are some select cases where such leave may be granted early (between 7 and 10 years of service):

  • if the person dies, suffers total physical disability or serious illness
  • the person’s position becomes untenable
  • the person’s domestic situation forces them to leave (e.g. dropping out of work to become a carer for a family member)
  • the employer dismisses the person for reasons other than that person’s performance, conduct or capacity
  • unfair dismissal

I thought it was worth sticking it out… after 10 years it’s a done deal; the person is entitled to the full amount. If they booted me out after that, they’d still have to pay that out, plus the holiday leave (of which I still have lots, because I haven’t taken much since 2018).

Employment plans

Right now, I’m not going anywhere, I’ve got nowhere to go anyway. While doing work on things like electricity billing brings me no joy whatsoever (“bill shock as-a-service” is what it feels like), it pays the bills, and I’m not quite at the point where I can safely let it all go and coast into an early retirement.

Work has actually changed a lot in the past few years. Years ago, we did a lot of Python work, I also did some work in C. Today, it’s lots of JavaScript, which is an idiosyncratic language to say the least, and don’t get me started on the moving target that is UI frameworks for web pages!

Dealing with the disaster that is Office365 (which is threatening to encroach even further on my work) doesn’t make this any easier, but sadly that piece of malware has infected most organisations now. (Just do a dig -t MX <employer-domain>, or look at the headers of an email from their employees; many show Office365 today.) I’ve so far dodged Microsoft Teams, which I now flatly refuse to use, as I do not consent to my likeness/voice being used in AI models and Microsoft isn’t being open about how it uses AI.

Most people my age get shepherded into management positions, really not my scene. In a new job I’d be competing with 20-somethings that are more familiar with the current software stacks. Non-technical jobs exist, but many assume you own a motor vehicle and possess the requisite license to operate it.

This pretty much means I’m unemployable if I leave or am booted out, so whatever I have in the bank balance needs to make it through until my time on this planet is done.

Thus, I must stick it out a bit longer… I might not get the 15-year bonus (4⅓ weeks), but at least I can’t lose what I have now. If excrement does meet a rotary cooling device though, simulations suggest with some creative accounting, I may just scrape through. I don’t plan on setting up a donations page and talking to Centrelink is a waste of time, I’ll die a pauper before they answer the phone.

Plans for this month

So I’m on holiday leave until November. Unlike previous times I’ve taken big amounts off, I won’t be travelling far this time around. Instead, it’s a project month.

Financial work

I need to plan ahead for the possibility that I wind up in long-term unemployment. I don’t expect to live long (the planet cannot sustain everyone living to >100 years), but I do need to be around to finalise the estates of my parents and see my cat out.

That suggests I need to keep the lights on for another 20~30 years. Presently my annual expenditure has been around the $30k mark, but much of that is discretionary (most of it has been on music), and I could possibly reduce that to around the $10k mark.

I have some shares, but need to expand on this further. David Rowe posted some ideas in a two-part series which provides some food for thought here.

At the moment, I’m nowhere near that 10% yield figure mentioned… that post was written in 2015 and a lot has changed in 8 years. Interest rates are currently at ~5% for term deposits.

I do plan to start one all the same, though. After Suncorp closed both The Gap and Ashgrove branches (forcing me all the way to Mitchelton), I set up an account at BOQ, who have branches in both Ashgrove and The Gap… so I can do a term deposit with either, and they’re both offering a 5% 12-month term deposit.

I have a year’s worth sitting at BOQ in an interest-bearing account… so that’s money that’s readily accessible. The remainder I plan to split: some going into the aforementioned term deposit, the rest into that interest-bearing account in case I decide to buy more shares.

That should start building the reserves up.

Hardware refurbishment and replacement

Some of my equipment is getting a bit long in the tooth. The old desktop I bought back in 2010 is playing silly-buggers at the moment, and even the laptop I’m typing this on is nearing 10 years old. I have one desktop which used to be my office workstation before the pandemic, so between it and the old desktop, I have decent processing capacity.

The server rack needs work though. One compute node is down, and I’m actually running out of space. I also need to greatly expand the battery bank. I bought a full-height open-frame rack to replace the old one, and was gifted a new solar controller, so some time during this break, I’ll be assembling that, moving the old servers into it… and getting the replacement compute node up and running.

Software updates

I’ve been doing this with the critical servers already… I recently replaced the mail server with a new VM instance, which lowered the maintenance workload a lot… but there are still some machines that need my attention.

I’m already working on getting my Mastodon instance up to release 4.2.0 (I bumped it to 4.1.9 to at least get a security patch off my back), there are a couple of OpenBSD routers that need updates and some similar remedial work.

Projects

Already mentioned is the server cluster (hardware and software), but there are some other projects that need my attention.

  • setuptools-pyproject-migration is a project that David Zaslavsky and I have been working on that is intended to help projects migrate from the old setup.py scripts in Python projects to the new pyproject.toml structure. Work has kept me busy, but the project is nearly ready for the first release. I need to help finish up the bits that are missing, and get that out there.
  • aioax25 could use some love: connected mode nearly works, plus it could do with a modernisation.
  • Brisbane WICEN‘s RFID tracking project is something I have not posted much about, but it nonetheless got a lot of attention at the Tom Quilty this year; it needs further work.

Self-Training

Some things I’d like to try and get my head around, if possible…

  • Work uses NodeJS for a lot of things, but we’re butting up against its limits a lot. We use a lot of projects that are written in GoLang (e.g. InfluxDB, Grafana, Terraform, Vault), and while I did manage to hack some features into s3sync needed for work, I should get to know GoLang properly.
  • Rust interests me a lot. I should at least have a closer look at this and learn a little. It has been getting a mention around the office in the context of writing NodeJS extensions. Definitely worth looking into further.
  • I need to properly understand OAuth2, as I don’t think I completely grasp it as it stands now. I’m not sure I’m doing it “right”.
  • COSE would have applications in both the WideSky Hub (end-to-end encryption) and in Brisbane WICEN’s RFID tracking system (digital signatures).

Physical exercise

I have not been out on the bike for some time, and it shows! I need to get out more. I intend to do quite a bit of that over the next few weeks.

Maybe I might do the odd over-nighter, but we’ll see.

Generating ball tickets/programmes using LaTeX

My father does a lot of Scottish Country dancing; he was treasurer of the Clan MacKenzie association for quite a while, and president there for about 10 years too. He was given the task of making some ball tickets, each one uniquely numbered.

After hearing him swear at LibreOffice for a bit, then at Avery’s label making software, I decided to take matters into my own hands.

The first step was to come up with a template. The programmes were to be A6-size booklets, made up of A5 pages folded in half; for ease of manufacture, they would be printed two to an A4 page. The template would serve as the outer and inner pages, with the outer page carrying a placeholder that we’d substitute.

The outer pages of the programme/ticket booklet… yes there is a typo in the last line of the “back” page.
\documentclass[a5paper,landscape,16pt]{minimal}
\usepackage{multicol}
\setlength{\columnsep}{0cm}
\usepackage[top=1cm, left=0cm, right=0cm, bottom=1cm]{geometry}
\linespread{2}
\begin{document}
\begin{multicols}{2}[]

\vspace*{1cm}

\begin{center}
\begin{em}
We thank you for your company today\linebreak
and helping to celebrate 50 years of friendship\linebreak
fun and learning in the Redlands.
\end{em}
\end{center}

\begin{center}
\begin{em}
May the road rise to greet you,\linebreak
may the wind always be at your back,\linebreak
may the sun shine warm upon your face,\linebreak
the rains fall soft upon your fields\linebreak
and until we meet again,\linebreak
may God gold you in the palm of his hand.
\end{em}
\end{center}

\vspace*{1cm}

\columnbreak
\begin{center}
\begin{em}
\textbf{CLEVELAND SCOTTISH COUNTRY DANCERS\linebreak
50th GOLD n' TARTAN ANNIVERSARY TEA DANCE}\linebreak
\linebreak
1973 - 2023\linebreak
Saturday 20th May 2023\linebreak
1.00pm for 1.30pm - 5pm\linebreak
Redlands Memorial Hall\linebreak
South Street\linebreak
Cleveland\linebreak
\end{em}
\end{center}

\begin{center}
\begin{em}
Live Music by Emma Nixon \& Iain Mckenzie\linebreak
Black Bear Duo
\end{em}
\end{center}

\vspace{1cm}

\begin{center}
\begin{em}
Cost \$25 per person, non-dancer \$15\linebreak
\textbf{Ticket No \${NUM}}
\end{em}
\end{center}
\end{multicols}
\end{document}

The inner pages were the same for all booklets, so we just came up with one file that was used for all. I won’t put the code here, but suffice to say, it was similar to the above.

The inner pages, no placeholders needed here.

So we had two files: ticket-outer.tex and ticket-inner.tex. What next? Well, we needed to make 100 versions of ticket-outer.tex, each with a different number substituted for the \${NUM} placeholder, and rendered as PDF. Similarly, we needed the inner pages rendered as a PDF (which we can do just once, since they’re all the same).

#!/bin/bash
NUM_TICKETS=100

set -ex

pdflatex ticket-inner.tex
for n in $( seq 1 ${NUM_TICKETS} ); do
	sed -e 's:\\\${NUM}:'${n}':' \
            < ticket-outer.tex \
            > ticket-outer-${n}.tex
	pdflatex ticket-outer-${n}.tex
done

This gives us a single ticket-inner.pdf, and 100 different ticket-outer-NN.pdf files that look like this:

A ticket outer pages document with substituted placeholder

Now, we just need to put everything together. The final document should have no margins and should just import the relevant PDF files in place. So naturally, we script it, this time stepping two tickets at a time, so we can assemble the A4 PDF document from our A5 tickets: outer pages of the odd-numbered ticket, outer pages of the even-numbered ticket, followed by two copies of the inner pages; repeat for all tickets. We also need to ensure that initial paragraph lines are not indented, so setting \parindent to zero solves that.

This is the rest of my quick-and-dirty shell script:

cat > tickets.tex <<EOF
\documentclass[a4paper]{minimal}
\usepackage[top=0cm, left=0cm, right=0cm, bottom=0cm]{geometry}
\usepackage{pdfpages}
\setlength{\parindent}{0pt}
\begin{document}
EOF
for n in $( seq 1 2 ${NUM_TICKETS} ); do
	m=$(( ${n} + 1 ))
	cat >> tickets.tex <<EOF
\includegraphics[width=21cm]{ticket-outer-${n}.pdf}
\includegraphics[width=21cm]{ticket-outer-${m}.pdf}
\includegraphics[width=21cm]{ticket-inner.pdf}
\includegraphics[width=21cm]{ticket-inner.pdf}
EOF
done
cat >> tickets.tex <<EOF
\end{document}
EOF
pdflatex tickets.tex

The result is a 100-page PDF, which when printed double-sided, will yield a stack of tickets that are uniquely numbered and serve as programmes.

A crude attempt at memory management

The other day I had a bit of a challenge to deal with. My workplace makes embedded data collection devices which are built around the Texas Instruments CC2538 SoC (internal photos visible here) and run OpenThread. To date, everything we’ve made has been an externally-powered device, running off either DC power (9-30V) or mains (120/240V 50/60Hz AC). CC2592 range extender support was added to OpenThread for this device.

The CC2538, although very light on RAM (32KiB), gets the job done with some constraints. Necessity threw us a curve-ball the other day: we wanted a device that ran off a battery. That meant going into sleep mode periodically, and not just any sleep: deep sleep! The CC2538 has a number of operating modes:

  1. running mode (pretty much everything turned on)
  2. light sleep mode (clocks, CPU and power stays on, but we pause a few peripherals)
  3. deep sleep mode — this comes in four flavours
    • PM0: Much like light-sleep, but we’ve got the option to pause clocks to more peripherals
    • PM1: PM0, plus we halt the main system clock (32MHz crystal or 16MHz RC), halting the CPU
    • PM2: PM1 plus we power down the bottom 16KiB of RAM and some other internal peripherals
    • PM3: PM2 plus we turn off the 32kHz crystal used by the sleep timer and watchdog.

We wanted PM2, which meant while we could use the bottom 16KiB of RAM during run-time, the moment we went to sleep, we had to forget about whatever was kept in that bottom 16KiB RAM — since without power it would lose its state anyway.

The challenge

Managing RAM in a device like this is always a challenge. malloc() is generally frowned upon; however, in some cases it’s a necessary evil. OpenThread internally uses mbedTLS, and that relies on having a heap. It can use one implemented by OpenThread, or one provided by you. Our code also uses malloc for some things, notably short-term tasks like downloading a new configuration file or buffering serial traffic.

The big challenge is that OpenThread itself uses a little over 9KiB RAM. We have a 4KiB stack. That leaves us under 3KiB, and that’s bare-bones OpenThread. If you want JOINER support, for joining a mesh network, that pulls in DTLS, which by default will tell OpenThread to statically allocate a 6KiB buffer.

9KiB becomes about 15KiB; plus the stack, that’s 19KiB. That is bigger than 16KiB, so the linker gives up.

Using heap memory

There is a work-around that gets things linking: you can build OpenThread with the option OPENTHREAD_CONFIG_HEAP_EXTERNAL_ENABLE. If you set this to 1, OpenThread forgoes its own heap and just uses malloc / free instead, as implemented by your toolchain.

OpenThread builds and links in 16KiB RAM, hooray… but then you try joining, and NoBufs is the response. We’re out of RAM. Moving things to the heap just kicked the can down the road: we still need that 6KiB, but we only have under 3KiB to give it. Not enough.

We have a problem in that the toolchain we use is built on newlib, and while it implements malloc / free / realloc, it does so with a primitive called _sbrk(). We define a pointer initialised to the top of our .bss, and whenever malloc needs more memory for the heap, it calls _sbrk(N); we grab the value of our pointer, add N to it, and return the old value. Easy.
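To illustrate that, here’s a minimal sketch of an _sbrk() implementation. This is a host-testable simulation: on the real device, the static heap array and its bounds would instead come from symbols defined by the linker script.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for linker-provided symbols: on the CC2538 these would
 * mark the region between the top of .bss and the top of SRAM. */
static uint8_t heap[1024];
static uint8_t *heap_ptr = heap;

/* newlib's malloc calls _sbrk(N) when it needs N more bytes of heap:
 * return the old break pointer and advance it by N. */
void *_sbrk(ptrdiff_t incr)
{
    uint8_t *prev = heap_ptr;
    if (heap_ptr + incr > heap + sizeof(heap)) {
        errno = ENOMEM;          /* out of heap space */
        return (void *)-1;
    }
    heap_ptr += incr;
    return prev;
}
```

The heap only ever grows upward, which is exactly the property that gets us into trouble with two disjoint pools.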

Except… we don’t just have one memory pool now, we have two, one of which we cannot use all the time. OpenThread, via mbedTLS, also winds up calling malloc() very early in the initialisation (as early as the otInstanceInitSingle() call that initialises OpenThread). We need that block of RAM to wind up in the upper 16KiB that stays powered on, so we can’t start at address 0x2000:0000 and just skip over .data/.bss when we run out.

malloc() will also get mighty confused if we suddenly hand it an address that’s lower than the one we handed out previously. We can’t go backwards.

I looked at replacing malloc() with a dual-pool-aware version, but newlib is hard-coded in a few places to use its own malloc() and not a third-party one. picolibc might let us swap it out, but getting that integrated looked like a lot of work.

So we’re stuck with newlib‘s malloc() for better or worse.

The hybrid approach

One option: since we can’t control which malloc the newlib functions use, let newlib’s malloc with _sbrk() manage the upper heap, and wrap that malloc with our own creation that we pass to OpenThread. We implement otPlatCAlloc and otPlatFree, which are essentially calloc and free wrappers.

The strategy is simple: first try the normal calloc; if that returns NULL, then use our own.
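Sketched out, the wrappers look something like this. The lowheap_* helpers are hypothetical stand-ins here (stubbed with a static arena purely to show the control flow); the real ones would carve blocks out of the lower 16KiB.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Stub of the lower-pool allocator, just for illustration. */
static uint8_t lowheap[256];
static size_t lowheap_used = 0;
static int force_fallback = 0;   /* test hook: pretend calloc failed */

static void *lowheap_calloc(size_t nmemb, size_t size)
{
    size_t bytes = nmemb * size;
    if (lowheap_used + bytes > sizeof(lowheap))
        return NULL;
    void *p = &lowheap[lowheap_used];
    lowheap_used += bytes;
    memset(p, 0, bytes);         /* calloc semantics: zeroed memory */
    return p;
}

static int lowheap_owns(void *p)
{
    return (uint8_t *)p >= lowheap && (uint8_t *)p < lowheap + sizeof(lowheap);
}

/* The strategy: try newlib's calloc first, fall back to the low pool. */
void *otPlatCAlloc(size_t nmemb, size_t size)
{
    void *p = force_fallback ? NULL : calloc(nmemb, size);
    if (p == NULL)
        p = lowheap_calloc(nmemb, size);
    return p;
}

void otPlatFree(void *p)
{
    if (p == NULL)
        return;
    if (lowheap_owns(p)) {
        /* real code would release the low-pool blocks here */
    } else {
        free(p);
    }
}
```

The nice property of this arrangement is that the early, must-survive-sleep allocations naturally land in the upper heap, and the lower pool only comes into play once the upper heap is exhausted.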

Re-purposing an existing allocator

The first rule of software engineering: don’t write code you don’t have to. So naturally, I went looking for options.

Page upon page of “No man don’t do it!!!”

jemalloc looked promising at first (it is the FreeBSD malloc()), but therein lies a problem: it’s a pretty complicated piece of code aimed at x86 computers with megabytes of RAM at minimum. It uses uint64_ts in a lot of places and seemed like it would carry a pretty high overhead on a little CC2538.

I tried avr-libc‘s malloc. It’s far simpler, and is actually a free-list implementation like newlib‘s version, but there is a snag: AVR microcontrollers are 8-bit beasts and don’t care about memory alignment, but the Cortex-M3 does! avrlibc_malloc did its job and handed back a pointer, but then I wound up in a HARDFAULT condition because mbedTLS tried to access a 32-bit word that was offset by a few bytes.

A simple memory allocator

The approach I took was a crude one. I would allocate memory in fixed-sized “blocks”. I first ran the OpenThread code under a debugger and set a break-point on malloc to see what sizes it was asking for — mostly blocks around the 128 byte mark, sometimes bigger, sometimes smaller. 64-byte blocks would work pretty well, although for initial testing, I went the lazy route and used 8-byte blocks: uint64_ts.

In my .bss, I made an array of uint8_ts, with size equal to the number of 8-byte blocks in the lower heap divided by 4. This would be my usage bitmap: each block was allocated two bits, which I accessed using bit-banding. One bit I called used, which simply reported that the block was in use; the second, called chained, indicated that the data stored in this block spilled over into the next block.

To malloc some memory, I’d simply look for a run of free blocks big enough. When it came to freeing memory, I simply started at the block referenced and cleared bits until I got to a block whose chained bit was already clear. Because I was using 8-byte blocks, everything was guaranteed to be aligned.

8-byte blocks in 16KiB (2048 blocks) wound up with 512 bytes of usage data. As I say, using 64-byte blocks would be better (only 256 blocks, which fits in 64 bytes), but this was a quick test. The other trick would be to use the very first few blocks to store that bitmap (for 64-byte blocks, we only need to reserve the first block).

The scheme is somewhat inspired by the buddy allocator scheme, but simpler.

Bit banding was simple enough; I defined my struct for accessing the bits:

struct lowheap_usage_t {
        uint32_t used;
        uint32_t chained;
};

and in my code, I used a C macro to do the arithmetic:

#define LOWHEAP_USAGE                                                   \
        ((struct lowheap_usage_t*)(((((uint32_t)&lowheap_usage_bytes)   \
                                     - 0x20000000)                      \
                                    * 32)                               \
                                   + 0x22000000))

The magic numbers here are:

  • 0x20000000: the start of SRAM on the CC2538
  • 0x22000000: the start of the SRAM bit-band region
  • 32: the bit-band expansion factor (each bit of SRAM maps to its own 32-bit word in the alias region, so each byte of SRAM occupies 32 bytes there)

Then, in my malloc, I could simply call…

struct lowheap_usage_t* usage = LOWHEAP_USAGE;

…and treat usage like an array; where element 0 was the usage data for the very first block down the bottom of SRAM.
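As a sanity check of that address arithmetic (the alias region only exists on the actual Cortex-M3, but the mapping itself can be computed anywhere), the general bit-band formula is: alias = 0x22000000 + (byte offset from 0x20000000) × 32 + (bit number) × 4.

```c
#include <assert.h>
#include <stdint.h>

/* Compute the bit-band alias address for a given bit of a given SRAM
 * byte on the CC2538 (SRAM at 0x20000000, alias region at 0x22000000). */
static uint32_t bitband_alias(uint32_t byte_addr, uint32_t bit)
{
    return 0x22000000u                      /* start of alias region  */
         + (byte_addr - 0x20000000u) * 32u  /* each byte -> 32 bytes  */
         + bit * 4u;                        /* each bit -> one word   */
}
```

With two consecutive bit-band words per block, indexing the struct array by block number lands each element exactly on its used/chained pair.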

To implement a memory allocator, I needed five routines:

  • one that scanned through, and told me where the first free block was after a given block number (returning the block number) — static uint16_t lowheap_first_free(uint16_t block)
  • one that, given the start of a run of free blocks, told me how many blocks following it were free — static uint16_t lowheap_chunk_free_length(uint16_t block, uint16_t required)
  • one that, given the start of a run of chained used blocks, told me how many blocks were chained together — static uint16_t lowheap_chunk_used_length(uint16_t block)
  • one that, given a block number and count, would claim that number of blocks starting at the given starting point — static void lowheap_chunk_claim(uint16_t block, uint16_t length)
  • one that, given a starting block, would clear the used bit for that block, and if chained was set, clear it and repeat the step on the following block (continuing until the whole run was freed) — static void lowheap_chunk_release(uint16_t block)

From here, implementing calloc was simple:

  1. first, try the newlib calloc and see if that succeeded. Return the pointer we’re given if it’s not NULL.
  2. if we’re still looking for memory, round up the memory requirement to the block size.
  3. initialise our starting block number (start_nr) by calling lowheap_first_free(0) to find the first block; then in a loop:
    • find the size of the free block (chunk_len) by calling lowheap_chunk_free_length(start_nr, required_blocks).
    • If the returned size is big enough, break out of the loop.
    • If not big enough, advance start_nr past both the too-small free run and the used chunk that follows it: add chunk_len plus the return value of lowheap_chunk_used_length(start_nr + chunk_len).
    • Stop iterating if start_nr is equal to or greater than the total number of blocks in the heap.
  4. If start_nr winds up being past the end of the heap, fail with errno = ENOMEM and return NULL.
  5. Otherwise, we’re safe, call lowheap_chunk_claim(start_nr, required_blocks); to reserve our space, zero out the actual blocks allocated, then return the address of the first block cast to void*.

Implementing free was not a challenge either: if the pointer was above our heap, we simply passed it to newlib’s free; if it was in our heap space, we did some arithmetic to figure out which block the address was in, then passed that block number to lowheap_chunk_release().

I won’t publish the code because I didn’t get it working properly in the end, but I figured I’d put the notes here on how I put it together to re-visit in the future. Maybe the thoughts might inspire someone else. 🙂

Demise of classic hardware: the final act

So today I finally got around to the SGI kit in my possession. Not quite sure where all of it went; there’s an SGI PS/2 keyboard, an Indy Presenter LCD and an SGI O2 R5000 180MHz CPU module that have gone AWOL, but this morning I took advantage of the Brisbane City Council kerb-side clean-up.

Screenshot of the post — yes I need to get Mastodon post embedding working

I rounded up some old Plextor 12× CD-ROM drives (SCSI interface) that took CD caddies (remember those?) to go onto the pile as well, plus some SCSI HDDs I found laying around — since there’s a good chance the disks in the machines are duds. I did once boot the Indy off one of those CD-ROM drives, so I know they work with the SGI kit.

The machines themselves had gotten to the point where they no longer powered on. The O2 at first did, and I tried saving it, but I found:

  1. it was unreliable, frequently freezing up — until one day it stopped powering on
  2. the case had become ridiculously brittle

The Indy exploded last time I popped the cover, and fragments of the Indigo2 were falling off. The Octane is the only machine whose case seemed largely intact. I had gathered up what IRIX kit I had too, just in case the new owners wanted to experiment. archive.org actually has the images, and I had a crack at patching irixboot to be able to use them. Never got to test that though.

Today I made the final step of putting the machines out on the street to find a new home. It looks like exactly that has happened, someone grabbed the homebrew DB15 to 13W3 cable I used for interfacing to the Indy and Indigo2, then later in the day I found the lot had disappeared.

With more room, I went and lugged the old SGI monitor down; it’s still there, but I suspect it’ll go too. The Indy and Indigo2 looked to be pretty much maxxed-out on RAM, so they should be handy as a source of parts for restoring other similar-era SGI kit. I do wish the new owners well with their restoration journey, if that’s what they choose to do.

For me though, it’s the end of an era. Time to move on.

HTML Email ought to be considered harmful: auDA shows us why

I’m the owner of two domain licenses, longlandclan.id.au and vk4msl.id.au, both purchased for personal use. The former I share with other family members, whereas the latter is for my own use. Consequently, I’m on auDA’s mailing lists and receive the occasional email from them. No big deal. Lately, they’ve been pushing .au domains (i.e. dropping the .id bit out), which I’m not worried about myself, but I can see the appeal for businesses.

Anyway… I practice what I preach with regards to email: I do not send email in HTML format, and my email client is set to display the plain-text part of a message rather than the HTML, unless there is no plain-text component. This morning, I received what I consider a textbook example of why HTML email is so bad for the Internet today.

From: .au Domain Administration <noreply@auda.com.au>
Subject: Notice: .au Direct Registration
Date: Wed, 10 Aug 2022 23:00:04 +0000
Reply-To: .au Domain Administration <noreply@auda.com.au>
X-Mailer: Mailchimp Mailer - **CID292f65320f63be5c3fcd**

The .au Domain Administration (auDA) recently launched Australia’s newest domain namespace – .au direct.

Dear Stuart Longland,

The .au Domain Administration (https://www.auda.org.au/)  (auDA), recently launched Australia’s newest domain namespace – .au direct. The new namespace provides eligible registrants the option to register domain names directly before the .au for the first time (e.g. forexample.au).

Registrants with an existing .au domain name licence are eligible to apply for a direct match of their .au direct domain name through the Priority Allocation Process (e.g. if you hold forexample.com.au (https://aus01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fforexample.com.au%2F&data=05%7C01%7Cprivate.address%40auda.org.au%7C95a9271d4eff4973013b08da3240a115%7C81810bc45d6845f6ba4e3d6c9fb37e43%7C0%7C0%7C637877550424818538%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=2WhPYMxV3FI9nEpXDEk8KdyJwWGyqcI%2FwRd%2FNc7DQks%3D&reserved=0) , you can apply for Priority Status to register forexample.au). Information about your existing domain name licence is available here:  https://whois.auda.org.au/. The Priority Allocation Process is now open and will close on 20 Sept 2022.

That is the email, as it appeared in my email client (I have censored the unfortunate auDA employee’s email address). I can see what happened:

Someone composed an email (likely in HTML format) that would be part of the marketing campaign they were going to send via MailChimp. The person composing the email for MailChimp clearly is using Microsoft Outlook (or maybe that should be called Microsoft LookOut!). Microsoft’s software saw what it thought was a hyperlink and thought, “I need to ‘protect’ this”, and made it a “safe” link. A link with the user’s email address embedded in it!

Funnily enough, this seems to be the only place where a link was mangled by Microsoft’s mal^H^H^Hsoftware. I think this underscores the importance of verifying that you are indeed sending what you think you are sending, and highlights how difficult HTML (and Microsoft) have made this task. Two lessons:

  1. don’t assume that people will only see the HTML email
  2. don’t assume that what you see in the HTML view is identical to what will be seen in plain text

Might be better to compose the plain text, get that right… then paste that into the HTML view and “make it pretty”… or perhaps don’t bother and just go back to plain-text? KISS principle!

Resurrecting an SGI O2

Years ago, I was getting into Linux on esoteric architectures, starting with a MIPS-based Gateway Microserver… to better understand the platform I also obtained a few Silicon Graphics workstations, including this O2.

Originally a R5000SC at 180MHz and 128MB RAM, I managed to get hold of an RM5200 300MHz CPU module for it, and with the help of people on the #mipslinux channel on the old irc.freenode.net, managed to obtain a PROM update to get the RM5200 working. Aside from new HDDs (the original died just recently), it’s largely the stock hardware.

I figured it deserved to go to a new home, and a fellow on Gumtree (in WA) happened to be looking for some of these iconic machines, so I figured I might as well clean this machine up as best I can and get it over there while it’s still somewhat functioning. That said, age has not been friendly to this beast.

  • the CD-ROM drive tray gear has popped off the motor spindle, rendering the CD-ROM drive non-functional
  • in trying to fix the CD-ROM issue (I tried disassembling the drive but couldn’t get at the parts needed), the tab that holds the lid of the machine on broke
  • the PSU fan, although free to spin, does not appear to be operational
  • the machine seems to want to shut off after a few minutes of run-time

The latter two are related, I think: likely things get too hot and a protection circuit kicks in to turn the machine off. There’s no dust in the machine to cause a lack of air flow, so I suspect the fan is the issue. This will be my biggest challenge, I suspect. It looks to be a fairly standard case fan, so opening up the power supply (not for the faint of heart!) to replace it with a modern one isn’t out of the question.

The CD-ROM drive is a different matter. SGI machines use 512-byte sectors on their CDs, and this requires CD-ROM firmware that supports that sector size. I have a couple of Plextor SCSI drives that do offer it (there is a jumper marked “BLOCK”), but they won’t physically fit in the O2 (they are caddy-loading drives). Somewhere around the house I have a 68-pin SCSI cable; I should be able to link that to the back of the O2 via its external SCSI port, then cobble together a power supply to run the drive externally… but that’ll be a project for another day.

A working monitor was a possible challenge, but a happy accident: it seems some LCD monitors can do sync-on-green, and thus are compatible with the O2. I’m using a little 7″ USB-powered WaveShare LCD which I normally use for provisioning Raspberry Pis. I just power the monitor via a USB power supply and use the separately-provided VGA adaptor cable to plug it into the O2, so I don’t have to ship a bulky 20″ CRT across the country.

The big issue is getting an OS onto the thing. I may have to address the sudden-shutdown issue first before I stand a reasonable chance at an OS install. Then there’s the question of which OS these things can still run. My options today seem to be:

  • Debian Jessie and earlier (Stretch dropped support for mips4 systems, favouring newer mips64r2/mips32r2 systems)
  • Gentoo Linux (which it currently does run)
  • OpenBSD 6.9 and earlier (7.0 discontinues the sgi port)
  • NetBSD 9.2
  • IRIX 6.5

The fellow ideally wants IRIX 6.5 on the thing, which is understandable, that is the native OS for these machines. I never had a full copy of IRIX install media, and have only used IRIX once (it came installed on the Indy). I’ve only ever installed Gentoo on the O2.

Adding to the challenge, I’ll have to network boot the thing because of the duff CD-ROM drive. I had thought I’d just throw NetBSD on the thing since that is “current” and would at least prove the hardware works with a minimum of fuss… but then I stumbled on some other bits and pieces:

  • irixboot is a Vagrant-based virtual machine with the tools needed to network-boot an SGI workstation. The instructions were written for IP22 hardware (Indy/Indigo²), but should work here because IP32 hardware like the O2 also has a 32-bit PROM
  • The Internet Archive provides CD images for IRIX 6.5, including the foundation discs which I’ve never possessed

Thus, there seems to be all the bits needed to get IRIX onto this thing, if I can get the machine to stay running long enough.

Network juju on the fly: migrating a corporate network to VPN-based connectivity

So, this week mother nature threw South East Queensland a curve-ball like none of us have seen in over a decade: a massive flood. My workplace, VRT Systems / WideSky.Cloud Pty Ltd resides at 38b Douglas Street, Milton, which is a low-lying area not far from the Brisbane River. Sister company CETA is just two doors down at no. 40. Mid-February, a rain depression developed in the Sunshine Coast Hinterland / Wide Bay area north of Brisbane.

That weather system crawled all the way south, hitting Brisbane with constant heavy rain for 5 days straight… eventually creeping down to the Gold Coast and over the border to the Northern Rivers part of NSW.

The result on our offices was devastating. (Copyright notice: these images are placed here for non-commercial use with permission of the original photographers… contact me if you wish to use these photos and I can forward your request on.)

Some of the stock still worked after the flood — the Siemens UH-40s pictured were still working (bar a small handful) and normally sell for high triple-figures. The WideSky Hubs and CET meters all feature a conformal coating on the PCBs that makes them robust to water ingress, and the Wavenis meters are potted, sealed units. So it’s not all a loss — but there’s a big pile of expensive EDMI meters (Mk7s and Mk10s) that are not economic to salvage due to approval requirements, and that is going to hurt!

Le Mans Motors, pictured in those photos, is an automotive workshop, so it would have had lots of lubricants, oils and grease in stock for servicing vehicles — much of those contaminants were now across the street, so washing them off the surviving stock was the order of the day for much of Wednesday, before demolition day on Friday.

As for the server equipment, knowing that this was a flood-prone area (which also by the way means insurance is non-existent), we deliberately put our server room on the first floor, well above the known flood marks of 1974 and 2011. This flood didn’t get that high, getting to about chest-height on the ground floor. Thus, aside from some desktops, laptops, a workshop (including a $7000 oscilloscope belonging to an employee), a new coffee machine (that hurt the coffee drinkers), and lots of furniture/fittings, most of the IT equipment came through unscathed. The servers “had the opportunity to run without the burden of electricity”.

We needed our stuff working, so we needed to first rescue the machines from the waterlogged building and set them up elsewhere. Elsewhere wound up being at the homes of some of our staff with beefy NBN Internet connections. Okay, not beefy compared to the 500Mbps symmetric microwave link we had, but 50Mbps uplinks were not to be snorted at in this time of need.

The initial plan was the machines that once shared an Ethernet switch, now would be in physically separate locations — but we still needed everything to look like the old network. We also didn’t want to run more VPN tunnels than necessary. Enter OpenVPN L2 mode.

Establishing the VPN server

Up to this point, I had deployed a temporary VPN server as a VPS in a Sydney data centre. This was a plain-jane Ubuntu 20.04 box with a modest amount of disk and RAM, but hopefully decent CPU grunt for the amount of cryptographic operations it was about to do.

Most of our customer sites used OpenVPN tunnels, so I migrated those first — I had managed to grab a copy of the running server config as the waters rose, before the power tripped out. I copied that config over to the new server, started up OpenVPN, opened a UDP port to the world, then fiddled DNS to point the clients at the new box. They soon joined.

Connecting staff

Next problem was getting the staff linked — originally we used a rather aging Cisco router with its VPN client (or vpnc on Linux/BSD), but I didn’t feel like trying to experiment with an IPSec server to replicate that — so up came a second OpenVPN instance, on a new subnet. I got the Engineering team to run the following command to generate a certificate signing request (CSR):

openssl req -newkey rsa:4096 -nodes -keyout <name>.key -out <name>.req

They sent me their .req files, and I used EasyRSA v3 to manage a quickly-slapped-together CA to sign the requests. Downloading them via Slack required that I fish them out of the place where Slack decided to put them (without asking me) and move them to the correct directory. Sometimes I had to rename the file too (it doesn’t ask you what you want to call it either) so it had a .req extension. Having imported the request, I could sign it.

$ mv ~/Downloads/theclient.req pki/reqs/
$ ./easyrsa sign-req client theclient

A new file pki/issued/theclient.crt could then be sent back to the user. I also provided them with pki/ca.crt and a configuration file derived from the example configuration files. (My example came from OpenBSD’s OpenVPN package.)

They were then able to connect, and see all the customer site VPNs, so could do remote support. Great. So far so good. Now the servers.

Server connection VPN

For this, a third OpenVPN daemon was deployed on another port, but this time in L2 mode (dev tap) rather than L3 mode. In addition, I had servers on two different VLANs; I didn’t want to deploy yet more VPN servers and clients, so I decided to try tunnelling 802.1Q. This required boosting the MTU from the default of 1500 to 1518, enough to carry a full Ethernet frame including the 802.1Q VLAN tag.

The VPN server configuration looked like this:

port 1196
proto udp
dev tap
ca l2-ca.crt
cert l2-server.crt
key l2-server.key
dh data/dh4096.pem
server-bridge
client-to-client
keepalive 10 120
cipher AES-256-CBC
persist-key
persist-tun
status /etc/openvpn/l2-clients.txt
verb 3
explicit-exit-notify 1
tun-mtu 1518

In addition, we had to tell netplan to create some bridges; we created a /etc/netplan/vpn.yaml that looked like this:

network:
    version: 2
    ethernets:
        # The VPN tunnel itself
        tap0:
            mtu: 1518
            accept-ra: false
            dhcp4: false
            dhcp6: false
    vlans:
        vlan10-phy:
            id: 10
            link: tap0
        vlan11-phy:
            id: 11
            link: tap0
        vlan12-phy:
            id: 12
            link: tap0
        vlan13-phy:
            id: 13
            link: tap0
    bridges:
        vlan10:
            interfaces:
                - vlan10-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.10.1/24
                - 2001:db8:10::1/64
        vlan11:
            interfaces:
                - vlan11-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.11.1/24
                - 2001:db8:11::1/64
        vlan12:
            interfaces:
                - vlan12-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.12.1/24
                - 2001:db8:12::1/64
        vlan13:
            interfaces:
                - vlan13-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.13.1/24
                - 2001:db8:13::1/64

Those aren’t the real VLAN IDs or IP addresses, but you get the idea. Bridge up on the cloud end isn’t strictly necessary, but it does mean we can do other forms of tunnelling if needed.

On the clients, we did something very similar. OpenVPN client config:

client
dev tap
proto udp
remote vpn.example.com 1196
resolv-retry infinite
nobind
persist-key
persist-tun
ca l2-ca.crt
cert l2-client.crt
key l2-client.key
remote-cert-tls server
cipher AES-256-CBC
verb 3
tun-mtu 1518

and for netplan:

network:
    version: 2
    ethernets:
        tap0:
            accept-ra: false
            dhcp4: false
            dhcp6: false
    vlans:
        vlan10-eth:
            id: 10
            link: eth0
        vlan11-eth:
            id: 11
            link: eth0
        vlan12-eth:
            id: 12
            link: eth0
        vlan13-eth:
            id: 13
            link: eth0
        vlan10-vpn:
            id: 10
            link: tap0
        vlan11-vpn:
            id: 11
            link: tap0
        vlan12-vpn:
            id: 12
            link: tap0
        vlan13-vpn:
            id: 13
            link: tap0
    bridges:
        vlan10:
            interfaces:
                - vlan10-vpn
                - vlan10-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.10.2/24
                - 2001:db8:10::2/64
        vlan11:
            interfaces:
                - vlan11-vpn
                - vlan11-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.11.2/24
                - 2001:db8:11::2/64
        vlan12:
            interfaces:
                - vlan12-vpn
                - vlan12-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.12.2/24
                - 2001:db8:12::2/64
        vlan13:
            interfaces:
                - vlan13-vpn
                - vlan13-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.13.2/24
                - 2001:db8:13::2/64

I also tried using a Raspberry Pi with Debian, the /etc/network/interfaces config looked like this:

auto eth0
iface eth0 inet dhcp
        mtu 1518

auto tap0
iface tap0 inet manual
        mtu 1518

auto vlan10
iface vlan10 inet static
        address 10.0.10.2
        netmask 255.255.255.0
        bridge_ports tap0.10 eth0.10
iface vlan10 inet6 static
        address 2001:db8:10::2
        netmask 64

auto vlan11
iface vlan11 inet static
        address 10.0.11.2
        netmask 255.255.255.0
        bridge_ports tap0.11 eth0.11
iface vlan11 inet6 static
        address 2001:db8:11::2
        netmask 64

auto vlan12
iface vlan12 inet static
        address 10.0.12.2
        netmask 255.255.255.0
        bridge_ports tap0.12 eth0.12
iface vlan12 inet6 static
        address 2001:db8:12::2
        netmask 64

auto vlan13
iface vlan13 inet static
        address 10.0.13.2
        netmask 255.255.255.0
        bridge_ports tap0.13 eth0.13
iface vlan13 inet6 static
        address 2001:db8:13::2
        netmask 64

Having done this, we had the ability to expand our virtual “L2” network by simply adding more clients on other home Internet connections; the bridges would allow all servers to see each other as if they were connected to the same Ethernet switch.

Building my own wireless headset interface

So, I’ve been wanting to do this for the better part of a decade… but lately, the cost of more capable embedded devices has come right down to make this actually feasible.

It’s taken a number of incarnations, the earliest being the idea of DIYing it myself with a UHF-band analogue transceiver. Then the thought was to pair an I²S audio CODEC with an ESP8266 or ESP32.

I don’t want to rely on technology that might disappear from the market should relations with China suddenly get narky, and of course, time marches on… I learn about protocols like ROC. Bluetooth also isn’t what it was back when I first started down this path — back then A2DP was one-way and sounded terrible, HSP was limited to 8kHz mono audio.

Today, Bluetooth headsets are actually pretty good. I’ve been quite happy with the Logitech Zone Wireless for the most part — the first one I bought had a microphone that failed, but Logitech themselves were good about replacing it under warranty. It does have a limitation though: it will talk to no more than two Bluetooth devices. The USB dongle it’s supplied with, whilst a USB Audio class device, also occupies one of those two slots.

The other day I spent up on a DAB+ radio and a shortwave radio — it’d be nice to listen to these via the same Bluetooth headset I use for calls and the tablet. There are Bluetooth audio devices that I could plug into either of these, then pair with my headset, but I’d have to disconnect either the phone or the tablet to use it.

So, bugger it… the wireless headset interface will get an upgrade. The plan is a small pocket audio swiss-army-knife that can connect to…

  • an analogue device such as a wired headset or radio receiver/transceiver
  • my phone via Bluetooth
  • my tablet via Bluetooth
  • the aforementioned Bluetooth headset
  • a desktop PC or laptop over WiFi

…and route audio between them as needs require.

The device will have a small LCD display with a directional joystick button for control, and will be able to connect to a USB host for management.

Proposed parts list

The chip crisis is a real limitation; some of the bits aren’t as easily available as I’d like. But I’ve managed to pull together the following:

The only bit that’s old stock is the LCD, it’s been sitting on my shelf gathering dust for over a decade. Somewhere in one of my junk boxes I’ve got some joystick buttons also bought many years ago.

Proposed software

For the sake of others looking to duplicate my efforts, I’ll stick with Raspberry Pi OS. As mine is an ARMv6 device, I’ll have to stick with the 32-bit release. Not that big a deal; long-term I’ll probably look at using OpenEmbedded or Gentoo Embedded to make a minimalist image that just does what I need it to do.

The starter kit came with a SD card loaded with NOOBS… I ignored this and just flashed the SD card with a bare minimum Debian Bullseye image. The plan is I’ll get PipeWire up and running on this for its Bluetooth audio interface. Then we’ll try and get the hardware bits going.

Right now, I have the zero booting up, connecting to my local WiFi network, and making itself available via SSH. A good start.

Data sheet for the LCD

The LCD will possibly be one of the more challenging bits. This is from a phone that was new last century! As it happens though, Bergthaller Iulian-Alexandru was kind enough to publish some details on a number of LCD screens. Someone’s since bought and squatted the domain, but The Wayback Machine has an archive of the site.

I’ve mirrored his notes on various Ericsson LCDs here:

The diagrams on that page appear to show the connections as viewed from the front of the LCD panel. I guess if I let magic smoke out, too bad! The alternative is I do have two Nokia 3310s floating around, so harvest the LCDs out of them — in short, I have a fallback plan!

PipeWire on the Pi Zero

This will be the interesting bit. Not sure how well it’ll work, but we’ll give it a shot. The trickiest bit is getting binaries for the device; no one builds PipeWire for armhf yet. There are binaries for Ubuntu AMD64, and luckily there are source packages available.

I guess the worst case scenario is I put the Pi Zero W aside and get a Pi Zero 2 W instead. The key will be to test PipeWire first, before I warm up the soldering iron: let’s at least prove the software side of things, maybe using USB audio devices in place of the AudioInjector board.

I’m going through and building the .debs for armhf myself now, taking notes as I go. I’ll post these when I’m done.