
New laptop: StarBook Mk VI

I rarely replace computers… I’ll replace something when it is no longer able to perform its usual duty, or if I feel it might decide to abruptly resign anyway. For the last 10 years, I’ve been running a Panasonic CF-53 MkII as my workhorse, and it continues to be a reliable machine.

I just replaced the battery in it, so I now have two batteries: the original, which now has about 1.5-2 hours of capacity, and a new one which gives me about 6 hours. A nice thing about that particular machine is it still implements legacy interfaces like RS-232 and CardBus/PCMCIA. I’ve upgraded the internal storage to a 2TB SSD and replaced the DVD burner with a Blu-ray burner. There is one thing it does lack, though, which didn’t matter much prior to 2020: an internal microphone. I can plug a headset in, and that works okay for joining in on work meetings, but if there’s a group of us, that doesn’t work so well.

The machine is also a hefty lump to lug around, being a “semi-rugged”. There’s also no webcam; not a deal-breaker, but again, a reflection of how we communicate in 2023 vs what was typical in 2013.

Given I figured it “didn’t owe me anything”… it was time to look at a replacement and get that up and running before the old faithful decided to quit working and leave me stranded. This time around, I wanted something designed from the ground up for open-source software. The Panasonic worked great for that because it was quite conservative on specs — despite being purchased new in 2013, it sported an Intel Ivy Bridge-class Core i5 when the latest and greatest was the Haswell generation. Linux worked well, and still does, but it did so because of conservatism rather than explicit design.

Enter the StarBook Mk VI. This machine was built for Linux first and foremost. Windows is an option that you pay extra for on this system. You can also choose your preferred CPU, and even your preferred boot firmware, with AMI UEFI and coreboot (Intel models only for now) available.

Figuring I’ll probably be using this for the better part of 10 years from now… I aimed for the stars:

  • CPU: AMD Ryzen 7 5800U 8-core CPU with SMT (16 threads)
  • RAM: 64GiB DDR4
  • SSD: 1.8TB NVMe
  • Boot firmware: coreboot
  • OS: Ubuntu 22.04 LTS (used to test the machine then install Gentoo)
  • Keyboard Layout: US
  • Power adapter: AU with 2m USB-C cable
         -/oyddmdhs+:.                stuartl@vk4msl-sb 
     -odNMMMMMMMMNNmhy+-`             ----------------- 
   -yNMMMMMMMMMMMNNNmmdhy+-           OS: Gentoo Linux x86_64 
 `omMMMMMMMMMMMMNmdmmmmddhhy/`        Host: StarBook Version 1.0 
 omMMMMMMMMMMMNhhyyyohmdddhhhdo`      Kernel: 6.5.7-vk4msl-sb-… 
.ydMMMMMMMMMMdhs++so/smdddhhhhdm+`    Uptime: 1 hour, 15 mins 
 oyhdmNMMMMMMMNdyooydmddddhhhhyhNd.   Packages: 2497 (emerge) 
  :oyhhdNNMMMMMMMNNNmmdddhhhhhyymMh   Shell: bash 5.1.16 
    .:+sydNMMMMMNNNmmmdddhhhhhhmMmy   Resolution: 1920x1080 
       /mMMMMMMNNNmmmdddhhhhhmMNhs:   WM: fvwm3 
    `oNMMMMMMMNNNmmmddddhhdmMNhs+`    Theme: Adwaita [GTK2/3] 
  `sNMMMMMMMMNNNmmmdddddmNMmhs/.      Icons: oxygen [GTK2/3] 
 /NMMMMMMMMNNNNmmmdddmNMNdso:`        Terminal: konsole 
+MMMMMMMNNNNNmmmmdmNMNdso/-           Terminal Font: Terminus (TTF) 16 
yMMNNNNNNNmmmmmNNMmhs+/-`             CPU: AMD Ryzen 7 5800U (16) @ 4.507GHz 
/hMMNNNNNNNNMNdhs++/-`                GPU: AMD ATI Radeon Vega Series / Radeon Vega Mobile Series 
`/ohdmmddhys+++/:.`                   Memory: 4685MiB / 63703MiB 
  `-//////:--.

First impressions

The machine arrived on Thursday, and I’ve spent much of the last few days setting it up. I first checked it out with the stock Ubuntu install: the machine boots up into an installer of sorts, which is good as it means you set up the user account yourself — there are no credentials loose in the box. The downside is you don’t get to pick the partition layout.

The machine, despite being ordered with coreboot boot firmware, actually arrived with AMI boot firmware instead. Apparently the port of coreboot for AMD systems is still under active development, and I’m told there will be a guide published describing the procedure for installing coreboot. A minor irritation, as I was looking forward to trying out coreboot on this machine, but not a show-stopper… I look forward to trying the guide when it becomes available.

The machine itself felt quite zippy… but then again, when you’re used to a ~12-year-old CPU, 8GB RAM and a 2TB SATA-II SSD for storage, it isn’t much of a surprise that the performance would be a big jump.

Installing Gentoo

After trying the machine out, I booted up a SysRescueCD USB stick and used gparted to shove the Ubuntu install over into the last 32GiB of the disk, then proceeded to create a set of partitions: Gentoo’s root, an 80GiB swap partition (seems a lot, but it’s 64GiB for suspend-to-disk plus 16GiB for contingencies), some space for a /home partition, some LVM space for VMs, and my Ubuntu install right at the end.

I booted back into Ubuntu and used it as my environment for bootstrapping Gentoo; that way I could experience how the machine behaved under a heavy load. Firefox was not bad, under the circumstances. My only gripe was the tug-o-war between Ubuntu insisting that I use their Snap package, and me preferring a native install due to the former’s inability to respect my existing profile settings. This is a weekly battle I have with the office workstation.

In discussions with Star Labs Systems, they mentioned two possible gremlins to watch out for: WiFi (important since this machine has no Ethernet) and the touch pad.

I used a self-built Gentoo stage 3; unwittingly, one built against the still-experimental 23.0 profiles, which meant it used a merged /usr base layout. We’ll see how that goes, since it’s the direction that Debian and others are headed anyway. So far, the only issue has been the inability to install openrc and minicom together, since both install a runscript binary in the same place.

Once I had enough installed to be able to boot the Gentoo install, including building a kernel, I got the boot-loader installed, re-configured UEFI to boot that in preference to Ubuntu, then booted the new OS.
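
For those following along at home, the UEFI boot-order change can be done from the running system with efibootmgr rather than poking around in the firmware setup screens. A rough sketch, with hypothetical entry numbers and loader path (check efibootmgr’s own output for your machine first):

# List the existing boot entries and the current boot order
efibootmgr

# Register the new boot-loader (adjust the disk, partition and path)
efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Gentoo" -l '\EFI\gentoo\grubx64.efi'

# Put the new entry (say 0003) ahead of Ubuntu's (say 0000)
efibootmgr -o 0003,0000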

First boot under Gentoo

OS boot-up was near instantaneous. I’m used to boot-up taking about 10-15 seconds, but this took no time at all.

WiFi worked out-of-the-box with kernel 6.5.7, but the touch pad was not detected. Actually, under X11 the keyboard was unresponsive too, because I forgot to install the various X.org drivers. Oops! I sorted out the drivers easily enough, but the touch pad was still an issue.

Troubleshooting the touch pad

To get the touch pad working, I ended up taking the Ubuntu kernel config, setting NVMe and btrfs to built-in, and re-building the whole thing… it took a long time, but success: I had the touch pad working.

The tricky bit is the touch pad is an I²C device connected via the AMD chipset and described in the ACPI tables. Not quite sure how this will work under coreboot, but we’ll cross that bridge later. I spent a little time today refining the kernel down from the everything-enabled kernel that Ubuntu uses to something a little more specific. Notably, things you can’t directly plug into this machine (like ISA/PCI/PCIe cards, CardBus/PCMCIA, etc.) or interfaces the machine does not have (e.g. floppy drive, 8250 serial) I took out. Things that could conceivably be plugged in, like USB devices, were left in.

It took several tries, but I got something that’s workable for this hardware in the end.
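
For reference, my understanding is the touch pad boils down to a handful of kernel options: pin-control/GPIO support for the AMD chipset, the DesignWare I²C controller, and HID-over-I²C enumerated via ACPI. This is a best guess at the relevant set rather than a tested-minimal list:

CONFIG_PINCTRL_AMD=y
CONFIG_I2C=y
CONFIG_I2C_DESIGNWARE_PLATFORM=y
CONFIG_I2C_HID_ACPI=y
CONFIG_HID_MULTITOUCH=y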

Final kernel configuration

The end result is this kernel config. Intel StarBook users might be better off starting with the Ubuntu kernel config like I did and paring it back, but that file may still give you some clues.

Thoughts

Whilst compiling, this machine does not muck around… being an 8-core SMT machine, it builds things quite rapidly, although on this occasion I gave the machine a helping hand on some bigger packages like Chromium by using pre-built binaries built for my other machines.

Everything I use seems to work just fine under Gentoo… and of course, having copied my /home from the Panasonic (you never realise how much crap you’ve got until you move house!), it was just a little tweaking of settings to suit the newer software installed.

I’ve yet to try a full day running on the battery to see how that fares. Going flat-chat doing builds, it only lasted about 2 hours, but that’s to be expected when you’ve got the machine under a heavy load.

Zoom sees the webcam and can pick up the microphone just fine. I expect Slack will too, but I’ll find that out when I return to work (ugh!) in a fortnight.

My only gripe right now is that my right pinkie finger keeps finding the SysRq/PrintScreen button when moving around with the arrow keys… I’ve been used to that arrow cluster being on the far right of the keyboard, not one row back like this one. Other than that, the keyboard seems reasonable for typing on. The touch pad, not being recessed, sometimes picks up stray movements when typing, but one can disable/enable it pretty easily via Fn+F10 (yes, I have Fn-lock enabled). The keyboard backlight is a nice touch too.

The lack of an Ethernet port is my other gripe, but it’s not hard to work around: I have a USB-C “dock” that I bought to use with my tablet, which gives me 3×USB-3A, full-size SD, microSD, 2×HDMI, Ethernet, audio out and pass-through USB-C for charging. The Ethernet port on that works and the laptop happily charges through it, so that works well enough.

The power supply for this thing is tiny: 65W with USB-A and USB-C ports. I also tried charging the laptop from a conventional USB-A charger, but it did not want to know (a plain USB-A charger doesn’t do USB PD). It should be possible to find a 12V-powered USB-C charger that will work though.

The Toughbook will likely remain my go-to on camping trips and WICEN events: despite being a heavier and bigger unit, usually I’m not lugging the thing around, it’s better ruggedised for outdoor activities, and it also looks about 10 years older than it really is, so it’s not attractive to steal.

Generating ball tickets/programmes using LaTeX

My father does a lot of Scottish Country dancing; he was treasurer of the Clan MacKenzie association for quite a while, and its president for about 10 years too. He was given the task of making some ball tickets, with each one uniquely numbered.

After hearing him swear at LibreOffice for a bit, then at Avery’s label making software, I decided to take matters into my own hands.

The first step was to come up with a template. The programmes were to be A6-size booklets, made up of A5 pages folded in half. For ease of manufacture, they would be printed two to a page on A4 sheets.

That meant two templates: one for the outer pages and one for the inner pages. The outer page would have a placeholder that we’d substitute.

The outer pages of the programme/ticket booklet… yes there is a typo in the last line of the “back” page.
\documentclass[a5paper,landscape,16pt]{minimal}
\usepackage{multicol}
\setlength{\columnsep}{0cm}
\usepackage[top=1cm, left=0cm, right=0cm, bottom=1cm]{geometry}
\linespread{2}
\begin{document}
\begin{multicols}{2}[]

\vspace*{1cm}

\begin{center}
\begin{em}
We thank you for your company today\linebreak
and helping to celebrate 50 years of friendship\linebreak
fun and learning in the Redlands.
\end{em}
\end{center}

\begin{center}
\begin{em}
May the road rise to greet you,\linebreak
may the wind always be at your back,\linebreak
may the sun shine warm upon your face,\linebreak
the rains fall soft upon your fields\linebreak
and until we meet again,\linebreak
may God gold you in the palm of his hand.
\end{em}
\end{center}

\vspace*{1cm}

\columnbreak
\begin{center}
\begin{em}
\textbf{CLEVELAND SCOTTISH COUNTRY DANCERS\linebreak
50th GOLD n' TARTAN ANNIVERSARY TEA DANCE}\linebreak
\linebreak
1973 - 2023\linebreak
Saturday 20th May 2023\linebreak
1.00pm for 1.30pm - 5pm\linebreak
Redlands Memorial Hall\linebreak
South Street\linebreak
Cleveland\linebreak
\end{em}
\end{center}

\begin{center}
\begin{em}
Live Music by Emma Nixon \& Iain Mckenzie\linebreak
Black Bear Duo
\end{em}
\end{center}

\vspace{1cm}

\begin{center}
\begin{em}
Cost \$25 per person, non-dancer \$15\linebreak
\textbf{Ticket No \${NUM}}
\end{em}
\end{center}
\end{multicols}
\end{document}

The inner pages were the same for all booklets, so we just came up with one file that was used for all. I won’t put the code here, but suffice it to say, it was similar to the above.

The inner pages, no placeholders needed here.

So we had two files: ticket-outer.tex and ticket-inner.tex. What next? Well, we needed to make 100 versions of ticket-outer.tex, each with a different number substituted for \${NUM}, and rendered as PDF. Similarly, we needed the inner pages rendered as a PDF (which we can do just once, since they’re all the same).

#!/bin/bash
NUM_TICKETS=100

set -ex

pdflatex ticket-inner.tex
for n in $( seq 1 ${NUM_TICKETS} ); do
	sed -e 's:\\\${NUM}:'${n}':' \
            < ticket-outer.tex \
            > ticket-outer-${n}.tex
	pdflatex ticket-outer-${n}.tex
done

This gives us a single ticket-inner.pdf, and 100 different ticket-outer-NN.pdf files that look like this:

A ticket outer pages document with substituted placeholder

Now, we just need to put everything together. The final document should have no margins, and should just import the relevant PDF files in place. So naturally, we script it; this time stepping every 2 tickets, so we can assemble the A4 PDF document from our A5 tickets: outer pages of the odd-numbered ticket, outer pages of the even-numbered ticket, followed by two copies of the inner pages. Repeat for all tickets. We also need to ensure that initial paragraph lines are not indented; setting \parindent to zero solves that.

This is the rest of my quick-and-dirty shell script:

cat > tickets.tex <<EOF
\documentclass[a4paper]{minimal}
\usepackage[top=0cm, left=0cm, right=0cm, bottom=0cm]{geometry}
\usepackage{pdfpages}
\setlength{\parindent}{0pt}
\begin{document}
EOF
for n in $( seq 1 2 ${NUM_TICKETS} ); do
	m=$(( ${n} + 1 ))
	cat >> tickets.tex <<EOF
\includegraphics[width=21cm]{ticket-outer-${n}.pdf}
\includegraphics[width=21cm]{ticket-outer-${m}.pdf}
\includegraphics[width=21cm]{ticket-inner.pdf}
\includegraphics[width=21cm]{ticket-inner.pdf}
EOF
done
cat >> tickets.tex <<EOF
\end{document}
EOF
pdflatex tickets.tex

The result is a 100-page PDF which, when printed double-sided, will yield a stack of tickets that are uniquely numbered and serve as programmes.

Demise of classic hardware: the final act

So today I finally got around to dealing with the SGI kit in my possession. Not quite sure where all of it went; there’s an SGI PS/2 keyboard, an Indy Presenter LCD and an SGI O2 R5000 180MHz CPU module that have gone AWOL, but this morning I took advantage of the Brisbane City Council kerb-side clean-up.

Screenshot of the post — yes I need to get Mastodon post embedding working

I rounded up some old Plextor 12× SCSI CD-ROM drives that took CD caddies (remember those?) to go onto the pile as well, and some SCSI HDDs I found lying around — since there’s a good chance the disks in the machines are duds. I did once boot the Indy off one of those CD-ROM drives, so I know they work with the SGI kit.

The machines themselves had gotten to the point where they no longer powered on. The O2 at first did, and I tried saving it, but I found:

  1. it was unreliable, frequently freezing up — until one day it stopped powering on
  2. the case had become ridiculously brittle

The Indy exploded last time I popped the cover, and fragments of the Indigo2 were falling off. The Octane is the only machine whose case seemed largely intact. I had gathered up what IRIX kit I had too, just in case the new owners wanted to experiment. archive.org actually has the images, and I had a crack at patching irixboot to be able to use them. Never got to test that though.

Today I made the final step of putting the machines out on the street to find a new home. It looks like exactly that has happened: someone grabbed the homebrew DB15-to-13W3 cable I used for interfacing to the Indy and Indigo2, then later in the day I found the lot had disappeared.

With more room, I went and lugged the old SGI monitor down; it’s still there, but I suspect it’ll possibly go too. The Indy and Indigo2 looked to be pretty much maxed out on RAM, so they should be handy as donors for restoring other similar-era SGI kit. I do wish the new owners well with their restoration journey, if that’s what they choose to do.

For me though, it’s the end of an era. Time to move on.

Network juju on the fly: migrating a corporate network to VPN-based connectivity

So, this week Mother Nature threw South East Queensland a curve-ball like none of us have seen in over a decade: a massive flood. My workplace, VRT Systems / WideSky.Cloud Pty Ltd, resides at 38b Douglas Street, Milton, which is a low-lying area not far from the Brisbane River. Sister company CETA is just two doors down at no. 40. In mid-February, a rain depression developed in the Sunshine Coast Hinterland / Wide Bay area north of Brisbane.

That weather system crawled all the way south, hitting Brisbane with constant heavy rain for 5 days straight… eventually creeping down to the Gold Coast and over the border to the Northern Rivers part of NSW.

The result on our offices was devastating. (Copyright notice: these images are placed here for non-commercial use with permission of the original photographers… contact me if you wish to use these photos and I can forward your request on.)

Some of the stock still worked after the flood — the Siemens UH-40s pictured were still working (bar a small handful) and normally sell for high triple figures. The WideSky Hubs and CET meters all feature a conformal coating on the PCBs that makes them robust to water ingress, and the Wavenis meters are potted, sealed units. So it’s not all a loss — but there’s a big pile of expensive EDMI meters (Mk7s and Mk10s) that are not economic to salvage due to approval requirements, and that is going to hurt!

Le Mans Motors, pictured in those photos, is an automotive workshop, so would have had lots of lubricants, oils and grease in stock for servicing vehicles — much of those contaminants ended up across the street, so washing that stuff off the surviving stock was the order of the day for much of Wednesday, before demolition day on Friday.

As for the server equipment: knowing that this was a flood-prone area (which also, by the way, means insurance is non-existent), we deliberately put our server room on the first floor, well above the known flood marks of 1974 and 2011. This flood didn’t get that high, reaching about chest height on the ground floor. Thus, aside from some desktops, laptops, a workshop (including a $7000 oscilloscope belonging to an employee), a new coffee machine (that hurt the coffee drinkers), and lots of furniture/fittings, most of the IT equipment came through unscathed. The servers “had the opportunity to run without the burden of electricity“.

We needed our stuff working, so the first job was to rescue the machines from the waterlogged building and set them up elsewhere. Elsewhere wound up being at the homes of some of our staff with beefy NBN Internet connections. Okay, not beefy compared to the 500Mbps symmetric microwave link we had, but 50Mbps uplinks were not to be snorted at in this time of need.

The initial plan was that the machines that once shared an Ethernet switch would now be in physically separate locations — but we still needed everything to look like the old network. We also didn’t want to run more VPN tunnels than necessary. Enter OpenVPN L2 mode.

Establishing the VPN server

By this point, I had deployed a temporary VPN server as a VPS in a Sydney data centre. This was a plain-jane Ubuntu 20.04 box with a modest amount of disk and RAM, but hopefully decent CPU grunt for the amount of cryptographic operations it was about to do.

Most of our customer sites used OpenVPN tunnels, so I migrated those first — I had managed to grab a copy of the running server config as the waters rose, before the power tripped out. I copied that config over to the new server, started up OpenVPN, opened a UDP port to the world, then fiddled DNS to point the clients at the new box. They soon joined.
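
The mechanics of that were nothing fancy. Roughly the following, with illustrative host and instance names (Ubuntu’s packaged openvpn@.service template does the rest):

# Copy the rescued server config across to the new VPS
scp customer-vpn.conf newbox:/etc/openvpn/

# Start it now and after future reboots
ssh newbox sudo systemctl enable --now openvpn@customer-vpn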

Connecting staff

The next problem was getting the staff linked — originally we used a rather aging Cisco router with its VPN client (or vpnc on Linux/BSD), but I didn’t feel like trying to experiment with an IPSec server to replicate that — so up came a second OpenVPN instance, on a new subnet. I got the Engineering team to run the following command to generate a certificate signing request (CSR):

openssl req -newkey rsa:4096 -nodes -keyout <name>.key -out <name>.req

They sent me their .req files, and I used EasyRSA v3 to manage a quickly-slapped-together CA to sign the requests. Downloading them via Slack required that I fish them out of the place where Slack decided to put them (without asking me) and move them to the correct directory. Sometimes I had to rename the file too (it doesn’t ask you what you want to call it either) so it had a .req extension. Having imported a request, I could sign it.

$ mv ~/Downloads/theclient.req pki/reqs/
$ ./easyrsa sign-req client theclient

A new file pki/issued/theclient.crt could then be sent back to the user. I also provided them with pki/ca.crt and a configuration file derived from the example configuration files. (My example came from OpenBSD’s OpenVPN package.)
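
For anyone replicating this: standing up the quickly-slapped-together CA itself is only a couple of EasyRSA v3 commands (a sketch; ideally the CA key gets a passphrase):

$ ./easyrsa init-pki
$ ./easyrsa build-ca

EasyRSA also offers ./easyrsa import-req <file> <name> as an alternative to manually moving request files into pki/reqs/.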

They were then able to connect, and see all the customer site VPNs, so could do remote support. Great. So far so good. Now the servers.

Server connection VPN

For this, a third OpenVPN daemon was deployed on another port, this time in L2 mode (dev tap) rather than L3 mode. In addition, I had servers on two different VLANs; I didn’t want to deploy yet more VPN servers and clients, so I decided to try tunnelling 802.1Q. This required boosting the MTU from the default of 1500 to 1518 to accommodate the 802.1Q VLAN tag.

The VPN server configuration looked like this:

port 1196
proto udp
dev tap
ca l2-ca.crt
cert l2-server.crt
key l2-server.key
dh data/dh4096.pem
server-bridge
client-to-client
keepalive 10 120
cipher AES-256-CBC
persist-key
persist-tun
status /etc/openvpn/l2-clients.txt
verb 3
explicit-exit-notify 1
tun-mtu 1518

In addition, we had to tell netplan to create some bridges; we created /etc/netplan/vpn.yaml, which looked like this:

network:
    version: 2
    ethernets:
        # The VPN tunnel itself
        tap0:
            mtu: 1518
            accept-ra: false
            dhcp4: false
            dhcp6: false
    vlans:
        vlan10-phy:
            id: 10
            link: tap0
        vlan11-phy:
            id: 11
            link: tap0
        vlan12-phy:
            id: 12
            link: tap0
        vlan13-phy:
            id: 13
            link: tap0
    bridges:
        vlan10:
            interfaces:
                - vlan10-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.10.1/24
                - 2001:db8:10::1/64
        vlan11:
            interfaces:
                - vlan11-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.11.1/24
                - 2001:db8:11::1/64
        vlan12:
            interfaces:
                - vlan12-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.12.1/24
                - 2001:db8:12::1/64
        vlan13:
            interfaces:
                - vlan13-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.13.1/24
                - 2001:db8:13::1/64

Those aren’t the real VLAN IDs or IP addresses, but you get the idea. Bridging on the cloud end isn’t strictly necessary, but it does mean we can do other forms of tunnelling if needed.

On the clients, we did something very similar. OpenVPN client config:

client
dev tap
proto udp
remote vpn.example.com 1196
resolv-retry infinite
nobind
persist-key
persist-tun
ca l2-ca.crt
cert l2-client.crt
key l2-client.key
remote-cert-tls server
cipher AES-256-CBC
verb 3
tun-mtu 1518

and for netplan:

network:
    version: 2
    ethernets:
        tap0:
            accept-ra: false
            dhcp4: false
            dhcp6: false
    vlans:
        vlan10-eth:
            id: 10
            link: eth0
        vlan11-eth:
            id: 11
            link: eth0
        vlan12-eth:
            id: 12
            link: eth0
        vlan13-eth:
            id: 13
            link: eth0
        vlan10-vpn:
            id: 10
            link: tap0
        vlan11-vpn:
            id: 11
            link: tap0
        vlan12-vpn:
            id: 12
            link: tap0
        vlan13-vpn:
            id: 13
            link: tap0
    bridges:
        vlan10:
            interfaces:
                - vlan10-vpn
                - vlan10-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.10.2/24
                - 2001:db8:10::2/64
        vlan11:
            interfaces:
                - vlan11-vpn
                - vlan11-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.11.2/24
                - 2001:db8:11::2/64
        vlan12:
            interfaces:
                - vlan12-vpn
                - vlan12-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.12.2/24
                - 2001:db8:12::2/64
        vlan13:
            interfaces:
                - vlan13-vpn
                - vlan13-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.13.2/24
                - 2001:db8:13::2/64

I also tried using a Raspberry Pi with Debian; its /etc/network/interfaces config looked like this:

auto eth0
iface eth0 inet dhcp
        mtu 1518

auto tap0
iface tap0 inet manual
        mtu 1518

auto vlan10
iface vlan10 inet static
        address 10.0.10.2
        netmask 255.255.255.0
        bridge_ports tap0.10 eth0.10
iface vlan10 inet6 static
        address 2001:db8:10::2
        netmask 64

auto vlan11
iface vlan11 inet static
        address 10.0.11.2
        netmask 255.255.255.0
        bridge_ports tap0.11 eth0.11
iface vlan11 inet6 static
        address 2001:db8:11::2
        netmask 64

auto vlan12
iface vlan12 inet static
        address 10.0.12.2
        netmask 255.255.255.0
        bridge_ports tap0.12 eth0.12
iface vlan12 inet6 static
        address 2001:db8:12::2
        netmask 64

auto vlan13
iface vlan13 inet static
        address 10.0.13.2
        netmask 255.255.255.0
        bridge_ports tap0.13 eth0.13
iface vlan13 inet6 static
        address 2001:db8:13::2
        netmask 64

Having done this, we had the ability to expand our virtual “L2” network by simply adding more clients on other home Internet connections; the bridges would allow all servers to see each other as if they were connected to the same Ethernet switch.
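
Bringing up a client’s end and sanity-checking it amounts to something like this (assuming a netplan-managed Ubuntu host with iproute2’s bridge tool; the address pinged is the server end from the configs above):

# Apply the new configuration
sudo netplan apply

# Confirm the VLAN sub-interfaces have joined their bridges
bridge link show

# Prove L2 adjacency across the tunnel on one of the VLANs
ping 10.0.10.1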

Building my own wireless headset interface

So, I’ve been wanting to do this for the better part of a decade… but lately, the cost of more capable embedded devices has come right down, making this actually feasible.

It’s taken a number of incarnations, the earliest being the idea of DIYing it myself with a UHF-band analogue transceiver. Then the thought was to pair an I²S audio CODEC with an ESP8266 or ESP32.

I don’t want to rely on technology that might disappear from the market should relations with China suddenly get narky, and of course, time marches on… I’ve since learned about protocols like ROC. Bluetooth also isn’t what it was back when I first started down this path — back then, A2DP was one-way and sounded terrible, and HSP was limited to 8kHz mono audio.

Today, Bluetooth headsets are actually pretty good. I’ve been quite happy with the Logitech Zone Wireless for the most part — the first one I bought had a microphone that failed, but Logitech themselves were good about replacing it under warranty. It does have a limitation though: it will talk to no more than two Bluetooth devices. The USB dongle it’s supplied with, whilst a USB Audio class device, also occupies one of those two slots.

The other day I spent up on a DAB+ radio and a shortwave radio — it’d be nice to listen to these via the same Bluetooth headset I use for calls and the tablet. There are Bluetooth audio devices that I could plug into either of these, then pair with my headset, but I’d have to disconnect either the phone or the tablet to use it.

So, bugger it… the wireless headset interface will get an upgrade. The plan is a small pocket audio swiss-army-knife that can connect to…

  • an analogue device such as a wired headset or radio receiver/transceiver
  • my phone via Bluetooth
  • my tablet via Bluetooth
  • the aforementioned Bluetooth headset
  • a desktop PC or laptop over WiFi

…and route audio between them as needs require.

The device will have a small LCD and a directional joystick button for control, and will be able to connect to a USB host for management.

Proposed parts list

The chip crisis is actually a big limitation; some of the bits aren’t as easily available as I’d like. But I’ve managed to pull together the following:

The only bit that’s old stock is the LCD; it’s been sitting on my shelf gathering dust for over a decade. Somewhere in one of my junk boxes I’ve got some joystick buttons, also bought many years ago.

Proposed software

For the sake of others looking to duplicate my efforts, I’ll stick with Raspberry Pi OS. As my device is an ARMv6 device, I’ll have to stick with the 32-bit release. Not that big a deal; long-term, I’ll probably look at using OpenEmbedded or Gentoo Embedded to make a minimalist image that just does what I need it to do.

The starter kit came with an SD card loaded with NOOBS… I ignored this and just flashed the SD card with a bare-minimum Debian Bullseye image. The plan is to get PipeWire up and running on this for its Bluetooth audio interface. Then we’ll try and get the hardware bits going.

Right now, I have the Zero booting up, connecting to my local WiFi network, and making itself available via SSH. A good start.

Data sheet for the LCD

The LCD will possibly be one of the more challenging bits. This is from a phone that was new last century! As it happens though, Bergthaller Iulian-Alexandru was kind enough to publish some details on a number of LCD screens. Someone’s since bought and squatted the domain, but The Wayback Machine has an archive of the site.

I’ve mirrored his notes on various Ericsson LCDs here:

The diagrams on that page appear to show the connections as viewed from the front of the LCD panel. I guess if I let the magic smoke out, too bad! The alternative is that I do have two Nokia 3310s floating around, so I could harvest the LCDs out of them — in short, I have a fallback plan!

PipeWire on the Pi Zero

This will be the interesting bit. Not sure how well it’ll work, but we’ll give it a shot. The trickiest bit is getting binaries for the device; no one builds PipeWire for armhf yet. There are binaries for Ubuntu AMD64, and luckily there are source packages available.

I guess the worst-case scenario is I put the Pi Zero W aside and get a Pi Zero 2 W instead. The key will be to test PipeWire before I warm up the soldering iron; let’s at least prove the software side of things, maybe using USB audio devices in place of the AudioInjector board.

I’m going through and building the .debs for armhf myself now, taking notes as I go. I’ll post these when I’m done.
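
The procedure itself is the stock Debian source-package dance, run natively on an armhf system (this assumes deb-src lines are enabled in apt’s sources, and uses pipewire as an illustrative package name):

# Pull in the build dependencies and fetch the source
sudo apt-get build-dep pipewire
apt-get source pipewire

# Build unsigned binary packages
cd pipewire-*/
dpkg-buildpackage -us -uc -b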

Demise of classic hardware

So, this year I had a new-year’s resolution of sorts… when we first started this “work from home” journey due to China’s “gift”, I just temporarily set up on the dinner table, which was, of course, meant to be for another few months.

Well, nearly 2 years later, we’re still working from home, and work has expanded to the point that a move back to the office, on any permanent basis, is pretty much impossible now unless the business moves to a bigger building. With this in mind, I decided I’d clear off the dinner table and clean up my room sufficiently to set up my workstation in there.

That meant re-arranging some things, but for the most part, I had the space already. So some stuff I wasn’t using got thrown into boxes to be moved into the garage. My CD collection similarly got moved into the garage (I have it on the computer, but need to retain the physical discs as they represent my “personal use license”), and lo and behold, I could set up my workstation.

The new workspace

One of my colleagues spotted the Indy and commented about the old classic SGI logo. Some might notice there’s also an O2 lurking in the shadows. Those who have known me for a while will remember I did help maintain a Linux distribution for these classic machines, among others, and had a reasonable collection of my own:

My Indy, O2 and Indigo2 R10000
The Octane, booting up

These machines were all eBay purchases, as is the Sun monitor pictured (it came with the Octane). Sadly, fast forward a number of years, and these machines are mostly door stops and paperweights now.

The Octane’s and Indigo2’s demises

The Octane died when I tried to clean it out with a vacuum cleaner, without realising the effect of static electricity generated by the vacuum cleaner itself. I might add mine was a particularly old unit: it had a 175MHz R10000 CPU, and I remember the Linux kernel didn’t recognise the power management circuitry in the PSU without me manually patching it.

The Indigo2 mysteriously stopped working without any clear reason why; I’ve never actually tried to diagnose the issue.

That left the Indy and the O2 as working machines. I hadn’t fired them up in a long time, until today. I figured, what the hell, do they still run?

Trying the Indy and O2 out

Plug in the Indy, hit the power button… nothing. Dead as a doornail. Okay, put it aside… what about the O2?

I plug it in, shuffle it closer to the monitor so I can connect it. ‘Lo and behold:

The O2 lives!

Of course, the machine was set up to use a serial console as its primary interface, and boot-up was running very slowly.

Booting up… very slowly…

It sat there like that for a while. Figuring the action was happening on a serial port, I went to get my null modem cable, only to find a log-in prompt waiting by the time I got back.

Next was remembering what password I was using when I last ran this machine. The OpenSSL Heartbleed vulnerability happened since then, and at about that time I revoked all OpenPGP keys and changed all passwords, so it isn’t what I use today. I couldn’t get in as root, but my regular user account worked, and I was able to change the root password via sudo.

Remembering my old log-in credentials, from 22 years ago it seems

The machine soon crashed after that. I tried rebooting; this time I tweaked some PROM settings (and yes, I was rusty remembering how to do it) to be able to see what was going on. (I had the null modem cable in hand, but didn’t feel like trying to blindly find the serial port at the back of my desktop.)

Changing PROM settings
The subsequent boot, and crash

Evidently, I had a dud disk. This did not surprise me in the slightest. I also noticed the PSU fan was not spinning, possibly seized after years of non-use.

Okay, there were two disks installed in this machine, both 80-pin SCA SCSI drives. Which one was it? I took a punt and tried the one furthest away from the I/O ports.

Success, she boots now

I managed to reset the root password before the machine powered itself off (possibly because of overheating). I suspect the machine will need the dust blown out of it (safely! — not using the method that killed the Octane!), and the HDDs will need replacements. The guilty culprit was this one (which I guessed correctly first go):

a 4GB HDD was a big drive back in 1998!

The computer I’m typing this on has a HDD that could store 1000 of these drives. Today, there are modern alternatives, such as SCSI2SD, that could get this machine running fully if needed. The tricky bit would be handling the 80-pin hot-swap interface. There’d be some hardware hacking needed to connect the two, but AU$145 plus an adaptor seems like a safer bet than trying some random used HDD.

So, with replacement HDDs, a clean-out, and possibly a new fan or two, that machine will be back to “working” state. Of course the Linux landscape has moved on since then: Debian no longer supports the MIPS4 ISA that the RM5200 CPU understands, though Gentoo could still work on this, and maybe OpenBSD still supports it too. In short, this machine is enough of a “go-er” that it should not be sent to land-fill… yet.

Turning my attention back to the Indy

So the Indy was my first SGI machine. Bought to better understand the MIPS processor architecture, and perhaps gain enough understanding to try and breathe life into a Cobalt Qube II server appliance (remember those?), it did teach me a lot about Linux and how things vary between platforms.

I figured I might as well pop the cover and see if there was anything “obviously” wrong. I was rusty on the procedure, but I recalled there was a little catch on the back of the case that needed to be released before the cover slid off. So I lugged the 20″ CRT off the top of the machine, pulled the non-functioning Indy out, and put it on the table to inspect further.

Upon trying to pop the cover (gently!), the top of the case just exploded. Two pieces of the top cover went flying, and the RF shield parted company with the cover’s underside.

The RF shield parted company with the underside of the lid

I was left with a handful of small plastic fragments that were the heat-set posts holding the RF shield to the inside of the lid.

Some of the fragments that once held the RF shield in place

Clearly, the plastic has become brittle over the years. These machines were released in 1993; I think this might be a 1994 model, as it has a slightly upgraded R4600 CPU in it.

As to the machine itself, I had a quick sticky-beak; there didn’t seem to be anything immediately obvious, but to be honest, I didn’t do a very thorough check. Maybe there’s some corrosion under the motherboard I didn’t spot, maybe it’s just a blown fuse in the PSU, who knows?

The inside of the Indy

This particular machine had 256MB RAM (a lot for its day), 8-bit Newport XL graphics, the “Indy Presenter” LCD interface (somewhere, we have the 15″ monitor it came with — sadly the connecting cable has some damaged conductors), and a 9.1GB HDD I added some time back.

Where to now?

I was hanging on to these machines with the thinking that someone who was interested in experimenting with RISC machines might want them — find them a new home rather than sending them to landfill. I guess that’s still an option for the O2, as it still boots: so long as its remaining HDD doesn’t die it’ll be fine.

For the others, there’s the possibility of combining bits to make a functional frankenmachine from lots of parts. The Indy will need a new PROM battery if someone does manage to breathe life into it.

The Octane had two SCSI interfaces, one of which was dead — a problem that was known about before I even acquired it. The PROM would bitch and moan about the dead SCSI interface for a good two minutes before giving up and dumping you in the boot menu. Press 1, and it’d hand over to arcload, which would boot a Linux kernel from a disk on the working controller. Linux would see the dead SCSI controller and proceed to ignore it, booting just fine as if nothing had happened.

The Indigo2 R10000 was always the red-headed stepchild: an artefact of the machine’s design. The IP22 design (Indy and Indigo2) was never intended to take an R10000 CPU, and that CPU’s speculative execution features played havoc with caching on this design. The Octane worked fine because it was designed from the outset to run this CPU. The O2 could be made to work because of a quirk of its hardware design, but the Indigo2 was not so flexible, so kernel-space code had to hack around the problem in software.

I guess I’d still like to see the machines go to a good home, but no idea who that’d be in this day and age. Somewhere, I have a partial disc set of Irix 6.5, and there’s also a 20″ SGI GDM5410 monitor (not the Sun monitor pictured above) that, at last check, did still work.

It’ll be a sad day when these go to the tip.

Making InfluxDB’s init script more patient

Recently, I noticed my network monitoring was down… I hadn’t worried about it because I had other things to keep me busy, and thankfully, my network monitoring, whilst important, isn’t mission critical.

I took a look at it today. The symptom was an odd one: influxd was running, and it was listening on the back-up/RPC port 8088, but not on 8086 for queries.

It otherwise was generating logs as if it were online. What gives?

Tried some different settings, nothing… nada… zilch. Nothing would make it listen to port 8086.

Tried updating to 1.8 (was 1.1), still nothing.

Tried manually running it as root… sure enough, if I waited long enough, it started on its own and did begin listening on port 8086. Hmmm, I wonder. I had a look at the init script:

#!/bin/bash -e

/usr/bin/influxd -config /etc/influxdb/influxdb.conf $INFLUXD_OPTS &
PID=$!
echo $PID > /var/lib/influxdb/influxd.pid

PROTOCOL="http"
BIND_ADDRESS=$(influxd config | grep -A5 "\[http\]" | grep '^  bind-address' | cut -d ' ' -f5 | tr -d '"')
HTTPS_ENABLED_FOUND=$(influxd config | grep "https-enabled = true" | cut -d ' ' -f5)
HTTPS_ENABLED=${HTTPS_ENABLED_FOUND:-"false"}
if [ $HTTPS_ENABLED = "true" ]; then
  HTTPS_CERT=$(influxd config | grep "https-certificate" | cut -d ' ' -f5 | tr -d '"')
  if [ ! -f "${HTTPS_CERT}" ]; then
    echo "${HTTPS_CERT} not found! Exiting..."
    exit 1
  fi
  echo "$HTTPS_CERT found"
  PROTOCOL="https"
fi
HOST=${BIND_ADDRESS%%:*}
HOST=${HOST:-"localhost"}
PORT=${BIND_ADDRESS##*:}

set +e
max_attempts=10
url="$PROTOCOL://$HOST:$PORT/health"
result=$(curl -k -s -o /dev/null $url -w %{http_code})
while [ "$result" != "200" ]; do
  sleep 1
  result=$(curl -k -s -o /dev/null $url -w %{http_code})
  max_attempts=$(($max_attempts-1))
  if [ $max_attempts -le 0 ]; then
    echo "Failed to reach influxdb $PROTOCOL endpoint at $url"
    exit 1
  fi
done
set -e

Ahh right, so start the server, check every second to see if it’s up, and if not, just abort and let systemd restart the whole shebang. Because turning the power on-off-on-off-on-off is going to make it go faster, right?

I changed max_attempts to 360 and the sleep to 10.
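
For reference, the polling loop from the script above now reads like this, with only those two values changed:

max_attempts=360
...
  sleep 10

That gives influxd up to an hour (360 attempts × 10 seconds) to start answering on its health endpoint before the init script gives up, instead of 10 seconds.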

Having fixed this, I am now getting data back into my system.

Setting up a Kyocera ECOSYS P5021cdn in CUPS

So, I finally had enough of the Epson WF7510 we have, which is getting on in years, occasionally mis-picks pages, won’t duplex, and has a rather curious staircase problem when printing. We’ll keep it for A3 scanning and printing (the fax feature is now useless), but for a daily driver, I decided to make an end-of-financial-year purchase. I wanted something that met these criteria:

  • A4 paper size
  • Automatic duplex printing
  • Networked
  • Laser/LED (for water-resistant prints)
  • Colour is a “nice to have”

I looked at the mono options, but when I looked at the Linux driver situation, things were looking dire, with binary blobs everywhere. I removed the restriction on it being mono, and suddenly an option appeared that was cheaper and more open. I didn’t need a scanner (the WF7510’s scanner works fine with xsane, plus I bought a separate Canon LiDE 300, which is pretty much plug-and-play with xsane), and a built-in fax is useless since we can achieve the same using hylafax+t38modem (a TO-DO item well down my list of priorities).

The Kyocera P5021cdn allegedly isn’t the cheapest to run, but it promised a fairly pain-free experience on Linux and Unix. I figured I’d give it a shot. These are some notes I used to set the thing up. I want to move it to a different part of the network ultimately, but we’ll see what the cretinous Windows laptop my father uses will let us do; for now it shares that Ethernet VLAN with the WF7510 and his laptop, and I’ll just hop over the network to access it.

Getting the printer’s IP and MAC address

The menu on the printer does not tell you this information. There is, however, a Printer Status item in the top-panel menu. Tell it to print the status page and you’ll get a nice colour page with lots of information about the printer, including its IPv4 and IPv6 addresses.

Web interface

If you want to configure the thing further, you need a web browser. Visit the printer’s IP address in your browser and you’re greeted with Command Centre RX. Out of the box, the username and password were Admin and Admin (capitalised A).

Setting up CUPS

The printer “driver” off the Kyocera website is a massive 400MB zip file, because they bundled up .deb and .rpm packages for every distribution they officially support together in one file. Someone needs to introduce them to reprepro and its dnf equivalent. That said, you have a choice… if you pick a random .deb out of that blob and manually unpack it somewhere (use ar x on it, you’ll see data.tar.xz or something; unpack that and you’ve got your package files), you’ll find the .ppd file you need.
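
If you go the manual route, the dance looks something like this (the .deb name will be whatever you fished out of the zip):

# A .deb is an ar archive wrapping control and data tarballs
ar x kyodialog_*.deb
tar xf data.tar.xz    # or data.tar.gz, depending on the package

# The PPD will be somewhere under the unpacked tree
find . -name '*.ppd'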

Or, you can do a search and realise that the Arch Linux guys have done the hard work for you. Many thanks guys (and girls… et al.)!

The next puzzle is figuring out the printer URI. It turns out the printer calls itself lp1… so the IPP URI you should use is http://<IP>:631/ipp/lp1.
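
From there, setting up the queue can be done from the command line rather than the CUPS web interface; something like the following, with a made-up IP address and queue name (CUPS’ ipp backend handles both ipp:// and http:// URIs):

# Create the queue using the PPD extracted earlier, and enable it
lpadmin -p P5021cdn -E -v 'ipp://192.168.1.50:631/ipp/lp1' -P Kyocera_P5021cdn.ppd

# Send a test page
lp -d P5021cdn /usr/share/cups/data/testprint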

I haven’t put the thing fully through its paces yet, and I note the cartridges are down about 4% from those two prints (the status page and the CUPS test print), but often the initial cartridges are just “starter” cartridges, and the replacements often have a lot more toner in them. I guess time will tell on their longevity (and that of the imaging drum).

Pondering audio streaming over LANs

Lately, I’ve been socially distancing at home, and so a few projects have been considered that ordinarily wouldn’t get a look-in on account of lack of time.

One of these has been setting up a Raspberry Pi with a DRAWS board for use on the bicycle as a radio interface. The DRAWS interface is basically a sound card, RTC, GPS and UART interface for radio interfacing applications. It is built around the TI TLV320AIC3204.

Right now, I’m still waiting for the case to put it in, even though the PCB itself arrived months ago. Consequently it has not seen action on the bike yet. It has gotten some use at home though, primarily as an OpenThread border router for 3 WideSky hubs.

My original idea was to interface it to Mumble, a VoIP server for in-game chat. The idea being that, on events like the Yarraman to Wulkuraka bike ride, I’d fire up the phone, connect it to an AP run by the Raspberry Pi on the bike, and plug my headset into the phone: 144/430MHz→2.4GHz cross-band.

That’s still on the cards, but another use case came up: digital modes. It’d be real nice to interface this over WiFi to a stronger machine for digital modes: sound-card sharing over the network. For this, Mumble would not do; I need a lossless audio transport.

Audio streaming options

For audio streaming, I know of 3 options:

  • PulseAudio network streaming
  • netjack
  • trx

PulseAudio I’ve found can be hit-and-miss on the Raspberry Pi, and IMO is asking for trouble with digital modes. PulseAudio works fine for audio (speech, music, etc.), but it will make assumptions about the nature of that audio. The problem is we’re not dealing with “audio” as such; we’re dealing with modem tones. Human ears cannot detect phase easily; data modems can and regularly do. So PA is likely to do things like re-sample the audio to synchronise the two stations, possibly use lossy codecs like OPUS or CELT, and make other changes which will mess with the signal in unpredictable ways.

netjack is another possibility, but like PulseAudio, is geared towards low-latency audio streaming. From what I’ve read, later versions use OPUS, which is a no-no for digital modes. Within a workstation, JACK sounds like a close fit, because although it is geared to audio, its use in professional audio means it’s less likely to make decisions that would incur loss, but it is a finicky beast to get working at times, so it’s a question mark there.

trx was a third option. It uses RTP to stream audio over a network, and just aims to do just that one thing. Digging into the code, present versions use OPUS, older versions use CELT. The use of RTP seemed promising though, it actually uses oRTP from the Linphone project, and last weekend I had a fiddle to see if I could swap out OPUS for linear PCM. oRTP is not that well documented, and I came away frustrated, wondering why the receiver was ignoring the messages being sent by the sender.

It’s worth noting that trx probably isn’t a good example of a streaming application using oRTP. It advertises the stream as G.711u, but then sends OPUS data. What it should be doing is sending it as a dynamic payload type (e.g. 96), and if this were a SIP session, an rtpmap attribute would be sent via the Session Description Protocol to say payload type 96 was OPUS.

I looked around for other RTP libraries to see if there was something “simpler” or better documented. I drew a blank. I then had a look at the RTP/RTCP specs themselves published by the IETF. I came to the conclusion that RTP was trying to solve a much more complicated use case than mine. My audio stream won’t traverse anything more sophisticated than a WiFi AP or an Ethernet switch. There’s potential for packet loss due to interference or weak signal propagation between WiFi nodes, but latency is likely to remain pretty consistent and out-of-order handling should be almost a non-issue.

Another gripe I had with RTP is its almost non-consideration of linear PCM. PCMA and PCMU exist, 16-bit linear PCM at 44.1kHz sampling exists (woohoo, CD quality), but how about 48kHz? Nope. You have to use SDP for that.

Custom protocol ideas

With this in mind, my own custom protocol looks like the simplest path forward. Some simple systems, such as the one used by GQRX, just encapsulate raw audio in UDP messages, fire them at some destination and hope for the best. Some people use TCP, with reasonable results.

My concern with TCP is that if packets get dropped, it’ll try re-sending them, increasing latency and never quite catching up. Using UDP side-steps this: if a packet is lost, it is forgotten about, so things will break up, then recover. Probably a better strategy for what I’m after.

I also want some flexibility in audio streams; it’d be nice to be able to switch sample rates, bit depths, channels, etc. RTP gets close with its L16/44100/2 format (the Red Book CD audio format). In some cases 16kHz would be fine, or even 8kHz 16-bit linear PCM; 44.1k works, but is wasteful. So a header is needed on packets to at least describe what format is being sent. Since we’re adding a header, we might as well set aside a few bytes for a timestamp like RTP’s so we can maintain synchronisation.

So with that, we wind up with these fields:

  • Timestamp
  • Sample rate
  • Number of channels
  • Sample format

Timestamp

The timestamp field in RTP is basically measured in ticks of some clock of known frequency, e.g. for PCMU it is an 8kHz clock. It starts at some value, then increments monotonically. Simple enough concept. If we make this frequency the sample rate of the audio stream, I think that will be good enough.

At 48kHz 16-bit stereo, data will be streaming at 192kB/s. We can tolerate wrap-around, but at that data rate a 16-bit counter would overflow every ~341ms, which whilst not unworkable, is getting tight. Better to use a 32-bit counter for this, which would extend that overflow to over 6 hours.

Sample rate encoding

We can either support an integer field, or we can encode the rate somehow. An integer field would need a range up to 768k to support every rate ALSA supports. That’s another 32-bit integer. Or, we can be a bit clever: nearly every sample rate in common use is a harmonic of 8kHz or 11.025kHz, so we devise a scheme consisting of a “base” rate and multiplier. 48kHz? That’s 8kHz×6. 44.1kHz? That’s 11.025kHz×4.

If we restrict ourselves to those two base rates, we can support standard rates from 8kHz through to 1.4MHz by allocating a single bit to select 8kHz/11.025kHz and 7 bits for the multiplier: the selected sample rate is the base rate multiplied by the multiplier plus one. We’re unlikely to use every single 8kHz step though. Wikipedia lists some common rates, and as we go up, the steps get bigger, so let’s borrow 3 multiplier bits for a left-shift amount.

7 6 5 4 3 2 1 0
B S S S M M M M

B = Base rate: (0) 8000 Hz, (1) 11025 Hz
S = Shift amount
M = Multiplier - 1

Rate = (Base << S) * (M + 1)

Examples:
  00000000b (0x00): 8kHz
  00010000b (0x10): 16kHz
  10100000b (0xa0): 44.1kHz
  00010010b (0x12): 48kHz
  01010010b (0x52): 768kHz (ALSA limit)
  11111111b (0xff): 22.5792MHz (yes, insane)
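
To sanity-check the scheme, here's a quick decoder in Python (my own sketch of the encoding described above, not code from any existing implementation):

def decode_rate(code):
    """Decode the 8-bit sample rate field into Hz."""
    base = 11025 if code & 0x80 else 8000   # B bit selects the base rate
    shift = (code >> 4) & 0x7               # S: left-shift amount
    mult = (code & 0xf) + 1                 # M: multiplier minus one
    return (base << shift) * mult

# The examples above:
assert decode_rate(0x00) == 8000
assert decode_rate(0x10) == 16000
assert decode_rate(0xa0) == 44100
assert decode_rate(0x12) == 48000
assert decode_rate(0x52) == 768000
assert decode_rate(0xff) == 22579200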

Other settings

I primarily want to consider linear PCM types. Technically that includes unsigned PCM, but since that’s losslessly transcodable to signed PCM, we can ignore it. So we just encode the number of bytes needed for a single channel’s sample, minus one: 0 would be 8 bits, 1 would be 16 bits, 2 would be 32 bits and 3 would be 64 bits. That needs just two bits. For future-proofing, I’d earmark two extra bits: reserved for now, but they might later indicate “compressed” (and possibly lossy) formats.

The remaining 4 bits could specify a number of channels, again minus 1 (mono would be 0, stereo 1, etc up to 16).

Packet type

For the sake of alignment, I might include a 16-bit identifier field so the packet can be recognised as being this custom audio format, and to allow multiplexing of in-band control messages, but I think the concept is there.
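
Putting the whole header together, here's a Python sketch of what a sender might look like. The magic number, field order and bit positions within the format byte are arbitrary choices of mine for illustration, not a settled wire format:

import socket
import struct

MAGIC = 0x4150  # arbitrary 16-bit identifier for this hypothetical format

def pack_packet(timestamp, rate_code, sample_bytes, channels, payload):
    """Prepend an 8-byte header: magic, format byte, rate byte, 32-bit timestamp."""
    # Format byte: 2 bits sample width (bytes-1), 2 bits reserved, 4 bits channels-1
    fmt = ((sample_bytes - 1) & 0x3) << 6 | ((channels - 1) & 0xf)
    header = struct.pack('>HBBI', MAGIC, fmt, rate_code, timestamp & 0xffffffff)
    return header + payload

# 1024 frames of 48kHz (rate code 0x12) 16-bit stereo silence, fired at a peer
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pcm = b'\x00' * (1024 * 2 * 2)
sock.sendto(pack_packet(0, 0x12, 2, 2, pcm), ('192.0.2.10', 5004))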

COVIDSafe for older devices?

So, the other day I pondered whether BlueTrace could be ported to an older device, or somehow re-implemented so it would be compatible with older phones.

The Australian Government has released their version of TraceTogether, COVIDSafe, which is available for newer devices on the Google and Apple application repositories. It suffers a number of technical issues, one glaring one being that even on devices it theoretically supports, it doesn’t work properly unless you have it running in the foreground and your phone unlocked!

Well, there’s a fail right there! Lots of people actually need to be able to lock their phones (e.g. as a condition of their employment, to prevent pocket dials, to save battery life, etc.).

My phone will never run COVIDSafe as provided. Even compiling it for Android 4.1 won’t be enough: it uses Bluetooth Low Energy, which is a Bluetooth 4.0 feature. However, the government did one thing right: they have published the source code. A quick fish-eye over the diff against TraceTogether suggests the changes are largely superficial.

Interestingly, although the original code is GPLv3, our government has decided to supply their own license. I’m not sure how legal that is. Others have questioned this too.

So, maybe I can run it after all? All I need is a device that can do BLE, which then “phones home” somehow to retrieve tokens or upload data. Newer phones (almost anything Android-based) can usually do WiFi hotspot, which would work fine with an ESP32.

Older phones don’t have WiFi at all, but many can still provide an Internet connection over a Bluetooth link, likely via the LAN Access Profile. I think this would mean my “token” would need to negotiate HTTPS itself. Not fun on an MCU, but I suspect someone has possibly done it already on the ESP32.

Nordic platforms are another option if we go the pure Bluetooth route. I have two nRF52840-DK boards kicking around here, bought for OpenThread development, but not yet in use. A nicety is these do have a holder for a CR2032 cell, so can operate battery-powered.

Either way, I think it important that the chosen platform be:

  1. easily available through usual channels
  2. cheap
  3. hackable, so the devices can be re-purposed after this COVID-19 nonsense blows over

A first step might be to see if COVIDSafe can be cleaved in two, with the BLE part running on an ESP32 or nRF52840, and the HTTPS part running on my Android phone. Also useful would be some sort of staging server so I can test my code without exposing things. Not sure if there is such a beast publicly available that we can all make use of.

Guess that’ll be the next bit to look at.