Feb 19 2022

So, I’ve been wanting to do this for the better part of a decade… but lately, the cost of more capable embedded devices has come down to the point where this is actually feasible.

It’s taken a number of incarnations, the earliest being the idea of DIYing it myself with a UHF-band analogue transceiver. Then the thought was to pair an I²S audio CODEC with an ESP8266 or ESP32.

I don’t want to rely on technology that might disappear from the market should relations with China suddenly get narky, and of course, time marches on… I’ve since learned about protocols like ROC. Bluetooth also isn’t what it was back when I first started down this path — back then A2DP was one-way and sounded terrible, and HSP was limited to 8kHz mono audio.

Today, Bluetooth headsets are actually pretty good. I’ve been quite happy with the Logitech Zone Wireless for the most part — the first one I bought had a microphone that failed, but Logitech themselves were good about replacing it under warranty. It does have a limitation though: it will talk to no more than two Bluetooth devices. The USB dongle it’s supplied with, whilst a USB Audio class device, also occupies one of those two slots.

The other day I splashed out on a DAB+ radio and a shortwave radio — it’d be nice to listen to these via the same Bluetooth headset I use for calls and the tablet. There are Bluetooth audio devices that I could plug into either of these, then pair with my headset, but I’d have to disconnect either the phone or the tablet to use it.

So, bugger it… the wireless headset interface will get an upgrade. The plan is a small pocket audio swiss-army-knife that can connect to…

  • an analogue device such as a wired headset or radio receiver/transceiver
  • my phone via Bluetooth
  • my tablet via Bluetooth
  • the aforementioned Bluetooth headset
  • a desktop PC or laptop over WiFi

…and route audio between them as needs require.

The device will have a small LCD display and a directional joystick button for control, and will be able to connect to a USB host for management.

Proposed parts list

The chip crisis is actually a big limitation: some of the bits aren’t as easily available as I’d like. But, I’ve managed to pull together the following:

The only bit that’s old stock is the LCD, it’s been sitting on my shelf gathering dust for over a decade. Somewhere in one of my junk boxes I’ve got some joystick buttons also bought many years ago.

Proposed software

For the sake of others looking to duplicate my efforts, I’ll stick with Raspberry Pi OS. As my device is an ARMv6 device, I’ll have to stick with the 32-bit release. Not that big a deal, and long-term I’ll probably look at using OpenEmbedded or Gentoo Embedded to make a minimalist image that just does what I need it to do.

The starter kit came with a SD card loaded with NOOBS… I ignored this and just flashed the SD card with a bare minimum Debian Bullseye image. The plan is I’ll get PipeWire up and running on this for its Bluetooth audio interface. Then we’ll try and get the hardware bits going.

Right now, I have the zero booting up, connecting to my local WiFi network, and making itself available via SSH. A good start.
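For anyone following along at home, the headless bring-up is the usual Raspberry Pi recipe — roughly the following, though the image filename, device node and WiFi credentials here are placeholders, not my actual details:

# Flash the image (triple-check the target device node first!)
sudo dd if=2022-01-28-raspios-bullseye-armhf-lite.img of=/dev/sdX bs=4M status=progress

# Mount the boot partition and drop in the headless set-up files
sudo mount /dev/sdX1 /mnt
sudo touch /mnt/ssh    # an empty "ssh" file enables the SSH server on first boot
sudo tee /mnt/wpa_supplicant.conf >/dev/null <<'EOF'
country=AU
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="my-network"
    psk="my-passphrase"
}
EOF
sudo umount /mnt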

Data sheet for the LCD

The LCD will possibly be one of the more challenging bits. This is from a phone that was new last century! As it happens though, Bergthaller Iulian-Alexandru was kind enough to publish some details on a number of LCD screens. Someone’s since bought and squatted the domain, but The Wayback Machine has an archive of the site.

I’ve mirrored his notes on various Ericsson LCDs here:

The diagrams on that page appear to show the connections as viewed from the front of the LCD panel. I guess if I let the magic smoke out, too bad! The alternative: I do have two Nokia 3310s floating around, so I could harvest the LCDs out of them — in short, I have a fallback plan!

PipeWire on the Pi Zero

This will be the interesting bit. Not sure how well it’ll work, but we’ll give it a shot. The trickiest bit is getting binaries for the device: no one builds PipeWire for armhf yet. There are these binaries for Ubuntu AMD64, and luckily there are source packages available.

I guess the worst-case scenario is I put the Pi Zero W aside and get a Pi Zero 2 W instead. Key will be to test PipeWire before I warm up the soldering iron — let’s at least prove the software side of things, maybe using USB audio devices in place of the AudioInjector board.
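As a rough idea of what “proving the software side” could look like once PipeWire is installed — a sketch only; the node and port names below are hypothetical, and assume a session manager (WirePlumber or pipewire-media-session) is running:

pw-cli ls Node | grep -i alsa    # the USB sound cards should show up as nodes
pw-link -o                       # list the available output (source) ports
pw-link -i                       # list the available input (sink) ports

# Patch a capture port through to a playback port (hypothetical port names):
pw-link "alsa_input.usb-headset:capture_FL" "alsa_output.usb-dongle:playback_FL"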

I’m going through and building the .debs for armhf myself now, taking notes as I go. I’ll post these when I’m done.
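The procedure itself is the bog-standard Debian source-package dance; something like this, where the .dsc URL and version number are placeholders for whatever the Ubuntu source package actually is:

sudo apt-get install build-essential devscripts
dget -x https://example.org/pool/p/pipewire/pipewire_0.3.24-1.dsc   # placeholder URL/version
cd pipewire-0.3.24
sudo apt-get build-dep .        # install the declared Build-Depends
dpkg-buildpackage -b -us -uc    # unsigned, binary-only build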

Feb 17 2022

Lately, I’ve been stuck at home with not much bicycle mobile operation happening, and it’s given me time to review where I’m going with the station and the onboard communications systems.

At home, I’ve been listening to a lot of commercial radio, whereas on the bicycle, in pre-COVID times I was basically restricted to recorded music unless I wanted to use the FT-897D for broadcast radio reception.

Now, the Yaesu rig actually isn’t a bad receiver for broadcast radio… but it has a few downsides:

  • Wideband FM sounds good, but is only received in mono
  • Medium wave and shortwave broadcast requires a rather bulky HF antenna to be deployed
  • The FT-897D is thirsty for power: about 1A on receive
  • When receiving broadcast radio, I obviously cannot monitor amateur frequencies
  • Some of the stations I like listening to are on DAB+, which the FT-897D will never receive

Long-term plan

Long term, the plan is to use SDR to augment the FT-897D: rig up a Raspberry Pi 4 (already procured) with an SDR and, through some antenna switching, use the FT-897D as the transmitter with the Raspberry Pi 4 implementing an all-band scanning receiver. That would give me dual-watch capability (actually, I could watch entire bands), which I miss on the FT-897D.

Likely, the SDR chosen will be a multi-channel one so I can watch a couple of bands: 2m + 70cm, say; or monitor 2m whilst listening to a radio broadcast on the other channel. SDR would also open up DAB+ to me.

This is a long way off though, and it’s also rather fixed to the bike: I can’t take any of this stuff on a walk, which lately in COVID times has been my more likely form of exercise.

Current MW rig

For medium wave reception, I do have a small portable transistor radio, a Sanyo BC-088, which I was given years ago in non-working condition. The fault at first was broken PCB traces from the unit being thrown against a wall by its previous owner. Once fixed, the radio gave over a decade of entertainment, until another incident on the bike smacked it against Waterworks Road, breaking a few connections to the internal loop-stick antenna.

I’ve repaired that, and the unit now works, but I’ve found it does not get along with any microprocessor-based device: it picks up all manner of hash when placed near my handheld GPS (a Garmin Rino 650) and squeals like a banshee next to my desktop PC. It also seems to be a tad deaf.

SDR is one possible option, but of the SDRs in my possession — a couple of RTL-SDR v3 dongles and a HackRF One — none will tune down to 693kHz where I normally have the BC-088 tuned. The HackRF One gets close at 1MHz, but anything below 10MHz sounds terrible, with noise and birdies galore. Even for shortwave, the HackRF One seems to suffer; trying it out on the HF antenna at home, I find myself picking up 4BH at 18883kHz — they normally broadcast at 882kHz.

Thus, I figured I’d try a couple of off-the-shelf options for the short-term and see how they go. Ideally I wanted a single radio that could do MW, FM and DAB+ bands… bonus points if it could do shortwave too.

New DAB+ rig: Digitech AR-1690

I bought this at a time when I noticed all the Australian Radio Network stations (4KQ, 97.3) suddenly go mute on the Brisbane channel 9A multiplex. I wasn’t sure if it was my end or theirs, as other DAB+ stations seemed to be fine, and thought this little rig would both be a useful observer and scratch that itch of portable listening.

The Digitech AR-1690

This is a basic entry-level DAB+/FM set. It’s a smallish unit, roughly 125mm×73mm×30mm. There are no real special features on this unit. It has 40 station presets (20 each for FM and DAB+), and there are two alarm functions that can be set. The clock is set by the radio transmitter time broadcast. The front panel features the volume and channel buttons, along with a SELECT button. The rest of the controls are on the top.

Top controls
  • Info/Menu button:
    • Long-press → enters a configuration menu where you can configure the system time, set the two alarms, see the firmware version or do a factory reset
    • Short press → scrolls through different pieces of information on the LCD display:
      • Frequency
      • Current time
      • Current date
      • (DAB+): Signal strength?
      • (DAB+): Genre
      • (DAB+): DAB+ Multiplex name
      • (DAB+): Frequency and channel
      • (DAB+): Signal error rate
      • (DAB+): Bit rate and standard (DAB or DAB+)
      • (FM RDS): Station name
      • (FM RDS): Genre
  • Mode: Switches between FM and DAB+ mode
  • Scan: Initiates a scan on the currently selected mode (so, all FM broadcast, or all DAB+ channels)
  • Alarm: A shortcut button for setting the alarms (same as holding Info/Menu, then navigating to Alarms)
  • Preset: Used for accessing memory presets, short press recalls a preset, long press to store a station preset
  • Power: Switches the radio on, stand-by (short press) or off (long press)

For power, it can run on either 3 AAA cells or a Nokia BL-5C lithium battery, bought separately.

The back of the radio, nothing much to see here.
The battery compartment, with Nokia BL-5C installed

As for ports, there’s just the two on the right-hand side:

Connectors

The power jack is a small ~3mm barrel jack, the radio is supplied with a USB cable that interfaces to this connector. Looks like a dead ringer for the old Nokia phone connectors, I might dig up one of my old chargers and see if it works. (Update 2022-03-11: Found one, it doesn’t… the barrel is the same size but the tip in the radio is too big to fit in the bore of the connector.)

In operation

The set seems to do a reasonable job. I’m close to Mt. Coot-tha, so receiving DAB+ really isn’t that difficult. The sound is quite reasonable for the size; I thought the speaker would be a bit on the tinny side, but it’s perfectly listenable. Certainly it’s a big improvement on the BC-088!

One gripe I do have with this set is that the volume steps are very coarse, and there’s no real “quiet” setting. Minimum volume is mute; one step up is a comfortable listening level in a small room. I would have liked maybe three or four steps in between.

In DAB+ mode, it can report the station’s dynamic labels.

DAB+ dynamic labels

It also can pull a similar stunt with RDS data on FM:

FM RDS reception

New Short wave rig: Tecsun PL-398MP

Now, when I bought the above DAB+ receiver, I ideally wanted something that would do MW broadcast as well, as one of the stations pictured on the DAB set is, in fact, a MW station too.

There is such a beast: Sangean make the DPR-45, which can do MW/FM and DAB+, but it’s enormous — too big for my needs. Plus, I found it after purchasing the little AR-1690 (not that it mattered, as size pretty much rules the DPR-45 out). I figured the next best thing was to get a portable set with a line-in feature, so it could provide the stereo speakers the AR-1690 lacks.

Enter the Tecsun PL-398MP.

The Tecsun PL-398MP

As the text above the screen suggests, this is a quad-band radio, supporting the LW/MW/SW and FM bands, as well as a (primitive!) MP3 player. Unlike the Sanyo BC-088 it’s replacing, which boasted 8 transistors (wow!), this unit is a DSP-based receiver using the common Silicon Labs Si4734 radio receiver IC.

Most of the controls are on the front. The labels marked in red are activated when the radio is turned off: holding 1 down allows you to switch the FM radio band from the default 88–108MHz to 64–108MHz or 76–108MHz; holding 2 down switches the clock between 12-hour and 24-hour time; 3 switches the MW band between 9kHz and 10kHz steps; 0 turns the keypad beep on/off; and the ST button toggles the “intelligent backlight”.

Unlike the AR-1690, this thing runs on standard disposable dry cells, or you can install Ni-MH cells and enable a built-in charger by holding the M button whilst the radio is turned off. Dry cells are not exactly my favourite way of powering a device, for no other reason than the reduced energy density and their nasty habit of leaking electrolyte.

Maybe a future project will be to hack a LiPo cell into this thing.

On the back are the controls for the MP3 player.

Back of the PL-398MP

I’ll get to the MP3 player part in a moment, but in short, don’t bother!

For tuning and volume, there are two thumbwheels on the right-hand side. These are both rotary encoders driving a small microcontroller inside.

Volume and tuning controls

The “digital” volume control steps aren’t too bad for resolution, certainly nowhere near as coarse as the AR-1690! The tuning knob works well enough for small adjustments, and for moving between presets. Thankfully for moving between stations, there’s the keypad for entering frequencies directly.

On the left are all the ports:

FM/SW antenna, line-in, earphone output, and a 5V mini-USB input

The line-in feature is what sets this apart from other MW- and SW-capable sets. Being able to connect an external shortwave antenna is a welcome feature too, and with this radio, I purchased a Sangean ANT-60 antenna for this purpose.

On top, there’s just the Light/Snooze button; pressing it momentarily turns the lighting on. I presume it’ll also silence the wake-up alarm if you have one set, but I haven’t tried this.

Light / Snooze button on top

In operation

FM Stereo reception

I’m close to the Mt Coot-tha transmitter site, so this isn’t much of a strain for the receiver. I guess I’ll know more when I take it out of town with me, but it seems to receive the local stations well, without getting overloaded by the strong ones (looking at you, ABC Classic FM).

Being a dual-speaker device, this can provide stereo without additional hardware. Audio quality is actually decent for a radio this size. The speaker drivers are about 50mm in diameter and appear to be of low-profile mylar construction; they’re not going to win audiophile magazine awards and are outperformed by many Bluetooth speakers, but they’re decent enough.

Short wave reception

The shortwave feature of this set seems good so far. There’s not as much to listen to on the shortwave bands as there used to be, but I’ve been able to receive China Radio International and Radio New Zealand both quite clearly, and one evening managed to pick up the BBC World service.

It performs decently with its built-in antenna, even without me telescoping it out. I haven’t had a chance to fully try the set with the ANT-60 — I did try it indoors in my room, but I suspect I haven’t really got enough wire “in the air” to make much difference. I’ll have to try it at a camp site some evening.

Medium wave reception

This blew me away, actually. Okay, so maybe a late-’60s-era transistor radio with leaky vintage germanium transistors, one that’s had a hard life and more than one ham-fisted repair attempt, is not much of a contest, but this set left the old Sanyo in the dust.

4KQ on 693kHz was a bit of a fiddle to get tuned on the Sanyo, and even then, I found I had to have the radio oriented right to receive it. 4QR on 612kHz of course, was loud and strong. Both stations are very clear on the PL-398MP. Ohh, and while this set’s no rich console radio, it’s nowhere near as tinny as what I was expecting to hear. For a portable rig, quite acceptable.

Out of the box, my unit used 9kHz frequency steps, which will also suit Europe. For those in the USA, you’ll want to hold that “3” button with the power off to switch the radio to a 10kHz spacing. This will also switch the temperature display to show °F instead of °C.

Long wave reception

Firstly, to even get at LW took a bit of fiddling. The handbook is a little inaccurate, telling you to press a non-existent MW/LW button. The correct procedure to enable LW is to turn the radio off, then long-press the AM button. The display will then show “LW” and “On” to indicate the feature is now enabled.

Turning on LW mode.

The same procedure turns the LW feature off too.

Having done so, when you turn the radio back on, pressing the AM button momentarily will now switch between MW and LW.

Now, ITU Region 3, where I am, does not have any official LW stations. Nor does Region 2 (the Americas); this is a feature that’s more useful to those in Europe.

There used to be a LW weather beacon on 359kHz broadcasting out of Amberley air base, and my Sony ST-2950F (my very first LW-capable receiver) could pick it up with its loop-stick antenna. Neither it nor the PL-398MP can today. I guess I could drag one of my amateur sets out to get a third opinion, but smart money is that the transmitter is now turned off.

Never mind, I’ll just turn LW back off and not worry about it.

Line-in feature

This is pretty simple: the radio is supplied with a 3.5mm male-to-male stereo cable. Plug one end into the line-in port on the radio and the other into your audio device; a “>>” symbol appears on the right-hand side of the LCD screen. Turn the radio on, get your source playing, and you’ll hear it through the radio’s speakers.

Nice and simple. I’ll be able to use it with the aforementioned AR-1690, my tablet, and the little portable media player I already use on the bike.

MP3 Player

Yes, I did mention it has one. The controls are on the back, and the device takes an SD card via a port hiding under the rear stand.

The SD card port

Plugging in a FAT32-formatted SD card with some MP3s on it (The Goon Show, what else?) and turning the radio on, I tried getting it to play something. Hitting Pause/Play at first seemed to do nothing, but eventually — I must’ve either waited long enough or managed to coax it somehow — it started playing the first track it found.

I could navigate between the tracks — I have no idea whether it sorts the files by file name or not, and the display is too primitive to show any track metadata, but it did work. There’s no playlist capability in this device, no random shuffle mode; as I say, it’s primitive.

So I think I’ll just ignore it and pretend it’s not there. A Bluetooth receiver would have had greater utility, but never mind. There is a sister model to this one, the PL-398BT, with exactly this feature… but good luck getting one unless you order direct from China.

Hidden function #1: A lithium charger?

So, fiddling with the radio, I noticed a few undocumented features. With the radio off, holding the VM button triggers the display to show “Li On”, and the “Ni-MH Battery” indicator starts flashing.

Is this a “Lithium battery” mode?

Exactly what this is doing I’m not sure. There are radios in Tecsun’s line-up that do support and include lithium batteries, so maybe the project to add this feature isn’t out of the question. I guess a trip into the set with a screwdriver will be my next move, but maybe some of that work is already done for me.

Hidden feature #2: Self test?

Holding the BW button whilst the radio is turned off seems to perform a self-test of the display.

All segments on the LCD turned on

When the button is released, it switches back to showing the time, plus some 4-digit code (firmware version perhaps?):

“3985”, a code for the gurus to meditate over

Not sure what this is.

Final comments

All in all, both seem to be decent sets. The little DAB+ set is more-or-less a one-trick pony, it’ll be interesting to see if it does any better or worse than the Tecsun. I’m also yet to introduce these to the Garmin GPS that caused my Sanyo so much grief.

It’s nice to know that short wave sets are still being manufactured, and the performance of this set is quite remarkable. Tecsun themselves are based out of Hong Kong, and seem to have a decent reputation from what I’ve seen in reviews online.

While lately it’s been my policy to avoid buying stuff made in China or by Chinese companies, in this case the feature set I wanted was practically a unicorn — no one else makes something like this — and this set seems to perform decently, so we’ll see how it holds up after a year or two. After all, the little Sanyo has been in my possession since the early 90s, and it was an old radio then… it still goes. Will the Tecsun last as long? We’ll see.

As for the Digitech unit; well, DAB+ has a crazy amount of DSP going on to pick one station out of a multiplex. Being more complicated, I expect it’ll perhaps have a shorter lifespan, but hopefully long enough for me to cobble up a replacement. Time will tell.

Jan 30 2022

We’ve had WiFi in one form or another for some years on this network. Originally it started with an interest in the Brisbane Mesh metropolitan area network which more-or-less imploded around 2006 or so. Back then, I think I had one of the few WiFi access points in The Gap. 2.4GHz was basically microwave ovens and not much else. The same is not true today.

WiFi networks in my local area: 2.4GHz isn’t as quiet as it once was.

Since then, the network has changed a bit: from a little D-Link 802.11b AP, we moved to a Prism54g WiFi card (that I still have) with hostapd, using OpenVPN to provide security. That got replaced by a Telstra-branded Netcomm WiFi router which I figured out supported WPA-Enterprise, so I went down the rabbit hole of setting up FreeRADIUS, and we ran that until a lightning strike blew it up. The next consumer AP that replaced it was a miserable failure, so it’s been business APs since then.

Initially it was a Cisco WAP4410N, which was a great little AP… it worked reliably for years, but about 12 months ago I noticed it was dropping packets occasionally and getting a bit intermittent. Thinking that maybe the device was past its prime, I bought a replacement: a WAP150, which proved to be a bit disappointing. Range wasn’t as good as the WAP4410N’s, and I soon found myself moving the WAP150 downstairs to service the network there and re-instating the WAP4410N.

In particular, one feature I liked about the two Cisco units is that they support 802.1Q VLANs, with the ability to assign a different WiFi SSID to each. The WAP4410N could do 4 SSIDs; the WAP150, 8. This is a feature consumer APs don’t offer, and it is a handy feature here as it enables me to have a “work” LAN (with VPN to my workplace) and a “home” LAN which everybody else uses.
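(For what it’s worth, the same SSID-per-VLAN trick can be pulled on a Linux-based AP with hostapd’s multi-BSS support. A minimal sketch — every interface name, SSID, bridge name and passphrase here is made up:

sudo tee /etc/hostapd/hostapd.conf >/dev/null <<'EOF'
interface=wlan0
driver=nl80211
hw_mode=g
channel=6

# First BSS: the "home" LAN, bridged to its own VLAN
ssid=home-lan
bridge=br-home
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=change-me

# Second BSS: the "work" LAN on a separate bridge/VLAN
bss=wlan0_1
ssid=work-lan
bridge=br-work
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=change-me-too
EOF

Not what the Cisco units do internally, of course, but it illustrates the idea.)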

Years ago, our Internet usage was over a 512kbps/128kbps ADSL link, and it was mostly Internet browsing… so intermittent packet loss wasn’t a big deal… one AP did just fine. Now with the move to NBN, our telephone service is a VoIP service, and I’m finding that WiFi IP phones are very picky about APs. We have three IP phones and an ATA… the ATA (Grandstream HT814) is Ethernet of course, as is one of the IP phones (Grandstream GXP1615), but the other two IP phones are WiFi (Aristel Wi-Fi Genius X1+ and Grandstream WP810).

The Aristel device in particular was really choppy… and the first one sent out seemed to be DOA, with poor performance even right beside the AP. A replacement was provided under RMA, and this one performed much better, but still suffered intermittent loss. The Grandstream WP810 generally worked, but there were noticeable dead spots in a few areas around the house.

The final straw with the existing pair of APs came at the last Brisbane WICEN meeting, conducted over Zoom… both APs seemed to suffer a problem where they’d start dropping packets and glitching badly. A power-cycle “fixed” the problem, but the issue would return after a week or two. Clearly, they were no longer up to snuff.

The replacements

APs

I procured two Ubiquiti UniFi 6 APs as replacements: a U6 LR and a U6 Lite.

I went with the long-range one (the U6 LR) for upstairs since it’s in a high spot (sitting atop a stereo speaker on a top shelf in my room), so it would be able to “radiate” over a long distance and hopefully reach down the driveway and into the backyard.

The other one is to fill in dead spots downstairs, and since it’s going to be pretty much sitting at waist level, there’s no point in it being “long range”.

The devices I bought were purchased through mWave (here and here), as they had them in stock at the time.

Power injectors

These are 48V passive PoE devices… so to make them go, you need a separate power injector. The “standard” Ubiquiti power injector was out-of-stock, but I wanted these to work on 12V anyway, so I looked around for a suitable option. Core Electronics do have some step-up converters which work great for 24V devices, but the range available doesn’t quite reach 48V. I did find though that Telco Antennas sell these 48V PoE injectors. (They also sell the APs here and here, but were out-of-stock at the time of purchase.)

Admittedly, they’re 10/100Mbps only, which means you don’t quite get the full throughput out of the WiFi6 APs, but meh, it’s good enough… if the IP phones need more than 100Mbps, they’ll run up against the 25Mbps limit of the NBN link!

Controller

Unlike the Cisco devices they’re replacing (and everything else I’ve used prior), these APs have no built-in management interface; they talk to a network controller device — normally the UniFi Cloud Key. I had a run-in with the first generation of these at the Stirling’s Crossing Endurance Centre. For a big network, the idea of a central device does make a lot of sense (that site has 5 UAP-AC-Ms and 3 8-port PoE switches), but for a two-AP network like mine it seemed overkill.

One thing I learned is that these things positively DO NOT like being power-cycled! Repeated power-cycling corrupts the database in very short order, and you find yourself restoring configurations from a back-up soon after. So I was squeamish about buying one of these. The second-generation version has its own back-up battery, but reports suggest it can be just as unreliable. In any case, they were out of stock everywhere, and I didn’t want to spring the extra cash for the “plus” model (that has a HDD… not much use to me) or the Dream Machine router.

I did consider using a Raspberry Pi 3 — in fact, that was my original plan… I had one spare, and so started down the path of setting it up as a UniFi controller… however, I ran into two roadblocks:

  • UniFi controller at this time requires Java v8… Debian Bullseye ships with v11 minimum
  • UniFi controller needs MongoDB 3.4, which isn’t available on Debian Bullseye on ARM64

I could compile MongoDB, but Java is a whole other issue, and lots of people have complained loudly about this very limitation. If there was one big gripe I’ve got, this would be it.

I did some further research: Ubuntu 20.04 does offer a Java 8 runtime, and on AMD64, I can use existing binaries for MongoDB. I looked around and purchased this small-form-factor PC. Windows 10 went bye byes once I managed to hit F1 at the right point in the BIOS set-up, and Ubuntu 20.04 was PXE-loaded. I could then follow the standard instructions to install via APT. The controller seems to be working fine using OpenJDK JRE v8. I’d recommend this over the licensing quagmire that is using Oracle JRE.
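For reference, the “standard instructions” boil down to roughly this — the repository URL and key location are as Ubiquiti documented them at the time, so check their current docs before copying blindly:

sudo apt-get install ca-certificates apt-transport-https openjdk-8-jre-headless
echo 'deb https://www.ui.com/downloads/unifi/debian stable ubiquiti' | \
    sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list
sudo wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg
sudo apt-get update
sudo apt-get install unifi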

Installation

With a controller and all the requisite bits, things went smoothly. I found at first that the controller insisted on using 192.168.1.0/24 addresses to talk to the APs, so I wound up setting that up in the netplan config. I later found that the UniFi controller won’t let you set a network subnet address unless you turn off Auto Scale Network.

Setting the network subnet is not possible until “Auto Scale Network” is disabled.
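The netplan stop-gap looked something like the following — a sketch only; the interface name and the exact address are assumptions here:

sudo tee /etc/netplan/60-unifi-mgmt.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    enp1s0:
      addresses:
        - 192.168.1.2/24    # secondary address in the subnet the APs default to
EOF
sudo netplan apply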

So maybe from here on in, new APs will appear in the correct subnet. To be honest, it’s no big deal either way; unless an AP meets an untimely end, I shouldn’t need to buy new ones for a while!

Auto-negotiation quirks with Cisco switches

One oddity I noticed was that the upstairs (U6 LR) AP was reluctant to communicate via Ethernet, instead funnelling its traffic via the downstairs AP. While it’s handy they can do that (it means I don’t necessarily need to worry about powering the upstairs switches in a power outage), the AP should be able to use its Ethernet back-haul.

The downstairs one was having no problems, and the set-up was similar: switch port → PoE injector → AP, via short cables. I tried a few different cables with no change. I logged into the switch and had a look: the port was set to auto-negotiate, which was working fine downstairs. The downstairs switch is a Netgear GS748T, whereas the one upstairs is a Cisco SG200-08 (not the P version that does PoE).

I found I could log into the AP over SSH (you can provide your SSH key via the UniFi controller)… so I logged in as root and had a look around. They run Linux with a kernel 4.4 (sadly tainted, due to ubnthal.ko and ubnt_common.ko) and a Busybox/musl environment on an ARM64 CPU. (Well, the U6 LRs are ARM64; the U6 Lites are MediaTek MT7621s… mipsel32r2 with kernel 5.4.0, and not tainted.) ip told me that eth0 was up, and that the AP’s IP address was assigned to br0, which was also up. brctl told me that eth0 was enslaved by br0. Curiously, /sys/class/net/eth0/carrier was reporting 1, which disagreed with what the switch was telling me.
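In concrete terms, the poking around looked roughly like this:

ip addr show eth0                 # up, but carrying no address of its own
ip addr show br0                  # up, and holding the AP’s IP address
brctl show br0                    # eth0 listed as enslaved by br0
cat /sys/class/net/eth0/carrier   # printed 1, despite the switch seeing no link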

On a hunch, I tried turning off auto-negotiation, forcing 100Mbps full-duplex instead. Bingo: a link LED appeared. The topology showed the AP was now wired, not talking via downstairs.

Network topology shown in the UniFi Controller UI

Switching back to auto-negotiation, the AP reverted to being a wireless extender, with the link LED disappearing from the switch. This may be a quirk of the PoE injectors I’m using, which do not handle 1Gbps; maybe the switch hasn’t realised this because the AP otherwise “advertises” 1Gbps link capability. For now, I’m leaving that switch port locked at 100Mbps full-duplex. If you have problems with an AP showing up via Ethernet, here’s a place that is worth checking.
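(For the record, on a plain Linux box the equivalent of what I did in the switch’s web UI would be something like the following, eth0 being whichever interface faces the injector:

sudo ethtool -s eth0 speed 100 duplex full autoneg off   # force 100Mbps full-duplex
sudo ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'

Handy if your “switch” is a Linux bridge rather than managed hardware.)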

Jan 22 2022

Well, some might recall a few years ago I was trying ideas for cycle clothing, and later followed up with some findings.

My situation has changed a bit… the death of a former work colleague shook me up quite a bit, and while I have been riding, I haven’t been doing it nearly as much. Then, COVID-19 reared its ugly head.

Suffice to say, my commute is now one side of the bedroom to the other. Right at this moment, I’m in self-imposed lockdown until I can get my booster shot: I had my second AstraZeneca shot on the 4th November, and the Queensland Government has moved the booster shots to being 3 months after the second shot, so for me, that means I’m due on the 4th February. I’m already booked in with a local chemist here in The Gap — I did that weeks ago so that the appointment would be nailed to the floor — and thus currently I’m doing everything in my power to ensure that appointment goes ahead on time.

I haven’t been on the bike much at all. That doesn’t mean though that I stop thinking about how I can make my ride more comfortable.

Castle Clothing Coveralls

Yes, I’m the one clad in yellow, far left.

They had quite a few positives:

  • They were great in wet weather
  • They were great in ambient temperatures below 20°C
  • The pocket was handy for storing keys/a phone/a wallet
  • They had good visibility day and night
  • They keep the wind out well. (On the Main Range, Thredbo Top Station was reporting 87km/hr wind gusts that day.)

But, they weren’t without their issues:

  • They’re (unsurprisingly) no good on a sunny summer’s day (on the day that photo was taken, it was borderline too hot; the weather prediction was for showers, which didn’t happen)
  • They’re knackered after about 30 washes or so: the outer waterproof layer peels off the lining
  • In intermittent rain / sunshine, they’d keep you dry during the rainy bit, but when the sun came out, you’d get steamed

To cap it off, they’re no longer being manufactured. Castle Clothing have basically canned them. They’ve got a plain yellow version with no stripes, but otherwise, nothing like their old product. I wound up buying 4 of them in the end… the first two had to be chucked because of the aforementioned peeling problem, the other two are in good condition now, but eventually they’ll need replacement.

Mammoth Workwear do have some alternatives. The “Supertouch” ones I have tried: they’re even shorter-lived than the Castle ones, and feel like wearing a plastic bag. The others are either not night-time visible, or they’re lined for winter use.

So, back to research again.

Zentai suits?

Now, I know I’ve said previously I’m no MAMIL… and for the most part I stand by this. I did try wearing a stinger suit on the bike once… on the plus side they are very breathable, so quite comfortable to ride in. BUT, three negatives with stinger suits:

That got me thinking, what’s the difference between a stinger suit and an open-face zentai suit? Not a lot. The zentai suit, if it has gloves, can be bought as a “mitten” or (more commonly) a proper multi-finger glove version. They come in a lot more colours than a stinger suit does. They’re about the same price. And there’s no logos, just plain colours (or you can do various patterns/designs if that’s your thing).

A downside is that the zipper is at the back, which means answering calls from nature is more difficult. But then again, some stinger suits and most wetsuits also feature a back-entry.

I’ve got two coming to try the idea out. I suspect they’ll get worn over other clothing, I’ll just duck into a loo, take my shirt off, put the zentai suit on, then jump on the bike to ride to my destination… that way my shirt isn’t soaked with sweat. We’ll see.

One is a black one, which was primarily bought to replace one of the stinger suits for swimming activities, but I can also evaluate the fabric too (it is the usual lycra material).

The other is a silver one (thus a lycra/latex blend), to try out the visibility — it’ll be interesting to see whether it’s somewhat water-repellent due to the latex mix in the material, and see what effect this has on sweat.

Both of these are open-face! You should never try swimming with a full-face zentai suit. I can’t imagine getting caught in the rain ending well either, and the ability to see where you’re going is paramount when operating any vehicle (especially a bicycle)!

They’ll turn up in a week or two, I can try them out then. Maybe won’t be the final solution, but it may answer a few questions.

Heavy Wet/Cold weather gear

So, with the lighter-weight class out of the way, that turns my attention to what to do in truly foul weather, or just bitterly cold weather.

Now, let me define the latter: low single digits °C. Possibly with a westerly breeze carrying it. For some reading this, this will feel like a hot summer’s day, but for those of us in Brisbane, temperatures this low are what we see in the middle of winter.

The waterproof overalls I was wearing before worked well in dry-but-cold weather, however I did note my hands copped the cold… I needed gloves. The ends of the legs also could get tangled with the chain if I wasn’t careful, and my shoes would still get wet. Riggers boots work okay for this, but they’re hard to come by.

I happened to stumble on Sujuvat ratkaisut Oy, who do specialist wet-weather clothing meant for Europe. Meeko (who runs the site) has a commercial relationship with a few manufacturers, notably AJGroup who supply the material for a lot of Meeko’s “extreme” range.

The suits are a variant of PVC, which will mean they’re less breathable than what I have now, but should also mean they’re a lot more durable. There’s a decent range of colours available, with many options having the possibility of reflective bands, attached gloves and attached wellington boots. It’s worth noting the BikeSuit (no longer available) I was looking at 8 years ago was also a PVC outfit.

In the winter time, the big problem is not so much sweat, but rather, sweat being hit by wind-chill. Thus I’m ordering one of the Extreme Drainage Coveralls to try them out.

I’ve seen something similar out of AliExpress, however the options there are often built for the Chinese market… so rarely feature size options that fit someone like myself. Most of the Chinese ones are dark colours, with one “tan”-coloured option listed, and a couple of rubber ones that were lighter colours (a dark “pink”, and a yellow). Some of the rubber ones also had a strange opening arrangement: a tube opening in the stomach, which you pulled yourself through, then clamped shut with a peg. Innovative, but looks very untidy and just begging to get caught in something! I’ll stick with something a bit more conventional.

The coverall I’m ordering will be a 500g/m² white fabric… so about twice the weight of my current Castle workwear overalls (which are about 330g/m²), and will have the gloves and boots attached. I’m curious to see how that’s done up close, and see how it works out in my use case.

Being a white rather than a yellow/orange will make them less visible in the day time, but I suspect this won’t be much of an issue as it’s night-time visibility I’m particularly after. Also, being white instead of a “strong” fluro colour will likely be better at horse endurance rides, as horses tend to react to fluro colours.

The zip arrangement intrigues me as well… it’s been placed up high so that you can pretty much wade into water up to your chest and not get wet. There’s a lighter-weight option of the same suit, however with fewer options for colours. If the extreme version doesn’t work out for cycling, I might look at this alternative (the bike doesn’t react to strong colours like a horse does).

There’s about a 2-month lead time on this gear because it’s made to order — a reasonable trade-off, given you more-or-less get it made exactly how you want it. Looking around, I’m seeing off-the-shelf, non-customisable outfits at AU$400 a pop, so €160 (~AU$252) is looking a good option.

The fact that this is being run as a small side-hustle is commendable. I look forward to seeing the product.

Jan 03 2022

So, this year I had a new year’s resolution of sorts… when we first started this “work from home” journey due to China’s “gift”, I just temporarily set up on the dinner table, which was, of course, only meant to be for a few months.

Well, nearly 2 years later, we’re still working from home, and work has expanded to the point that a move to the office, on any permanent basis, is pretty much impossible now unless the business moves to a bigger building. With this in mind, I decided I’d clear off the dinner table, and clean up my room sufficiently to set up my workstation in there.

That meant re-arranging some things, but for the most part, I had the space already. So some stuff I wasn’t using got thrown into boxes to be moved into the garage. My CD collection similarly got moved into the garage (I have it on the computer, but need to retain the physical discs as they represent my “personal use license”), and lo and behold, I could set up my workstation.

The new workspace

One of my colleagues spotted the Indy and commented about the old classic SGI logo. Some might notice there’s also an O2 lurking in the shadows. Those who have known me for a while, will remember I did help maintain a Linux distribution for these classic machines, among others, and had a reasonable collection of my own:

My Indy, O2 and Indigo2 R10000
The Octane, booting up

These machines were all eBay purchases, as is the Sun monitor pictured (it came with the Octane). Sadly, fast forward a number of years, and these machines are mostly door stops and paperweights now.

The Octane’s and Indigo2’s demises

The Octane died when I tried to clean it out with a vacuum cleaner, without realising the effect of static electricity generated by the vacuum cleaner itself. I might add mine was a particularly old unit: it had a 175MHz R10000 CPU, and I remember the Linux kernel didn’t recognise the power management circuitry in the PSU without me manually patching it.

The Indigo2 mysteriously stopped working without any clear reason why, I’ve never actually tried to diagnose the issue.

That left the Indy and the O2 as working machines. I hadn’t fired them up in a long time, until today. I figured, what the hell, do they still run?

Trying the Indy and O2 out

Plug in the Indy, hit the power button… nothing. Dead as a doornail. Okay, put it aside… what about the O2?

I plug it in, shuffle it closer to the monitor so I can connect it. ‘Lo and behold:

The O2 lives!

Of course, the machine was set up to use a serial console as its primary interface, and boot-up was running very slow.

Booting up… very slowly…

It sat there like that for a while; figuring the action was happening on a serial port, I went to get my null modem cable, only to find a log-in prompt by the time I got back.

Next was remembering what password I was using when I last ran this machine. We had the OpenSSL heartbleed vulnerability happen since then, and at about that time, I revoked all OpenPGP keys and changed all passwords, so it isn’t what I use today. I couldn’t get in as root, but my regular user account worked, and I was able to change the root password via sudo.

Remembering my old log-in credentials, from 22 years ago it seems

The machine soon crashed after that. I tried rebooting, and this time I tweaked some PROM settings (yes, I was rusty remembering how to do it) to be able to see what was going on. (I had the null modem cable in hand, but didn’t feel like trying to blindly find the serial port at the back of my desktop.)

Changing PROM settings
The subsequent boot, and crash

Evidently, I had a dud disk. This did not surprise me in the slightest. I also noticed the PSU fan was not spinning, possibly seized after years of non-use.

Okay, there were two disks installed in this machine, both 80-pin SCA SCSI drives. Which one was it? I took a punt and tried the one furthest away from the I/O ports.

Success, she boots now

I managed to reset the root password, before the machine powered itself off (possibly because of overheating). I suspect the machine will need the dust blown out of it (safely! — not using the method that killed the Octane!), and the HDDs will need replacements. The guilty culprit was this one (which I guessed correctly first go):

a 4GB HDD was a big drive back in 1998!

The computer I’m typing this on has a HDD that could store the contents of 1000 of these drives. Today, there are modern alternatives, such as SCSI2SD, that could get this machine running fully if needed. The tricky bit would be handling the 80-pin hot-swap interface. There’d be some hardware hacking needed to connect the two, but AU$145 plus an adaptor seems like a safer bet than trying some random used HDD.

So, replacements for the HDDs, a clean-out, and possibly a new fan or two, and that machine will be back to “working” state. Of course, the Linux landscape has moved on since then: Debian no longer supports the MIPS4 ISA that the RM5200 CPU understands; Gentoo could still work on this though, and maybe OpenBSD still supports it too. In short, this machine is enough of a “go-er” that it should not be sent to land-fill… yet.

Turning my attention back to the Indy

So the Indy was my first SGI machine. Bought to better understand the MIPS processor architecture, and perhaps gain enough understanding to try and breathe life into a Cobalt Qube II server appliance (remember those?), it did teach me a lot about Linux and how things vary between platforms.

I figured I might as well pop the cover and see if there’s anything “obviously” wrong. The procedure I was rusty on, but I recalled there was a little catch on the back of the case that needed to be released before the cover slid off. So I lugged the 20″ CRT off the top of the machine, pulled the non-functioning Indy out, and put it on the table to inspect further.

Upon trying to pop the cover (gently!), the top of the case just exploded. Two pieces of the top cover go flying, and the RF shield parts company with the cover’s underside.

The RF shield parted company with the underside of the lid

I was left with a handful of small plastic fragments that were the heat-set posts holding the RF shield to the inside of the lid.

Some of the fragments that once held the RF shield in place

Clearly, the plastic has become brittle over the years. These machines were released in 1993, I think this might be a 1994-model as it has a slightly upgraded R4600 CPU in it.

As to the machine itself, I had a quick sticky-beak, there didn’t seem to be any immediately obvious things, but to be honest, I didn’t do a very thorough check. Maybe there’s some corrosion under the motherboard I didn’t spot, maybe it’s just a blown fuse in the PSU, who knows?

The inside of the Indy

This particular machine had 256MB RAM (a lot for its day), 8-bit Newport XL graphics, the “Indy Presenter” LCD interface (somewhere, we have the 15″ monitor it came with — sadly the connecting cable has some damaged conductors), and the HDD is a 9.1GB HDD I added some time back.

Where to now?

I was hanging on to these machines with the thinking that someone who was interested in experimenting with RISC machines might want them — find them a new home rather than sending them to landfill. I guess that’s still an option for the O2, as it still boots: so long as its remaining HDD doesn’t die it’ll be fine.

For the others, there’s the possibility of combining bits to make a functional frankenmachine from lots of parts. The Indy will need a new PROM battery if someone does manage to breathe life into it.

The Octane had two SCSI interfaces, one of which was dead — a problem that was known-of before I even acquired it. The PROM would bitch and moan about the dead SCSI interface for a good two minutes before giving up and dumping you in the boot menu. Press 1, and it’d hand over to arcload, which would boot a Linux kernel from a disk on the working controller. Linux would see the dead SCSI controller, and proceed to ignore it, booting just fine as if nothing had happened.

The Indigo2 R10000 was always the red-headed stepchild: an artefact of the machine’s design. The IP22 design (Indy and Indigo2) was never designed with the intent of being used with an R10000 CPU, and the speculative execution features played havoc with caching on this design. The Octane worked fine because it was designed from the outset to run this CPU. The O2 could be made to work because of a quirk of its hardware design, but the Indigo2 was not so flexible, so kernel-space code had to hack around the problem in software.

I guess I’d still like to see the machines go to a good home, but no idea who that’d be in this day and age. Somewhere, I have a partial disc set of Irix 6.5, and there’s also a 20″ SGI GDM5410 monitor (not the Sun monitor pictured above) that, at last check, did still work.

It’ll be a sad day when these go to the tip.

Dec 06 2021

Tonight I learned something disturbing… I heard hearsay evidence that someone I know had made the decision to obtain a fraudulent COVID-19 vaccination certificate for the purpose of bypassing the upcoming restrictions due to be applied on the 17th December, 2021.

Now, it comes as no surprise that people will want to dodge this. I won’t identify the individual who is trying to dodge the requirements in this case, nor will I reveal my source. As what I have is hearsay evidence, it is not admissible in a court of law, and it would be wrong for me to name or identify the person in any way.

No doubt though, the authorities have considered this possibility. They cracked down on one “doctor”, who was found to be issuing fraudulent documents a little over a month ago. She isn’t the first, won’t be the last either. It’s not entirely clear looking at the Queensland Government website what the penalties are for supplying fraudulent documentation. One thing I know for certain, I do not want to be on the receiving end. I do not want to have to justify my presence because someone I go to a restaurant with chooses to break the rules.

My biggest fear in this is two-fold:

  1. Fear of prosecution from association with the individual committing fraud
  2. Fear of knee-jerk restrictions being applied to everybody because a small number could not follow the rules

We’ve seen #2 already this pandemic. It’s why we’ve got this silly check-in program in the first place. I’ve already made my thoughts clear on that.

What worries me is it’s unknown at this stage how the certificate can be verified. There are two possible ways I can think of: the Individual Healthcare Identifier and the Document number, both of which appear on the MyGov-issued certificates. Are the staff members at venues able to validate these documents somehow? How do they know they’re looking at a genuine certificate? Is it a matter of blind-faith, or can they punch these details in and come up with something that says yay or nay?

I’m guessing the police have some way of verifying this, but, as a staff member at a venue, do you really want to be calling the police on patrons just because you have a “gut feeling” that something is fishy? How is this going to be policed really?

Surprise!

Let’s play devil’s advocate and suggest that indeed, there will be surprise inspections by the constabulary. Presumably they have a way of validating these certificates, otherwise what is the point? Now, suppose for argument’s sake, one or two people are found to be holding fraudulent documents.

What then? Clearly, the guilty parties will have some explaining to do. What about the rest of us at that table, are we guilty by association? How about the business owner? The staff who were working that shift?

Cough! Sneeze! I’m not feeling well!

The other prospect is even worse, suppose that a few of us come down with an illness, get tested, and it winds up being one of the many strains floating around. Maybe it’s original-recipe COVID-19, maybe it’s Alpha, or Delta… this new Omicron variant… would you like some Pi with that? (You know, the irrational one that never ends!)

You’ve had to check in (or maybe you didn’t, but others you were with did, and they say you were there too — and CCTV backs their story up). Queensland Health looks up your details, and hang on, you’re not vaccinated. They check with venue staff: “Ohh yes, that person did show me a certificate and it looked valid”.

Hmmm, dear sir/madam, could you please show us your certificate? Ohh, you haven’t got one? The staff at the restaurant say you do. BUSTED! You’d either be charged for failing to follow a health direction, or charged with fraud, possibly both.

What’s worse with this hypothetical situation is that you and the people you’re with are then exposed to a deadly virus. At least with the surprise inspection in the previous hypothetical situation no one gets sick.

The end game

Really, I hope that we can move on from this. The worst possible situation we can wind up with is that the privilege of going out and doing things is revoked from everybody because a small minority (less than 10% of the Queensland population) refuse to do the right thing by everyone else.

I don’t want to be hassled by staff at the door everywhere I go. This will not end if people keep flouting the rules! It used to be just hospitality venues where you needed to sign in; it was done on paper, and life was simple, but then Queensland Health learned that today’s adults can’t write properly. If they mandate proprietary check-in software programs, then those of us who do not have a suitable phone are needlessly excluded from participation in society through no fault of our own.

We will eventually get to the stage where we treat COVID-19 like every other coronavirus out there. Several of the viruses behind the common cold are, after all, members of that same family, and we never needed check-in programs for those. Some aged-care centres will insist on seeing vaccination certificates, but you could get a coffee without fear of being interrogated. We are not there yet though. We’ve probably got another year of this… so we’re maybe ⅔ of the way through. Please don’t blow it for all of us!

Dec 01 2021

You’d be hard-pressed to find a global event that has brought as much pandemonium as this COVID-19 situation has in the last two years. Admittedly, Australia seems to have come out of it better than most nations, but not without our own tortoise-and-hare moment in the vaccination “stroll-out”.

One area where we’re all slowly trying to figure out a way to get along is contact tracing, and proving vaccination status.

Now, it’s far from a unique problem. If Denso Wave were charging royalties each time a QR code were created or scanned, they’d be richer than Microsoft, Amazon and Apple put together by now. In the beginning of the pandemic, when a need for effective contact tracing was first proposed, we initially did things on paper.

Evidently though, at least here in Queensland, our education system has proven ineffective at teaching today’s crop of adults how to work a pen, with a sufficient number seemingly being unable to write in a legible manner. And so, the state government here mandated that all records shall be electronic.

Now, this wasn’t too bad — yes, a little time-consuming, but by and large, most of the check-in systems worked with just your phone’s web browser. Some even worked by SMS, no web browser or fancy check-in software needed. It was a bother if you didn’t have a phone on you (e.g. maybe you don’t like using them, or maybe you can’t for legal reasons), but most of the places where they were enforcing this policy had staff on hand who could take down your details.

The problems really started much later on, when the Queensland Government decided that there shall be one software package: theirs. This state was not unique in doing this; each state and territory decided that they could not pool resources together — wheels must be re-invented!

With restrictions opening up, they’re now making vaccination status a factor in deciding what your restrictions are. Okay, no big issue with this in principle, but once again, someone in Canberra thought that what the country really wanted was to spend all evening piss-farting around with getting MyGov and their local state/territory’s check-in application to talk to each other.

MyGov itself is its own barrel of WTFs. I never needed to worry about it until now… it took 6 attempts with pass to come up with a password that met its rather loosely defined standards, and don’t get me started on the “wish-it-were two-factor” authentication. I did manage to get an account set up, and indeed, the COVID-19 certificate is as basic as they come: a PDF generated using the Eclipse BIRT Report Engine, on what looks to be a Linux machine (or some Unix-like system with a /opt directory, anyway). The PDF itself just has the coat of arms in the background, and some basic text describing whom the certificate is for, what they got poked with and when. Nothing there that would allow machine verification whatsoever.

The International version (which I don’t have, as I lack a passport) embeds a rather large and complicated QR code encoding a JSON data structure (perhaps JOSE? I didn’t check) that seems to be digitally signed with an ECC-based private key. That QR code pushes the limits of what a standard QR code can store, but provided the person scanning it has a copy of the corresponding public key, all the data is there for verification.

The alternative to QRZilla is to make an opaque token, and have that link through to a page with further information. This is, after all, what all the check-in QR codes do anyway. Had MyGov embedded such a token on the certificate, it’d be a trivial matter for the document to be printed out, screen-shotted, or opened in an application that needs to check it, and have that token direct whatever check-in application to make an API call to the MyGov site to verify the certificate.

But no, they instead have on the MyGov site in addition to the link that gives you the rather bland PDF, a button that “shares with” the check-in applications. To see this button, you have to be logged in on the mobile device running the check-in application(s). For me, that’s the tablet, as my phone is too old for this check-in app stuff.

When you tap that button, it brings you to a page showing the smorgasbord of check-in applications you can theoretically share the certificate with. Naturally, “Check-in Queensland” is one of those; tapping it takes you to a legal agreement page which you must accept, and after that, magic is supposed to happen.

As you can gather, magic did not happen. I got this instead.

I at least had the PDF, which I’ve since printed, and stashed, so as far as I’m concerned, I’ve met the requirements. If some business owner wants to be a technical elitist, then they can stick it where it hurts.

In amongst the instructions, it makes two curious points:

  • On iOS devices, apparently Safari won’t work; they need you to use Chrome on iOS (which really is just Safari pretending to be Chrome)
  • Samsung’s browser apparently needs to be told to permit opening links in third-party applications

I use Firefox for Android on my tablet as I’m a Netscape user from way-back. I had a look at the settings to see if something could help there, and spotted this:

Turning the Open links in apps option on, I wondered if I could get this link-up to work. So, I dug out the password, logged in, navigated to the appropriate page… nada, nothing. They had changed the wording on the page, but the end result was the same.

So, I’m no closer than I was; and I think I’ll not bother from here on in.

As it is, I’m thankful I don’t need to go interstate. I’ve got better things to do than muck around with a computer every time I need to go to the shops! Service NSW had a good idea: rather than use their application, you could instead go to a website (perhaps with the aid of someone who had the means), punch in your details, and print out some sort of check-in certificate that the business could then scan. Presumably that same certificate could mention vaccination status.

Why this method of checking-in hasn’t been adopted nation-wide is a mystery to me. Seems ridiculous that each state needs to maintain its own database and software, when all these tools are supposed to be doing the same thing.

In any case, it’s a temporary problem: I for one, will be uninstalling any contact-tracing software at some point next year. Once we’re all mingling out in public, sharing coronaviruses with each-other, and internationally… it’ll be too much of a flood of data for each state’s contact tracers to keep up with everyone’s movements.

I’m happy to just tell my phone, tablet or GPS to record a track-log of where I’ve been, and maybe keep a diary, for the sake of these contact tracers. It’s not hard: when they announce that ${LOCATION} is a contact site, I check “have I been to ${LOCATION}?”, and get in touch if I have, turning over my diary/track-logs for the contact tracers to do their work. It’ll probably be more accurate than what all these silly applications can give them anyway.

We need to move on, and move forward.

Oct 072021
 

Recently, I noticed my network monitoring was down… I hadn’t worried about it because I had other things to keep me busy, and thankfully, my network monitoring, whilst important, isn’t mission critical.

I took a look at it today. The symptom was an odd one: influxd was running, and it was listening on the back-up/RPC port 8088, but not on port 8086 for queries.
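
For what it’s worth, a check along these lines is what shows that sort of thing (ss is part of iproute2; the exact output will vary):

sudo ss -tlnp | grep influxd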

It otherwise was generating logs as if it were online. What gives?

Tried some different settings, nothing… nada… zilch. Nothing would make it listen on port 8086.

Tried updating to 1.8 (was 1.1), still nothing.

Tried manually running it as root… sure enough, if I waited long enough, it started on its own, and did begin listening on port 8086. Hmmm, I wonder. I had a look at the init scripts:

#!/bin/bash -e

/usr/bin/influxd -config /etc/influxdb/influxdb.conf $INFLUXD_OPTS &
PID=$!
echo $PID > /var/lib/influxdb/influxd.pid

PROTOCOL="http"
BIND_ADDRESS=$(influxd config | grep -A5 "\[http\]" | grep '^  bind-address' | cut -d ' ' -f5 | tr -d '"')
HTTPS_ENABLED_FOUND=$(influxd config | grep "https-enabled = true" | cut -d ' ' -f5)
HTTPS_ENABLED=${HTTPS_ENABLED_FOUND:-"false"}
if [ $HTTPS_ENABLED = "true" ]; then
  HTTPS_CERT=$(influxd config | grep "https-certificate" | cut -d ' ' -f5 | tr -d '"')
  if [ ! -f "${HTTPS_CERT}" ]; then
    echo "${HTTPS_CERT} not found! Exiting..."
    exit 1
  fi
  echo "$HTTPS_CERT found"
  PROTOCOL="https"
fi
HOST=${BIND_ADDRESS%%:*}
HOST=${HOST:-"localhost"}
PORT=${BIND_ADDRESS##*:}

set +e
max_attempts=10
url="$PROTOCOL://$HOST:$PORT/health"
result=$(curl -k -s -o /dev/null $url -w %{http_code})
while [ "$result" != "200" ]; do
  sleep 1
  result=$(curl -k -s -o /dev/null $url -w %{http_code})
  max_attempts=$(($max_attempts-1))
  if [ $max_attempts -le 0 ]; then
    echo "Failed to reach influxdb $PROTOCOL endpoint at $url"
    exit 1
  fi
done
set -e

Ahh right, so start the server, check every second to see if it’s up, and if not, just abort and let systemd restart the whole shebang. Because turning the power on-off-on-off-on-off is going to make it go faster, right?

I changed max_attempts to 360 and the sleep to 10.
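
For reference, that leaves the polling section of the script looking like this, identical apart from those two constants:

set +e
max_attempts=360   # was 10: tolerate up to an hour of start-up time
url="$PROTOCOL://$HOST:$PORT/health"
result=$(curl -k -s -o /dev/null $url -w %{http_code})
while [ "$result" != "200" ]; do
  sleep 10         # was 1: poll every 10 seconds instead
  result=$(curl -k -s -o /dev/null $url -w %{http_code})
  max_attempts=$(($max_attempts-1))
  if [ $max_attempts -le 0 ]; then
    echo "Failed to reach influxdb $PROTOCOL endpoint at $url"
    exit 1
  fi
done
set -e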

Having fixed this, I am now getting data back into my system.

Oct 032021
 

So, the situation: I have two boxes that must replicate data between themselves and generally keep in contact with one another over a network (Ethernet or WiFi) that I do not control. I want the two to maintain a peer-to-peer VPN over this potentially hostile network: ensuring confidentiality and authenticity of data sent over the tunnelled link.

The two nodes should be able to try and find each-other via other means, such as mDNS (Avahi).

I had thought of just using OpenVPN in its P2P mode, but I figured I’d try something new, WireGuard. Both machines are running Debian 10 (Buster) on AMD64 hardware, but this should be reasonably applicable to lots of platforms and Linux-based OSes.

This assumes WireGuard is, in fact, installed: sudo apt-get install -y wireguard will do the deed on Debian/Ubuntu.

Initial settings

First, having installed WireGuard, I needed to make some decisions as to how the VPN would be addressed. I opted for an IPv6 ULA. Why IPv6? Well, remember I mentioned I do not control the network? They could be using any IPv4 subnet, including the one I might hypothetically choose for my own network. This is also true of ULAs, but there the probability of a collision is ridiculously small: a parts-per-billion chance, small enough to ignore!

So, I trundled over to a ULA generator site and generated a ULA. I made up a MAC address for this purpose. For the purposes of this document let’s pretend it gave me 2001:db8:aaaa::/48 as my address (yes, I know this is not a ULA, this is in the documentation prefix). For our VPN, we’ll statically allocate some addresses out of 2001:db8:aaaa:1000::/64, leaving the other address blocks free for other use as desired.
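
For the curious, RFC 4193’s suggested recipe (hash the current time together with a MAC address, and keep the low 40 bits of the digest as the “global ID”) can be approximated in shell. This is a rough sketch only, not necessarily what the generator site runs:

# Derive a 40-bit global ID from the current time and a (made-up) MAC address,
# in the spirit of RFC 4193: SHA-1, keeping the least-significant 40 bits.
GLOBALID=$( ( date +%s%N; echo 02:12:34:56:78:9a ) | sha1sum | cut -c31-40 )

# Format it as an fd00::/8 ULA prefix, e.g. fd3a:9c21:7b44::/48
printf 'fd%s:%s:%s::/48\n' "${GLOBALID:0:2}" "${GLOBALID:2:4}" "${GLOBALID:6:4}"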

For ease of set-up, we also picked a port number for each node to listen on. WireGuard’s Quick Start guide uses port 51820, which is as good as any, so we used that.

Finally, we need to choose a name for the VPN interface, wg0 seemed as good as any.

Summarising:

  • ULA: 2001:db8:aaaa::/48
  • VPN subnet: 2001:db8:aaaa:1000::/64
  • Listening port number: 51820
  • WireGuard interface: wg0

Generating keys

Each node needs a keypair for communicating with its peers. I did the following:

( umask 077 ; wg genkey > /etc/wg.priv )
wg pubkey < /etc/wg.priv > /etc/wg.pub
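
(The umask 077 in the subshell ensures the private key file is created with owner-only permissions; the public key can safely be world-readable.)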

I gathered all the wg.pub files from my nodes and stashed them locally.

Creating settings for all nodes

I then made some settings files for some shell scripts to load. First, a description of the VPN settings for wg0, I put this into /etc/wg0.conf:

INTERFACE=wg0
SUBNET_IP=2001:db8:aaaa:1000::
SUBNET_SZ=64
LISTEN_PORT=51820
PERSISTENT_KEEPALIVE=60

Then, in a directory called wg.peers, I added a file with the following content for each peer:

pubkey=<node's /etc/wg.pub content>
ip=<node's VPN IP>

The VPN IPs were just allocated starting at ::1 and counting upwards; do whatever you feel is appropriate for your virtual network. The IPs only need to be unique and within the same subnet.
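
As a concrete (made-up) example: a node whose hostname is node1 would get a file wg.peers/node1.local along these lines, matching the $(uname -n).local naming the set-up script below expects:

pubkey=<contents of node1's /etc/wg.pub>
ip=2001:db8:aaaa:1000::1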

Both the wg.peers and wg0.conf were copied to /etc on all nodes.

The VPN clean-up script

I mention this first because it makes debugging the set-up script easier: there’s a single command that will bring down the VPN and clean up /etc/hosts:

#!/bin/bash

. /etc/wg0.conf

if [ -d /sys/class/net/${INTERFACE} ]; then
	ip link set ${INTERFACE} down
	ip link delete dev ${INTERFACE}

	sed -i -e "/^${SUBNET_IP}/ d" /etc/hosts
fi

This checks for the existence of wg0, and if found, brings the link down and deletes it; then cleans up all VPN IPs from the /etc/hosts file. Copy this to /usr/local/sbin, make permissions 0700.
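
Something like this does it in one step (vpn-down.sh is just my name for it here; only vpn-up.sh is named below, so pick whatever matches your convention):

sudo install -m 0700 vpn-down.sh /usr/local/sbin/vpn-down.sh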

The VPN set-up script

This is what establishes the link. The set-up script can take arguments that tell it where to find each peer: e.g. peernode.local=10.20.30.40 to set a static IP, or peernode.local=10.20.30.40:12345 if an alternate port is needed.

Giving peernode.local=listen just tells the script to tell WireGuard to listen for an incoming connection from that peer, where-ever it happens to be.

If a peer is not mentioned, the script tries to discover its address using getent. The peer must already have a non-link-local, non-VPN address assigned, because getent cannot tell me which interface a link-local address came from. If an address is found and it responds to a ping, the script considers the node up and adds it.
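
For example, with mDNS doing its job, the look-up for a hypothetical peer might go (addresses made up):

getent hosts peernode.local
# 10.20.30.40     peernode.local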

Nodes that do not have a static address configured, are set to listen, or are not otherwise locatable and reachable, are dropped off the list for VPN set-up. For two peers, this makes sense, since we want them to actively seek each-other out; for three nodes you might want to add these in “listen” mode, an exercise I leave for the reader.
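
Putting that together: a typical invocation, re-using the example addresses from above (vpn-up.sh being the name the cron job below uses), might look like:

vpn-up.sh peernode.local=10.20.30.40

…or, on a node that should just wait to be contacted:

vpn-up.sh peernode.local=listen

The script itself: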

#!/bin/bash

set -e

. /etc/wg0.conf

# Pick up my IP details and private key
ME=$( uname -n ).local
MY_IP=$( . /etc/wg.peers/${ME} ; echo ${ip} )

# Abort if we already have our interface
if [ -d /sys/class/net/${INTERFACE} ]; then
	exit 0
fi

# Gather command line arguments
declare -A static_peers
while [ $# -gt 0 ]; do
	case "$1" in
	*=*)	# Peer address
		static_peers["${1%=*}"]="${1#*=}"
		shift
		;;
	*)
		echo "Unrecognised argument: $1"
		exit 1
	esac
done

# Gather the cryptography configuration settings
peers=""
for peerfile in /etc/wg.peers/*; do
	peer=$( basename ${peerfile} )
	if [ "${peer}" != ${ME} ]; then
		# Derive a host name for the endpoint on the VPN
		host=${peer%.local}
		vpn_hostname=${host}.vpn

		# Do we have an endpoint IP given on the command line?
		endpoint=${static_peers[${peer}]}

		if [ -n "${endpoint}" ] && [ "${endpoint}" != listen ]; then
			# Given an IP/name, add brackets around IPv6, add port number if needed.
			endpoint=$(
				echo "${endpoint}" | sed \
					-e 's/^[0-9a-f]\+:[0-9a-f]\+:[0-9a-f:]\+$/[&]/' \
					-e "s/^\\(\\[[0-9a-f:]\\+\\]\\|[0-9\\.]\+\\)\$/\1:${LISTEN_PORT}/"
			)
		elif [ -z "${endpoint}" ]; then
			# Try to resolve the IP address for the peer
			# Ignore link-local and VPN tunnel!
			endpoint_ip=$(
				getent hosts ${peer} \
					| cut -f 1 -d' ' \
					| grep -v "^\(fe80:\|169\.\|${SUBNET_IP}\)"
			)

			if ping -n -w 20 -c 1 ${endpoint_ip}; then
				# Endpoint is reachable.  Construct endpoint argument
				endpoint=$( echo ${endpoint_ip} | sed -e '/:/ s/^.*$/[&]/' ):${LISTEN_PORT}
			fi
		fi

		# Test reachability
		if [ -n "${endpoint}" ]; then
			# Pick up peer pubkey and VPN IP
			. ${peerfile}

			# Add to peers
			peers="${peers} peer ${pubkey}"
			if [ "${endpoint}" != "listen" ]; then
				peers="${peers} endpoint ${endpoint}"
			fi
			peers="${peers} persistent-keepalive ${PERSISTENT_KEEPALIVE}"
			peers="${peers} allowed-ips ${SUBNET_IP}/${SUBNET_SZ}"

			if ! grep -q "${vpn_hostname} ${host}\$" /etc/hosts ; then
				# Add to /etc/hosts
				echo "${ip} ${vpn_hostname} ${host}" >> /etc/hosts
			else
				# Update /etc/hosts
				sed -i -e "/${vpn_hostname} ${host}\\$/ s/^[^ ]\+/${ip}/" \
					/etc/hosts
			fi
		else
			# Remove from /etc/hosts
			sed -i -e "/${vpn_hostname} ${host}\\$/ d" \
				/etc/hosts
		fi
	fi
done

# Abort if no peers
if [ -z "${peers}" ]; then
	exit 0
fi

# Create the interface
ip link add ${INTERFACE} type wireguard

# Configure the cryptographic settings
wg set ${INTERFACE} listen-port ${LISTEN_PORT} \
	private-key /etc/wg.priv ${peers}

# Bring the interface up
ip -6 addr add ${MY_IP}/${SUBNET_SZ} dev ${INTERFACE}
ip link set ${INTERFACE} up

This is run from /etc/cron.d/vpn:

* * * * * root /usr/local/sbin/vpn-up.sh >> /tmp/vpn.log 2>&1
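
Since the script exits immediately when wg0 already exists (the check near the top), running it every minute from cron is cheap: it only does real work when the tunnel is down or not yet established.
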
Sep 192021
 

I stumbled across this article regarding the use of TCP over sensor networks. Now, TCP has been done with AX.25 before, and generally suffers greatly from packet collisions. Apparently (I haven’t read more than the first few paragraphs of this article), implementations of TCP can be tuned to improve performance in such networks, which may mean TCP can be made more practical on packet radio networks.

Prior to seeing this, I had thought 6LoWHAM would “tunnel” TCP over a conventional AX.25 connection using I-frames and S-frames to carry TCP segments with some header prepended so that multiple TCP connections between two peers can share the same AX.25 connection.

I’ve printed it out, and made a note of it here… when I get a moment I may give this a closer look. Ultimately I still think multicast communications is the way forward here: radio inherently favours one-to-many communications due to it being a shared medium, but there are definitely situations in which being able to do one-to-one communications applies; and for those, TCP isn’t a bad solution.

Comments having read the article

So, I had a read through it. The take-aways seem to be this:

  • TCP was historically seen as “too heavy” because the MCUs of the day (circa 2002) lacked the RAM needed for TCP data structures. More modern MCUs have orders of magnitude more RAM (32KiB vs 512B) today, and so this is less of an issue.
    • For 6LoWHAM, intended for single-board computers running Linux, this will not be an issue.
  • A lot of early experiments with TCP over sensor networks tried to set a conservative MSS based on the actual link MTU, leading to TCP headers dominating the lower-level frame. Leaning on 6LoWPAN’s ability to fragment IP datagrams led to much improved performance.
    • 6LoWHAM uses AX.25 which can support 256-byte frames; vs 128-byte 802.15.4 frames on 6LoWPAN. Maybe gains can be made this way, but we’re already a bit ahead on this.
  • Much of the document considered battery-powered nodes, in which the radio transceiver was powered down completely for periods of time to save power, and the effects this had on TCP communications. Optimisations were able to be made that reduced the impact of such power-down events.
    • 6LoWHAM will likely be using conventional VHF/UHF transceivers. Hand-helds often implement a “battery saver” mode — often this is configured inside the device with no external control possible (thus it will not be possible for us to control, or even detect, when the receiver is powered down). Mobile sets often do not implement this, and you do not want to frequently power-cycle a modern mobile transceiver at the sorts of rates that 802.15.4 radios get power-cycled!
  • Performance in ideal conditions favoured TCP, with the article authors managing to achieve 30% of the raw link bandwidth (75kbps of a theoretical 250kbps maximum), with the underlying hardware being fingered as a possible cause for performance issues.
    • Assuming we could manage the same percentage; that would equate to ~360bps on 1200-baud networks, or 2.88kbps on 9600-baud networks.
  • With up to 15% packet loss, TCP and CoAP (its nearest contender) can perform about the same in terms of reliability.
  • A significant factor in effective data rate is CSMA/CA. aioax25 effectively does CSMA/CA too.

It’s interesting to note they didn’t try to do anything special with the TCP headers (e.g. Van Jacobson compression). I’ll have to have a look at TCP and see just how much overhead there is in a typical segment, and whether the roughly-double MTU of AX.25 will help or not: the article recommends using an MSS of approximately 3× the link MTU for “fair” conditions (so ~384 bytes), and 5× in “good” conditions (~640 bytes).

It’s worth noting a 256-byte AX.25 frame takes ~2 seconds to transmit on a 1200-baud link. You really don’t want to make that a habit! So smaller transmissions using UDP-based protocols may still be worthwhile in our application.
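
As a quick sanity-check of that figure: 256 bytes × 8 = 2048 bits, which at 1200 bps is about 1.7 seconds for the payload alone; AX.25 addressing, control and FCS fields, opening/closing flags and bit-stuffing push the total out towards 2 seconds.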