Jan 03 2022
 

So, this year I had a new-year’s resolution of sorts… when we first started this “work from home” journey due to China’s “gift”, I just temporarily set up on the dinner table, which was, of course, only meant to be for another few months.

Well, nearly 2 years later, we’re still working from home, and work has expanded to the point that a move to the office, on any permanent basis, is pretty much impossible now unless the business moves to a bigger building. With this in mind, I decided I’d clear off the dinner table, and clean up my room sufficiently to set up my workstation in there.

That meant re-arranging some things, but for the most part, I had the space already. So some stuff I wasn’t using got thrown into boxes to be moved into the garage. My CD collection similarly got moved into the garage (I have it on the computer, but need to retain the physical discs as they represent my “personal use license”), and lo and behold, I could set up my workstation.

The new workspace

One of my colleagues spotted the Indy and commented about the old classic SGI logo. Some might notice there’s also an O2 lurking in the shadows. Those who have known me for a while will remember I did help maintain a Linux distribution for these classic machines, among others, and had a reasonable collection of my own:

My Indy, O2 and Indigo2 R10000
The Octane, booting up

These machines were all eBay purchases, as is the Sun monitor pictured (it came with the Octane). Sadly, fast forward a number of years, and these machines are mostly door stops and paperweights now.

The Octane’s and Indigo2’s demises

The Octane died when I tried to clean it out with a vacuum cleaner, without realising the effect of static electricity generated by the vacuum cleaner itself. I might add mine was a particularly old unit: it had a 175MHz R10000 CPU, and I remember the Linux kernel didn’t recognise the power management circuitry in the PSU without me manually patching it.

The Indigo2 mysteriously stopped working without any clear reason why; I’ve never actually tried to diagnose the issue.

That left the Indy and the O2 as working machines. I hadn’t fired them up in a long time until today. I figured, what the hell: do they still run?

Trying the Indy and O2 out

Plug in the Indy, hit the power button… nothing. Dead as a doornail. Okay, put it aside… what about the O2?

I plug it in, shuffle it closer to the monitor so I can connect it. Lo and behold:

The O2 lives!

Of course, the machine was set up to use a serial console as its primary interface, and boot-up was running very slowly.

Booting up… very slowly…

It sat there like that for a while. Figuring the action was happening on a serial port, I went to go get my null modem cable, only to find a log-in prompt by the time I got back.

Next was remembering what password I was using when I last ran this machine. The OpenSSL Heartbleed vulnerability had happened since then, and at about that time I revoked all my OpenPGP keys and changed all my passwords, so it isn’t what I use today. I couldn’t get in as root, but my regular user account worked, and I was able to change the root password via sudo.

Remembering my old log-in credentials, from 22 years ago it seems

The machine crashed soon after that. I tried rebooting; this time I tweaked some PROM settings (and yes, I was rusty remembering how to do it) to be able to see what was going on. (I had the null modem cable in hand, but didn’t feel like trying to blindly find the serial port at the back of my desktop.)

Changing PROM settings
The subsequent boot, and crash

Evidently, I had a dud disk. This did not surprise me in the slightest. I also noticed the PSU fan was not spinning, possibly seized after years of non-use.

Okay, there were two disks installed in this machine, both 80-pin SCA SCSI drives. Which one was it? I took a punt and tried the one furthest away from the I/O ports.

Success, she boots now

I managed to reset the root password before the machine powered itself off (possibly because of overheating). I suspect the machine will need the dust blown out of it (safely! — not using the method that killed the Octane!), and the HDDs will need replacements. The culprit was this one (which I guessed correctly first go):

a 4GB HDD was a big drive back in 1998!

The computer I’m typing this on has an HDD that could store the contents of 1000 of these drives. Today, there are modern alternatives, such as SCSI2SD, that could get this machine running fully if needed. The tricky bit would be handling the 80-pin hot-swap interface. There’d be some hardware hacking needed to connect the two, but AU$145 plus an adaptor seems like a safer bet than trying some random used HDD.

So, replacements for the HDDs, a clean-out, and possibly a new fan or two, and that machine will be back to a “working” state. Of course, the Linux landscape has moved on since then: Debian no longer supports the MIPS4 ISA that the RM5200 CPU understands, though Gentoo could still work on this, and maybe OpenBSD still supports it too. In short, this machine is enough of a “go-er” that it should not be sent to landfill… yet.

Turning my attention back to the Indy

So the Indy was my first SGI machine. Bought to better understand the MIPS processor architecture, and perhaps gain enough understanding to try and breathe life into a Cobalt Qube II server appliance (remember those?), it did teach me a lot about Linux and how things vary between platforms.

I figured I might as well pop the cover and see if there’s anything “obviously” wrong. I was rusty on the procedure, but I recalled there was a little catch on the back of the case that needed to be released before the cover slid off. So I lugged the 20″ CRT off the top of the machine, pulled the non-functioning Indy out, and put it on the table to inspect further.

Upon trying to pop the cover (gently!), the top of the case just exploded. Two pieces of the top cover went flying, and the RF shield parted company with the cover’s underside.

The RF shield parted company with the underside of the lid

I was left with a handful of small plastic fragments that were the heat-set posts holding the RF shield to the inside of the lid.

Some of the fragments that once held the RF shield in place

Clearly, the plastic has become brittle over the years. These machines were released in 1993; I think this might be a 1994 model, as it has a slightly upgraded R4600 CPU in it.

As to the machine itself, I had a quick sticky-beak; there didn’t seem to be anything immediately obvious, but to be honest, I didn’t do a very thorough check. Maybe there’s some corrosion under the motherboard I didn’t spot, maybe it’s just a blown fuse in the PSU, who knows?

The inside of the Indy

This particular machine had 256MB RAM (a lot for its day), 8-bit Newport XL graphics, the “Indy Presenter” LCD interface (somewhere, we have the 15″ monitor it came with — sadly the connecting cable has some damaged conductors), and a 9.1GB HDD I added some time back.

Where to now?

I was hanging on to these machines with the thinking that someone who was interested in experimenting with RISC machines might want them — find them a new home rather than sending them to landfill. I guess that’s still an option for the O2, as it still boots: so long as its remaining HDD doesn’t die it’ll be fine.

For the others, there’s the possibility of combining bits to make a functional frankenmachine from lots of parts. The Indy will need a new PROM battery if someone does manage to breathe life into it.

The Octane had two SCSI interfaces, one of which was dead — a problem that was known about before I even acquired it. The PROM would bitch and moan about the dead SCSI interface for a good two minutes before giving up and dumping you in the boot menu. Press 1, and it’d hand over to arcload, which would boot a Linux kernel from a disk on the working controller. Linux would see the dead SCSI controller, and proceed to ignore it, booting just fine as if nothing had happened.

The Indigo2 R10000 was always the red-headed stepchild: an artefact of the machine’s design. The IP22 design (Indy and Indigo2) was never intended to be used with an R10000 CPU, and the speculative execution features played havoc with caching on this design. The Octane worked fine because it was designed from the outset to run this CPU. The O2 could be made to work because of a quirk of its hardware design, but the Indigo2 was not so flexible, so kernel-space code had to hack around the problem in software.

I guess I’d still like to see the machines go to a good home, but no idea who that’d be in this day and age. Somewhere, I have a partial disc set of Irix 6.5, and there’s also a 20″ SGI GDM5410 monitor (not the Sun monitor pictured above) that, at last check, did still work.

It’ll be a sad day when these go to the tip.

Dec 01 2021
 

You’d be hard pressed to find a global event that has brought as much pandemonium as this COVID-19 situation has in the last two years. Admittedly, Australia seems to have come out of it better than most nations, but not without our own tortoise-and-hare moment on the vaccination “stroll-out”.

One area where we’re all slowly trying to figure out a way to get along is contact tracing, and proving vaccination status.

Now, it’s far from a unique problem. If Denso Wave were charging royalties each time a QR code were created or scanned, they’d be richer than Microsoft, Amazon and Apple put together by now. In the beginning of the pandemic, when a need for effective contact tracing was first proposed, we initially did things on paper.

Evidently though, at least here in Queensland, our education system has proven ineffective at teaching today’s crop of adults how to work a pen, with a sufficient number seemingly being unable to write in a legible manner. And so, the state government here mandated that all records shall be electronic.

Now, this wasn’t too bad, yes a little time-consuming, but by and large, most of the check-in systems worked with just your phone’s web browser. Some even worked by SMS, no web browser or fancy check-in software needed. It was a bother if you didn’t have a phone on you (e.g. maybe you don’t like using them, or maybe you can’t for legal reasons), but most of the places where they were enforcing this policy had staff on hand who could take down your details.

The problems really started much later on, when the Queensland Government decided that there shall be one software package: theirs. This state was not unique in doing this; each state and territory decided that they could not pool resources together — wheels must be re-invented!

With restrictions opening up, they’re now making vaccination status a factor in deciding what restrictions apply to you. Okay, no big issue with this in principle, but once again, someone in Canberra thought that what the country really wanted to do was to spend all evening piss-farting around with getting MyGov and their local state/territory’s check-in application to talk to each other.

MyGov itself is its own barrel of WTFs. I’d never needed to worry about it until now… it took 6 attempts with pass to come up with a password that met its rather loosely defined standards, and don’t get me started on the “wish-it-were two-factor” authentication. I did manage to get an account set up, and indeed, the COVID-19 certificate is as basic as they come; a PDF generated using the Eclipse BIRT Report Engine, on what looks to be a Linux machine (or some Unix-like system with a /opt directory anyway). The PDF itself just has the coat-of-arms in the background, and some basic text describing whom the certificate is for, what they got poked with and when. Nothing there that would allow machine verification whatsoever.

The International version (which I don’t have, as I lack a passport) embeds a rather large and complicated QR code containing a JSON data structure (perhaps JOSE? I didn’t check) that seems to be digitally signed with an ECC-based private key. That QR code pushes the limits of what a standard QR code can store, but provided the person scanning it has a copy of the corresponding public key, all the data is there for verification.

The alternative to QRZilla is to make an opaque token, and have that link through to a page with further information. This is, after all, what all the check-in QR codes do anyway. Had MyGov embedded such a token on the certificate, it’d be a trivial matter for the document to be printed out, screen-shotted, or opened in an application that needs to check it, and for that to direct whatever check-in application to make an API call to the MyGov site to verify the certificate.
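Something like the following is all a check-in application would need to do with such a token. This is purely hypothetical: the endpoint, token and response fields are invented for illustration, no such MyGov API exists.

# Hypothetical sketch only: endpoint, token and fields are made up.
TOKEN="0f7c2e9a-0000-4000-8000-000000000000"   # opaque token scanned off the certificate
curl -s "https://example.gov.au/api/v1/covid-certificate/${TOKEN}" \
    | jq '.holder_name, .status'               # e.g. "Joe Citizen", "valid"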

But no: instead, in addition to the link that gives you the rather bland PDF, the MyGov site has a button that “shares” the certificate with the check-in applications. To see this button, you have to be logged in on the mobile device running the check-in application(s). For me, that’s the tablet, as my phone is too old for this check-in app stuff.

When you tap that button, it brings you to a page showing you the smorgasbord of check-in applications you can theoretically share the certificate with. Naturally, “Check-in Queensland” is one of those; tapping it takes you to a legal agreement page which you must accept, and after that, magic is supposed to happen.

As you can gather, magic did not happen. I got this instead.

I at least had the PDF, which I’ve since printed, and stashed, so as far as I’m concerned, I’ve met the requirements. If some business owner wants to be a technical elitist, then they can stick it where it hurts.

In amongst the instructions, it makes two curious points:

  • On iOS devices, apparently Safari won’t work; they need you to use Chrome on iOS (which really is just Safari pretending to be Chrome)
  • Samsung’s browser apparently needs to be told to permit opening links in third-party applications

I use Firefox for Android on my tablet as I’m a Netscape user from way-back. I had a look at the settings to see if something could help there, and spotted this:

Turning the Open links in apps option on, I wondered if I could get this link-up to work. So, I dug out the password, logged in, navigated to the appropriate page… nada, nothing. They’d changed the wording on the page, but the end result was the same.

So, I’m no closer than I was; and I think I’ll not bother from here on in.

As it is, I’m thankful I don’t need to go interstate. I’ve got better things to do than to muck around with a computer every time I need to go to the shops! Service NSW had a good idea in that, rather than use their application, you could instead go to a website (perhaps with the aid of someone who had the means), punch in your details, and print out some sort of check-in certificate that the business could then scan. Presumably that same certificate could mention vaccination status.

Why this method of checking-in hasn’t been adopted nation-wide is a mystery to me. Seems ridiculous that each state needs to maintain its own database and software, when all these tools are supposed to be doing the same thing.

In any case, it’s a temporary problem: I for one, will be uninstalling any contact-tracing software at some point next year. Once we’re all mingling out in public, sharing coronaviruses with each-other, and internationally… it’ll be too much of a flood of data for each state’s contact tracers to keep up with everyone’s movements.

I’m happy to just tell my phone, tablet or GPS to record a track-log of where I’ve been, and maybe keep a diary — for the sake of these contact tracers. Not hard: when they make an announcement that ${LOCATION} is a contact site, I check “have I been to ${LOCATION}?” and get in touch if I have, turning over my diary/track logs for the contact tracers to do their work. It’ll probably be more accurate than what all these silly applications can give them anyway.

We need to move on, and move forward.

Oct 07 2021
 

Recently, I noticed my network monitoring was down… I hadn’t worried about it because I had other things to keep me busy, and thankfully, my network monitoring, whilst important, isn’t mission critical.

I took a look at it today. The symptom was an odd one: influxd was running, and it was listening on the back-up/RPC port 8088, but not on port 8086 for queries.
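For the record, this is the sort of check that showed the problem (ss is from iproute2; netstat -tlnp tells you much the same on older systems):

# Which TCP ports is influxd actually listening on?
sudo ss -tlnp | grep influxd
# Expected: sockets on both 8088 (backup/RPC) and 8086 (HTTP API).
# In this case, only 8088 showed up.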

It otherwise was generating logs as if it were online. What gives?

Tried some different settings, nothing… nada… zilch. Nothing would make it listen on port 8086.

Tried updating to 1.8 (was 1.1), still nothing.

Tried manually running it as root… sure enough, if I waited long enough, it started on its own, and did begin listening on port 8086. Hmmm, I wonder. I had a look at the init scripts:

#!/bin/bash -e

/usr/bin/influxd -config /etc/influxdb/influxdb.conf $INFLUXD_OPTS &
PID=$!
echo $PID > /var/lib/influxdb/influxd.pid

PROTOCOL="http"
BIND_ADDRESS=$(influxd config | grep -A5 "\[http\]" | grep '^  bind-address' | cut -d ' ' -f5 | tr -d '"')
HTTPS_ENABLED_FOUND=$(influxd config | grep "https-enabled = true" | cut -d ' ' -f5)
HTTPS_ENABLED=${HTTPS_ENABLED_FOUND:-"false"}
if [ $HTTPS_ENABLED = "true" ]; then
  HTTPS_CERT=$(influxd config | grep "https-certificate" | cut -d ' ' -f5 | tr -d '"')
  if [ ! -f "${HTTPS_CERT}" ]; then
    echo "${HTTPS_CERT} not found! Exiting..."
    exit 1
  fi
  echo "$HTTPS_CERT found"
  PROTOCOL="https"
fi
HOST=${BIND_ADDRESS%%:*}
HOST=${HOST:-"localhost"}
PORT=${BIND_ADDRESS##*:}

set +e
max_attempts=10
url="$PROTOCOL://$HOST:$PORT/health"
result=$(curl -k -s -o /dev/null $url -w %{http_code})
while [ "$result" != "200" ]; do
  sleep 1
  result=$(curl -k -s -o /dev/null $url -w %{http_code})
  max_attempts=$(($max_attempts-1))
  if [ $max_attempts -le 0 ]; then
    echo "Failed to reach influxdb $PROTOCOL endpoint at $url"
    exit 1
  fi
done
set -e

Ahh right, so start the server, check every second to see if it’s up, and if it isn’t up within 10 seconds, just abort and let systemd restart the whole shebang. Because turning the power on-off-on-off-on-off is going to make it go faster, right?

I changed max_attempts to 360 and the sleep to 10.
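With those two values changed, the retry loop in that init script now looks like this, giving the daemon up to an hour to come good rather than 10 seconds:

max_attempts=360
url="$PROTOCOL://$HOST:$PORT/health"
result=$(curl -k -s -o /dev/null $url -w %{http_code})
while [ "$result" != "200" ]; do
  sleep 10
  result=$(curl -k -s -o /dev/null $url -w %{http_code})
  max_attempts=$(($max_attempts-1))
  if [ $max_attempts -le 0 ]; then
    echo "Failed to reach influxdb $PROTOCOL endpoint at $url"
    exit 1
  fi
done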

Having fixed this, I am now getting data back into my system.

Oct 03 2021
 

So, the situation: I have two boxes that must replicate data between themselves and generally keep in contact with one another over a network (Ethernet or WiFi) that I do not control. I want the two to maintain a peer-to-peer VPN over this potentially hostile network: ensuring confidentiality and authenticity of data sent over the tunnelled link.

The two nodes should be able to try and find each other via other means, such as mDNS (Avahi).

I had thought of just using OpenVPN in its P2P mode, but I figured I’d try something new, WireGuard. Both machines are running Debian 10 (Buster) on AMD64 hardware, but this should be reasonably applicable to lots of platforms and Linux-based OSes.

This assumes WireGuard is, in fact, installed: sudo apt-get install -y wireguard will do the deed on Debian/Ubuntu.

Initial settings

First, having installed WireGuard, I needed to make some decisions as to how the VPN would be addressed. I opted for using an IPv6 ULA. Why IPv6? Well, remember I mentioned I do not control the network? They could be using any IPv4 subnet, including the one I hypothetically might choose for my own network. This is also true of ULAs, however the probabilities are ridiculously small: parts per billion chance, enough to ignore!

So, I trundled over to a ULA generator site and generated a ULA. I made up a MAC address for this purpose. For the purposes of this document let’s pretend it gave me 2001:db8:aaaa::/48 as my address (yes, I know this is not a ULA, this is in the documentation prefix). For our VPN, we’ll statically allocate some addresses out of 2001:db8:aaaa:1000::/64, leaving the other address blocks free for other use as desired.
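If you’d rather not rely on a web site, something like this rolls a prefix locally. (Strictly speaking, RFC 4193 derives the 40-bit global ID from a hash of a MAC address and a timestamp; grabbing 40 random bits instead is close enough in practice.)

# Generate a random RFC 4193-style ULA prefix: fd00::/8 plus 40 random bits.
printf 'fd%s:%s:%s::/48\n' $(
    od -An -N5 -tx1 /dev/urandom | tr -d ' ' | sed 's/^\(..\)\(....\)\(....\)$/\1 \2 \3/'
)
# Example output: fd3c:91f2:7ab5::/48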

For ease of set-up, we also picked a port number for each node to listen on; WireGuard’s Quick Start guide uses port 51820, which is as good as any, so we used that.

Finally, we need to choose a name for the VPN interface; wg0 seemed as good as any.

Summarising:

  • ULA: 2001:db8:aaaa::/48
  • VPN subnet: 2001:db8:aaaa:1000::/64
  • Listening port number: 51820
  • WireGuard interface: wg0

Generating keys

Each node needs a keypair for communicating with its peers. I did the following:

( umask 077 ; wg genkey > /etc/wg.priv )
wg pubkey < /etc/wg.priv > /etc/wg.pub

I gathered all the wg.pub files from my nodes and stashed them locally.

Creating settings for all nodes

I then made some settings files for some shell scripts to load. First, a description of the VPN settings for wg0, I put this into /etc/wg0.conf:

INTERFACE=wg0
SUBNET_IP=2001:db8:aaaa:1000::
SUBNET_SZ=64
LISTEN_PORT=51820
PERSISTENT_KEEPALIVE=60

Then, in a directory called wg.peers, I added a file with the following content for each peer:

pubkey=<node's /etc/wg.pub content>
ip=<node's VPN IP>

The VPN IP was just allocated starting at ::1 and counting upwards; do whatever you feel is appropriate for your virtual network. The IPs only need to be unique and within that same subnet.
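As a concrete, made-up example, /etc/wg.peers/peernode.local for the first node might look like this (the public key here is just a dummy value):

pubkey=kBt2mRSZ9mYcqU3h1F0Gk5jX8rJm4W2nY6dVqHs3bUo=
ip=2001:db8:aaaa:1000::1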

Both the wg.peers and wg0.conf were copied to /etc on all nodes.

The VPN clean-up script

I mention this first since it makes debugging the set-up script easier: there’s a single command that will bring down the VPN and clean up /etc/hosts:

#!/bin/bash

. /etc/wg0.conf

if [ -d /sys/class/net/${INTERFACE} ]; then
	ip link set ${INTERFACE} down
	ip link delete dev ${INTERFACE}

	sed -i -e "/^${SUBNET_IP}/ d" /etc/hosts
fi

This checks for the existence of wg0, and if found, brings the link down and deletes it; then cleans up all VPN IPs from the /etc/hosts file. Copy this to /usr/local/sbin, make permissions 0700.
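Installing it is a one-liner (vpn-down.sh is just what I’d call it; the name doesn’t matter):

# Install the clean-up script with root-only permissions.
sudo install -o root -g root -m 0700 vpn-down.sh /usr/local/sbin/vpn-down.sh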

The VPN set-up script

This is what establishes the link. The set-up script can take arguments that tell it where to find each peer: e.g. peernode.local=10.20.30.40 to set a static IP, or peernode.local=10.20.30.40:12345 if an alternate port is needed.

Giving peernode.local=listen just tells the script to tell WireGuard to listen for an incoming connection from that peer, wherever it happens to be.

If a peer is not mentioned, the script tries to discover the address of the peer using getent; the peer must have a non-link-local, non-VPN address assigned to it already (this is because getent cannot tell me which interface a link-local address came from). If it does, and the script can ping that address, it considers the node up and adds it.

Nodes that do not have a static address configured, are set to listen, or are not otherwise locatable and reachable, are dropped off the list for VPN set-up. For two peers, this makes sense, since we want them to actively seek each other out; for three or more nodes you might want to add these in “listen” mode, an exercise I leave for the reader.

#!/bin/bash

set -e

. /etc/wg0.conf

# Pick up my IP details and private key
ME=$( uname -n ).local
MY_IP=$( . /etc/wg.peers/${ME} ; echo ${ip} )

# Abort if we already have our interface
if [ -d /sys/class/net/${INTERFACE} ]; then
	exit 0
fi

# Gather command line arguments
declare -A static_peers
while [ $# -gt 0 ]; do
	case "$1" in
	*=*)	# Peer address
		static_peers["${1%=*}"]="${1#*=}"
		shift
		;;
	*)
		echo "Unrecognised argument: $1"
		exit 1
	esac
done

# Gather the cryptography configuration settings
peers=""
for peerfile in /etc/wg.peers/*; do
	peer=$( basename ${peerfile} )
	if [ "${peer}" != ${ME} ]; then
		# Derive a host name for the endpoint on the VPN
		host=${peer%.local}
		vpn_hostname=${host}.vpn

		# Do we have an endpoint IP given on the command line?
		endpoint=${static_peers[${peer}]}

		if [ -n "${endpoint}" ] && [ "${endpoint}" != listen ]; then
			# Given an IP/name, add brackets around IPv6, add port number if needed.
			endpoint=$(
				echo "${endpoint}" | sed \
					-e 's/^[0-9a-f]\+:[0-9a-f]\+:[0-9a-f:]\+$/[&]/' \
					-e "s/^\\(\\[[0-9a-f:]\\+\\]\\|[0-9\\.]\+\\)\$/\1:${LISTEN_PORT}/"
			)
		elif [ -z "${endpoint}" ]; then
			# Try to resolve the IP address for the peer
			# Ignore link-local and VPN tunnel!
			endpoint_ip=$(
				getent hosts ${peer} \
					| cut -f 1 -d' ' \
					| grep -v "^\(fe80:\|169\.\|${SUBNET_IP}\)"
			)

			if ping -n -w 20 -c 1 ${endpoint_ip}; then
				# Endpoint is reachable.  Construct endpoint argument
				endpoint=$( echo ${endpoint_ip} | sed -e '/:/ s/^.*$/[&]/' ):${LISTEN_PORT}
			fi
		fi

		# Test reachability
		if [ -n "${endpoint}" ]; then
			# Pick up peer pubkey and VPN IP
			. ${peerfile}

			# Add to peers
			peers="${peers} peer ${pubkey}"
			if [ "${endpoint}" != "listen" ]; then
				peers="${peers} endpoint ${endpoint}"
			fi
			peers="${peers} persistent-keepalive ${PERSISTENT_KEEPALIVE}"
			peers="${peers} allowed-ips ${SUBNET_IP}/${SUBNET_SZ}"

			if ! grep -q "${vpn_hostname} ${host}\\$" /etc/hosts ; then
				# Add to /etc/hosts
				echo "${ip} ${vpn_hostname} ${host}" >> /etc/hosts
			else
				# Update /etc/hosts
				sed -i -e "/${vpn_hostname} ${host}\\$/ s/^[^ ]\+/${ip}/" \
					/etc/hosts
			fi
		else
			# Remove from /etc/hosts
			sed -i -e "/${vpn_hostname} ${host}\\$/ d" \
				/etc/hosts
		fi
	fi
done

# Abort if no peers
if [ -z "${peers}" ]; then
	exit 0
fi

# Create the interface
ip link add ${INTERFACE} type wireguard

# Configure the cryptographic settings
wg set ${INTERFACE} listen-port ${LISTEN_PORT} \
	private-key /etc/wg.priv ${peers}

# Bring the interface up
ip -6 addr add ${MY_IP}/${SUBNET_SZ} dev ${INTERFACE}
ip link set ${INTERFACE} up

This is run from /etc/cron.d/vpn:

* * * * * root /usr/local/sbin/vpn-up.sh >> /tmp/vpn.log 2>&1
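For testing, the script can also be run by hand (peernode.local and the address below are placeholders):

# Bring the VPN up, giving the peer's address explicitly:
sudo /usr/local/sbin/vpn-up.sh peernode.local=10.20.30.40

# ...or just listen for that peer to find us:
sudo /usr/local/sbin/vpn-up.sh peernode.local=listen

# Check the interface and last-handshake status:
sudo wg show wg0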
Sep 19 2021
 

I stumbled across this article regarding the use of TCP over sensor networks. Now, TCP has been done with AX.25 before, and generally suffers greatly from packet collisions. Apparently (I haven’t read more than the first few paragraphs of this article), TCP implementations can be tuned to improve performance in such networks, which may mean TCP can be made more practical on packet radio networks.

Prior to seeing this, I had thought 6LoWHAM would “tunnel” TCP over a conventional AX.25 connection using I-frames and S-frames to carry TCP segments with some header prepended so that multiple TCP connections between two peers can share the same AX.25 connection.

I’ve printed it out, and made a note of it here… when I get a moment I may give this a closer look. Ultimately I still think multicast communications is the way forward here: radio inherently favours one-to-many communications due to it being a shared medium, but there are definitely situations in which being able to do one-to-one communications applies; and for those, TCP isn’t a bad solution.

Comments having read the article

So, I had a read through it. The take-aways seem to be this:

  • TCP was historically seen as “too heavy” because the MCUs of the day (circa 2002) lacked the RAM needed for TCP data structures. More modern MCUs have orders of magnitude more RAM (32KiB vs 512B) today, and so this is less of an issue.
    • For 6LoWHAM, intended for single-board computers running Linux, this will not be an issue.
  • A lot of early experiments with TCP over sensor networks tried to set a conservative MSS based on the actual link MTU, leading to TCP headers dominating the lower-level frame. Leaning on 6LoWPAN’s ability to fragment IP datagrams led to much improved performance.
    • 6LoWHAM uses AX.25 which can support 256-byte frames; vs 128-byte 802.15.4 frames on 6LoWPAN. Maybe gains can be made this way, but we’re already a bit ahead on this.
  • Much of the document considered battery-powered nodes, in which the radio transceiver was powered down completely for periods of time to save power, and the effects this had on TCP communications. Optimisations were able to be made that reduced the impact of such power-down events.
    • 6LoWHAM will likely be using conventional VHF/UHF transceivers. Hand-helds often implement a “battery saver” mode — often this is configured inside the device with no external control possible (thus it will not be possible for us to control, or even detect, when the receiver is powered down). Mobile sets often do not implement this, and you do not want to frequently power-cycle a modern mobile transceiver at the sorts of rates that 802.15.4 radios get power-cycled!
  • Performance in ideal conditions favoured TCP, with the article authors managing to achieve 30% of the raw link bandwidth (75kbps of a theoretical 250kbps maximum), with the underlying hardware being fingered as a possible cause for performance issues.
    • Assuming we could manage the same percentage; that would equate to ~360bps on 1200-baud networks, or 2.88kbps on 9600-baud networks.
  • With up to 15% packet loss, TCP and CoAP (its nearest contender) can perform about the same in terms of reliability.
  • A significant factor in effective data rate is CSMA/CA. aioax25 effectively does CSMA/CA too.

It’s interesting to note they didn’t try to do anything special with the TCP headers (e.g. Van Jacobson compression). I’ll have to have a look at TCP and see just how much overhead there is in a typical segment, and whether the roughly double MTU of AX.25 will help or not: the article recommends using an MSS of approximately 3× the link MTU for “fair” conditions (so ~384 bytes), and 5× in “good” conditions (~640 bytes).

It’s worth noting a 256-byte AX.25 frame takes ~2 seconds to transmit on a 1200-baud link. You really don’t want to make that a habit! So smaller transmissions using UDP-based protocols may still be worthwhile in our application.

Jun 20 2021
 

So, today on the radio I heard that from this Friday, our state government was “expanding” the use of their Check-in Queensland program. Now, since my last post on the topic, I have since procured a new tablet. The tablet was purchased for completely unrelated reasons, namely:

  1. to provide navigation assistance, current speed monitoring and positional logging whilst on the bicycle (basically, what my Garmin Rino-650 does)
  2. to act as a media player (basically what my little AGPTek R2 is doing — a device I’ve now outgrown)
  3. to provide a front-end for a SDR receiver I’m working on
  4. run Slack for monitoring operations at work

Since it’s a modern Android device, it happens to be able to run the COVID-19 check-in programs. So I have COVIDSafe and Check-in Queensland installed. For those to work though, I have to run my existing phone’s WiFi hotspot. A little cumbersome, but it works, and I get the best of both worlds: modern Android + my phone’s excellent cell tower reception capability.

The snag though comes when these programs need to access the Internet at times when using my phone is illegal. Queensland laws around mobile phone use changed a while back, long before COVID-19. The upshot was that, while people who hold “open” driver’s licenses may “use” a mobile phone (provided that they do not need to handle it to do so), anybody else may not “use” a phone for “any purpose”. So…

  • using it for talking to people? Banned. Even using “hands-free”? Yep, still banned.
  • using it for GPS navigation? Banned.
  • using it for playing music? Banned.

It’s a $1000 fine if you’re caught. I’m glad I don’t use a wheelchair: such mobility aids are classed as a “vehicle” under the Queensland traffic act, and you can be fined for “drink driving” if you operate one whilst drunk. So traffic laws that apply to “motor vehicles” also apply to non-“motor vehicles”.

I don’t have a driver’s license of any kind, and have no interest in getting one; my primary mode of private transport is by bicycle. I can’t see how I’d be granted permission to do something that someone on a learner’s permit or P1 provisional license is forbidden from doing. The fact that I’m not operating a “motor vehicle” does not save me; the drink-driving-in-a-wheelchair example above tells me that I, too, would be fined for riding my bicycle whilst drunk. Likely, the mobile phone restrictions apply to me too. Given this, I made the decision not to “use” a mobile phone on the bicycle “for any purpose”, with “for any purpose” being anything that requires the device to be powered on.

If I’m going to be spending a few hours at the destination, and in a situation that may permit me to use the phone, I might carry it in the top-box turned off (not certain if this is permitted, but kinda hard to police), but if it’s a quick trip to the shops, I leave the mobile phone at home.

What’s this got to do with the Check-in Queensland application or my new shiny-shiny you ask? Glad you did.

The new tablet is a WiFi-only device… specifically because of the above restrictions on using a “mobile phone”. The day those restrictions get expanded to include the tablet, you can bet the tablet will be ditched when travelling as well. Thus, it receives its Internet connection via a WiFi access point. At home, that’s one of two Cisco APs that provide my home Internet service. No issue there.

If I’m travelling on foot, or as a passenger on someone else’s vehicle, I use the WiFi hot-spot function on my phone to provide this Internet service… but this obviously won’t work if I just ducked up the road on my bike to go get some grocery shopping done, as I leave the phone at home for legal reasons.

Now, the Check-in Queensland application does not work without an Internet connection, and bringing my own in this situation is legally problematic.

I can also think of situations where an Internet connection is likely to be problematic.

  • If your phone doesn’t have a reliable cell tower link, it won’t reliably connect to the Internet, Check-in Queensland will fail.
  • If your phone is on a pre-paid service and you run out of credit, your carrier will deny you an Internet service, Check-in Queensland will fail.
  • If your carrier has a nation-wide whoopsie (Telstra had one a couple of years back, Optus and Vodafone have had them too), you can find yourself with a very pretty but very useless brick in your hand. Check-in Queensland will fail.

What can be done about this?

  1. The venues could provide a WiFi service so people can log in to that, and be provided with limited Internet access to allow the check-in program to work whilst at the venue. I do not see this happening for most places.
  2. The Check-in Queensland application could simply record the QR code it saw, date/time, co-visitors, and simply store it on the device to be uploaded later when the device has a reliable Internet link.
  3. For those who have older phones (and can legally carry them), the requirement of an “application” seems completely unnecessary:
    1. Most devices made post-2010 can run a web browser capable of running an in-browser QR code scanner, and storage of the customer’s details can be achieved either through using window.localStorage or through RFC-6265 HTTP cookies. In the latter case, you’d store the details server-side, and generate an “opaque” token which would be stored on the device as a cookie. A dedicated program is not required to do the function that Check-in Queensland is performing.
    2. For older devices, pretty much anything that can access the 3G network can send and receive SMS messages. (Indeed, most 2G devices can… the only exception I know to this would be the Motorola MicroTAC 5200 which could receive but not send SMSes. The lack of a 2G network will stop you though.) Telephone carriers are required to capture and verify contact details when provisioning pre-paid and post-paid cellular services, so already have a record of “who” has been assigned which telephone number. So why not get people to text the 6-digit code that Check-In Queensland uses, to a dedicated telephone number? If there’s an outbreak, they simply contact the carrier (or the spooks in Canberra) to get the contact details.
  4. The Check-in Queensland application has a “business profile” which can be used for manual entry of a visitor’s details… hypothetically, why not turn this around? Scan a QR code that the visitor carries and provides. Such QR codes could be generated by the Check-in Queensland website, printed out on paper, then cut out to make a business-card sized code which visitors can simply carry in their wallets and present as needed. No mobile phone required! For the record, the Electoral Commission of Queensland has been doing this for our state and council elections for years.

It seems the Queensland Government is doing this fancy “app” thing “because we can”. Whilst I respect the need to effectively contact-trace, the truth is there’s no technical reason why “this” must be the implementation. We just seem to be playing a game of “follow the shepherd”. They keep trying to advertise how “smart” we are, why not prove it?

Jun 09 2021
 

So, I finally had enough of the Epson WF7510 we have, which is getting on in years, occasionally mis-picks pages, won’t duplex, and has a rather curious staircase problem when printing. We’ll keep it for A3 scanning and printing (the fax feature is now useless), but for a daily driver, I decided to make an end-of-financial-year purchase. I wanted something that met these criteria:

  • A4 paper size
  • Automatic duplex printing
  • Networked
  • Laser/LED (for water-resistant prints)
  • Colour is a “nice to have”

I looked at the mono options, but when I looked at the driver options for Linux, things were looking dire, with binary blobs everywhere. I removed the restriction on it being mono, and suddenly an option appeared that was cheaper and more open. I didn’t need a scanner (the WF7510’s scanner works fine with xsane, plus I bought a separate Canon LiDE 300 which is pretty much plug-and-play with xsane), and a built-in fax is useless since we can achieve the same using hylafax+t38modem (a TO-DO item well down in my list of priorities).

The Kyocera P5021cdn allegedly isn’t the cheapest to run, but it promised a fairly pain-free experience on Linux and Unix. I figured I’d give it a shot. These are some notes I used to set the thing up. I want to move it to a different part of the network ultimately, but we’ll see what the cretinous Windows laptop my father uses will let us do; for now it shares that Ethernet VLAN with the WF7510 and his laptop, and I’ll just hop over the network to access it.

Getting the printer’s IP and MAC address

The menu on the printer does not tell you this information. There is, however, a Printer Status menu item in the top-panel menu. Tell it to print the status page, and you’ll get a nice colour page with lots of information about the printer, including its IPv4 and IPv6 addresses.

Web interface

If you want to configure the thing further, you need a web browser. Visit the printer’s IP address in your browser and you’re greeted with Command Centre RX. Out of the box, the username and password were Admin and Admin (capitalised A).

Setting up CUPS

The printer “driver” off the Kyocera website is a massive 400MB zip file, because they bundled up .deb and .rpm packages for every distribution they officially support together in one file. Someone needs to introduce them to reprepro and its dnf equivalent. That said, you have a choice… if you pick a random .deb out of that blob and manually unpack it somewhere (use ar x on it; you’ll see data.tar.xz or something; unpack that and you’ve got your package files), you’ll find the .ppd file you’ll need.
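If you go the manual route, the unpacking looks roughly like this (the .deb file name is made up; yours will differ):

# Extract the .deb without installing it: ar pulls out the archive members,
# then we unpack the payload tarball and hunt for the PPD.
ar x kyocera-print-driver_9.0_amd64.deb
tar -xf data.tar.xz
find . -name '*.ppd' -o -name '*.ppd.gz'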

Or, you can do a search and realise that the Arch Linux guys have done the hard work for you. Many thanks guys (and girls… et al.)!

Next puzzle is figuring out the printer URI. Turns out the printer calls itself lp1… so the IPP URI you should use is http://<IP>:631/ipp/lp1.
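For what it’s worth, creating the queue from the command line goes something along these lines (the queue name, IP address and PPD path are placeholders; the CUPS web interface works just as well):

# Create the CUPS queue, point it at the printer's IPP endpoint and enable it.
sudo lpadmin -p P5021cdn -E \
    -v "http://192.0.2.10:631/ipp/lp1" \
    -P Kyocera_P5021cdn.ppd

# Confirm the queue exists and is pointing at the right URI.
lpstat -v P5021cdn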

I haven’t put the thing fully through its paces, and I note the cartridges are down about 4% from those two prints (the status page and the CUPS test print), but often the initial cartridges are just “starter” cartridges, and the replacements often have a lot more toner in them. I guess time will tell on their longevity (and that of the imaging drum).

May 26 2021
 

So, recently I misplaced the headset adaptor I use for my aging ZTE T83… which is getting on for nearly 6 years old now. I had the parts on hand, so I whipped up a new adaptor, but found the 3.5mm plug would not stay in the socket.

Evidently, this socket has reached its limit of insert/remove cycles, and will no longer function reliably. I do like music on the go, and while I’m no fan of Bluetooth, it is a feature my phone supports. I’ve also been hacking an older Logitech headset I have so that I can re-purpose it for use at work, but so far, it’s been about 15 months since I did any real work in the office. Thanks to China’s little gift, I’ve been working at home.

At work, I was using the Logitech H800, which did both USB and Bluetooth. Handy… but one downside it had is that it didn’t do both at the same time; you selected the mode via a slider switch on the back of one of the ear cups. The other downside is that being an “open ear” design, it tended to leak sound, so my colleagues got treated to the soundtrack of my daily work.

My father now uses that headset since he needed one for video conferencing (again, thank-you China) and it was the best-condition headset I had on hand. I switched to using a now rather tatty-looking G930 before later getting an ATH-G1WL, which is doing the task at home nicely. The ATH-G1WL is better all-round for a wireless USB headset, but it’s a one-trick pony: it just does USB audio. It does it well, better than anything else I’ve used, but that’s all it does. Great for home, where I may want better fidelity and for applications that don’t like asymmetric sample rates, but useless with my now Bluetooth-only phone.

I had a look around, and found the Zone Wireless headset… I wondered how it stacked up against the H800. Double the cost, is it worth it?

Firstly, my environment: I’m mostly running Linux, but I will probably use this headset with Android a lot… maybe OpenBSD. The primary use case will be mobile devices, so my existing Android phone, and a Samsung Active3 8″ tablet I have coming. The fact this unit, like the H800, does both Bluetooth and USB is an attractive feature. Another interesting advertised feature is that it can be simultaneously connected to both, unlike the H800 which is exclusively one or the other.

First impressions

So, it turned up today. In the box was a USB-C cable (probably the first real use I have for such a cable), a USB-A to USB-C adaptor (for all you young whipper-snappers with USB-C ports exclusively), the headset itself, the USB dongle, and a bag to stow everything in.

Interestingly, each of these has a set-up guide. Ohh, and at the time of writing, yes, there are 6 links titled “Setup Guide (PDF)”… the bottom-right one is the one for the headset (guess how I discovered that?). Amongst those is a set-up guide for the bag. (Who’d have thought some fabric bag with a draw-string closure needed a set-up guide?) I guess they’re aiming this at the Pointy Haired Boss.

Many functions are controlled using an “app” installed on a mobile device. I haven’t tried this as I suspect Android 4.1 is too old. Maybe I can look at that when the tablet turns up, as it should be recent enough. It’d be nice to duplicate this functionality on Linux, but ehh… enough of it works without.

Also unlike the H800… there’s nowhere on the headset to stash the dongle when not in use. This is a bit of a nuisance, but they do provide the little bag to stow it in. The assumption is I guess that it’ll permanently occupy a USB port, since the same dongle also talks to their range of keyboards and mice.

USB audio functionality

I had the Raspberry Pi 3 running as a DAB+ receiver, and Triple M Classic Rock had The Beatles’ Sgt. Pepper’s Lonely Hearts Club Band album on… so I plugged the dongle in to see how the headset compared with my desktop speakers (plugged into the “headphone” jack). Now this isn’t the best test of sound quality for two reasons: (1) this DAB+ station is broadcasting 64kbps HE-AAC, and (2) the “headphone” jack on the Pi is hardly known for high fidelity, but it gave me a rough idea.

Audio quality was definitely reasonable. No better or worse than the H800. I haven’t tried the microphone yet, but it looks as if it’s on par with the H800 as well. Like every other Logitech headset I’ve owned to date, it too forces asymmetric sample rates; if you’re looking at using JACK, consider something else:

stuartl@vk4msl-pi3:~ $ cat /proc/asound/Wireless/stream0
Logitech Zone Wireless at usb-3f980000.usb-1.4, full speed : USB Audio

Playback:
  Status: Running
    Interface = 2
    Altset = 1
    Packet Size = 192
    Momentary freq = 48000 Hz (0x30.0000)
  Interface 2
    Altset 1
    Format: S16_LE
    Channels: 2
    Endpoint: 3 OUT (NONE)
    Rates: 48000
    Bits: 16

Capture:
  Status: Stop
  Interface 1
    Altset 1
    Format: S16_LE
    Channels: 1
    Endpoint: 3 IN (NONE)
    Rates: 16000
    Bits: 16

The control buttons seem to work, and there’s even a HID device appearing under Linux, but testing with xev reveals no reaction when I press the up/down/MFB buttons.

Bluetooth functionality

With the dongle plugged in, I reached for my phone, turned on its Bluetooth radio, pressed the power button on the headset for a couple of seconds, then told my phone to go look for the device. It detected the headset and paired straight away. Fairly painless, as you’d expect, even given the ancient Android device it was paired with. (Bluetooth 5 headset… meet Bluetooth 3 host!)

I then tried pulling up some music… the headset immediately switched streams, and I was now hearing Albert Hammond – Free Electric Band. Hit pause, and I was back to DAB+ on the Raspberry Pi.

Yes, it was connected to both the USB dongle and the phone, which was fantastic. One thing it does NOT do though, at least out-of-the-box, is “mix” the two streams. Great for telephone calls I suppose, but forget listening to something in the background via your computer whilst you converse with somebody on the phone.

The audio quality was good though. Some cheaper Bluetooth headsets often sound “watery” to my hearing (probably the audio CODEC), which is why I avoided them prior to buying the H800, the H800 was the first that sounded “normal”, and this carries that on.

I’m not sure what the microphone sounds like in this mode. I suspect with my old phone, it’ll drop back to the HSP profile, which has an 8kHz sample rate, no wideband audio. I’ll know more when the tablet turns up as it should be able to better put the headset through its paces.

Noise cancellation

One nice feature of this headset is that unlike the H800, this is a closed-ear design which does a reasonable amount of passive noise suppression. So likely will leak sound less than the H800. Press the ANC button, and an announcement reports that noise cancellation is now ON… and there’s a subtle further muffling of outside noise.

It won’t pass AS/NZS 1270, but it will at least mean I’m not turning the volume up quite so loud, which is better for my hearing anyway. Doing this costs about an hour of battery life, apparently.

Left/right channel swapping

Another nice feature, this is the first headset I’ve owned where you can put the microphone on either side you like. Put the headset on either way around, flip the microphone to where your mouth is: as you pass roughly the 15° point on the boom, the channels switch so left/right channels are the correct way around for the way you’re wearing it.

This isn’t a difficult thing to achieve, but I have no idea why more companies don’t do it. There seems to be this de facto standard of microphone/controls on the left, which is annoying, as I prefer the microphone and controls on the right. Some headsets (Logitech wired USB) were right-hand side only, but this puts the choice in the user’s hands. This should be encouraged.

Verdict

Well, it’s pricey, but it gets the job done and has a few nice features to boot. I’ll be doing some more testing when more equipment turns up, but it seems to play nice with what I have now.

The ability to switch between USB and Bluetooth sources is welcome, it’d be nice to have some software mixing possible, but that’s not the end of the world, it’s an improvement nonetheless.

Audio quality seems decent enough, with playback sample rates able to match DVD audio sample rates (at 16-bits linear PCM). Microphone sample rate should be sufficient for wideband voice (but not ultra-wideband or fullband).

It’s nice to be able to put the microphone on the side of my choosing rather than having that dictated to me.

The noise cancellation is a nice feature, and one I expect to make more use of in an open-plan environment.

The asymmetric record/playback sample rates might be a bit of a nuisance if you use software that expects these to be symmetric.

Somewhere to stash the dongle on the headset would’ve been nicer than having a carry bag.

It’d be nice if there was some sort of protocol spec for the functions offered in the “app” so that those who cannot or choose not to run it, can still access those functions.