Oct 07 2021

Recently, I noticed my network monitoring was down… I hadn’t worried about it because I had other things to keep me busy, and thankfully, my network monitoring, whilst important, isn’t mission critical.

I took a look at it today. The symptom was an odd one: influxd was running, and it was listening on the back-up/RPC port 8088, but not on 8086 for queries.

It otherwise was generating logs as if it were online. What gives?

Tried some different settings, nothing… nada… zilch. Nothing would make it listen to port 8086.

Tried updating to 1.8 (was 1.1), still nothing.

Tried manually running it as root… sure enough, if I waited long enough, it started on its own, and did begin listening on port 8086. Hmmm, I wonder. I had a look at the init scripts:

#!/bin/bash -e

/usr/bin/influxd -config /etc/influxdb/influxdb.conf $INFLUXD_OPTS &
PID=$!
echo $PID > /var/lib/influxdb/influxd.pid

PROTOCOL="http"
BIND_ADDRESS=$(influxd config | grep -A5 "\[http\]" | grep '^  bind-address' | cut -d ' ' -f5 | tr -d '"')
HTTPS_ENABLED_FOUND=$(influxd config | grep "https-enabled = true" | cut -d ' ' -f5)
HTTPS_ENABLED=${HTTPS_ENABLED_FOUND:-"false"}
if [ $HTTPS_ENABLED = "true" ]; then
  HTTPS_CERT=$(influxd config | grep "https-certificate" | cut -d ' ' -f5 | tr -d '"')
  if [ ! -f "${HTTPS_CERT}" ]; then
    echo "${HTTPS_CERT} not found! Exiting..."
    exit 1
  fi
  echo "$HTTPS_CERT found"
  PROTOCOL="https"
fi
HOST=${BIND_ADDRESS%%:*}
HOST=${HOST:-"localhost"}
PORT=${BIND_ADDRESS##*:}

set +e
max_attempts=10
url="$PROTOCOL://$HOST:$PORT/health"
result=$(curl -k -s -o /dev/null $url -w %{http_code})
while [ "$result" != "200" ]; do
  sleep 1
  result=$(curl -k -s -o /dev/null $url -w %{http_code})
  max_attempts=$(($max_attempts-1))
  if [ $max_attempts -le 0 ]; then
    echo "Failed to reach influxdb $PROTOCOL endpoint at $url"
    exit 1
  fi
done
set -e

Ahh right, so start the server, check every second to see if it’s up, and if not, just abort and let systemd restart the whole shebang. Because turning the power on-off-on-off-on-off is going to make it go faster, right?

I changed max_attempts to 360 and the sleep to 10.
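In other words, the watchdog's patience went from a mere 10 seconds to a full hour:

```shell
# new values: 360 polls, 10 s apart (the old script gave up after 10 × 1 s)
max_attempts=360
sleep_secs=10
budget=$(( max_attempts * sleep_secs ))
echo "${budget} seconds"
```

Plenty of time for influxd to chew through its start-up housekeeping before systemd is allowed to "help".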

Having fixed this, I am now getting data back into my system.

Oct 03 2021

So, the situation: I have two boxes that must replicate data between themselves and generally keep in contact with one another over a network (Ethernet or WiFi) that I do not control. I want the two to maintain a peer-to-peer VPN over this potentially hostile network: ensuring confidentiality and authenticity of data sent over the tunnelled link.

The two nodes should be able to try and find each other via other means, such as mDNS (Avahi).

I had thought of just using OpenVPN in its P2P mode, but I figured I’d try something new, WireGuard. Both machines are running Debian 10 (Buster) on AMD64 hardware, but this should be reasonably applicable to lots of platforms and Linux-based OSes.

This assumes WireGuard is in fact, installed: sudo apt-get install -y wireguard will do the deed on Debian/Ubuntu.

Initial settings

First, having installed WireGuard, I needed to make some decisions as to how the VPN would be addressed. I opted for using an IPv6 ULA. Why IPv6? Well, remember I mentioned I do not control the network? They could be using any IPv4 subnet, including the one I hypothetically might choose for my own network. This is also true of ULAs, however the probability of a collision is ridiculously small: a parts-per-billion chance, small enough to ignore!

So, I trundled over to a ULA generator site and generated a ULA. I made up a MAC address for this purpose. For the purposes of this document let’s pretend it gave me 2001:db8:aaaa::/48 as my address (yes, I know this is not a ULA, this is in the documentation prefix). For our VPN, we’ll statically allocate some addresses out of 2001:db8:aaaa:1000::/64, leaving the other address blocks free for other use as desired.

For ease of set-up, we also picked a port number for each node to listen on. WireGuard’s Quick Start guide uses port 51820, which is as good as any, so we used that.

Finally, we need to choose a name for the VPN interface, wg0 seemed as good as any.

Summarising:

  • ULA: 2001:db8:aaaa::/48
  • VPN subnet: 2001:db8:aaaa:1000::/64
  • Listening port number: 51820
  • WireGuard interface: wg0

Generating keys

Each node needs a keypair for communicating with its peers. I did the following:

( umask 077 ; wg genkey > /etc/wg.priv )
wg pubkey < /etc/wg.priv > /etc/wg.pub

I gathered all the wg.pub files from my nodes and stashed them locally.

Creating settings for all nodes

I then made some settings files for some shell scripts to load. First, a description of the VPN settings for wg0, which I put into /etc/wg0.conf:

INTERFACE=wg0
SUBNET_IP=2001:db8:aaaa:1000::
SUBNET_SZ=64
LISTEN_PORT=51820
PERSISTENT_KEEPALIVE=60

Then, in a directory called wg.peers, I added a file with the following content for each peer:

pubkey=<node's /etc/wg.pub content>
ip=<node's VPN IP>

The VPN IPs were just allocated starting at ::1 and counting upwards; do whatever you feel is appropriate for your virtual network. The IPs only need to be unique and within that same subnet.
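For illustration, a peer file (say, /etc/wg.peers/node-a.local; the file name, key and IP below are all made up) might look like:

```shell
# hypothetical /etc/wg.peers/node-a.local (placeholder key, not a real one)
pubkey=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
ip=2001:db8:aaaa:1000::1
```

The set-up script simply sources these files, so they must be valid shell variable assignments.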

Both the wg.peers and wg0.conf were copied to /etc on all nodes.

The VPN clean-up script

I mention this first since it makes debugging the set-up script easier: there’s a single command that will bring down the VPN and clean up /etc/hosts:

#!/bin/bash

. /etc/wg0.conf

if [ -d /sys/class/net/${INTERFACE} ]; then
	ip link set ${INTERFACE} down
	ip link delete dev ${INTERFACE}

	sed -i -e "/^${SUBNET_IP}/ d" /etc/hosts
fi

This checks for the existence of wg0, and if found, brings the link down and deletes it; then cleans up all VPN IPs from the /etc/hosts file. Copy this to /usr/local/sbin, make permissions 0700.
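To see what that sed is doing, here is the same expression run against a scratch copy of a hosts file (host names invented for the demo):

```shell
SUBNET_IP=2001:db8:aaaa:1000::
hosts=$(mktemp)
cat > "${hosts}" <<EOF
127.0.0.1 localhost
2001:db8:aaaa:1000::1 node-a.vpn node-a
2001:db8:aaaa:1000::2 node-b.vpn node-b
EOF
# delete every line whose address begins with the VPN subnet prefix
sed -i -e "/^${SUBNET_IP}/ d" "${hosts}"
remaining=$(cat "${hosts}")
echo "${remaining}"
```

Only the localhost line survives; the VPN entries are gone.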

The VPN set-up script

This is what establishes the link. The set-up script can take arguments that tell it where to find each peer: e.g. peernode.local=10.20.30.40 to set a static IP, or peernode.local=10.20.30.40:12345 if an alternate port is needed.

Giving peernode.local=listen just tells the script to tell WireGuard to listen for an incoming connection from that peer, wherever it happens to be.

If a peer is not mentioned, the script tries to discover its address using getent; the peer must already have a non-link-local, non-VPN address assigned, since getent cannot tell me which interface a link-local address came from. If an address is found and it responds to ping, the script considers the node up and adds it.

Nodes that do not have a static address configured, are set to listen, or are not otherwise locatable and reachable, are dropped off the list for VPN set-up. For two peers, this makes sense, since we want them to actively seek each other out; for three or more nodes you might want to add these in “listen” mode, an exercise I leave for the reader.

#!/bin/bash

set -e

. /etc/wg0.conf

# Pick up my IP details and private key
ME=$( uname -n ).local
MY_IP=$( . /etc/wg.peers/${ME} ; echo ${ip} )

# Abort if we already have our interface
if [ -d /sys/class/net/${INTERFACE} ]; then
	exit 0
fi

# Gather command line arguments
declare -A static_peers
while [ $# -gt 0 ]; do
	case "$1" in
	*=*)	# Peer address
		static_peers["${1%=*}"]="${1#*=}"
		shift
		;;
	*)
		echo "Unrecognised argument: $1"
		exit 1
	esac
done

# Gather the cryptography configuration settings
peers=""
for peerfile in /etc/wg.peers/*; do
	peer=$( basename ${peerfile} )
	if [ "${peer}" != ${ME} ]; then
		# Derive a host name for the endpoint on the VPN
		host=${peer%.local}
		vpn_hostname=${host}.vpn

		# Do we have an endpoint IP given on the command line?
		endpoint=${static_peers[${peer}]}

		if [ -n "${endpoint}" ] && [ "${endpoint}" != listen ]; then
			# Given an IP/name, add brackets around IPv6, add port number if needed.
			endpoint=$(
				echo "${endpoint}" | sed \
					-e 's/^[0-9a-f]\+:[0-9a-f]\+:[0-9a-f:]\+$/[&]/' \
					-e "s/^\\(\\[[0-9a-f:]\\+\\]\\|[0-9\\.]\+\\)\$/\1:${LISTEN_PORT}/"
			)
		elif [ -z "${endpoint}" ]; then
			# Try to resolve the IP address for the peer
			# Ignore link-local and VPN tunnel!
			endpoint_ip=$(
				getent hosts ${peer} \
					| cut -f 1 -d' ' \
					| grep -v "^\(fe80:\|169\.\|${SUBNET_IP}\)"
			)

			if ping -n -w 20 -c 1 ${endpoint_ip}; then
				# Endpoint is reachable.  Construct endpoint argument
				endpoint=$( echo ${endpoint_ip} | sed -e '/:/ s/^.*$/[&]/' ):${LISTEN_PORT}
			fi
		fi

		# Test reachability
		if [ -n "${endpoint}" ]; then
			# Pick up peer pubkey and VPN IP
			. ${peerfile}

			# Add to peers
			peers="${peers} peer ${pubkey}"
			if [ "${endpoint}" != "listen" ]; then
				peers="${peers} endpoint ${endpoint}"
			fi
			peers="${peers} persistent-keepalive ${PERSISTENT_KEEPALIVE}"
			peers="${peers} allowed-ips ${SUBNET_IP}/${SUBNET_SZ}"

			if ! grep -q "${vpn_hostname} ${host}\\$" /etc/hosts ; then
				# Add to /etc/hosts
				echo "${ip} ${vpn_hostname} ${host}" >> /etc/hosts
			else
				# Update /etc/hosts
				sed -i -e "/${vpn_hostname} ${host}\\$/ s/^[^ ]\+/${ip}/" \
					/etc/hosts
			fi
		else
			# Remove from /etc/hosts
			sed -i -e "/${vpn_hostname} ${host}\\$/ d" \
				/etc/hosts
		fi
	fi
done

# Abort if no peers
if [ -z "${peers}" ]; then
	exit 0
fi

# Create the interface
ip link add ${INTERFACE} type wireguard

# Configure the cryptographic settings
wg set ${INTERFACE} listen-port ${LISTEN_PORT} \
	private-key /etc/wg.priv ${peers}

# Bring the interface up
ip -6 addr add ${MY_IP}/${SUBNET_SZ} dev ${INTERFACE}
ip link set ${INTERFACE} up

This is run from /etc/cron.d/vpn:

* * * * * root /usr/local/sbin/vpn-up.sh >> /tmp/vpn.log 2>&1
Sep 19 2021

I stumbled across this article regarding the use of TCP over sensor networks. Now, TCP has been done with AX.25 before, and generally suffers greatly from packet collisions. Apparently (I haven’t read more than the first few paragraphs of this article), TCP implementations can be tuned to improve performance in such networks, which may mean TCP can be made more practical on packet radio networks.

Prior to seeing this, I had thought 6LoWHAM would “tunnel” TCP over a conventional AX.25 connection using I-frames and S-frames to carry TCP segments with some header prepended so that multiple TCP connections between two peers can share the same AX.25 connection.

I’ve printed it out, and made a note of it here… when I get a moment I may give this a closer look. Ultimately I still think multicast communications is the way forward here: radio inherently favours one-to-many communications due to it being a shared medium, but there are definitely situations in which being able to do one-to-one communications applies; and for those, TCP isn’t a bad solution.

Comments having read the article

So, I had a read through it. The take-aways seem to be these:

  • TCP was historically seen as “too heavy” because the MCUs of the day (circa 2002) lacked the RAM needed for TCP data structures. More modern MCUs have orders of magnitude more RAM (32KiB vs 512B) today, and so this is less of an issue.
    • For 6LoWHAM, intended for single-board computers running Linux, this will not be an issue.
  • A lot of early experiments with TCP over sensor networks tried to set a conservative MSS based on the actual link MTU, leading to TCP headers dominating the lower-level frame. Leaning on 6LoWPAN’s ability to fragment IP datagrams led to much improved performance.
    • 6LoWHAM uses AX.25 which can support 256-byte frames; vs 128-byte 802.15.4 frames on 6LoWPAN. Maybe gains can be made this way, but we’re already a bit ahead on this.
  • Much of the document considered battery-powered nodes, in which the radio transceiver was powered down completely for periods of time to save power, and the effects this had on TCP communications. Optimisations were able to be made that reduced the impact of such power-down events.
    • 6LoWHAM will likely be using conventional VHF/UHF transceivers. Hand-helds often implement a “battery saver” mode — often this is configured inside the device with no external control possible (thus it will not be possible for us to control, or even detect, when the receiver is powered down). Mobile sets often do not implement this, and you do not want to frequently power-cycle a modern mobile transceiver at the sorts of rates that 802.15.4 radios get power-cycled!
  • Performance in ideal conditions favoured TCP, with the article authors managing to achieve 30% of the raw link bandwidth (75kbps of a theoretical 250kbps maximum), with the underlying hardware being fingered as a possible cause for performance issues.
    • Assuming we could manage the same percentage; that would equate to ~360bps on 1200-baud networks, or 2.88kbps on 9600-baud networks.
  • With up to 15% packet loss, TCP and CoAP (its nearest contender) can perform about the same in terms of reliability.
  • A significant factor in effective data rate is CSMA/CA. aioax25 effectively does CSMA/CA too.

It’s interesting to note they didn’t try to do anything special with the TCP headers (e.g. Van Jacobson compression). I’ll have to have a look at TCP and see just how much overhead there is in a typical segment, and whether the roughly double MTU of AX.25 will help or not: the article recommends using an MSS of approximately 3× the link MTU for “fair” conditions (so ~384 bytes), and 5× in “good” conditions (~640 bytes).

It’s worth noting a 256-byte AX.25 frame takes ~2 seconds to transmit on a 1200-baud link. You really don’t want to make that a habit! So smaller transmissions using UDP-based protocols may still be worthwhile in our application.
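A quick sanity check of that airtime figure (payload bits only; HDLC flags, the AX.25 header and bit-stuffing all add more on top, pushing it toward the ~2 second mark):

```shell
# airtime for 256 bytes of payload at 1200 bps
airtime=$(awk 'BEGIN { printf "%.1f", (256 * 8) / 1200 }')
echo "${airtime} s"
```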

Sep 16 2021

So, one evening I was having difficulty sleeping, so like some people count sheep, I turned to a different problem… 6LoWPAN relies on all nodes sharing a common “context”. This is used as a short-hand to “compress” the rather lengthy IPv6 addresses, allowing two nodes to communicate with one another by substituting particular IPv6 address subnets with a “context number” which can be represented in 4 bits.

Fundamentally, this identifier is a stand-in for the subnet address. This was a sticking-point with earlier thoughts on 6LoWHAM: how do we agree on what the context should be? My thought was, each network should be assigned a 3-bit network ID. Why 3-bit? Well, this means we can reserve some context IDs for other uses. We use SCI/DCI values 0-7 and leave 8-15 reserved; I’ll think of a use for the other half of the contexts.

The node “group” also shares an SSID: the “group” SSID. This is an SSID that receives all multicast traffic for the nodes on the immediate network. This might be just a generic MCAST-n SSID, where n is the network ID; or it could be a call-sign for a local network coordinator, e.g. I might decide my network will use VK4MSL-0 for my group SSID (network 0). Nodes that are listening on a custom SSID should probably still listen for MCAST-n traffic, in case a node is attempting to join without knowing the group SSID.

AX.25 allows for 16 SSIDs per call-sign, so what about the other 8? Well, if we have a convention that we reserve SSIDs 0-7 for groups; that leaves 8-15 for stations. This can be adjusted for local requirements where needed, and would not be enforced by the protocol.

Joining a network

How does a new joining node “discover” this network? Firstly, the first node in an area is responsible for “forming” the network: a node which “forms” a network must be manually programmed with the local subnet, group SSID and other details. Ensuring all nodes with “formation” capability for a given network are configured consistently is beyond the scope of 6LoWHAM.

When a node joins; at first it only knows how to talk to immediate nodes. It can use MCAST-n to talk to immediate neighbours using the fe80::/64 subnet. Anyone in earshot can potentially reply. Nodes simply need to be listening for traffic on a reserved UDP port (maybe 61631; there’s an optimisation in 6LoWPAN for 61616-61631). The joining node can ask for the network context, maybe authenticate itself if needed (using asymmetric cryptography – digital signatures, no encryption).

The other nodes presumably already know the answer, but if all nodes replied simultaneously, it would lead to a pile-up. Nodes should wait a randomised delay, and if nothing is heard in that period, transmit what they know of the context for the given network ID.

The context information sent back should include:

  • Group SSID
  • Subnet prefix
  • (Optional) Authentication data:
    • Public key of the forming network (joining node will need to maintain its own “trust” database)
    • Hash of all earlier data items
    • Digital signature signed with included public key

Once a node knows the context for its chosen network, it is officially “joined”.
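The randomised reply hold-off described above might be sketched like this (the delay range and the heard_reply flag are illustrative stand-ins for real channel monitoring):

```shell
# sketch of the reply hold-off; heard_reply would be set true by actually
# listening on-air for another node's answer
delay=$(( RANDOM % 5 ))      # randomised hold-off, 0-4 s here
sleep "${delay}"
heard_reply=false            # stand-in: nobody else answered in time
if ! ${heard_reply}; then
	echo "transmit known context for requested network ID"
fi
```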

Routing to non-local endpoints

So, a node may wish to send a message to another node that’s not directly reachable. This is, after-all, the whole point of using a routing protocol atop AX.25. If we knew a route, we could encode it in the digipeater path, and use conventional AX.25 source routing. Nodes that know a reliable route are encouraged to do exactly that. But what if you don’t know your way around?

APRS uses WIDEN-n to solve this problem: it’s a dumb broadcast, but it achieves this aim beautifully. n just stands for the number of hops, and it gets decremented with each hop. Each digipeater inserts itself into the path as it sends the frame on. APRS specs normally call for everyone to broadcast all at once, pile-up be damned. FM capture effect might help here, but I’m not sure it’s a good policy. Simple, but in our case, we can do a little better.

We only need to broadcast far enough to reach a node that knows a route. We’ll use ROUTE-n to stand for a digipeater that is no more than n hops away from the station listed in the AX.25 destination field. n must be greater than 0 for a message to be relayed. AX.25 2.0 limits the number of digipeaters to 8 (and 2.2 to 2!), so naturally n cannot be greater than 8.

So we’ll have a two-tier approach.

Routing from a node that knows a viable route

If a node that receives a ROUTE-n destination message knows it has a good route that is n or fewer hops away from the target, it picks a randomised delay (maybe in the 0-5 second range), and if no reply is heard from another node, it relays the message: the ROUTE-n is replaced by its own SSID, followed by the required digipeater path to reach the target node.

Routing from a node that does not know a viable route

In the case where a node receives this same ROUTE-n destination message, does not know a route, and hasn’t heard anyone else relay that same message, it should pick a randomised delay (5-10 second range), and if it hasn’t heard the message relayed via a specific path in that time, do one of the following:

If n is greater than 1:

Substitute ROUTE-n in the digipeater path with its own SSID followed by ROUTE-(n-1) then transmit the message.

If n is 1 (or 0):

Substitute ROUTE-n with its own SSID (do not append ROUTE-0) then transmit the message.
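The two substitution rules can be sketched as a small shell function (the call-signs and comma-separated path layout here are purely illustrative):

```shell
# rewrite the digipeater path per the ROUTE-n rules above
rewrite_route() {
	local path="$1" mycall="$2"
	local n="${path##*ROUTE-}"       # hop count from the ROUTE-n element
	if [ "${n}" -gt 1 ]; then
		# replace ROUTE-n with our SSID followed by ROUTE-(n-1)
		echo "${path%ROUTE-*}${mycall},ROUTE-$(( n - 1 ))"
	else
		# n is 1 (or 0): replace ROUTE-n with just our SSID
		echo "${path%ROUTE-*}${mycall}"
	fi
}
rewrite_route "VK4MSL-8,ROUTE-3" "N0CALL-9"   # VK4MSL-8,N0CALL-9,ROUTE-2
rewrite_route "VK4MSL-8,ROUTE-1" "N0CALL-9"   # VK4MSL-8,N0CALL-9
```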

Routing multicast traffic

Discovering multicast listeners

I’ll have to research MLD (RFC-3810 / RFC-4604), but that seems the sensible way forward from here.

Relaying multicast traffic

If a node knows of downstream nodes that ordinarily rely on it to contact the sender of a multicast message, and it knows the downstream nodes are subscribers to the destination multicast group, it should wait a randomised period, and forward the message on (appending its SSID in the digipeater path) to the downstream nodes.

Application thoughts

I think I’ve already jotted down some thoughts on what the applications for this system may be, but the other day I was looking around for “prior art” regarding one-to-many file transfer applications.

One such system that could be employed is UFTP. Yes, it mentions encryption, but that is an optional feature (and could be useful in emcomm situations). That would enable SSTV-style file sharing to all participants within the mesh network. Its ability to be proxied also lends itself to bridging to other networks like AMPRnet, D-Star packet, DMR and other systems.

Jun 09 2021

So, I finally had enough of the Epson WF7510 we have: it’s getting on in years, occasionally mis-picks pages, won’t duplex, and has a rather curious staircase problem when printing. We’ll keep it for A3 scanning and printing (the fax feature is now useless), but for a daily driver, I decided to make an end-of-financial-year purchase. I wanted something that met this criteria:

  • A4 paper size
  • Automatic duplex printing
  • Networked
  • Laser/LED (for water-resistant prints)
  • Colour is a “nice to have”

I looked at the mono options, but when I looked at the driver options for Linux, things were looking dire with binary blobs everywhere. I removed the restriction on it being mono, and suddenly an option appeared that was cheaper and more open. I didn’t need a scanner (the WF7510’s scanner works fine with xsane, plus I bought a separate Canon LiDE 300 which is pretty much plug-and-play with xsane), and a built-in fax is useless since we can achieve the same using hylafax+t38modem (a TO-DO item well down my list of priorities).

The Kyocera P5021cdn allegedly isn’t the cheapest to run, but it promised a fairly pain-free experience on Linux and Unix. I figured I’d give it a shot. These are some notes I used to set the thing up. I want to move it to a different part of the network ultimately, but we’ll see what the cretinous Windows laptop my father uses will let us do; for now it shares that Ethernet VLAN with the WF7510 and his laptop, and I’ll just hop over the network to access it.

Getting the printer’s IP and MAC address

The menu on the printer does not tell you this information. There is, however, a Printer Status menu item in the top-panel menu. Tell it to print the status page and you’ll get a nice colour page with lots of information about the printer, including its IPv4 and IPv6 addresses.

Web interface

If you want to configure the thing further, you need a web browser. Visit the printer’s IP address in your browser and you’re greeted with Command Centre RX. Out of the box, the username and password were Admin and Admin (capitalised A).

Setting up CUPS

The printer “driver” off the Kyocera website is a massive 400MB zip file, because they bundled up .deb and .rpm packages for every distribution they officially support together in one file. Someone needs to introduce them to reprepro and its dnf-equivalent. That said, you have a choice… if you pick a random .deb out of that blob and manually unpack it somewhere (use ar x on it; you’ll see data.tar.xz or something, unpack that and you’ve got your package files), you’ll find the .ppd file you’ll need.
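Here's that ar/tar dance as a self-contained demo; a scratch archive stands in for the real Kyocera .deb (with the genuine package, only the ar x, tar and find steps apply):

```shell
# build a toy .deb-shaped archive so the example runs end-to-end
workdir=$(mktemp -d) && cd "${workdir}"
mkdir -p pkg/usr/share/ppd
echo '*PPD-Adobe: "4.3"' > pkg/usr/share/ppd/example.ppd
tar -cJf data.tar.xz -C pkg usr
ar rc driver.deb data.tar.xz       # a .deb is just an ar(1) archive

ar x driver.deb                    # spills out data.tar.xz (and friends)
tar -xf data.tar.xz                # unpack the payload
find usr -name '*.ppd'             # locate the PPD
```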

Or, you can do a search and realise that the Arch Linux guys have done the hard work for you. Many thanks guys (and girls… et al.)!

Next puzzle is figuring out the printer URI. Turns out the printer calls itself lp1… so the IPP URI you should use is http://<IP>:631/ipp/lp1.

I haven’t put the thing fully through its paces, and I note the cartridges are down about 4% from those two prints (the status page and the CUPS test print), but often the initial cartridges are just “starter” cartridges and that the replacements often have a lot more toner in them. I guess time will tell on their longevity (and that of the imaging drum).

May 26 2021

So, recently I misplaced the headset adaptor I use for my aging ZTE T83… which is getting on nearly 6 years old now. I had the parts on hand to make a new adaptor, so whipped up a new one, but found the 3.5mm plug would not stay in the socket.

Evidently, this socket has reached its rated number of insert/remove cycles, and will no longer function reliably. I do like music on the go, and while I’m no fan of Bluetooth, it is a feature my phone supports. I’ve also been hacking an older Logitech headset I have so that I can re-purpose it for use at work, but so far, it’s been about 15 months since I did any real work in the office. Thanks to China’s little gift, I’ve been working at home.

At work, I was using the Logitech H800, which did both USB and Bluetooth. Handy… but one downside: it didn’t do both at once; you selected the mode via a slider switch on the back of one of the ear cups. The other downside is that being an “open ear” design, it tended to leak sound, so my colleagues got treated to the sound track of my daily work.

My father now uses that headset since he needed one for video conferencing (again, thank-you China) and it was the best-condition headset I had on hand. I switched to using a now rather tatty-looking G930 before later getting an ATH-G1WL, which is doing the task at home nicely. The ATH-G1WL is better all-round for a wireless USB headset, but it’s a one-trick pony: it just does USB audio. It does it well, better than anything else I’ve used, but that’s all it does. Great for home, where I may want better fidelity and for applications that don’t like asymmetric sample rates, but useless with my now Bluetooth-only phone.

I had a look around, and found the Zone Wireless headset… I wondered how it stacked up against the H800. Double the cost, is it worth it?

Firstly, my environment: I’m mostly running Linux, but I will probably use this headset with Android a lot… maybe OpenBSD. The primary use case will be mobile devices, so my existing Android phone, and a Samsung Active3 8″ tablet I have coming. The fact this unit, like the H800, does both Bluetooth and USB is an attractive feature. Another interesting advertised feature is that it can be simultaneously connected to both, unlike the H800 which is exclusively one or the other.

First impressions

So, it turned up today. In the box was a USB-C cable (probably the first real use I have for such a cable), a USB-A to USB-C adaptor (for all you young whipper-snappers with USB-C ports exclusively), the headset itself, the USB dongle, and a bag to stow everything in.

Interestingly, each of these has a set-up guide. Ohh, and at the time of writing, yes, there are 6 links titled “Setup Guide (PDF)”… the bottom-right one is the one for the headset (guess how I discovered that?). Amongst those is a set-up guide for the bag. (Who’d have thought some fabric bag with a draw-string closure needed a set-up guide?) I guess they’re aiming this at the Pointy Haired Boss.

Many functions are controlled using an “app” installed on a mobile device. I haven’t tried this as I suspect Android 4.1 is too old. Maybe I can look at that when the tablet turns up, as it should be recent enough. It’d be nice to duplicate this functionality on Linux, but ehh… enough of it works without.

Also unlike the H800… there’s nowhere on the headset to stash the dongle when not in use. This is a bit of a nuisance, but they do provide the little bag to stow it in. The assumption is I guess that it’ll permanently occupy a USB port, since the same dongle also talks to their range of keyboards and mice.

USB audio functionality

I had the Raspberry Pi 3 running as a DAB+ receiver, and Triple M Classic Rock had The Beatles’ Sgt. Pepper’s Lonely Hearts Club Band album on… so I plugged the dongle in to see how they compared with my desktop speakers (plugged in to the “headphone” jack). Now this isn’t the best test of sound quality for two reasons: (1) this DAB+ station is broadcasting 64kbps HE-AAC, and (2) the “headphone” jack on the Pi is hardly known for high fidelity, but it gave me a rough idea.

Audio quality was definitely reasonable. No better or worse than the H800. I haven’t tried the microphone yet, but it looks as if it’s on par with the H800 as well. Like every other Logitech headset I’ve owned to date, it too forces asymmetric sample rates; if you’re looking at using JACK, consider something else:

stuartl@vk4msl-pi3:~ $ cat /proc/asound/Wireless/stream0
Logitech Zone Wireless at usb-3f980000.usb-1.4, full speed : USB Audio

Playback:
  Status: Running
    Interface = 2
    Altset = 1
    Packet Size = 192
    Momentary freq = 48000 Hz (0x30.0000)
  Interface 2
    Altset 1
    Format: S16_LE
    Channels: 2
    Endpoint: 3 OUT (NONE)
    Rates: 48000
    Bits: 16

Capture:
  Status: Stop
  Interface 1
    Altset 1
    Format: S16_LE
    Channels: 1
    Endpoint: 3 IN (NONE)
    Rates: 16000
    Bits: 16

The control buttons seem to work, and there’s even a HID device appearing under Linux, but testing with xev reveals no reaction when I press the up/down/MFB buttons.

Bluetooth functionality

With the dongle plugged in, I reached for my phone, turned on its Bluetooth radio, pressed the power button on the headset for a couple of seconds, then told my phone to go look for the device. It detected the headset and paired straight away. Fairly painless, as you’d expect, even given the ancient Android device it was paired with. (Bluetooth 5 headset… meet Bluetooth 3 host!)

I then tried pulling up some music… the headset immediately switched streams, I was now hearing Albert Hammond – Free Electric Band. Hit pause, and I was back to DAB+ on the Raspberry Pi.

Yes, it was connected to both the USB dongle and the phone, which was fantastic. One thing it does NOT do though, at least out-of-the-box, is “mix” the two streams. Great for telephone calls I suppose, but forget listening to something in the background via your computer whilst you converse with somebody on the phone.

The audio quality was good though. Some cheaper Bluetooth headsets often sound “watery” to my hearing (probably the audio CODEC), which is why I avoided them prior to buying the H800, the H800 was the first that sounded “normal”, and this carries that on.

I’m not sure what the microphone sounds like in this mode. I suspect with my old phone, it’ll drop back to the HSP profile, which has an 8kHz sample rate, no wideband audio. I’ll know more when the tablet turns up as it should be able to better put the headset through its paces.

Noise cancellation

One nice feature of this headset is that unlike the H800, this is a closed-ear design which does a reasonable amount of passive noise suppression. So likely will leak sound less than the H800. Press the ANC button, and an announcement reports that noise cancellation is now ON… and there’s a subtle further muffling of outside noise.

It won’t pass AS/NZS:1270, but it will at least mean I’m not turning the volume up quite so loud, which is better for my hearing anyway. Doing this is at the cost of about an hour’s battery life apparently.

Left/right channel swapping

Another nice feature, this is the first headset I’ve owned where you can put the microphone on either side you like. Put the headset on either way around, flip the microphone to where your mouth is: as you pass roughly the 15° point on the boom, the channels switch so left/right channels are the correct way around for the way you’re wearing it.

This isn’t a difficult thing to achieve, but I have no idea why more companies don’t do it. There seems to be this de facto standard of microphone/controls on the left, which is annoying, as I prefer the microphone and controls on the right. Some headsets (Logitech wired USB) were right-hand side only, but this puts the choice in the user’s hands. This should be encouraged.

Verdict

Well, it’s pricey, but it gets the job done and has a few nice features to boot. I’ll be doing some more testing when more equipment turns up, but it seems to play nice with what I have now.

The ability to switch between USB and Bluetooth sources is welcome, it’d be nice to have some software mixing possible, but that’s not the end of the world, it’s an improvement nonetheless.

Audio quality seems decent enough, with playback sample rates able to match DVD audio sample rates (at 16-bits linear PCM). Microphone sample rate should be sufficient for wideband voice (but not ultra-wideband or fullband).

It’s nice to be able to put the microphone on the side of my choosing rather than having that dictated to me.

The noise cancellation is a nice feature, and one I expect to make more use of in an open-plan environment.

The asymmetric record/playback sample rates might be a bit of a nuisance if you use software that expects these to be symmetric.

Somewhere to stash the dongle on the headset would’ve been nicer than having a carry bag.

It’d be nice if there was some sort of protocol spec for the functions offered in the “app” so that those who cannot or choose not to run it, can still access those functions.

Apr 26 2021
 

Recently, in spite of my attempts to ensure my Internet connection would remain reliable throughout adverse conditions, I discovered that a simple power outage basically left the ohh-so-wonderful HFC NBN NTD blinking and boot-looping helplessly.

In the last major storm event, the PSTN land-line was the only way we got a phone service. Sadly I was not geared up to test whether ADSL worked at that time, but the PSTN did, which was good because Telstra’s mobile network didn’t!

Armed with this knowledge, I decided to protect myself. My choices for an Internet link here are 4G and NBN. That does not give me much hope in a major calamity, but you know, do the best you can. At least in simple black-outs, 4G should stay up. 4G exclusively is too expensive, especially for a connection comparable to the NBN link I have, so the next best thing is to set up a back-up link using 4G. Since local towers may be down-and-out, the best hope I have is to put the 4G antennas up as high as I possibly can. I looked at possible options, and one locally-produced option I stumbled on is the Telco Electronics T1. This is an outdoor-rated 4G router, powered using PoE. The PoE scheme is simple: 24V DC nominal voltage, with the blue pair (pins 4 & 5) carrying the positive leg, and the brown pair (7 & 8) carrying the negative.

Talking with the vendor, I discovered that while these things can run down to 12V, they don’t recommend it. I guess I²R losses are a big factor there: CAT5e isn’t known for its power-carrying capability. My thinking, since my system is all 12V, is to simply run a 15A-rated DC cable alongside the Ethernet cable up to my bedroom; from there I can split off a few 12V feeds: one for my 8-port switch, one for my access point, and one going to the 4G router.
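To get a feel for why the higher voltage helps, here’s a rough sanity check of those I²R losses. The ~0.084 Ω/m per 24 AWG conductor figure is a nominal assumption, as are the 3.6 W load and 20 m cable length:

```python
# Rough estimate of voltage drop and power lost in a PoE run over CAT5e.
# Assumes ~0.084 ohm/m per 24 AWG conductor (nominal, not measured), with
# two conductors paralleled per leg as in the T1's 4&5 / 7&8 scheme.

def poe_loss(v_supply, p_load_w, length_m, ohm_per_m=0.084, pairs_per_leg=2):
    r_loop = 2 * length_m * ohm_per_m / pairs_per_leg  # out and back
    i = p_load_w / v_supply       # first-order approximation of the current
    v_drop = i * r_loop
    p_loss = i * i * r_loop
    return v_drop, p_loss

# Same assumed ~3.6 W load at 12 V versus 24 V over a 20 m run:
for v in (12.0, 24.0):
    drop, loss = poe_loss(v, 3.6, 20.0)
    print('%4.1f V supply: %.3f V drop, %.3f W lost' % (v, drop, loss))
```

Doubling the supply voltage halves the current and so quarters the I²R loss, which is presumably why the vendor steers people toward 24V.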

Since the router expects 24V, I’ll use a boost converter so that the “PoE” run is as short as practical. I found an inexpensive 24V boost converter which could tolerate input voltages as low as 3.3V and input currents up to 5A. Mount this into a little wall-plate box with a couple of RJ-45 jacks and a barrel jack for the DC input, and we’d have a quick and easy boost converter.

I won’t put the wiring diagram up because honestly, it’s pretty straightforward! I haven’t tried running an Ethernet signal through this, but I’m confident it’ll work just fine. It does however power the T1 beautifully… the T1 drawing about 150mA when running at 14.4V (which is what my bench supply was set to). Some things I should possibly add would be fuses on the input and output: 1A on the input, 500mA on the output. For now I’ll just wing it. I’ll probably put the fuse at the socket in my room. There’s plenty of room to add this to the enclosure as it is now.
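On fuse sizing, a quick back-of-envelope check of the converter’s input current. The 85% efficiency here is my round-number assumption, not a datasheet figure:

```python
# Back-of-envelope boost converter input current, for fuse sizing.
# Converter efficiency of 85% is an assumed figure, not from a datasheet.

def input_current(v_in, v_out, i_out, efficiency=0.85):
    """Input current drawn for a given output load, assuming P_out = eff * P_in."""
    return (v_out * i_out) / (efficiency * v_in)

# Worst case the proposed 500 mA output fuse allows, with a 12 V input:
print('%.2f A' % input_current(12.0, 24.0, 0.5))
```

In practice the measured draw (~150mA at 14.4V, so roughly 2.2W in) is far below either fuse rating, but the maths suggests the input fuse wants a little headroom over twice the output fuse’s rating if the output is ever run near its limit.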

The business end of the PoE injector
Wiring job inside the enclosure.

Apr 23 2021
 

So, about 10 years ago, I started out as a contractor with a local industrial automation company, helping them integrate energy meters into various energy management systems.

Back then, they had an in-house, self-managed corporate email system built on Microsoft Small Business Server. It worked, mostly, but had the annoyance of being a pariah regarding Internet standards… begrudgingly speaking SMTP to the outside world and mangling RFC822 messaging left, right and centre any chance it got. Ohh, and if you didn’t use its sister product, Microsoft Outlook, you weren’t invited!

Thankfully, as a contractor, I was largely insulated from that horror of a mail system… I had my own, running postfix + dovecot. That worked. Flawlessly, for my needs. Emails were stored in the Maildir format, so back-ups were easy, and if I couldn’t find something over IMAP, an SSH session into the server was all I needed to unleash grep on the mailstore. Prior to this, I’d used various combinations of Sendmail, Qmail and qpsmtpd for the MTA, and uw-imapd, Binc IMAP and finally dovecot for IMAP. I used SpamAssassin for mail filtering, configured the server with a variety of RBLs, and generally enjoyed a largely spam-free and easy life.

A year or two into this arrangement, my workplace’s server had a major meltdown… they apparently had hit some internal limit on the Microsoft server, and on receipt of a few messages, it just crashed. Restore from back-up, all good, then some more incoming emails, down she went. In a hurry for an alternative, they grabbed an old box, loaded it up with an Ubuntu server fork and configured Zarafa groupware which sat atop the postfix MTA.

It was chosen because it was feature-wise similar to the Microsoft option. Unfortunately, it was also architecturally similar, with the mailstore kept in MySQL using a bizarre schema that tried to replicate how Microsoft Exchange stored emails… meaning any header that Zarafa didn’t understand got stripped… and any character that didn’t fit in the mailstore’s LATIN1 character set got replaced with ?. Yes Mr. ????????? we’ll be onto that support request right away! One thing that I will say in Zarafa’s defence though, is that they at least supported IMAP (even if their implementation was primitive, it mostly “worked”), and calendaring was accessible using CalDAV.

That was the server I inherited as mail server administrator. We kept it going like that for a couple of years, but over time the growing pains became evident… we had to move… again. By this stage, we were using Thunderbird as our standard email client, with the Lightning extension for calendaring. On the fateful weekend of the 13-14th February, 2016, after a few weeks of research and testing, we moved again, to a combination of postfix, dovecot and SOGo providing calendaring/webmail. Like the server I had at home, email was stored in Maildir mail stores, which meant back-ups were as simple as rsync, selective restoring of a mail folder was easy, and we could do public folders. People could use any IMAP-compatible mail client: Thunderbird, Outlook, mutt, Apple Mail… whatever floated their boat.

I was quite proactive about the spam/malware situation… there was an extensive blacklist I maintained on that server to keep repeat offenders out. If you used a server at OVH or DigitalOcean, for example, your email was not welcome: connections to port 25/tcp were rejected. Anything that did get through was brought to my attention; I would pass the email through Spamcop for analysis and reporting, and any repeat offenders got added to the blacklist. I’d have liked to improve on the malware scanning… there are virus scanners that will integrate into Postfix and I was willing to set something up, but obviously needed management to purchase something suitable to do that.

Calendaring worked too… about the only thing that was missing was free/busy information, which definitely has its value, but it was workable. Worst case, in my opinion, we could have replaced SOGo with something else, but for now, it worked.

Fast forward to March 29th this year. A new company has bought up my humble workplace… and the big wigs have selected… Microsoft! No consultation. No discussion. The first note I got regarding this was a company-wide email stating we’d be migrating over the Easter long week-end.

I emailed back, pointing out a few concerns. I was willing to give Microsoft a second chance. For my end as an end user, I really only care about one thing: that the server communicates with the software on my computer using agreed “standard” protocols. For email, that is IMAP and SMTP. For calendaring, that is CalDAV. I really don’t care how it’s implemented, so long as it implements them properly. They do their end of the bargain by speaking an agreed protocol correctly… I’ll do my end by selecting a standards-compliant email/calendar client. All good.

I was assured that yes, it would do this. Specifically, I was shown this page as evidence. Okay, I thought, let’s see how it goes. Small Business Server was from 2003… surely Microsoft has learned something in 18 years. They’ve been a lot more open about things: adopting support for OpenDocument in Office, working with Novell on .NET, ditching Visual SourceSafe and embracing git so much that they acquired GitHub… surely things have improved.

Tuesday, 6th April, we entered a new world. A world where public folders were gone. A world with no calendaring. I’m guessing the powers that be have decided I do not need to see public folders; after all, RFC2342 has been around since the 90s… and even has people from Microsoft working on it! It’s possible they’re still migrating them from the old server, but 3 weeks seems a stretch.

Fine, I can live without public folders for now. Gone are the days where I interacted with customers on a regular basis and thus needed to file correspondence. The only mail folder I had much to do with of late was a public folder called Junk Mail which I used to monitor for spam to report and train the spam filter with.

Calendaring, I’ll admit I don’t use much… but to date, I have no CalDAV URI to configure my client with. I did some digging this morning. Initial investigations suggest that Microsoft still lives in the past. Best they can offer is a “look-but-not-touch” export. Useless.

But wait, there’s a web client! Yeah, great… let’s cram it all in a web browser. I already have to deal with Slack and its ugly bloat because voice chat doesn’t work in anything else. Then there’s the thorny issue of web-based email and why I think that is a bad idea. No, just because a web client works for you, or a particular brand of desktop client works for you, does not mean it will work for everybody.

The frustration from this end right now is that I’m trapped with nowhere to go. I’m locked in to supporting myself and Sam (I made a commitment to my dying grandmother that he’d be cared for) for another 10 years at least (who knows how long he’ll live for, he’s 7 now and Emma lived to nearly 18), so suicide isn’t an option right now, nor is simply quitting and living on the savings I have.

Most workplaces seem to be infected with this groupware-malware, so switching isn’t a viable option either. Office365 apparently has a REST API, so maybe that’s the next port of call: see if I can write a proxy to bolt on such an interface.

Apr 11 2021
 

So, for the past 12 months we’ve basically had a whirlwind of different “solutions” to the problem of contact tracing. The common theme amongst them seems to be that they’re all technology-based, and they all assume people carry a smartphone, registered with one of the two major app stores, and made in the last few years.

Quite simply, if you’re carrying an old 3G brick from 2010, you don’t exist to these “apps”. Our own federal government tried its hand in this space by taking OpenTrace (developed by the Singapore Government and released as GPLv3 open-source) and rebadging that (and re-licensing it!) as COVIDSafe.

This had very mild success to say the least, with contact tracers telling us that this fancy “app” wasn’t telling them anything new. So much focus has been put on signing into and out of venues.

To be honest, I’m fine with this until such time as we get this gift from China under control. The concept is not what irks me, it’s its implementation.

At first, it was done on paper. Good old fashioned pen and paper. Simple, nearly foolproof, didn’t crash, didn’t need credit, didn’t need recharging, didn’t need network coverage… except for two problems:

  1. people who can’t successfully operate a pen (Hmm, what went wrong, Education Queensland?)
  2. people who can’t take the process seriously (and an app solves this how?)

So they demanded that all venues use an electronic system. Fine, so we had a myriad of different electronic web-based systems, a little messy, but it worked, and for the most part, the venue’s system didn’t care what your phone was.

A couple could even take check-in by SMS. Still rocking a Nokia 3210 from 1998? Assuming you’ve found a 2G cell tower in range, you can still check in. Anything that can do at least 3G will be fine.

An advantage of this solution is that they then have your correct mobile phone number, and it’s a simple matter for Queensland Health to talk to Telstra/Optus/Vodafone/whoever to get your name and address from that… as a bonus, the cell sites may even have logs of your device’s IMEI roaming, so there’s more for the contact tracing kitty.

I only struck one venue out of dozens, whose system would not talk to my phone. Basically some JavaScript library didn’t load, and so it fell in a heap.

Until yesterday.

The Queensland Government has decided to foist its latest effort on everybody, the “Check-in Queensland” app. It is available on Google Play Store and Apple App Store, and their QR codes are useless without it. I can’t speak about the Apple version of the software, but for the Android one, it requires Android 5.0 or above.

Got an old reliable clunker that you keep using because it pulls the weakest signals and has a stand-by time that can be measured in days? Too bad. For me, my Android 4.1 device is not welcome. There are people out there for whom even that is a modern device.

Why not buy a newer phone? Well, when I bought this particular phone, back in 2015… I was looking for 3 key features:

  1. Make and receive (voice) telephone calls
  2. Send and receive short text messages
  3. Provide an Internet link for my laptop via USB/WiFi

Anything else is a bonus. It has a passable camera. It can (and does) play music. There’s a functional web browser (Firefox). There’s a selection of software I can download (via F-Droid). It Does What I Need It To Do. The battery still lasts 2-3 days between charges on stand-by. I’ve seen it outperform nearly every contemporary device on the market in areas with weak mobile coverage, and I can connect an external antenna to boost that if needed.

About the only thing I could wish for is open-source firmware and a replaceable battery. (Well, it sort-of is replaceable. Just a lot of frigging around to get at it. I managed to replace a GPS battery, so this should be doable.)

So, given this new check-in requirement, what is someone like me to do? Whilst the Queensland Government is urging people to install their application, they recognise that there are those of us who cannot because we lack anything that will run it. So they ask that venues have a device on hand that can be used to check visitors in if this situation arises.

My little “hack” simply exploits this:

# This file is part of pylabels, a Python library to create PDFs for printing
# labels.
# Copyright (C) 2012, 2013, 2014 Blair Bonnett
#
# pylabels is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# pylabels is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# pylabels.  If not, see <http://www.gnu.org/licenses/>.

import argparse
import labels
import time
from reportlab.lib.units import mm
from reportlab.graphics import shapes
from reportlab.lib import colors
from reportlab.graphics.barcode import qr

rows = 4
cols = 2
# Specifications for Avery C32028 2×4 85×54mm
specs = labels.Specification(210, 297, cols, rows, 85, 54, corner_radius=0,
        left_margin=17, right_margin=17, top_margin=31, bottom_margin=32)

def draw_label(label, width, height, checkin_id):
    label.add(shapes.String(
        42.5*mm, 50*mm,
        'COVID-19 Check-in Card',
        fontName="Helvetica", fontSize=12, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 46*mm,
        'The Queensland Government has chosen to make the',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 43*mm,
        'CheckIn QLD application incompatible with my device.',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 40*mm,
        'Please enter my contact details into your system',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 37*mm,
        'at your convenience.',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))

    label.add(shapes.String(
        5*mm, 32*mm,
        'Name: Joe Citizen',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        5*mm, 28*mm,
        'Phone: 0432 109 876',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        5*mm, 24*mm,
        'Email address:',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        84*mm, 20*mm,
        'myaddress+c%o@example.com' % checkin_id,
        fontName="Courier", fontSize=12, textAnchor='end'
    ))
    label.add(shapes.String(
        5*mm, 16*mm,
        'Home address:',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        15*mm, 12*mm,
        '12 SomeDusty Rd',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        15*mm, 8*mm,
        'BORING SUBURB, QLD, 4321',
        fontName="Helvetica", fontSize=12
    ))

    label.add(shapes.String(
        2, 2, 'Date: ',
        fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        10*mm, 2, 12*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        22.5*mm, 2, '-', fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        24*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        30.5*mm, 2, '-', fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        32*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        40*mm, 2, 'Time: ',
        fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        50*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        56.5*mm, 2, ':', fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        58*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))

    label.add(shapes.String(
        10*mm, 5*mm, 'Year',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        24*mm, 5*mm, 'Month',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        32*mm, 5*mm, 'Day',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        50*mm, 5*mm, 'Hour',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        58*mm, 5*mm, 'Minute',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))

    label.add(qr.QrCodeWidget(
            '%o' % checkin_id,
            barHeight=12*mm, barWidth=12*mm, barBorder=1,
            x=73*mm, y=0
    ))

# Grab the arguments
OCTAL_T = lambda x : int(x, 8)
parser = argparse.ArgumentParser()
parser.add_argument(
        '--base', type=OCTAL_T,
        default=(int(time.time() / 86400.0) << 8)
)
parser.add_argument('--offset', type=OCTAL_T, default=0)
# nargs='?' makes the positional optional, so the default of 1 actually applies
parser.add_argument('pages', type=int, nargs='?', default=1)
args = parser.parse_args()

# Figure out cards per sheet (max of 256 cards per day)
cards = min(rows * cols * args.pages, 256)

# Figure out check-in IDs
start_id = args.base + args.offset
end_id = start_id + cards
print ('Generating cards from %o to %o' % (start_id, end_id))

# Create the sheet.
sheet = labels.Sheet(specs, draw_label, border=True)

sheet.add_labels(range(start_id, end_id))

# Save the file and we are done.
sheet.save('checkin-cards.pdf')
print("{0:d} cards(s) output on {1:d} page(s).".format(sheet.label_count, sheet.page_count))

That script (which may look familiar) generates up to 256 check-in cards. The check-in cards are business-card sized and look like this:

That card has:

  1. the person’s full name
  2. a contact telephone number
  3. an email address with a unique sub-address component for verification purposes (compatible with services that use + for sub-addressing like Gmail)
  4. home address
  5. date and time of check-in (using ISO-8601 date format)
  6. a QR code containing a “check-in number” (which also appears in the email sub-address)

Each card has a unique check-in number (seen above in the email address and as the content of the QR code) which is derived from the number of days since 1st January 1970 and an 8-bit sequence number, so we can generate up to 256 cards a day. The number is just meant to be unique to the person generating them; two people using this script can, and likely will, generate cards with the same check-in ID.
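That derivation can be sketched in a few lines (the function names here are mine, not from the script; the octal rendering matches the `'%o'` formats used above):

```python
import time

def make_checkin_id(seq, now=None):
    """Days since 1970-01-01 in the high bits, 8-bit sequence number below."""
    days = int((now if now is not None else time.time()) / 86400)
    assert 0 <= seq < 256    # at most 256 cards per day
    return (days << 8) | seq

def split_checkin_id(checkin_id):
    """Recover (days-since-epoch, sequence number) from a check-in ID."""
    return checkin_id >> 8, checkin_id & 0xFF

# IDs are rendered in octal on the cards and in the email sub-address:
cid = make_checkin_id(5, now=86400 * 18000)   # hypothetical day 18000, card 5
print('%o' % cid)
```

Splitting the ID back out is what lets you check an incoming email’s sub-address against your log: the high bits tell you which day’s batch it came from, the low byte which card.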

I actually added the QR code after I printed off a batch (thought of the idea too late). Maybe the next batch will have the QR code. This can be used with a phone app of your choosing (e.g. maybe use BarcodeScanner to copy the check-in number to the clip-board then paste it into a spreadsheet, or make your own tool) to add other data. In my case, I’ll use a paper system:

The script that generates those is here:

# This file is part of pylabels, a Python library to create PDFs for printing
# labels.
# Copyright (C) 2012, 2013, 2014 Blair Bonnett
#
# pylabels is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# pylabels is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# pylabels.  If not, see <http://www.gnu.org/licenses/>.

import argparse
import labels
import time
from reportlab.lib.units import mm
from reportlab.graphics import shapes
from reportlab.lib import colors

rows = 4
cols = 2
# Specifications for Avery C32028 2×4 85×54mm
specs = labels.Specification(210, 297, cols, rows, 85, 54, corner_radius=0,
        left_margin=17, right_margin=17, top_margin=31, bottom_margin=32)

def draw_label(label, width, height, checkin_id):
    label.add(shapes.String(
        42.5*mm, 50*mm,
        'COVID-19 Check-in Log',
        fontName="Helvetica", fontSize=12, textAnchor='middle'
    ))

    label.add(shapes.Rect(
        1*mm, 3*mm, 20*mm, 45*mm,
        fillColor=colors.lightgrey,
        strokeColor=None
    ))
    label.add(shapes.Rect(
        41*mm, 3*mm, 28*mm, 45*mm,
        fillColor=colors.lightgrey,
        strokeColor=None
    ))

    for row in range(3, 49, 5):
        label.add(shapes.Line(1*mm, row*mm, 84*mm, row*mm, strokeWidth=0.5))
    for col in (1, 21, 41, 69, 84):
        label.add(shapes.Line(col*mm, 48*mm, col*mm, 3*mm, strokeWidth=0.5))

    label.add(shapes.String(
        2*mm, 44*mm,
        'In',
        fontName="Helvetica", fontSize=8
    ))

    label.add(shapes.String(
        22*mm, 44*mm,
        'Check-In #',
        fontName="Helvetica", fontSize=8
    ))

    label.add(shapes.String(
        42*mm, 44*mm,
        'Place',
        fontName="Helvetica", fontSize=8
    ))

    label.add(shapes.String(
        83*mm, 44*mm,
        'Out',
        fontName="Helvetica", fontSize=8, textAnchor='end'
    ))

# Grab the arguments
parser = argparse.ArgumentParser()
# nargs='?' makes the positional optional, so the default of 1 actually applies
parser.add_argument('pages', type=int, nargs='?', default=1)
args = parser.parse_args()

cards = rows * cols * args.pages

# Create the sheet.
sheet = labels.Sheet(specs, draw_label, border=True)

sheet.add_labels(range(cards))

# Save the file and we are done.
sheet.save('checkin-log-cards.pdf')
print("{0:d} cards(s) output on {1:d} page(s).".format(sheet.label_count, sheet.page_count))

When I see one of these Check-in Queensland QR codes, I simply pull out the log card, a blank check-in card, and a pen. I write the check-in number from the blank card (visible in the email address) into my log along with the date/time and place, write the same date/time on the blank card, and hand that to the person collecting the details.

They can write that into their device at their leisure, and it saves time not having to spell it all out. As for me, I just have to remember to write the exit time. If Queensland Health come a ringing, I have a record of where I’ve been on hand… or if I receive an email, I can use the check-in number to validate that this is legitimate, or even tell if a venue has on-sold my personal details to an advertiser.

I guess it’d be nice if the Queensland Government could at least add a form to their fancy pages that their flashy QR codes send people to, so that those who do not have the application can still at least check-in without it, but that’d be too much to ask.

In the meantime, this at least meets them half-way, and hopefully does so in a way that minimises contact and keeps things efficient.