Sep 19 2021
 

I stumbled across this article regarding the use of TCP over sensor networks. Now, TCP has been done with AX.25 before, and generally suffers greatly from packet collisions. Apparently (I haven’t read more than the first few paragraphs of this article), TCP implementations can be tuned to improve performance in such networks, which may mean TCP can be made more practical on packet radio networks.

Prior to seeing this, I had thought 6LoWHAM would “tunnel” TCP over a conventional AX.25 connection using I-frames and S-frames to carry TCP segments with some header prepended so that multiple TCP connections between two peers can share the same AX.25 connection.

I’ve printed it out, and made a note of it here… when I get a moment I may give this a closer look. Ultimately I still think multicast communications is the way forward here: radio inherently favours one-to-many communications due to it being a shared medium, but there are definitely situations in which being able to do one-to-one communications applies; and for those, TCP isn’t a bad solution.

Comments having read the article

So, I had a read through it. The take-aways seem to be these:

  • TCP was historically seen as “too heavy” because the MCUs of the day (circa 2002) lacked the RAM needed for TCP data structures. More modern MCUs have orders of magnitude more RAM (32KiB vs 512B) today, and so this is less of an issue.
    • For 6LoWHAM, intended for single-board computers running Linux, this will not be an issue.
  • A lot of early experiments with TCP over sensor networks tried to set a conservative MSS based on the actual link MTU, leading to TCP headers dominating the lower-level frame. Leaning on 6LoWPAN’s ability to fragment IP datagrams led to much improved performance.
    • 6LoWHAM uses AX.25 which can support 256-byte frames; vs 128-byte 802.15.4 frames on 6LoWPAN. Maybe gains can be made this way, but we’re already a bit ahead on this.
  • Much of the document considered battery-powered nodes, in which the radio transceiver was powered down completely for periods of time to save power, and the effects this had on TCP communications. Optimisations were able to be made that reduced the impact of such power-down events.
    • 6LoWHAM will likely be using conventional VHF/UHF transceivers. Hand-helds often implement a “battery saver” mode — often this is configured inside the device with no external control possible (thus it will not be possible for us to control, or even detect, when the receiver is powered down). Mobile sets often do not implement this, and you do not want to frequently power-cycle a modern mobile transceiver at the sorts of rates that 802.15.4 radios get power-cycled!
  • Performance in ideal conditions favoured TCP, with the article authors managing to achieve 30% of the raw link bandwidth (75kbps of a theoretical 250kbps maximum), with the underlying hardware being fingered as a possible cause for performance issues.
    • Assuming we could manage the same percentage; that would equate to ~360bps on 1200-baud networks, or 2.88kbps on 9600-baud networks.
  • With up to 15% packet loss, TCP and CoAP (its nearest contender) can perform about the same in terms of reliability.
  • A significant factor in effective data rate is CSMA/CA. aioax25 effectively does CSMA/CA too.

It’s interesting to note they didn’t try to do anything special with the TCP headers (e.g. Van Jacobson compression). I’ll have to have a look at TCP and see just how much overhead there is in a typical segment, and whether the roughly double MTU of AX.25 will help or not: the article recommends using an MSS of approximately 3× the link MTU for “fair” conditions (so ~384 bytes), and 5× in “good” conditions (~640 bytes).

It’s worth noting a 256-byte AX.25 frame takes ~2 seconds to transmit on a 1200-baud link. You really don’t want to make that a habit! So smaller transmissions using UDP-based protocols may still be worthwhile in our application.
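To put some rough numbers on that, here’s a quick back-of-the-envelope sketch (Python, since that’s what I use elsewhere; the ~20 bytes of AX.25 framing overhead is an approximation and HDLC bit-stuffing is ignored):

# Rough airtime and header-overhead arithmetic for AX.25 at 1200 baud.
# The 20-byte framing overhead is an approximation; bit-stuffing is ignored.

def airtime_seconds(payload_bytes, overhead_bytes=20, baud=1200):
    """Approximate time to send one frame: 8 bits per byte over the link rate."""
    return (payload_bytes + overhead_bytes) * 8 / baud

print("256-byte frame at 1200 baud: ~%.1f s" % airtime_seconds(256))             # ≈ 1.8 s
print("256-byte frame at 9600 baud: ~%.2f s" % airtime_seconds(256, baud=9600))

# Uncompressed IPv6 (40 bytes) + TCP (20 bytes, no options) headers leave:
for mtu in (128, 256):
    print("MTU %3d: %3d bytes of payload after IPv6+TCP headers" % (mtu, mtu - 60))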

Sep 16 2021
 

So, one evening I was having difficulty sleeping, so, like some people count sheep, I turned to a different problem… 6LoWPAN relies on all nodes sharing a common “context”. This is used as a short-hand to “compress” the rather lengthy IPv6 addresses that allow two nodes to communicate with one another, by substituting particular IPv6 address subnets with a “context number” which can be represented in 4 bits.

Fundamentally, this identifier is a stand-in for the subnet address. This was a sticking-point with earlier thoughts on 6LoWHAM: how do we agree on what the context should be? My thought was, each network should be assigned a 3-bit network ID. Why 3-bit? Well, this means we can reserve some context IDs for other uses. We use SCI/DCI values 0-7 and leave 8-15 reserved; I’ll think of a use for the other half of the contexts.

The nodes in a “group” also share an SSID; the “group” SSID. This is an SSID that receives all multicast traffic for the nodes on the immediate network. This might be just a generic MCAST-n SSID, where n is the network ID; or it could be a call-sign for a local network coordinator, e.g. I might decide my network will use VK4MSL-0 for my group SSID (network 0). Probably nodes that are listening on a custom SSID should still listen for MCAST-n traffic, in case a node is attempting to join without knowing the group SSID.

AX.25 allows for 16 SSIDs per call-sign, so what about the other 8? Well, if we have a convention that we reserve SSIDs 0-7 for groups; that leaves 8-15 for stations. This can be adjusted for local requirements where needed, and would not be enforced by the protocol.
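To summarise the conventions proposed so far in code form (nothing here is finalised, and the helper names are just mine):

# Sketch of the proposed conventions:
#   - 3-bit network IDs map directly onto 6LoWPAN contexts (SCI/DCI) 0-7;
#     contexts 8-15 stay reserved for future use
#   - SSIDs 0-7 are "group" SSIDs, 8-15 are individual stations
#   - the generic group call-sign for network n is MCAST-n

def context_for_network(network_id):
    """Return the SCI/DCI value for a 3-bit network ID."""
    if not 0 <= network_id <= 7:
        raise ValueError("network ID must fit in 3 bits")
    return network_id               # identity mapping; 8-15 reserved

def default_group_callsign(network_id):
    return "MCAST-%d" % context_for_network(network_id)

def is_group_ssid(ssid):
    return 0 <= ssid <= 7           # 8-15 are for individual stations

print(default_group_callsign(0))    # MCAST-0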

Joining a network

How does a new joining node “discover” this network? Firstly, the first node in an area is responsible for “forming” the network — a node which “forms” a network must be manually programmed with the local subnet, group SSID and other details. Ensuring all nodes with “formation” capability for a given network are configured consistently is beyond the scope of 6LoWHAM.

When a node joins, at first it only knows how to talk to its immediate neighbours. It can use MCAST-n to talk to those neighbours using the fe80::/64 subnet. Anyone in earshot can potentially reply. Nodes simply need to be listening for traffic on a reserved UDP port (maybe 61631; there’s an optimisation in 6LoWPAN for 61616-61631). The joining node can ask for the network context, maybe authenticate itself if needed (using asymmetric cryptography – digital signatures, no encryption).

The other nodes presumably already know the answer, but if they all replied simultaneously, it would lead to a pile-up. Nodes should wait a randomised delay, and if nothing is heard in that period, they then transmit what they know of the context for the given network ID.
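A minimal sketch of that suppression logic, assuming an asyncio-style node (the delay bounds and the send_reply/heard_reply hooks are placeholders of my own, not anything specified here):

import asyncio
import random

async def answer_context_query(network_id, send_reply, heard_reply):
    """Wait a random delay; only reply if nobody else already has.

    send_reply / heard_reply are placeholders for whatever the real node
    implementation provides for transmitting and monitoring the channel.
    """
    await asyncio.sleep(random.uniform(0.0, 5.0))   # spread replies out
    if not heard_reply(network_id):                 # did someone beat us to it?
        await send_reply(network_id)                # send what we know of the context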

The context information sent back should include:

  • Group SSID
  • Subnet prefix
  • (Optional) Authentication data:
    • Public key of the forming network (joining node will need to maintain its own “trust” database)
    • Hash of all earlier data items
    • Digital signature signed with included public key

Once a node knows the context for its chosen network, it is officially “joined”.
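To make that concrete, the reply might be modelled something like the following; the field names and types are entirely my own placeholders, only the list of items comes from above:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ContextReply:
    group_ssid: str                       # e.g. "MCAST-0" or "VK4MSL-0"
    subnet_prefix: str                    # the prefix this context stands in for
    public_key: Optional[bytes] = None    # forming station's public key
    digest: Optional[bytes] = None        # hash over the earlier fields
    signature: Optional[bytes] = None     # signed with the included public key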

Routing to non-local endpoints

So, a node may wish to send a message to another node that’s not directly reachable. This is, after-all, the whole point of using a routing protocol atop AX.25. If we knew a route, we could encode it in the digipeater path, and use conventional AX.25 source routing. Nodes that know a reliable route are encouraged to do exactly that. But what if you don’t know your way around?

APRS uses WIDEN-n to solve this problem: it’s a dumb broadcast, but it achieves this aim beautifully. n just stands for the number of hops, and it gets decremented with each hop. Each digipeater inserts itself into the path as it sends the frame on. APRS specs normally call for everyone to broadcast all at once, pile-up be damned. FM capture effect might help here, but I’m not sure it’s a good policy. Simple, but in our case, we can do a little better.

We only need to broadcast far enough to reach a node that knows a route. We’ll use ROUTE-n to stand for a digipeater that is no more than n hops away from the station listed in the AX.25 destination field. n must be greater than 0 for a message to be relayed. AX.25 2.0 limits the number of digipeaters to 8 (and 2.2 to 2!), so naturally n cannot be greater than 8.

So we’ll have a two-tier approach.

Routing from a node that knows a viable route

If a node receives a ROUTE-n destination message and knows it has a good route that is n or fewer hops away from the target, it picks a randomised delay (maybe in the 0-5 second range), and if no reply is heard from another node in that time, it relays the message: the ROUTE-n is replaced by its own SSID, followed by the required digipeater path to reach the target node.

Routing from a node that does not know a viable route

In the case where a node receives this same ROUTE-n destination message, does not know a route, and hasn’t heard anyone else relay that same message, it should pick a randomised delay (5-10 second range), and if it still hasn’t heard the message relayed via a specific path in that time, do one of the following:

If n is greater than 1:

Substitute ROUTE-n in the digipeater path with its own SSID followed by ROUTE-(n-1) then transmit the message.

If n is 1 (or 0):

Substitute ROUTE-n with its own SSID (do not append ROUTE-0) then transmit the message.
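Putting the two cases together, the digipeater-path rewriting might look roughly like this; the frame and route-table representations are placeholders of my own, and the randomised delays/suppression described above are left out:

def rewrite_path(path, my_ssid, known_route):
    """Rewrite the digipeater path of a frame whose next unused hop is ROUTE-n.

    path        -- list of digipeater call-signs, containing a "ROUTE-n" entry
    my_ssid     -- this station's call-sign/SSID
    known_route -- list of hops to the target if we know one, else None
    """
    idx = next(i for i, hop in enumerate(path) if hop.startswith("ROUTE-"))
    n = int(path[idx].split("-", 1)[1])

    if known_route is not None and len(known_route) <= n:
        # We know a viable route: substitute ourselves plus the full path.
        new_path = path[:idx] + [my_ssid] + known_route
    elif n > 1:
        # No route known: pass the request on, one hop closer to the target.
        new_path = path[:idx] + [my_ssid, "ROUTE-%d" % (n - 1)]
    else:
        # n is 1 (or 0): just append ourselves, no ROUTE-0.
        new_path = path[:idx] + [my_ssid]

    if len(new_path) > 8:
        raise ValueError("AX.25 2.0 allows at most 8 digipeaters")
    return new_path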

Routing multicast traffic

Discovering multicast listeners

I’ll have to research MLD (RFC-3810 / RFC-4604), but that seems the sensible way forward from here.

Relaying multicast traffic

If a node knows of downstream nodes that ordinarily rely on it to contact the sender of a multicast message, and it knows the downstream nodes are subscribers to the destination multicast group, it should wait a randomised period, and forward the message on (appending its SSID in the digipeater path) to the downstream nodes.

Application thoughts

I think I have already written down some thoughts on what the applications for this system may be, but the other day I was looking around for “prior art” regarding one-to-many file transfer applications.

One such system that could be employed is UFTP. Yes, it mentions encryption, but that is an optional feature (and could be useful in emcomm situations). That would enable SSTV-style file sharing to all participants within the mesh network. Its ability to be proxied also lends itself to bridging to other networks like AMPRnet, D-Star packet, DMR and other systems.

Jun 20 2021
 

So, today on the radio I heard that from this Friday, our state government was “expanding” the use of their Check-in Queensland program. Now, since my last post on the topic, I have since procured a new tablet. The tablet was purchased for completely unrelated reasons, namely:

  1. to provide navigation assistance, current speed monitoring and positional logging whilst on the bicycle (basically, what my Garmin Rino-650 does)
  2. to act as a media player (basically what my little AGPTek R2 is doing — a device I’ve now outgrown)
  3. to provide a front-end for a SDR receiver I’m working on
  4. run Slack for monitoring operations at work

Since it’s a modern Android device, it happens to be able to run the COVID-19 check-in programs. So I have COVIDSafe and Check-in Queensland installed. For those to work though, I have to run my existing phone’s WiFi hotspot. A little cumbersome, but it works, and I get the best of both worlds: modern Android + my phone’s excellent cell tower reception capability.

The snag though comes when these programs need to access the Internet at times when using my phone is illegal. Queensland laws around mobile phone use changed a while back, long before COVID-19. The upshot was that, while people who hold “open” driver’s licenses may “use” a mobile phone (provided that they do not need to handle it to do so), anybody else may not “use” a phone for “any purpose”. So…

  • using it for talking to people? Banned. Even using “hands-free”? Yep, still banned.
  • using it for GPS navigation? Banned.
  • using it for playing music? Banned.

It’s a $1000 fine if you’re caught. I’m glad I don’t use a wheelchair: such mobility aids are classed as a “vehicle” under the Queensland traffic act, and you can be fined for “drink driving” if you operate one whilst drunk. So traffic laws that apply to “motor vehicles” also apply to non-“motor vehicles”.

I don’t have a driver’s license of any kind, and have no interest in getting one; my primary mode of private transport is by bicycle. I can’t see how I’d be granted permission to do something that someone on a learner’s permit or P1 provisional license is forbidden from doing. The fact that I’m not operating a “motor vehicle” does not save me: the drink-driving-in-a-wheelchair example above tells me that I, too, would be fined for riding my bicycle whilst drunk. Likely, the mobile phone rules apply to me too. Given this, I made the decision to not “use” a mobile phone on the bicycle “for any purpose”, with “any purpose” being anything that requires the device to be powered on.

If I’m going to be spending a few hours at the destination, and in a situation that may permit me to use the phone, I might carry it in the top-box turned off (not certain if this is permitted, but kinda hard to police), but if it’s a quick trip to the shops, I leave the mobile phone at home.

What’s this got to do with the Check-in Queensland application or my new shiny-shiny you ask? Glad you did.

The new tablet is a WiFi-only device… specifically because of the above restrictions on using a “mobile phone”. The day those restrictions get expanded to include the tablet, you can bet the tablet will be ditched when travelling as well. Thus, it receives its Internet connection via a WiFi access point. At home, that’s one of two Cisco APs that provide my home Internet service. No issue there.

If I’m travelling on foot, or as a passenger on someone else’s vehicle, I use the WiFi hot-spot function on my phone to provide this Internet service… but this obviously won’t work if I just ducked up the road on my bike to go get some grocery shopping done, as I leave the phone at home for legal reasons.

Now, the Check-in Queensland application does not work without an Internet connection, and bringing my own in this situation is legally problematic.

I can also think of situations where an Internet connection is likely to be problematic.

  • If your phone doesn’t have a reliable cell tower link, it won’t reliably connect to the Internet, Check-in Queensland will fail.
  • If your phone is on a pre-paid service and you run out of credit, your carrier will deny you an Internet service, Check-in Queensland will fail.
  • If your carrier has a nation-wide whoopsie (Telstra had one a couple of years back, Optus and Vodafone have had them too), you can find yourself with a very pretty but very useless brick in your hand. Check-in Queensland will fail.

What can be done about this?

  1. The venues could provide a WiFi service so people can log in to that, and be provided with limited Internet access to allow the check-in program to work whilst at the venue. I do not see this happening for most places.
  2. The Check-in Queensland application could simply record the QR code it saw, date/time, co-visitors, and simply store it on the device to be uploaded later when the device has a reliable Internet link.
  3. For those who have older phones (and can legally carry them), the requirement of an “application” seems completely unnecessary:
    1. Most devices made post-2010 can run a web browser capable of running an in-browser QR code scanner, and storage of the customer’s details can be achieved either through using window.localStorage or through RFC-6265 HTTP cookies. In the latter case, you’d store the details server-side, and generate an “opaque” token which would be stored on the device as a cookie. A dedicated program is not required to do the function that Check-in Queensland is performing.
    2. For older devices, pretty much anything that can access the 3G network can send and receive SMS messages. (Indeed, most 2G devices can… the only exception I know to this would be the Motorola MicroTAC 5200 which could receive but not send SMSes. The lack of a 2G network will stop you though.) Telephone carriers are required to capture and verify contact details when provisioning pre-paid and post-paid cellular services, so already have a record of “who” has been assigned which telephone number. So why not get people to text the 6-digit code that Check-In Queensland uses, to a dedicated telephone number? If there’s an outbreak, they simply contact the carrier (or the spooks in Canberra) to get the contact details.
  4. The Check-in Queensland application has a “business profile” which can be used for manual entry of a visitor’s details… hypothetically, why not turn this around? Scan a QR code that the visitor carries and provides. Such QR codes could be generated by the Check-in Queensland website, printed out on paper, then cut out to make a business-card sized code which visitors can simply carry in their wallets and present as needed. No mobile phone required! For the record, the Electoral Commission of Queensland has been doing this for our state and council elections for years.

It seems the Queensland Government is doing this fancy “app” thing “because we can”. Whilst I respect the need to effectively contact-trace, the truth is there’s no technical reason why “this” must be the implementation. We just seem to be playing a game of “follow the shepherd”. They keep trying to advertise how “smart” we are, why not prove it?

Apr 11 2021
 

So, for the past 12 months we’ve basically had a whirlwind of different “solutions” to the problem of contact tracing. The common theme amongst them seems to be they’re all technical-based, and they all assume people carry a smartphone, registered with one of the two major app stores, and made in the last few years.

Quite simply, if you’re carrying an old 3G brick from 2010, you don’t exist to these “apps”. Our own federal government tried its hand in this space by taking OpenTrace (developed by the Singapore Government and released as GPLv3 open-source) and rebadging that (and re-licensing it!) as COVIDSafe.

This had very mild success to say the least, with contact tracers telling us that this fancy “app” wasn’t telling them anything new. So much focus has been put on signing into and out of venues.

To be honest, I’m fine with this until such time as we get this gift from China under control. The concept is not what irks me, it’s its implementation.

At first, it was done on paper. Good old fashioned pen and paper. Simple, nearly foolproof, didn’t crash, didn’t need credit, didn’t need recharging, didn’t need network coverage… except for two problems:

  1. people who can’t successfully operate a pen (Hmm, what went wrong, Education Queensland?)
  2. people who can’t take the process seriously (and an app solves this how?)

So they demanded that all venues use an electronic system. Fine, so we had a myriad of different electronic web-based systems, a little messy, but it worked, and for the most part, the venue’s system didn’t care what your phone was.

A couple, even could take check-in by SMS. Still rocking a Nokia 3210 from 1998? Assuming you’ve found a 2G cell tower in range, you can still check in. Anything that can do at least 3G will be fine.

An advantage of this solution is that they then have your correct mobile phone number, and it’s a simple matter for Queensland Health to talk to Telstra/Optus/Vodafone/whoever to get your name and address from that… as a bonus, the cell sites may even have logs of your device’s IMEI roaming, so there’s more for the contact tracing kitty.

I only struck one venue out of dozens, whose system would not talk to my phone. Basically some JavaScript library didn’t load, and so it fell in a heap.

Until yesterday.

The Queensland Government has decided to foist its latest effort on everybody, the “Check-in Queensland” app. It is available on Google Play Store and Apple App Store, and their QR codes are useless without it. I can’t speak about the Apple version of the software, but for the Android one, it requires Android 5.0 or above.

Got an old reliable clunker that you keep using because it pulls the weakest signals and has a stand-by time that can be measured in days? Too bad. For me, my Android 4.1 device is not welcome. There are people out there for whom, even that, is a modern device.

Why not buy a newer phone? Well, when I bought this particular phone, back in 2015… I was looking for 3 key features:

  1. Make and receive (voice) telephone calls
  2. Send and receive short text messages
  3. Provide an Internet link for my laptop via USB/WiFi

Anything else is a bonus. It has a passable camera. It can (and does) play music. There’s a functional web browser (Firefox). There’s a selection of software I can download (via F-Droid). It Does What I Need It To Do. The battery still lasts 2-3 days between charges on stand-by. I’ve seen it outperform nearly every contemporary device on the market in areas with weak mobile coverage, and I can connect an external antenna to boost that if needed.

About the only thing I could wish for is open-source firmware and a replaceable battery. (Well, it sort-of is replaceable. Just a lot of frigging around to get at it. I managed to replace a GPS battery, so this should be doable.)

So, given this new check-in requirement, what is someone like me to do? Whilst the Queensland Government is urging people to install their application, they recognise that there are those of us who cannot because we lack anything that will run it. So they ask that venues have a device on hand that can be used to check visitors in if this situation arises.

My little “hack” simply exploits this:

# This file is part of pylabels, a Python library to create PDFs for printing
# labels.
# Copyright (C) 2012, 2013, 2014 Blair Bonnett
#
# pylabels is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# pylabels is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# pylabels.  If not, see <http://www.gnu.org/licenses/>.

import argparse
import labels
import time
from reportlab.lib.units import mm
from reportlab.graphics import shapes
from reportlab.lib import colors
from reportlab.graphics.barcode import qr

rows = 4
cols = 2
# Specifications for Avery C32028 2×4 85×54mm
specs = labels.Specification(210, 297, cols, rows, 85, 54, corner_radius=0,
        left_margin=17, right_margin=17, top_margin=31, bottom_margin=32)

def draw_label(label, width, height, checkin_id):
    label.add(shapes.String(
        42.5*mm, 50*mm,
        'COVID-19 Check-in Card',
        fontName="Helvetica", fontSize=12, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 46*mm,
        'The Queensland Government has chosen to make the',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 43*mm,
        'CheckIn QLD application incompatible with my device.',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 40*mm,
        'Please enter my contact details into your system',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))
    label.add(shapes.String(
        42.5*mm, 37*mm,
        'at your convenience.',
        fontName="Helvetica", fontSize=8, textAnchor='middle'
    ))

    label.add(shapes.String(
        5*mm, 32*mm,
        'Name: Joe Citizen',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        5*mm, 28*mm,
        'Phone: 0432 109 876',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        5*mm, 24*mm,
        'Email address:',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        84*mm, 20*mm,
        'myaddress+c%o@example.com' % checkin_id,
        fontName="Courier", fontSize=12, textAnchor='end'
    ))
    label.add(shapes.String(
        5*mm, 16*mm,
        'Home address:',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        15*mm, 12*mm,
        '12 SomeDusty Rd',
        fontName="Helvetica", fontSize=12
    ))
    label.add(shapes.String(
        15*mm, 8*mm,
        'BORING SUBURB, QLD, 4321',
        fontName="Helvetica", fontSize=12
    ))

    label.add(shapes.String(
        2, 2, 'Date: ',
        fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        10*mm, 2, 12*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        22.5*mm, 2, '-', fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        24*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        30.5*mm, 2, '-', fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        32*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        40*mm, 2, 'Time: ',
        fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        50*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))
    label.add(shapes.String(
        56.5*mm, 2, ':', fontName="Helvetica", fontSize=10
    ))
    label.add(shapes.Rect(
        58*mm, 2, 6*mm, 4*mm,
        fillColor=colors.white, strokeColor=colors.gray
    ))

    label.add(shapes.String(
        10*mm, 5*mm, 'Year',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        24*mm, 5*mm, 'Month',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        32*mm, 5*mm, 'Day',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        50*mm, 5*mm, 'Hour',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))
    label.add(shapes.String(
        58*mm, 5*mm, 'Minute',
        fontName="Helvetica", fontSize=6, fillColor=colors.gray
    ))

    label.add(qr.QrCodeWidget(
            '%o' % checkin_id,
            barHeight=12*mm, barWidth=12*mm, barBorder=1,
            x=73*mm, y=0
    ))

# Grab the arguments
OCTAL_T = lambda x : int(x, 8)
parser = argparse.ArgumentParser()
parser.add_argument(
        '--base', type=OCTAL_T,
        default=(int(time.time() / 86400.0) << 8)
)
parser.add_argument('--offset', type=OCTAL_T, default=0)
parser.add_argument('pages', type=int, default=1)
args = parser.parse_args()

# Figure out cards per sheet (max of 256 cards per day)
cards = min(rows * cols * args.pages, 256)

# Figure out check-in IDs
start_id = args.base + args.offset
end_id = start_id + cards
print ('Generating cards from %o to %o' % (start_id, end_id))

# Create the sheet.
sheet = labels.Sheet(specs, draw_label, border=True)

sheet.add_labels(range(start_id, end_id))

# Save the file and we are done.
sheet.save('checkin-cards.pdf')
print("{0:d} cards(s) output on {1:d} page(s).".format(sheet.label_count, sheet.page_count))

That script (which may look familiar) generates up to 256 check-in cards. The check-in cards are business card sized and look like this:

That card has:

  1. the person’s full name
  2. a contact telephone number
  3. an email address with a unique sub-address component for verification purposes (compatible with services that use + for sub-addressing like Gmail)
  4. home address
  5. date and time of check-in (using ISO-8601 date format)
  6. a QR code containing a “check-in number” (which also appears in the email sub-address)

Each card has a unique check-in number (seen above in the email address and as the content of the QR code) which is derived from the number of days since 1st January 1970 and an 8-bit sequence number; so we can generate up to 256 cards a day. The number is only meant to be unique to the person generating them: two people using this script can, and likely will, generate cards with the same check-in ID.
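For reference, reversing the scheme is trivial; a rough sketch of decoding a card’s octal check-in number back into its parts:

import datetime

def decode_checkin_id(octal_id):
    """Split an octal check-in number into its date and sequence components."""
    value = int(octal_id, 8)
    days, seq = value >> 8, value & 0xff
    date = datetime.date(1970, 1, 1) + datetime.timedelta(days=days)
    return date, seq

# e.g. a card generated today decodes back to today's date and a 0-255
# sequence number, matching the email sub-address printed on the card.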

I actually added the QR code after I printed off a batch (thought of the idea too late). Maybe the next batch will have the QR code. This can be used with a phone app of your choosing (e.g. maybe use BarcodeScanner to copy the check-in number to the clip-board then paste it into a spreadsheet, or make your own tool) to add other data. In my case, I’ll use a paper system:

The script that generates those is here:

# This file is part of pylabels, a Python library to create PDFs for printing
# labels.
# Copyright (C) 2012, 2013, 2014 Blair Bonnett
#
# pylabels is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# pylabels is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# pylabels.  If not, see <http://www.gnu.org/licenses/>.

import argparse
import labels
import time
from reportlab.lib.units import mm
from reportlab.graphics import shapes
from reportlab.lib import colors

rows = 4
cols = 2
# Specifications for Avery C32028 2×4 85×54mm
specs = labels.Specification(210, 297, cols, rows, 85, 54, corner_radius=0,
        left_margin=17, right_margin=17, top_margin=31, bottom_margin=32)

def draw_label(label, width, height, checkin_id):
    label.add(shapes.String(
        42.5*mm, 50*mm,
        'COVID-19 Check-in Log',
        fontName="Helvetica", fontSize=12, textAnchor='middle'
    ))

    label.add(shapes.Rect(
        1*mm, 3*mm, 20*mm, 45*mm,
        fillColor=colors.lightgrey,
        strokeColor=None
    ))
    label.add(shapes.Rect(
        41*mm, 3*mm, 28*mm, 45*mm,
        fillColor=colors.lightgrey,
        strokeColor=None
    ))

    for row in range(3, 49, 5):
        label.add(shapes.Line(1*mm, row*mm, 84*mm, row*mm, strokeWidth=0.5))
    for col in (1, 21, 41, 69, 84):
        label.add(shapes.Line(col*mm, 48*mm, col*mm, 3*mm, strokeWidth=0.5))

    label.add(shapes.String(
        2*mm, 44*mm,
        'In',
        fontName="Helvetica", fontSize=8
    ))

    label.add(shapes.String(
        22*mm, 44*mm,
        'Check-In #',
        fontName="Helvetica", fontSize=8
    ))

    label.add(shapes.String(
        42*mm, 44*mm,
        'Place',
        fontName="Helvetica", fontSize=8
    ))

    label.add(shapes.String(
        83*mm, 44*mm,
        'Out',
        fontName="Helvetica", fontSize=8, textAnchor='end'
    ))

# Grab the arguments
parser = argparse.ArgumentParser()
parser.add_argument('pages', type=int, default=1)
args = parser.parse_args()

cards = rows * cols * args.pages

# Create the sheet.
sheet = labels.Sheet(specs, draw_label, border=True)

sheet.add_labels(range(cards))

# Save the file and we are done.
sheet.save('checkin-log-cards.pdf')
print("{0:d} cards(s) output on {1:d} page(s).".format(sheet.label_count, sheet.page_count))

When I see one of these Check-in Queensland QR codes, I simply pull out the log card, a blank check-in card, and a pen. I write the check-in number from the blank card (visible in the email address) in my log with the date/time, place, and on the blank card, write the same date/time and hand that to the person collecting the details.

They can write that into their device at their leisure, and it saves time not having to spell it all out. As for me, I just have to remember to write the exit time. If Queensland Health come a ringing, I have a record of where I’ve been on hand… or if I receive an email, I can use the check-in number to validate that this is legitimate, or even tell if a venue has on-sold my personal details to an advertiser.

I guess it’d be nice if the Queensland Government could at least add a form to their fancy pages that their flashy QR codes send people to, so that those who do not have the application can still at least check-in without it, but that’d be too much to ask.

In the meantime, this at least meets them half-way, and hopefully does so in a way that ensures minimal contact and increases efficiency.

Dec 31 2020
 

So, this last 2 years, I’ve been trying to keep multiple projects on the go, then others come along and pile their own projects on top. It kinda makes a mess of one’s free time, including for things like keeping on top of where things have been put.

COVID-19 has not helped here, as it’s meant I’ve lugged a lot of gear that belongs to my workplace, or belongs at my workplace, home, to use there. This all needs tracking to ensure nothing is lost.

Years ago, I threw together a crude parts catalogue system. It was built on Django, django-mptt and PostgreSQL, and basically abused the admin part of Django to manage electronic parts storage.

I later re-purposed some of its code for an estate database for my late grandmother: I just wrote a front-end so that members of the family could be given login accounts, and “claim” certain items of the estate. In that sense, the concept was extremely powerful.

The overarching principle of how both these systems worked is that you had “items” stored within “locations”. Locations were in a tree-structure (hence django-mptt) where a location could contain further “locations”… e.g. a root-level location might be a bed room, within that might be a couple of wardrobes and draws, and there might be containers within those.

You could nest locations as deeply as you liked. In my parts database, I didn’t consider rooms, but I’d have labelled boxes like “IC Parts 1”, “IC Parts 2”, these were Plano StowAway 932 boxes… which work okay, although I’ve since discovered you don’t leave the inner boxes exposed to UV light: the plastic becomes brittle and falls apart.

The inner boxes themselves were labelled by their position within the outer box (row, column), and each “bin” inside the inner box was labelled by row and column.

IC tubes themselves were also labelled, so if I had several sitting in a box, I could identify them and their location. Some were small enough to fit inside these boxes, others were stored in large storage tubs (I have two).

If I wanted to know where I had put some LM311 comparators, I might look up the database and it’d tell me that there were 3 of them in IC Box 1/Row 2/Row 3/Column 5. If luck was on my side, I’d go to that box, pull out the inner box, open it up and find what I was looking for plugged into some anti-static foam or stashed in a small IC tube.

The parts themselves were fairly basic, just a description, a link to a data sheet, and some other particulars. I’d then have a separate table that recorded how many of each part was present, and in which location.

So from the locations perspective, it did everything I wanted, but parametric search was out of the question.

The place here looks like a tip now, so I really do need to get on top of what I have; so much so that I’m telling people: no more projects until I get on top of what I have now.

Other solutions exist. OpenERP had a warehouse inventory module, and I suspect Odoo continues this, but it’s a bit of a beast to try and figure out and it seems customisation has been significantly curtailed from the OpenERP days.

PartKeepr (if you can tolerate deliberate bad spelling) is another option. It seems to have very good parametric search of parts, but one downside is that it has a flat view of locations. There’s a proposal to enhance this, but it’s been languishing for 4 years now.

VRT used to have a semi-active track-and-trace business built on a tracking software package called P-Trak. P-Trak had some nice ideas (including a surprisingly modern message-passing back-end, even if it was a proprietary one), but is overkill for my needs, and it’s a pain to try and deploy, even if I was licensed to do so.

That doesn’t mean though I can’t borrow some ideas from it. It integrated barcode scanners as part of the user interface, something these open-source part inventory packages seem to overlook. I don’t have a dedicated barcode scanner, but I do have a phone with a camera, and a webcam on my netbook. Libraries exist to do this from a web browser, such as this one for QR codes.

My big problem right now is the need to do a stock-take to see what I’ve still got, and what I’ve added since then, along with where it has gone. I’ve got a lot of “random boxes” now which are unlabelled, and just have random items thrown in due to lack-of-time. It’s likely those items won’t remain there either. I need some frictionless way to record where things are getting put. It doesn’t matter exactly where something gets put, just so long as I record that information for use later. If something is going to move to a new location, I want to be able to record that with as little fuss as possible.

So the thinking is this:

  • Print labels for all my storage locations with UUIDs stored as barcodes
  • Enter those storage locations into a database using the UUIDs allocated
  • Expand (or re-write) my parts catalogue database to handle these UUIDs (a rough model sketch follows this list):
    • adding new locations (e.g. when a consignment comes in)
    • recording movement of containers between parent locations
    • sub-dividing locations (e.g. recording the content of a consignment)
    • (partial and complete) merging locations (e.g. picking parts from stock into a project-specific container)
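As a rough sketch of what that could look like, reusing the django-mptt approach from the old parts catalogue (the model and field names here are my own guesses, not a finished design):

import uuid

from django.db import models
from mptt.models import MPTTModel, TreeForeignKey

class Location(MPTTModel):
    # The UUID printed as a QR code on the label stuck to the box/tube/shelf.
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    description = models.CharField(max_length=200, blank=True)
    # Where this location currently lives; moving a box just re-points this.
    parent = TreeForeignKey('self', null=True, blank=True,
                            related_name='children', on_delete=models.CASCADE)

class Movement(models.Model):
    # Audit trail: records each time a location gets scanned into a new parent.
    location = models.ForeignKey(Location, related_name='movements',
                                 on_delete=models.CASCADE)
    new_parent = models.ForeignKey(Location, null=True, blank=True,
                                   related_name='+', on_delete=models.SET_NULL)
    timestamp = models.DateTimeField(auto_now_add=True)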

The first step on this journey is to catalogue the storage containers I have now. Some are already entered into the old system, so I’ve grabbed a snapshot of that and can pick through it. Others are new boxes that have arrived since, and had additional things thrown in.

I looked at ways I could label the boxes. Previously that was a spirit pen hand-writing a label, but this does not scale. If I’m to do things efficiently, then a barcode seems the logical way to go since it uses what I already have.

Something new comes in? Put a barcode on the box, scan it, enter it into the system as a new location, then mark where that box is being stored by scanning the location barcode where I’ll put the box. Later, I’ll grab the box, open it up, and I might repeat the process with any IC tubes or packets of parts inside, marking them as being present inside that box.

Need something? Look up where it is, then “check it out” into my work area… now, ideally when I’m finished, it should go back there, but if I’m in a hurry, I just throw it in a box, somewhere, then record that I put it there. Next time I need it, I can look up where it is. Logical order isn’t needed up front, and can come later.

So, step 1 is to label all the locations. Since I’m doing this before the database is fully worked out, and I want to avoid ID clashes, I’m using UUIDs to label all the locations. Initially I thought of QR codes, but then realised some of the “locations” are DIP IC storage tubes, which do not permit large square labels. I did some experiments with Code-128, but found it was near impossible to reliably encode a UUID that way: my phone had difficulty recognising the entire barcode.

I returned to the idea of QR codes, and found that my phone will scan a 10mm×10mm QR code printed on a page. That’s about the right height for the side of an IC tube. We had some inkjet labels kicking around, small 38.1×21.2mm labels arranged in a 5×13 grid (Avery J8651/L7651 layout). Could I make a script that generated a page full of QR codes?

Turns out, pylabels will do this. It is built on reportlab which amongst other things, embeds a barcode generator that supports various symbologies including QR codes. @hugohadfield had contributed a pull request which demonstrated using this tool with QR codes. I just had to tweak this for my needs.

# This file is part of pylabels, a Python library to create PDFs for printing
# labels.
# Copyright (C) 2012, 2013, 2014 Blair Bonnett
#
# pylabels is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# pylabels is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# pylabels.  If not, see <http://www.gnu.org/licenses/>.

import uuid

import labels
from reportlab.graphics.barcode import qr
from reportlab.lib.units import mm

# Create an A4 portrait (210mm x 297mm) sheets with 5 columns and 13 rows of
# labels. Each label is 38.1mm x 21.2mm with a 2mm rounded corner. The margins
# are automatically calculated.
specs = labels.Specification(210, 297, 5, 13, 38.1, 21.2, corner_radius=2,
        left_margin=6.7, right_margin=3, top_margin=10.7, bottom_margin=10.7)

def draw_label(label, width, height, obj):
    size = 12 * mm
    label.add(qr.QrCodeWidget(
            str(uuid.uuid4()),
            barHeight=height, barWidth=size, barBorder=2))

# Create the sheet.
sheet = labels.Sheet(specs, draw_label, border=True)

sheet.add_labels(range(1, 66))

# Save the file and we are done.
sheet.save('basic.pdf')
print("{0:d} label(s) output on {1:d} page(s).".format(sheet.label_count, sheet.page_count))

The alignment is slightly off, but not severely. I’ll fine tune it later. I’m already through about 30 of those labels. It’s enough to get me started.

For the larger J8165 2×4 sheets, the following specs work. (I can see this being a database table!)

# Specifications for Avery J8165 2×4 99.1×67.7mm
specs = labels.Specification(210, 297, 2, 4, 99.1, 67.7, corner_radius=3,
        left_margin=5.5, right_margin=4.5, top_margin=13.5, bottom_margin=12.5)

Later when I get the database ready (standing up a new VM to host the database and writing the code) I can enter this information in and get back on top of my inventory once again.

Dec 28 2020
 

So, a while back I tore apart an old Logitech wireless headset with the intention of using its bits to make a wireless USB audio interface. I was undecided whether the headset circuitry would “live” in a new headset, or whether it’d be a separate unit to which I could attach any headset.

I ended up doing the latter. I found through Mouser a suitable enclosure for the original circuitry and have fitted it with cable glands and sockets for the charger input (which now sports a standard barrel jack) and a DIN-5 connector for the earpiece/microphone connections.

The first thing to do was to get rid of that proprietary power connector. The two outer contacts are the +6V and 0V pins, shown here in orange and white/orange coloured cable respectively. I used a blob of heat-melt glue to secure it so I didn’t rip pads off.

Replacing the power connector. +6V is orange, 0V is orange/white.

The socket is “illuminated” by a LED on the PCB. Maybe I’ll look at some sort of light-pipe arrangement to bring that outside, we’ll see.

The other end, just got wired to a plain barrel jack. Future improvement might be to put a 6V DC-DC converter, allowing me to plug in any old 12V source, but for now, this’ll do. I just have to remember to watch what lead I grab. Whilst I was there, I also put in a cable gland for the audio interface connection.

Power socket and audio connections mounted in case.

One challenge with the board design is that there is not one antenna, but two, plus some rather lumpy tantalum capacitors near the second antenna. I suspect the two antennas are for handling polarisation, which will shift as you move your head and as the signal propagates. Either way, they meant the PCB wouldn’t sit “flat”. No problem, I had some old cardboard boxes which provided the solution:

PCB spacer, with cut-out for high-clearance parts.

The cardboard is a good option since it’s readily available and won’t attenuate the 2.4GHz signal much. It was also easy to work with.

I haven’t exposed the three push-buttons on that side of the PCB at this stage. I suppose drilling a hole and making a small “poker” to hit the buttons isn’t out of the question. This isn’t much different to what Logitech’s original case did. I’ll tackle that later. I need a similar solution for the slide-switch used for power.

One issue I faced was wrangling the now over-length FFC that linked the two sides. Previously, this spanned the headband, but now it only needed to reach a few centimetres at most. Eyeballing the original cable, I found this short replacement. I’ll have to figure out how to mount that floating PCB somehow, but at least it’s a clean solution.

Replacement FFC.

At this point, it was a case of finish wiring everything up. I haven’t tried any audio as yet, that will come in time. It still powers up, sees the transceiver, so there’s still “life” in this.

Powering up post-surgery.

I plugged it into its charger and let it run for a while just to top the LiPo cell.

Charging for the first time since mounting.

One thing I’m not happy with is the angle the battery is sitting at, since it’s just a bit wider than the space between the mounting posts. I might try shaving some material off the posts to see if I can get the battery to sit “flat”. I only need about 1mm, which should still allow enough clearance for the screwdriver and screw to pass the cell safely.

The polarity of the speakers is a guess on my part. Neither end seemed to be grounded, hopefully the drivers don’t mind being “common-ed”, otherwise I might need to cram some small isolation transformers in there.

Dec 13 2020
 

So, in the last 12 months or so, I’ve grown my music collection in a big way. Basically over the Christmas – New Year break, I was stuck at home, coughing and spluttering due to the bushfire smoke in the area (and yes, I realise it was nowhere near as bad in Brisbane as it was in other parts of the country).

I spent a lot of time listening to the radio, and one of the local radio stations was doing a “25 years in 25 days” feature, covering many iconic tracks from the latter part of last decade. Now, I’ve always been a big music listener. Admittedly, I’m very much a music luddite, with the vast majority of my music spanning 1965~1995… with some spill over as far back as 1955 and going as forward as 2005 (maybe slightly further).

Trouble is, I’m not overly familiar with the names, and the moment I walk into a music shop, I’m like the hungry patron walking into a food court: I want to eat something, but what? My mind goes blank as my mind is bombarded with all kinds of possibilities.

So when this count-down appeared on the radio, naturally, I found myself looking up the play list, and I came away with a long “shopping list” of songs I’d look for. Since then, a decent amount has been obtained as CDs from the likes of Amazon and Sanity… however, for some songs, I found it was easiest to obtain them as a digital download in FLAC format.

Now, for me, my music is a long-term investment. An investment that transcends changes in media formats. I do agree with ensuring that the “creators” of these works are suitably compensated for their efforts, but I do not agree with paying for the same thing multiple times.

A few people have had to perform in a studio (or on stage), someone’s had to collect the recordings, mix them, work with the creators to assemble those into an album, work with other creative people to come up with cover art, marketing… all that costs money, and I’m happy to contribute to that. The rest is simply an act of duplication: and yes, that has a cost, but it’s minimal and highly automated compared to the process of creating the initial work in the first place.

To me, the physical media represents one “license”, to perform that work, in private, on one device. Even if I make a few million copies myself, so long as I only play one of those copies at a time, I am keeping in the spirit of that license.

Thus, I work on the principle of keeping an “archival” copy, from which I can derive working copies that get day-to-day playback. The day-to-day copy will be in some lossy format for convenience.

A decade ago that was MP3, but due to licensing issues, that became awkward, so I switched over to Ogg/Vorbis, which also reduced the storage requirements by 40% whilst not having much audible impact on the sound quality (if anything, it improved). Since I also had to ditch the illegally downloaded MP3s in the process, that also had a “cleaning” effect: I insisted from then on that I have a “license” for each song, whether that be wax cylinder, tape reel, 8-track, cassette tape, vinyl record, CD, whatever.

This year saw the first time I returned to music downloads, but this time, downloading legally purchased FLAC files. This leads to an interesting problem, how do you store these files in a manner that will last?

Audio archiving and CDs

I’m far from the first person with this problem, and the problem isn’t specific to audio. The archiving business is big money, and sometimes it does go wrong, whether it be old media being re-purposed (e.g. old tapes of “The Goon Show” being re-recorded with other material by the BBC), destruction (e.g. the Universal Studios fire), or just old fashioned media degradation.

The procedure for film-based media (whether it be optical film, or magnetic media) usually involves temperature and humidity control, along with periodic inspection. Time-consuming, expensive, error prone.

CDs are reasonably resilient, particularly proper audio CDs made to the Red Book audio disc standard. In the CD-DA standard, uncompressed PCM audio is Reed Solomon encoded to achieve forward error correction of the PCM data. Thus, if a minor surface defect develops on the media, there is hopefully enough data intact to recover the audio samples and play on as if nothing had happened.

The fact that one can take a disc purchased decades ago, and still play it, is testament to this design feature.

I’m not sure what features exist in DVDs along the same lines. While there is the “video object” container format, the purpose of this seems to be more about copyright protection than about resiliency of the content.

Much of the above applies to pressed media. Recordable media (CD-Rs) sadly isn’t as resilient. In particular, the quality of blanks varies, with some able to withstand years of abuse, and others degrading after 18 months. Notably, the dye fades, and so you start to experience data loss beginning with the edge of the disc.

This works great for stuff I’ve purchased on CDs. Vinyl records if looked after, will also age well, although it’d be nice to have a CD back-up in case my record player packs it in. However, this presents a problem for my digital downloads.

At the moment, my strategy is to download the files to a directory, save a copy of the email receipt with them, place my GPG public key along-side, take SHA-256 hashes of all of the files, then digitally sign the hashes. I then place a copy on an old 1TB HDD, and burn a copy to CD-R or DVD-R. This will get me by for the next few years, but I’ve been “burned” by recordable media failing, and HDDs are not infallible either.
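The hashing and signing part of that is easily scripted; a rough sketch (the directory layout is whatever the download ends up as, and the gpg call is the usual detached-signature invocation):

import hashlib
import pathlib
import subprocess

def sha256_file(path, bufsize=1 << 20):
    """Hash a file in chunks so large FLAC files don't need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(directory="."):
    """Write a SHA256SUMS manifest covering every file under the directory."""
    root = pathlib.Path(directory)
    lines = ["%s  %s" % (sha256_file(p), p.relative_to(root))
             for p in sorted(root.rglob("*"))
             if p.is_file() and p.name != "SHA256SUMS"]
    (root / "SHA256SUMS").write_text("\n".join(lines) + "\n")

write_manifest()
# Detached, ASCII-armoured signature over the manifest:
subprocess.run(["gpg", "--armor", "--detach-sign", "SHA256SUMS"], check=True)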

Getting discs pressed only makes sense when you need thousands of copies. I just need one or two. So I need some media that will last the distance, but can be produced in small quantities at home from readily available blanks.

Archiving formats

So, there are a few options out there for archival storage. Let’s consider a few:

Magnetic tape

Professional outfits seem to work on tape storage. Magnetic media, with all the overheads that implies. The newest drive in the house is a DDS-2 DAT drive, the media for which has not been produced in years, so that’s a lame duck. LTO is the new kid on the block, and LTO-6 drives are pricey!

Magneto-Optical

MO drives are another option from the past… we do have a 5¼” SCSI MO drive sitting in the cupboard, which takes 2GB cartridges, but where do you get the media from? Moreover, what do I do when this unit croaks (if it hasn’t already)?

Flash

Flash media sounds tempting, but then one must remember how flash works. It’s a capacitor on the gate of a MOSFET, storing a charge. The dielectric material around this capacitor has a finite resistance, which will cause “leakage” of the charge, meaning over time, your data “rots” much like it does on magnetic media. No one is quite sure what the retention truly is. NOR flash is better for endurance than NAND, but if it’s a recent device with more than about 32MB of storage, it’ll likely be NAND.

PROM

I did consider whether PROMs could be used for this, the idea being you’d work out what you wanted to store, burn a PROM with the data as ISO9660, then package it up with a small MCU that presents it as CD-ROM. The concept could work since it worked great for game consoles from the 80s. In practice they don’t make PROMs big enough. Best I can do is about 1 floppy’s worth: maybe 8 seconds of audio.

Hard drives

HDDs are an option, and for now that’s half my present interim solution. I have a 1TB drive formatted UDF which I store my downloads on. The drive is one of the old object storage drives from the server cluster after I upgraded to 2TB drives. So not a long-term solution. I am presently also recovering data from an old 500GB drive (PATA!), and observing what age does to these disks when they’re not exercised frequently. In short, I can’t rely on this alone.

CDs, DVDs and Bluray

So, we’re back to optical media. All three of these are available as blank record-able media, and even Bluray drives can read CDs. (Unlike LTO: where an LTO-$X drive might be backward compatible with LTO-$(X-2) but no further.)

There are blanks out there that are designed for archival use, notably the M-Disc DVD media, which are allegedly capable of lasting 1000 years.

I don’t plan to wait that long to see if their claims stack up.

All of these formats use the same file systems normally, either ISO-9660 or UDF. Neither of these file systems offer any kind of forward error correction of data, so if the dye fades, or the disc gets scratched, you can potentially lose data.

Right now, my other mechanism, is to use CDs and DVDs, burned with the same material I put on the aforementioned 1TB HDD. The optical media is formatted ISO-9660 with Joliet and Rock-Ridge extensions. It works for now, but I know from hard experience that CD-Rs and DVD-Rs aren’t forever. Question is, can they be improved?

File system thoughts

Obviously genuinely better quality media will help in this archiving endeavour, but the thought is can I improve the odds? Can I sacrifice some storage capacity to achieve data resilience?

Audio CDs, as I mentioned, use Reed-Solomon encoding. Specifically, Cross-Interleaved Reed-Solomon encoding. ISO-9660 is a file system that supports extensions on the base standard.

I mentioned two before, Rock-Ridge and Joliet. On top of Rock-Ridge, there’s also zisofs, which adds transparent decompression to a Rock-Ridge file system. What if, I could make a copy of each file’s blocks that were RS-encoded, and placed them around the disc surface so that if the original file was unreadable, we could turn to the forward-error corrected copy?

There is some precedent in such a proposal. In Usenet, the “parchive” format was popularised as a way of adding FEC to files distributed on Usenet. That at least has the concept of what I’m wishing to achieve.
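To get a feel for the mechanics, here’s a toy Reed-Solomon round-trip using the reedsolo Python library purely as an illustration; block sizes, interleaving and where the parity data would actually live on the disc are all undecided:

from reedsolo import RSCodec

rsc = RSCodec(32)              # 32 parity bytes per codeword: corrects up to 16 bad bytes
block = bytes(range(223))      # pretend this is one block of a file
protected = rsc.encode(block)  # original data followed by parity symbols

# Simulate a faded/scratched region by corrupting some bytes, then recover.
damaged = bytearray(protected)
for i in range(10):
    damaged[i] ^= 0xff

# decode() in current reedsolo versions returns (message, message+ecc, errata positions).
recovered = rsc.decode(damaged)[0]
assert bytes(recovered) == block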

The other area of research is how can I make the ISO-9660 filesystem metadata more resilient. No good the files surviving if the filesystem metadata that records where they are is dead.

Video DVDs are often dual UDF/ISO-9660 file systems, the so-called “UDF Bridge” format. Thus, it must be possible for a foreign file system to live amongst the blocks of an ISO-9660 file system. Conceptually, if we could take a copy of the ISO-9660 filesystem metadata, FEC-encode those blocks, and map them around the disc, we can make the file system resilient too.

FEC algorithms are another consideration. RS is a tempting prospect, not least because it’s already proven in the cross-interleaved scheme that audio CDs use.

zfec, as used in Tahoe-LAFS, is another option, as are Golay codes and many others. They’ll need to be assessed on their merits.

Anyway, there are some ideas… I’ll ponder further details in another post.

Dec 012020
 

The last few years have been a testing time for world politics. Recent events have seen much sabre-rattling, but really, none of this has suddenly “appeared”… it’s been slowly bubbling away for some time now.

Economic tunnel-vision

For a long time now, much of our world has revolved around the unit of currency. Call it the US dollar, the Australian dollar, the British Pound, Chinese Yuan, whatever… for the past 50 years or so, we have been “seduced” by two concepts which developed in the latter part of last century:

  • economies of scale
  • just-in-time production

The concepts are, on the surface, fairly simple.

Just-in-time production forgoes having a large stock and inventory of components to feed your supply-lines in favour of ordering just enough of what you need to fulfil the orders you have active at the present moment. So long as nothing disrupts your supply lines, all is rosy. You might keep a small inventory just as a buffer, but in general, that might only last a day or so.

Economies of scale was the other concept that really took hold last century, and it was the reason smaller workshops got shut down in favour of making lots of a widget in one central place and shipping it out everywhere from that one point.

Again, it works great until something happens in the place where you do your manufacturing, or something happens that hampers your ability to shift parts or product around.

The latter in particular took a dark turn when instead of making things close to where the demand was, “we” instead outsourced it, shifting the production to places where the labour was cheapest. As a consequence, many countries are forced to import as they no longer have the expertise or capabilities to manufacture products locally.

Both of these concepts were conceived by people wearing rose-coloured glasses: they emphasise cost-cutting over contingency planning, on the grounds that disruptions to manufacturing and supply are unlikely events.

The rise of “the world’s factory”

Over time, companies pushed this concept of centralised manufacturing to extremes, to the point where they were largely making things in one place. Apple, for instance, were leaning heavily on Foxconn in China for the manufacture of their hardware.

None of this is without precedent: when I was growing up, Nike used to cop a lot of flak for the exploitation of workers in various third-world localities.

That said, history has often had something to say about putting all of one’s eggs in a single basket. There’s mostly nothing wrong with having products made in China, the problem is having things made exclusively in China.

At first, products made in China were seen as dodgy knock-offs of things made elsewhere. The same was said of things made in Japan in the 1950s and 1960s… but then Japan improved their systems and processes, and with them, the products they made. In China’s case, things were initially done “cheaply”, which only reinforced the perception that everything made there was “dodgy”.

Over time, processes again improved, and now there are some great examples of products and services, which are designed and built by people based in China. Stuff that works, and is reliable. There are some very smart people over there who are great at their craft.

That said, manufacturing all revolves around the dollar, and so when it came to cutting costs, something had to give.

Trouble in Xinjiang

With this global demand for manufacturing, China had a problem trying to find people to do the mundane jobs. Quality had to be maintained, and so some organisations over there tried to solve the cost problem a different way: cheaper labour.

Now, it’s well known that China’s government is not a government that particularly values individualism. This is evident in the manner in which the Tiananmen Square protests were so violently silenced.

The Uighur Muslim community is one such group that has been in their sights for a long time; it has been clamped down on for more than six years. Over time, a narrative was developed that tried to cast this group as “trouble makers” in need of “re-education”.

Over time, members of this community found themselves co-opted into being the cogs in this “global” factory. At first, such actions were hidden from view, including from the direct customers of these factories.

COVID-19 makes its entrance

So, over time, global manufacturing has shifted to China, in some cases involving forced labour in the effort to drive the cost down and make the end product seem more competitive.

Many of these problems have been hidden from the outside world, and even now, whilst we’re starting to learn of these issues, we still do the majority of our manufacturing in one country.

Then, about this time last year, a bizarre respiratory condition started showing up in Wuhan. Nobody knew much about it, other than that it had been found to be highly contagious.

Even today, we’re still unsure exactly how it came about, but the smart money is that it jumped from a reservoir host such as a bat, via some intermediate host, to humans. Bats in particular are major carriers of all kinds of coronaviruses, and as such are a highly probable suspect.

I do not believe it is synthetic in origin.

COVID-19 threw a major spanner in the works for everybody. Community event calendars looked like an utter train-wreck with cancellations and deferrals all over the place. For me, some of the casualties I was looking forward to include the 2020 Yarraman to Wulkuraka bike ride and numerous endurance horse-riding events (where I assist in operations).

It also threw a major spanner in the works for just-in-time manufacturing, with freight running inefficiently due to a lack of flights, and rolling shut-downs across China as COVID-19 did its worst.

Some businesses have already closed for good.

Knee-jerk reactions

Numerous countries, notably ours, called for an investigation into the origins and initial handling of the COVID-19 pandemic.

I, for one, think such an investigation should go ahead. We owe it to the people who have lost their lives to this condition, and to those who have lost their livelihoods, to try and find out what went wrong. It’s not about blaming people.

We’re not interested in who made the mistakes, it’s more a question of what the mistakes were. This event will repeat itself again, and again, until such time as we get to understand what “we” (globally) did wrong.

China’s government does not seem to have seen it this way; it’s as if they see it as a witch-hunt. As a result, we as a nation seem to have been singled out, with heavy tariffs placed on goods we export to China.

Notably absent from this trade war is iron ore, partially because the other major producer of iron ore, Brazil, has been left a complete basket-case by this pandemic, and partially because Australia was a major supplier long before COVID-19 reared its ugly head.

A plan “B”

Right now, things are escalating in this diplomatic row. Whilst the politicians are trying to resolve this with as little fuss as possible, I think China’s position is becoming very clear. They’ve told the world “F You” in no uncertain terms.

We are most definitely dealing with a rebellious and violent teenager, more than capable of smashing holes in a few walls and inflicting grievous bodily harm.

I think it would be wonderful if things could be reset back to the way they were, but at the same time, I think that really, we may need to realise that “peak China” days may be behind us now.

I know there are organisations that have built their entire business model around exports to China, and that, literally overnight, conditions have changed in a way that now puts their business viability at great risk.

They are geared around the huge appetite that this country’s people have previously demonstrated for our goods and services. I think now, more than ever, we should be looking around. Where else can I outsource to? Where else can I sell to? How can we make do with less demand?

If China does come around, then sure, maybe a certain portion of your market can be serviced there. I think it folly, though, to be reliant on one single region for your supply or demand.

Two or three alternatives may not totally balance things, but having at least a partial income is better than none at all!

The Australian coat-of-arms features the emu and the kangaroo. These animals are quite different from one another, but they share a few common attributes. Yes, some might say they’re two of the less brainy members of the animal kingdom, but also, they are not known for going “backwards”.

Whilst we momentarily look over our shoulder at our past, I think it important that we keep moving “forwards”.

Learning from our mistakes

I think in all of this, it’s fair to say none of us are perfect. Yes, our SAS troops have been implicated in some truly horrendous war crimes. Not all of them, thankfully, but enough to cast a cloud over the military in general. Some of the Army’s chopper pilots are not exactly famous for fast reporting of fires either.

We’re investigating this, and yes, some of the top brass are ducking for cover, as it’s likely some know more than they’ve been letting on. An analysis of what went wrong will be done, and we, collectively, will learn from those mistakes.

In the case of COVID-19, for the first few months of 2020, we were told “No, we don’t need help, we’re fine, we’ve got this!”. Taiwan saw this and immediately sprang into action, as did many other nations close to China. They’ve seen similar things happen before (SARS, MERS), and so maybe their scepticism shielded them somewhat.

I think one of the biggest lessons of all is to realise that asking for help is not a sign of weakness, it’s a sign of maturity. We’re on this planet, together. We are in this mess, together. We need to work this all out, together.

What am I doing?

So, based on the above… where do I sit? Not on the fence.

I myself have started seriously considering my suppliers.

In particular, I have practically destroyed my credentials for AliExpress, having bought the last few things I’m likely to want from there. I’ve ordered printed circuit boards from a supplier in Hong Kong.

Last year, I had ordered a few PCBs from their sister factory in mainland China, as I was concerned about the civil unrest in Hong Kong (and on that, I do think the people there have a valid point to raise) causing delays, but I had originally intended to move things back once matters settled down. However, with China being so adamant that Hong Kong is “theirs”, I’m forced to treat Hong Kong the same as mainland China.

As such, I’ll probably be looking to the US, Europe or India to evaluate options there. I might still use the old Hong Kong supplier, but they won’t be the sole supplier.

Where possible, I’ll probably be paying more attention to country-of-origin for products I buy from now on, and preferring local options where possible. This won’t always be the case, and some things will have to be imported from China, but I aim to diversify my sources.

I may start making things myself. Yes, time-consuming, expensive, but ultimately, this means I become the master of my own destiny, it’s likely a worthwhile journey to undertake.

Above all, I am not out to discriminate against the people of China. I may not always agree with some of their customs, but that does not give one the right to indulge in racism. My only real complaint with China at this time, is the conduct of its government.

Maybe with time, diplomatic relations might turn this around and we may see a more co-operative Chinese government; only time will tell on that.

In the meantime, I plan to not reward their government for what I consider, bad behaviour.

Nov 042020
 

So, today I was doing some work for my workplace, when a critical 4G modem-router dropped offline, cutting me off. Figuring it was a temporary issue, I thought I’d resume this headset rebuild.

The replacement battery had turned up a few weeks back, a beefy 1000mAh 3.7V LiPo pack intended for Raspberry Pi PiJuice battery supplies. This pack was chosen because it has a similar arrangement to the original Logitech battery: single cell with a 10k thermistor. As a bonus, the pack appears to be at least triple the original cell’s capacity. I just had to wire it up.

The new battery being charged by the headset circuitry

The connectors are not compatible, being both physically different sizes and having the thermistor and 0V pins swapped. Red and black follow the same convention on both packs, that is, red for +3.7V and black for 0V… the thermistor connects to the blue wire on the old pack and the yellow wire on the new one.

I’m in two minds whether to embed the electronics directly into the earmuffs, or to just wire the microphone/speaker connections to a DIN-5 connector to make it compatible with my other headsets.

For now at least, the battery is charging, not getting hot, and the circuitry detects the base transceiver when it is plugged into a laptop USB port, so the circuitry is still “functional”.

Sep 052020
 

So, this is not really news… for the past 12 months or so, the scammers have been busy. They were calling us long before we moved to the NBN, and of course we’ve just hung up the moment they started their spiel. The dead giveaway is the few seconds of silence at the start of the call. Dead silence.

Of course, it’s not just the NBN: we’ve had “Amazon Prime”, “Visa”, “Telstra” and others call, but far and above all the others have been the NBN-related scams.

The latest on the NBN front is they claim your connection has been “compromised” by “other users”, in a British accent.

This is the call I received this morning. You can hear other callers in the background. This is not a professional call centre; this is a back-yard operation!

The home number recently moved from the PSTN to a VoIP service, which actually gives me a lot of scope for dealing with this. For now, it’s a manual process: when they call, put them on hold. And if I put someone on hold on this number, you’d better be a Deborah Harry fan!

Long term, I’ll probably look at seeing if I can sample the first 2 seconds of call audio, and if silent, direct the call to a voicemail service or IVR menu. In the meantime, it’s a manual process.

Thankfully we get caller ID now, something Telstra used to charge for.

MoH considerations

There are three big considerations with music on hold:

  1. Licensing: You need to do the research into how music is licensed in your country. If you want to be safe, go look for something that is “public domain” or one of the “Creative Commons” family of licenses. In Australia, you probably want to have a look at this page if you want to use a piece of commercial music (like “Hangin’ On The Telephone”).
  2. Appropriateness: is the caller likely to get offended by your choice of hold music? (Then again, maybe that’s your goal?)
  3. Suitability for your chosen audio CODEC: some audio CODECs, particularly the lower-bitrate ones, do an unsurprisingly terrible job with music.

Regarding point (3): always test your music choice! Try different CODEC settings, and ensure it sounds “good” with ALL of them. Asterisk supports transcoding, but it will choose the format that takes the least effort. RIFF Wave files (.wav) can be used too, but they must be mono.
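
If you want to see what Asterisk considers “least effort”, recent versions will dump the translation cost table (and the file formats it knows about) from the CLI:

asterisk -rx "core show translation"    # codec translation cost table
asterisk -rx "core show file formats"   # file formats Asterisk can load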

I slapped a CD-quality 44.1kHz stereo version in there and wondered why it got ignored. That’s why: it wasn’t mono, and Asterisk won’t down-mix it for you.
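
If you’re starting from a CD rip like I was, sox will happily do the down-mix and the resample in one go; file names here are just examples:

# 44.1kHz stereo WAV down to 8kHz mono, which Asterisk will actually use
sox moh-cd.wav -r 8000 -c 1 moh.wav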

Signed 16-bit linear is a pretty safe bet: the effort of converting from that to PCMA/PCMU (G.711 A-law/µ-law) isn’t a big deal, but for anything else you’re at the mercy of the CODEC implementation. Using G.722, things sounded fine, but I found that even with the Speex settings cranked right up (quality=10 complexity=10 enhancement=true), my selection of audio sounded terrible in ultra-wideband Speex mode. I wound up with the following in my MoH directory:

vk4msl-gap# ls -l /usr/local/share/moh/
total 8280
-rw-r--r--  1 root  wheel   527836 Aug 29 17:02 moh.sln
-rw-r--r--  1 root  wheel  1055670 Aug 29 17:02 moh.sln16
-rw-r--r--  1 root  wheel  2111342 Aug 29 17:01 moh.sln32
-rw-r--r--  1 root  wheel   104793 Sep  5 12:17 moh.spx
-rw-r--r--  1 root  wheel   177879 Sep  5 12:34 moh.spx16
-rw-r--r--  1 root  wheel   184617 Sep  5 12:16 moh.spx32
  • .sln* is for 16-bit signed linear; the 16 and 32 suffixes refer to the sample rate, i.e. 16kHz (wideband) and 32kHz (ultra-wideband), while plain .sln is 8kHz. These should otherwise be “raw” files (no headers). Use sox <input> -r <rate> -b 16 -e signed-integer -c 1 <output>.sln to convert.
  • .spx* is Speex: Here again, I’ve got 8kHz, 16kHz and 32kHz versions. These were encoded using the following command: speexenc --quality 10 --comp 10 moh.wav moh.spx

There are various other CODEC selections, but right now I’ve just focussed on signed linear and Speex, since the latter is the one that needs careful attention. I tested between my laptop running Twinkle and the ATA on my network; when I placed the call on hold from my laptop, it sounded fine, so I figure it’ll be “good enough”.


“Visa Security Department”

So, had “Visa” call me this morning… this too, is another scam. Anonymous caller. Bear in mind I do not actually have a credit card. Never have had one, never will.

“Visa security department”

They didn’t stick around; it seems their system just drops the call if it hears a noise which isn’t a DTMF tone.

Interestingly, both this call and the previous one were G.711u (µ-law PCM). Australia normally uses A-law PCM; America uses µ-law. What’s the difference? Both are logarithmic encoding schemes; µ-law has a slightly wider dynamic range, while A-law has less distortion on quieter signals.


“Amazon”

“Amazon”

Almost the same structure as before. Audio CODEC was G.729 this time.