Aug 05 2020

Some people make fun of my plain-text emails, but really, I think it’s time we re-consider our desire for colours, hyperlinks and inline images in email messages, especially for those who use web-based email clients as their primary email interface.

The problem basically boils down to this: HTML gives too much opportunity for mischief by a malicious party. In most cases, HTML isn’t even necessary to convey the information required. Tables are about the only real “feature” that is hard to replicate in plain text; for everything else there are reasonable de-facto standards already in existence.

Misleading hyperlinks

HTML has a feature where a link to a remote page can take on any descriptive text the author desires, including images and other valid URIs. For example, the following piece of HTML code is perfectly valid:

<a href="http://www.google.com.malicious.website.example.com/">
   http://www.google.com
</a>

There are many cases where this feature is “useful”, however in an email it can be used to disguise phishing attempts. In the above example, the link claims to be to Google’s search website, but would actually re-direct the user to some other, likely malicious, website.
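
A mail client could, in principle, flag such mismatches itself. Here’s a rough sketch of that check in Python (a toy, not how any real client does it): it compares a link’s label against the actual target’s hostname.

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCheck(HTMLParser):
    # Flag <a> elements whose label looks like a URI but points at a
    # different host to the real target.
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.href = dict(attrs).get('href', '')
            self.label = ''

    def handle_data(self, data):
        if hasattr(self, 'href'):
            self.label += data

    def handle_endtag(self, tag):
        if tag != 'a':
            return
        label = self.label.strip()
        if label.startswith('http') and \
                urlparse(label).netloc != urlparse(self.href).netloc:
            print('Suspicious link: %r claims to be %r' % (self.href, label))
        del self.href

LinkCheck().feed('<a href="http://www.google.com.malicious.website.example.com/">'
                 'http://www.google.com</a>')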

Granted, not every user can read a URI to determine if it is safe. There are adults who “grew up with the Internet” who have never typed a URI into an address bar, instead relying on tools like search engines to locate websites of interest.

However, it would seem disingenuous to say that because a proportion of the community cannot read a URI, we should hide all links from everybody. For that small portion, showing the links won’t make a difference; for everyone else, it at least makes such traps easier to avoid.

Media exploits

Media decoders are written by humans, and humans are imperfect, thus it is fair to say there are media decoders that contain bugs, some of which could be disastrous for computer security.

Microsoft had such a problem in their GDI+ JPEG decoder back in 2004. More recently, there was a kernel-level security vulnerability in their TrueType font parser.

Modern HTML allows embedding of all this, and more. Most email clients will also allow you to “preview” an email without opening it. If an email embeds inline media which exploits vulnerabilities such as the ones above, just previewing it can be sufficient for an attacker to gain access.

Details are scarce, but it would appear it was a vulnerability along these lines that allowed unauthorised access into the Australian National University’s network back in 2018.

Scripting

Modern web standards allow all kinds of means for embedding scripts, that is, small pieces of interpreted code which run client-side in the HTML renderer. ECMAScript (JavaScript) can be embedded:

  • in <script> tags (the traditional way)
  • inside a hyperlink using a javascript: URI
  • HTC and XBL features in Internet Explorer and Mozilla Firefox, respectively.

There are probably lots more ways I haven’t thought of.

Web-based email clients

Now, a stand-alone email client such as Microsoft Outlook, Eudora or Mozilla Thunderbird can simply not implement the scripting features; however, the problem is particularly acute where web-based email clients are used.

Here, you’re viewing an email in an HTML engine that has complete media and scripting capabilities. There are dozens of ways to embed both forms of content into a blob of HTML, and you are entirely at the mercy of your web-based email client’s ability to sanitise the HTML before it dumps it inside the DOM tree that represents your email client.

As far as the web browser is concerned, the “email” is just another web page, and will not hesitate to execute embedded scripts or render inline media, whether the user wishes it to or not.

It’s not known what ANU uses for their email infrastructure, but many universities are big fans of web-based email since it means they don’t have to explain to end users how to configure their email clients, and provides portability for their users.

Putting users at risk

Despite the above, it would appear there are lots of organisations that are completely oblivious to this problem, and insist on forcing people to render their emails as HTML, putting their customers and users at risk of a security breach.

The purpose of multiple formats in the same email is to provide alternate formats of the same content. Not to provide totally different emails because you can’t be stuffed!
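
Getting this right is not hard, either. As an illustration, here’s how Python’s standard email library builds a multipart/alternative message where both parts carry the same content (the addresses and wording here are made up):

from email.message import EmailMessage

msg = EmailMessage()
msg['Subject'] = 'Your statement'
msg['From'] = 'support@hostingprovider.example.com'
msg['To'] = 'client@example.com'

# The text/plain part carries the actual content...
msg.set_content('Hello Client,\n\nYour statement for July is ready.\n')

# ...and the text/html part is an *alternative* rendering of the same thing.
msg.add_alternative('<p>Hello Client,</p><p>Your statement for July is ready.</p>',
                    subtype='html')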

For example, my workplace’s hosting provider recently sent us an email which, when viewed as plain text, read as follows:

Hello Client,
 
Unfortunately your email client is outdated and does not support HTML emails, our system uses HTML emails as standard. You will NOT be able to read this email.
 
HOW DO I READ THIS EMAIL?
 
To read this email please login to your domain manager https://hostingprovider.example.com/login/ and click on Notifications to see a list of all sent emails.
 
Thank You
 
Customer Support

The suggestion that an email client configured to read emails as plain text is somehow “outdated” is naïve in the extreme, and I’d expect a hosting provider to know better. I’m thankful I personally don’t purchase services from them!

Then there’s financial service providers. One share registry’s handling of the situation is downright abusive:

Link Market Services sent numerous emails that looked exactly like this.

Yeah, rather than just omitting the text/plain component and letting the email client at this end try to render the HTML as plain text (which works reasonably well in many cases), they just sent an empty text/plain body:

From: …redacted… <comms@linkmarketservices.com.au>
To: …redacted…
Reply-To: donotreply@linkmarketservices.com.au
Date: Wed, 29 Jul 2020 04:42:30 +0000
Subject: …redacted… Funds Attribution Managed Investment Trust Member Annual
 Statement
Content-Type: multipart/alternative;
 boundary=--boundary_55327_8fbc43bd-48f3-4aa1-9ab7-9046df02b853
ZMID: 9f485235-9848-49cf-9a66-62c215ea86ba-1
Message-ID: <0100017398e10334-d45a0a83-ff4d-4b83-8d19-146b100017f6-000000@us-east-1.amazonses.com>
X-SES-Outgoing: 2020.07.29-54.240.9.110
Feedback-ID: 1.us-east-1.oB/l4dCmGdzC38jjMLirCbscajeK9vK6xBjWQPPJrkA=:AmazonSES


----boundary_55327_8fbc43bd-48f3-4aa1-9ab7-9046df02b853
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"


----boundary_55327_8fbc43bd-48f3-4aa1-9ab7-9046df02b853
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Content-Type: text/html; charset="utf-8"

PCFET0NUWVBFIGh0bWw+Cgo8aHRtbCBsYW5nPSJlbiI+CjxoZWFkPgo8bWV0YSBjaGFyc2V0PSJ1
dGYtOCI+CjxtZXRhIGNvbnRlbnQ9IndpZHRoPWRldmljZS13aWR0aCwgaW5pdGlhbC1zY2FsZT0x
IiBuYW1lPSJ2aWV3cG9ydCI+CjxtZXRhIGNvbnRlbnQ9IklFPWVkZ2UiIGh0dHAtZXF1aXY9Ilgt

Ohh yeah, I’m so fluent in BASE64! I’ve since told them to send it via snail mail. Ensuring they don’t burn down forests posting blank sheets of A4 paper will be their problem.

The alternative: wiki-style mark-up

So, our biggest problem is that HTML does “too much”, so much that it becomes a liability from a security perspective. HTML wasn’t the first attempt at rich text in email… some older email clients used enriched text.

Before enriched text and HTML, we made do with various formatting marks, for instance, *bold text might be surrounded by asterisks*, and /italics indicated with forwardslashes/. _Underlining_ could be done too.

No there was no colour or font size options, but then again this was from a time when teletype terminals were not uncommon. The terminals of that time only understood one fixed-size font, and most did not do colour.

More recently, Wikis have built on this to allow for some basic mark-up features, whilst still allowing the plain text to be human readable. Modern takes of this include reStructured Text and Markdown, the latter being the native format for Wikis on Github.

Both these formats allow for embedding images inline and for tables (Markdown itself doesn’t do tables, but extensions of it do).

In email clients such images should be replaced with place-holders until the user clicks a button to reveal the images (which they should only click if they trust the sender).

Likewise, hyperlinks should be rendered in full, e.g. in a web-based client, the link [a cool website](http://example.com/) might be rendered as <a href="http://example.com/">[a cool website](http://example.com/)</a> — thus allowing for malicious links to be more easily detected. (It also makes them printable.) Only plain text should be permitted as a “label” for a hyperlink.
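
A rough sketch of that rendering rule in Python (a hypothetical helper, not any existing client’s code):

import html
import re

# [label](url) with a plain-text label and an absolute http(s) URL
LINK_RE = re.compile(r'\[([^]]+)\]\((https?://[^)\s]+)\)')

def render_link(match):
    # Render the link with the real URL always visible alongside the label.
    label, url = match.groups()
    return '<a href="%s">[%s](%s)</a>' % (
        html.escape(url, quote=True), html.escape(label), html.escape(url))

print(LINK_RE.sub(render_link, 'See [a cool website](http://example.com/) for more.'))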

Use of such a mark-up format would have a number of benefits:

  • the format is minimal, meaning a much reduced attack surface for security vulnerabilities
  • whilst minimal, it would cover the vast majority of people’s use cases for HTML email
  • the mark-up is light-weight, reducing bandwidth for those on low-speed links or using lower-power devices

The downside might be for businesses, which rely on more advanced features in HTML to basically make an email look like their letter head. The business community might wish to consider the differences between a printed letter sent via the post, and an email sent electronically. They are different beasts, and trying to treat one as a substitute for the other will end in tears.

In any case, a simple letter head could be embedded as an inline image quite safely if such a feature was indeed required.

It is in our interests to curtail the features used in email communications if we intend to ensure communications remain safe and reliable.

Jul 22 2020

I noted at the last horse endurance ride we did up at Imbil that we had a packet of pancake mix whose “best-before” date had expired early last year. We were too busy that event to go making pancakes for breakfast, so the packet wound up coming back with us in the van.

I made a note of this, bought a new packet to replace the one in the van (turns out there’s a second one there too!) and brought the old packet home to use up. Thinking about this, I thought: why not experiment? What would it be like if I used coffee instead of water to make up the mix?

So, I poured in some coffee from the percolator up to the marking on the packaging, and whilst chatting on the Bayside District Amateur Radio Society morning “coffee net”, I cooked up a number of small pikelets.

No, I won’t win awards for my pancake flipping skills… but they taste alright.

Two things I’d do differently:

  1. use slightly stronger coffee (the mix I used is perhaps a little weak, so the coffee flavouring is only subtle even though the colour is notably darker)
  2. thicken the mixture up a bit so the pikelets are not as thin

In fact, not using >18 month old mixture, or making the batter from scratch, might help too!

Jun 01 2020

Brisbane Area WICEN Group (Inc) lately has been caught up in this whole COVID-19 situation, unable to meet face-to-face for business meetings. Like a lot of groups, we’ve had to turn to doing things online.

Initially, Cisco WebEx was trialled, however this had significant compatibility issues, most notably under Linux — it just straight plain didn’t work. Zoom, however, has proven fairly simple to operate and seems to work, so we’ve been using that for a number of “social” meetings and at least one business meeting so far.

A challenge we have though, is that one of our members does not have a computer or smart-phone. Mobile telephony is unreliable in his area (Kelvin Grove), and so ye olde PSTN is the most reliable service. For him to attend meetings, we need some way of patching that PSTN line into the meeting.

The first step is to get something you can patch to. In my case, it was a soft-phone and a SIP VoIP service. I used Twinkle to provide that link. You could also use others like baresip, Linphone or anything else of your choosing. This connects to your sound card at one end, and a Voice Service Provider at the other; in my case that’s my Asterisk server through Internode NodePhone.

The problem is though, while you can certainly make a call outbound whilst in a conference, the person on the phone won’t be able to hear the conference, nor will the conference attendees be able to hear the person on the phone.

Enter JACK

JACK is an audio routing framework for Unix-like operating systems that allows audio to be routed between applications. It is geared towards multimedia production and professional audio, but since there’s a plug-in in the ALSA framework, it is very handy for linking audio between applications that would otherwise be incompatible.

For this to work, one application has to talk either directly to JACK, or via the ALSA plug-in. Many applications instead support, and will use, an alternate framework called PulseAudio. Conference applications like Zoom and Jitsi almost universally rely on this as their sound card interface on Linux.

PulseAudio unfortunately is not able to route audio with the same flexibility, but it can route audio to JACK. In particular, JACK v2 and its jackdbus are the path of least resistance: once JACK starts, PulseAudio detects its presence, and loads a module that connects PulseAudio as a client of JACK.

A limitation with this is PulseAudio will pre-mix all audio streams it receives from its clients into one single monolithic (stereo) feed before presenting that to JACK. I haven’t figured out a work-around for this, but thankfully for this use case, it doesn’t matter. For our purposes, we have just one PulseAudio application: Zoom (or Jitsi), and so long as we keep it that way, things will work.

Software tools

  • jack2: The audio routing daemon.
  • qjackctl: This is a front-end for controlling JACK. It is optional, but if you’re not familiar with JACK, it’s the path of least resistance. It allows you to configure, start and stop JACK, and to control patch-bay configuration.
  • SIP Client, in my case, Twinkle.
  • ALSA JACK Plug-in, part of alsa-plugins.
  • PulseAudio JACK plug-in, part of PulseAudio.

Setting up the JACK ALSA plug-in

To expose JACK to ALSA applications, you’ll need to configure your ${HOME}/.asoundrc file. Now, if your SIP client happens to support JACK natively, you can skip this step, just set it up to talk to JACK and you’re set.

Otherwise, have a look at guides such as this one from the ArchLinux team.

I have the following in my .asoundrc:

pcm.!default {
        type plug
        slave { pcm "jack" }
}

pcm.jack {
        type jack
        playback_ports {
                0 system:playback_1
                1 system:playback_2
        }
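        # my microphone is mono, so both capture channels
        # map to the one capture port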
        capture_ports {
                0 system:capture_1
                1 system:capture_1
        }
}

The first part sets my default ALSA device to jack, then the second block defines what jack is. You could possibly skip the first block, in which case your SIP client will need to be told to use jack (or maybe plug:jack) as the ALSA audio device for input/output.

Configuring qjackctl

At this point, to test this we need a JACK audio server running, so start qjackctl. You’ll see a window like this:

qjackctl in operation

This shows it actually running, most likely for you this will not be the case. Over on the right you’ll see Setup… — click that, and you’ll get something like this:

Parameters screen

The first tab is the parameters screen. Here, you’ll want to direct this at your audio device that your speakers/microphone are connected to.

The sample rate may be limited by your audio device. In my experience, JACK hates devices that can’t do the same sample rate for input and output.

My audio device is a Logitech G930 wireless USB headset, and it definitely has this limitation: it can play audio right up to 48kHz, but will only do a meagre 16kHz on capture. JACK thus limits me to both directions running 16kHz. If your device can do 48kHz, that’d be better if you intend to use it for tasks other than audio conferencing. (If your device is also wireless, I’d be interested in knowing where you got it!)

JACK literature seems to recommend 3 periods/buffer for USB devices. The rest is a matter of experiment. 1024 samples/period seems to work fine on my hardware most of the time. Your mileage may vary. Good setups may get away with less, which will decrease latency (mine works out at 3 × 1024 ÷ 16kHz = 192ms… good enough for me).

The other tab has more settings:

Advanced settings

The things I’ve changed here are:

  • Force 16-bit: since my audio device cannot do anything but 16-bit linear PCM, I force 16-bit mode (rather than the default of 32-bit mode)
  • Channels I/O: output is stereo but input is mono, so I set one channel in, two channels out.

Once all is set, Apply then OK.

Now, on qjackctl itself, click the “Start” button. It should report that it has started. You don’t need to click any play buttons to make it work from here. You may notice that PulseAudio has detected the JACK server and will now connect to it. If you click “Graph”, you’ll see something like this:

qjackctl’s Graph window

This is the thing you’ll use in qjackctl the most. Here, you can see the “system” boxes represent your audio device, and “PulseAudio JACK Sink”/“PulseAudio JACK Source” represent everything that’s connected to PulseAudio.

You should be able to play sound in PulseAudio, and direct applications there to use the JACK virtual sound card. pavucontrol (normally shipped with PulseAudio) may be handy for moving things onto the JACK virtual device.

Configuring your telephony client

I’ll use Twinkle as the example here. In the preferences, look for a section called Audio. You should see this:

Twinkle audio settings

Here, I’ve set my ringing device to pulse so the ring goes via PulseAudio. This allows me to direct the audio to my laptop’s on-board sound card so I can hear the phone ring without the headset on.

Since jack was made my default device, I can leave the others as “Default Device”. Otherwise, you’d specify jack or plug:jack as the audio device. This should be set on both Speaker and Microphone settings.

Click OK once you’re done.

Configuring Zoom

I’ll use Zoom here, but the process is similar for Jitsi. In the settings, look for the Audio section.

Zoom audio settings

Set both Speaker and Microphone to JACK (sink and source respectively). Use the “Test Speaker” function to ensure it’s all working.

The patch up

Now, it doesn’t matter whether you call first, then join the meeting, or vice versa. You can even have the PSTN caller call you. The thing is, you want to establish a link to both your PSTN caller and your conference.

The assumption is that you now have a session active in both programs: you’re hearing both the PSTN caller and the conference in your headset, and when you speak, both groups hear you. To let them hear each other, do this:

Go to qjackctl’s patch bay (the Graph window). You’ll see PulseAudio is there, but you’ll also see the instance of the ALSA plug-in connected to JACK: that’s your telephony client. Both will be connected to the system boxes. You need to draw new links between those two new boxes and the PulseAudio boxes, like this:

qjackctl patching Twinkle to Zoom

Here, Zoom is represented by the PulseAudio boxes (since it is using PulseAudio to talk to JACK), and Twinkle is represented by the boxes named alsa-jack… (tip: the number is the PID of the ALSA application if you’re not sure).

Once you draw the connections, the parties should be able to hear each other. You’ll need to monitor this dialogue from time to time: if either PulseAudio or the phone client disconnects from JACK momentarily, the connections will need to be re-made. Twinkle will do this if you do a three-way conference, then one person hangs up.

Anyway, that’s the basics covered. There’s more that can be done, for example, recording the audio, or piping audio from something else (e.g. a media player) is just a case of directing it either at JACK directly or via the ALSA plug-in, and drawing connections where you need them.

May 26 2020

Lately, I’ve been socially distancing at home and so there have been a few projects considered that otherwise wouldn’t ordinarily get a look-in on account of lack of time.

One of these has been setting up a Raspberry Pi with a DRAWS board for use on the bicycle as a radio interface. The DRAWS interface is basically a sound card, RTC, GPS and UART interface for radio interfacing applications. It is built around the TI TLV320AIC3204.

Right now, I’m still waiting for the case to put it in, even though the PCB itself arrived months ago. Consequently it has not seen action on the bike yet. It has gotten some use though at home, primarily as an OpenThread border router for 3 WideSky hubs.

My original idea was to interface it to Mumble, a VoIP server for in-game chat. The idea being that, on events like the Yarraman to Wulkuraka bike ride, I’d fire up the phone, connect it to an AP run by the Raspberry Pi on the bike, and plug my headset into the phone: a 144/430MHz→2.4GHz cross-band arrangement.

That’s still on the cards, but another use case came up: digital modes. It’d be real nice to interface this over WiFi to a stronger machine: sound card sharing over the network. For this, Mumble would not do; I need a lossless audio transport.

Audio streaming options

For audio streaming, I know of 3 options:

  • PulseAudio network streaming
  • netjack
  • trx

PulseAudio I’ve found can be hit-and-miss on the Raspberry Pi, and IMO, is asking for trouble with digital modes. PulseAudio works fine for audio (speech, music, etc.), but it makes assumptions about the nature of that audio. The problem is we’re not dealing with “audio” as such, we’re dealing with modem tones. Human ears cannot detect phase easily; data modems can and regularly do. So PA is likely to do things like re-sample the audio to synchronise the two stations, possibly use lossy codecs like OPUS or CELT, and make other changes which will mess with the signal in unpredictable ways.

netjack is another possibility, but like PulseAudio, it is geared towards low-latency audio streaming. From what I’ve read, later versions use OPUS, which is a no-no for digital modes. Within a workstation, JACK sounds like a close fit: although it is geared to audio, its use in professional audio means it’s less likely to make decisions that would incur loss. It is a finicky beast to get working at times though, so it’s a question mark there.

trx was a third option. It uses RTP to stream audio over a network, and aims to do just that one thing. Digging into the code, present versions use OPUS; older versions used CELT. The use of RTP seemed promising though: it actually uses oRTP from the Linphone project, and last weekend I had a fiddle to see if I could swap out OPUS for linear PCM. oRTP is not that well documented, and I came away frustrated, wondering why the receiver was ignoring the messages being sent by the sender.

It’s worth noting that trx probably isn’t a good example of a streaming application using oRTP. It advertises the stream as G.711u, but then sends OPUS data. What it should be doing is sending it as a dynamic payload type (e.g. 96); if this were a SIP session, there’d be an rtpmap attribute sent via the Session Description Protocol to say payload type 96 was OPUS.

I looked around for other RTP libraries to see if there was something “simpler” or better documented. I drew a blank. I then had a look at the RTP/RTCP specs themselves published by the IETF. I came to the conclusion that RTP was trying to solve a much more complicated use case than mine. My audio stream won’t traverse anything more sophisticated than a WiFi AP or an Ethernet switch. There’s potential for packet loss due to interference or weak signal propagation between WiFi nodes, but latency is likely to remain pretty consistent and out-of-order handling should be almost a non-issue.

Another gripe I had with RTP is its almost non-consideration of linear PCM. PCMA and PCMU exist, and 16-bit linear PCM at 44.1kHz sampling exists (woohoo, CD quality), but how about 48kHz? Nope, you have to use SDP for that.

Custom protocol ideas

With this in mind, my own custom protocol looks like the simplest path forward. Some simple systems, such as those used by GQRX, just encapsulate raw audio in UDP messages, fire them at some destination and hope for the best. Some people use TCP, with reasonable results.

My concern with TCP is that if packets get dropped, it’ll try re-sending them, increasing latency and never quite catching up. Using UDP side-steps this: if a packet is lost, it is forgotten about, so things will break up, then recover. Probably a better strategy for what I’m after.

I also want some flexibility in audio streams: it’d be nice to be able to switch sample rates, bit depths, channels, etc. RTP gets close with its L16/44100/2 format (the Philips Red Book standard audio format). In some cases 16kHz would be fine, or even 8kHz 16-bit linear PCM; 44.1k works, but is wasteful. So a header is needed on packets to at least describe what format is being sent. Since we’re adding a header, we might as well set aside a few bytes for a timestamp, like RTP, so we can maintain synchronisation.

So with that, we wind up with these fields:

  • Timestamp
  • Sample rate
  • Number of channels
  • Sample format

Timestamp

The timestamp field in RTP is basically measured in ticks of some clock of known frequency, e.g. for PCMU it is an 8kHz clock. It starts with some value, then increments up monotonically. Simple enough concept. If we make this frequency the sample rate of the audio stream, I think that will be good enough.

At 48kHz 16-bit stereo, data will be streaming at 192kB/s. We can tolerate wrap-around, but counting at that byte rate, a 16-bit counter would overflow every ~341ms (65536 ÷ 192000), which whilst not unworkable, is getting tight. Better to use a 32-bit counter for this, which would extend that overflow to over 6 hours.
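
Tolerating wrap-around just means doing comparisons with modular arithmetic. A minimal sketch in Python (hypothetical helper, assuming a 32-bit counter):

MASK = 0xffffffff   # 32-bit counter

def ts_after(a, b):
    # True if timestamp a is "later" than b, even if the counter wrapped
    # between the two (serial-number arithmetic, a la RFC 1982).
    return 0 < ((a - b) & MASK) < (MASK // 2)

# A timestamp just past a wrap still compares as "later":
assert ts_after(0x00000005, 0xfffffff0)
assert not ts_after(0xfffffff0, 0x00000005)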

Sample rate encoding

We can either support an integer field, or we can encode the rate somehow. An integer field would need a range up to 768k to support every rate ALSA supports; that’s another 32-bit integer. Or, we can be a bit clever: nearly every sample rate in common use is a multiple of 8kHz or 11.025kHz, so we can devise a scheme consisting of a “base” rate and a multiplier. 48kHz? That’s 8kHz×6. 44.1kHz? That’s 11.025kHz×4.

If we restrict ourselves to those two base rates, we can support standard rates from 8kHz through to 1.4MHz by allocating a single bit to select 8kHz/11.025kHz and 7 bits for the multiplier: the selected sample rate is the base rate multiplied by the multiplier plus one. We’re unlikely to use every single 8kHz step though. Wikipedia lists some common rates, and as we go up, the steps get bigger, so let’s borrow 3 of the multiplier bits for a left-shift amount.

7 6 5 4 3 2 1 0
B S S S M M M M

B = Base rate: (0) 8000 Hz, (1) 11025 Hz
S = Shift amount
M = Multiplier - 1

Rate = (Base << S) * (M + 1)

Examples:
  00000000b (0x00): 8kHz
  00010000b (0x10): 16kHz
  10100000b (0xa0): 44.1kHz
  00010010b (0x12): 48kHz
  01010010b (0x52): 768kHz (ALSA limit)
  11111111b (0xff): 22.5792MHz (yes, insane)
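
As a sanity check of the scheme, here’s a quick encoder/decoder pair in Python:

def decode_rate(b):
    # Rate byte is B SSS MMMM: rate = (base << shift) * (multiplier + 1)
    base = 11025 if (b & 0x80) else 8000
    shift = (b >> 4) & 0x07
    mult = (b & 0x0f) + 1
    return (base << shift) * mult

def encode_rate(rate):
    # Brute-force search; note some rates have more than one encoding,
    # this simply returns the first found.
    for b in range(256):
        if decode_rate(b) == rate:
            return b
    raise ValueError('rate %d not representable' % rate)

assert decode_rate(0x00) == 8000
assert decode_rate(0x10) == 16000
assert decode_rate(0xa0) == 44100
assert decode_rate(0x12) == 48000
assert decode_rate(0xff) == 22579200   # yes, insane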

Other settings

I primarily want to consider linear PCM types. Technically that includes unsigned PCM, but since that’s losslessly transcodable to signed PCM, we could ignore it. So we could just encode the size of a single channel sample as a power-of-two byte count: 0 would be 8-bits, 1 would be 16-bits, 2 would be 32-bits and 3 would be 64-bits. That needs just two bits. For future-proofing, I’d probably earmark two extra bits: reserved for now, but they might later indicate “compressed” (and possibly lossy) formats.

The remaining 4 bits could specify the number of channels, again minus one (mono would be 0, stereo 1, and so on up to 16 channels).

Packet type

For the sake of alignment, I might include a 16-bit identifier field so the packet can be recognised as being this custom audio format, and to allow multiplexing of in-band control messages, but I think the concept is there.
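
Putting the fields together, a sketch of packing such a header in Python (the magic number, field order and format-byte layout here are placeholders, not a finalised wire format):

import struct

MAGIC = 0x6aa5   # placeholder identifier for this custom audio format

def pack_header(timestamp, rate_byte, sample_log2bytes, channels):
    # 16-bit magic, rate byte, format byte (2 bits sample size, 2 bits
    # reserved, 4 bits channels minus 1), 32-bit timestamp: 8 bytes total.
    fmt_byte = ((sample_log2bytes & 0x03) << 6) | ((channels - 1) & 0x0f)
    return struct.pack('>HBBI', MAGIC, rate_byte, fmt_byte, timestamp)

# e.g. 48kHz (rate byte 0x12), 16-bit samples, stereo:
header = pack_header(timestamp=0, rate_byte=0x12, sample_log2bytes=1, channels=2)
assert len(header) == 8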

May 22 2020

For the past 2 years now, there’s been quite a bit in the press about the next evolution of mobile telephony standards.

The 5G standard is supposed to bring with it higher speeds and greater user density handling. As with a lot of systems, “5G” itself describes a family of standards: some concern the use of millimetre-wave communications for tower-to-handset communications, some cover the communications channels for more modest frequencies in the high UHF bands.

One thing that I really can’t get my head around is the so-called claims of health effects.

Now, these are as old as radio communications itself. And for sure, the danger from radio transmissions does increase with frequency, proximity and transmit power. There is a reason why radio transmitter sites such as those that broadcast medium wave radio or television are fenced off: electrocution is a real risk at high power.

0G: glorified two-way radios

Mobile phones originally were little more than up-market cordless phones. They were often a luggable device, if they were portable at all. Many were not; they were installed into a vehicle (hence “mobile”). There was no such thing as cell hand-over, and often incoming calls had to be manually switched.

Often the sets were half-duplex, and despite using a hand-set, had a very distinctive “radio” feel to them, requiring the user to use a call-sign when initiating a call, and to press a push-to-talk button to switch between listening and talking modes.

These did not see much deployment outside the US or maybe Europe.

1G: cellular communications

Back in the late 80s, when AMPS mobile phones (1G) were little more than executive toys, there might not have been much press about it, but I’m sure there’d be anecdotal evidence of people being concerned about “radiation”.

If any standard was going to cause problems, it’d have been 1G, since the sets generally used much higher transmit power to compensate for the lack of coverage. They were little more than glorified FM transceivers with a little digital control channel on the side which implemented the selective calling and cell hand-off.

This was the first standard we saw here in Australia, and was the first to be actually practical. Analogue services didn’t last that long, and because of the expense of running AMPS services, they were mostly an expensive luxury. So that did limit its up-take.

2G: voice goes digital

The next big change was 2G, which replaced the analogue FM voice channel with digital modulation techniques. GSM (which used Gaussian Minimum Shift Keying) and CDMA (which used phase shift keying) encoded everything in a single digital transmission.

This meant audio could be compressed (with some loss in fidelity), and have forward error correction added to make the signal more robust to noise. The cells could handle more users than the 1G services could. Transmit power could be reduced, improving battery life and the sets became cheaper to make and services became more economical.

Then came all the claims that 2G was going to cause us to develop brain cancer.

Now, many of those 2G services started popping up in the mid 90s… has there been a mass pandemic of cancer cases? Nope! About the only thing GSM was bad for was its ability to leak into any audio frequency circuit.

2G went through a few sub-revisions, but it basically was AMPS done digitally, so fundamentally worked much the same. A sore point was how data was handled. 2G and its predecessors all tried to emulate what the wired network was doing: establishing a dedicated circuit between callers.

The Internet was really starting to get popular, and people wanted a way to access it on the move. GPRS did allow for some of that, but it really didn’t work that well due to the way 2G saw the world, so things moved on.

3G: packet switching

The big change here was the move from “circuits” to sending data around in packets. This is more like how the Internet operates, and so it meant the services could better support an Internet connection.

Voice still went the old-fashioned way, dedicated circuits, since the QoS (quality of service) could be better maintained that way.

The cells could support more users than 2G could, and the packet mode meant mobile Internet finally became a “thing” for most people.

I don’t recall there being the same concern about health as there was for 2G… it was probably still simmering below the surface. Services were deployed further afield and of course, the uptake continued.

4G: bye bye circuit switching

4G or LTE is the current standard that most of us are using. The biggest change is it ditches the circuit switching used in 1G, 2G and 3G. Voice is done using VoLTE… basically the voice call is sent the same way calls are routed over the Internet.

The cell towers are no longer trying to keep a “circuit” connected to your phone as you move around, instead it’s just directing packets. It’s your handset’s problem to sort out whether it heard a given packet already, or re-arrange incoming packets if they arrive out-of-order.

To make this work, obviously the latency inherent in 3G had to be addressed. As a sweetener, the speeds were bumped up, and the voice CODEC could be updated, so we gained wide-band voice calls. (Pity Bluetooth hasn’t kept up!)

5G: new frequencies, higher speed, smaller cells

So far, the cellular standards have largely co-existed in the same frequency bands. 4G actually varies quite a bit in frequency, but basically there are bands from the low UHF around 410MHz right up to microwave at 2600MHz.

Higher frequencies

5G has been contentious because some implementations of it reach even higher. Frequency Range 1 used in the 5G NR standard is basically much the same as 4G, but frequency range 2 soars as high as 40GHz.

Now, in terms of the electromagnetic spectrum, compared to other forms of radiation that we rely on for survival (and have done ever since life first began on this planet), this might as well be DC!

Infrared radiation, which is the very bottom of the “light” spectrum, starts at 300GHz. At these frequencies, we typically forget about frequencies and instead consider wavelengths (1mm in this case). Visible light is even higher: 430THz (yes, that’s T for tera!).

Now, where do we start to worry about radiation? The nasty stuff begins with ultraviolet radiation, specifically UVC, which sits at a dizzying 1.1PHz (yes, that’s peta-hertz). It’s worth noting that UVB, which is a little lower in frequency, can cause problems when exposure is excessive… however getting none at all is dangerous too: you actually need some UVB exposure on your body to produce vitamin D for survival!

Dielectric heating

So that’s where the danger is in terms of frequency. I did mention that danger also increases with power… this is why microwave ovens, which typically operate at a fairly modest 2.4GHz frequency, pose a risk.

No, they won’t make you develop cancer, but the danger there is that where there’s a lot of power, it can cause dielectric heating. That is, it causes molecules to move around, and in doing so, collide, transferring energy which is then given off as heat. It happens at all frequencies in the EM spectrum, but it starts to become more practical at microwave frequencies.

To do something like cook dinner, a microwave oven bombards your food with hundreds of watts of RF energy. The microwave has a thick RF shield around it for a reason! If that shield is doing what it should, you might be exposed to no more than a watt of energy escaping the shield. Not enough to cause any significant heating.

I hear that if you put a 4W power amp on a 2.4GHz WiFi access point and put your hand in front of the antenna, you can “feel” framing packets. (Never tried this myself.) That’s pretty high power for most microwave links, and would be many orders of magnitude more than what any cell phone would be capable of.

Verdict: not a health risk

In my view, there’s practically no risk in terms of health effects from 5G. I expect my reasoning above will be thoroughly rubbished by those who are protesting against the roll-out.

However, that does not mean I am in favour of 5G.

The case against 5G

So I seem to be sticking up for 5G above, but let me make one thing abundantly clear, for us here in Australia, I do not think 5G is the “right” thing for us to use. It’s perfectly safe in terms of health effects, but simply the wrong tool for the job.

Small cells

Did I mention before that the cells were smaller? Compared to its predecessors, 5G cells are tiny! The whole point of 5G was to serve a large number of users in a small area. Think of tens of thousands of people crammed into a single stadium (okay, once COVID-19 is put to bed). That’s the use case for 5G.

5G’s range when deployed on the lower bands is about on par with 4G, maybe a little better in certain ideal conditions, with higher speeds. This is the variant we’re most likely to see outside of major city CBDs. How reliable it is at that higher speed remains to be seen, as there’s a crazy amount of DSP going on to make stuff work at those data rates.

5G when deployed with mmWave bands, barely makes 500 metres. This will make deployment in the suburbs prohibitively expensive. Outdoor Wi-Fi or WiMAX might not be as fast, but would be more cost-effective!

Processor load

Did I mention the crazy amount of DSP going on? To process data streams that exceed 1Gbps, you’re doing a lot of processing to extract the data out of the radio signal. 5G leans heavily on MIMO for its higher speeds, basically dividing the high-rate stream into parts which are directed to separate antennas. This reduces the bandwidth needed to achieve a high data rate, but it does make processing the signal at the far end more complex.

Consequently, the current crop of 5G handsets run hot. How hot? Well, subject them to 29.5°C and they shut down! Now, think about the weather we get in this country: how many days have we experienced lately where 29°C has been a daily minimum, not a maximum?

5G isn’t the future for Australia

We need a wireless standard that goes the distance, and can take the heat! 5G is not looking so great in this marathon race. Personally, I’d like to see more investment into 4G services and getting those rolled out to more locations. There are plenty of locations less than a day’s drive from most capital cities where mobile coverage is next to useless.

Plenty of modern 4GX handsets also suffer technical elitism… they see 3G services, but then refuse to talk to them, instead dropping to -1G: brick emulation. There’s a reason I stick by my rather ancient ZTE T83, and why I had high hopes for the Kite.

I think for the most part, many of the wireless standards we see have been driven by Europe and Asia, both areas with high population densities and relatively cool annual temperatures.

It saddens me when I hear Telstra tell everybody that they “aspire” to be a technology company, when back in the early 90s, Telecom Australia very much was a technology company, and a well respected trail-blazing one at that! It’s time they pulled their finger out and returned to those days.

May 12 2020

So, the other day I pondered about whether BlueTrace could be ported to an older device, or somehow re-implemented so it would be compatible with older phones.

The Australian Government has released their version of TraceTogether, COVIDSafe, which is available for newer devices on the Google and Apple application repositories. It suffers a number of technical issues, one glaring one being that even on devices it theoretically supports, it doesn’t work properly unless you have it running in the foreground with your phone unlocked!

Well, there’s a fail right there! Lots of people actually need to be able to lock their phones (e.g. as a condition of their employment, to prevent pocket dials, to save battery life, etc.).

My phone will never run COVIDSafe as provided. Even compiling it for Android 4.1 won’t be enough: it uses Bluetooth Low Energy, which is a Bluetooth 4.0 feature. However, the government did one thing right: they have published the source code. A quick fish-eye over the diff against TraceTogether suggests the changes are largely superficial.

Interestingly, although the original code is GPLv3, our government has decided to supply their own license. I’m not sure how legal that is. Others have questioned this too.

So, maybe I can run it after all? All I need is a device that can do BLE, and that then “phones home” somehow to retrieve tokens or upload data. Newer phones (almost anything Android-based) can usually run a WiFi hotspot, which would work fine with an ESP32.

Older phones don’t have WiFi at all, but many can still provide an Internet connection over a Bluetooth link, likely via the LAN Access Profile. I think this would mean my “token” would need to negotiate HTTPS itself. Not fun on an MCU, but I suspect someone has possibly done it already on the ESP32.

Nordic platforms are another option if we go the pure Bluetooth route. I have two nRF52840-DK boards kicking around here, bought for OpenThread development, but not yet in use. A nicety is these do have a holder for a CR2032 cell, so can operate battery-powered.

Either way, I think it important that the chosen platform be:

  1. easily available through usual channels
  2. cheap
  3. hackable, so the devices can be re-purposed after this COVID-19 nonsense blows over

A first step might be to see if COVIDSafe can be cleaved in two… with the BLE part running on an ESP32 or nRF52840, and the HTTPS part running on my Android phone. Also useful would be some sort of staging server so I can test my code without exposing things. Not sure if there is such a beast publicly available that we can all make use of.

Guess that’ll be the next bit to look at.

May 03 2020

This afternoon, I was pondering how I might do text-to-speech, but still have the result sound somewhat natural. For what use case? Well, two come to mind…

The first is doing “strapper call” announcements at horse endurance rides. A horse endurance ride is where competitors and their horses traverse a long trail (sometimes as long as 320km) through a wilderness area. Usually these rides (particularly the long ones) are broken up into separate stages or “legs”.

Upon arrival back at base, the competitor has a limited amount of time to get the horse’s vital signs into acceptable ranges before they must present to the vet. If the horse’s temperature or heart rate is too high, they are “vetted out”.

When the competitor reaches the final check-point, ideally you want to let that competitor’s support team know they’re on their way back to base so they can be there to meet the competitor and begin their work with the horse.

Historically, this was done over a PA system, however this isn’t always possible for the people at base to achieve. So having an automated mechanism to do this would be great. In recent times, Brisbane WICEN has been developing a public display that people can see real-time results on, and this also doubles as a strapper-call display.

Getting the information to that display is something of a work-in-progress, but it’s recognised that if you miss the message popping up on the display, there’s no repeat. A better solution would be to “read out” the message. Then you don’t have to be watching the screen, you can go about your business. This could be done over a PA system, or at one location there’s an extensive WiFi network there, so streaming via Icecast is possible.

But how do you get the text into speech?

Enter flite

flite is a minimalist speech synthesizer from the Festival project. Out of the box it includes 3 voices, mostly male American voices. (I think the rms one might be Richard M. Stallman, but I could be wrong on that!) There are a couple of demos there that can be run direct from the command line.

So, for the sake of argument, let’s try something simple, I’ll use the slt voice (a US female voice) and just get the program to read out what might otherwise be read out during a horse ride event:

$ flite_cmu_us_slt -t 'strapper call for the 160 kilometer event competitor numbers 123 and 234' slt-strapper-nopunctuation-digits.wav
slt-strapper-nopunctuation-digits.ogg

Not bad, but not that great either. Specifically, the speech is probably a little quick. The question is: how do you control this? Turns out there’s a bit of hidden functionality.

There is an option marked -ssml which tells flite to interpret the text as SSML. However, if you try it, you may find it does little to improve matters; I don’t think flite actually implements much of it.

Things are improved if we spell everything out. So if you instead replace the digits with words, you do get a better result:

$ flite_cmu_us_slt -t 'strapper call for the one hundred and sixty kilometer event competitor number one two three and two three four' slt-strapper-nopunctuation-words.wav
slt-strapper-nopunctuation-words.ogg

Definitely better. It could use some pauses. Now, we don’t have very fine-grained control over those pauses, but we can introduce some punctuation to gain some control nonetheless.

$ flite_cmu_us_slt -t 'strapper call.  for the one hundred and sixty kilometer event.  competitor number one two three and two three four' slt-strapper-punctuation.wav
slt-strapper-punctuation.ogg

Much better. Of course, it still sounds somewhat robotic. I’m not sure how to adjust the cadence on the whole, but presumably we can just feed the text in piece-wise, render the pieces to individual .wav files, then stitch them together with the pauses we want.

How about other changes though? If you look at flite --help, there are feature options which can control the synthesis. There’s no real documentation on what these do; what I’ve found so far was discovered by grepping through the flite source code. Tip: do a grep for feat_set_, and you’ll see a whole heap.

Controlling pitch

There are two parameters for the pitch: int_f0_target_mean controls the “centre” frequency of the speech in Hertz, and int_f0_target_stddev controls the deviation. For the slt voice, …mean seems to sit around 160Hz and the deviation is about 20Hz.

So we can say, set the frequency to 90Hz and get a lower tone:

$ flite_cmu_us_slt --setf int_f0_target_mean=90 -t 'strapper call' slt-strapper-mean-90.wav
slt-strapper-mean-90.ogg

… or 200Hz for a higher one:

$ flite_cmu_us_slt --setf int_f0_target_mean=200 -t 'strapper call' slt-strapper-mean-200.wav
slt-strapper-mean-200.ogg

… or we can change the variance:

$ flite_cmu_us_slt --setf int_f0_target_stddev=0.0 -t 'strapper call' slt-strapper-stddev-0.wav
$ flite_cmu_us_slt --setf int_f0_target_stddev=70.0 -t 'strapper call' slt-strapper-stddev-70.wav
slt-strapper-stddev-0.ogg
slt-strapper-stddev-70.ogg

We can’t change these values during a block of speech, but presumably we can cut up the text we want to render, render each piece at the frequency/variance we want, then stitch those together.

Controlling rate

So I mentioned we can control the rate, somewhat coarsely using usual punctuation devices. We can also change the rate overall by setting duration_stretch. This basically is a control of how “long” we want to stretch out the pronunciation of words.

$ flite_cmu_us_slt --setf duration_stretch=0.5 -t 'strapper call' slt-strapper-stretch-05.wav
$ flite_cmu_us_slt --setf duration_stretch=0.7 -t 'strapper call' slt-strapper-stretch-07.wav
$ flite_cmu_us_slt --setf duration_stretch=1.0 -t 'strapper call' slt-strapper-stretch-10.wav
$ flite_cmu_us_slt --setf duration_stretch=1.3 -t 'strapper call' slt-strapper-stretch-13.wav
$ flite_cmu_us_slt --setf duration_stretch=2.0 -t 'strapper call' slt-strapper-stretch-20.wav
slt-strapper-stretch-05.ogg
slt-strapper-stretch-07.ogg
slt-strapper-stretch-10.ogg
slt-strapper-stretch-13.ogg
slt-strapper-stretch-20.ogg

Putting it together

So it looks as if all the pieces are there; we just need to stitch them together.

RC=0 stuartl@rikishi /tmp $ flite_cmu_us_slt --setf duration_stretch=1.2 --setf int_f0_target_stddev=50.0 --setf int_f0_target_mean=180.0 -t 'strapper call' slt-strapper-call.wav
RC=0 stuartl@rikishi /tmp $ flite_cmu_us_slt --setf duration_stretch=1.1 --setf int_f0_target_stddev=30.0 --setf int_f0_target_mean=180.0 -t 'for the, one hundred, and sixty kilometer event' slt-160km-event.wav
RC=0 stuartl@rikishi /tmp $ flite_cmu_us_slt --setf duration_stretch=1.4 --setf int_f0_target_stddev=40.0 --setf int_f0_target_mean=180.0 -t 'competitors, one two three, and, two three four' slt-competitors.wav
Above files stitched together in Audacity

Here, I manually imported all three files into Audacity, arranged them, then exported the result, but there’s no reason why the same could not be achieved by a program; I’m just inserting pauses after all.

There are tools for manipulating RIFF waveform files in most languages, and generating silence is not rocket science. The voice itself could be fine-tuned, but that’s simply a matter of tweaking settings. Generating the text is basically a look-up table feeding into snprintf (or its equivalent in your programming language of choice).
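
For instance, a quick sketch in Python using the standard wave module (a hypothetical helper; it assumes all the inputs share the same sample rate and format, as they will if they all came from the same flite voice):

import wave

def stitch(output, inputs, pause=0.5):
    # Concatenate WAV files, inserting `pause` seconds of silence
    # (zero samples) between each piece.
    with wave.open(inputs[0], 'rb') as first:
        params = first.getparams()

    silence = b'\x00' * (int(pause * params.framerate)
                         * params.sampwidth * params.nchannels)

    with wave.open(output, 'wb') as out:
        out.setparams(params)
        for n, name in enumerate(inputs):
            if n:
                out.writeframes(silence)
            with wave.open(name, 'rb') as piece:
                out.writeframes(piece.readframes(piece.getnframes()))

stitch('announcement.wav', ['slt-strapper-call.wav',
                            'slt-160km-event.wav',
                            'slt-competitors.wav'])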

It’d be nice to implement a wrapper around flite that took the full SSML or JSML text and rendered it out as speech, but this gets pretty close without writing much code at all. Definitely worth continuing with.

Apr 19 2020

COVID-19 is a nasty condition caused by the SARS-CoV-2 virus, and it has seen many a person’s life cut short. The virus, which originated in Wuhan, China, has one particularly insidious trait: it can be spread by asymptomatic people. That is, you do not have to be suffering symptoms to be an infectious carrier of the condition.

As frustrating as isolation has been, it’s really our only viable solution for preventing this infectious condition from spreading like wildfire until we get a vaccine that will finally knock it on the head.

One solution that has been proposed is to use contact tracing applications which rely on Bluetooth messaging to detect when an infected person comes into contact with others. Singapore developed the TraceTogether application. The Australian Government looks like it might be adopting this application, our deputy CMO even suggesting it’d be made compulsory (before the PM poured cold water on that plan).

Now, the Android version of this requires Android 5.1. My phone runs 4.1: I cannot run this application. Not everybody is in the habit of using Bluetooth, or even carries a phone. This got me thinking: can this be implemented in a stand-alone device?

The guts of this application is a protocol called BlueTrace which is described in this whitepaper. Reference implementations exist for Android and iOS.

I’ll have to look at the nitty-gritty of it, but essentially it looks like a stand-alone implementation on an ESP32 module may be a doable proposition. The protocol basically works like this:

  • Clients register using some contact details (e.g. a telephone number) to a server, which then issues back a “user ID” (randomised).
  • The server then uses this to generate “temporary IDs” which are constructed by concatenating the “User ID” and token life-time start/finish timestamps together, encrypting that with the secret key, then appending the IV and an authentication token. This BLOB is then Base64-encoded.
  • The client pulls down batches of these temporary IDs (forward-dated) to use for when it has no Internet connection available.
  • Clients then exchange these temporary IDs using BLE messaging.

This looks doable in an ESP32 module. The ESP32 could be loaded up with tokens by a workstation. You then go about your daily business, carrying this device with you. When you get home, you plug the device into your workstation, and it uploads the “temporary IDs” it saw.
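
To get a feel for the server-side “temporary ID” generation step described above, here’s a rough sketch in Python. The choice of AES-GCM, the 32-bit timestamps and the exact field ordering are my assumptions; check the whitepaper before treating this as gospel.

import base64
import os
import struct

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def make_temp_id(key, user_id, start, expiry):
    # Temporary ID = Base64(ciphertext || IV || auth tag), where the
    # plaintext is UserID || start timestamp || expiry timestamp.
    plaintext = user_id + struct.pack('>II', start, expiry)
    iv = os.urandom(12)
    ct_and_tag = AESGCM(key).encrypt(iv, plaintext, None)
    ct, tag = ct_and_tag[:-16], ct_and_tag[-16:]
    return base64.b64encode(ct + iv + tag).decode('us-ascii')

temp_id = make_temp_id(AESGCM.generate_key(bit_length=256),
                       user_id=os.urandom(16),
                       start=1587254400, expiry=1587255300)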

I’ll have to dig out my ESP32 module, but this looks like a doable proposition.

Feb 08 2020

So, lately I’ve been helping out with running the base at a few horse rides up at Imbil. This involves, amongst other things, running three radios, a base computer, laptops, and other paraphernalia.

The whole kit needs to run off an unregulated 12V DC supply, consisting of two 105Ah AGM batteries which have solar and mains back-up. The outlet for this is a Anderson SB50 connector, fairly standard for caravans.

Catch being, this is temporary. So no permanent linkages; we need to be able to disconnect and pack everything away when not in use. One bug bear is having enough DC outlets for everything, especially of the 30A Anderson Power Pole variety, since most of our radios use those.

The monitor for the base computer uses a cigarette lighter adapter, while the base computer itself (an Intel NUC) has a cable terminated with a 30A power pole. There’s also a WiFi router which has a micro-USB power input — thankfully the monitor’s adaptor embeds a USB power outlet, so we can run it off that.

We need two amateur radios (one for voice comms, one for packet), and a CB set for communications with the ride organisers (who are otherwise not licensed to use amateur bands). We may also see a move to commercial frequencies, so that’s potentially another radio or two.

I started thinking about ways we could make a modular power distribution system.

The thought was, if we made PDU boxes where the inlet and outlet were nice big SB50s, configured so that they would mate when the boxes were joined up, we could have a flexible PDU system that just clips together like Lego bricks.

This is a work in progress, but I figured I’d post what I have so far.

Power outlets on the distribution box, yet to be wired up.

I still need to do the internal wiring, but above is basically what I was thinking of. There’s room for up to 6 consumers via the 30A power pole connections along one side, each with its own 20A breaker. (The connectors are rated at 45A.)

Originally I was aiming for 6 cigarette lighter sockets, but after receiving the parts, I realised that wouldn’t fit, but two seems to work okay, and we can always make a second box and slap that on the end. Each has a 15A breaker.

Protecting the upstream power source is a 50A breaker, so the total drawn by the downstream port plus all outlets on the box itself may not exceed 50A.

The upstream and downstream ports are positioned so that boxes can just be butted up against each-other for the connectors to mate. I’ve got to fine-tune the positioning a bit, and right now the connectors are also on an angle, but this hopefully shows the concept…

The idea for maintenance is that the box will fold out. I’m not sure yet whether the connection between all the outputs on the lid will be via a bus bar or individual cables going to a tie point inside the box. Those 30A outlets are just begging for a single cable to visit each, bus-bar style. I also have to figure out how I’ll connect to the cigarette lighter sockets.

Hopefully I’ll get this done before the next ride event.

Nov 24 2019

The past few months have been quiet for this project, largely because Brisbane WICEN has had my spare time soaked up with an RFID system they are developing for tracking horse rides through the Imbil State Forest for the Stirling’s Crossing Endurance Club.

Ultimately, when we have some decent successes, I’ll probably be reporting more on this on WICEN’s website. Suffice to say, it’s very much a work-in-progress, but it has proved a valuable testing ground for aioax25. The messaging system being used is basically just plain APRS messaging, with digipeating thrown in as well.

Since I’ve had a moment to breathe, I’ve started filling out the features in aioax25, starting with connected-mode operation. The thinking is this might be useful for sending larger payloads. APRS messages are limited to a 63 character message size with only a subset of ASCII being permitted.

Thankfully that subset includes all of the Base64 character set, so I’m able to do things like tunnel NTP packets and CBOR blobs through it, so that stations out in the field can pull down configuration settings and the current time.

As for the RFID EPCs, we’re sending those in the canonical hexadecimal format, which works, but the EPC occupies most of the payload size. At 1200 bits per second, this does slow things down quite a bit. We get a slight improvement if we encode the EPCs as Base64, and we could halve the payload size if we could send them as raw binary bytes instead. Sending a CBOR blob that way would be very efficient.
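
To put some numbers on that, a quick comparison using a made-up 96-bit EPC and the cbor2 library:

import cbor2   # pip install cbor2

epc = bytes.fromhex('30395dfa8145fb3c00000001')   # a made-up 96-bit EPC

print(len(epc.hex()))          # 24 bytes on air as hexadecimal text
print(len(cbor2.dumps(epc)))   # 13 bytes as a CBOR byte string (1-byte header + 12)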

The thinking is that the nodes find each other via APRS, then once they’ve discovered a path, they can switch to connected mode to send bulk transfers back to base.

Thus, I’ve been digging into connected mode operation. AX.25 2.2 is not the most well-written spec I’ve read. In fact, it is downright confusing in places. It mixes up little-endian and big-endian fields, certain bits have different meanings in different contexts, and it uses concepts which are “foreign” to someone like myself who’s used to TCP/IP.

Right now I’m making progress; there’s an untested implementation in the connected-mode branch. I’m writing unit test cases based on what I understand the behaviour to be, but somehow I think this is going to need trials with some actual AX.25 implementations such as Direwolf, the Linux kernel stack, the G8BPQ stack, and the implementations on my Kantronics KPC3 and my Kenwood TH-D72A.

Some things I’m trying to get an answer to:

  • In the address fields at the start of a frame, you have what I’ve been calling the ch bit.
    On digipeater addresses, it’s called H and it is used to indicate that a frame has been digipeated by that digipeater.
    When seen in the source or destination addresses, it is called C, and it describes whether the frame is a “command” frame, or a “response” frame.

    An AX.25 2.x “command” frame sets the destination address’s C bit to 1, and the source address’s C bit to 0, whilst a “response” frame in AX.25 does the opposite (destination C is 0, source C is 1).

    In prior AX.25 versions, they were set identically. The question is, which is which? Is a frame a “command” when both bits are set to 1, and a “response” when both C bits are 0? (Thankfully, I think my chances of meeting an AX.25 1.x station are very small!)
  • In the Control field, there’s a bit marked P/F (for Poll/Final), and I’ve called it pf in my code. Sometimes this field gets called “Poll”, sometimes it gets called “Final”. It’s not clear on what occasions it gets called “Poll” and when it is called “Final”. It isn’t as simple as assuming that pf=1 means poll and pf=0 means final. Which is which? Who knows?
  • AX.25 2.0 allowed up to 8 digipeaters, but AX.25 2.2 limits it to 2. AX.25 2.2 is supposed to be backward compatible, so what happens when it receives a frame from a digipeater that is more than 2 digipeater hops away? (I’m basically pretending the limitation doesn’t exist right now, so aioax25 will handle 8 digipeaters in AX.25 2.2 mode)
  • The table of PID values (figure 3.2 in the AX.25 2.2 spec) mentions several protocols, including “Link Quality Protocol”. What is that, and where is the spec for it?
  • Is there an “experimental” PID that can be used that says “this is a L3 protocol that is a work in progress” so I don’t blow up someone’s station with traffic they can’t understand? The spec says contact the ARRL, which I have done, we’ll see where that gets me.
  • What do APRS stations do with a PID they don’t recognise? (Hopefully ignore it!)

Right at this point, the Direwolf sources have proven quite handy. Already I am now aware of a potential gotcha with the AX.25 2.0 implementation on the Kantronics KPC3+ and the Kenwood TM-D710.

I suspect my hand-held (Kenwood TH-D72A) might do the same thing as the TM-D710, but given JVC-Kenwood have pulled out of the Australian market, I’m more likely to just say F### you Kenwood, and ignore the problem, since these can do KISS mode, bypassing the buggy AX.25 implementation on a potentially resource-constrained device.

NET/ROM is going to be a whole different ball-game, and yes, that’s on the road map. Long-term, I’d like 6LoWHAM stations to be able to co-exist peacefully with other stations. Much like you can connect to a NET/ROM node using traditional AX.25, then issue connect commands to jump from there to any AX.25 or NET/ROM station; I intend to offer the same “feature” on a 6LoWHAM station — you’ll be able to spin up a service that accepts AX.25 and NET/ROM connections, and allows you to hit any AX.25, NET/ROM or 6LoWHAM station.

I might park the project for a bit and get back onto the WICEN stuff, as what we have in aioax25 is doing okay, and there’s lots of work to be done on the base software that’ll keep me busy right up to when the horse rides re-start in 2020.