Jul 23 2018

Lately, I’ve been doing a lot of development work on Tridium Niagara kit.  The Tridium platform is fundamentally built on Sun^WOracle’s Java environment, and is very popular in the building management industry.  There’s an estimated 600,000-plus JACE devices (building management controllers) deployed worldwide, so I can fully understand why my workplace is chasing them.

That means coming to grips with their environment, and getting it to talk to ours.  Officially, VRT is a Debian/Ubuntu shop.  They used to dabble with Red Hat years ago, back when VRT and Red Hat were next-door neighbours (in Gardner Close, Milton) but VRT switched to Ubuntu around 2008 after a brief flirt with Gentoo.  Thus, most of our tooling assumes a Debian-based system.

Docker CE on Debian and Ubuntu is a snap.  However, Tridium, it would seem, are Red Hat fans, and only support their development environment on Microsoft Windows (yes, shudder) or Red Hat Enterprise Linux.  Thus, we have a RHEL 7.3 VM that we pass around when we’re doing this development.  I figured since we’re trying to link Niagara to WideSky, it would be nice to be able to deploy WideSky on RHEL.

WideSky uses Docker as the basis for its deployment, so this sounded simple enough.  Install Docker and docker-compose, throw a bog-standard deployment in there, docker-compose up -d, off we go.
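
In theory, that’s about three commands.  This is a sketch only: the package names assume suitable repositories (Docker’s CentOS repo and EPEL) are configured, and the compose file location here is made up.

$ sudo yum install -y docker-ce docker-compose
$ cd /opt/widesky      # wherever your docker-compose.yml lives
$ sudo docker-compose up -d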

Not so fast.

While there’s Docker EE for RHEL, budget is tight and we really don’t need the support, as this isn’t a “production” instance as such.  If the VM gets sick we just roll it back to a known good version and continue from there.  It doesn’t make sense to spend money on purchasing Docker EE.  There’s a CentOS version of Docker CE, and even unofficial instructions on how to shoehorn this into RHEL.  I dutifully followed these, but then hit a road-block with container-selinux: the repository no longer carries the version that Docker CE depends on.

Rather than looking for whatever version they have now, or playing Russian Roulette hunting for a random RPM from some mirror site (been there, done that many moons ago, before I knew better)… a better plan was to grab the sources and sic rpmbuild onto them so we get a RHEL-native binary.

Building container-selinux on RHEL

  1. Begin by installing dependencies:
    # yum install -y selinux-policy selinux-policy-devel rpm-build rpm-devel git
  2. Download the sources for the RPM:
    $ git clone https://git.centos.org/r/rpms/container-selinux.git
    $ cd container-selinux
    $ git checkout c7-alt
    $ cd SPECS
  3. Have a look at the .spec file to see where it expects to find the sources; at the top of the file I downloaded, I saw:
    %global git0 https://github.com/projectatomic/%{name}
    %global commit0 dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  4. Fetch the sources, then check out that commit:
    $ git clone https://github.com/projectatomic/container-selinux
    $ cd container-selinux
    $ git checkout dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  5. Rename the checked-out directory to container-selinux-${GIT_COMMIT_ID}:
    $ cd ..
    $ mv container-selinux container-selinux-dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  6. Package it up into a tarball, excluding the .git directory and plop that file in ~/rpmbuild/SOURCES
    $ tar --exclude container-selinux-dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca/.git \
    -czvf ~/rpmbuild/SOURCES/container-selinux-dfb449b.tar.gz \
    container-selinux-dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  7. Build!
    $ rpmbuild -ba container-selinux.spec

All going to plan, you should have a shiny new RPM file in ~/rpmbuild/RPMS.  Install that, then you can proceed with installing the CentOS version of Docker CE.  If you’re doing this for a production environment, and absolutely must use Docker CE, then I’d suggest taking the source RPMs for Docker CE and building those on RHEL too, rather than using raw CentOS binaries, but to each their own.
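
For the record, the remaining steps went something like this.  A sketch from memory: the repository file is Docker’s standard CentOS one, yum-config-manager comes from yum-utils, and your RPM’s exact filename will differ.

# yum install -y yum-utils
# yum install -y ~/rpmbuild/RPMS/noarch/container-selinux-*.rpm
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y docker-ce
# systemctl enable --now docker

With that done, the daemon came up happily: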

# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-693.11.1.el7.x86_64
Operating System: Red Hat Enterprise Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.702GiB
Name: localhost.localdomain
ID: YVHJ:UXQV:TBAS:E5MH:B4GL:VT2H:A2BW:MQMF:3AGA:FBBX:MINO:24Z6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Jul 22 2018

So, on the bike, I use a portable GPS to keep track of my speed and to track the mileage done on the bike so I know when to next put it in for service. Originally I just relied on the trip counter in the GPS, but then found that this could develop quite an error if left to tick over for a few months.

Thus, I wrote a simple CGI application in Perl and SQLite3 that would track the odometer readings. Plain, simple, and it’s worked quite well, but remembering to punch in the current odometer reading is a chore, and my stats are only as granular as the readings I punch in: if I want to see what distance I did on a particular day, I either have to have had the foresight to store readings at the start and end of that day, or I’m stuffed.

I also keep the GPX tracklogs. While the Garmin 650 is not great at handling lots of tracklogs (and for some moronic reason, they name the files “Day DD-MMM-YY HH.MM.SS.gpx”, not something sensible like “YYYY-MM-DDTHH-MM-SS.gpx”), it’s good enough that I can periodically siphon off the track logs for storage on my laptop. I then have a record of where I’ve been.

Theoretically, these also record the distance travelled; I could make a service that just consumes the GPX files, and tallies up the distances that way. Maybe even visualise, heat-map style, where I go most. (No prizes for guessing “work” … but where else?)

The existing system uses SQLite, and specifically, its views, as a poor man’s stored procedures. It’s hacky, inefficient, and sooner or later I’ll have performance problems. PostGIS is an extension to PostgreSQL which supports a large number of spatial operations, including finding the length of a series of points, which is exactly the problem I’m trying to solve right now. The catch is: how do you import the data?

Enter GDAL

GDAL is a library of geographic functions for answering these kinds of questions. It ships with a utility ogr2ogr, which can take geographic information in a variety of formats, and convert to a variety of output formats. Crucially, this tool supports consuming GPX files and writing to a PostGIS database.

Loading one file is easy enough:

$ ogr2ogr -oo GPX_ELE_AS_25D=YES \
  -dim 3 \
  -gt 65536 \
  -lco GEOM_TYPE=geography \
  -preserve_fid \
  -f PostgreSQL \
  "PG:dbname=yourdb" yourfile.gpx \
  tracks track_points

The arguments here were found by trial-and-error.  Specifically, -oo GPX_ELE_AS_25D=YES and -dim 3 tell ogr2ogr to preserve the elevation in the point information (as well as keeping a copy of it in the ele column). -lco GEOM_TYPE=geography tells ogr2ogr to use the geography data type in PostGIS.

Look in the database, and you’ll see two tables, tracks and track_points. Sadly, you don’t get to choose these names (not easily, anyway; there is -nln, but it will then create one table with the given name, put the tracks in it, then blow it away and replace it with a table of the same name containing the points), and there are no foreign keys between the two.
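
A quick sanity check from psql confirms the import worked; the counts, of course, depend on your tracklog:

$ psql yourdb <<EOF
SELECT COUNT(*) FROM tracks;
SELECT COUNT(*) FROM track_points;
EOF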

The fun starts when you try to import a second GPX file. Run that command again, and because of -preserve_fid, you’ll get a primary key clash. Drop -preserve_fid, and track_fid gets set to 0 for all points, rendering the column meaningless.

Importing many GPX files

Out of the box, this just wasn’t going to fly, so we needed to do things a little differently.  Firstly, I duplicated the schema that GDAL creates, making my own tables which will ultimately store the data.  I then used a wrapper shell script that calls psql before and after ogr2ogr, so I can re-map the primary keys and maintain the relationship between the two tables.

Schema SQL

CREATE SEQUENCE public.gpx_points_ogc_fid_seq
    INCREMENT 1
    START 1
    MINVALUE 1
    MAXVALUE 2147483647
    CACHE 1;

CREATE SEQUENCE public.gpx_tracks_ogc_fid_seq
    INCREMENT 1
    START 1
    MINVALUE 1
    MAXVALUE 2147483647
    CACHE 1;

CREATE TABLE public.gpx_tracks
(
    ogc_fid integer NOT NULL,
    name character varying COLLATE pg_catalog."default",
    cmt character varying COLLATE pg_catalog."default",
    "desc" character varying COLLATE pg_catalog."default",
    src character varying COLLATE pg_catalog."default",
    link1_href character varying COLLATE pg_catalog."default",
    link1_text character varying COLLATE pg_catalog."default",
    link1_type character varying COLLATE pg_catalog."default",
    link2_href character varying COLLATE pg_catalog."default",
    link2_text character varying COLLATE pg_catalog."default",
    link2_type character varying COLLATE pg_catalog."default",
    "number" integer,
    type character varying COLLATE pg_catalog."default",
    gpxx_trackextension character varying COLLATE pg_catalog."default",
    the_geog geography(MultiLineStringZ,4326),
    CONSTRAINT gpx_tracks_pkey PRIMARY KEY (ogc_fid)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

CREATE TABLE public.gpx_points
(
    ogc_fid integer NOT NULL,
    track_fid integer,
    track_seg_id integer,
    track_seg_point_id integer,
    ele double precision,
    "time" timestamp with time zone,
    magvar double precision,
    geoidheight double precision,
    name character varying COLLATE pg_catalog."default",
    cmt character varying COLLATE pg_catalog."default",
    "desc" character varying COLLATE pg_catalog."default",
    src character varying COLLATE pg_catalog."default",
    link1_href character varying COLLATE pg_catalog."default",
    link1_text character varying COLLATE pg_catalog."default",
    link1_type character varying COLLATE pg_catalog."default",
    link2_href character varying COLLATE pg_catalog."default",
    link2_text character varying COLLATE pg_catalog."default",
    link2_type character varying COLLATE pg_catalog."default",
    sym character varying COLLATE pg_catalog."default",
    type character varying COLLATE pg_catalog."default",
    fix character varying COLLATE pg_catalog."default",
    sat integer,
    hdop double precision,
    vdop double precision,
    pdop double precision,
    ageofdgpsdata double precision,
    dgpsid integer,
    the_geog geography(PointZ,4326),
    CONSTRAINT gpx_points_pkey PRIMARY KEY (ogc_fid),
    CONSTRAINT gpx_points_track_fid_fkey FOREIGN KEY (track_fid)
        REFERENCES public.gpx_tracks (ogc_fid) MATCH SIMPLE
        ON UPDATE RESTRICT
        ON DELETE RESTRICT
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

The wrapper script

#!/bin/sh

DB=tracklog

for f in "$@"; do
        # Clear out any leftovers from a previous import.
        psql "${DB}" <<EOF
DROP TABLE IF EXISTS tracks;
DROP TABLE IF EXISTS track_points;
EOF
        ogr2ogr -oo GPX_ELE_AS_25D=YES \
                -dim 3 \
                -gt 65536 \
                -lco SPATIAL_INDEX=FALSE \
                -lco GEOM_TYPE=geography \
                -overwrite \
                -preserve_fid \
                -f PostgreSQL \
                "PG:dbname=${DB}" "$f" \
                tracks track_points

        # Re-map FIDs then insert into real tables.
        psql "${DB}" <<EOF
        CREATE TEMPORARY TABLE track_fids AS
        SELECT  ogc_fid AS orig_fid,
                nextval('gpx_tracks_ogc_fid_seq') AS ogc_fid
        FROM    tracks;

        CREATE TEMPORARY TABLE point_fids AS
        SELECT  ogc_fid AS orig_fid,
                nextval('gpx_points_ogc_fid_seq') AS ogc_fid
        FROM    track_points;

        INSERT INTO gpx_tracks
        SELECT  track_fids.ogc_fid AS ogc_fid,
                tracks.name AS name,
                tracks.cmt AS cmt,
                tracks."desc" AS "desc",
                tracks.src AS src,
                tracks.link1_href AS link1_href,
                tracks.link1_text AS link1_text,
                tracks.link1_type AS link1_type,
                tracks.link2_href AS link2_href,
                tracks.link2_text AS link2_text,
                tracks.link2_type AS link2_type,
                tracks."number" AS "number",
                tracks.type AS type,
                tracks.gpxx_trackextension AS gpxx_trackextension,
                tracks.the_geog AS the_geog
        FROM    track_fids, tracks
        WHERE   track_fids.orig_fid=tracks.ogc_fid;

        INSERT INTO gpx_points
        SELECT  point_fids.ogc_fid AS ogc_fid,
                track_fids.ogc_fid AS track_fid,
                track_points.track_seg_id AS track_seg_id,
                track_points.track_seg_point_id AS track_seg_point_id,
                track_points.ele AS ele,
                track_points."time" AS "time",
                track_points.magvar AS magvar,
                track_points.geoidheight AS geoidheight,
                track_points.name AS name,
                track_points.cmt AS cmt,
                track_points."desc" AS "desc",
                track_points.src AS src,
                track_points.link1_href AS link1_href,
                track_points.link1_text AS link1_text,
                track_points.link1_type AS link1_type,
                track_points.link2_href AS link2_href,
                track_points.link2_text AS link2_text,
                track_points.link2_type AS link2_type,
                track_points.sym AS sym,
                track_points.type AS type,
                track_points.fix AS fix,
                track_points.sat AS sat,
                track_points.hdop AS hdop,
                track_points.vdop AS vdop,
                track_points.pdop AS pdop,
                track_points.ageofdgpsdata AS ageofdgpsdata,
                track_points.dgpsid AS dgpsid,
                track_points.the_geog AS the_geog
        FROM    track_points, track_fids, point_fids
        WHERE   point_fids.orig_fid=track_points.ogc_fid
        AND     track_fids.orig_fid=track_points.track_fid;

        DROP TABLE tracks;
        DROP TABLE track_points;
        DROP TABLE track_fids;
        DROP TABLE point_fids;
EOF
done
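
Assuming the script is saved as import-gpx.sh (my name for it; call it what you like) and marked executable, importing a whole directory of tracklogs becomes a one-liner:

$ ./import-gpx.sh ~/tracklogs/*.gpx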

Getting the length of a track

Having imported all the data, we can do something like this:

SELECT ogc_fid, name,
  ST_Length(the_geog, false)/1000 as dist_in_km
FROM gpx_tracks order by ogc_fid desc limit 10;

and get this:

 ogc_fid |          name           |     dist_in_km
---------+-------------------------+---------------------
    1754 | Day 20-JUL-18 18:09:02  |   9.83689686312541
    1753 | Day 15-JUL-18 09:36:16  |   5.75919119415676
    1752 | Day 14-JUL-18 17:12:24  |   0.071734341651265
    1751 | Day 14-JUL-18 17:12:23  |   0.0729574875289383
    1750 | Day 13-JUL-18 08:13:32  |   9.88420745610283
    1749 | Day 06-JUL-18 09:00:32  |   9.81221316219109
    1748 | Day 30-JUN-18 01:11:26  |   9.77607205972035
    1747 | Day 23-JUN-18 05:02:04  |  19.6368592034475
    1746 | Day 22-JUN-18 18:03:37  |   9.91964760346248
    1745 | Day 12-JUN-18 21:22:26  |   0.0884092391531763
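
Better still, this solves my per-day granularity gripe from earlier.  A sketch of the idea (untested against my full data set): date each track by its earliest point, then tally the distances per day.

$ psql tracklog <<EOF
SELECT day, SUM(km) AS km
FROM (
        SELECT  date(MIN(p."time")) AS day,
                ST_Length(t.the_geog, false)/1000 AS km
        FROM    gpx_tracks t
        JOIN    gpx_points p ON p.track_fid = t.ogc_fid
        GROUP BY t.ogc_fid
) AS per_track
GROUP BY day
ORDER BY day;
EOF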

Visualisation with QGIS

Turns out, this is straightforward…

  1. In your workspace, there’s a tree with the different layer types you can add, including PostGIS… right-click on this, select New Connection… and fill in the details for your PostgreSQL database.
  2. Below that is XYZ Tiles…; right-click again, select New Connection for OpenStreetMap, and use the URL https://a.tile.openstreetmap.org/{z}/{x}/{y}.png (also, see their usage policy).
  3. Drag the OpenStreetMap connection to your layers.
  4. Expand the PostGIS connection you just made, look for the gpx_tracks table, and drag this on top of your OpenStreetMap layer.

Below is everywhere I’ve been with the GPS tracklog running.  Much of what you see is the big loop a few of us did in 2012, including my trip to Ballarat for the 2012 LCA.

If I zoom in on Brisbane, unsurprisingly, some areas show up very clearly as being common haunts for me:

A bit of SQL voodoo, and I come up with this:

In orange is the territory covered on the Boulder (minus what was covered before I got the GPS), in blue the territory covered on the Talon 29 ER 0, and in red, on my current commuter (Toughroad SLR2).

Jul 16 2018

So, the local media here (can’t comment for other parts of the world) have been quite busy reporting on the fate of The Wild Boars soccer team and their coach, stuck in a flooded cave in Thailand.  With the great work of many, the group is now free of the cave, and getting the medical attention they need.

Pats on the back all around.  It could have very well been a dozen funerals that needed to be organised instead of servings of various meals.

Overshadowing this has been the somewhat childish spat between Vern Unsworth and Elon Musk over the miniature submarine that was proposed as a vehicle for transporting the children through the cave system.

Now, I’ll admit right up front, what I know is what I’ve heard from the media here.  In amongst the reports, it was commented that the gaps through which people had to squeeze were as small as 38cm in places.

That does not leave you much room.  That’s bloody confined in the extreme.  A submarine that could fit a child and squeeze through such a gap?  It’d be positively claustrophobic!

Now, Mr Unsworth did label this as a PR stunt.  Maybe it was … maybe the design was just naïve.  I think the goal was a noble one, and Elon Musk’s team did a great job in giving it a go, even if they did overlook a few critical details.

However, I think I’ll take Mr Unsworth’s advice over Mr Musk’s regarding whether the device was practical, as he was actually there.  If the device got stuck, the results could have been fatal.  The rescue team was in a dangerous situation and had already lost one of its members; they really weren’t in a position to experiment.  I think responding with “stick it where it hurts” is being overly harsh, but otherwise I think the criticism was entirely valid.

You do not, however, call someone a “pedo”, without very good grounds for doing so.  That is slanderous.  And what exactly is “sus” about living in Thailand?  Tesla’s been suffering some quite bad press lately, I really do not think this juvenile behaviour helps anyone.

One is free to believe that ego is not a dirty word, but that does not mean one’s humility should be locked under the stairs!


Update 2018-07-17: Hmm, I was saying…? Tesla sheds almost $US2b after Elon Musk’s ‘pedo’ attack on British diver.

Jun 14 2018

So, last Sunday we did a trip up the Brisbane Valley to do a rekkie for the Yarraman to Wulkuraka bike ride that Brisbane WICEN will be assisting in at the end of next month.

The area is known to be quite patchy where phone reception is concerned, with Linville shown on Telstra’s coverage maps as highly unreliable: external antennas are said to be required to get any sort of service.  So it seemed a good place to take the Kite and try it out in a weak-signal area.

3G coverage in Linville, with external antenna.

4G coverage in Linville, with external antenna.

4GX coverage in Linville, with external antenna.

Sadly, I didn’t get as much time as I would have liked to perform these tests, and it would have been great to compare against a few others… but I was able to take some screenshots on the way up of the three phones, all on the same network (Telstra), using their internal antennas (and the small whip in the case of the Kite).  However, we got there in the afternoon, and there were clouds gathering, so we had to get to Moore.

In any case, Telstra seems to have pulled their socks up since those maps were updated… as I found I was getting reasonable coverage on the T83.  The Kite was in the car at the time, I didn’t want it getting damaged if I came off the bike or if the heavens opened up.

The screenshots I did manage to take on the three phones on the way up are summarised in the table below.

This is not that scientific, and a bit crude since I couldn’t take the screenshots at exactly the same moment.  Plus, we were travelling at 100km/hr for much of the run.  There was one point where we stopped for breakfast at Fernvale, I can’t recall exactly what time that was or whether I got a screenshot from all three phones at that time.

The T84 is the only phone out of the three that can do the 4GX 700MHz band.

The three phones were the ZTE T83, the ZTE T84 and the iSquare Mobility Kite v1; each row of the original table was a screenshot of one phone’s signal reading, so only the time-stamps and notes are reproduced here.

Time                 Phone  Notes
2018-06-10T06:08:16  T83    Leaving Brisbane.
2018-06-10T06:09:24  Kite
2018-06-10T06:09:33  T83
2018-06-10T06:26:17  T83
2018-06-10T06:26:25  Kite
2018-06-10T07:30:27  T84    A rare moment where the T84 beats the others.  My guess is this is a 4GX (700MHz) cell.
2018-06-10T07:30:34  Kite
2018-06-10T07:30:39  T83
2018-06-10T07:41:48  Kite
2018-06-10T07:41:54  T84    HSPA coverage… one of the few times we see the T84 drop back to 3G.
2018-06-10T07:42:01  T83
2018-06-10T07:51:34  T83    Patchy coverage at times en route to Moore.
2018-06-10T07:51:45  Kite
2018-06-10T08:24:57  Kite   For grins, trying out Optus coverage on the Kite at Moore.  There’s a tower at Benarkin, not sure if there’s one closer to Moore.
2018-06-10T08:25:39  Kite
2018-06-10T08:54:28  T84
2018-06-10T08:54:35  Kite   En route to Benarkin, we lose contact with Telstra on all three devices.
2018-06-10T08:54:39  T83
2018-06-10T09:35:14  Kite   In Benarkin.
2018-06-10T09:35:22  T83
2018-06-10T10:25:27  Kite
2018-06-10T10:25:48  T83

So what does the above show?  Well, for starters, it is apparent that the T83 gets left in the dust by the other two devices.  This is interesting, as my T83 was definitely the more reliable of the two ZTEs on our last trip into the Snowy Mountains, regularly getting a signal in places where the T84 failed.

Two spots I’d love to take the Kite would be Dumboy Creek (4km outside Delungra on the Gwydir Highway) and Sawpit Creek (just outside Jindabyne), but both are a bit far for a day trip!  It’s unlikely I’ll be venturing that far south again this year.

On this trip up the Brisbane Valley though, I observed that when the signal got weak, the Kite was more willing to drop back to 3G, whereas the two ZTE phones hung onto that little scrap of 4G.  Yes, 4G might give clearer call quality and faster speeds in ideal conditions, but these conditions were not ideal; we were in fringe coverage.

The 4G standards use denser modulation schemes (QPSK, 16-QAM or 64-QAM) than 3G (QPSK only), trading signal-to-noise performance for spectral efficiency, and thus lean more heavily on forward error correction to achieve communications in adverse conditions.  When a symbol is corrupted, more data is lost with these standards.  3G might be slower, but sometimes slow and steady wins the race; fast and flaky is a recipe for frustration.

A more scientific experiment, where we are stationary and can let each device “settle” before taking a reading, would be worthwhile.  Without a doubt, the Kite runs rings around the T83.  The comparison with the T84 is less clear-cut: the T84 and the Kite both run the same chipset, the Qualcomm MSM8916, while the T83 runs the older MSM8930.

By rights, the T84 and Kite should perform nearly identically, with the Kite having the advantage of a high-gain whip antenna instead of a more conventional patch antenna.  The only edge the T84 has is the 700MHz band, which isn’t that heavily deployed here in Australia right now.

The T83 and T84 can take an external antenna, but the socket is designed for cradle use and isn’t as rugged or durable as the SMA connector used on the Kite.  It’s soldered to the PCB, and when a cable is plugged in, it disconnects the internal antenna.

Thus damage to this connector can render these phones useless.  The SMA connector on the Kite however is a pigtail to an IPX socket inside … a readily available off-the-shelf (mail-order) part.  People may not like the whip sticking out though.

The Kite does ship with a patch antenna, which is about 75% efficient, so maybe 0dBi at best.  However, I think making the case another 10mm longer and incorporating the whip into the top of the phone, so the antenna can tuck away when not needed, is a better plan.  It would not be hard to make the case accommodate it so it’s invisible, can fold out, or can be replaced with a coax connection to an external antenna.

If there’s time, I’ll try to get some more conclusive tests done, but there’s no guarantees on that.

Jun 04 2018

So, recently there was a task at my work to review enabling gzip compression on our nginx HTTP servers to compress the traffic.

Now, in principle it seemed like a good idea, but having been exposed to the security world a little bit, I was familiar with some of the issues with this, notably, CRIME, BEAST and BREACH.  Of these, only BREACH is unmitigated at the browser end.

The suggested mitigations, in order of effectiveness are:

  1. Disabling HTTP compression
  2. Separating secrets from user input
  3. Randomizing secrets per request
  4. Masking secrets (effectively randomizing by XORing with a random secret per request)
  5. Protecting vulnerable pages with CSRF tokens
  6. Length hiding (by adding random number of bytes to the responses)
  7. Rate-limiting the requests

Now, we’ve effectively been doing (1) by default… but (2), (3) and (4) make me wonder how protocols like OAuth2 are supposed to work.  That got me thinking about a little toy I was given for attending the 2011 linux.conf.au… it’s a YubiKey, one of the early model ones.  The way it operates is that Yubico’s servers and your key share a secret AES key (I think it’s AES-128), some static data, and a counter.  Each time you generate a one-time password with the key, it increments its counter, encrypts the counter along with the static data, then encodes the output using a keyboard-layout-agnostic encoding scheme to be “typed” into the computer.

Yubico receive this token, decrypt it, then compare the counter value.  If it checks out, and is greater than the existing counter value at their end, they accept it, and store that new counter value.

That same scheme made me wonder if something similar could work for requests from a browser… that is, you agree on a shared secret over HTTPS, or using Diffie-Hellman.  You synchronise counters (either using your new shared secret, or over HTTPS at the same time as you make the shared key), then from there on, each request the browser makes to your API is accompanied by a one-time token, generated by encrypting the counter value and the static data, and sent in the HTTP headers.
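
To make that concrete, here’s a toy version of the token generation using the openssl command-line tool.  Purely illustrative: the key, device ID and 16-byte framing are all made up, and a browser would of course use an AES library rather than a shell.

$ KEY=000102030405060708090a0b0c0d0e0f   # hypothetical shared AES-128 key (hex)
$ COUNTER=42                             # incremented on every request
$ printf '%-8.8s%08d' "device01" "$COUNTER" \
    | openssl enc -aes-128-ecb -K "$KEY" -nopad \
    | xxd -p
# The server decrypts with the same key, checks the device ID matches and the
# counter exceeds the last value seen, then stores the new counter.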

There are a few libraries that do AES in the browser, such as JSAES (GPLv3) and aes-js (MIT).

This is going to be expensive to do, so a compromise might be to use this every N requests, where N is small enough that BREACH doesn’t get a sufficient number of requests from which to derive a secret: by the time it figures out that secret, the token has expired.  Or tokens could be bulk-generated at the browser end in the background, so there’s a ready supply.

I haven’t gone through the full ins and outs of this, and I’m no security expert, but that’s just some initial thinking.

Feb 13 2018

So, over the last few years we’ve seen a big shift in the way websites operate.

Once upon a time, JavaScript was a nice-to-have, and you as a web developer had better be prepared for it to not be functional; the DOM was non-existent, and we were ooohing and ahhing over the de facto standard in Internet multimedia, Macromedia Flash.  The engine we now call WebKit was still a primitive and quite basic renderer called KHTML, in a little-known browser called Konqueror.  Mozilla didn’t exist as an open-source project yet; it was Netscape and Microsoft duelling it out.

Back then, XMLHTTPRequest was so new it wasn’t a standard yet; Microsoft had implemented the idea as an ActiveX control in IE5, and no one else had it yet.  So if you wanted to update a page, you had to re-load the whole lot and render it server-side.  We had just shaken off our FONT tags for CSS (thank god!), but if you wanted to make an image change as the mouse cursor hovered over it, you still needed those onmouseover/onmouseout event handlers to swap the image.  Ohh, and scalable graphics?  Forget it.  Render as a GIF or JPEG and hope you picked the resolution right.

And bear in mind, the expectation was that a user running an 800×600 pixel screen resolution, and connected via a 28.8kbps dial-up modem, should be able to load your page within about 30 seconds, and navigate without needing to resort to horizontal scroll bars.  At 28.8kbps you move roughly 3.5kB a second, so that meant images had to be compressed to be no bigger than 30kB.

That was 17 years ago.  Man I feel old!

This gets me thinking… today, the expectation is that your Internet connection is at least 256kbps.  Why then do websites take so long to load?

It seems our modern web designers have forgotten the art of how to pack down a website to minimise the amount of data needed to be transmitted so that the page is functional.  In this modern age of “pretty” web design, we’ve forgotten how to make a page practical.

Today, if you want to show an icon on a page, and have it fill the entire browser window, you can fire up Inkscape or Adobe Illustrator, let the creative juices flow and voilà, out pops a scalable vector graphic, which can be dropped straight into your HTML.  Turn on gzip compression on the web server, and that graphic will be on that 28.8kbps user’s screen in under 3 seconds, and can still be as big as they want.

If you want to make a page interactive, there’s no need to reload the entire page; XMLHTTPRequest is now a W3C standard, and implemented in all the major browsers.  Websockets means an end to any kind of polling; you can get updates as they happen.

It seems silly, but in spite of all the advancements, website page loads are not getting faster, they’re getting slower.  The “everybody has broadband” and “everybody has full-HD screens” argument is being used as an excuse for bloat and sloppy design practices.

More than once I’ve had to point someone to the horizontal scroll bar because the web designer failed to test their website at the rather common 1366×768 screen resolution of a typical laptop.  If I had a dollar for every time that’s happened in the last 12 months, I’d be able to buy the offending companies out and sack the web designers responsible!

One of the most annoying, from a security perspective, is the proliferation of “content distribution networks”.  It seems they’ve realised these big bulky blobs of JavaScript take a long time to load even on fast links.  So, what do the bright sparks do?  “I know… instead of loading it from one server, I’ll put it on 10 and increase my upload capacity 10-fold!”  Yes, they might have 1Gbps on each host.  1Gbps × 10 = 10Gbps, so the page will load at 10Gbps, right?

Cue sad tuba sound effect.

At my workplace, we have a 20Mbps Ethernet (not ADSL[2], fibre or cable; Ethernet) link to the Internet.  On that link, I’ve been watching the web get slower and slower… and I do not think our ISP is completely to blame, as I see the same issue at home too.  One place we feel the pain a lot is Atlassian’s system, particularly Jira and Confluence.  To give you an idea of how deeply they drink the CDN Kool-Aid, check out the number of sites I have to whitelist in order to get the page functional:

Atlassian’s JIRA… failing in spite of a crapton of scripts being loaded.

That’s 17 different hosts my web browser must make contact with, and download content from, before the page will function.  17 separate HTTP connections, which must fight with all other IP traffic on that 20Mbps Ethernet link for bandwidth.  20Mbps is the maximum that any one connection will do, and I can guarantee it will not reach even half that!

Interestingly, despite allowing all those scripts to load, they still failed to come up with the goods after a pregnant pause.  So the extra thrashing of the link was for naught.  Then there’s the security implications.

At least 3 of those are hosts that Atlassian do not control.  If someone compromised ravenjs.com, for example, they could inject any JavaScript they want on the JIRA site, and take control of a user’s account.  Atlassian are relying on these third parties’ promises and security practices to ensure their site stays secure, and stays in their (the third parties’) control.  Suppose someone forgets to renew a domain registration; the result could be highly embarrassing!

So, I’m left wondering what they teach these days.  For a multitude of reasons, sites should be blazingly quick to load: modern techniques ought to permit vastly improved efficiency of content representation and delivery, and network link speeds are steadily improving.  However, it seems the reverse is true… why are we failing so badly?

Oct 10 2017

So, over the last few years, computing power has gotten us to the point where remotely operated aerial vehicles are not only a thing, but are cheap and widely available.

There are of course, lots of good points about these toys, lots of tasks in which they can be useful.  No, I don’t think Amazon Prime is one of them.

They come with their risks though, and there’s a big list of do’s and don’ts regarding their use.  For recreational use, CASA, for example, have this list of rules.  This includes, amongst other things, staying below 120m altitude, and keeping 30m away from any person.

For a building, that might as well be 30m from the top of the roof, as you cannot tell if there are people within that building, or where in that building those people reside, or from what entrance they may exit.

In principle, I have no problem with people playing around with them.  I draw the line where such vehicles enter a person’s property.

The laws are rather lax about what is considered trespass with regards to such vehicles.  The no-brainer is if the vehicle enters any building or lands (controlled or otherwise) on any surface within the property.  A big reason for this is that the legal system often trails technological advancement.

This does not mean it is valid to fly over someone’s property.  For one thing, you had better ensure there is absolutely no chance that your device might malfunction and cause damage or injury to any person or possession on that property.

Moreover, without speaking to the owner of said property, you make it impossible for that person to take any kind of preventative action that might reduce the risk of malfunction, or alert you to any risks posed on the property.

In my case, I operate an amateur radio station.  My transmitting equipment is capable of 100W transmit power between 1.8MHz and 54MHz, 50W transmit power between 144MHz and 148MHz, and 20W transmit power between 420MHz and 450MHz, using FM, SSB, AM and CW, and digital modes built on these analogue modulation schemes.

Most of my antennas are dipoles, so 2.2dBi, I do have some higher-gain whips, and of course, may choose to use yagis or even dish antennas.  The stations that I might choose to work are mostly terrestrial in nature, however, airborne stations such as satellites, or indeed bouncing off objects such as the Moon, are also possibilities.

Beyond the paperwork that was submitted when applying for my radio license (which for this callsign, was filed about 9 years ago now, or for my original callsign was filed back in December 2007), there is no paperwork required to be submitted or filled out prior to me commencing transmissions.  Not to the ACMA, not to CASA, not to registered drone operators in the local area, not anybody.

While I’ve successfully operated this station with no complaints from my neighbours for nearly 10 years… it is worth pointing out that the said neighbours are a good distance away from my transmitting equipment.  Far enough away that the electromagnetic fields generated are sufficiently diminished to pose no danger to themselves or their property.

Any drone that enters the property, is at risk of malfunction if it strays too close to transmitting antennas.  If you think I will cease activity because you are in the area, think again.  There is no expectation on my part that I should alter my activities due to the presence of a drone.  It is highly probable that, whilst being inside, I am completely unaware of your device’s presence.  I cannot, and will not, take responsibility for your device’s electromagnetic immunity, or lack thereof.

In the event that it does malfunction though… it will be deemed to have trespassed if it falls within the property, and may be confiscated.  If it causes damage to any person or possession within the property, it will be confiscated, and the owner will be expected to pay damages prior to the device’s return.

In short, until such time as the laws are clarified on the matter, I implore all operators of these devices, to not fly over any property without the express permission of the owner of that property.  At least then, we can all be on the same page, we can avoid problems, and make the operation safer for all.

Sep 13 2017

So it seems that the Same Sex Marriage postal votes are finally being sent around.  This is good news in a way: we get to have a say in the matter and hopefully put the matter to bed one way or the other.

No more umming and ahhing, which I’m frankly sick and tired of, as I feel there are more pressing needs.  Yes, it’s important, but we have two nuclear-armed crazy-haired nutters at opposite sides of the Pacific ready to light the planet up like a neon light!

I’m in support of the legislation changing, by the way.  I think same-sex couples are entitled to the same rights, and it wasn’t that long ago that marriage was restricted to couples who were not just of the opposite sex, but also of the same “race” and religion.

To quote a song by John Williamson: “They’d chain you up to a boab tree, for kissing an Aborigine!”

So to my way of thinking, society changes.  What was taboo yesterday, we don’t think twice about today.  An Anglican family sending their children to a Catholic school would have been heresy years ago… but for my sister and me, that is exactly what happened.  The world doesn’t seem to have imploded as a result.

The status quo regarding marriage is a hang-over from when the Church was the only place where you could get married, and ruled with far greater weight than today.  This is no longer the case, thus it no longer makes sense to hang onto this concept.

Anyway… my opinions on this are beside the point.  In spite of the good intentions, it looks as if the postal vote envelopes suffer one serious flaw: with sufficient light, they are see-through!

So my proposal: Put a thin piece of card in with the postal vote to block the light.  Not thick enough that it might cause the envelope to jam or interfere with sorting equipment, just opaque enough to prevent the contents being visible.  A small piece of black paper would likely do the job nicely.

Sure the ABS will have a little bit more paper to dispose of, but then at least, our votes are secure and people can’t “manipulate” the vote by snooping on sealed envelopes and discard the ones that disagree with their opinions.  At least then we won’t be wasting $122M.

Mar 26 2017

Yesterday’s post was rather long, but was intended for mostly technical audiences outside of amateur radio.  This post serves as a brain dump of volatile memory before I go to sleep for the night.  (Human conscious memory is more like D-RAM than one might realise.)

Radio interface

So, many in our group use packet radio TNCs already, with a good number using the venerable Kantronics KPC3.  These have a DB9 port that connects to the radio, and a second DB25 RS-232 port that connects to the computer.

My proposal: we make an audio interface that either plugs into that DB9 port and re-uses the interface cables we already have, or directly into the radio’s data port.

This should connect to an audio interface on the computer.

For EMI’s sake, I’d recommend a USB sound dongle like this, or these, or this as that audio interface.  I looked on Jaycar and did see this one, which would also work (and burn a hole in your wallet!).

If you walk in and the asking price is more than $30, I’d seriously consider these other options.  Of those options, U-Mart are here in Brisbane; go to their site, order a dongle then tell the site you’ll come and pick it up.  They’ll send you an email with an order number when it’s ready, you just need to roll up to the store, punch that number into a terminal in the shop, then they’ll call your name out for you to collect and pay for it.

Scorptec are in Melbourne, so you’ll have to have items shipped, but are also worth talking to.  (They helped me source some bits for my server cluster when U-Mart wouldn’t.)

USB works over two copper pairs; one delivers +5V and 0V, the other is a differential pair for data.  In short, the USB link should be pretty immune from EMI issues.

At worst, you should be able to deal with it with judicious application of ferrite beads to knock down the common mode current and using a combination of low-ESR electrolytic and ceramic capacitors across the power rails.

If you then keep the analogue cables as short as absolutely possible, you should have little opportunity for RF to get in.

I don’t recommend the TigerTronics Signalink interfaces, they use cheap and nasty isolation transformers that lead to serious performance issues.

Receive audio

For the receive audio, we take the audio from the radio and feed it via a potentiometer to the tip of a 3.5mm TRS (“phono”) plug, with the sleeve going to common.  This plugs into the Line-In or Microphone input on the sound device.

Push to Talk and Transmit audio

I’ve bundled these together for a good reason.  The conventional way for computers to drive PTT is via an RS-232 serial port.

We can do that, but we won’t unless we have to.

Unless you’re running an original SoundBLASTER card, your audio interface is likely stereo.  We can get PTT control via an envelope detector forming a minimal-latency VOX control.

Another 3.5mm TRS plug connects to the “headphone” or “line-out” jack on our sound device and breaks out the left and right channels.

The left and right channels from the sound device should be fed into the “throw” contacts on two single-pole double-throw toggle switches.

The select pin (mechanically operated by the toggle handle) on each switch thus is used to select the left or right channel.

One switch’s select pin feeds into a potentiometer, then to the radio’s input.  We will call that the “modulator” switch; it selects which channel “modulates” our audio.  We can again adjust the gain with the potentiometer.

The other switch first feeds through a small Schottky diode then across a small electrolytic capacitor (to 0V) then through a small resistor before finally into the base of a small NPN signal transistor (e.g. BC547).  The emitter goes to 0V, the collector is our PTT signal.

This is the envelope detector we all know and love from our old experiments with crystal sets.  In theory, we could hook a speaker (and a power source) up to the collector and listen to AM radio stations, but in this case, we’ll be sending a tone down this channel to turn the transistor, and thus our PTT, on.

The switch feeding this arrangement we’ll call the “PTT” switch.

By using this arrangement, we can use either audio channel for modulation or PTT control, or we can use one channel for both.  1200-baud AFSK, FreeDV, etc, should work fine with both on the one channel.

If we just want to pass through analogue audio, then we probably want modulation separate, so we can hold the PTT open during speech breaks without having an annoying tone superimposed on our signal.

It may be prudent to feed a second resistor into the base of that NPN, running off to the RTS pin on an RS-232 interface.  This will let us use software that relies on RS-232 PTT control, which can be added by way of a USB-RS232 dongle.

The cheap Prolific PL-2303 ones sold by a few places (including Jaycar) will work for this.  (If your software expects a 16550 UART interface on port 0x3f8 or similar, consider running it in a virtual machine.)

Ideally though, this should not be needed, and if added, can be left disconnected without harm.

Software

There are a few “off-the-shelf” packages that should work fine with this arrangement.

AX.25 software

AGWPE on Windows provides a software TNC.  On Linux, there’s soundmodem (which I have used, and presently mirror) and Direwolf.

These shouldn’t need a separate PTT channel; it should be sufficient to make the pre-amble long enough to engage PTT and rely on the envelope detector recognising the packet.

Digital Voice

FreeDV provides an open-source digital voice platform system for Windows, Linux and MacOS X.

This tool also lets us send analogue voice.  Digital voice should be fine, the first frame might get lost but as a frame is 40ms, we just wait before we start talking, like we would for regular analogue radio.

For the analogue side of things, we would want tone-driven PTT.  Not sure if that’s supported, but hey, we’ve got the source code, and yours truly has worked with it; it shouldn’t be hard to add.

Slow-scan television

The two to watch here would be QSSTV (Linux) and EasyPal (Windows).  QSSTV is open-source, so if we need to make modifications, we can.

Not sure who maintains EasyPal these days, not Eric VK4AES as he’s no longer with us (RIP and thank-you).  Here, we might need an RS-232 PTT interface, which as discussed, is not a hard modification.

Radioteletype

Most of this is covered by FLDigi.  Modes with a fairly consistent duty cycle will work fine with the VOX PTT, and once again, we have the source, so we can make others work.

Custom software ideas

So we can use a few off-the-shelf packages to do basic comms.

  • We need auditability of our messaging system.  Analogue FM, we can just use a VOX-like function on the computer to record individual received messages, and to record outgoing traffic.  Text messages and files can be logged.
  • Ideally, we should have some digital signing of logs to make them tamper-resistant.  Then we can mathematically prove what was sent.
  • In a true emergency, it may be necessary to encrypt what we transmit.  This is fine; we’re allowed to do this in such cases, and we can always turn over our audited logs to the authorities anyway.
  • Files will be sent as blocks which are forward-error corrected (or forward-erasure coded).  We can use a block cipher such as AES-256 to encrypt these blocks before FEC.  OpenPGP would work well here rather than doing it from scratch; just send the OpenPGP output using FEC blocks.  It should be possible to pick out the symmetric key used at the receiving end for auditing, should Government ask for it.  DIY is not necessary; the building blocks are there.
  • Digital voice is a stream, we can use block ciphers but this introduces latency and there’s always the issue of bit errors.  Stream ciphers on the other hand, work by generating a key stream, then XOR-ing that with the data.  So long as we can keep sync in the face of bit errors, use of a stream cipher should not impair noise immunity.
  • Signal fade is a worse problem; I suggest a cleartext (3-bit, 4-bit?) Gray-code sync field for synchronisation.  The receiver can time the length of a fade, estimate the number of lost frames, then use the field to re-sync.
  • There’s more than a dozen stream ciphers to choose from.  Some promising ones are ACHTERBAHN-128, Grain 128a, HC-256, Phelix, Py, the Salsa20 family, SNOW 2/3G, SOBER-128, Scream, Turing, MUGI, Panama, ISAAC and Pike.
  • Most (all?) stream ciphers are symmetric.  We would have to negotiate/distribute a key somehow, either use Diffie-Hellman or send a generated key as an encrypted file transfer (see above).  The key and both encrypted + decrypted streams could be made available to Government if needed.
  • The software should be capable of:
    • Real-time digital voice (encrypted and clear; the latter being compatible with FreeDV)
    • File transfer (again, clear and encrypted using OpenPGP, and using good FEC, files will be cryptographically signed by sender)
    • Voice mail and SSTV, implemented using file transfer.
    • Radioteletype modes (perhaps PSK31, Olivia, etc), with logs made.
    • Analogue voice pass-through, with recordings made.
    • All messages logged and time-stamped, received messages/files hashed, hashes cryptographically signed (OpenPGP signature)
    • Operation over packet networks (AX.25, TCP/IP)
    • Standard message forms with some basic input validation.
    • Ad-hoc routing between interfaces (e.g. SSB to AX.25, AX.25 to TCP/IP, etc) should be possible.
  • The above stack should ideally work on low-cost single-board computers that are readily available and are low-power.  Linux support will be highest priority, Windows/MacOS X/BSD is a nice-to-have.
  • GNU Radio has building blocks that should let us do most of the above.

Mar 25 2017

So, there’s been a bit of discussion lately about our communications infrastructure. I’ve been doing quite a bit of thinking about the topic.

The situation today

Here in Australia, a lot of people are being moved over to the National Broadband Network… with the analogue fixed line phone (if it hasn’t disappeared already) being replaced with a digital service.

For many, their cellular “mobile” phone is their only means of contact. Far more than the over-glorified two-way radios that were the pre-cellular car phones used by the social elite in the early 70s, or the slightly more sophisticated and tennis-elbow-inducing AMPS hand-held mobile phones that we saw in the 80s, mobile phones today are truly versatile and powerful hand-held computers.

In fact, they are more powerful than the teen-aged computer I am typing this on. (And yes, I have upgraded it; 1GB RAM, 250GB mSATA SSD, Linux kernel 4.0… this 2GHz P4 still runs, and yes I’ll update that kernel in a moment. Now, how’s that iPhone 3G going, still running well?)

All of these devices are able to provide data communications throughput in the order of millions of bits per second, and outside of emergencies, are generally, very reliable.

It is easy to forget just how much needs to work properly in order for you to receive that funny cat picture.

Mobile networks

One thing that is not clear about the NBN, is what happens when the power is lost. The electricity grid is not infallible, and requires regular maintenance, so while reliability is good, it is not guaranteed.

For FTTP users, battery backup is an optional extra. If you haven’t opted in, then your “land line” goes down when the power goes out.

This is not a fact that people think about. Most will say, “that’s fine, I’ve got my mobile” … but do you? The typical mobile phone cell tower has several hours of battery back-up, and can be overwhelmed by traffic even in non-emergencies. They are fundamentally engineered to a cost, thus compromises are made on how long they can run without back-up power, and how much call capacity they carry.

In the 2008 storms that hit The Gap, I had no mobile telephone coverage for 2 days. My Nokia 3310 would occasionally pick up a signal from a tower in a neighbouring suburb such as Keperra, Red Hill or Bardon, and would thus occasionally receive the odd text message… but rarely could muster the effective radiated power to be able to reply back or make calls. (Yes, and Nokia did tell me that internal antennas surpassed the need for external ones. A 850MHz yagi might’ve worked!)

Emergency Services

Now, you tell yourself, “Well, the emergency services have their own radios…”, and this is correct. They do have their own radio networks, and these too are generally quite reliable. But they have their problems. The Emergency Alerting System employed in Victoria was having capacity problems as far back as 2006 (emphasis mine):

A high-priority project under the Statewide Integrated Public Safety Communications Strategy was establishing a reliable statewide paging system; the emergency alerting system. The EAS became operational in 2006 at a cost of $212 million. It provides coverage to about 96 per cent of Victoria through more than 220 remote transmitter sites. The system is managed by the Emergency Services Telecommunications Agency on behalf of the State and is used by the CFA, VICSES and Ambulance Victoria (rural) to alert approximately 37,400 personnel, mostly volunteers, to an incident. It has recently been extended to a small number of DSE and MFB staff.

Under the EAS there are three levels of message priority: emergency, non-emergency, and administrative. Within each category the system sends messages on a first-in, first-out basis. This means queued emergency messages are sent before any other message type and non-emergency messages have priority over administrative messages.

A problem with the transmission speed and coverage of messages was identified in 2006. The CFA expressed concern that areas already experiencing marginal coverage would suffer additional message loss when the system reached its limits during peak events.

To ensure statewide coverage for all pagers, in November 2006 EAS users decided to restrict transmission speed and respond to the capacity problems by upgrading the system. An additional problem with the EAS was caused by linking. The EAS can be configured to link messages by automatically sending a copy of a message to another pager address. If multiple copies of a message are sent the overall load on the system increases.

By February 2008 linking had increased by 25 per cent.

During the 2008 windstorm in Victoria the EAS was significantly short of delivery targets for non-emergency and administrative messages. The Emergency Services Telecommunications Agency subsequently reviewed how different agencies were using the system, including their message type selection and message linking. It recommended that the agencies establish business rules about the use of linking and processes for authorising and monitoring de-linking.

The planned upgrade was designed to ensure the EAS could cope better with more messages without the use of linking.

The upgrade was delayed several times and rescheduled for February 2009; it had not been rolled out by the time of Black Saturday. Unfortunately this affected the system on that day, after which the upgrade was postponed indefinitely.

I can find mention of this upgrade taking place around 2013. From what I gather, it did eventually happen, but it took a roasting from mother nature to make it happen. The lesson here is that even purpose built networks can fall over, and thus particularly in major incidents, it is prudent to have a back-up plan.

Alternatives

For the lay person, CB radio can be a useful tool for short-range (longer-than-yelling-range) voice communications. UHF CB will cover a few kilometres in urban environments and can achieve quite long distances if good line-of-sight is maintained. They require no apparatus license, and are relatively inexpensive.

It is worth having a couple of cheap ones, a small torch and a packet of AAA batteries (stored separately) in the car or in a bag you take with you. You can’t use them if they’re in a cupboard at home and you’re not there.

The downside with the hand-helds, particularly the low end ones, is effective radiated power. They will have small “rubber ducky” antennas, optimised for size, and will typically have limited transmit power, some can do the 5W limit, but most will be 1W or less.

If you need a bit more grunt, a mobile UHF CB set and magnetic mount antenna could be assembled and fitted to most cars, and will provide 5W transmit power, capable of about 5-10km in good conditions.

HF (27MHz) CB can go further: with 12W peak envelope power, it is possible to get across town with one, even interstate or overseas when conditions permit. These too are worth looking at, and many can be had cheaply second-hand. However, they require a larger antenna to be effective, and are less common today.

Beware of fakes though… A CB radio must meet "type approval": merely being technically able to transmit in that band doesn’t automatically make a radio a CB. It must meet all aspects of the Citizens Band Radio Service Class License to be classified as a CB.

If it does more than 5W on UHF, it is not a UHF CB. If it mentions a transmit range outside of 476-478MHz, it is not a UHF CB.  Programming it to do UHF channels doesn’t change this.

Similarly, if your HF CB radio can do 26MHz (NZ CB, not Australia), uses FM instead of SSB/AM (UK CB, again not Australia), does more than 12W, or can do 28-30MHz (10m amateur), it doesn’t qualify as being a CB under the class license.
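
The checks above are mechanical enough to express as code, so here’s a toy sketch of them — not legal advice, and the actual class license has far more requirements than these. The frequency limits for the Australian 27MHz CB allocation (26.965-27.405MHz) are my addition:

# Toy sanity-check of a radio's specifications against the CB rules
# described above.  Not legal advice; the class license has many more
# requirements than these.

def looks_like_uhf_cb(max_power_watts, tx_range_mhz):
    """tx_range_mhz is a (low, high) tuple in MHz."""
    low, high = tx_range_mhz
    if max_power_watts > 5:
        return False        # more than 5W on UHF: not a UHF CB
    if low < 476 or high > 478:
        return False        # transmit range outside 476-478MHz: not a UHF CB
    return True

def looks_like_hf_cb(max_pep_watts, tx_range_mhz, modes):
    low, high = tx_range_mhz
    if max_pep_watts > 12:
        return False        # more than 12W PEP: not an HF CB
    if low < 26.965 or high > 27.405:
        return False        # covers NZ 26MHz CB or 28-30MHz (10m amateur)
    if "FM" in modes and not {"SSB", "AM"} & set(modes):
        return False        # FM-only is UK CB, not Australian
    return True

print(looks_like_uhf_cb(5, (476.425, 477.4125)))  # True: a plausible UHF CB
print(looks_like_uhf_cb(8, (400.0, 480.0)))       # False: wideband commercial set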

Amateur radio licensing

If you’ve got a good understanding of high-school mathematics and physics, then a Foundation amateur radio license is well within reach.  In fact, I’d strongly recommend it for anyone doing first-year Electrical Engineering, as it will give you a good practical grounding in electrical theory.

Doing so, you get to use up to 10W of power (double what UHF CB gives you; that extra 3dB can matter!) and gain access to four HF bands, one VHF band and one UHF band, using analogue voice or hand-keyed Morse code.

You can then use those "CB radios" sold on eBay/DealExtreme/BangGood/AliExpress…etc. without issue: being unmodified "commercial off-the-shelf" equipment, they are acceptable for use under the Foundation license.

Beyond Voice: amateur radio digital modes

Now, it’s all well and good being able to get voice traffic across a couple of suburban blocks, but in a large-scale disaster it is often necessary to co-ordinate recovery efforts, which means listings of inventory and requirements, welfare information, etc. need to be broadcast.

You can broadcast this by voice over radio… very slowly!

You can put a spreadsheet on a USB stick and drive it there. You can deliver photos that way too. During an emergency event, however, roads may be impassable, or they may be congested. If the regular communications channels are down, how does one get such files across town quickly?

Amateur radio requires operators who have undergone training and who hold current apparatus licenses, but the service does permit the transmission of digital data (for Standard and Advanced licensees), with encryption if needed ("intercommunications when participating in emergency services operations or related training exercises").

Amateur radio is, by its nature, experimental. Many different mechanisms have been developed through experimentation for intercommunication over the amateur radio bands using digital techniques.

Morse code

The oldest by far is commonly known as “Morse code”, and while it is slower than voice, it requires simpler transmitting and receiving equipment, and concentrates the transmitted power over a very narrow bandwidth, meaning it can be heard reliably at times when more sophisticated modes cannot. However, not everybody can send or receive it (yours truly included).

I won’t dwell on it, as there are more practical mechanisms for transmitting lots of data, but I’ve included it for completeness. I will point out though that, due to its simplicity, it has practically no latency, and thus can be faster than SMS.

Radio Teletype

Okay, there are actually quite a few modes that can be described in this manner, and I’ll use this term to refer to the family of modes. Basically, you can think of it as two dumb terminals linked via a radio channel. When you type text into one, that text appears on the other in near real-time. The latency is thus very low, on par with Morse code.

The earliest of these is the RTTY mode, but more modern incarnations of the same idea include PSK31.

These are normally used as-is, but with some manual copying and pasting of text at each end, it is possible to encode other forms of data as short runs of text and send files as short hand-crafted "packets", which are then deconstructed and decoded by hand at the far end.

This can be automated to remove the human error component.
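
As a rough illustration of what that automation might look like — the packet format here is entirely my own invention for the example, not any established standard — the sender base64-encodes the file and splits it into short numbered lines with a checksum, and the receiver reverses the process:

import base64
import hashlib

# Hypothetical sketch: turn a binary file into short, numbered text
# "packets" that fit a radioteletype-style text channel, then reassemble.

def make_packets(data, chunk_size=48):
    text = base64.b64encode(data).decode("ascii")
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    digest = hashlib.sha256(data).hexdigest()[:8]
    return [f"FILE {len(chunks)} {digest}"] + \
           [f"{n:04d} {c}" for n, c in enumerate(chunks, 1)]

def reassemble(lines):
    _, count, digest = lines[0].split()
    assert len(lines) - 1 == int(count), "missing packets"
    body = "".join(line.split(" ", 1)[1] for line in sorted(lines[1:]))
    data = base64.b64decode(body)
    assert hashlib.sha256(data).hexdigest()[:8] == digest, "corrupt transfer"
    return data

packets = make_packets(b"WATER: 200L at station; NEEDED: blankets x30")
print("\n".join(packets))
print(reassemble(packets))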

The method is slow, but these radioteletype modes are known for being able to “punch through” poor signal conditions.

When I was studying web design back in 2001, we were instructed to keep all photos below 30kB in size. At the time, dial-up Internet was common, and loading times were a prime consideration.

Thus instead of posting photos like this, we had to shrink them down, like this. Yes, some detail is lost, but it is good enough to get an “idea” of the situation.

The former photo is 2.8MB, the latter is 28kB. Via the above contrived transmission system, it would take about 20 minutes to transmit.

The method would work well for anything that is text, particularly simple spreadsheets, which could be converted to Comma Separated Values to strip out all but the most essential information, bringing file sizes down into realms that allow transmission times in the order of 5 minutes. Text also compresses well, so in some cases the transmission time can be reduced further.
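
To put numbers on those claims, here’s a back-of-the-envelope calculation. The throughput figures below are my assumptions — effective rates after encoding overhead, not measurements — with the ~23 bytes/second figure chosen because it reproduces the "about 20 minutes" estimate above:

# Rough transmission-time estimates.  Rates are assumed effective
# throughputs (after base64/framing overhead), not measured values.

ASSUMED_RATES_BPS = {                     # bytes per second
    "keyboard mode (RTTY/PSK31-ish)": 23,
    "HamDRM, slow setting": 86,
    "HamDRM, fast setting": 795,
}

def transfer_minutes(size_bytes, rate_bps):
    return size_bytes / rate_bps / 60.0

for mode, rate in ASSUMED_RATES_BPS.items():
    print(f"28kB photo via {mode}: {transfer_minutes(28_000, rate):.1f} min")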

To put this into perspective, a drive from The Gap where that photo was taken, into the Brisbane CBD, takes about 20 minutes in non-peak-hour normal circumstances. It can take an hour at peak times. In cases of natural disaster, the roads available to you may be more congested than usual, thus you can expect peak-hour-like trip times.

Radio Facsimile and Slow Scan Television

This covers a wide variety of modes, ranging from the ancient, like Hellschreiber (which has its origins with the German military back in World War II), through the various analogue slow-scan television modes, to modern digital slow-scan television.

This allows the transmission of photos and visual information over radio. Some systems, like EasyPAL and its ilk (based on HamDRM, a variant of Digital Radio Mondiale), are in fact general-purpose modems for transmitting files, and thus can transmit non-graphical data too.

Transmit times can vary, but the analogue modes take between 30 seconds and two minutes depending on quality. For the HamDRM-based systems, transmit speeds vary between 86Bps and 795Bps depending on the settings used.

Packet Radio

Packet radio is the concept of implementing packet-switched networks over radio links. There are various forms of this, the most common in amateur radio being PACTOR, WINMOR, and the 300-baud AFSK, 1200-baud AFSK and 9600-baud FSK packet modes.

300-baud AFSK is normally used on HF links, and hails from experiments using surplus Bell 103 modems modified to work with radio. Similarly, on VHF and UHF FM radio, experiments were done with surplus Bell 202 modems, giving rise to the 1200-baud AFSK mode.

The 9600-baud FSK mode was the invention of James Miller G3RUH, and was one of the first packet radio modes actually developed by radio amateur operators for use on radio.

These are all general-purpose data modems, and while they can be used for radioteletype applications, they are designed with computer networking in mind.

They feature facilities like automatic retransmission of lost messages, and in some cases support forward error correction. PACTOR and WINMOR are in fact used with the Winlink radio network, which provides email services.

The 300-baud, 1200-baud and 9600-baud versions generally use a networking protocol called AX.25. By configuring a station with multiple such "terminal node controllers" (modems) connected, and appropriate software, it can operate as a router, relaying traffic received via one radio channel to a station connected via another, or to non-AX.25 stations on Winlink or the Internet.

It is well suited to automatic stations, operating without human intervention.
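
For the curious, here’s roughly what AX.25 looks like at the byte level: a minimal sketch of an unnumbered-information (UI) frame, wrapped in KISS framing as you’d send it to a serial-attached TNC. The callsigns are hypothetical and error handling is omitted; the TNC itself adds the HDLC flags and checksum:

# Minimal sketch: build an AX.25 UI frame and wrap it in KISS framing
# for a serial-attached TNC.  Callsigns below are made-up examples.

FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD

def ax25_address(callsign, ssid, last):
    """Encode callsign+SSID as a 7-byte AX.25 address field."""
    addr = bytes(ord(c) << 1 for c in callsign.upper().ljust(6)[:6])
    return addr + bytes([0x60 | ((ssid & 0x0F) << 1) | (1 if last else 0)])

def ax25_ui_frame(dest, src, payload):
    """UI frame: addresses, control 0x03 (UI), PID 0xF0 (no layer 3)."""
    return (ax25_address(dest, 0, last=False)
            + ax25_address(src, 0, last=True)
            + b"\x03" + b"\xF0" + payload)

def kiss_wrap(frame):
    """Escape reserved bytes and delimit the frame for KISS port 0."""
    escaped = frame.replace(bytes([FESC]), bytes([FESC, TFESC])) \
                   .replace(bytes([FEND]), bytes([FESC, TFEND]))
    return bytes([FEND, 0x00]) + escaped + bytes([FEND])

frame = ax25_ui_frame("CQ", "VK4XYZ", b"Test message")
print(kiss_wrap(frame).hex())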

AX.25 packet and PACTOR I are open standards; the later PACTOR modems are proprietary devices produced by SCS in Germany.

AX.25 packet is capable of transmit speeds between 15Bps (300 baud) and 1kBps (9600 baud). PACTOR varies between 5Bps and 650Bps.

In theory, it is possible to develop new modems for transmitting AX.25, the HamDRM modem used for slow-scan television and the FDMDV modem used in FreeDV being good starting points as both are proven modems with good performance.

These simply require an analogue interface between the computer sound card and radio, and appropriate software.  Such an interface made to link a 1200-baud TNC to a radio could be converted to link to a low-cost USB audio dongle for connection to a computer.

If someone is set up for 1200-baud packet, setting up for these other modes is not difficult.

High speed data

Going beyond standard radios, amateur radio also has some very high-speed data links available. D-Star Digital Data operates on the 23cm microwave band and can potentially transmit files at up to 16KBps, which approaches ADSL-lite speeds. Transceivers such as the Icom ID-1 provide this via an Ethernet interface for direct connection to a computer.

General Electric have a similar offering for industrial applications that operates on various commercial bands, some of which can reach amateur frequencies, thus would be usable on amateur bands. These devices offer transmit speeds up to 8KBps.

A recent experiment by amateurs using off-the-shelf 50mW 433MHz FSK modules and Realtek-based digital TV tuner receivers produced a high-speed data link capable of delivering data at up to 14KBps using a wideband (~230kHz) radio channel on the 70cm band.  They used it to send high-definition photos from a high-altitude balloon.

The point?

We’ve got a lot of tools for getting a message through, and collectively, some 140 years of experience at our disposal. In an emergency situation, that means we have a lot of different options; if one doesn’t work, we can try another.

No, a 1200-baud VHF packet link won’t stream 4k HD video, but it has minimal latency and will take less than 20 minutes to transmit a 100kB file over distances of 10km or more.

A 1kB email will be at the other end before you can reach for your car keys.  Further experimentation and development means we can only improve.  Amateur radio is far from obsolete.