A little while back I decided to try out Mastodon, deploying my own instance running as a VM on my own hardware. This was primarily done to act as a testing ground for experimenting with integrating with it, but also as a means of keeping up with the news.
The latter is particularly important, as I no longer have the radio on all the time. I might hear a news item in the morning, but after the radio switches off, I’m unlikely to turn it back on until the next working day. A lot of news outlets moved to Twitter over the past decade, but with that site in its death throes, the ActivityPub ecosystem is looking like a safer bet.
Not many outlets are officially using this newer system yet. A few do publish directly to Mastodon/ActivityPub — examples being Rolling Stone, The Markup (who run their own instance), STAT, The Conversation AU/NZ and OSNews. Some outlets aren’t officially posting on ActivityPub but are nonetheless visible via bridges from RSS (e.g. Ars Technica), and others are proxies of those outlets’ Twitter accounts (e.g. Reuters, Al Jazeera, The Guardian, Slashdot). Others are there, but it’s not clear how the material is being mirrored or if they’re official.
As you can gather, a big chunk of who I follow is actually news outlets, or humour. There are a few people on my “follow” list who are known for posting various humour pieces from elsewhere, and I often “boost” (re-post) their content.
Meta (who run Facebook) have made noises about joining in with their own Twitter-clone in the ActivityPub fediverse. I wouldn’t mind this so much — the alternatives to them doing this are: (1) the rest of us needing dozens of social media accounts to keep in touch with everybody, (2) relying on the good will of some mega-site to connect us all, or (3) forgoing being in touch altogether.
I tried option (1) in the early part of this century, and frankly I’m over it. Elon Musk dreams of Twitter becoming option (2), but I think the chances of that happening are Buckley’s and none. (3) is not realistic; we’re social beings.
Some of these instances will be ad-supported, and I guess that’s a compromise we may have to live with. Servers need electricity and Internet connectivity to run, and these are not free. A bigger cost of running big social networks, though, is managing the meat-ware side of the network — moderation, legal teams to decide how moderation should be applied, handling take-down notices… etc.
ActivityPub actually supports flagging content so the post is not “listed” (indexed by instances’ search engines), private posts (cannot be boosted, visible to followers only), even restricting visibility to just those mentioned specifically. I guess there’s room for one more: “non-commercial use only” — commercial instances could then decide whether to forgo the advertising on that post, or filter the post out.
ActivityPub posting privacy settings on Mastodon
I did hear rumblings that the EU was likely to pass some laws requiring a certain level of interoperability between social networks, which ActivityPub could in fact be the basis of.
Some worry about another Eternal September moment — a repeat of the time when AOL disgorged its gaggle of novice Internet users on an unsuspecting Usenet system. Usenet users prior to AOL opening up in 1993 only had to deal with similar shenanigans once a year around September when each new batch of first year uni students would receive their Internet access accounts.
I’m not sure the linking-in of a site like Facebook or Tumblr (who have also mentioned joining the Fediverse) is all that big a deal — Mastodon lets you block whole domains if you so choose, and who says everybody on a certain site is going to cause trouble?
Email is a federated system, always has been, and while participation as a small player is more complicated than it used to be, it is still doable. Big players like Microsoft and Google haven’t killed off email (even with the former doing their best to do so with sub-par email clients and servers). Yes, we have a bigger spam problem than we had back in the 90s, but keeping the signal-to-noise ratio up to useful levels is not impossible, even for mere mortals.
We do have to be mindful of the embrace-extend-break game that big businesses like to play with open protocols, but I think Mastodon gGmbH’s status as a not-for-profit and a reference implementation should help here.
I’d rather throw my support behind a system that allows us all to interoperate, managing whatever misbehaviour arises on a case-by-case basis, than see us each develop our own little private islands. The info-sec press seem to have been quick to jump ship from Twitter to Mastodon. The IT press is taking a little longer, but there’s a small but growing group. I think the journalism world is going to be key to making this work and ensuring there’s good-quality content to drown out the low-quality noise. If big players like Meta joining in help push this along, I think this is worth encouraging.
The other day I had a bit of a challenge to deal with. My workplace makes embedded data collection devices which are built around the Texas Instruments CC2538 SoC (internal photos visible here) and run OpenThread. To date, everything we’ve made has been an externally-powered device, running off either DC power (9-30V) or mains (120/240V 50/60Hz AC). CC2592 range extender support was added to OpenThread for this device.
The CC2538, although very light on RAM (32KiB), gets the job done with some constraints. Necessity threw us a curve-ball the other day: we wanted a device that ran off a battery. That meant going into sleep mode periodically — deep sleep! The CC2538 has a number of operating modes:
running mode (pretty much everything turned on)
light sleep mode (clocks, CPU and power stays on, but we pause a few peripherals)
deep sleep mode — this comes in four flavours
PM0: Much like light-sleep, but we’ve got the option to pause clocks to more peripherals
PM1: PM0, plus we halt the main system clock (32MHz crystal or 16MHz RC), halting the CPU
PM2: PM1 plus we power down the bottom 16KiB of RAM and some other internal peripherals
PM3: PM2 plus we turn off the 32kHz crystal used by the sleep timer and watchdog.
We wanted PM2, which meant while we could use the bottom 16KiB of RAM during run-time, the moment we went to sleep, we had to forget about whatever was kept in that bottom 16KiB RAM — since without power it would lose its state anyway.
The challenge
Managing RAM in a device like this is always a challenge. malloc() is generally frowned upon, however in some cases it’s a necessary evil. OpenThread internally uses mbedTLS, and that relies on having a heap. It can use one implemented by OpenThread, or one provided by you. Our code also uses malloc for some things, notably short-term tasks like downloading a new configuration file or buffering serial traffic.
The big challenge is that OpenThread itself uses a little over 9KiB of RAM, and we have a 4KiB stack — of the 16KiB that stays powered, that leaves us under 3KiB. That’s bare-bones OpenThread. If you want JOINER support, for joining a mesh network, that pulls in DTLS, which by default will tell OpenThread to static-allocate a 6KiB buffer.
9KiB becomes about 15KiB; plus the stack, that’s 19KiB. This is bigger than 16KiB — the linker gives up.
Using heap memory
There is a work-around that gets things linking; you can build OpenThread with the option OPENTHREAD_CONFIG_HEAP_EXTERNAL_ENABLE — if you set this to 1, OpenThread forgoes its own heap and just uses malloc / free instead, implemented by your toolchain.
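In practice that’s a one-liner in the project’s OpenThread core config header (the header name varies by platform; this is just a sketch):

/* In the project's openthread-core-config.h, or whatever header the
 * platform build points OPENTHREAD_PROJECT_CORE_CONFIG_FILE at. */
#define OPENTHREAD_CONFIG_HEAP_EXTERNAL_ENABLE 1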
OpenThread builds and links in 16KiB RAM, hooray… but then you try joining, and NoBufs is the response. We’re out of RAM. Moving things to the heap just kicked the can down the road: we still need that 6KiB, but we only have under 3KiB to give it. Not enough.
We have a problem in that the toolchain we use is built on newlib, and while it implements malloc / free / realloc, it does so with a primitive called _sbrk(). We define a pointer initialised at the top of our .bss, and whenever malloc needs more memory for the heap, it calls _sbrk(N); we grab the value of our pointer, add N to it, and return the old value. Easy.
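For illustration, a minimal sketch of that kind of _sbrk() — the _heap_start symbol name and the lack of bounds-checking are simplifications of mine, not our production code:

/* Simplified newlib _sbrk(): hand out memory from the top of .bss upward.
 * `_heap_start` is assumed to come from the linker script; a real version
 * should also check for collision with the stack before handing out RAM. */
extern char _heap_start;
static char *heap_ptr = &_heap_start;

void *_sbrk(int incr)
{
    char *prev = heap_ptr;
    heap_ptr += incr;
    return prev;
}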
Except… we don’t just have one memory pool now, we have two — one of which we cannot use all the time. OpenThread, via mbedTLS, also winds up calling malloc() very early in the initialisation (as early as the otInstanceInitSingle() call to initialise OpenThread). We need that block of RAM to wind up in the upper 16KiB that stays powered on — so we can’t start at address 0x2000:0000 and just skip over .data/.bss when we run out.
malloc() will also get mighty confused if we suddenly hand it an address that’s lower than the one we handed out previously. We can’t go backwards.
I looked at replacing malloc() with a dual-pool-aware version, but newlib is hard-coded in a few places to use its own malloc() and not a third-party one. picolibc might let us swap it out, but getting that integrated looked like a lot of work.
So we’re stuck with newlib’s malloc() for better or worse.
The hybrid approach
One option: we can’t control which malloc the newlib functions use, so let newlib’s malloc with _sbrk() manage the upper heap, and wrap that malloc with our own creation that we pass to OpenThread. We implement otPlatCAlloc and otPlatFree — which are essentially calloc and free wrappers.
The strategy is simple: first try the normal calloc; if that returns NULL, then use our own.
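A sketch of what those wrappers might look like — the lowheap_* routines are my names for the custom lower-heap allocator fleshed out in the sections below, not anything from OpenThread or newlib:

#include <stdbool.h>
#include <stdlib.h>
#include <openthread/platform/memory.h>

/* Assumed helpers, built later in this post. */
void *lowheap_calloc(size_t aNum, size_t aSize);   /* lower-16KiB calloc     */
void  lowheap_free(void *aPtr);                    /* block arithmetic + release */
bool  lowheap_contains(const void *aPtr);          /* address in lower bank? */

void *otPlatCAlloc(size_t aNum, size_t aSize)
{
    void *ptr = calloc(aNum, aSize);        /* try newlib's upper heap first */

    if (ptr == NULL)
        ptr = lowheap_calloc(aNum, aSize);  /* fall back to the lower 16KiB  */

    return ptr;
}

void otPlatFree(void *aPtr)
{
    if (aPtr == NULL)
        return;

    if (lowheap_contains(aPtr))
        lowheap_free(aPtr);                 /* ours */
    else
        free(aPtr);                         /* newlib's */
}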
Re-purposing an existing allocator
The first rule of software engineering: don’t write code you don’t have to. So naturally I went looking for options.
Page upon page of “No man, don’t do it!!!”
jemalloc looked promising at first — it is the FreeBSD malloc() — but therein lies a problem: it’s a pretty complicated piece of code aimed at x86 computers with megabytes of RAM minimum. It uses uint64_ts in a lot of places and seemed like it would have a pretty high overhead on a little CC2538.
I tried avr-libc’s malloc — it’s far simpler, and is actually a free-list implementation like newlib’s version, but there is a snag. See, AVR microcontrollers are 8-bit beasts; they don’t care about memory alignment. But the Cortex M3 does! avrlibc_malloc did its job and handed back a pointer, but then I wound up in a HARDFAULT condition because mbedTLS tried to access a 32-bit word that was offset by a few bytes.
A simple memory allocator
The approach I took was a crude one. I would allocate memory in fixed-sized “blocks”. I first ran the OpenThread code under a debugger and set a break-point on malloc to see what sizes it was asking for — mostly blocks around the 128 byte mark, sometimes bigger, sometimes smaller. 64-byte blocks would work pretty well, although for initial testing, I went the lazy route and used 8-byte blocks: uint64_ts.
In my .bss, I made an array of uint8_ts; size equal to the number of 8-byte blocks in the lower heap divided by 4. This would be my usage bitmap — each block was allocated two bits, which I accessed using bit-banding: one bit I called used, and that simply reported the block was being used. The second was called chained, and that indicated that the data stored in this block spilled over to the next block.
To malloc some memory, I’d simply look for a string of free blocks big enough. When it came to freeing memory, I simply started at the block referenced, and cleared bits until I got to a block whose chained bit was already cleared. Because I was using 8-byte blocks, everything was guaranteed to be aligned.
8-byte blocks in 16KiB (2048 blocks) wound up with 512 bytes of usage data. As I say, using 64-byte blocks would be better (only 256 blocks, which fits in 64 bytes), but this was a quick test. The other trick would be to use the very first few blocks to store that bitmap (for 64-byte blocks, we only need to reserve the first block).
…and treat usage like an array; where element 0 was the usage data for the very first block down the bottom of SRAM.
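In code, the bitmap and its accessors might look like this — I’ve used plain shift-and-mask operations here for clarity; the actual experiment used the Cortex-M3’s bit-band aliases to the same effect:

#include <stdbool.h>
#include <stdint.h>

#define LOWHEAP_BLOCK_SZ    8u                                 /* bytes/block */
#define LOWHEAP_SIZE        (16u * 1024u)                      /* lower bank  */
#define LOWHEAP_NR_BLOCKS   (LOWHEAP_SIZE / LOWHEAP_BLOCK_SZ)  /* 2048 blocks */

/* Two bits per block (used + chained) -> 512 bytes of usage data. */
static uint8_t lowheap_usage[LOWHEAP_NR_BLOCKS / 4];

static bool lowheap_is_used(uint16_t block)
{
    return lowheap_usage[block >> 2] & (1u << ((block & 3u) * 2u));
}

static bool lowheap_is_chained(uint16_t block)
{
    return lowheap_usage[block >> 2] & (2u << ((block & 3u) * 2u));
}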
To implement a memory allocator, I needed five routines:
one that scanned through, and told me where the first free block was after a given block number (returning the block number) — static uint16_t lowheap_first_free(uint16_t block)
one that, given the start of a run of free blocks, told me how many blocks following it were free — static uint16_t lowheap_chunk_free_length(uint16_t block, uint16_t required)
one that, given the start of a run of chained used blocks, told me how many blocks were chained together — static uint16_t lowheap_chunk_used_length(uint16_t block)
one that, given a block number and count, would claim that number of blocks starting at the given starting point — static void lowheap_chunk_claim(uint16_t block, uint16_t length)
one that, given a starting block, would clear the used bit for that block, and if chained was set; clear it and repeat the step on the following block (and keep going until all blocks were freed) — static void lowheap_chunk_release(uint16_t block)
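Three of those are simple enough to sketch here, with lowheap_clear_bits being an assumed helper that clears both bits for a block:

static uint16_t lowheap_first_free(uint16_t block)
{
    while ((block < LOWHEAP_NR_BLOCKS) && lowheap_is_used(block))
        block++;
    return block;
}

static uint16_t lowheap_chunk_free_length(uint16_t block, uint16_t required)
{
    uint16_t len = 0;
    /* stop counting once we know the run is big enough */
    while ((block + len < LOWHEAP_NR_BLOCKS) && (len < required)
           && !lowheap_is_used(block + len))
        len++;
    return len;
}

static void lowheap_chunk_release(uint16_t block)
{
    bool chained;
    do {
        chained = lowheap_is_chained(block);
        lowheap_clear_bits(block);   /* assumed helper: clear used + chained */
        block++;
    } while (chained && (block < LOWHEAP_NR_BLOCKS));
}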
From here, implementing calloc was simple:
first, try the newlib calloc and see if that succeeds; return the pointer we’re given if it’s not NULL.
if we’re still looking for memory, round up the memory requirement to the block size.
initialise our starting block number (start_nr) by calling lowheap_first_free(0) to find the first block; then in a loop:
find the size of the free block (chunk_len) by calling lowheap_chunk_free_length(start_nr, required_blocks).
If the returned size is big enough, break out of the loop.
If not big enough, advance start_nr past the too-small free run and the used chunk that follows it: add chunk_len plus the return value from lowheap_chunk_used_length(start_nr + chunk_len).
Stop iterating if start_nr is equal to or greater than the total number of blocks in the heap.
If start_nr winds up being past the end of the heap, fail with errno = ENOMEM and return NULL.
Otherwise, we’re safe, call lowheap_chunk_claim(start_nr, required_blocks); to reserve our space, zero out the actual blocks allocated, then return the address of the first block cast to void*.
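As a sketch, the lower-heap side of that calloc might look like the following — the newlib-first step lives in the otPlatCAlloc wrapper shown earlier, lowheap_base is an assumed symbol for the bottom of SRAM, and lowheap_chunk_used_length / lowheap_chunk_claim behave as per the list above:

#include <errno.h>
#include <stdint.h>
#include <string.h>

extern uint8_t lowheap_base[];   /* assumed: start of the lower 16KiB bank */

void *lowheap_calloc(size_t nmemb, size_t size)
{
    size_t bytes = nmemb * size;       /* overflow check omitted for brevity */
    uint16_t required = (uint16_t)((bytes + LOWHEAP_BLOCK_SZ - 1)
                                   / LOWHEAP_BLOCK_SZ);
    uint16_t start_nr = lowheap_first_free(0);

    while (start_nr < LOWHEAP_NR_BLOCKS) {
        uint16_t chunk_len = lowheap_chunk_free_length(start_nr, required);
        if (chunk_len >= required)
            break;                     /* found a big enough free run */

        /* skip this too-small free run and the used chunk that follows */
        uint16_t next = start_nr + chunk_len;
        if (next >= LOWHEAP_NR_BLOCKS) {
            start_nr = LOWHEAP_NR_BLOCKS;
            break;
        }
        start_nr = next + lowheap_chunk_used_length(next);
    }

    if (start_nr >= LOWHEAP_NR_BLOCKS) {
        errno = ENOMEM;                /* out of lower-heap space too */
        return NULL;
    }

    lowheap_chunk_claim(start_nr, required);
    void *ptr = &lowheap_base[(size_t)start_nr * LOWHEAP_BLOCK_SZ];
    memset(ptr, 0, (size_t)required * LOWHEAP_BLOCK_SZ);
    return ptr;
}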
Implementing free was no challenge either: if the pointer was above our heap, we simply passed it to newlib’s free; if it was in our heap space, we did some arithmetic to figure out which block that address was in, and passed that to lowheap_chunk_release().
I won’t publish the code because I didn’t get it working properly in the end, but I figured I’d put the notes here on how I put it together to re-visit in the future. Maybe the thoughts might inspire someone else. 🙂
It was about this time that the news was abuzz with talk of tanks lining up on Russia’s western border, then crossing over into Ukraine in a conflict that was meant to be over in a little over a fortnight.
Time must move slowly for Vladimir Putin — 12 months in real time and they’re still at it! That works out to one “Putin day” equating to approximately 26 real days for the rest of us. (Some would argue the conflict actually began in 2014 — I guess there’s some merit in that opinion, but things really began heating up 12 months ago.)
I’ve actually learned a lot about the place. Okay, “a lot” is a relative measure, what I actually know could be scribbled onto the back of a postage stamp with a thick permanent marker, however I am picking up tidbits here and there.
12 months ago, I actually had to frequently correct my spelling — I kept missing the “i” (i.e. “Ukrane”). I wasn’t aware Chernobyl was actually on Ukraine’s northern border with Belarus (or that Belarus was even their northern neighbour). I might’ve heard of Moldova, but wasn’t sure where it was, I had not heard of the disputed territory of Transnistria. Nor did I realise they shared their western border with Poland.
Over the last 12 months I’ve slowly become a bit more familiar with where some of their more major cities are: Lviv in the west, the port cities of Odesa, Mykolaiv (and the general Kherson area — watermelon territory) and we heard lots about Mariupol, particularly the steelworks there. Dnipro and Luhansk in the east, Kharkiv and Sumy in the north-east… Kyiv up in the north.
Point me to a blank map 12 months ago and I wouldn’t have had much idea where those places were, but I have a vague idea now.
I could spot Cyrillic writing before this conflict but couldn’t read any of it. Today while I can’t identify the language, I’m starting to be able to pick out individual letters and recognise the odd word. Various news articles have covered various aspects of the Ukrainian culture. Of course, the before-and-after photos that pop up from time to time showing what was, and what’s just been pulverised by Russian shelling reveal a lot of ornate buildings that are now little more than rubble.
Okay, so little things… very basic facts. The depressing thing is it’s taken a bloody war to even gain a modest familiarity with these things. I have a fear of flying and have no passport, so there’s practically zero chance of me visiting that part of the world.
I guess there was no real necessity for me to really understand the geography of the area pre-conflict, it would have been a personal interest thing if I had done so. Whatever happens though, I think the rest of the world will have to be there ready to help pick up the pieces and help Ukraine re-build.
I wouldn’t be doing business with any businesses based in Belarus, Democratic People’s Republic of Korea (aka North Korea), Eritrea, Mali, Nicaragua, Russia or Syria… and I’d think twice about “no-limits” Russia supporter China.
If the governments in those places change their tune on this conflict, then we can re-evaluate, but it’s a fact that supporting business there helps support that country’s government, which only positively reinforces their current behaviour.
Sadly, with North Korea firing test missile after test missile into the sea, and China eyeing off Taiwan with jealous eyes (and its proximity to the Mariana Trench — it’s not just about chip fabrication), one can only wonder what the next few decades have in store for us.
The really scary thing is I don’t think we in Australia can count on our allies. The United Kingdom is an utter basket-case post-Brexit, and the United States is looking less united with every passing day as the society there slowly edges towards a race-fuelled civil war.
Methinks we need to start looking at doing things on our own soil — the “global economy” looks like it’ll be taking a back seat for a little while!
So, last time, I got the headset I’m likely to use at horse endurance rides and other “quiet” emergency comms events 90% finished. The audio quality (at least on receive) sounds great. From what I can tell between hand-helds, the transmit audio sounds good. It’s quite comfortable to wear for extended periods, and while my modifications do muffle sound slightly, it’s perfectly workable.
There are just a couple of niggles:
the headset uses a dynamic microphone, thus is not compatible (microphone-wise) with the other radio interfaces I have
I used solid-core CAT5 which is sure to develop a fault at some inconvenient moment
the cable to the connector is way too short
CAT5 was fine for a proof-of-concept, but really, I want a stranded cable for this. Being a dynamic microphone, it’s not necessary for it to be screened, and in fact, we should not be using unbalanced coaxial-type cable like we’d use on an electret microphone. That brings up another problem: interfaces designed for an electret will not work with this microphone — the impedance is too low and they’ll supply a bias current which needs to be blocked for dynamic microphones.
Right now I use a DIN-5 connector, but this is misleading — it implies it’ll connect to any radio interface with a DIN-5, and that my electret headsets will plug into its interfaces. At most I can listen with such a set-up, but not talk. The real answer is to use a completely different connector to avoid getting them mixed up. I decided whatever I used, it should be relatively common: exotic connectors are a pain to replace when they break. My criteria are as follows:
As discussed, common, readily available.
Cheap
Able to carry both speaker and microphone audio in a single connector so we don’t get speaker and microphone mixed up
Polarised, so we can’t get a connector around the wrong way
Ruggedised
Panel and cable mount versions available
Contenders I was considering were the 240° DIN-5 (I bought some by mistake once), 5-pin XLR and mini-XLRs, and the humble “CB microphone” connector. Other options I’ve used in the past include the DIN-7/DIN-8 and DE15HD (aka “VGA” connectors). DIN-7/DIN-8s can be fiddly to solder, and are overkill for the number of contacts. Same with DE15HDs — and the DE15HDs do not like moisture!
In the end, I decided the CB microphone connector seemed like my best bet. Altronics and Jaycar both sell these. I don’t know what the official name of these things is. They were common on radio equipment made between the mid-70s and the late 80s — my Yaesu FT-290R-II uses an 8-pin connector, my Kenwood TS-120S uses a 4-pin. They’re pretty rugged, feature a screw-locking ring, and have beefy contacts for passing current. Usually the socket is available as a panel-mount only, but I found Altronics sell a cable-mount version (and today I notice Jaycar do too). If someone knows an RS/Mouser/Element14/Digikey link for these, I’ll put it here.
The other big decision was how to wire the connector up. As this is “my own” standard, I can use whatever I like, but for the sake of future-me, I’ll document what I decided, as I’d forgotten how I wired up DIN-5s before. I did have it written down, but misplaced that scrap of paper. I ended up quickly opening up a connector and taking this photo to refresh my memory.
A photo of an actual headset connector, showing the connections.
To wit, I therefore shall commit to public record, exactly how I wired this thing, and propose a standard for dynamic microphone headsets.
The current (left) DIN-5 pin-out, and my proposed “CB microphone” pin-out — both looking into socket
Some will point out that yes, I’m creating yet another standard. In my defence, mine is aimed at stereo headsets, which traditionally have been two separate 3.5mm phone jacks. Very easy to mix up. Some might argue that there exists a new standard in the form of the 4-pole TRRS connector, however not all interfaces are compatible — at the time when I devised the DIN-5 connector, I was using a Nokia 3310 which did not like having the microphone and speaker connected to a common pin.
Keeping them separate also allows me to do balanced audio tricks for interfacing electret microphones with radios like the Yaesu FT-857D which expect a dynamic microphone. For this, I need 5 contacts — left/right speaker, speaker common, and two for the microphone. There are 5-pole TRRRS connectors, the TP-105 being one such example — but they’re not common outside of the aviation industry.
For the cabling, I’ve cut the CAT5 cabling shorter and spliced some 4-wire telephone ribbon onto each side. That makes the headset cable a comfortable length. I began by first soldering the “CB microphone” connector, choosing colours for the speaker and microphone connections and wiring it up in a “loop”, before cutting the far end of the loop, stripping back insulation and tinning the wires. I used a multimeter to decide which were the “left” and “right” connections — then these were spliced and covered with some heat shrink.
After a quick test on the radio, I sealed it up using some hot-melt glue. This should prevent the solder joints from flexing and thus prolong the life of the connection.
Headset, with new connectors and cabling.
I might look at a small J-FET or BJT adaptor cable that will allow me to use this headset in place of an electret microphone headset — as it’d be nice to be able to just plug this into the tablet to listen to music or use with VoIP. I’ve got extra line-mounted sockets for that. Not sure if it’s viable to go the other direction — I’d need a small battery to power the electret I think, that or a bypass switch on the PTT cable to allow me to power an electret microphone.
I keep a few copies of my music collection. Between three of my machines and two USB drives (one HDD, one SSD), I keep a copy of the lossless archive. This is a recent addition since (1) I’ve got the space to do it, and (2) some experimentation with Ogg/Vorbis metadata corrupting files necessitated me re-ripping everything, so I thought I’d save future-me the hassle by keeping a lossless copy on-hand.
Actually, this is not the first time I’ve re-ripped the whole collection. The previous re-rip was done back in 2005 when I moved from MP3 to Ogg/Vorbis (and ditched a lot of illegally obtained MP3s while I was at it — leaving me with just the recordings that I had licenses for). Back then, storing a lossless copy of every file as I re-ripped everything would have been prohibitively expensive in terms of required storage. Now that even my near-10-year-old laptop sports a 2TB SSD, this isn’t a problem.
The working copy that I generally do my listening from uses the Ogg/Vorbis format today. I haven’t quite re-ripped everything, there’s a stack of records that are waiting for me to put them back on the turntable … one day I’ll get to those … but every CD, DVD and digital download (which were FLAC to begin with) is losslessly stored in FLAC.
If I make a change to the files, I really want to synchronise my changes between the two copies. Notably, if I change the file data, I need to re-encode the FLAC file to Ogg/Vorbis — but if I simply change its metadata (e.g. cover art or tags), I merely need to re-write the metadata on the destination file and can save some processing cycles.
The thinking is, if I can “fingerprint” the various parts of the file, I can determine what bits changed and what to convert. Obviously when I transcode the audio data itself, the audio data bytes will bear little resemblance to the ones that were fed into the transcoder — that’s fine — I have other metadata which can link the two files. The aim of this exercise is to store the hashes for the audio data and tags, and detect when one of those things changes on the source side, so the change can be copied across to the destination.
Existing option: MD5 hash
FLAC actually does store a hash of its source input as part of the stream metadata. It uses the MD5 hashing algorithm which, while good enough for a rough check and certainly better than linear codes like CRC, is really quite dated as a cryptographic hash.
I’d prefer to use SHA-256 for this since it’s generally regarded as being a “secure” hash algorithm that is less vulnerable to collisions than MD5 or SHA-1.
Naïve approach: decode and compare
The naïve approach would be to just decode to raw audio data and compare the raw audio files. I could do this via a pipe to avoid writing the files out to disk just to delete them moments later. The following will output a raw file:
RC=0 stuartl@rikishi ~ $ time flac -d -f -o /tmp/test.raw /mnt/music-archive/by-album-artist/Traveling\ Wilburys/The\ Traveling\ Wilburys\ Collection/d1o001t001\ Traveling\ Wilburys\ -\ Handle\ With\ Care.flac
flac 1.3.4
Copyright (C) 2000-2009 Josh Coalson, 2011-2016 Xiph.Org Foundation
flac comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
welcome to redistribute it under certain conditions. Type `flac' for details.
d1o001t001 Traveling Wilburys - Handle With Care.flac: done
real 0m0.457s
user 0m0.300s
sys 0m0.065s
On my laptop, it takes about 200~500ms to decode a single file to raw audio. Multiply that by 7624 files and you get something that will take nearly an hour to complete. I think we can do better!
Alternate naïve approach: Copy the file then strip metadata
Making a copy of the file without the metadata is certainly an option. Something like this will do that:
RC=0 stuartl@rikishi ~ $ time ffmpeg -y -i \
/mnt/music-archive/by-album-artist/Traveling\ Wilburys/The\ Traveling\ Wilburys\ Collection/d1o001t001\ Traveling\ Wilburys\ -\ Handle\ With\ Care.flac \
-c:a copy -c:v copy -map_metadata -1 \
/tmp/test.flac
… snip lots of output …
Output #0, flac, to '/tmp/test.flac':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: mjpeg (Progressive), yuvj420p(pc, bt470bg/unknown/unknown), 1000x1000 [SAR 72:72 DAR 1:1], q=2-31, 90k tbr, 90k tbn, 90k tbc (attached pic)
Stream #0:1: Audio: flac, 44100 Hz, stereo, s16
Side data:
replaygain: track gain - -8.320000, track peak - 0.000023, album gain - -8.320000, album peak - 0.000023,
Stream mapping:
Stream #0:1 -> #0:0 (copy)
Stream #0:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 1 fps=0.0 q=-1.0 Lsize= 24671kB time=00:03:19.50 bitrate=1013.0kbits/s speed=4.77e+03x
video:114kB audio:24549kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.033000%
real 0m0.139s
user 0m0.105s
sys 0m0.032s
This is a big improvement, but just because the audio blocks are the same does not mean the file itself won’t change in other ways — FLAC files can include “padding blocks” anywhere after the STREAMINFO block which will change the hash value without having any meaningful effect on the file content.
So this may not be as stable as I’d like. However, ffmpeg is on the right track…
Audio fingerprinting
MusicBrainz actually has an audio fingerprinting library whose fingerprints can be matched to a specific recording, and these are reasonably “stable” across different audio compression formats. Great for the intended purpose, but in this case it’s likely going to be computationally expensive since it has to analyse the audio in terms of frequency components, try to extract tempo information, etc. I don’t need this level of detail.
It may also miss that one file might, for example, be preceded by lots of silence — multi-track exports out of Audacity are a prime example. Audacity used to just export the multiple tracks “as-is” so you could re-construct the full recording by concatenating the files, but some bright-spark thought it would be a good idea to prepend the exported tracks with silence by default so that, if re-imported, their relative positions were “preserved”. Consequently, I’ve got some record rips that I need to fix because of the extra “silence”!
Getting hashes out of ffmpeg
It turns out that ffmpeg can output any hash you’d like of whatever input data you give it:
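I didn’t keep the original capture, but the invocation is along these lines — ffmpeg’s hash muxer decodes the mapped audio stream and hashes the raw samples, so both the stripped copy and the original tagged file report the same digest (hash values elided here):

RC=0 stuartl@rikishi ~ $ ffmpeg -i /tmp/test.flac -map 0:a -f hash -hash sha256 - 2>/dev/null
SHA256=…
RC=0 stuartl@rikishi ~ $ ffmpeg -i /mnt/music-archive/by-album-artist/Traveling\ Wilburys/The\ Traveling\ Wilburys\ Collection/d1o001t001\ Traveling\ Wilburys\ -\ Handle\ With\ Care.flac -map 0:a -f hash -hash sha256 - 2>/dev/null
SHA256=…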
Notice the hashes are the same, yet the first copy of the file we hashed does not contain the tags or cover art present in the file it was generated from. Speed isn’t as good as just stripping the metadata, but on the flip-side, it’s not as expensive as decoding the file to raw format, and should be more stable than naïvely hashing the whole file after metadata stripping.
Where to from here?
Well, having a hash, I can store it elsewhere (I’m thinking SQLite3 or LMDB), then compare it later to know if the audio has changed. It’s not difficult or expensive to extract the tags and images using mutagen or similar; those can be hashed using conventional means to generate a hash of that information. I can also store the mtime and a complete file hash for an even faster “quick check”.
I’m a late adopter of Bluetooth. I tried it in its earlier days, heard something that sounded like my music being fed down a drain pipe, and decided that Bluetooth was rubbish… it wasn’t until I bought a Logitech H830 headset that I found Bluetooth can actually sound decent… and not until I bought the Logitech Zone Wireless that I found bi-directional Bluetooth can also sound decent.
Now, the Zone Wireless is fine if I’m in the office, or out walking somewhere. It’ll fit underneath the coolie hat if I decide to wear that, otherwise it works with a cap just fine. BUT, if I’m camping or at a WICEN event, I’m often wearing a full-brim hard hat. The headband on the Zone Wireless is a problem.
I really wanted a Bluetooth device that could be put on a lanyard, and I just plug in a regular common-garden variety wired headset. The closest I can get to this is a motorcycle headset — as these have to accommodate a wide variety of helmet styles, the radio module and the headset are actually separate components, and so conceivably, I can make my own compatible adaptor to plug in. Then, it wouldn’t matter… want to wear the hard hat? No problem, I already have modded earmuffs with a headset. Want to use it on the bike? Sure, plug the helmet straight in. Or am I in the office again? No problem, normal headset.
It’d also be nice to share that wired headset with a wired audio device… a prime example here being a radio transceiver. Yes, there are devices that will make those do Bluetooth… and there are radios that have Bluetooth. I had one of the latter, the Yaesu VX-8DR… its Bluetooth was next to useless: idiosyncratic and unreliable.
I see the Sena SR-10 mentioned in a few places as a way to “Bluetooth-enable” a two-way radio… but aside from being pricey, I see three complaints being raised: unreliable/slow pairing, intermittent Dalek-like distortion on transmit, and a noticeable connection delay on incoming signals.
I pondered doing my own… and I’ve slowly amassed parts to do exactly that. But, the other day, I stumbled on another option: Altronics sell a CSR8635 Bluetooth module. This advertises the ability to talk to two Bluetooth devices, and wideband voice. CSR’s own datasheet seems to give some hints as to how it can be used.
One catch is that the pads on this device are on a 1mm spacing — so mounting this on some perfboard is going to be a big challenge. I prefer the minimalism of a module like this over a Raspberry Pi Zero W… a lot less to go wrong, and likely much better battery life.
So, it’s been 8 days since we woke up to the non-stop sports yap-trap that we were promised would be coming on the 1st July. Not that I heard much of it. I made a point of staying up and listening to the last of the old station before it went. The final few hours of the broadcast were ad-free and the final things heard on the station were:
Don McLean’s American Pie
The Beatles’ The End (from their album Abbey Road — minus the secret track Her Maj)
crickets sound effect — fading out to silence
Then, shortly after… it cut over to the new mob. They just had a playlist going for the first 6 hours, with the cut-over about a minute into Survivor’s Eye Of The Tiger. I put the radio on mute shortly after and got some sleep… at 5AM they were still playing music, but with occasional cut-ins with various announcers mentioning what was coming after 6AM. When 6AM rolled around, I listened for a minute, then switched off for good.
The old website changed to being just a blank page.
Radio personality moves
Not sure where everyone has gone, but here’s what I do know:
Bob Gallagher, who some might know from 97.3’s breakfast program, and who chaired 4KQ’s last 4 mornings on air… moved over to 4BH.
Vanessa Gibson (apologies if I have the spelling wrong, I’ve never seen it written), who was one of the more prominent morning news readers at 4KQ, has stayed with the 693kHz frequency on SEN-Q
4KQ’s breakfast crew have moved to 4BC — an odd match if ever I saw one given that trio’s love for music and 4BC being a talk-back radio station
Station changes
4BH have switched to a “classic hits” format like the old 4KQ, but covering a subset of the years: 1960 ~ 1989 (inclusive). I say subset because 4KQ in their final days were already playing Seal’s Crazy (released 1992), Natalie Imbruglia’s Torn (1997) and Savage Garden’s The Animal Song (released 1999).
Whether they’re playing any of Brisbane’s “historic” bands remains to be seen.
4BC’s program seems to suggest they’re still sticking with their “news talk” format. Maybe music in the mornings, then the gab-fest begins.
97.3 still don’t acknowledge the existence of anything prior to 1980.
River 94.9 is still a very weak signal into Brisbane — if you’re in the right spot you can get them, but otherwise they’re practically inaudible. Odd, since they share the tower at The Knobby with VK4RAI, and I can both hear and work that repeater quite reliably with far less power than what River 94.9 would be transmitting. Clearly they are still beaming west, and what we hear in Brisbane is just what’s coming off the back of the beam.
How things have changed for me
Well, from my perspective… that Friday morning was quite disorienting. You get used to the time-calls and regular news updates which give you an idea of how time is passing. I put my own music on that morning… and yes, was a minute or two late for my workplace morning stand-up because I wasn’t watching the clock!
A week on, and apparently I’ve broken one of my own music listening records according to Last.FM:
Last week’s listening report
This week
I had gone from listening to just 17 songs in total (most of those would have been on the Friday afternoon) to over 1300. In the last 6 weeks of 4KQ I actually stopped listening to a lot of my own music: I figured there was plenty of time for that once they went — I wouldn’t have 4KQ to listen to much longer, so enjoy them whilst you’ve got them.
I also didn’t do any channel-hopping: previously if Abba came on (it’s a long story, but basically you didn’t want to be found listening to that group in a late 90s high-school), I’d switch stations or switch to my own music, sometimes for hours.
At mid-day I’d ordinarily flip over to Triple-M Classic Rock, as they have an Essential Vinyl show that does an interesting deep-dive into a particular iconic album from past decades — often a good way of getting to know songs from an artist I might not otherwise know much about. Obscure entities like Buckingham Nicks (basically the precursor to the modern Fleetwood Mac) are in my shopping list thanks to that show. Just like a lot of my present collection can be traced back to special features put on by 4KQ as well.
This last week… I didn’t do any of that. So where my music listening might’ve at most started at around 10:00AM or later… now basically I’ve been listening to my own music collection from 5:00AM through to around 7:10PM. I have a cron job that manages it:
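A sketch of the sort of crontab entries involved — the exact minute values and the ALSA “Master” control name are assumptions on my part:

# start playback at 5:00AM: set a sane volume, then kick off strawberry
0 5 * * *   amixer -q set Master 40% && strawberry -p
# around 7:10PM: let the current song finish, then stop
10 19 * * * strawberry -q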
The amixer calls were there before, and would control the volume. qt-dab would sit on the desktop and receive 24/7. Now, strawberry sits on the desktop, and I’m using its CLI to start and stop: strawberry -p to fire things off in the morning, then strawberry -q to finish playing that last song before going silent.
It leaves a lot to be desired… maybe if I get creative with a text-to-speech engine, I might get some time-calls and a bit of news headlines to replicate some of what I’ve lost, although it’ll be a very poor substitute for what I had on the old station. It might just be “good enough” though… it’ll give me a time reference.
My feelings on this
While I’ve been able to largely “replace” 4KQ on my own stereo… I’m still a bit peeved by the whole experience. There’s a lot of music they used to play that’s very hard, if not impossible, to get. Railroad Gin’s You Told The World, Do Ya Love Me and The Academy Rock are three that used to get airplay on 4KQ… but so far have proven unobtainium. Other bands like New World, Moscos and Stone, among others… are similarly buried somewhere in record company vaults, never to be seen again.
We’re just left to fight over what few second-hand albums exist in circulation… or pirate.
As to the mess that started this: the ACMA have a rule that limits the number of radio stations a company can control. Fair enough — they want to promote diversity, and having all the eggs in one basket does not help this. I get where the ACMA is coming from.
Here, There & Everywhere own the Australian Radio Network, who until this year owned 4KQ. They also own 97.3. Some bright-spark at HT&E saw dollars in buying up rival network Grant Broadcasters, who owned various regional stations such as River 94.9 (Ipswich) and Hot Tomato (Gold Coast), and merging them into ARN. That meant ARN were now over the threshold.
That decision to chase the dollar, on the surface of it, seems to me to be nothing less than a complete slap in the face to the Brisbane radio listenership and the staff of 4KQ, the latter of whom had given their working lives to the station. 30 years for a single announcer to work the same shift is an Australian record that 4KQ and Laurel Edwards broke. This is how HT&E repay her loyalty. I think that speaks volumes.
Plans now
Immediate plans
The morning crew I used to listen to are back on air on Monday, so perhaps I’ll set up the radio for 4BC, and see how they go.
I expect there’ll be quite a few gaffes from a trio that have been used to saying “4KQ” and “32230693” for decades (Laurel Edwards has been doing that on-air longer than anyone else). Mark Hine got so used to saying 4KQ he accidentally (as ground announcer) blurted “… on Classic Hits 4K-” to the audience at The Gabba… cutting himself off when he realised which workplace he was at.
It’ll be interesting to see how they work music into their news format. That’ll be a deciding factor as to whether I continue listening after the morning stand-up, or whether I switch to my own music until the next morning.
Radio station ideas / aspirations
So to be clear, I’m not going to rush into starting up something myself. While I do have some music knowledge — probably the strongest of the subjects needed to run a commercial music radio station — my knowledge has gaps so large a sperm whale could swim through them in comfort! If that’s the state of my strongest area, it does not bode well for the other critical-knowledge areas.
Really I’d need to team up with people who have some media experience. I have some technical knowledge, but there’s a big difference between a 100W SSB amateur radio station which is small enough to be bicycle-mounted, and a honking big 10kW broadcast MW AM station. I’d also need very deep pockets to commission said station.
Requirements for digital-rights management being imposed by the PPCA make Internet streaming impractical. That would basically just leave DAB+. There may be room there… it seems channel 9B has a little more space than 9A does, but who knows? I’d have to ask, and find out what their fees and technical requirements are. Then I’d have to figure out what the going rate was for advertising slots, and work out the finances from there.
I don’t know how the music needs to be obtained at this point. I’m guessing purchasing MP3s from legal sources (the same that we might as individuals) may be acceptable since they’re fundamentally the same recordings — and we’d have a separate content license that would cover their broadcast. This is a guess though, I might be wrong.
It’s a big job — and not one I’m particularly suited for. I’m happy to sit back and let someone who knows what they’re doing go ahead and do it.
The idea of such a station would be a very loose copy of 4KQ in so far as we’d be playing similar music. Not the same, because to be honest I actually do not know what songs were “hits” in this city. I have some recollections of what I’ve heard, but likely this is just the tip of the iceberg.
I did manage to grab some feature playlists (e.g. Sizzling 70s, Easter Count-down, all-sorts… etc.) from 4KQ before the site went offline. Those, when de-duplicated, amount to about 3500 songs, about 70% of which I already had. I can’t publish these as they belong to the Australian Radio Network (I have contacted ARN about this but not heard anything — I’ll take that as a “no, do not publish”)… but nothing stops me picking through the listings and incorporating the artists mentioned into my library where I see them. If I see collections that are readily available, I’ll make note of them here.
What would the format look like? Well, musically it’ll be a mix of the heavier rock that you’d hear on stations like Triple M, and the softer stuff of 4KQ. Not exclusively focused on Brisbane hits, as I don’t have a record of what was popular… I just have a “rough idea” of what artists were popular, and would likely work on that basis. That might change if someone who does have records of this came on-board and could basically guide me on this or take on the music-director role properly.
The first days would likely be ad-free as we try to build up an audience and attract advertisers. Those booking advertising slots would have to organise their own recordings since we wouldn’t have studios to help them with that. The station would be “automatic”: no announcers, news, weather… just music, and later we’d get ad breaks in to help pay the bills and start building up a revenue stream.
If revenue picked up enough, then maybe we could organise to hire studio time and do pre-recorded shows, or perhaps live ones if we can figure out how to link studio and transmitter.
Some ideas for shows that’d work pre-recorded:
Classic Artists Today: a look at artists we know from the 60s~90s that are still producing music and what they’re doing these days… for example Jeff Lynne is still doing music with Electric Light Orchestra, The Who and Manfred Mann’s Earth Band did a few new songs in the early part of this century, Fleetwood Mac are still active.
Sunday Spotlight: a deep dive into an artist’s work (e.g. a show about George Harrison would start with his role in The Beatles, but then cover solo work and his work in The Traveling Wilburys; Graham Gouldman might cover his early songwriting for The Hollies, The Ohio Express then his later work with 10cc, solo work, and his team-up with Andrew Gold in Wax; Brian Cadd could have enough material to fill several hours I think with Axiom, The Groop and solo work, along with producing for other artists).
This is better done by someone who knows what they’re doing, and I know right now, that is not me, certainly not as a solo act. I suspect this will be at least a year off, likely longer if it happens at all.
A lot will depend on demand. I have a day job that’s paying the bills, there’s no sense of rushing off from that into the great unknown, no matter how much I might feel like a career change after some 20+ years connection with (and subsequent frustration with) the IT industry in one form or another! Time will tell.
So, recently I started giving consideration to building a station… starting of course with how the station might broadcast to an audience. This is in no way a sign that I’ll actually go and do it: to survive I need about AU$30000/year (yeah, I have low overheads at present) and I doubt a dinky little radio station is going to make me that much money.
That said, this is an industry I know little about, so it’s hard to know what the finances would look like.
Content licensing
Irrespective of how the broadcast is done, a station like 4KQ will need a content license for the music broadcast. Not just music, though: news updates and even the weather are potentially in the scope of content licenses.
These can be negotiated with individual holders in some cases. I know for small narrowcasting services you can obtain a license through OneMusic; however, looking at their offerings, they don’t seem to cater to broadcasting services. It turns out there is one body that does: the Phonographic Performance Company of Australia.
What this would all cost is a complete unknown.
Radio Broadcasting
The above only lets you use content in a broadcast, it doesn’t let you actually transmit anything on any radio frequency. For this, two things are needed (in addition to the broadcasting equipment). Both come from the ACMA:
Broadcast station license: This doesn’t cover transmitters, this is merely the right to have a radio station servicing a given geographical area, irrespective of how it reaches that area.
Transmitter (apparatus) license: This covers the actual transmitter — the right to radiate a signal on a given frequency, at a given power, from a given site.
I’m not sure how this works for DAB+: the transmitters themselves are operated by Digital Radio Broadcasting Pty Ltd, so it could be that the stations “piggy-back” on their license the way they do on the actual transmitter itself.
If we did decide to commission a transmitter, that’ll get expensive fast. I don’t expect much change out of AU$1M, in fact, even that may not be sufficient! Then there’s running costs: a 10kW class-B transmitter PA stage will need at least 12-15kW on signal peaks and it will want it now! So likely, 3-phase power is needed, and with a beefy local energy store to smooth out those sharp peaks.
An AM transmitter will also occupy a decent-size land area. If you want an idea; have a look at the 4QR/4QG site or 4KQ’s site as examples. That size area does not come cheap.
Internet broadcasting
This looks to be, in the short-term, the cheaper option if we aim to start small first. We still need the content license, but potentially there are fewer unknowns in the costs. The interesting bit is the content license requirements; specifically, I had a look at the forms needed to apply for such a license, and this question stuck out:
“What security measures will be in place to prevent downloading or stream ripping?”
This is a tricky one. In terms of technology, my first choice would be something like icecast to manage the audio streams, but this is trivially ripped (possibly using nothing more exotic than wget).
DAB+ can be ripped trivially — qt-dab has both a “frame dump” and an “audio dump”; the former gives you the raw HE-AAC frames, the latter gives you decompressed PCM audio. The same tool can even rip the whole multiplex, recording every single station simultaneously (all 28 stations for the Brisbane DAB 1 multiplex).
Fundamentally, our ears do not hear digital signals, they only respond to analogue pressure waves (travelling through a gas or liquid). To listen to a “digital” station, it must first be converted from whatever on-air format it’s in to a plain uncompressed audio stream, passed through a digital-to-analogue converter, then amplified to electrically drive a speaker transducer which converts the electrical signal into the sound-pressure waves that our ears respond to.
Those sound pressure waves are not protected from being converted back to an electrical signal, having that electrical signal sampled through an analogue-to-digital converter and captured by a storage device.
Years ago, yes, I had some pirated music, and this included a copy of Cold Chisel’s Khe Sanh (I now have a legally purchased CD of that song, and the MP3 no longer exists on my equipment), in which you could hear someone gently placing a microphone in front of a speaker and nudging it forward. That method works whether the source material is a Victorian-era wax-cylinder phonograph recording or a Blu-ray disc. It would also work for any streaming service you care to mention.
Indeed, most of the listening devices feature headphone sockets or Bluetooth interfaces — it is entirely possible to sample the analogue or digital electrical signal without the acoustic conversion. Most computer sound devices feature a “monitor” port you can record from, and there’s nothing stopping you plugging in a device that advertises itself to two hosts as a USB Audio class device, piping audio from one to the other.
Yes, there’s signal degradation doing that, but this does not matter in a piracy law suit: it could be downmixed to mono and downsampled to a 2kHz sample rate with a 4-bit resolution, and still be a copyright violation.
So I wonder what “counts” as a security measure. No doubt this was a request put in by the record companies, who seem to forget the above limitation. Maybe services like LiSTNR and iHeartRadio have some tricks up their sleeve… who knows? Firefox seems to see iHeartRadio like any other website, and of course, to pipewire, Firefox is just like any other pulseaudio client, so stream ripping is very doable.
Also interesting was the question of: “How will access by countries other than Australia be restricted?” We seem to live in a world where VPNs don’t exist or are 100% detectable by the hosts. If I can stand up a VPN server, and “dial” into it from my tablet from any sufficiently-open Internet connection on the planet: practically anyone can. In doing so, it would look like I’m streaming from my home Internet connection, not from the real connection.
Funding the costs
So, for a station to “stand on its own feet”, it would need to find a revenue stream that pays for the above. The way most do it is through advertising, and there are groups like Commercial Radio Australia that cater to that. No idea what they pay broadcasters, but I’d imagine it’s a function of service area, number of listeners and the listener demographics.
This is hard to know in advance. The station I’m looking at as a model targeted the 40+ market (noting that I myself am not in that age bracket). Some of this group will be less technically inclined to do Internet streaming unless there’s some sort of dedicated streaming client available through their device’s software repositories. Integration into smart speaker voice assistants is a desirable feature to some in this group, but many, I’ll bet, are listening the same way they’ve done for decades: traditional radio broadcast.
I think a “new” Internet-only station is going to struggle to justify the same fee for an advertising slot as an established 75-year-old broadcast station.
Asking for donations might be another avenue, but having done work for a few charities in my time, this is definitely not an easy way to raise funds. Staff would essentially be volunteers: this would be at best a side-hustle for me and anyone else that joins me in this venture.
Subscription services for Internet broadcasting could work, but then you’re competing against the likes of Spotify, Deezer, Bandcamp, et al.… that’s tough going! They don’t make much, and pass even less on as royalties. Plus, the listeners will likely demand more than just an advertising-free experience; they’ll probably want music-on-demand, which is a whole different class of content license, and would have to be factored into the subscription fees.
Time will tell on the above, but that at least gives some thought as to what I’d be up for if I decided to take this thought-exercise further.
So, this is quite sad news… I learned on Friday morning that one of Brisbane’s longer-serving radio stations will be taken over by new management and will change its format from being a “classic hits” music station to being a 24/7 sports coverage station.
It had been operated by the Australian Radio Network, who had recently done a merger with a rival network, Grant Broadcasters, picking up their portfolio of stations, which included a number of other Greater Brisbane region stations. This tipped them over the edge, so they had to let one go; the unlucky victim was their oldest: 4KQ.
Now, you’re thinking: big deal, there are lots of radio stations out there, including Internet radio. Here’s why this matters. Back in the 90s, pretty much all of the stations here in Brisbane were locally run. They might’ve been part of a wider network, but generally, the programming — what shows ran and what music got played — was decided on by people in this area. Lots of songs were hits only in Brisbane. There are some songs that did not make the music charts anywhere else world-wide. But here in Brisbane, we requested those songs.
Sometimes the artists knew about this, sometimes not.
Over time, other stations have adjusted their format, and in many cases, abandoned local programming, doing everything from Sydney and Melbourne. Southern Cross Austereo tried this with Triple M years ago, and in the end they had to reverse the decision as their ratings tanked and complaints inundated the station.
4KQ represented one of the last stations to keep local programming. I’m not sure how many still do, but this station was unique amongst the offerings in this area due to its wide coverage of popular music spanning 1960 ~ 1995, and in particular, its focus on the Brisbane top-40 charts.
Some of the radio programs too were great: Brent James in particular had an art for painting a picture of Brisbane at that time — for people who were there to experience it, those who missed out because they lived someplace else, and people like myself who were either too young to remember or not alive at the time in the first place. A lot of their other staff, too, had a lot of music knowledge and trivia — yes, you can reproduce the play lists with one’s own music collection, but the stories behind the hits are harder to replicate. Laurel Edwards is due to celebrate her 30th year with the station — that’s a long commitment, and it’s sad to think that this will be her last, through no fault or decision of her own.
Its loss as a music station is a major blow to the history of this city. To paraphrase Joni Mitchell, they’ve torn down Festival Hall to put up an apartment block!
A new normal
The question is, where to now? The real sad bit is that this was a successful station that was only culled because of a regulatory compliance issue: ARN now had too many stations in the Greater Brisbane area, and had to let one go. They reluctantly put it up for sale, and sure enough, a buyer took it, but that buyer was not interested in preserving anything other than the frequency, license and broadcast equipment.
In some ways, AM is a better fit for the yap-fest that is SEN-Q. They presently broadcast on DAB+ at 24kbps in essentially AM-radio quality. 4KQ has always been a MW station, originally transmitting at 650kHz back in 1947, moving to 690kHz a year later… then getting shuffled up 3kHz to its present-day 693kHz in 1978 when the authorities (in their wisdom at the time) decided to “make room” by moving all stations to a 9kHz spacing.
Music has never been a particularly good fit for AM radio, but back in 1947 that was the only viable option. FM did exist thanks to the work of Edwin Armstrong, but his patents were still active back then and the more complicated system was less favourable to radio manufacturers at a time when few could afford a radio (or the receiver license to operate it). So AM it was for most broadcasters of that time. “FM radio” as we know it today, wouldn’t come into existence in Brisbane until around 1980, by which time 4KQ was well-and-truly established.
The question remains though… ratings were pretty good, clearly there is demand for such a station. They had a winning formula. Could an independent station carry forward their legacy?
The options
So, in July we’ll have to get used to a new status-quo. It’s not known how long this will last. I am not advocating vigilante action against the new owners. The question will be, is there enough support for a phoenix to rise out of the ashes, and if so, how?
Existing station adopting 4KQ’s old format?
This might happen. I'm not sure who would be willing to throw out what they have now to try this, but it may be an option. There are a few stations that might be "close enough" to absorb such a change:
4BH (1116kHz AM) does specialise in the “older” music, but it tends to be the softer “easy listening” stuff, they don’t do the heavier stuff that 4KQ and others do. (e.g. you won’t hear AC/DC)
KIIS 97.3 (97.3MHz FM) was 4KQ’s sister station, at present they only do music from the 80s onwards.
Triple M (104.5MHz FM) would be their closest competitor. They still do some 60s-80s stuff, but they’re more focused on today’s music. There’s a sister-station, Triple M Classic Rock (202.928MHz DAB+) but they are an interstate station, with no regional focus.
Outside of Brisbane, River 94.9 (94.9MHz FM) in Ipswich would be the closest to 4KQ. They make frequent mentions of 4IP and its charts. Alas, they are likely beaming west as they are not receivable in this part of Brisbane at least. (VK4RAI on the other hand, located on the same tower can be received, and worked from here… so maybe it’s just a case of more transmit power and a new antenna to service Brisbane?)
I did a tune-around the other day and didn't hear anything else that was in any way comparable.
An interesting aside: 4IP of course was the hit station of its day. These days, if you look up that call-sign, you get directed to RadioTAB… another sports radio network. Ironic that its old rival now meets the same fate at the hands of a rival sports radio network.
A new station?
Could enough of us band together and start afresh? Well, this will be tough. It'd be a nice thing if we could, and it might provide work for those who started the year thinking their jobs were mostly secure, only to find they've got two more months left… but the tricky bit is we're starting from scratch.
FM station?
A new FM station might be ideal in terms of suiting the format, and I did look into this. Alas, it is not going to happen unless there's a sacrifice of some sort. I did a search on the ACMA license database, putting in Mt. Coot-tha as the location of the hypothetical transmitter (I think I chose the Ch 9 site, but any on that hill will do), with a radius of 200km and a frequency range of 87–109MHz.
Broadcast FM radio stations are typically spaced out every 800kHz; so 87.7MHz, 88.5MHz, 89.3MHz, … etc. Every such frequency was either directly taken, or had a station within 400kHz of it. Even if the frequency “sounded” clear, it likely was being used by a station I could not receive. A big number of them are operated by churches and community centres, likely low-power narrowcast stations.
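For anyone wanting to replicate that search, it boils down to walking the 800kHz raster and rejecting any slot that has a licensed station within 400kHz of it. Here's a minimal sketch in Python; the licence list is made up for illustration, with the real data coming from the ACMA search described above:

```python
# A sketch of the band scan done by hand against the ACMA licence database.
# The licence list below is hypothetical; the real list comes from an ACMA
# search (200km radius around Mt. Coot-tha, 87-109MHz).

def scan_band(start_khz, stop_khz, step_khz, guard_khz, licensed_khz):
    """Return raster frequencies with no licensed station within guard_khz."""
    clear = []
    f = start_khz
    while f <= stop_khz:
        if all(abs(f - lic) > guard_khz for lic in licensed_khz):
            clear.append(f)
        f += step_khz
    return clear

licensed_fm = [97_300, 104_500, 105_300]  # kHz; a hypothetical sample
print(scan_band(87_700, 107_700, 800, 400, licensed_fm))
```

Run against the real licence list, the result came back empty: every slot was taken or shadowed.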
The FM broadcast band, as seen from a roof-top 2m “flower pot” in The Gap.
There are only two ways a new station can spring up on FM in the Brisbane area:
an existing station closes down, relinquishing the frequency
all the existing stations reduce their deviation, allowing for new stations to be inserted in between the existing ones
The first is not likely to happen. Let’s consider the latter option though. FM bandwidth is decided by the deviation. That is, the modulating signal, as it swings from its minimum trough to its maximum peak, causes the carrier of the transmitter to deviate above or below its nominal frequency in proportion to the input signal amplitude. Sometimes the deviation is almost identical to the bandwidth of the modulating signal (narrowband FM) or sometimes it’s much greater (wideband FM).
UHF CB radios, for example, deviate either 2.5kHz or 5kHz, depending on whether the radio is a newer "80-channel" device or an older "40-channel" device. This is narrowband FM. When the ACMA decided to "make room" on UHF CB, they did so by "grandfathering" the old 40-channel class license and decreeing that new "80-channel" sets use a 2.5kHz deviation instead of 5kHz. This halved the "size" of each channel. In between each 40-channel frequency, they inserted a new 80-channel frequency.
This is simple enough with a narrowband FM signal like UHF CB. There are no sub-carriers to worry about, and it's not high-fidelity, just plain old analogue voice.
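Carson's rule gives a back-of-envelope feel for how deviation sets the occupied bandwidth: roughly twice the sum of the peak deviation and the highest modulating frequency. A quick sketch, using textbook figures that I'm assuming rather than ones I've measured:

```python
# Carson's rule: occupied bandwidth ~ 2 * (peak deviation + highest audio freq).
# All figures are assumed textbook values, not measurements.

def carson_bw_khz(deviation_khz, audio_khz):
    return 2 * (deviation_khz + audio_khz)

print(carson_bw_khz(5.0, 3.0))    # old 40-ch UHF CB: ~16kHz in a 25kHz channel
print(carson_bw_khz(2.5, 3.0))    # new 80-ch UHF CB: ~11kHz in a 12.5kHz channel
print(carson_bw_khz(75.0, 53.0))  # broadcast FM (75kHz dev, stereo MPX to 53kHz): ~256kHz
print(carson_bw_khz(37.5, 53.0))  # halved deviation, same MPX: ~181kHz, not ~128kHz
```

Note the last line: halving a broadcast station's deviation doesn't halve its occupied bandwidth, because the stereo multiplex still extends out to 53kHz. That's part of why I'm hesitant about the idea below.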
Analogue television used FM for its audio, and in later years, did so in stereo. I’m not sure what the deviation is for broadcast FM radio or television, but I do know that the deviation used for television audio is narrower than that used for FM radio. So evidently, FM stereo stations could possibly have their deviation reduced, and still transmit a stereo signal. I’m not sure what the trade-off of that would be though. TV stations didn’t have to worry about mobile receivers, and most viewers were using dedicated, directional antennas which better handled multi-path propagation (which would otherwise cause ghosting).
Also, to my knowledge, while TV stations did transmit sub-carriers for stereo sound, they didn't transmit RDS like FM radio stations do. Reducing the deviation may have implications for signal robustness for mobile users and for over-the-air services like RDS. I don't know.
That said, let's suppose it could be done, and say Triple M (104.5MHz) and B105 (105.3MHz) decided to drop their deviation by half: we could then maybe squeeze a new station in at 104.1MHz. The apparent "volume" of the other two stations would drop (halving the deviation halves the demodulated audio amplitude, roughly a 6dB drop), so people would need to turn their volume knobs up higher, but it might work.
I do not know whether this is technically possible though. In short, I think we can consider a new FM station a pipe dream that is unlikely to happen.
New AM station?
A new AM station might be more doable. A cursory look at the same database, putting in much the same parameters but this time with a 300km radius and a frequency range of 500kHz–1.7MHz, suggests there are lots of seemingly "unallocated" 9kHz slots. I don't know what the frequency allocation strategy is for AM stations within a geographic area. I went with a wider radius because MW stations propagate quite far at night: I can pick up 4BU in Bundaberg and ABC Radio Emerald from my home.
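The same scan_band sketch from the FM exercise works here with MW parameters. Again, the licences shown are made up, and treating adjacent 9kHz channels as off-limits is my assumption about same-area allocations:

```python
# Same approach on the 9kHz MW raster (531-1602kHz); skip anything with a
# licensed station on the same or an adjacent channel. Licences are made up.
licensed_mw = [612, 693, 792, 882, 1116, 1296]  # kHz; a hypothetical sample
print(scan_band(531, 1602, 9, 9, licensed_mw))
```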
The tricky bit is physically setting up the transmitter. MW transmitters are big, and use lots of power. 4KQ for example transmitted 10kW during daylight hours. Given it's a linear PA in that transmitter (so roughly 50% efficient at best), that means it's consuming around 20kW, and when it hits a "peak" it will want that power now!
The antennas are necessarily large; 693kHz has a wavelength of 432m, so a ¼-wave groundplane is going to be in the order of 100m tall. You can compromise that a bit with some clever engineering (e.g. see 4QR’s transmitter site off the Bruce Highway at Bald Hills — guess what the capacitance hat on the top is for!) but nothing will shrink that antenna into something that will fit a suburban back yard.
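The numbers behind that mast height (and the power bill) fall out of a few lines; the 50% PA efficiency is an assumption on my part:

```python
# Back-of-envelope numbers for a 693kHz MW transmitter site.
C = 299_792_458                     # speed of light, m/s
f_hz = 693_000                      # 4KQ's carrier frequency
wavelength_m = C / f_hz             # ~433m
quarter_wave_m = wavelength_m / 4   # ~108m mast for a quarter-wave radiator

tx_power_w = 10_000                 # daytime carrier power
pa_efficiency = 0.5                 # assumed for a linear PA
mains_draw_w = tx_power_w / pa_efficiency  # ~20kW from the mains

print(round(wavelength_m), round(quarter_wave_m), round(mains_draw_w))
```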
You will need a big open area to erect the antenna, and that antenna will need an extensive groundplane installed in the ground. The stay-wires holding the mast up will also need a big clearance from the fence as they will be live! Then you’ve got to keep the transmitter fed with the power it demands.
Finding a place is going to be a challenge. It doesn’t have to be elevated for MW like it does for VHF services (FM broadcast, DAB+), but the sheer size of the area needed will make purchasing the land expensive.
And you’ve got to consider your potential neighbours too, some of whom may have valid concerns about the transmitter: not liking the appearance of a big tower “in their back yard”, concerns about interference, concerns about “health effects”… etc.
DAB+?
This could be more doable. I don’t know what costs would be, and the big downside is that DAB+ radios are more expensive, as well as the DAB+ signal being more fragile (particularly when mobile). Audio quality would be much better than AM, but not quite as good as FM (in my opinion).
It’d basically be a case of opening an account with Digital Radio Broadcasting Pty Ltd, who operate the Channel 9A (202.928MHz) and Channel 9B (204.64MHz) transmitters. Then presumably, we’d have to encode our audio stream as HE-AAC and stream it to them somehow, possibly over the Internet.
The prevalence of "pop-up" stations suggests this method may be comparatively cost-effective for larger audiences, compared to commissioning and running our own dedicated transmitter: the price does not change whether we have 10 listeners or 10,000. It's one stream going to the transmitter, and from there, the same signal is radiated out to all.
Internet streaming?
Well, this really isn't radio, it's an audio stream on a website at this point. The listener will need an Internet connection of their own, and you, the station operator, will be paying for each listener that connects. The listener pays too: their ISP will bill them for data usage.
A 64kbps audio stream will consume around 230MB every 8 hours. If you stream it during your typical 8-hour work day, think a CD landing on your desk every 3 days. That’s the data you’re consuming. That data needs to be paid for, because each listener will have their own stream. If there’s only a dozen or so listeners, Internet radio wins … but if things get big (and 4KQ’s listenership was big), it’ll get expensive fast.
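The arithmetic behind those figures, and why unicast streaming scales so badly, is a one-liner; the 10,000-listener case is just the round number from earlier:

```python
# Unicast streaming: every listener receives (and pays for) their own copy.
BITRATE_BPS = 64_000

def stream_mb(hours, listeners=1):
    """Data transferred, in megabytes, for the given listening time."""
    return BITRATE_BPS / 8 * 3600 * hours * listeners / 1e6

print(stream_mb(8))                # ~230MB: one listener, one work day
print(stream_mb(8) * 3)            # ~691MB over 3 days: roughly a CD
print(stream_mb(8, 10_000) / 1e6)  # ~2.3TB for a 10,000-strong audience, daily
```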
The other downside is that some listeners may not have an Internet connection, or the technical know-how to stream a radio station. I for example, do not have Internet access when riding the bicycle, so Internet radio is a no-go in that situation. I also refuse to stream Internet radio at work as I do not believe I should be using a workplace Internet connection for personal entertainment.
Staff?
The elephant in the room is staffing… there's a workforce that kept 4KQ going who will soon be out of work. Would they still be around if such a station were to materialise in the near future? I don't know. Some of the announcers may want a new position in the field, others may be willing to go back to other vocations, and some are of an age where they may decide hanging up the headphones sounds tempting.
I guess that will be a decision for each person involved. For the listeners though, we've come to know these people, and will miss hearing from them if they wind up not returning to the air.
In the meantime
What am I doing now? Well, not saving up for a broadcast radio license (as much as my 5-year-old self would be disgusted at me passing up such an opportunity). I am expanding my music collection… and I guess over the next two months, I’ll be taking special note of songs I listen to that aren’t in my collection so I can chase down copies: ideally CDs or FLAC recordings (legally purchased of course!)… or LPs if CDs are too difficult.
Record companies and artists could help here — there are services like ZDigital that allow people to purchase and download individual songs or full albums in FLAC format. There are also lots of albums that were released decades ago, that have not been re-released by record companies. Sometimes record companies don’t release particular songs because they seemingly “weren’t popular”, or were popular in only a few specific geographic areas (like Brisbane).
People like us do not want to pirate music. We want to support the artists. Their songs did get played on radio, and still do; but may not be for much longer. Not everything is on Spotify, and sometimes that big yellow taxi has a habit of taking those hits away that you previously purchased. They could help themselves, and the artists they represent, by releasing some of these “less popular” songs as FLAC recordings for people to purchase. (Or MP3 if they really insist… but some of us prefer FLAC for archival copies.)
The songs have been produced and the recordings already exist; it seems it's little skin off their nose to just release them as digital-only singles on these purchase-for-download platforms. I can understand not wanting to spend money pressing discs and having to market and ship them, but a file? Some emails, a few signed agreements and one file transfer, and it's done. Not complicated or expensive.
Please, help us help you.
Anyway… I guess I have a shopping list to compile.
My situation has changed a bit… the death of a former work colleague shook me up quite a bit, and while I have been riding, I haven’t been doing it nearly as much. Then, COVID-19 reared its ugly head.
Suffice to say, my commute is now one side of the bedroom to the other. Right at this moment, I'm in self-imposed lockdown until I can get my booster shot. I had my second AstraZeneca shot on the 4th November, and the Queensland Government has moved booster shots to 3 months after the second shot, so for me that means I'm due on the 4th February. I'm already booked in with a local chemist here in The Gap; I did that weeks ago so the appointment would be nailed to the floor, and thus I'm currently doing everything in my power to ensure it goes ahead on time.
I haven’t been on the bike much at all. That doesn’t mean though that I stop thinking about how I can make my ride more comfortable.
Castle Clothing Coveralls
Yes, I'm the one clad in yellow, far left.
They had quite a few positives:
They were great in wet weather
They were great in ambient temperatures below 20°C
The pocket was handy for storing keys/a phone/a wallet
They had good visibility day and night
They kept the wind out well. (On the Main Range, Thredbo Top Station was reporting 87km/hr wind gusts that day.)
But, they weren’t without their issues:
They're (unsurprisingly) no good on a sunny summer's day (on the day that photo was taken, it was borderline too hot; the weather prediction was for showers, which didn't happen)
They’re knackered after about 30 washes or so: the outer waterproof layer peels off the lining
In intermittent rain / sunshine, they’d keep you dry during the rainy bit, but when the sun came out, you’d get steamed
To cap it off, they’re no longer being manufactured. Castle Clothing have basically canned them. They’ve got a plain yellow version with no stripes, but otherwise, nothing like their old product. I wound up buying 4 of them in the end… the first two had to be chucked because of the aforementioned peeling problem, the other two are in good condition now, but eventually they’ll need replacement.
Mammoth Workwear do have some alternatives. The "Supertouch" ones I have tried: they're even shorter-lived than the Castle ones, and feel like wearing a plastic bag. The others are either not night-time visible, or they're lined for winter use.
So, back to research again.
Zentai suits?
Now, I know I've said previously I'm no MAMIL… and for the most part I stand by this. I did try wearing a stinger suit on the bike once… on the plus side they are very breathable, so quite comfortable to ride in. BUT, stinger suits came with their share of negatives.
That got me thinking, what’s the difference between a stinger suit and an open-face zentai suit? Not a lot. The zentai suit, if it has gloves, can be bought as a “mitten” or (more commonly) a proper multi-finger glove version. They come in a lot more colours than a stinger suit does. They’re about the same price. And there’s no logos, just plain colours (or you can do various patterns/designs if that’s your thing).
A downside is that the zipper is at the back, which means answering calls from nature is more difficult. But then again, some stinger suits and most wetsuits also feature a back-entry.
I’ve got two coming to try the idea out. I suspect they’ll get worn over other clothing, I’ll just duck into a loo, take my shirt off, put the zentai suit on, then jump on the bike to ride to my destination… that way my shirt isn’t soaked with sweat. We’ll see.
One is a black one, which was primarily bought to replace one of the stinger suits for swimming activities, but I can also evaluate the fabric too (it is the usual lycra material).
The other is a silver one (thus a lycra/latex blend), to try out the visibility — it’ll be interesting to see whether it’s somewhat water-repellent due to the latex mix in the material, and see what effect this has on sweat.
Both of these are open-face! You should never try swimming with a full-face zentai suit. I can’t imagine getting caught in the rain ending well either, and the ability to see where you’re going is paramount when operating any vehicle (especially a bicycle)!
They’ll turn up in a week or two, I can try them out then. Maybe won’t be the final solution, but it may answer a few questions.
Heavy Wet/Cold weather gear
So, with the lighter-weight class out of the way, that turns my attention to what to do in truly foul weather, or just bitterly cold weather.
Now, let me define the latter: low single digits °C. Possibly with a westerly breeze carrying it. For some reading this, this will feel like a hot summer’s day, but for those of us in Brisbane, temperatures this low are what we see in the middle of winter.
The waterproof overalls I was wearing before worked well in dry-but-cold weather, however I did note my hands copped the cold… I needed gloves. The ends of the legs also could get tangled with the chain if I wasn’t careful, and my shoes would still get wet. Riggers boots work okay for this, but they’re hard to come by.
I happened to stumble on Sujuvat ratkaisut Oy, who do specialist wet-weather clothing meant for Europe. Meeko (who runs the site) has a commercial relationship with a few manufacturers, notably AJGroup who supply the material for a lot of Meeko’s “extreme” range.
The suits are a variant of PVC, which will mean they’re less breathable than what I have now, but should also mean they’re a lot more durable. There’s a decent range of colours available, with many options having the possibility of reflective bands, attached gloves and attached wellington boots. It’s worth noting the BikeSuit (no longer available) I was looking at 8 years ago was also a PVC outfit.
In the winter time, the big problem is not so much sweat, but rather, sweat being hit by wind-chill. Thus I’m ordering one of the Extreme Drainage Coveralls to try them out.
I’ve seen something similar out of AliExpress, however the options there are often built for the Chinese market… so rarely feature size options that fit someone like myself. Most of the Chinese ones are dark colours, with one “tan”-coloured option listed, and a couple of rubber ones that were lighter colours (a dark “pink”, and a yellow). Some of the rubber ones also had a strange opening arrangement: a tube opening in the stomach, which you pulled yourself through, then clamped shut with a peg. Innovative, but looks very untidy and just begging to get caught in something! I’ll stick with something a bit more conventional.
The coverall I'm ordering will be a 500g/m² white fabric… so about one-and-a-half times the weight of my current Castle workwear overalls (which are about 330g/m²), and will have the gloves and boots attached. I'm curious to see how that's done up close, and see how it works out in my use case.
Being a white rather than a yellow/orange will make them less visible in the day time, but I suspect this won’t be much of an issue as it’s night-time visibility I’m particularly after. Also, being white instead of a “strong” fluro colour will likely be better at horse endurance rides, as horses tend to react to fluro colours.
The zip arrangement intrigues me as well… it’s been placed up high so that you can pretty much wade into water up to your chest and not get wet. There’s a lighter-weight option of the same suit, however with fewer options for colours. If the extreme version doesn’t work out for cycling, I might look at this alternative (the bike doesn’t react to strong colours like a horse does).
There's about a 2-month lead-time on this gear because it's made-to-order, a reasonable trade-off given you more-or-less get it made exactly how you want it. Looking around, I'm seeing off-the-shelf, non-customisable outfits at AU$400 a pop, so €160 (~AU$252) is looking like a good option.
The fact that this is being run as a small side-hustle is commendable. I look forward to seeing the product.