COVID-19 2024 edition

So, we’ve rolled around into a new year… and this being the season for get-togethers, we find ourselves sharing more than just food, gifts and company. For some of us, it’s also the exchange of unintended gifts in the form of infectious disease.

Thus far, I’ve avoided a second bout; once was enough! The evidence at this stage suggests the risk of long-term effects from this condition compounds each time you catch it. Compound interest on a term deposit is a good thing… compounding medical conditions from a virus is anything but!

I loathe wearing masks, and back when restrictions were finally lifted, I was glad to put mine on the shelf and leave it be. However, back then we had >80% up-to-date vaccination status, the Omicron variant was relatively new, and we had booster shots tailored to it. It “felt” relatively safe to do so.

I’ve kept my shots up since my bout. While I hate needles, I hate disease more. I was doing them on a 6-month cadence, but when I enquired in October last year whether I should get another, I was told that since I’m not “vulnerable”, I should wait it out until the 12-month mark. That was before JN.1 knocked on Brisbane’s door.

The Christmas period has, for the last few years, been a time when cases spike, and Christmas 2023 was no different, except this time around we had two variants vying for attention: XBB.1.5 and JN.1. The latter has been marked a variant of interest by the WHO, is becoming dominant here in Brisbane, and has prolonged the “tail” of Christmas cases we’ve become accustomed to.

So far, I’ve dodged it. Am I merely asymptomatic? That’s hard to know. Rapid antigen tests are ineffective at picking up an infection when no symptoms are present, and the “gold standard” PCR tests are not readily available.

My frustration is the lack of clear information as to what’s going on.

Community monitoring

There are some people I’d like to call out in particular, who have helped plug a gaping hole in reporting coverage and are making things a lot better…

“Dennis – The COVID info guy” has been doing a fantastic job monitoring the media and collating COVID-19 related articles from across the globe as well as domestically. Most media outlets have stopped reporting on this condition since it’s no longer “novel”, so it’s all too easy for related news to fly under the radar.

Similarly, Mike Honey has been doing a brilliant job locating the raw data sets and providing great visualisations of that data.

Both have been instrumental in surfacing information that would otherwise be difficult or impossible to find, now that we no longer have regular media updates from the respective state governments.

They both post to the auscovid19 group. If you’re on the Fediverse (e.g. Mastodon), follow @auscovid19@a.gup.pe and you’ll see posts from both (among others). I highly recommend this group.

That said, the work these two and others do is somewhat hobbled by the lacklustre reporting from today’s state governments.

Status reporting

Rewind two years, and we had very clear public tracking of two factors:

  • the number of cases detected, hospitalised and in ICU, week to week, for each area
  • the number of people vaccinated, and to what level

Admittedly, this was at a time when 3 shots was the most anyone had unless they had special consideration. These days, the better approach is to just consider whether someone is “up to date”. For most people, that is “a shot in the last 12 months”, or “a shot in the last 6 months” for “vulnerable” people.

We also had week-by-week snapshots of case numbers and, in many places, wastewater testing data.

This has all been almost completely abandoned. Queensland Health gives monthly stats, if that. Given how fast this virus moves, and how mobile we are now, I feel this was a hideously naïve decision.

Admittedly, case numbers require people to report their cases (either through their doctor or directly), but vaccination data could be automatically collated and published. We don’t need to name-and-shame people who are not up-to-date… but a breakdown of people who had a shot “within the last 6 months”, “within the last 12 months”, “12 or more months ago” and “never”, for each local government area, would be a great start!

Wastewater testing is also a pretty good proxy for case numbers. It’d be worth seeing that published again.

It was nice to have it all broken down for us, but even just the raw data would let those in the community with the tools and expertise crunch the numbers, and allow us to “do our own risk assessment”.

Mask requirements

I hate the idea of going back to needing them, but it seems we dropped restrictions way too early. Dropping restrictions really needed to happen after another crucial step: retrofitting of buildings’ HVAC systems to ensure they properly “scrub” the air.

This requirement was hinted at years ago when bushfire smoke permeated buildings and triggered smoke alarms. When COVID-19 first showed up, we thought it was “droplet” spread, hence the insistence on “social distancing” (1.5m or more), keeping surfaces sanitised, and any kind of mask you could get your hands on.

Now we understand it’s aerosol spread: it moves through buildings just like smoke does, and hangs in the air just like smoke does. It can linger for hours, and a slight draft can carry it from one end of a building to the other. 1.5m of separation and clean bench-tops are meaningless.

There’s also a call to move to KF94 or better (N95/P2 or N100/P3) masks as opposed to crappy droplet masks, ideally ones that filter both ways (inhaled and exhaled). Ocular transmission has also been observed; a face shield or glasses are sufficient protection for most people, but there’s still a small risk there. Aerosol spread, though, requires something that properly seals and filters down to PM2.5 particle levels.

Here in Queensland, it’s up to the individual business what they allow. My local doctor actually requires masks before entry, but is seemingly not fussy about which ones you choose.

A good thing in some ways, because valve-less ones really do not agree with me: I’ve tried them many times before and found I couldn’t stand wearing one for more than a few minutes… if I force myself to wear one for longer, I find the constant re-breathing of my own breath makes me light-headed at first, and later I come down with cold-like symptoms.

That said, when I was doing demolition work for HSBNE, we had vented P2/N95 masks. Those gave me no problems, and in theory, I could use those. The catch being, these work only for stopping you breathing in COVID-19 particles while you are wearing one. They do nothing about what you breathe out. They’ll work just fine if you can keep wearing one 100% of the time — but no one truly can. You have to eat and drink at some point, you’ll need to clean your teeth, you might need to show your full face to someone for identification… you may even need someone looking in your mouth for medical care. The moment you do, you can be exposed, develop an infection, then from that point, nothing is “containing” the viral load you are shedding through exhaled breath.

I actually spotted a bargain a couple of years back: a full-face elastomeric respirator with P3/N100 filters going cheap in a clearance. When I got it, I tried it on and was instantly amazed: this thing was easier to breathe in than anything I’ve owned or used before. It does, though, have the same problem out-of-the-box as the N95s we were using at HSBNE: an unfiltered exhalation port. Unlike those masks, which were single-use disposable types, this one could (unofficially at least) be retrofitted:

https://mastodon.longlandclan.id.au/@stuartl/110200584178073090/embed

The model I bought also had another trick: it could accept an air hose from a PAPR set-up, which was my next logical move if this mask didn’t work. (That said, a PAPR kit with hood is >$2000… vs $200 for this model.) I haven’t yet needed this, but it’s a welcome feature.

That filter mod was reversible, so a good option if you use the mask for work purposes and need to keep things “stock”. I found, though, that I could “optimise” things a little by drilling some holes to allow better airflow through the makeshift exhalation filter.

https://mastodon.longlandclan.id.au/@stuartl/110233139690073724/embed

This mod, although not reversible, did not compromise the filtering ability of the mask since it was simply adding more holes to an outlet grille that protected the exhalation valve from object ingress.

I’ve since ordered a second mask which I’ll leave unmodified, to use in cases where the mod is seen as unacceptable or to replace the first mask if it becomes damaged.

I did consider a half-face model, but these seem to be harder to modify for exhalation filtering. I also have a decent stash of filters that fit the existing mask, and it was cheaper to buy the full-face (which is still being sold at a clearance price) than a compatible half-face mask.

What I think may happen

We’re seeing a perfect storm of three things coming together:

  • apparently lapsed vaccination status for a majority of the population
  • a lack of official monitoring data
  • more and more infectious strains, some of which can evade vaccine immunity while causing more serious infections

What the northern hemisphere cops in its winter, we normally see in ours six months later. Not that COVID-19 is seasonal: it isn’t. However, lots of other diseases are, and these in combination with COVID-19 mean we’ll likely be in for a doozy of a winter!

The Queensland Government has not said they’d go back to lock-downs or mask mandates like the “bad old days”. In fact, they’ve so far been saying the opposite. However, if the little data they’re still collecting suggested such measures were still required, I would not at all be surprised if they back-flipped on one or both of these areas. It is a big reason why I refuse to go interstate at the moment — the fear of being locked out of home!

My feelings on this

One thing that frustrates me is the lack of official guidance coupled with the lack of data. There’s no guidance from the Queensland Government suggesting what should happen, so everyone makes their own rules. There’s no data, so those decisions all seem arbitrary.

And when you do make a decision unilaterally, it seems that no matter which way you go, it’s wrong. Earlier in the pandemic, I tried to lock down and isolate as much as possible, my reasoning being that if masks were good, not being there at all was the platinum standard for avoiding disease spread.

Some insist this is the right thing to keep doing. Don’t go out, stay home, work from home, and mask up everywhere.

It is not lost on me that I co-habit with someone who is in the “vulnerable” group (in this case: over the age of 60)… so I do need to be at least a little careful.

That said, I find myself pulled towards social outings where a mask would be highly awkward or unworkable, often by family members who are in this “vulnerable” group. Refusing that seems like the wrong thing to do as well.

I almost feel like I’m being unintentionally gaslit from both sides. My instinct is to try and “blend in”: that comes from decades of trying to “mask” my Asperger’s Syndrome… doing something different to everyone else flies in the face of that no matter how good a reason you have to do so.

What I wish governments would do

Resume more regular reporting of data

We don’t need weekly “front the media” press conferences, but there should be a data feed we can all access, one that can power community-driven dashboards so people can at least make a semi-informed decision about what to do as individuals.

This should cover both the currently circulating respiratory diseases (influenza, RSV and so on, as well as COVID-19) and vaccination status against each.

Instigate and enforce standards for air quality

We now know these diseases spread through aerosol transmission. We also know other environmental threats like dust storms and bush fires can wreak havoc on our urban buildings.

Masks work, but they’re not practical 100% of the time. It has been found that judicious use of air-purifying devices and retrofits to HVAC systems can achieve dramatic improvements in respiratory health. This needs to be studied further, with minimum standards devised.

With that in hand, building owners should first be encouraged (through grants or other means) to apply this knowledge, assess how their buildings fare, and fix any problems identified. Later, enforcement can catch up with the stragglers. Clean, disease-free air should be a right, not a privilege!

What I intend to do

Sadly, masking is not going away any time soon.

I don’t know how I’ll manage at the dentist — COVID-19 will not avoid a potential host because they happen to be occupying the dental chair, and if it’s unsafe to sit in the waiting room unmasked, it’s equally unsafe in the dentist’s chair!

Right now, with days exceeding 30°C and 75% RH, it’s too hot to be wearing one of these masks all the time unless you absolutely have to. It’s also hard to communicate in a mask.

At work

How my workplace would react to me wearing one is a complete unknown. I have previously avoided infection by staying at home… last year I was in a hybrid arrangement, working at home 4 days a week and one day a week in the office. This has changed to a 2:3 ratio (2 days at home, 3 in the office). It’s an open-plan office in a shared building, with medical facilities on the bottom floor.

I can work out on the back deck, and have done so… I might be doing that more when the weather is fine, since the risk of transmission outside is far lower. I’d rather work from home if risks are high, but there may be some days where coming in is unavoidable. I dodged a few bullets late last year.

If there’s a building-wide or workplace-wide mask mandate, there’s no decision to make — it’s either work from home or mask-up. If lots of people begin calling in sick, I guess I’ll have to make that assessment on the day.

Dining out

It’s common for my father and me to dine out… there are four regular places we go to (in Ashgrove):

  • Taj Bengal: No option to dine outside, but usually this place is quiet on Mondays/Tuesdays… dining on these days should be relatively “low risk”
  • Cafe Tutto: There’s an outside dining area which is often more pleasant than sitting inside, and we also dine there on quieter days. Decent airflow, often quiet, low-risk.
  • Osaka: Indoor dine-in only, probably the riskiest place as it can get quite popular, but usually things have been quiet, so it hasn’t been a problem.
  • Smokey Joe’s Pizza: Outside dining only, whilst I’d like to see the overhead fans pushing a little more air, things are relatively open and I don’t feel much threat dining here. We try to get there early because it gets busy later in the evening.

Concerts

My mother and I have resumed going to concerts… so far, they’ve all been open-air affairs, the most “enclosed” being Sir Paul McCartney’s “Got Back” concert at Lang Park in November last year.

I’d be very surprised if someone didn’t have some viral load there, but the open-air format means there’s not much chance for diseased aerosols to hang around.

That said, there’s security to get past. They need to identify you, and how they’d react to such a mask is a complete unknown; they may consider it excessive. I think if we are spread out enough, and there’s a decent breeze keeping the air moving, we should be safe enough without.

Shopping

Not that I do a lot of it, but right now I “read the room”… lately, however, I’ve seen more and more people masking. This is one area where masking is actually more practical.

I think this year I should try to make an effort in this area, as I have little excuse to do otherwise.

Radio comms exercises

This is an area where I really do need to be able to communicate clearly. I’ll have the mask with me, but it’ll depend on what I’m doing at the time. If I’m operating a station, the mask may need to stay off in order for me to do my job properly.

Outdoors

This is where I feel there’s the least risk. We’ll see how this winter plays out. I might mask-up for the hell of it if it gets cold enough.

Exhalation valve filtering

In cases where I do decide to mask-up… exhalation valve filtering will be predicated on a few factors:

  • Is there a mask mandate in place requiring this to be filtered? If yes, I’ll put the filter in.
  • Am I in a medical, disability care or aged-care facility? Again, in goes the filter.
  • Is this otherwise a unilateral decision in a medium or low-risk situation? I might not bother, it’s only a quick moment to slip it in if required.

Vaccination

My next shot is due in May. Around late March, I’ll have to put a booking in. Doing this is probably the single most important thing I can do.

What I urge others to do

If you’re not vaccinated, or your status is lapsed, go book a shot! The risks are low, and prevention is many times better than any cure.

If you own a building that people live or work in, go check it out for airflow issues. Employers and home owners will thank you.

Wear masks in “high-risk” situations: i.e. indoors with high population densities, in places where lots of vulnerable people are (e.g. disability and aged care centres) and in places where sick people congregate (e.g. doctors, hospitals). If you can manage it, do it in low-risk situations (not everyone can).

Stay home if you’re sick unless you’re getting medical attention (and go straight home after you’re done).

Above all: do not judge others for their mask-wearing choice either way; some just can’t wear masks at all, some will wear them all the time.

My position on generative AI

As we enter 2024, one technology seems to be looming large over many facets of society. Back in the 1960s, the idea that a “machine” (“computers” were actually people who operated calculating machines) could “think” for itself and give “intelligent” answers was the stuff of science fiction.

Television shows like Star Trek and movies/books like 2001: A Space Odyssey popularised an ever-present voice-controlled assistant that could be hailed and asked questions or given instructions. Most of these were benevolent (2001’s HAL being a notable exception).

Fast forward some 60 years, and we now have voice assistants from major technology vendors like Amazon (Alexa), Apple (Siri) and Google (“OK Google”). Microsoft tried to jump in on this too with Cortana in Windows 10, since removed. Alexa and Siri are allegedly bleeding money for their parent companies as the novelty wears off… and so these technology firms are starting to look at what’s next.

The latest gold rush seems to be generative AI. This has been brewing for some time.

Many moons ago I recall mucking around with a Markov chain plug-in that was embedded in Perlbot on IRC (no_body on the old Freenode network). Very crude, but it sometimes did generate somewhat coherent sentences. It was done for fun, and ran on the scrap CPU cycles of an old PIII 550MHz server that also hosted this blog and acted as a web server. Nothing huge by any stretch of the imagination. No GPU in sight.

A few years ago, we started seeing articles about AI systems that could generate imagery: forerunners of the likes of DALL-E. Ask one to generate a beach scene and you’d get some weird psychedelic image which vaguely looked like a beach if you squinted right, but with odd things blended together, like a seagull merged into a railing or building. Faces were badly distorted; nothing looked “right”.

Unfortunately, I cannot recall where I saw the image I’m thinking of or what keywords to search that will summon it. Otherwise I’d show an actual example. (I think it was either on The Register or Ars Technica… most likely pre-pandemic.)

Fast forward a few years, and yes, these systems could generate a vaguely believable image, but they still struggled with human anatomy. A good example is the faked Donald Trump arrest photo that did the rounds:

Donald Trump being arrested by police? No, this is an AI-generated image. (Source)

This was a big improvement on what came a few years before it, but it still had lots of visual defects.

This time last year, ChatGPT (then running on GPT-3.5) was available to the general public, and it could passably converse with people. For a statistical model, it did a remarkable job of appearing “intelligent”, but ask it to perform some basic tasks and it soon fell apart. Yes, it could generate code, but you’d constantly have to massage the prompt to get code that even compiled, let alone functioned the way required.

The big rub with all of this is the extreme amount of computation required to render the result of a simple prompt. Whether the output be text, an image, audio or video… generative AI is often highly computationally expensive, requiring vast data centres crammed full of GPUs and special-purpose ASICs, much like the cryptocurrency rigs of a few years ago. There are some small models that can run on your local computer; a top-of-the-line Raspberry Pi can just cram in some AI models with some trade-offs in accuracy. However, you cannot train an AI model with such modest hardware.

Generating the models is the real sticking point: it requires vast compute resources, and in addition, lots of data. It’s Johnny 5 on steroids! Where is that data sourced from? More often than not, it is scraped from websites without authors’ consent. While some content is public-domain, there are examples where copyrighted material was used.

Yes, we can point and laugh when an AI hallucinates a watermark, but for the copyright holder or would-be user, this is really no laughing matter. Microsoft and OpenAI are already facing a lawsuit from The New York Times over Bing Chat (now Copilot) and ChatGPT spitting out big chunks of copyrighted articles.

A human usually has a vague idea where they learned something, even if they can’t find it later… and based on that knowledge, they might have some idea whether such content can be legally used in some given context, or can at least ask. AIs typically do not tell you what source material was used in the construction of the output, nor is there any consideration given to whether you can legally use that material.

Some vendors try to make that your problem: MailChimp recently added an AI feature to its mailing-list offering, but made the user responsible for checking whether the content it generated was appropriately licensed… and decided that your user-generated content was fair game for training said AI engine.

It has been ruled in various courts that, as purely AI-generated content is not “human generated”, it is not eligible for copyright protection. (This ruling is why I was able to include the “Trump arrest” image above despite it not being “my work”.)

This is not the last we’ll see of this technology. AI is actually a very old term, dating back to the very early days of programmable electronic computers, from ELIZA (which really was a testing ground for pattern matching, not AI at all!) and PARRY (the same idea expanded a little). It includes tools like expert systems. Anyone who’s dealt with open-source software will have seen one very famous expert system: make.

Having a system that can inspect a photo, describe what is in the image, and read out whatever text might be important would be a game changer for the visually impaired. In this case, it’s simply describing what is there.

Having a text-to-speech tool that could be trained on recordings of the voice of someone who has lost their ability to speak (e.g. through motor neurone disease), which that person could then use to communicate, would be a very noble use of generative AI.

The surviving members of The Beatles recently did something similar in spirit with the song “Now and Then“, taking an old recording of John Lennon’s demo and applying some sophisticated signal processing to separate out the components so that a studio-grade recording could be produced.

The technology does have good uses. In both the latter cases, we’re not “putting words into the mouths” of these people: they’re their words, and they chose them.

However, I think this year we’ll likely see its dark side, if we haven’t already. Stephen Fry got a rude shock when he came across an audio book apparently “read” by him, except it was a book he had never actually read aloud: it was the product of generative AI. Someone had trained a text-to-speech model on his voice, then fed the book into it.

Imagine someone using tools like that to dupe a work colleague into resetting a password and enrolling a new 2FA token over the telephone? Depending on where you work, that could have disastrous consequences.

For this reason, I’m particularly leery of systems that take audio or video as input. My workplace originally used Atlassian’s HipChat as a communication tool, and when that shut down, we migrated to Slack. At the moment its privacy policy and terms of service make no mention of the use of such tools. Zoom was forced to back down on AI use after a biiig user backlash. Microsoft won’t say how it is training its models, but seems hell-bent on jamming its Copilot everywhere it can cram it. They’re even talking of a new keyboard button dedicated to it.

For this reason, I flatly refuse to touch Microsoft Teams. The last time I used it (in my browser) was for one particular meeting a couple of years ago… it picked up that I had a headset and used that for speaker audio, but when it came to the microphone, did it use the same device? Noooo… the line-in socket connected to an old Sony ST-2950F stereo tuner was more interesting!

Since then, it too has gotten the AI treatment, with little transparency about what that AI is trained on, what its functions are, or what the resulting data sets are used for. Furthermore, we’re to trust them to store such training data responsibly? The same mob whose code accepted an expired and incorrectly signed digital certificate as an access-all-areas pass?

That said, the snake-oil salesmen are out in force, and the investors are going wild. We’re seeing ChatGPT-powered sales and service bots appear on all kinds of websites now (until they’re caught out), and there are lots of sites with AI-generated screed polluting search engine results. It’ll likely play a big part in the upcoming 2024 US presidential election. We’re in for a wild year, I think.

I, for one, do not use ChatGPT or its ilk in my day-to-day work, and refuse to do so. My position on AI-infused tools like Microsoft Teams remains the same until such time as the AI feature is removed or its role is better clarified.

I still have code up on GitHub, as it was there prior to Microsoft’s purchase of that service. I don’t like that my code may be used in this manner; however, the worst-case scenario there is copyright infringement, and removing my code from GitHub does not prevent that. I regard video and audio differently: as these can be used for impersonation, I am not going to willingly supply such a feed directly into a tool that may be training itself on it for purposes unknown to me.

Right now, LLMs (large language models) are approaching the “peak of inflated expectations” phase of the Gartner hype cycle. I figure the hype will die down before long, and then their actual utility will come to the fore. They may improve the accuracy of machine language translations, specialised ones might be able to give domain-specific advice on a topic (much like a fancy expert system), and they may be able to fill in the gaps where a human can’t be there 100% of the time.

They won’t be replacing artists, journalists, programmers, etc long-term. Some of us will possibly lose jobs temporarily, but once the limitations are realised, I have a feeling those laid off will soon be fielding enquiries from those wishing to slay the monster they just created. It’ll just be a matter of time.

New laptop: StarBook Mk VI

I rarely replace computers… I’ll replace something when it is no longer able to perform its usual duty, or if I feel it might decide to abruptly resign anyway. For the last 10 years, I’ve been running a Panasonic CF-53 MkII as my workhorse, and it continues to be a reliable machine.

I just replaced the battery in it, so I now have two batteries: the original, which now has about 1.5-2 hours of capacity, and a new one, which gives me about 6 hours. A nice thing about that particular machine is it still implements legacy interfaces like RS-232 and CardBus/PCMCIA. I’ve upgraded the internal storage to a 2TB SSD and replaced the DVD burner with a Blu-ray burner. There is one thing it does lack, though, which didn’t matter much prior to 2020: an internal microphone. I can plug a headset in, and that works okay for joining in on work meetings, but if there’s a group of us, that doesn’t work so well.

The machine is also a hefty lump to lug around, being a “semi-rugged”. There’s also no webcam; not a deal-breaker, but again, a reflection of how we communicate in 2023 versus what was typical in 2013.

Given I figured it “didn’t owe me anything”… it was time to look at a replacement and get that up and running before the old faithful decided to quit working and leave me stranded. I wanted something designed for open-source software from the ground up this time around. The Panasonic worked great because it was quite conservative on specs: despite being purchased new in 2013, it sported an Intel Ivy Bridge-class Core i5 when the latest and greatest was the Haswell generation. Linux worked well, and still does, but it did so because of conservatism rather than explicit design.

Enter the StarBook Mk VI. This machine was built for Linux first and foremost; Windows is an option that you pay extra for. You can also choose your preferred CPU, and even your preferred boot firmware, with AMI UEFI and coreboot (Intel models only, for now) available.

Figuring I’ll probably be using this for the better part of 10 years… I aimed for the stars:

  • CPU: AMD Ryzen 7 5800U, 8 cores with SMT
  • RAM: 64GiB DDR4
  • SSD: 1.8TB NVMe
  • Boot firmware: coreboot
  • OS: Ubuntu 22.04 LTS (used to test the machine then install Gentoo)
  • Keyboard Layout: US
  • Power adapter: AU with 2m USB-C cable
         -/oyddmdhs+:.                stuartl@vk4msl-sb 
     -odNMMMMMMMMNNmhy+-`             ----------------- 
   -yNMMMMMMMMMMMNNNmmdhy+-           OS: Gentoo Linux x86_64 
 `omMMMMMMMMMMMMNmdmmmmddhhy/`        Host: StarBook Version 1.0 
 omMMMMMMMMMMMNhhyyyohmdddhhhdo`      Kernel: 6.5.7-vk4msl-sb-… 
.ydMMMMMMMMMMdhs++so/smdddhhhhdm+`    Uptime: 1 hour, 15 mins 
 oyhdmNMMMMMMMNdyooydmddddhhhhyhNd.   Packages: 2497 (emerge) 
  :oyhhdNNMMMMMMMNNNmmdddhhhhhyymMh   Shell: bash 5.1.16 
    .:+sydNMMMMMNNNmmmdddhhhhhhmMmy   Resolution: 1920x1080 
       /mMMMMMMNNNmmmdddhhhhhmMNhs:   WM: fvwm3 
    `oNMMMMMMMNNNmmmddddhhdmMNhs+`    Theme: Adwaita [GTK2/3] 
  `sNMMMMMMMMNNNmmmdddddmNMmhs/.      Icons: oxygen [GTK2/3] 
 /NMMMMMMMMNNNNmmmdddmNMNdso:`        Terminal: konsole 
+MMMMMMMNNNNNmmmmdmNMNdso/-           Terminal Font: Terminus (TTF) 16 
yMMNNNNNNNmmmmmNNMmhs+/-`             CPU: AMD Ryzen 7 5800U (16) @ 4.507GHz 
/hMMNNNNNNNNMNdhs++/-`                GPU: AMD ATI Radeon Vega Series / Radeon Vega Mobile Series 
`/ohdmmddhys+++/:.`                   Memory: 4685MiB / 63703MiB 
  `-//////:--.

First impressions

The machine arrived on Thursday, and I’ve spent much of the last few days setting it up. I first checked it out with the stock Ubuntu install: the machine boots up into an installer of sorts, which is good as it means you set up the user account yourself; there are no credentials loose in the box. The downside is you don’t get to pick the partition layout.

The machine, despite being ordered with coreboot, actually arrived with AMI boot firmware instead. Apparently the coreboot port for AMD systems is still under active development, and I’m told a guide describing the installation procedure will be published. A minor irritation, since I was looking forward to trying out coreboot on this machine, but not a show-stopper… I’ll give the guide a go when it becomes available.

The machine itself felt quite zippy… but then again, when you’re used to a ~12-year-old CPU, 8GB RAM and a 2TB SATA-II SSD for storage, it isn’t much of a surprise that the performance would be a big jump.

Installing Gentoo

After trying the machine out, I booted up a SysRescueCD USB stick and used GParted to shrink the Ubuntu install down to 32GiB and shove it to the end of the disk, then proceeded to create a set of partitions: Gentoo’s root, an 80GiB swap partition (seems a lot, but that’s 64GiB for suspend-to-disk plus 16GiB for contingencies), a /home partition, some LVM space for VMs, and the Ubuntu install right at the end.
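
For the record, the final layout ended up looking roughly like this (the device name and any sizes not mentioned above are illustrative, not gospel):

nvme0n1p1   EFI system partition (left where the factory install put it)
nvme0n1p2   Gentoo root
nvme0n1p3   80GiB swap (64GiB for suspend-to-disk + 16GiB contingency)
nvme0n1p4   /home
nvme0n1p5   LVM physical volume for VMs
nvme0n1p6   Ubuntu, squeezed into the last 32GiB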

I booted back into Ubuntu and used it as my environment for bootstrapping Gentoo; that way I could see how the machine behaved under heavy load. Firefox was not bad, under the circumstances. My only gripe was the tug-o-war between Ubuntu insisting that I use their Snap package, and me preferring a native install due to the former’s inability to respect my existing profile settings. This is a weekly battle I have with the office workstation.
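
For those playing along at home, the bootstrap itself was just the usual Gentoo handbook dance, done from the Ubuntu desktop. A rough sketch (device and tarball names illustrative; in my case the stage 3 was self-built):

sudo mkdir -p /mnt/gentoo
sudo mount /dev/nvme0n1p2 /mnt/gentoo
cd /mnt/gentoo
sudo tar xpf ~/stage3-amd64-*.tar.xz --xattrs-include='*.*' --numeric-owner
sudo cp /etc/resolv.conf etc/
sudo mount -t proc proc proc
sudo mount --rbind /sys sys
sudo mount --rbind /dev dev
sudo chroot . /bin/bash
# …then the usual: pick a profile, sync the tree, build a kernel, and so on.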

In discussions with Star Labs Systems, they mentioned two possible gremlins to watch out for: WiFi (important, since this machine has no Ethernet) and the touch pad.

I used a self-built Gentoo stage 3; unwittingly, one built against the still-experimental 23.0 profiles, which meant a merged-/usr base layout… but we’ll see how that goes, since it’s the direction Debian and others are headed anyway. So far, the only issue has been the inability to install openrc and minicom together, since both install a runscript binary in the same place.

Once I had enough installed to boot the Gentoo system, including a kernel, I set up the boot-loader, re-configured UEFI to boot it in preference to Ubuntu, then booted the new OS.
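
For the record, the UEFI shuffle is straightforward; a sketch assuming GRUB as the boot-loader (the ESP mount point and boot entry numbers are illustrative, check efibootmgr’s output for the real ones):

# from within the Gentoo chroot, with the EFI system partition mounted
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=gentoo
grub-mkconfig -o /boot/grub/grub.cfg
# then tell the firmware to prefer the new entry
efibootmgr
efibootmgr --bootorder 0003,0000,0001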

First boot under Gentoo

OS boot-up was near-instantaneous. I’m used to waiting 10-15 seconds, but this took no time at all.

WiFi worked out-of-the-box with kernel 6.5.7, but the touch pad was not detected. Actually, under X11 the keyboard was unresponsive too, because I had forgotten to install the various drivers for X.org. Oops! I sorted out the drivers easily enough, but the touch pad was still an issue.
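
For anyone hitting the same wall, the fix on the Gentoo side is roughly this (the VIDEO_CARDS values are my guess at what suits the Ryzen’s Vega graphics):

# /etc/portage/make.conf
VIDEO_CARDS="amdgpu radeonsi"
INPUT_DEVICES="libinput"

# then pull in the matching drivers and rebuild anything affected
emerge --ask --changed-use x11-base/xorg-drivers x11-base/xorg-server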

Troubleshooting the touch pad

To get the touch pad working, I ended up taking the Ubuntu kernel config, setting NVMe and btrfs to be built-in, and re-building the whole thing… it took a long time, but success: I had the touch pad working.
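
Roughly, the procedure was this (paths and the Ubuntu config file name are illustrative, use whatever is sitting in its /boot):

# start from Ubuntu's known-good kernel config
cp /mnt/ubuntu/boot/config-6.2.0-26-generic /usr/src/linux/.config
cd /usr/src/linux
# set NVMe and btrfs to built-in rather than modules (the change mentioned above)
scripts/config --enable BLK_DEV_NVME --enable BTRFS_FS
make olddefconfig
make -j16 && make modules_install && make install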

The tricky bit is that the touch pad is an I²C device connected via the AMD chipset and described in ACPI. Not quite sure how this will work under coreboot, but we’ll cross that bridge later. I spent a little time today paring the kernel down from the everything-kernel that Ubuntu uses to something more specific. Notably, I took out things you can’t directly plug into this machine (ISA/PCI/PCIe cards, CardBus/PCMCIA, etc.) or interfaces the machine doesn’t have (e.g. floppy drive, 8250 serial). Things that could conceivably be plugged in, like USB devices, were left in.

It took several tries, but I got something that’s workable for this hardware in the end.

Final kernel configuration

The end result is this kernel config. Intel StarBook users might be better off starting with the Ubuntu kernel config like I did and paring it back, but that file may still give you some clues.

Thoughts

Whilst compiling, this machine does not muck around… being an 8-core SMT machine, it builds things quite rapidly, although on this occasion I gave it a helping hand on some bigger packages like Chromium by using a pre-built binary package from my other machines.
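
For those unfamiliar with Portage’s binary packages, the general idea looks roughly like this (copy the package across by hand or serve it from a binhost, whichever suits):

# on a machine that has already built and installed Chromium:
quickpkg www-client/chromium
# copy the result from /var/cache/binpkgs/ to the laptop's PKGDIR, then:
emerge --usepkg www-client/chromium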

Everything I use seems to work just fine under Gentoo… and of course, having copied my /home across from the Panasonic (you never realise how much crap you’ve got until you move house!), it was just a matter of tweaking a few settings to suit the newer software installed.

I’m yet to try it out for a full day running on the battery to see how that fares. Going flat-chat doing builds, it only lasted about 2 hours, but that’s to be expected when you’ve got the machine under a heavy load.

Zoom sees the webcam and can pick up the microphone just fine. I expect Slack will too, but I’ll find that out when I return to work (ugh!) in a fortnight.

My only gripe right now is that my right pinkie finger keeps finding the SysRq/PrintScreen button when moving around with the arrow keys… I’m used to that arrow cluster being on the far right of the keyboard, not one row in like this one. Other than that, the keyboard seems reasonable for typing on. The touch pad, not being recessed, sometimes picks up stray movements when typing, but one can disable/enable it pretty easily via Fn+F10 (yes, I have Fn-lock enabled). The keyboard backlight is a nice touch too.

The lack of an Ethernet port is my other gripe, but it’s not hard to work around: I have a USB-C “dock”, bought to use with my tablet, that gives me 3×USB-3A, full-size SD, microSD, 2×HDMI, Ethernet, audio out and pass-through USB-C for charging. The Ethernet port on it works, and the laptop happily charges through it, so that works well enough.

The power supply for this thing is tiny: 65W, with USB-A and USB-C ports. I also tried charging the laptop from a conventional USB-A charger, but it did not want to know (that charger presumably doesn’t do USB PD). It should be possible to find a 12V-powered USB-C PD charger that will work, though.

The Toughbook will likely remain my go-to on camping trips and WICEN events, despite being a heavier and bigger unit: usually I’m not lugging it around much, it’s better ruggedised for outdoor activities, and it also looks about 10 years older than it really is, so it’s not attractive to steal.

New photography portfolio site: Imagery Captivation

My uncle, Peter Longland, has started branching out into some solo photography work, having done it professionally for various organisations for pretty much my entire lifetime. When going solo, though, the assumption today is that you have a portfolio that people can look up. We started this project last year; however, as both of us work full-time, work on the site took a back seat. After much development, the portfolio site went live this morning.

The site is online at: https://imagery-captivation.com/

While he has a background in graphic design, and has been doing websites longer than I have, most websites today are not built statically (or even with tools like Macromedia Dreamweaver), but rather either use static site generators (Hugo, Jekyll, Cobalt, Pelican, et al.) or are built on a content management system (Joomla, Drupal, WordPress, etc.). I’ve had a little exposure to Drupal through my workplace, but have otherwise been running this WordPress blog since 2005, when I started it as a means of communicating news about the Gentoo/MIPS project.

Thus we collaborated on this portfolio site. I let Peter take the lead on the visual design aspects, as his design sense is far better developed than mine. Where we got stuck trying to achieve a particular look, my 18 years of working with WordPress meant I had some tricks up my sleeve for bending the tool to make it happen.

For the hosting, I used my VPS at Binary Lane, which required figuring out how to get PHP 8.1 and OpenBSD’s OpenHTTPD talking to each other. Something to consider if I ever need to move this site from its present home.
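
For my own future reference, the glue is roughly this: enable the PHP-FPM service that comes with the PHP package (rcctl enable php81_fpm && rcctl start php81_fpm), then point httpd at its socket. A minimal sketch of the relevant httpd.conf server block (the document root is a placeholder, and the real thing also listens on 443 with TLS, omitted here for brevity):

server "imagery-captivation.com" {
        listen on * port 80
        root "/htdocs/imagery-captivation"
        directory index "index.php"

        # php-fpm's socket lives under httpd's /var/www chroot
        location "*.php" {
                fastcgi socket "/run/php-fpm.sock"
        }
}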

We used an off-the-shelf theme and a small number of plug-ins to assemble the site, with custom styling rounded out using hand-crafted CSS rules to override some aspects of the theme. Much of the effort was spent at Peter’s end deciding which photos to publish and performing the necessary post-processing on them to have them look their best online.

Now comes the fun bit… presenting this to the search engines. I’ve submitted the sitemap to Google’s Search Console, so that’ll get indexed in due course. Microsoft Bing has its own webmaster console, which may be worth setting up. As for DuckDuckGo, looking through their site there’s no obvious way to submit a site to the crawler, so I guess we just have to wait for it to stumble upon it organically as it waddles around the Internet.

In the meantime, I welcome constructive feedback. I’ve checked the site on a number of browsers and devices I have access to, and it seems to render fine there.

Leave at last

So… I’ve been busy at work lately, and that has been my primary focus for the last few months. A big reason why I’ve been keeping my head down is that, a few years ago, it was pointed out I had physically been with my current company for about 10 years.

Here in Australia, that milestone grants long-service leave: a bonus 8⅔ weeks of leave. It’s automatic for full-time employees of a company, and harks back to the days when people travelled from England to Australia by ship to work; the leave gave them an opportunity to travel back and visit family “back home”.

But at the time, I wasn’t there yet! See, for the first few years I was a contractor, so not a full-time employee. I didn’t become full-time until 2013, meaning my 10 years would not tick up until this year.

While the milestone is 99% symbolic, the thing is, at my age (nearing 40), I’m unlikely to ever see it come up again. If I did something that blew it or put it in jeopardy in any way, it’d be up in smoke.

There are some select cases where such leave may be granted early (between 7-10 years):

  • if the person dies, suffers total physical disability or serious illness
  • the person’s position becomes untenable
  • the person’s domestic situation forces them to leave (e.g. dropping out of work to become a carer for a family member)
  • the employer dismisses the person for reasons other than that person’s performance, conduct or capacity
  • unfair dismissal

I thought it was worth sticking it out… after 10 years it’s a done deal: the person is entitled to the full amount. If they booted me out after that, they’d still have to pay it out, plus the holiday leave (of which I have lots, because I haven’t taken much since 2018).

Employment plans

Right now, I’m not going anywhere; I’ve got nowhere to go anyway. While doing work on things like electricity billing brings me no joy whatsoever (“bill shock as-a-service” is what it feels like), it pays the bills, and I’m not quite at the point where I can safely let it all go and coast into an early retirement.

Work has actually changed a lot in the past few years. Years ago, we did a lot of Python work, and I also did some work in C. Today, it’s lots of JavaScript, which is an idiosyncratic language to say the least, and don’t get me started on the moving target that is UI frameworks for web pages!

Dealing with the disaster that is Office365 (which is threatening to invade even more of my work) doesn’t make this any easier, but sadly that piece of malware has infected most organisations now. (Just do a dig -t MX <employer-domain>, or look at the headers of an email from their employees; many show Office365 these days.) I’ve so far dodged Microsoft Teams, which I now flatly refuse to use, as I do not consent to my likeness/voice being used in AI models and Microsoft isn’t being open about how it uses AI.
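
(For the uninitiated: a domain hosted on Office365 typically answers with an MX record under mail.protection.outlook.com. The domain and answer below are illustrative, not a real lookup:)

$ dig -t MX example.com +short
0 example-com.mail.protection.outlook.com.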

Most people my age get shepherded into management positions, which is really not my scene. In a new job, I’d be competing with 20-somethings who are more familiar with the current software stacks. Non-technical jobs exist, but many assume you own a motor vehicle and possess the requisite licence to operate it.

This pretty much means I’m unemployable if I leave or am booted out, so whatever I have in the bank balance needs to last until my time on this planet is done.

Thus, I must stick it out a bit longer… I might not get the 15-year bonus (4⅓ weeks), but at least I can’t lose what I have now. If excrement does meet the rotary cooling device though, simulations suggest that with some creative accounting I may just scrape through. I don’t plan on setting up a donations page, and talking to Centrelink is a waste of time; I’ll die a pauper before they answer the phone.

Plans for this month

So, I’m on holiday leave until November. Unlike previous times I’ve taken big chunks of time off, I won’t be travelling far this time around. Instead, it’s a project month.

Financial work

I need to plan ahead for the possibility that I wind up in long-term unemployment. I don’t expect to live long (the planet cannot sustain everyone living to >100 years), but I do need to be around to finalise the estates of my parents and see my cat out.

That suggests I need to keep the lights on for another 20-30 years. Presently my annual expenditure has been around the $30k mark, but much of that is discretionary (most of it has been on music), and I could possibly reduce it to around the $10k mark.

I have some shares, but need to expand on this further. David Rowe posted some ideas in a two part series which provides some food for thought here.

At the moment, I’m nowhere near the 10% yield figure mentioned… that post was written in 2015, and a lot has changed in 8 years. Interest rates are currently around 5% for term deposits.
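
As a rough sanity check on those numbers: at 5% per annum, every $100k parked in a term deposit returns about $5k a year before tax. Covering even a pared-back $10k/year budget from interest alone would therefore need roughly $200k set aside, and the current $30k figure closer to $600k, ignoring tax and inflation.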

I do plan to start one all the same. After Suncorp closed both The Gap and Ashgrove branches (forcing me all the way to Mitchelton), I set up an account at BOQ, which has branches in both Ashgrove and The Gap… so I can set up a term deposit at either, and both are offering 5% on a 12-month term deposit.

I have a year’s worth of expenses sitting at BOQ in an interest-bearing account… so that’s money that’s readily accessible. The remainder I plan to split: some going into the aforementioned term deposit, the rest into that interest-bearing account in case I decide to buy more shares.

That should start building the reserves up.

Hardware refurbishment and replacement

Some of my equipment is getting a bit long in the tooth. The old desktop I bought back in 2010 is playing silly-buggers at the moment, and even the laptop I’m typing this on is nearing 10 years old. I have one desktop which used to be my office workstation before the pandemic, so between it and the old desktop, I have decent processing capacity.

The server rack needs work though. One compute node is down, and I’m actually running out of space. I also need to greatly expand the battery bank. I bought a full-height open-frame rack to replace the old one, and was gifted a new solar controller, so some time during this break, I’ll be assembling that, moving the old servers into it… and getting the replacement compute node up and running.

Software updates

I’ve been doing this to critical servers… I recently replaced the mail server with a new VM instance, which made the maintenance workload a lot lower… but there are still some machines that need my attention.

I’m already working on getting my Mastodon instance up to release 4.2.0 (I bumped it to 4.1.9 to at least get a security patch off my back), and there are a couple of OpenBSD routers that need updates, plus some similar remedial work.

Projects

Already mentioned is the server cluster (hardware and software), but there are some other projects that need my attention.

  • setuptools-pyproject-migration is a project that David Zaslavsky and I have been working on, intended to help Python projects migrate from the old setup.py scripts to the new pyproject.toml structure. Work has kept me busy, but the project is nearly ready for its first release. I need to help finish up the bits that are missing, and get it out there.
  • aioax25 could use some love: connected mode nearly works, plus it could do with some modernisation.
  • Brisbane WICEN‘s RFID tracking project is something I have not posted much about, but it nonetheless got a lot of attention at the Tom Quilty this year; it needs further work.

Self-Training

Some things I’d like to try and get my head around, if possible…

  • Work uses NodeJS for a lot of things, but we’re often butting up against its limits. We use several projects written in GoLang (e.g. InfluxDB, Grafana, Terraform, Vault), and while I did manage to hack some features we needed into s3sync, I should get to know GoLang properly.
  • Rust interests me a lot. I should at least have a closer look at this and learn a little. It has been getting a mention around the office in the context of writing NodeJS extensions. Definitely worth looking into further.
  • I need to properly understand OAuth2; I don’t think I completely get it as it stands, and I’m not sure I’m doing it “right”.
  • COSE would have applications in both the WideSky Hub (end-to-end encryption) and in Brisbane WICEN’s RFID tracking system (digital signatures).

Physical exercise

I have not been out on the bike for some time, and it shows! I need to get out more. I intend to do quite a bit of that over the next few weeks.

Maybe I might do the odd over-nighter, but we’ll see.

Assembling a Diamond X300N antenna

Recently, I noticed the 2m flower-pot antenna that has been my home base antenna for some 15 years had developed a high VSWR. Either that, or the feed-line had; I haven’t yet narrowed down which.

I tried a mobile antenna on a mag-mount base sat atop the corrugated-iron roof of the deck, plugged in where the flower-pot connects, and got good results there. So I think the cabling running from the back deck into my room is fine; it’s just the section connecting the bulk-head BNC to the outside world, and/or the antenna itself.

That section includes a 3m length of HDF-400 coax, purchased back in the days of the now-defunct Brisbane Mesh. I figured if the coax is bad, okay, I’ll re-use the flower-pot elsewhere (it’d be great as a small antenna to use at Imbil); the base antenna could use an upgrade anyway.

I settled on a Diamond X300N: something with decent gain on 2m/70cm, not too obnoxiously large, and still able to get a signal out from up high. I ordered it through Andrews Communications, and it arrived late last week… today I finally got around to putting it together.

The instructions seemed simple enough:

Okay, so insert this bit into that bit… sounds simple enough, except:

(1) the tip of the lower element has a tag on it saying “do not pull on this” (fair enough), and
(2) the coupling I’m supposed to insert it into is buried far down the end of the upper antenna section

Turns out you can put the hack-saw away; the answer is simple enough. The top section can slide back and forth, and in transit it may settle inside the tube. You grab the top section by the upper outer-shell joint to stop it rattling, and bang the whole lot against a flat surface to encourage gravity to “pull” that upper conductor down. Eventually it reaches the bottom and you can pull it out with conventional needle-nose pliers.

The instructions are pretty straightforward from here, although installation will have to wait until someone is here to guard the door, or until “Jesus Cat” is sound asleep, as I’m not having a repeat of last weekend.

Missing cat, “Sam”: Lower Eranga St, The Gap — FOUND

Current situation

He has returned this morning. Tracey sent me a message via SMS reporting some neighbours across the street from me had spotted him sleeping on their outdoor lounge. When they tried to catch him, he made a bolt for it and disappeared.

I had a quick shower and got dressed, then went outside to have a look around. Sure enough, he was across the street, and on seeing me, immediately came running. The bed bug is back. I think it’ll be supervised outdoor access only from now on.

Unconfirmed sightings

At around 6PM on Friday, whilst I was searching the back yard with a headlight, I did spot what appeared to be two green eyes like Sam’s reflecting back at me; the light, however, seemed to spook the cat, and it disappeared, not to be seen since. This was in the back yard, near the fence between 92 and 94 Settlement Road.

Later, around 8PM, I did another walk and saw another cat, not unlike Sam, frolicking around Eranga St / Katoa St… however, I also know there is a cat living in that area that looks very similar to Sam (possibly a Burmese or Russian Blue). This cat dived under a parked van.

Update 2023-07-01T18:09 With the help of neighbour Tracey, I was able to get a photo and a small post sent to ‘The Gap Grapevine’ Facebook group. Via that group, a resident in Tamboura Ct reportedly saw a cat matching Sam’s description around mid-day Friday. Tracey and I did a quick search of that area, and I re-visited it this evening, walking up to where the old quarry was, crossing Settlement Road, and walking back. Thankfully no signs of fresh road kill, but no cat either.

Areas searched

I have done several walks around lower Eranga St. / Katoa St / Kaloma Rd / Settlement Rd, down to Chaprowe Rd. Neither the cat nor any signs of road kill were observed along the route.

I also did a walk down around Michalea Cres, with no sightings.

Description

  • Name: Sam
  • Sex: Male
  • Breed: Russian Blue / Domestic Shorthair Cross
  • Age: Very close to 8 years old
  • Weight: 10.4kg at last weigh-in
  • Temperament: Usually quite relaxed, but is known to give people a nip
  • Desexed: Yes, but some parts (“decorative” in function now) remain
  • Microchipped: Yes, HomeSafeID tag ending in …1857
  • Collar: No
  • Diet: Eats a mix of roo or beef mince and dry biscuits; usually a large teaspoon of mince and ⅛ cup of biscuits, morning and afternoon. At present he’s on metabolic biscuits to try to get his weight down.
  • Health issues: none other than having a bit of weight on him.
  • Veterinary Clinic: The Gap Vet Surgery, corner Settlement Rd / Waterworks Rd, The Gap.

Contact details

No longer relevant.

Has he been known to roam before?

Yes, but usually the pattern is he’ll visit neighbours for a few hours, then return home.

Prior to me adopting him in 2020 from my maternal grandmother (who had suffered a fall that would later prove fatal), he had been known to occasionally go out on multi-night jaunts; since coming here, though, he had until now always come home after an hour or two.

Outdoors access

He is normally let out in the mornings (as he is an outdoor cat) with the expectation that he’ll be back within a few hours. By 3PM he’s normally inside bellowing for dinner, so we’ll usually shut the door at that point and keep him indoors overnight.

There are no “cat doors” at our property.

Generating ball tickets/programmes using LaTeX

My father does a lot of Scottish Country dancing; he was treasurer for the Clan MacKenzie association for quite a while, and president there for about 10 years too. He was given the task of making some ball tickets, each one uniquely numbered.

After hearing him swear at LibreOffice for a bit, then at Avery’s label making software, I decided to take matters into my own hands.

The first step was to come up with a template. The programmes were to be A6-size booklets, made up of A5 pages folded in half. For ease of manufacture, they would be printed two to a page on A4 sheets.

These templates would serve as the outer and inner pages; the outer pages carry a placeholder that we’d substitute later.

The outer pages of the programme/ticket booklet… yes there is a typo in the last line of the “back” page.
\documentclass[a5paper,landscape,16pt]{minimal}
\usepackage{multicol}
\setlength{\columnsep}{0cm}
\usepackage[top=1cm, left=0cm, right=0cm, bottom=1cm]{geometry}
\linespread{2}
\begin{document}
\begin{multicols}{2}[]

\vspace*{1cm}

\begin{center}
\begin{em}
We thank you for your company today\linebreak
and helping to celebrate 50 years of friendship\linebreak
fun and learning in the Redlands.
\end{em}
\end{center}

\begin{center}
\begin{em}
May the road rise to greet you,\linebreak
may the wind always be at your back,\linebreak
may the sun shine warm upon your face,\linebreak
the rains fall soft upon your fields\linebreak
and until we meet again,\linebreak
may God gold you in the palm of his hand.
\end{em}
\end{center}

\vspace*{1cm}

\columnbreak
\begin{center}
\begin{em}
\textbf{CLEVELAND SCOTTISH COUNTRY DANCERS\linebreak
50th GOLD n' TARTAN ANNIVERSARY TEA DANCE}\linebreak
\linebreak
1973 - 2023\linebreak
Saturday 20th May 2023\linebreak
1.00pm for 1.30pm - 5pm\linebreak
Redlands Memorial Hall\linebreak
South Street\linebreak
Cleveland\linebreak
\end{em}
\end{center}

\begin{center}
\begin{em}
Live Music by Emma Nixon \& Iain Mckenzie\linebreak
Black Bear Duo
\end{em}
\end{center}

\vspace{1cm}

\begin{center}
\begin{em}
Cost \$25 per person, non-dancer \$15\linebreak
\textbf{Ticket No \${NUM}}
\end{em}
\end{center}
\end{multicols}
\end{document}

The inner pages were the same for all booklets, so we just came up with one file that was used for all. I won’t put the code here, but suffice to say, it was similar to the above.

The inner pages, no placeholders needed here.

So we had two files: ticket-outer.tex and ticket-inner.tex. What next? Well, we needed to make 100 versions of ticket-outer.tex, each with a different number substituted for the ${NUM} placeholder, and rendered as PDF. Similarly, we needed the inner pages rendered as a PDF (which we can do just once, since they’re all the same).

#!/bin/bash
# Number of uniquely-numbered tickets to produce.
NUM_TICKETS=100

set -ex

# The inner pages are identical for every ticket, so render them just once.
pdflatex ticket-inner.tex

# For each ticket, substitute the ticket number for the \${NUM} placeholder
# in the outer-page template, then render that ticket's outer pages.
for n in $( seq 1 ${NUM_TICKETS} ); do
	sed -e 's:\\\${NUM}:'${n}':' \
            < ticket-outer.tex \
            > ticket-outer-${n}.tex
	pdflatex ticket-outer-${n}.tex
done

This gives us a single ticket-inner.pdf, and 100 different ticket-outer-NN.pdf files that look like this:

A ticket outer pages document with substituted placeholder

Now, we just need to put everything together. The final document should have no margins, and should just import the relevant PDF files in place. So naturally, we script it; this time stepping through the tickets two at a time, so we can assemble the A4 PDF document from our A5 tickets: the outer pages of the odd-numbered ticket, the outer pages of the even-numbered ticket, followed by two copies of the inner pages. Repeat for all tickets. We also need to ensure that initial paragraph lines are not indented; setting \parindent to zero solves that.

This is the rest of my quick-and-dirty shell script:

# Build the master document: A4 pages, no margins, paragraphs not indented.
cat > tickets.tex <<EOF
\documentclass[a4paper]{minimal}
\usepackage[top=0cm, left=0cm, right=0cm, bottom=0cm]{geometry}
\usepackage{pdfpages}
\setlength{\parindent}{0pt}
\begin{document}
EOF
# Step through the tickets two at a time: the outer pages for tickets n and
# n+1, then two copies of the shared inner pages.
for n in $( seq 1 2 ${NUM_TICKETS} ); do
	m=$(( ${n} + 1 ))
	cat >> tickets.tex <<EOF
\includegraphics[width=21cm]{ticket-outer-${n}.pdf}
\includegraphics[width=21cm]{ticket-outer-${m}.pdf}
\includegraphics[width=21cm]{ticket-inner.pdf}
\includegraphics[width=21cm]{ticket-inner.pdf}
EOF
done
cat >> tickets.tex <<EOF
\end{document}
EOF
pdflatex tickets.tex

The result is a 100-page PDF which, when printed double-sided, will yield a stack of tickets that are uniquely numbered and serve as programmes.

Mastodon experiment: a few months in

A little while back I decided to try out Mastodon, deploying my own instance running as a VM on my own hardware. This was done primarily to act as a testing ground for experimenting with integrations, but also as a means of keeping up with the news.

The latter is particularly important, as I no longer have the radio on all the time. I might hear a news item in the morning, but after the radio switches off, I’m unlikely to turn it back on until the next working day. A lot of news outlets moved to Twitter over the past decade, but with that site in its death throes, the ActivityPub ecosystem is looking like a safer bet.

Not many outlets are officially using this newer system yet. There are a few outlets that do publish directly to Mastodon/ActivityPub, examples being Rolling Stone, The Markup (who run their own instance), STAT, The Conversation AU/NZ and OSNews. Some outlets aren’t officially posting on ActivityPub, but are nonetheless visible via bridges from RSS (e.g. Ars Technica), and others are proxies of those outlets’ Twitter accounts (e.g. Reuters, Al Jazeera, The Guardian, Slashdot). Others are there, but it’s not clear how the material is being mirrored or whether they’re official.

There’s also a decent dose of satire if you want it, including satirical news outlets The Chaser and The Shovel, and cartoonists such as Christopher Downes, David Rowe, Fiona Katauskas, Jon Kudelka, David Pope, Cathy Wilcox and Glen Le Lievre.

As you can gather, a big chunk of who I follow is actually news outlets, or humour. There are a few people on my “follow” list who are known for posting various humour pieces from elsewhere, and I often “boost” (re-post) their content.

Meta (who run Facebook) have made noises that they might join in with their own Twitter-clone in the ActivityPub fediverse. I wouldn’t mind this so much — the alternatives to them doing this are: (1) the rest of us needing dozens of social media accounts to keep in touch with everybody, (2) relying on the good will of some mega-site to connect us all, or (3) forgoing being in touch altogether.

I tried option (1) in the early part of this century, and frankly I’m over it. Elon Musk dreams of Twitter becoming option (2), but I think the chances of that happening are Buckley’s and none. Option (3) is not realistic; we’re social beings.

Some of these instances will be ad supported, and I guess that’s a compromise we may have to live with. Servers need electricity and Internet to run, and these are not free. A bigger cost to running big social networks is actually managing the meat-ware side of the network — moderation, legal teams to decide how moderation should be applied, handling take-down notices… etc.

ActivityPub actually supports flagging a post so it is not “listed” (indexed by instances’ search engines), marking it private (visible to followers only and unable to be boosted), or even restricting it to just those mentioned specifically. I guess there’s room for one more: “non-commercial use only” — commercial instances could then decide whether to forgo the advertising on that post, or to filter the post out.

ActivityPub posting privacy settings on Mastodon

I did hear rumblings that the EU was likely to pass some laws requiring a certain level of interoperability between social networks, which ActivityPub could in fact be the basis of.

Some worry about another Eternal September moment — a repeat of the time when AOL disgorged its gaggle of novice Internet users on an unsuspecting Usenet system. Usenet users prior to AOL opening up in 1993 only had to deal with similar shenanigans once a year around September when each new batch of first year uni students would receive their Internet access accounts.

I’m not sure the linking-in of a site like Facebook or Tumblr (who have also mentioned joining the Fediverse) is all that big a deal — Mastodon lets you block whole domains if you so choose, and who says everybody on a certain site is going to cause trouble?

Email is a federated system, always has been, and while participation as a small player is more complicated than it used to be, it is still doable. Big players like Microsoft and Google haven’t killed off email (even with the former doing their best to do so with sub-par email clients and servers). Yes, we have a bigger spam problem than we had back in the 90s, but keeping the signal-to-noise ratio up to useful levels is not impossible, even for mere mortals.

We do have to be mindful of the embrace-extend-break game that big business likes to play with open protocols, but I think Mastodon gGmbH’s status as a not-for-profit maintaining a reference implementation should help here.

I’d rather throw my support behind a system that allows us all to interoperate, managing any misbehaviour that arises on a case-by-case basis; that is a better solution than each of us developing our own little private islands. The info-sec press seem to have been quick to jump ship from Twitter to Mastodon. The IT press is taking a little longer, but there’s a small but growing group there. I think the journalism world is going to be key to making this work and ensuring there’s good-quality content to drown out the low-quality noise. If big players like Meta joining in helps push this along, I think it is worth encouraging.

A crude attempt at memory management

The other day I had a bit of a challenge to deal with. My workplace makes embedded data collection devices which are built around the Texas Instruments CC2538 SoC (internal photos visible here) and run OpenThread. To date, everything we’ve made has been an externally-powered device, running off either DC power (9-30V) or mains (120/240V 50/60Hz AC). CC2592 range extender support was added to OpenThread for this device.

The CC2538, although very light on RAM (32KiB), gets the job done with some constraints. Necessity threw us a curve-ball the other day: we wanted a device that ran off a battery. That meant periodically going into sleep mode; deep sleep, in fact! The CC2538 has a number of operating modes:

  1. running mode (pretty much everything turned on)
  2. light sleep mode (clocks, CPU and power stays on, but we pause a few peripherals)
  3. deep sleep mode — this comes in four flavours
    • PM0: Much like light-sleep, but we’ve got the option to pause clocks to more peripherals
    • PM1: PM0, plus we halt the main system clock (32MHz crystal or 16MHz RC), halting the CPU
    • PM2: PM1 plus we power down the bottom 16KiB of RAM and some other internal peripherals
    • PM3: PM2 plus we turn off the 32kHz crystal used by the sleep timer and watchdog.

We wanted PM2, which meant that while we could use the bottom 16KiB of RAM at run-time, the moment we went to sleep we had to forget about whatever was kept in that bottom 16KiB of RAM — since without power it would lose its state anyway.

The challenge

Managing RAM in a device like this is always a challenge. malloc() is generally frowned upon; however, in some cases it’s a necessary evil. OpenThread internally uses mbedTLS, and that relies on having a heap. It can use one implemented by OpenThread, or one provided by you. Our code also uses malloc() for some things, notably short-term tasks like downloading a new configuration file or buffering serial traffic.

The big challenge is that OpenThread itself uses a little over 9KiB of RAM. We have a 4KiB stack. We’ve got under 3KiB left. That’s bare-bones OpenThread. If you want JOINER support, for joining a mesh network, that pulls in DTLS, which, by default, will tell OpenThread to static-allocate a 6KiB buffer.

9KiB becomes about 15KiB; plus the stack, that’s 19KiB. This is bigger than 16KiB — the linker gives up.

Using heap memory

There is a work-around that gets things linking; you can build OpenThread with the option OPENTHREAD_CONFIG_HEAP_EXTERNAL_ENABLE — if you set this to 1, OpenThread forgoes its own heap and just uses malloc / free instead, implemented by your toolchain.

OpenThread builds and links in 16KiB RAM, hooray… but then you try joining, and NoBufs is the response. We’re out of RAM. Moving things to the heap just kicked the can down the road: we still need that 6KiB, but we only have under 3KiB to give it. Not enough.

We have a problem in that the toolchain we use is built on newlib, and while it implements malloc / free / realloc, it does so atop a primitive called _sbrk(). We define a pointer initialised to the top of our .bss, and whenever malloc needs more memory for the heap, it calls _sbrk(N); we grab the value of our pointer, add N to it, and return the old value. Easy.
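
For reference, a typical newlib _sbrk() shim looks something like this minimal sketch (the _heap_start / _heap_end symbols stand in for whatever the linker script actually exports, and the exact prototype varies between toolchains, so treat the names as illustrative):

#include <errno.h>
#include <stddef.h>
#include <stdint.h>

extern uint8_t _heap_start;   /* end of .bss; provided by the linker script (assumed name) */
extern uint8_t _heap_end;     /* top of the retained SRAM (assumed name) */

void *_sbrk(ptrdiff_t increment)
{
        static uint8_t *brk = &_heap_start;
        uint8_t *prev = brk;

        if ((brk + increment) > &_heap_end) {
                errno = ENOMEM;
                return (void *)-1;
        }

        brk += increment;
        return prev;          /* newlib's malloc grows its heap from here */
}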

Except… we don’t just have one memory pool now, we have two, one of which we cannot use all the time. OpenThread, via mbedTLS, also winds up calling malloc() very early in the initialisation (as early as the otInstanceInitSingle() call that initialises OpenThread). We need that block of RAM to wind up in the upper 16KiB that stays powered on — so we can’t start at address 0x2000:0000 and just skip over .data/.bss when we run out.

malloc() will also get mighty confused if we suddenly hand it an address that’s lower than the one we handed out previously. We can’t go backwards.

I looked at replacing malloc() with a dual-pool-aware version, but newlib is hard-coded in a few places to use its own malloc() and not a third-party one. picolibc might let us swap it out, but getting that integrated looked like a lot of work.

So we’re stuck with newlib‘s malloc() for better or worse.

The hybrid approach

One option: we can’t control which malloc the newlib functions use, so let newlib’s malloc, backed by _sbrk(), manage the upper heap, and wrap that malloc with our own creation that we pass to OpenThread. We implement otPlatCAlloc and otPlatFree — which are, essentially, calloc and free wrappers.

The strategy is simple: first try the normal calloc; if that returns NULL, then use our own.
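
As a rough sketch (not our production code), the wrappers handed to OpenThread might look like this; lowheap_calloc() and lowheap_free_if_owned() are hypothetical helpers for the lower-SRAM pool, which I get to below:

#include <stdbool.h>
#include <stdlib.h>
#include <openthread/platform/memory.h>

/* Hypothetical helpers for the lower 16KiB pool, sketched later in this post. */
void *lowheap_calloc(size_t num, size_t size);
bool lowheap_free_if_owned(void *ptr);

void *otPlatCAlloc(size_t aNum, size_t aSize)
{
        /* First preference: the regular newlib heap, up in the always-on SRAM. */
        void *ptr = calloc(aNum, aSize);

        if (ptr == NULL) {
                /* Fall back to the lower 16KiB: usable while awake, lost in PM2. */
                ptr = lowheap_calloc(aNum, aSize);
        }

        return ptr;
}

void otPlatFree(void *aPtr)
{
        if (aPtr == NULL) {
                return;
        }

        /* If the pointer isn't in the lower pool, it came from newlib's heap. */
        if (!lowheap_free_if_owned(aPtr)) {
                free(aPtr);
        }
}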

Re-purposing an existing allocator

The first rule of software engineering: don’t write code you don’t have to. So naturally I went looking for options.

Page upon page of “No man don’t do it!!!”

jemalloc looked promising at first; it is the FreeBSD malloc(). But therein lies a problem — it’s a pretty complicated piece of code aimed at x86 computers with megabytes of RAM at a minimum. It uses uint64_ts in a lot of places and seemed like it would carry a pretty high overhead on a little CC2538.

I tried avr-libc‘s malloc — it’s far simpler, and is actually a free-list implementation like newlib‘s version, but there is a snag. See, AVR microcontrollers are 8-bit beasts; they don’t care about memory alignment. But the Cortex-M3 does! avrlibc_malloc did its job and handed back a pointer, but then I wound up in a HARDFAULT condition because mbedTLS tried to access a 32-bit word that was offset by a few bytes.

A simple memory allocator

The approach I took was a crude one: I would allocate memory in fixed-size “blocks”. I first ran the OpenThread code under a debugger and set a break-point on malloc to see what sizes it was asking for — mostly blocks around the 128-byte mark, sometimes bigger, sometimes smaller. 64-byte blocks would work pretty well, although for initial testing I went the lazy route and used 8-byte blocks: uint64_ts.

In my .bss, I made an array of uint8_ts, sized to the number of 8-byte blocks in the lower heap divided by 4. This would be my usage bitmap — each block was allocated two bits, which I accessed using bit-banding: one bit I called used, which simply reported that the block was being used. The second was called chained, and indicated that the data stored in this block spilled over into the next block.

To malloc some memory, I’d simply look for a string of free blocks big enough. When it came to freeing memory, I simply started at the block referenced, and cleared bits until I got to a block whose chained bit was already cleared. Because I was using 8-byte blocks, everything was guaranteed to be aligned.

8-byte blocks in 16KiB (2048 blocks) wound up with 512 bytes of usage data. As I say, using 64-byte blocks would be better (only 256 blocks, which fits in 64 bytes), but this was a quick test. The other trick would be to use the very first few blocks to store that bitmap (for 64-byte blocks, we only need to reserve the first block).

The scheme is somewhat inspired by the buddy allocator scheme, but simpler.
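
To make the numbers concrete, the backing declarations for such a scheme might look like this sketch; the constants and names are illustrative (following the figures above), with lowheap_usage_bytes being the array referenced by the bit-band macro shown shortly:

#include <stdint.h>

#define LOWHEAP_BASE        ((uint8_t *)0x20000000)  /* bottom of SRAM, powered off in PM2 */
#define LOWHEAP_SIZE        (16u * 1024u)            /* the lower 16KiB */
#define LOWHEAP_BLOCK_SIZE  (8u)                     /* one uint64_t per block */
#define LOWHEAP_BLOCKS      (LOWHEAP_SIZE / LOWHEAP_BLOCK_SIZE)  /* 2048 blocks */

/* Two flag bits per block (used + chained) packs four blocks per byte, so
 * 2048 blocks need 512 bytes of usage data.  This lands in .bss, up in the
 * retained half of SRAM. */
static uint8_t lowheap_usage_bytes[LOWHEAP_BLOCKS / 4];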

Bit banding was simple enough; I defined my struct for accessing the bits:

struct lowheap_usage_t {
        uint32_t used;
        uint32_t chained;
};

and in my code, I used a C macro to do the arithmetic:

#define LOWHEAP_USAGE                                                   \
        ((struct lowheap_usage_t*)(((((uint32_t)&lowheap_usage_bytes)   \
                                     - 0x20000000)                      \
                                    * 32)                               \
                                   + 0x22000000))

The magic numbers here are:

  • 0x20000000: the start of SRAM on the CC2538
  • 0x22000000: the start of the SRAM bit-band region
  • 32: each byte of real SRAM expands to 32 bytes in the bit-band alias region (8 bits, each mapped to its own 32-bit word)

Then, in my malloc, I could simply call…

struct lowheap_usage_t* usage = LOWHEAP_USAGE;

…and treat usage like an array; where element 0 was the usage data for the very first block down the bottom of SRAM.

To implement a memory allocator, I needed five routines (a couple of which are sketched just after this list):

  • one that scanned through, and told me where the first free block was after a given block number (returning the block number) — static uint16_t lowheap_first_free(uint16_t block)
  • one that, given the start of a run of free blocks, told me how many blocks following it were free — static uint16_t lowheap_chunk_free_length(uint16_t block, uint16_t required)
  • one that, given the start of a run of chained used blocks, told me how many blocks were chained together — static uint16_t lowheap_chunk_used_length(uint16_t block)
  • one that, given a block number and count, would claim that number of blocks starting at the given starting point — static void lowheap_chunk_claim(uint16_t block, uint16_t length)
  • one that, given a starting block, would clear the used bit for that block, and if chained was set; clear it and repeat the step on the following block (and keep going until all blocks were freed) — static void lowheap_chunk_release(uint16_t block)
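
To give a flavour of these, here are rough sketches of the first and last of them, leaning on the struct lowheap_usage_t, LOWHEAP_USAGE and LOWHEAP_BLOCKS definitions above; illustrative rather than the final code:

/* Return the first free block at or after `block`, or LOWHEAP_BLOCKS if
 * everything from `block` onwards is in use. */
static uint16_t lowheap_first_free(uint16_t block)
{
        struct lowheap_usage_t* usage = LOWHEAP_USAGE;

        while ((block < LOWHEAP_BLOCKS) && usage[block].used) {
                block++;
        }

        return block;
}

/* Release the chunk starting at `block`: clear each block's `used` bit,
 * following the `chained` flags until the last block of the chunk. */
static void lowheap_chunk_release(uint16_t block)
{
        struct lowheap_usage_t* usage = LOWHEAP_USAGE;
        uint32_t chained;

        do {
                chained = usage[block].chained;
                usage[block].used = 0;
                usage[block].chained = 0;
                block++;
        } while (chained && (block < LOWHEAP_BLOCKS));
}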

From here, implementing calloc was simple (steps 2-5 are sketched as the hypothetical lowheap_calloc() just after this list):

  1. first, try the newlib calloc and see if that succeeded. Return the pointer we’re given if it’s not NULL.
  2. if we’re still looking for memory, round up the memory requirement to the block size.
  3. initialise our starting block number (start_nr) by calling lowheap_first_free(0) to find the first block; then in a loop:
    • find the size of the free block (chunk_len) by calling lowheap_chunk_free_length(start_nr, required_blocks).
    • If the returned size is big enough, break out of the loop.
    • If not big enough, advance start_nr past the too-small free run and the used chunk that follows it: add chunk_len to start_nr, then add the return value of lowheap_chunk_used_length(start_nr).
    • Stop iterating if start_nr is equal to or greater than the total number of blocks in the heap.
  4. If start_nr winds up being past the end of the heap, fail with errno = ENOMEM and return NULL.
  5. Otherwise, we’re safe: call lowheap_chunk_claim(start_nr, required_blocks) to reserve our space, zero out the blocks allocated, then return the address of the first block cast to void*.
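
Putting steps 2-5 together, the hypothetical lowheap_calloc() might look like this sketch, built on the constants and primitives above (overflow checking omitted for brevity):

#include <errno.h>
#include <string.h>

void *lowheap_calloc(size_t num, size_t size)
{
        size_t bytes = num * size;
        uint16_t required_blocks =
                (uint16_t)((bytes + LOWHEAP_BLOCK_SIZE - 1) / LOWHEAP_BLOCK_SIZE);

        /* Hunt for a long-enough run of free blocks. */
        uint16_t start_nr = lowheap_first_free(0);

        while (start_nr < LOWHEAP_BLOCKS) {
                uint16_t chunk_len = lowheap_chunk_free_length(start_nr, required_blocks);

                if (chunk_len >= required_blocks) {
                        break;  /* this run is big enough */
                }

                /* Skip the too-small free run, then the used chunk after it. */
                start_nr += chunk_len;
                start_nr += lowheap_chunk_used_length(start_nr);
        }

        if (start_nr >= LOWHEAP_BLOCKS) {
                errno = ENOMEM;  /* no run of blocks was big enough */
                return NULL;
        }

        /* Claim the run, zero it, and hand back the address of its first block. */
        lowheap_chunk_claim(start_nr, required_blocks);

        void *ptr = (void *)(LOWHEAP_BASE + ((uint32_t)start_nr * LOWHEAP_BLOCK_SIZE));
        memset(ptr, 0, (size_t)required_blocks * LOWHEAP_BLOCK_SIZE);
        return ptr;
}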

Implementing free was not a challenge either: either the pointer was above our heap, in which case we simply passed the pointer to newlib‘s free — or if it was in our heap space, we did some arithmetic to figure out which block that address was in, and passed that to lowheap_chunk_release().
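
The matching release path, the hypothetical lowheap_free_if_owned() from the earlier sketch, might then look like this; it returns false for pointers outside the lower pool so the caller knows to hand them to newlib's free() instead:

bool lowheap_free_if_owned(void *ptr)
{
        uint8_t *p = (uint8_t *)ptr;

        if ((p < LOWHEAP_BASE) || (p >= (LOWHEAP_BASE + LOWHEAP_SIZE))) {
                return false;  /* not ours; it came from newlib's heap */
        }

        /* Work out which block this address falls in, then release the chunk. */
        uint16_t block = (uint16_t)((p - LOWHEAP_BASE) / LOWHEAP_BLOCK_SIZE);
        lowheap_chunk_release(block);
        return true;
}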

I won’t publish the full implementation because I didn’t get it working properly in the end, but I figured I’d put the notes here on how I put it together, to re-visit in the future. Maybe the thoughts might inspire someone else. 🙂