My father does a lot of Scottish Country dancing; he was treasurer of the Clan MacKenzie association for quite a while, and president there for about 10 years too. He was given the task of making some ball tickets, each one uniquely numbered.
After hearing him swear at LibreOffice for a bit, then at Avery’s label making software, I decided to take matters into my own hands.
The first step was to come up with a template. The programmes were to be A6-size booklets, made up of A5 pages folded in half; for ease of manufacture, they would be printed two to a page on A4 sheets.
There would be two templates: one for the outer pages and one for the inner pages. The outer pages would carry a placeholder that we'd substitute with the ticket number.
The outer pages of the programme/ticket booklet… yes there is a typo in the last line of the “back” page.
\documentclass[a5paper,landscape,16pt]{minimal}
\usepackage{multicol}
\setlength{\columnsep}{0cm}
\usepackage[top=1cm, left=0cm, right=0cm, bottom=1cm]{geometry}
\linespread{2}
\begin{document}
\begin{multicols}{2}[]
\vspace*{1cm}
\begin{center}
\begin{em}
We thank you for your company today\linebreak
and helping to celebrate 50 years of friendship\linebreak
fun and learning in the Redlands.
\end{em}
\end{center}
\begin{center}
\begin{em}
May the road rise to greet you,\linebreak
may the wind always be at your back,\linebreak
may the sun shine warm upon your face,\linebreak
the rains fall soft upon your fields\linebreak
and until we meet again,\linebreak
may God gold you in the palm of his hand.
\end{em}
\end{center}
\vspace*{1cm}
\columnbreak
\begin{center}
\begin{em}
\textbf{CLEVELAND SCOTTISH COUNTRY DANCERS\linebreak
50th GOLD n' TARTAN ANNIVERSARY TEA DANCE}\linebreak
\linebreak
1973 - 2023\linebreak
Saturday 20th May 2023\linebreak
1.00pm for 1.30pm - 5pm\linebreak
Redlands Memorial Hall\linebreak
South Street\linebreak
Cleveland\linebreak
\end{em}
\end{center}
\begin{center}
\begin{em}
Live Music by Emma Nixon \& Iain Mckenzie\linebreak
Black Bear Duo
\end{em}
\end{center}
\vspace{1cm}
\begin{center}
\begin{em}
Cost \$25 per person, non-dancer \$15\linebreak
\textbf{Ticket No \${NUM}}
\end{em}
\end{center}
\end{multicols}
\end{document}
The inner pages were the same for all booklets, so we just came up with one file that was used for all. I won’t put the code here, but suffice to say, it was similar to the above.
The inner pages, no placeholders needed here.
So we had two files; ticket-outer.tex and ticket-inner.tex. What next? Well, we needed to make 100 versions of ticket-outer.tex, each with a different number substituted for $NUM, and rendered as PDF. Similarly, we needed the inner pages rendered as a PDF (which we can do just once, since they’re all the same).
#!/bin/bash
NUM_TICKETS=100

set -ex

pdflatex ticket-inner.tex
for n in $( seq 1 ${NUM_TICKETS} ); do
        sed -e 's:\\\${NUM}:'${n}':' \
                < ticket-outer.tex \
                > ticket-outer-${n}.tex
        pdflatex ticket-outer-${n}.tex
done
This gives us a single ticket-inner.pdf, and 100 different ticket-outer-NN.pdf files that look like this:
A ticket outer pages document with substituted placeholder
Now, we just need to put everything together. The final document should have no margins, and should just import the relevant PDF files in-place. So naturally, we just script it; this time stepping every 2 tickets, so we can assemble the A4 PDF document with our A5 tickets: outer pages of the odd-numbered ticket, outer pages of the even-numbered ticket, followed by two copies of the inner pages. Repeat for all tickets. We also need to ensure that initial paragraph lines are not indented; setting \parindent to zero solves that.
This is the rest of my quick-and-dirty shell script:
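(A sketch of one way to do that assembly, assuming the pdfpages package and the file names generated above.)
#!/bin/bash
# Sketch: assumes the pdfpages LaTeX package and the ticket-outer-N.pdf /
# ticket-inner.pdf files generated above.
NUM_TICKETS=100

set -e

cat > tickets.tex <<'EOF'
\documentclass[a4paper]{article}
\usepackage[margin=0cm]{geometry}
\usepackage{pdfpages}
\setlength{\parindent}{0pt}
\begin{document}
EOF

for odd in $( seq 1 2 ${NUM_TICKETS} ); do
        even=$(( odd + 1 ))
        # Two A5 outer pages on one A4 sheet, then the inner pages twice.
        cat >> tickets.tex <<EOF
\includepdfmerge[nup=1x2]{ticket-outer-${odd}.pdf,1,ticket-outer-${even}.pdf,1}
\includepdfmerge[nup=1x2]{ticket-inner.pdf,1,ticket-inner.pdf,1}
EOF
done

echo '\end{document}' >> tickets.tex
pdflatex tickets.tex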
The other day I had a bit of a challenge to deal with. My workplace makes embedded data collection devices which are built around the Texas Instruments CC2538 SoC (internal photos visible here) and run OpenThread. To date, everything we’ve made has been an externally-powered device, running off either DC power (9-30V) or mains (120/240V 50/60Hz AC). CC2592 range extender support was added to OpenThread for this device.
The CC2538, although very light on RAM (32KiB), gets the job done with some constraints. Necessity threw us a curve-ball the other day: we wanted a device that ran off a battery. That meant going into sleep mode periodically; deep sleep, even! The CC2538 has a number of operating modes:
running mode (pretty much everything turned on)
light sleep mode (clocks, CPU and power stays on, but we pause a few peripherals)
deep sleep mode — this comes in four flavours
PM0: Much like light-sleep, but we’ve got the option to pause clocks to more peripherals
PM1: PM0, plus we halt the main system clock (32MHz crystal or 16MHz RC), halting the CPU
PM2: PM1 plus we power down the bottom 16KiB of RAM and some other internal peripherals
PM3: PM2 plus we turn off the 32kHz crystal used by the sleep timer and watchdog.
We wanted PM2, which meant while we could use the bottom 16KiB of RAM during run-time, the moment we went to sleep, we had to forget about whatever was kept in that bottom 16KiB RAM — since without power it would lose its state anyway.
The challenge
Managing RAM in a device like this is always a challenge. malloc() is generally frowned upon; however, in some cases it's a necessary evil. OpenThread internally uses mbedTLS, and that relies on having a heap. It can use one implemented by OpenThread, or one provided by you. Our code also uses malloc for some things, notably short-term tasks like downloading a new configuration file or buffering serial traffic.
The big challenge is that OpenThread itself uses a little over 9KiB RAM. We have a 4KiB stack. We’ve got under 3KiB left. That’s bare-bones OpenThread. If you want JOINER support, for joining a mesh network, that pulls in DTLS, which by default, will tell OpenThread to static-allocate a 6KiB buffer.
9KiB becomes about 15KiB; plus the stack, that’s 19KiB. This is bigger than 16KiB — the linker gives up.
Using heap memory
There is a work-around that gets things linking; you can build OpenThread with the option OPENTHREAD_CONFIG_HEAP_EXTERNAL_ENABLE — if you set this to 1, OpenThread forgoes its own heap and just uses malloc / free instead, implemented by your toolchain.
OpenThread builds and links in 16KiB RAM, hooray… but then you try joining, and NoBufs is the response. We're out of RAM. Moving things to the heap just kicked the can down the road: we still need that 6KiB, but we only have under 3KiB to give it. Not enough.
We have a problem in that the toolchain we use is built on newlib, and while it implements malloc / free / realloc, it does so with a primitive called _sbrk(). We define a pointer initialised to the top of our .bss, and whenever malloc needs more memory for the heap, it calls _sbrk(N); we grab the value of our pointer, add N to it, and return the old value. Easy.
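In rough terms, that newlib-style _sbrk() looks like this (the linker symbols here are made-up names for illustration):
#include <errno.h>
#include <stddef.h>

/* Made-up linker symbols: _heap_start sits just past .bss, _heap_limit marks
 * the end of the heap region. */
extern char _heap_start;
extern char _heap_limit;

static char *brk = &_heap_start;

void *_sbrk(ptrdiff_t incr)
{
    char *prev = brk;

    if (brk + incr > &_heap_limit) {
        errno = ENOMEM;         /* out of heap */
        return (void *)-1;
    }

    brk += incr;
    return prev;                /* malloc gets the old break pointer */
}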
Except… we don't just have one memory pool now, we have two, one of which we cannot use all the time. OpenThread, via mbedTLS, also winds up calling malloc() very early in the initialisation (as early as the otInstanceInitSingle() call to initialise OpenThread). We need that block of RAM to wind up in the upper 16KiB that stays powered on — so we can't start at address 0x2000:0000 and just skip over .data/.bss when we run out.
malloc() will also get mighty confused if we suddenly hand it an address that’s lower than the one we handed out previously. We can’t go backwards.
I looked at replacing malloc() with a dual-pool-aware version, but newlib is hard-coded in a few places to use its own malloc() and not a third-party one. picolibc might let us swap it out, but getting that integrated looked like a lot of work.
So we're stuck with newlib's malloc() for better or worse.
The hybrid approach
One option: we can't control which malloc the newlib functions use, so let newlib's malloc with _sbrk() manage the upper heap, and wrap that malloc with our own creation that we pass to OpenThread. We implement otPlatCAlloc and otPlatFree — which are essentially calloc and free wrappers.
The strategy is simple: first try the normal calloc; if that returns NULL, then use our own.
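Those OpenThread hooks end up being thin wrappers. A sketch, with lowheap_calloc() and lowheap_free() being the dual-pool-aware routines described below (the names are mine, invented for this write-up):
#include <stddef.h>

/* lowheap_calloc()/lowheap_free() are the dual-pool-aware routines sketched
 * later in this post. */
void *lowheap_calloc(size_t nmemb, size_t size);
void lowheap_free(void *ptr);

void *otPlatCAlloc(size_t aNum, size_t aSize)
{
    /* lowheap_calloc() tries newlib's calloc() first and only falls back to
     * the lower 16KiB pool when that fails. */
    return lowheap_calloc(aNum, aSize);
}

void otPlatFree(void *aPtr)
{
    /* lowheap_free() works out which pool the pointer came from and releases
     * it accordingly. */
    lowheap_free(aPtr);
}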
Re-purposing an existing allocator
The first rule of software engineering: don't write code you don't have to. So naturally I went looking for options.
Page upon page of “No man don’t do it!!!”
jemalloc looked promising at first (it is the FreeBSD malloc()), but therein lies a problem: it's a pretty complicated piece of code aimed at x86 computers with megabytes of RAM at minimum. It used uint64_ts in a lot of places and seemed like it would have a pretty high overhead on a little CC2538.
I tried avr-libc's malloc — it's far simpler, and is actually a free-list implementation like newlib's version, but there is a snag. See, AVR microcontrollers are 8-bit beasts; they don't care about memory alignment. But the Cortex-M3 does! avrlibc_malloc did its job, handed back a pointer, but then I wound up in a HARDFAULT condition because mbedTLS tried to access a 32-bit word that was offset by a few bytes.
A simple memory allocator
The approach I took was a crude one. I would allocate memory in fixed-sized “blocks”. I first ran the OpenThread code under a debugger and set a break-point on malloc to see what sizes it was asking for — mostly blocks around the 128 byte mark, sometimes bigger, sometimes smaller. 64-byte blocks would work pretty well, although for initial testing, I went the lazy route and used 8-byte blocks: uint64_ts.
In my .bss, I made an array of uint8_ts; size equal to the number of 8-byte blocks in the lower heap divided by 4. This would be my usage bitmap — each block was allocated two bits, which I accessed using bit-banding: one bit I called used, and that simply reported the block was being used. The second was called chained, and that indicated that the data stored in this block spilled over to the next block.
To malloc some memory, I’d simply look for a string of free blocks big enough. When it came to freeing memory, I simply started at the block referenced, and cleared bits until I got to a block whose chained bit was already cleared. Because I was using 8-byte blocks, everything was guaranteed to be aligned.
8-byte blocks in 16KiB (2048 blocks) wound up with 512 bytes of usage data. As I say, using 64-byte blocks would be better (only 256 blocks, which fits in 64 bytes), but this was a quick test. The other trick would be to use the very first few blocks to store that bitmap (for 64-byte blocks, we only need to reserve the first block).
…and treat usage like an array; where element 0 was the usage data for the very first block down the bottom of SRAM.
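In plain C, ignoring the bit-band alias trick and using made-up sizes and names, the bitmap and its accessors boil down to something like this:
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical numbers for this sketch: 16KiB lower heap, 8-byte blocks. */
#define LOWHEAP_BLOCK_SZ   (8u)
#define LOWHEAP_SIZE       (16u * 1024u)
#define LOWHEAP_NUM_BLOCKS (LOWHEAP_SIZE / LOWHEAP_BLOCK_SZ)   /* 2048 */

/* Two bits per block, so four blocks' worth of flags per byte. */
static uint8_t lowheap_usage[LOWHEAP_NUM_BLOCKS / 4];

/* bit 0 = used, bit 1 = chained.  Plain bit arithmetic is shown here for
 * clarity; on the CC2538 the same flags can be reached through the
 * Cortex-M3 bit-band alias region instead. */
static bool usage_get(uint16_t block, uint8_t bit)
{
    return (lowheap_usage[block / 4] >> ((block % 4) * 2 + bit)) & 1u;
}

static void usage_set(uint16_t block, uint8_t bit, bool value)
{
    uint8_t mask = 1u << ((block % 4) * 2 + bit);

    if (value)
        lowheap_usage[block / 4] |= mask;
    else
        lowheap_usage[block / 4] &= ~mask;
}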
To implement a memory allocator, I needed five routines:
one that scanned through, and told me where the first free block was after a given block number (returning the block number) — static uint16_t lowheap_first_free(uint16_t block)
one that, given the start of a run of free blocks, told me how many blocks following it were free — static uint16_t lowheap_chunk_free_length(uint16_t block, uint16_t required)
one that, given the start of a run of chained used blocks, told me how many blocks were chained together — static uint16_t lowheap_chunk_used_length(uint16_t block)
one that, given a block number and count, would claim that number of blocks starting at the given starting point — static void lowheap_chunk_claim(uint16_t block, uint16_t length)
one that, given a starting block, would clear the used bit for that block, and if chained was set; clear it and repeat the step on the following block (and keep going until all blocks were freed) — static void lowheap_chunk_release(uint16_t block)
From here, implementing calloc was simple:
first, try the newlib calloc and see if that succeeded; return the pointer we're given if it's not NULL.
if we’re still looking for memory, round up the memory requirement to the block size.
initialise our starting block number (start_nr) by calling lowheap_first_free(0) to find the first block; then in a loop:
find the size of the free block (chunk_len) by calling lowheap_chunk_free_length(start_nr, required_blocks).
If the returned size is big enough, break out of the loop.
If not big enough, advance start_nr past the too-small free chunk and the used chunk that follows it: add chunk_len plus the return value of lowheap_chunk_used_length(start_nr + chunk_len).
Stop iterating if start_nr is equal to or greater than the total number of blocks in the heap.
If start_nr winds up being past the end of the heap, fail with errno = ENOMEM and return NULL.
Otherwise, we’re safe, call lowheap_chunk_claim(start_nr, required_blocks); to reserve our space, zero out the actual blocks allocated, then return the address of the first block cast to void*.
Implementing free was not a challenge either: either the pointer was above our heap, in which case we simply passed the pointer to newlib's free — or if it was in our heap space, we did some arithmetic to figure out which block that address was in, and passed that to lowheap_chunk_release().
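As a sketch (not the code I actually ran), the calloc and free described above come out something like this; the five chunk routines' bodies are omitted, and lowheap_base is assumed to be placed at the bottom of SRAM by the linker script:
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define LOWHEAP_BLOCK_SZ   (8u)                                /* as above */
#define LOWHEAP_NUM_BLOCKS ((16u * 1024u) / LOWHEAP_BLOCK_SZ)  /* 2048     */

/* Assumed to be placed at the bottom of SRAM by the linker script. */
extern uint8_t lowheap_base[];

/* The five routines described above; their bodies are not shown here. */
uint16_t lowheap_first_free(uint16_t block);
uint16_t lowheap_chunk_free_length(uint16_t block, uint16_t required);
uint16_t lowheap_chunk_used_length(uint16_t block);
void lowheap_chunk_claim(uint16_t block, uint16_t length);
void lowheap_chunk_release(uint16_t block);

void *lowheap_calloc(size_t nmemb, size_t size)
{
    /* First preference: newlib's heap in the always-powered upper bank. */
    void *ptr = calloc(nmemb, size);
    if (ptr != NULL)
        return ptr;

    /* Round the request up to whole blocks. */
    size_t bytes = nmemb * size;
    uint16_t required = (uint16_t)((bytes + LOWHEAP_BLOCK_SZ - 1)
                                   / LOWHEAP_BLOCK_SZ);

    /* Hunt for a run of free blocks that is long enough. */
    uint16_t start_nr = lowheap_first_free(0);
    while (start_nr < LOWHEAP_NUM_BLOCKS) {
        uint16_t chunk_len = lowheap_chunk_free_length(start_nr, required);
        if (chunk_len >= required)
            break;

        /* Too small: skip this free run and the used run that follows it. */
        start_nr += chunk_len;
        if (start_nr >= LOWHEAP_NUM_BLOCKS)
            break;
        start_nr += lowheap_chunk_used_length(start_nr);
    }

    if (start_nr >= LOWHEAP_NUM_BLOCKS) {
        errno = ENOMEM;
        return NULL;
    }

    /* Claim the run, zero it, and hand back its address. */
    lowheap_chunk_claim(start_nr, required);
    ptr = &lowheap_base[(size_t)start_nr * LOWHEAP_BLOCK_SZ];
    memset(ptr, 0, (size_t)required * LOWHEAP_BLOCK_SZ);
    return ptr;
}

void lowheap_free(void *ptr)
{
    uint8_t *p = ptr;
    uint8_t *end = lowheap_base
                 + ((size_t)LOWHEAP_NUM_BLOCKS * LOWHEAP_BLOCK_SZ);

    if (p < lowheap_base || p >= end) {
        /* Not one of ours: hand it back to newlib. */
        free(ptr);
        return;
    }

    lowheap_chunk_release((uint16_t)((p - lowheap_base) / LOWHEAP_BLOCK_SZ));
}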
I won't publish my actual code because I didn't get it working properly in the end, but I figured I'd put the notes here on how I put it together to re-visit in the future. Maybe the thoughts might inspire someone else. 🙂
So today I finally got around to dealing with the SGI kit in my possession. Not quite sure where all of it went; there's an SGI PS/2 keyboard, an Indy Presenter LCD and an SGI O2 R5000 180MHz CPU module that have gone AWOL, but this morning I took advantage of the Brisbane City Council kerb-side clean-up.
Screenshot of the post — yes I need to get Mastodon post embedding working
I rounded up some old Plextor 12× CD-ROM drives (SCSI interface) that took CD caddies (remember those?) as well to go onto the pile, and some SCSI HDDs I found laying around — since there’s a good chance the disks in the machines are duds. I did once boot the Indy off one of those CD-ROM drives, so I know they work with the SGI kit.
The machines themselves had gotten to the point where they no longer powered on. The O2 at first did, and I tried saving it, but I found:
it was unreliable, frequently freezing up — until one day it stopped powering on
the case had become ridiculously brittle
The Indy exploded last time I popped the cover, and fragments of the Indigo2 were falling off. The Octane is the only machine whose case seemed largely intact. I had gathered up what IRIX kit I had too, just in case the new owners wanted to experiment. archive.org actually has the images, and I had a crack at patching irixboot to be able to use them. Never got to test that though.
Today I made the final step of putting the machines out on the street to find a new home. It looks like exactly that has happened, someone grabbed the homebrew DB15 to 13W3 cable I used for interfacing to the Indy and Indigo2, then later in the day I found the lot had disappeared.
With more room, I went and lugged the old SGI monitor down; it's still there, but I suspect it'll possibly go too. The Indy and Indigo2 looked to be pretty much maxed-out on RAM, so should be handy for bits for restoring other similar-era SGI kit. I do wish the new owners well with their restoration journey, if that's what they choose to do.
For me though, it’s the end of an era. Time to move on.
I keep a few copies of my music collection. Between three of my machines and two USB drives (one HDD, one SSD), I keep a copy of the lossless archive. This is a recent addition since (1) I've got the space to do it, and (2) some experimentation with Ogg/Vorbis metadata corrupting files necessitated me re-ripping everything, so I thought I'd save future-me the hassle by keeping a lossless copy on-hand.
Actually, this is not the first time I’ve done a re-rip of the whole collection. The previous re-rip was done back in 2005 when I moved from MP3 to Ogg/Vorbis (and ditched a lot of illegally obtained MP3s while I was at it — leaving me with just the recordings that I had licenses for). But, back then, storing a lossless copy of every file as I re-ripped everything would have been prohibitively expensive in terms of required storage. When even my near-10-year-old laptop sports a 2TB SSD, this isn’t a problem.
The working copy that I generally do my listening from uses the Ogg/Vorbis format today. I haven’t quite re-ripped everything, there’s a stack of records that are waiting for me to put them back on the turntable … one day I’ll get to those … but every CD, DVD and digital download (which were FLAC to begin with) is losslessly stored in FLAC.
If I make a change to the files, I really want to synchronise my changes between the two copies. Notably, if I change the file data, I need to re-encode the FLAC file to Ogg/Vorbis — but if I simply change its metadata (i.e. cover art or tags), I merely need to re-write the metadata on the destination file and can save some processing cycles.
The thinking is, if I can “fingerprint” the various parts of the file, I can determine what bits changed and what to convert. Obviously when I transcode the audio data itself, the audio data bytes will bear little resemblance to the ones that were fed into the transcoder — that’s fine — I have other metadata which can link the two files. The aim of this exercise is to store the hashes for the audio data and tags, and detect when one of those things changes on the source side, so the change can be copied across to the destination.
Existing option: MD5 hash
FLAC actually does store a hash of its source input as part of the stream metadata. It uses the MD5 hashing algorithm which, while good enough for a rough check and certainly better than linear codes like CRC, is really quite dated as a cryptographic hash.
I'd prefer to use SHA-256 for this, since it's generally regarded as being a “secure” hash algorithm that is less vulnerable to collisions than MD5 or SHA-1.
Naïve approach: decode and compare
The naïve approach would be to just decode to raw audio data and compare the raw audio files. I could do this via a pipe to avoid writing the files out to disk just to delete them moments later. The following will output a raw file:
RC=0 stuartl@rikishi ~ $ time flac -d -f -o /tmp/test.raw /mnt/music-archive/by-album-artist/Traveling\ Wilburys/The\ Traveling\ Wilburys\ Collection/d1o001t001\ Traveling\ Wilburys\ -\ Handle\ With\ Care.flac
flac 1.3.4
Copyright (C) 2000-2009 Josh Coalson, 2011-2016 Xiph.Org Foundation
flac comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
welcome to redistribute it under certain conditions. Type `flac' for details.
d1o001t001 Traveling Wilburys - Handle With Care.flac: done
real 0m0.457s
user 0m0.300s
sys 0m0.065s
On my laptop, it takes about 200~500ms to decode a single file to raw audio. Multiply that by 7624 and you get something that will take nearly an hour to complete. I think we can do better!
Alternate naïve approach: Copy the file then strip metadata
Making a copy of the file without the metadata is certainly an option. Something like this will do that:
RC=0 stuartl@rikishi ~ $ time ffmpeg -y -i \
/mnt/music-archive/by-album-artist/Traveling\ Wilburys/The\ Traveling\ Wilburys\ Collection/d1o001t001\ Traveling\ Wilburys\ -\ Handle\ With\ Care.flac \
-c:a copy -c:v copy -map_metadata -1 \
/tmp/test.flac
… snip lots of output …
Output #0, flac, to '/tmp/test.flac':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: mjpeg (Progressive), yuvj420p(pc, bt470bg/unknown/unknown), 1000x1000 [SAR 72:72 DAR 1:1], q=2-31, 90k tbr, 90k tbn, 90k tbc (attached pic)
Stream #0:1: Audio: flac, 44100 Hz, stereo, s16
Side data:
replaygain: track gain - -8.320000, track peak - 0.000023, album gain - -8.320000, album peak - 0.000023,
Stream mapping:
Stream #0:1 -> #0:0 (copy)
Stream #0:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame= 1 fps=0.0 q=-1.0 Lsize= 24671kB time=00:03:19.50 bitrate=1013.0kbits/s speed=4.77e+03x
video:114kB audio:24549kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.033000%
real 0m0.139s
user 0m0.105s
sys 0m0.032s
This is a big improvement, but just because the audio blocks are the same does not mean the file itself won’t change in other ways — FLAC files can include “padding blocks” anywhere after the STREAMINFO block which will change the hash value without having any meaningful effect on the file content.
So this may not be as stable as I’d like. However, ffmpeg is on the right track…
Audio fingerprinting
MusicBrainz actually has an audio fingerprinting library that can be matched to a specific recording, and is reasonably “stable” across different audio compression formats. Great for the intended purpose, but in this case it’s likely going to be computationally expensive since it has to analyse the audio in terms of frequency components, try to extract tempo information, etc. I don’t need this level of detail.
It may also miss that one file might, for example, be preceded by lots of silence — multi-track exports out of Audacity are a prime example. Audacity used to just export the multiple tracks “as-is” so you could re-construct the full recording by concatenating the files, but some bright-spark thought it would be a good idea to prepend the exported tracks with silence by default so that, if re-imported, their relative positions were “preserved”. Consequently, I've got some record rips that I need to fix because of the extra “silence”!
Getting hashes out of ffmpeg
It turns out that ffmpeg can output any hash you’d like of whatever input data you give it:
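(A sketch: original.flac stands in for the full path of the source file shown earlier; each run should print a single SHA256= line, and the two come out identical.)
# Hash just the decoded audio stream, not the metadata or cover art.
ffmpeg -i /tmp/test.flac -map 0:a -f hash -hash sha256 -
ffmpeg -i original.flac  -map 0:a -f hash -hash sha256 -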
Notice the hashes are the same, yet the first copy of the file we hashed does not contain the tags or cover art present in the file it was generated from. Speed isn’t as good as just stripping the metadata, but on the flip-side, it’s not as expensive as decoding the file to raw format, and should be more stable than naïvely hashing the whole file after metadata stripping.
Where to from here?
Well, having a hash, I can store this elsewhere (I'm thinking SQLite3 or LMDB), then compare it later to know if the audio has changed. It's not difficult or expensive, using mutagen or similar, to extract the tags and images; those can be hashed using conventional means to generate a hash of that information. I can also store the mtime and a complete file hash for an even faster “quick check”.
I'm the owner of two domain licenses, longlandclan.id.au and vk4msl.id.au, both purchased for personal use. The former I share with other family members, whereas the latter I use for myself. Consequently, I'm on auDA's mailing lists and receive the occasional email from them. No big deal. Lately, they've been pushing .au domains (i.e. dropping the .id bit out), which I'm not worried about myself, but I can see the appeal for businesses.
Anyway… I practice what I preach with regard to email: I do not send email in HTML format, and my email client is set to display emails in plain text, not HTML, unless there is no plain-text component. This morning, I received what I consider a textbook example of why I think HTML email is so bad for the Internet today.
From: .au Domain Administration <noreply@auda.com.au>
Subject: Notice: .au Direct Registration
Date: Wed, 10 Aug 2022 23:00:04 +0000
Reply-To: .au Domain Administration <noreply@auda.com.au>
X-Mailer: Mailchimp Mailer - **CID292f65320f63be5c3fcd**
The .au Domain Administration (auDA) recently launched Australia’s newest domain namespace – .au direct.
Dear Stuart Longland,
The .au Domain Administration (https://www.auda.org.au/) (auDA), recently launched Australia’s newest domain namespace – .au direct. The new namespace provides eligible registrants the option to register domain names directly before the .au for the first time (e.g. forexample.au).
Registrants with an existing .au domain name licence are eligible to apply for a direct match of their .au direct domain name through the Priority Allocation Process (e.g. if you hold forexample.com.au (https://aus01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fforexample.com.au%2F&data=05%7C01%7Cprivate.address%40auda.org.au%7C95a9271d4eff4973013b08da3240a115%7C81810bc45d6845f6ba4e3d6c9fb37e43%7C0%7C0%7C637877550424818538%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=2WhPYMxV3FI9nEpXDEk8KdyJwWGyqcI%2FwRd%2FNc7DQks%3D&reserved=0) , you can apply for Priority Status to register forexample.au). Information about your existing domain name licence is available here: https://whois.auda.org.au/. The Priority Allocation Process is now open and will close on 20 Sept 2022.
That is the email, as it appeared in my email client (I have censored the unfortunate auDA employee’s email address). I can see what happened:
Someone composed an email (likely in HTML format) that would be part of the marketing campaign they were going to send via MailChimp. The person composing the email for MailChimp clearly is using Microsoft Outlook (or maybe that should be called Microsoft LookOut!). Microsoft’s software saw what it thought was a hyperlink and thought, “I need to ‘protect’ this”, and made it a “safe” link. A link with the user’s email address embedded in it!
Funnily enough, this seems to be the only place where a link was mangled by Microsoft’s mal^H^H^Hsoftware. I think this underscores the importance of verifying that you are indeed sending what you think you are sending — and highlights how difficult HTML (and Microsoft) have made this task.
don’t assume that people will only see the HTML email
don’t assume that what you see in the HTML view is identical to what will be seen in plain text
Might be better to compose the plain text, get that right… then paste that into the HTML view and “make it pretty”… or perhaps don’t bother and just go back to plain-text? KISS principle!
Years ago, I was getting into Linux on esoteric architectures, starting with a Gateway Microserver, which runs the MIPS architecture… to better understand this platform I also obtained a few Silicon Graphics workstations, including this O2.
Originally an R5000SC at 180MHz and 128MB RAM, I managed to get hold of an RM5200 300MHz CPU module for it, and with the help of people on the #mipslinux channel on the old irc.freenode.net, managed to obtain a PROM update to get the RM5200 working. Aside from new HDDs (the original died just recently), it's largely the stock hardware.
I figured it deserved to go to a new home, and a fellow on Gumtree (in WA) happened to be looking for some of these iconic machines, so I figured I might as well clean this machine up as best I can and get it over there while it’s still somewhat functioning. That said, age has not been friendly to this beast.
the CD-ROM drive tray gear has popped off the motor spindle, rendering the CD-ROM drive non-functional
in trying to fix the CD-ROM issue (I tried disassembling the drive but couldn’t get at the parts needed), the tab that holds the lid of the machine on broke
the PSU fan, although free to spin, does not appear to be operational
the machine seems to want to shut off after a few minutes of run-time
The latter two are related, I think: likely things get too hot and a protection circuit kicks in to turn the machine off. There's no dust in the machine to cause a lack of air flow, so I suspect the fan is the issue. This will be my biggest challenge, I suspect. It looks to be a fairly standard case fan, so opening up the power supply (not for the faint of heart!) to replace it with a modern one isn't out of the question.
The CD-ROM drive is a different matter. SGI machines use 512-byte sectors on their CDs, and this requires CD-ROM firmware that supports this. I have a couple of Plextor SCSI drives that do offer this (there is a jumper marked “BLOCK”), but they won’t physically fit in the O2 (they are caddy-loading drives). Somewhere around the house I have a 68-pin SCSI cable, I should be able to link that to the back of the O2 via its external SCSI port, then cobble together a power supply to run the drive externally… but that’ll be a project for another day.
A working monitor was a possible challenge, but a happy accident: it seems some LCD monitors can do sync-on-green, and thus are compatible with the O2. I'm using a little 7″ USB-powered WaveShare LCD which I normally use for provisioning Raspberry Pi PCs. I just power the monitor via a USB power supply and use the separately-provided VGA adaptor cable to plug it into the O2. So I don't have to ship a bulky 20″ CRT across the country.
The big issue is getting an OS onto the thing. I may have to address the sudden-shutdown issue first before I can get a reasonable chance at an OS install. The big problem being an OS that these things can run. My options today seem to be:
Debian Jessie and earlier (Stretch dropped support for mips4 systems, favouring newer mips64r2/mips32r2 systems)
Gentoo Linux (which it currently does run)
OpenBSD 6.9 and earlier (7.0 discontinues the sgi port)
NetBSD 9.2
IRIX 6.5
The fellow ideally wants IRIX 6.5 on the thing, which is understandable, that is the native OS for these machines. I never had a full copy of IRIX install media, and have only used IRIX once (it came installed on the Indy). I’ve only ever installed Gentoo on the O2.
Adding to the challenge, I’ll have to network boot the thing because of the duff CD-ROM drive. I had thought I’d just throw NetBSD on the thing since that is “current” and would at least prove the hardware works with a minimum of fuss… but then I stumbled on some other bits and pieces:
irixboot is a Vagrant-based virtual machine with tools needed to network-boot an SGI workstation. The instructions used for IP22 hardware (Indy/Indigo²) should work here, because IP32 hardware like the O2 also has a 32-bit PROM
The Internet Archive provides CD images for IRIX 6.5, including the foundation discs which I've never possessed
Thus, there seems to be all the bits needed to get IRIX onto this thing, if I can get the machine to stay running long enough.
So, this week mother nature threw South East Queensland a curve-ball like none of us have seen in over a decade: a massive flood. My workplace, VRT Systems / WideSky.Cloud Pty Ltd resides at 38b Douglas Street, Milton, which is a low-lying area not far from the Brisbane River. Sister company CETA is just two doors down at no. 40. Mid-February, a rain depression developed in the Sunshine Coast Hinterland / Wide Bay area north of Brisbane.
That weather system crawled all the way south, hitting Brisbane with constant heavy rain for 5 days straight… eventually creeping down to the Gold Coast and over the border to the Northern Rivers part of NSW.
The result on our offices was devastating. (Copyright notice: these images are placed here for non-commercial use with permission of the original photographers… contact me if you wish to use these photos and I can forward your request on.)
Douglas St and VRT office the morning the waters started rising (Credit: Patrick Coffey)
Inside the VRT and CETA offices as the water rose (Credit: Scott Carden)
The CETA office in the afternoon (Credit: Patrick Coffey)
Douglas St and VRT office as the flood started to recede (Credit: Joshua Bryce)
VRT front door showing the water line (Credit: Scott Carden)
Inside the CETA office before the clean-up (Credit: Scott Carden)
Waterlogged WideSky Hubs (Credit: Stuart Longland)
“Important – Do not destroy” – mother nature doesn’t care (Credit: Stuart Longland)
Siemens water meters rescued from soggy packaging (Credit: Stuart Longland)
Siemens water meters still work after flood (Credit: Stuart Longland)
Calvin washing off the Brisbane River mud (Credit: Stuart Longland)
Ruined stock from flood-water (Credit: Scott Carden)
CETA office during the clean-up (Credit: Scott Carden)
I guess we won’t be playing “planning poker” for a while (Credit: Stuart Longland)
The VRT office lower floor stripped of soggy plasterboard (Credit: Stuart Longland)
The (temporary) Douglas St. Tip – view into driveway (Credit: Ryan Monaghan)
The (temporary) Douglas St. Tip – more of the spoil pile (Credit: Ryan Monaghan)
The (temporary) Douglas St. Tip – view toward Park Road (Credit: Ryan Monaghan)
Some of the stock still worked after the flood — the Siemens UH-40s pictured were still working (bar a small handful) and normally sell for high triple-figures. The WideSky Hubs and CET meters all feature a conformal coating on the PCBs that will make them robust to water ingress, and the Wavenis meters are potted sealed units. So not all a loss — but there's a big pile of expensive EDMI meters (Mk7s and Mk10s) that are not economic to salvage due to approval requirements, which is going to hurt!
Le Mans Motors, pictured in those photos is an automotive workshop, so would have had lots of lubricants, oils and grease in stock needed to service vehicles — much of those contaminants were now across the street, so washing that stuff off the surviving stock was the order of the day for much of Wednesday, before demolition day Friday.
As for the server equipment, knowing that this was a flood-prone area (which also by the way means insurance is non-existent), we deliberately put our server room on the first floor, well above the known flood marks of 1974 and 2011. This flood didn’t get that high, getting to about chest-height on the ground floor. Thus, aside from some desktops, laptops, a workshop (including a $7000 oscilloscope belonging to an employee), a new coffee machine (that hurt the coffee drinkers), and lots of furniture/fittings, most of the IT equipment came through unscathed. The servers “had the opportunity to run without the burden of electricity“.
We needed our stuff working, so we needed to first rescue the machines from the waterlogged building and set them up elsewhere. Elsewhere wound up being at the homes of some of our staff with beefy NBN Internet connections. Okay, not beefy compared to the 500Mbps symmetric microwave link we had, but 50Mbps uplinks were not to be snorted at in this time of need.
The initial plan was the machines that once shared an Ethernet switch, now would be in physically separate locations — but we still needed everything to look like the old network. We also didn’t want to run more VPN tunnels than necessary. Enter OpenVPN L2 mode.
Establishing the VPN server
Up to this point, I had deployed a temporary VPN server as a VPS in a Sydney data centre. This was a plain-jane Ubuntu 20.04 box with a modest amount of disk and RAM, but hopefully decent CPU grunt for the amount of cryptographic operations it was about to do.
Most of our customer sites used OpenVPN tunnels, so I migrated those first — I managed to grab a copy of the running server config as the waters rose, before the power tripped out. I copied that config over to the new server, started up OpenVPN, opened a UDP port to the world, then fiddled DNS to point the clients to the new box. They soon joined.
Connecting staff
Next problem was getting the staff linked — originally we used a rather aging Cisco router with its VPN client (or vpnc on Linux/BSD), but I didn’t feel like trying to experiment with an IPSec server to replicate that — so up came a second OpenVPN instance, on a new subnet. I got the Engineering team to run the following command to generate a certificate signing request (CSR):
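(A sketch only: the key and request file names here are placeholders, each person substituted their own, and the exact command we used may have differed.)
# Generate a private key and a certificate signing request.
openssl req -new -newkey rsa:2048 -nodes \
        -keyout theclient.key -out theclient.req \
        -subj "/CN=theclient"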
They sent me their .req files, and I used EasyRSA v3 to manage a quickly-slapped-together CA to sign the requests. Downloading them via Slack required that I fish them out of the place where Slack decided to put them (without asking me) and place them in the correct directory. Sometimes I had to rename the file too (it doesn't ask you what you want to call it either) so it had a .req extension. Having imported the request, I could sign it.
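With EasyRSA v3, the import and signing steps look roughly like this, run from the CA directory ("theclient" matches the request's name, and the request path is wherever Slack dumped the file):
./easyrsa import-req /path/to/theclient.req theclient
./easyrsa sign-req client theclient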
A new file pki/issued/theclient.crt could then be sent back to the user. I also provided them with pki/ca.crt and a configuration file derived from the example configuration files. (My example came from OpenBSD’s OpenVPN package.)
They were then able to connect, and see all the customer site VPNs, so could do remote support. Great. So far so good. Now the servers.
Server connection VPN
For this, a third OpenVPN daemon was deployed on another port, but this time in L2 mode (dev tap) not L3 mode. In addition, I had servers on two different VLANs, and I didn't want to have to deploy yet more VPN servers and clients, so I decided to try tunnelling 802.1Q. This required boosting the MTU from the default of 1500 to 1518 to accommodate the 802.1Q VLAN tag.
The VPN server configuration looked like this:
port 1196
proto udp
dev tap
ca l2-ca.crt
cert l2-server.crt
key l2-server.key
dh data/dh4096.pem
server-bridge
client-to-client
keepalive 10 120
cipher AES-256-CBC
persist-key
persist-tun
status /etc/openvpn/l2-clients.txt
verb 3
explicit-exit-notify 1
tun-mtu 1518
In addition, we had to tell netplan to create some bridges; we created a /etc/netplan/vpn.yaml that looked like this:
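(A sketch only: tap0 is the OpenVPN interface, and the VLAN IDs, bridge names and addresses below are made up.)
network:
  version: 2
  ethernets:
    tap0:
      dhcp4: no
  vlans:
    vlan10:
      id: 10
      link: tap0
    vlan20:
      id: 20
      link: tap0
  bridges:
    br-vlan10:
      interfaces: [vlan10]
      addresses: [192.168.10.1/24]
    br-vlan20:
      interfaces: [vlan20]
      addresses: [192.168.20.1/24]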
Those aren’t the real VLAN IDs or IP addresses, but you get the idea. Bridge up on the cloud end isn’t strictly necessary, but it does mean we can do other forms of tunnelling if needed.
On the clients, we did something very similar. OpenVPN client config:
client
dev tap
proto udp
remote vpn.example.com 1196
resolv-retry infinite
nobind
persist-key
persist-tun
ca l2-ca.crt
cert l2-client.crt
key l2-client.key
remote-cert-tls server
cipher AES-256-CBC
verb 3
tun-mtu 1518
Having done this, we had the ability to expand our virtual “L2” network by simply adding more clients on other home Internet connections, the bridges would allow all servers to see each-other as if they were connected to the same Ethernet switch.
So, I’ve been wanting to do this for the better part of a decade… but lately, the cost of more capable embedded devices has come right down to make this actually feasible.
It's taken a number of incarnations, the earliest being the idea of DIYing it myself with a UHF-band analogue transceiver. Then the thought was to pair an I²S audio CODEC with an ESP8266 or ESP32.
I don’t want to rely on technology that might disappear from the market should relations with China suddenly get narky, and of course, time marches on… I learn about protocols like ROC. Bluetooth also isn’t what it was back when I first started down this path — back then A2DP was one-way and sounded terrible, HSP was limited to 8kHz mono audio.
Today, Bluetooth headsets are actually pretty good. I’ve been quite happy with the Logitech Zone Wireless for the most part — the first one I bought had a microphone that failed, but Logitech themselves were good about replacing it under warranty. It does have a limitation though: it will talk to no more than two Bluetooth devices. The USB dongle it’s supplied with, whilst a USB Audio class device, also occupies one of those two slots.
The other day I spent up on a DAB+ radio and a shortwave radio — it’d be nice to listen to these via the same Bluetooth headset I use for calls and the tablet. There are Bluetooth audio devices that I could plug into either of these, then pair with my headset, but I’d have to disconnect either the phone or the tablet to use it.
So, bugger it… the wireless headset interface will get an upgrade. The plan is a small pocket audio swiss-army-knife that can connect to…
an analogue device such as a wired headset or radio receiver/transceiver
my phone via Bluetooth
my tablet via Bluetooth
the aforementioned Bluetooth headset
a desktop PC or laptop over WiFi
…and route audio between them as needs require.
The device will have a small LCD display for control with a directional joystick button for control, and will be able to connect to a USB host for management.
Proposed parts list
The chip crisis is actually a big limitation, some of the bits aren’t as easily available as I’d like. But, I’ve managed to pull together the following:
An LCD module pulled from an Ericsson A1018S mobile phone – believed to be PCF8563 based
Directional joystick button
The only bit that’s old stock is the LCD, it’s been sitting on my shelf gathering dust for over a decade. Somewhere in one of my junk boxes I’ve got some joystick buttons also bought many years ago.
Proposed software
For the sake of others looking to duplicate my efforts, I'll stick with Raspberry Pi OS. As my device is an ARMv6 device, I'll have to stick with the 32-bit release. Not that big a deal, and long-term I'll probably look at using OpenEmbedded or Gentoo Embedded to make a minimalist image that just does what I need it to do.
The starter kit came with an SD card loaded with NOOBS… I ignored this and just flashed the SD card with a bare minimum Debian Bullseye image. The plan is I'll get PipeWire up and running on this for its Bluetooth audio interface. Then we'll try and get the hardware bits going.
Right now, I have the zero booting up, connecting to my local WiFi network, and making itself available via SSH. A good start.
Data sheet for the LCD
The LCD will possibly be one of the more challenging bits. This is from a phone that was new last century! As it happens though, Bergthaller Iulian-Alexandru was kind enough to publish some details on a number of LCD screens. Someone’s since bought and squatted the domain, but The Wayback Machine has an archive of the site.
I’ve mirrored his notes on various Ericsson LCDs here:
The diagrams on that page appear to show the connections as viewed from the front of the LCD panel. I guess if I let magic smoke out, too bad! The alternative is I do have two Nokia 3310s floating around, so harvest the LCDs out of them — in short, I have a fallback plan!
PipeWire on the Pi Zero
This will be the interesting bit. Not sure how well it'll work, but we'll give it a shot. The trickiest bit is getting binaries for the device; no one builds for armhf yet. There are these binaries for Ubuntu AMD64, and luckily there are source packages available.
I guess worst case scenario is I put the Pi Zero W aside and get a Pi Zero 2 W instead. Key will be to test PipeWire first before I warm up the soldering iron, let’s at least prove the software side of things, maybe using USB audio devices in place of the AudioInjector board.
I’m going through and building the .debs for armhf myself now, taking notes as I go. I’ll post these when I’m done.
We’ve had WiFi in one form or another for some years on this network. Originally it started with an interest in the Brisbane Mesh metropolitan area network which more-or-less imploded around 2006 or so. Back then, I think I had one of the few WiFi access points in The Gap. 2.4GHz was basically microwave ovens and not much else. The same is not true today.
WiFi networks in my local area, 2.4GHz isn’t as quiet as it once was.
Since then, the network has changed a bit: from a little D-Link 802.11b AP, we moved to a Prism54g WiFi card (that I still have) with hostapd, using OpenVPN to provide security. That got replaced by a Telstra-branded Netcomm WiFi router which I figured out supported WPA-Enterprise, so I went down the rabbit hole of setting up FreeRADIUS, and we ran that until a lightning strike blew it up. The next consumer AP that replaced it was a miserable failure, so it’s been business APs since then.
Initially a Cisco WAP4410N, which was a great little AP… worked reliably for years, but about 12 months ago I noticed it was dropping packets occasionally and getting a bit intermittent. Thinking that maybe the device is past its prime, I bought a replacement: a WAP150, which proved to be a bit disappointing. Range wasn’t as good compared to the WAP4410N, and I soon found myself moving the WAP150 downstairs to service the network there and re-instating the WAP4410N.
In particular, one feature I liked about the two Cisco units is they support 802.1Q VLANs, with the ability to assign a different WiFi SSID to each. The 4410N could do 4 SSIDs, the 150 could do 8. This is a feature that consumer APs don't do, and it is a handy feature here as it enables me to have a “work” LAN (with VPN to my workplace) and a “home” LAN which everybody else uses.
Years ago, our Internet usage was over a 512kbps/128kbps ADSL link, and it was mostly Internet browsing… so intermittent packet loss wasn’t a big deal… one AP did just fine. Now with the move to NBN, our telephone service is a VoIP service, and I’m finding that WiFi IP phones are very picky about APs. We have three IP phones and an ATA… the ATA (Grandstream HT814) is Ethernet of course, as is one of the IP phones (Grandstream GXP1615), but the other two IP phones are WiFi (Aristel Wi-Fi Genius X1+ and Grandstream WP810).
The Aristel device in particular, was really choppy… and the first one sent out seemed to be a DoA, with poor performance even when right beside the AP. A replacement was provided under RMA, and this one performed much better, but still suffered intermittent loss. The Grandstream WP810 in general worked, but there were noticeable dead spots in a few areas around the house.
The final straw with the existing pair of APs came at the last Brisbane WICEN meeting, conducted over Zoom… both APs seem to suffer a problem where they started dropping packets and glitching badly. A power-cycle “fixes” the problem, but the issue returns after a week or two. Clearly, they were no longer up to snuff.
I went the long-range one for upstairs since it’s in a high spot (sitting atop a stereo speaker on a top shelf in my room) so would be able to “radiate” over a long distance to hopefully reach down the drive way and into the back-yard.
The other one is to fill in dead spots downstairs, and since it’s going to be pretty much sitting at waist level, there’s no point in it being “long range”.
The devices I bought were purchased through mWave (here and here), as they had them in stock at the time.
Power injectors
These are 48V passive PoE devices… so to make them go, you need a separate power injector. The “standard” Ubiquiti power injector was out-of-stock, but I wanted these to work on 12V anyway, so I looked around for a suitable option. Core Electronics do have some step-up converters which work great for 24V devices, but the range available doesn’t quite reach 48V. I did find though that Telco Antennas sell these 48V PoE injectors. (They also sell the APs here and here, but were out-of-stock at the time of purchase.)
Admittedly, they’re 10/100Mbps only, which means you don’t quite get the full throughput out of the WiFi6 APs, but meh, it’s good enough… if the IP phones need more than 100Mbps, they’ll run up against the 25Mbps limit of the NBN link!
Controller
These APs, unlike the Cisco devices they're replacing (and everything else I've used prior), have no built-in management interface; they talk to a network controller device, normally the UniFi Cloud Key. I had a run-in with the first generation of these at the Stirling's Crossing Endurance Centre. For a big network, the idea of a central device does make a lot of sense (that site has 5 UAP-AC-Ms and 3 8-port PoE switches), but for a two-AP network like mine it seemed overkill.
One thing I learned, is these things positively DO NOT like being power-cycled! Repeated power-cycling corrupts the database in very short order, and you find yourself restoring configurations from a back-up soon after. So I was squeamish about buying one of these. The second generation version has its own back-up battery, but reports suggest they can be just as unreliable. In any case, they were out of stock everywhere, and I didn’t want to spring the extra cash for the “plus” model (that has a HDD… not much use to me) or the Dream Machine router.
I did consider using a Raspberry Pi 3, in fact that was my original plan… I had one spare, and so started down the path of setting it up as a UniFi controller… however, I ran into two road blocks:
UniFi controller at this time requires Java v8… Debian Bullseye ships with v11 minimum
UniFi controller needs MongoDB 3.4, which isn't available on Debian Bullseye on ARM64
I did some further research: Ubuntu 20.04 does offer a Java 8 runtime, and on AMD64, I can use existing binaries for MongoDB. I looked around and purchased this small-form-factor PC. Windows 10 went bye byes once I managed to hit F1 at the right point in the BIOS set-up, and Ubuntu 20.04 was PXE-loaded. I could then follow the standard instructions to install via APT. The controller seems to be working fine using OpenJDK JRE v8. I’d recommend this over the licensing quagmire that is using Oracle JRE.
Installation
With a controller and all the requisite bits, things went smoothly. I found at first that the controller insisted on using 192.168.1.0/24 addresses to talk to the APs… so I wound up setting that up in the netplan config. I later found that the UniFi controller won't let you set a network subnet address unless you turn off Auto Scale Network.
Setting the network subnet is not possible until “Auto Scale Network” is disabled.
So maybe from here-on-in, new APs will appear in the correct subnet, but to be honest, it's no big deal either way; unless an AP has an untimely end, I shouldn't need to buy new ones for a while!
Auto-negotiation quirks with Cisco switches
One oddity I noticed was the upstairs (U6 LR) AP was reluctant to communicate via Ethernet, instead funnelling its traffic via the downstairs AP. While it's handy they can do that (it means I don't necessarily need to worry about powering the upstairs switches in a power outage), the AP should really be using its Ethernet back-end.
The downstairs one was having no problems, and the set-up was similar: switch port → PoE injector → AP, via short cables. I tried a few different cables with no change. Logged into the switch and had a look, it was set to auto-negotiate, which was working fine downstairs. The downstairs switch is a Netgear GS748T, whereas the one upstairs is a Cisco SG200-08 (not the P version that does PoE).
I found I could log into the AP over SSH (you can provide your SSH key via the UniFi controller)… so I logged in as root and had a look around. They run Linux with a (sadly tainted, due to ubnthal.ko and ubnt_common.ko) kernel 4.4, and a Busybox/musl environment on an ARM64 CPU. (Well, the U6 LRs are ARM64; the U6 Lites are MediaTek MT7621s: mipsel32r2 with kernel 5.4.0 and not tainted.) ip told me that eth0 was up, and that the AP's IP address was assigned to br0, which was also up. brctl told me that eth0 was enslaved by br0. Curiously, /sys/class/net/eth0/carrier was reporting 1, which disagreed with what the switch was telling me.
On a hunch, I tried turning off auto-negotiation, forcing instead 100Mbps full-duplex. Bingo, a link LED appeared. The topology showed the AP was now wired, not talking via downstairs.
Network topology shown in the UniFi Controller UI
Switched back to auto-negotiation, and the AP switched to being a wireless extender with the link LED disappearing from the switch. This may be a quirk of the PoE injectors I'm using, which do not handle 1Gbps, and maybe the switch hasn't realised this because the AP otherwise “advertises” 1Gbps link capability. For now, I'm leaving that switch port locked at 100Mbps full-duplex. If you have problems with an AP showing up via Ethernet, here's a place that is worth checking.
So, this year I had a new-year's resolution of sorts… when we first started this “work from home” journey due to China's “gift”, I just temporarily set up on the dinner table, which was, of course, only meant to be for a few months.
Well, nearly 2 years later, we’re still working from home, and work has expanded to the point that a move to the office, on any permanent basis, is pretty much impossible now unless the business moves to a bigger building. With this in mind, I decided I’d clear off the dinner table, and clean up my room sufficiently to set up my workstation in there.
That meant re-arranging some things, but for the most part, I had the space already. So some stuff I wasn’t using got thrown into boxes to be moved into the garage. My CD collection similarly got moved into the garage (I have it on the computer, but need to retain the physical discs as they represent my “personal use license”), and lo and behold, I could set up my workstation.
The new workspace
One of my colleagues spotted the Indy and commented about the old classic SGI logo. Some might notice there’s also an O2 lurking in the shadows. Those who have known me for a while, will remember I did help maintain a Linux distribution for these classic machines, among others, and had a reasonable collection of my own:
My Indy, O2 and Indigo2 R10000
The Octane, booting up
These machines were all eBay purchases, as is the Sun monitor pictured (it came with the Octane). Sadly, fast forward a number of years, and these machines are mostly door stops and paperweights now.
The Octane’s and Indigo2’s demises
The Octane died when I tried to clean it out with a vacuum cleaner, without realising the effect of static electricity generated by the vacuum cleaner itself. I might add mine was a particularly old unit: it had a 175MHz R10000 CPU, and I remember the Linux kernel didn’t recognise the power management circuitry in the PSU without me manually patching it.
The Indigo2 mysteriously stopped working without any clear reason why, I’ve never actually tried to diagnose the issue.
That left the Indy and the O2 as working machines. I haven’t fired them up in a long time until today. I figured, what the hell, do they still run?
Trying the Indy and O2 out
Plug in the Indy, hit the power button… nothing. Dead as a doornail. Okay, put it aside… what about the O2?
I plug it in, shuffle it closer to the monitor so I can connect it. ‘Lo and behold:
The O2 lives!
Of course, the machine was set up to use a serial console as its primary interface, and boot-up was running very slow.
Booting up… very slowly…
It sat there like that for a while, figuring the action was happening on a serial port, I went to go get my null modem cable, only to find a log-in prompt by the time I got back.
Next was remembering what password I was using when I last ran this machine. We had the OpenSSL heartbleed vulnerability happen since then, and at about that time, I revoked all OpenPGP keys and changed all passwords, so it isn’t what I use today. I couldn’t get in as root, but my regular user account worked, and I was able to change the root password via sudo.
Remembering my old log-in credentials, from 22 years ago it seems
The machine soon crashed after that. I tried rebooting, this time I tweaked some PROM settings (and yes, I was rusty remembering how to do it) to be able to see what was going. (I had the null modem cable in hand, but didn’t feel like trying to blindly find the serial port at the back of my desktop.)
Changing PROM settings
The subsequent boot, and crash
Evidently, I had a dud disk. This did not surprise me in the slightest. I also noticed the PSU fan was not spinning, possibly seized after years of non-use.
Okay, there were two disks installed in this machine, both 80-pin SCA SCSI drives. Which one was it? I took a punt and tried the one furthest away from the I/O ports.
Success, she boots now
I managed to reset the root password, before the machine powered itself off (possibly because of overheating). I suspect the machine will need the dust blown out of it (safely! — not using the method that killed the Octane!), and the HDDs will need replacements. The guilty culprit was this one (which I guessed correctly first go):
a 4GB HDD was a big drive back in 1998!
The computer I'm typing this on has an HDD that stores 1000 of these drives. Today, there are modern alternatives, such as SCSI2SD, that could get this machine running fully if needed. The tricky bit would be handling the 80-pin hot-swap interface. There'd be some hardware hacking needed to connect the two, but AU$145 plus an adaptor seems like a safer bet than trying some random used HDD.
So, a replacement for the HDDs, a clean-out, and possibly a new fan or two, and that machine will be back to “working” state. Of course the Linux landscape has moved on since then: Debian no longer supports the MIPS4 ISA that the RM5200 CPU understands, Gentoo could still work on this though, and maybe OpenBSD still supports it too. In short, this machine is enough of a “go-er” that it should not be sent to land-fill… yet.
Turning my attention back to the Indy
So the Indy was my first SGI machine. Bought to better understand the MIPS processor architecture, and perhaps gain enough understanding to try and breathe life into a Cobalt Qube II server appliance (remember those?), it did teach me a lot about Linux and how things vary between platforms.
I figured I might as well pop the cover and see if there's anything “obviously” wrong. The procedure I was rusty on, but I recalled there was a little catch on the back of the case that needed to be released before the cover slid off. So I lugged the 20″ CRT off the top of the machine, pulled the non-functioning Indy out, and put it on the table to inspect further.
Upon trying to pop the cover (gently!), the top of the case just exploded. Two pieces of the top cover go flying, and the RF shield parts company with the cover’s underside.
The RF shield parted company with the underside of the lid
I was left with a handful of small plastic fragments that were the heat-set posts holding the RF shield to the inside of the lid.
Some of the fragments that once held the RF shield in place
Clearly, the plastic has become brittle over the years. These machines were released in 1993, I think this might be a 1994-model as it has a slightly upgraded R4600 CPU in it.
As to the machine itself, I had a quick sticky-beak, there didn’t seem to be any immediately obvious things, but to be honest, I didn’t do a very thorough check. Maybe there’s some corrosion under the motherboard I didn’t spot, maybe it’s just a blown fuse in the PSU, who knows?
The inside of the Indy
This particular machine had 256MB RAM (a lot for its day), 8-bit Newport XL graphics, the “Indy Presenter” LCD interface (somewhere, we have the 15″ monitor it came with — sadly the connecting cable has some damaged conductors), and the HDD is a 9.1GB HDD I added some time back.
Where to now?
I was hanging on to these machines with the thinking that someone who was interested in experimenting with RISC machines might want them — find them a new home rather than sending them to landfill. I guess that’s still an option for the O2, as it still boots: so long as its remaining HDD doesn’t die it’ll be fine.
For the others, there’s the possibility of combining bits to make a functional frankenmachine from lots of parts. The Indy will need a new PROM battery if someone does manage to breathe life into it.
The Octane had two SCSI interfaces, one of which was dead — a problem that was known-of before I even acquired it. The PROM would bitch and moan about the dead SCSI interface for a good two minutes before giving up and dumping you in the boot menu. Press 1, and it’d hand over to arcload, which would boot a Linux kernel from a disk on the working controller. Linux would see the dead SCSI controller, and proceed to ignore it, booting just fine as if nothing had happened.
The Indigo2 R10000 was always the red-headed step-child: an artefact of the machine's design. The IP22 design (Indy and Indigo2) was never designed with the intent of being used with an R10000 CPU, and the speculative execution features played havoc with caching on this design. The Octane worked fine because it was designed from the outset to run this CPU. The O2 could be made to work because of a quirk of its hardware design, but the Indigo2 was not so flexible, so kernel-space code had to hack around the problem in software.
I guess I’d still like to see the machines go to a good home, but no idea who that’d be in this day and age. Somewhere, I have a partial disc set of Irix 6.5, and there’s also a 20″ SGI GDM5410 monitor (not the Sun monitor pictured above) that, at last check, did still work.