May 22 2020

For the past 2 years now, there’s been quite a bit in the press about the next evolution of mobile telephony standards.

The 5G standard is supposed to bring with it higher speeds and greater user density handling. As with a lot of systems, “5G” itself describes a family of standards… some concern the use of millimetre-wave communications between tower and handset, others cover the channels at more modest frequencies in the high UHF bands.

One thing that I really can’t get my head around is the claims of health effects.

Now, these are as old as radio communications itself. And for sure, danger from radio transmissions does increase with frequency, proximity and transmit power. There is a reason why radio transmitter sites such as those that broadcast medium wave radio or television are fenced off: electrocution is a real risk at high power.

0G: glorified two-way radios

Mobile phones originally were little more than up-market cordless phones. They often were a luggable device if they were portable at all. Many were not: they were installed into a vehicle (hence “mobile”). There was no such thing as cell hand-over, and often incoming calls had to be manually switched.

Often the sets were half-duplex, and despite using a hand-set, would have a very distinctive “radio” feel to them, requiring the user to use a call-sign when initiating a call, and to press a push-to-talk button to switch between listening and talking modes.

These did not see much deployment outside the US or maybe Europe.

1G: cellular communications

Back in the late 80s, when AMPS mobile phones (1G) were little more than executive toys, there might not have been much press about it, but I’m sure there’d be anecdotal evidence of people being concerned about “radiation”.

If any standard was going to cause problems, it’d have been 1G, since the sets generally used much higher transmit power to compensate for the lack of coverage. They were little more than glorified FM transceivers with a little digital control channel on the side which implemented the selective calling and cell hand-off.

This was the first standard we saw here in Australia, and was the first to be actually practical. Analogue services didn’t last that long, and because of the expense of running AMPS services, they were mostly an expensive luxury. So that did limit their up-take.

2G: voice goes digital

The next big change was 2G, which replaced the analogue FM voice channel with digital modulation techniques. GSM (which used Gaussian Minimum Shift Keying) and CDMA (which used phase shift keying) encoded everything in a single digital transmission.

This meant audio could be compressed (with some loss in fidelity), and have forward error correction added to make the signal more robust to noise. The cells could handle more users than the 1G services could. Transmit power could be reduced, improving battery life and the sets became cheaper to make and services became more economical.

Then came all the claims that 2G was going to cause us to develop brain cancer.

Now, many of those 2G services started popping up in the mid 90s… has there been a mass pandemic of cancer cases? Nope! About the only thing GSM was bad for was its ability to leak into any audio frequency circuit.

2G went through a few sub-revisions, but it basically was AMPS done digitally, so fundamentally worked much the same. A sore point was how data was handled. 2G and its predecessors all tried to emulate what the wired network was doing: establishing a dedicated circuit between callers.

The Internet was really starting to get popular, and people wanted a way to access it on the move. GPRS did allow for some of that, but it really didn’t work that well due to the way 2G saw the world, so things moved on.

3G: packet switching

The big change here was the move from “circuits” to sending data around in packets. This is more like how the Internet operates, and so it meant the services could better support an Internet connection.

Voice still went the old-fashioned way, dedicated circuits, since the QoS (quality of service) could be better maintained that way.

The cells could support more users than 2G could, and the packet mode meant mobile Internet finally became a “thing” for most people.

I don’t recall there being the same concern about health as there was for 2G… it was probably still simmering below the surface. Services were deployed further afield and of course, the uptake continued.

4G: bye bye circuit switching

4G or LTE is the current standard that most of us are using. The biggest change is it ditches the circuit switching used in 1G, 2G and 3G. Voice is done using VoLTE… basically the voice call is sent the same way calls are routed over the Internet.

The cell towers are no longer trying to keep a “circuit” connected to your phone as you move around, instead it’s just directing packets. It’s your handset’s problem to sort out whether it heard a given packet already, or re-arrange incoming packets if they arrive out-of-order.

To make this work, obviously the latency inherent in 3G had to be addressed. As a sweetener, the speeds were bumped up, and the voice CODEC could be updated, so we gained wide-band voice calls. (Pity Bluetooth hasn’t kept up!)

5G: new frequencies, higher speed, smaller cells

So far, the cellular standards have largely co-existed in the same frequency bands. 4G actually varies quite a bit in frequency, but basically there are bands from the low UHF around 410MHz right up to microwave at 2600MHz.

Higher frequencies

5G has been contentious because some implementations of it reach even higher. Frequency Range 1 used in the 5G NR standard is basically much the same as 4G, but Frequency Range 2 soars as high as 40GHz.

Now, in terms of the electromagnetic spectrum, compared to other forms of radiation that we rely on for survival (and have done ever since life first began on this planet), this might as well be DC!

Infrared radiation, which is the very bottom of the “light” spectrum, starts at 300GHz. At this point, we typically stop talking in terms of frequency and instead consider wavelength (1mm in this case). Visible light is higher still, starting around 430THz (yes, that’s T for tera!).
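
(That 1mm figure is just the usual conversion at work: λ = c/f = (3×10^8 m/s) ÷ (300×10^9 Hz) = 0.001m, or 1mm.)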

Now, where do we start to worry about radiation? The nasty stuff begins with ultraviolet radiation, specifically UVC which is at a dizzying 1.1PHz (yes, that’s peta-hertz). It’s worth noting that UVB, which is a little lower in frequency, can cause problems when exposure is excessive… however it’s not all bad: you actually need some UVB exposure on your body to produce vitamin D for survival!

Dielectric heating

So that’s where the danger is in terms of frequency. I did mention that danger also increases with power… this is why microwave ovens, which typically operate at a fairly modest 2.4GHz frequency, pose a risk.

No, they won’t make you develop cancer, but the danger there is that when there’s a lot of power, it can cause dielectric heating. That is, it causes molecules to move around, and in doing so, collide, transferring energy which is then given off as heat. It happens at all frequencies in the EM spectrum, but it starts to become more practical at microwave frequencies.

To do something like cook dinner, a microwave oven bombards your food with hundreds of watts of RF energy. The microwave has a thick RF shield around it for a reason! If that shield is doing what it should, you might be exposed to no more than a watt of energy escaping the shield. Not enough to cause any significant heating.

I hear that if you put a 4W power amp on a 2.4GHz WiFi access point and put your hand in front of the antenna, you can “feel” framing packets. (Never tried this myself.) That’s pretty high power for most microwave links, and would be many orders of magnitude more than what any cell phone would be capable of.

Verdict: not a health risk

In my view, there’s practically no risk in terms of health effects from 5G. I expect my reasoning above will be thoroughly rubbished by those who are protesting against the roll-out.

However, that does not mean I am in favour of 5G.

The case against 5G

So I seem to be sticking up for 5G above, but let me make one thing abundantly clear: for us here in Australia, I do not think 5G is the “right” thing for us to use. It’s perfectly safe in terms of health effects, but simply the wrong tool for the job.

Small cells

Did I mention before the cells were smaller? Compared to its predecessors, 5G cells are tiny! The whole point of 5G was to serve a large number of users in a small area. Think of 10s of thousands of people crammed into a single stadium (okay, once COVID-19 is put to bed). That’s the use case for 5G.

5G’s range, when deployed on the lower bands, is about on par with 4G’s. Maybe a little better in certain ideal conditions, with higher speeds. This is the variant we’re most likely to see outside of major city CBDs. How reliable it is at that higher speed remains to be seen, as there’s a crazy amount of DSP going on to make stuff work at those data rates.

5G, when deployed on the mmWave bands, barely makes 500 metres. This will make deployment in the suburbs prohibitively expensive. Outdoor Wi-Fi or WiMAX might not be as fast, but would be more cost-effective!

Processor load

Did I mention the crazy amount of DSP going on? To process data streams that exceed 1Gbps, you’re doing a lot of processing to extract the data out of the radio signal. 5G leans heavily on MIMO for its higher speeds, basically dividing the high-rate stream into parts which are directed to separate antennas. This reduces the bandwidth needed to achieve a high data rate, but it does make processing the signal at the far end more complex.

Consequently, the current crop of 5G handsets run hot. How hot? Well, subject them to 29.5°C, and they shut down! Now, think about the weather we get in this country. How many days have we experienced lately where 29°C has been a daily minimum, not a maximum?

5G isn’t the future for Australia

We need a wireless standard that goes the distance, and can take the heat! 5G is not looking so great in this marathon race. Personally, I’d like to see more investment into the 4G services and getting those rolled out to more locations. There’s plenty of locations that are less than a day’s drive from most capital cities, where mobile coverage is next to useless.

Plenty of modern 4GX handsets also suffer technical elitism… they see 3G services, but then refuse to talk to them, instead dropping to -1G: brick emulation. There’s a reason I stick by my rather ancient ZTE T83 and why I had high hopes for the Kite.

I think for the most part, many of the wireless standards we see have been driven by Europe and Asia, both areas with high population densities and relatively cool annual temperatures.

It saddens me when I hear Telstra tell everybody that they “aspire” to be a technology company, when back in the early 90s, Telecom Australia very much was a technology company, and a well respected trail-blazing one at that! It’s time they pulled their finger out and returned to those days.

May 12 2020

So, the other day I pondered whether BlueTrace could be ported to an older device, or somehow re-implemented so it would be compatible with older phones.

The Australian Government has released their version of TraceTogether, COVIDSafe, which is available for newer devices on the Google and Apple application repositories. It suffers a number of technical issues, one glaring one being that even on devices it theoretically supports, it doesn’t work properly unless you have it running in the foreground and your phone unlocked!

Well, there’s a fail right there! Lots of people actually need to be able to lock their phones (e.g. as a condition of their employment, to prevent pocket dials, to save battery life, etc…).

My phone will never run COVIDSafe as provided. Even compiling it for Android 4.1 won’t be enough: it uses Bluetooth Low Energy, which is a Bluetooth 4.0 feature. However, the government did one thing right: they have published the source code. A quick fish-eye over the diff against TraceTogether suggests the changes are largely superficial.

Interestingly, although the original code is GPLv3, our government has decided to supply their own license. I’m not sure how legal that is. Others have questioned this too.

So, maybe I can run it after all? All I need is a device that can do BLE. That then “phones home” somehow, to retrieve tokens or upload data. Newer phones (almost anything Android-based) usually can do WiFi hotspot, which would work fine with an ESP32.

Older phones don’t have WiFi at all, but many can still provide an Internet connection over a Bluetooth link, likely via the LAN Access Profile. I think this would mean my “token” would need to negotiate HTTPS itself. Not fun on an MCU, but I suspect someone has possibly done it already on the ESP32.

Nordic platforms are another option if we go the pure Bluetooth route. I have two nRF52840-DK boards kicking around here, bought for OpenThread development, but not yet in use. A nicety is these do have a holder for a CR2032 cell, so can operate battery-powered.

Either way, I think it important that the chosen platform be:

  1. easily available through usual channels
  2. cheap
  3. hackable, so the devices can be re-purposed after this COVID-19 nonsense blows over

A first step might be to see if COVIDSafe can be cleaved in two… with the BLE part running on an ESP32 or nRF52840, and the HTTPS part running on my Android phone. Also useful would be some sort of staging server so I can test my code without exposing things. Not sure if there is such a beast publicly available that we can all make use of.

Guess that’ll be the next bit to look at.

May 03 2020

This afternoon, I was pondering how I might do text-to-speech, but still have the result sound somewhat natural. For what use case? Well, two come to mind…

The first being for doing “strapper call” announcements at horse endurance rides. A horse endurance ride is where competitors and their horses traverse a long (sometimes as long as 320km) trail through a wilderness area. Usually these rides (particularly the long ones) are broken up into separate stages or “legs”.

Upon arrival back at base, the competitor has a limited amount of time to get the horse’s vital signs into acceptable ranges before they must present to the vet. If the horse’s temperature or heart rate is too high, the competitor is “vetted out”.

When the competitor reaches the final check-point, ideally you want to let that competitor’s support team know they’re on their way back to base so they can be there to meet the competitor and begin their work with the horse.

Historically, this was done over a PA system, however this isn’t always possible for the people at base to achieve. So having an automated mechanism to do this would be great. In recent times, Brisbane WICEN has been developing a public display that people can see real-time results on, and this also doubles as a strapper-call display.

Getting the information to that display is something of a work-in-progress, but it’s recognised that if you miss the message popping up on the display, there’s no repeat. A better solution would be to “read out” the message. Then you don’t have to be watching the screen, you can go about your business. This could be done over a PA system, or at one location there’s an extensive WiFi network there, so streaming via Icecast is possible.

But how do you get the text into speech?

Enter flite

flite is a minimalist speech synthesizer from the Festival project. Out of the box it includes 3 voices, mostly male American voices. (I think the rms one might be Richard M. Stallman, but I could be wrong on that!) There’s a couple of demos there that can be run direct from the command line.

So, for the sake of argument, let’s try something simple, I’ll use the slt voice (a US female voice) and just get the program to read out what might otherwise be read out during a horse ride event:

$ flite_cmu_us_slt -t 'strapper call for the 160 kilometer event competitor numbers 123 and 234' slt-strapper-nopunctuation-digits.wav
slt-strapper-nopunctuation-digits.ogg

Not bad, but not that great either. Specifically, the speech is probably a little quick. The question is, how do you control this? Turns out there’s a bit of hidden functionality.

There is an option marked -ssml which tells flite to interpret the text as SSML. However, if you try it, you may find it does little to improve matters; I don’t think flite actually implements much of it.

Things are improved if we spell everything out. So if you instead replace the digits with words, you do get a better result:

$ flite_cmu_us_slt -t 'strapper call for the one hundred and sixty kilometer event competitor number one two three and two three four' slt-strapper-nopunctuation-words.wav
slt-strapper-nopunctuation-words.ogg

Definitely better. It could use some pauses. Now, we don’t have very fine-grained control over those pauses, but we can introduce some punctuation to have some control nonetheless.

$ flite_cmu_us_slt -t 'strapper call.  for the one hundred and sixty kilometer event.  competitor number one two three and two three four' slt-strapper-punctuation.wav
slt-strapper-punctuation.ogg

Much better. Of course it still sounds somewhat robotic though. I’m not sure how to adjust the cadence on the whole, but presumably we can just feed the text in piece-wise, render those to individual .wav files, then stitch them together with the pauses we want.

How about other changes though? If you look at flite --help, there are feature options which can control the synthesis. There’s no real documentation on what these do; what I’ve found so far came from grep-ing through the flite source code. Tip: do a grep for feat_set_, and you’ll see a whole heap.
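
For example, from the top of a flite source checkout, something like this will turn up the relevant feature names:

$ grep -rn 'feat_set_' .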

Controlling pitch

There are two parameters for the pitch… int_f0_target_mean controls the “centre” frequency of the speech in Hertz, and int_f0_target_stddev controls the deviation. For the slt voice, …mean seems to sit around 160Hz and the deviation is about 20Hz.

So we can, say, set the frequency to 90Hz and get a lower tone:

$ flite_cmu_us_slt --setf int_f0_target_mean=90 -t 'strapper call' slt-strapper-mean-90.wav
slt-strapper-mean-90.ogg

… or 200Hz for a higher one:

$ flite_cmu_us_slt --setf int_f0_target_mean=200 -t 'strapper call' slt-strapper-mean-200.wav
slt-strapper-mean-200.ogg

… or we can change the variance:

$ flite_cmu_us_slt --setf int_f0_target_stddev=0.0 -t 'strapper call' slt-strapper-stddev-0.wav
$ flite_cmu_us_slt --setf int_f0_target_stddev=70.0 -t 'strapper call' slt-strapper-stddev-70.wav
slt-strapper-stddev-0.ogg
slt-strapper-stddev-70.ogg

We can’t change these values during a block of speech, but presumably we can cut up the text we want to render, render each piece at the frequency/variance we want, then stitch those together.

Controlling rate

So I mentioned we can control the rate, somewhat coarsely, using the usual punctuation devices. We can also change the rate overall by setting duration_stretch. This basically controls how “long” we want to stretch out the pronunciation of words.

$ flite_cmu_us_slt --setf duration_stretch=0.5 -t 'strapper call' slt-strapper-stretch-05.wav
$ flite_cmu_us_slt --setf duration_stretch=0.7 -t 'strapper call' slt-strapper-stretch-07.wav
$ flite_cmu_us_slt --setf duration_stretch=1.0 -t 'strapper call' slt-strapper-stretch-10.wav
$ flite_cmu_us_slt --setf duration_stretch=1.3 -t 'strapper call' slt-strapper-stretch-13.wav
$ flite_cmu_us_slt --setf duration_stretch=2.0 -t 'strapper call' slt-strapper-stretch-20.wav
slt-strapper-stretch-05.ogg
slt-strapper-stretch-07.ogg
slt-strapper-stretch-10.ogg
slt-strapper-stretch-13.ogg
slt-strapper-stretch-20.ogg

Putting it together

So it looks as if all the pieces are there, we just need to stitch them together.

RC=0 stuartl@rikishi /tmp $ flite_cmu_us_slt --setf duration_stretch=1.2 --setf int_f0_target_stddev=50.0 --setf int_f0_target_mean=180.0 -t 'strapper call' slt-strapper-call.wav
RC=0 stuartl@rikishi /tmp $ flite_cmu_us_slt --setf duration_stretch=1.1 --setf int_f0_target_stddev=30.0 --setf int_f0_target_mean=180.0 -t 'for the, one hundred, and sixty kilometer event' slt-160km-event.wav
RC=0 stuartl@rikishi /tmp $ flite_cmu_us_slt --setf duration_stretch=1.4 --setf int_f0_target_stddev=40.0 --setf int_f0_target_mean=180.0 -t 'competitors, one two three, and, two three four' slt-competitors.wav
Above files stitched together in Audacity

Here, I manually imported all three files into Audacity, arranged them, then exported the result, but there’s no reason why the same could not be achieved by a program, I’m just inserting pauses after all.

There are tools for manipulating RIFF waveform files in most languages, and generating silence is not rocket science. The voice itself could be fine-tuned, but that’s simply a matter of tweaking settings. Generating the text is basically a look-up table feeding into snprintf (or its equivalent in your programming language of choice).
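
As a rough sketch of that idea, here’s how it might look in Python using just the standard library. The helper and file names are mine, and the settings are the ones used above:

#!/usr/bin/env python3
# Render each phrase with flite, then join them with silence between.
# Assumes flite_cmu_us_slt is in $PATH; names here are illustrative.
import subprocess
import wave

def render(text, filename, stretch, stddev, mean):
    subprocess.run([
        'flite_cmu_us_slt',
        '--setf', 'duration_stretch=%.1f' % stretch,
        '--setf', 'int_f0_target_stddev=%.1f' % stddev,
        '--setf', 'int_f0_target_mean=%.1f' % mean,
        '-t', text, filename,
    ], check=True)

def stitch(parts, pause_sec, out_filename):
    # Concatenate the .wav files, inserting pause_sec of silence between.
    with wave.open(parts[0], 'rb') as first:
        params = first.getparams()
    silence = b'\x00' * (int(params.framerate * pause_sec)
                         * params.sampwidth * params.nchannels)
    with wave.open(out_filename, 'wb') as out:
        out.setparams(params)
        for n, part in enumerate(parts):
            if n:
                out.writeframes(silence)
            with wave.open(part, 'rb') as wav:
                out.writeframes(wav.readframes(wav.getnframes()))

render('strapper call', 'call.wav', 1.2, 50.0, 180.0)
render('for the, one hundred, and sixty kilometer event',
       'event.wav', 1.1, 30.0, 180.0)
render('competitors, one two three, and, two three four',
       'competitors.wav', 1.4, 40.0, 180.0)
stitch(['call.wav', 'event.wav', 'competitors.wav'], 0.5, 'announcement.wav')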

It’d be nice to implement a wrapper around flite that took the full SSML or JSML text and rendered it out as speech, but this gets pretty close without writing much code at all. Definitely worth continuing with.

Apr 19 2020

COVID-19 is a nasty condition caused by the SARS-CoV-2 virus that has seen many a person’s life cut short. The virus, which originated in Wuhan, China, has one particularly insidious trait: it can be spread by asymptomatic people. That is, you do not have to be suffering symptoms to be an infectious carrier of the condition.

As frustrating as isolation has been, it’s really our only viable solution to preventing this infectious condition from spreading like wildfire until we get a vaccine that will finally knock it on the head.

One solution that has been proposed has been to use contact tracing applications which rely on Bluetooth messaging to detect when an infected person comes into contact with others. Singapore developed the TraceTogether application. The Australian Government look like they might be adopting this application, our deputy CMO even suggesting it’d be made compulsory (before the PM poured cold water on that plan).

Now, the Android version of this requires Android 5.1. My phone runs 4.1: I cannot run this application. Not everybody is in the habit of using Bluetooth, or even carries a phone. This got me thinking: can this be implemented in a stand-alone device?

The guts of this application is a protocol called BlueTrace which is described in this whitepaper. Reference implementations exist for Android and iOS.

I’ll have to look at the nitty-gritty of it, but essentially it looks like a stand-alone implementation on an ESP32 module may be a doable proposition. The protocol basically works like this:

  • Clients register using some contact details (e.g. a telephone number) to a server, which then issues back a “user ID” (randomised).
  • The server then uses this to generate “temporary IDs” which are constructed by concatenating the “User ID” and token life-time start/finish timestamps together, encrypting that with the secret key, then appending the IV and an authentication token. This BLOB is then Base64-encoded.
  • The client pulls down batches of these temporary IDs (forward-dated) to use for when it has no Internet connection available.
  • Clients then exchange these temporary IDs using BLE messaging.
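
Putting the temporary ID step into code makes it clearer. Here’s a rough sketch in Python with pycryptodome, based purely on my reading above: I’m assuming AES in GCM mode, and the field sizes are illustrative, so treat the whitepaper as the authority.

# Sketch of server-side "temporary ID" generation as described above.
# ASSUMPTIONS: AES-256-GCM and illustrative field sizes; check the
# BlueTrace whitepaper for the real construction.
import base64
import struct
import time
from Crypto.Cipher import AES  # pycryptodome

def make_temp_id(secret_key, user_id, start, expiry):
    # Concatenate the user ID with the life-time start/finish timestamps
    plaintext = user_id + struct.pack('>II', start, expiry)
    cipher = AES.new(secret_key, AES.MODE_GCM)
    ciphertext, auth_tag = cipher.encrypt_and_digest(plaintext)
    # Append the IV and authentication token, then Base64-encode the BLOB
    return base64.b64encode(ciphertext + cipher.nonce + auth_tag)

key = b'\x00' * 32   # 256-bit secret key (placeholder!)
uid = b'\x01' * 16   # randomised user ID (placeholder size)
now = int(time.time())
print(make_temp_id(key, uid, now, now + 15 * 60))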

This looks doable on an ESP32 module. The ESP32 could be loaded up with tokens by a workstation. You then go about your daily business, carrying this device with you. When you get home, you plug the device into your workstation, and it uploads the “temporary IDs” it saw.

I’ll have to dig out my ESP32 module, but this looks like a doable proposition.

Apr 11 2020

So, for years… decades even, our telephone service has been via the Public Switched Telephone Network. Originally intended for just voice traffic, this later became our Internet connection, using dial-up modems, then using ADSL.

The number itself gained two digits in its life-time: originally 6 digits (late 70s/early 80s), it gained a 0 at some point, then in 1996 a 3 was prepended to all Brisbane numbers.

So yeah, we’ve had our phone a long time, and the underlying technology has remained largely the same for that time period. Even the handset is the same one from all those years ago. Telecom Australia used to pay for it as a “priority service” back then as my father was working for them at the time.

By September, this will change. The National Broadband Network is in our suburb, and a little while back I migrated the ADSL2+ connection over to HFC. After a brief hiccup getting OpenBSD talking to it, we were away. It’s been pretty stable so far. Stable enough now that I haven’t had the ADSL modem connected in weeks.

At the time of migration, I could have migrated the telephone over to NBN phone as well; however, there’s a snag: how do you access NBN phone? The answer is the ISP sends you a pre-configured ATA+Router+WiFi AP (and sometimes there’s an ADSL modem in there too). As I had decided to use my own router, I didn’t have such a device.

I asked Internode about the SIP details, and the situation is this: when they provision an NBN connection, there’s an automated process that configures their VoIP service then stores the credentials and other settings in an ACS server. They have no visibility of these credentials at all. Ordinarily, if I had purchased hardware, they would have provisioned that with credentials that would allow it to authenticate over the TR-069 protocol. So without spending $200 on a device I wasn’t going to use, I wasn’t getting these credentials.

There’s another option though: VoIP. The NBN phone is actually a VoIP service, so fundamentally nothing much changes. Keeping the services separate though does give me flexibility in the future. There’s a list of VoIP providers as long as my arm that I can go with.

VoIP is notoriously difficult to set up though, and I didn’t want to mess up our only incoming telephone service, so I decided to migrate the NBN phone number over to Internode’s NodePhone VoIP service, which would allow me to experiment.

Requirements

So, first thing to consider is what my needs are. We have 4 devices that are plugged into the telephone line at present:

  • Telecom Australia Touchfone 200 wired telephone
  • Telstra 9200a cordless telephone base-station
  • Epson WF-7510 printer/scanner/fax
  • Maestro Jetstream 56kbps modem

Now, we don’t send many faxes (maybe two a year), and the last time that modem was used, it was to dial into a weighbridge system in Rockhampton after my workplace had moved to VoIP.

That weighbridge system was originally built in 1995 on top of computers running SCO OpenServer 5 and using SCO UUCP over dial-up lines running at 1200 baud (some of the nodes were at remote sites with dodgy phone lines). In 2013 the core servers were upgraded with new hardware and Ubuntu 12.04LTS, but the UUCP links remained as the SCO boxes at sites were slowly replaced. mgetty would answer the phone, and certain user accounts would use uucico as the “shell”. For admin purposes, we could log in, and that would give us a BASH prompt.

Any time we had a support issue at work, muggins would be the one to literally “dial in” to that site. While it’s been years since I’ve needed to touch it (and I think now there’s a VPN to that site), I wanted to retain dial-up capability if needed.

As for the faxes… the stuff we’re doing can be done over email. Getting the modem and the fax machine working is a stretch goal.

99% of the traffic will be voice traffic. I’m not sure if it’s possible to use double-adaptors with an ATA, and so for hardware I opted for the Grandstream HT814. These are available domestically (e.g. MyITHub) for under AU$130 and looked to be pretty good bang-per-buck. I just wasn’t sure how well it’d get along with the old T200.

As a contingency plan, I also ordered an IP phone, a Grandstream GXP1615 which is also available locally. That way if the T200 gave me problems, I’d just wire up Ethernet to the old point where the T200 was and put the GXP1615 there.

Since I’ve got two SIP devices, I need to be able to control incoming and outgoing call flows. So that means running a soft-PBX somewhere. Asterisk is the obvious choice here, being a free-software VoIP package with lots of flexibility. OpenBSD 6.6 ships with Asterisk 16.6.2.

Opening the account

Opening the account is simple enough. As I was porting the NBN phone number over, I just needed some details off my last Internode bill. Later, when I go to do the house phone some time next financial year, I’ll be after a Telstra bill (assuming Telstra Wholesale have sorted out their COVID-19 issues by then).

I looked at the hardware options that Internode offered… and again, it was basically the same as what they offer for their Internet service.

One thing I note is that while Internode has been a great ISP (I switched to them in 2012), their credentials as a VSP are still developing somewhat.

Initial connection

Provisioning was much less smooth than my initial ADSL connection (which was practically seamless) or the subsequent move to HFC NBN. The first bump in the road was when they emailed me:

Subject: Internode: Your Internode NodePhone VoIP service xxxxxxxxxx can’t make/receive calls yet [xxxxxxxxx]

Following activation of your NBN service, we ran some tests to make sure things were running smoothly.

We weren’t able to detect that your NodePhone VoIP service (phone number xxxxxxxxxx) has been set up successfully.

Initial contact regarding the telephone service…

Okay, fair enough, on the date I received that email I had only just received the ATA that I had purchased (through another supplier). Evidently they thought I had the hardware already. I found some time and quickly cobbled together a set-up:

First test set-up: siproxd on the border router to the HT814

I did some quick research; rather than doing NAT, I figured siproxd was closer to my eventual goals, so I installed that on the border router as there were fewer knobs and dials to deal with. I configured the HT814 with the settings as best I understood them from Internode’s guides, with one difference: I set the Proxy field to my border router’s internal IP address.

This failed with Internode’s server giving me a 404: Not Found response when my HT814 sent its REGISTER request. I took some captures using tshark and reported this back to their helpdesk. I disconnected the ATA since there was no sense in banging on their front door every 20 seconds: it wasn’t working.

They got back to me a few days later and told me I had the right user name, and that my number still wasn’t registered. Plugging the ATA in, same response, 404: Not Found. In exasperation, I tried some changes:

  • Different settings on the HT814 (too many to list)
  • Trying with Twinkle on my laptop connected to the DMZ, both via siproxd, and also with siproxd shut down and NAT set up.
  • Installing Asterisk on the border router (which was in the plans) and trying to set that up.

All three hit the same problem, 404: Not Found. I figured either I had entered the same wrong details 3 times, or it was definitely their end. So I reported that, unplugged the ATA and waited. This was on the 27th March.

On the 31st March, I got an email from Internode Provisioning to say they would be porting the number soon. After receiving this email, REGISTER worked; I was getting 200: OK in reply. Funnily enough, there was no challenge for credentials: it just saw my end and said “OK, you’re in”.

Calling outbound after this worked: the call trace showed the Internode end challenging the ATA for credentials, but once supplied, it connected the call. Inbound calls were a different matter though: a recorded (female) voice announced that the number was invalid. We were half-way there.

The next day, that message changed: it was now a recorded (male) voice announcing the number was unavailable, and offering to leave a voice-mail message. Progress. I had not changed my end; I still had T200 → HT814 → siproxd on the border router as my set-up. I was not seeing any incoming traffic being blocked by pf, although I was seeing people playing with sipvicious. I decided to firewall off my SIP ports, only exposing them to Internode’s SIP server.

Later on, the help-desk got back to me. Despite their server telling me 200: OK when I sent a REGISTER, the number was still “not registered”. Confusing!

On a whim, I ripped out siproxd again and set up NAT on the border router. BINGO! We registered, and incoming calls worked. Internode use Broadsoft’s BroadWorks platform for their NodePhone VoIP service, and something about the operation of siproxd confused it. It then sent a misleading response leading my devices to think they were registered when they were not!

At this point, it was time to do two things: uninstall siproxd, and start reading up on Asterisk.

Configuring Asterisk

Asterisk is regarded as the “swiss army knife” of VoIP. It can be configured to do a lot. Officially, it is supported on Linux i386 and AMD64 platforms, but there are builds of it for platforms such as the Raspberry Pi.

OpenBSD also builds and ships it in the ports tree. I wanted to avoid the pain of NAT, and I also wanted to avoid the telephone service going off-line if my cluster went haywire. So installation of this on the border router was a no-brainer. Yes, it’s adding a bit more attack surface to that box, but with appropriate firewalling rules, this could be managed.

As for configuration, there were two SIP channel drivers I could use: either the old chan_sip method, or the newer res_pjsip method. I had seen guides that discussed the older method (e.g. OCAU, WP), however in the back of my mind was the question: “how long do Digium plan to keep supporting two methods?” Thus from the outset I decided to use res_pjsip.

Audio CODECs

This is a pretty big aspect of configuration and should not be skimped on. You’ll find even if you buy all your equipment from one supplier, there are differences in what audio CODECs are supported. For instance the HT814 supports the OPUS CODEC (something I’d like to take advantage of eventually) but the GXP1615 does not. Some CODECs require patent licenses (e.g. SILK, SIREN7, G.722.1, G.722.2/AMR-WB), some did require patent licenses but are now “free” (G.729, G.723.1) and some are open-source (OPUS, Speex).

Some also go by multiple names. G.711a is also called PCMA and G.711u is PCMU. Pretty much everything supports these, and depending on the country you’re in, one will be preferred over the other. In my case, G.711a is the preferred option.

Your voice provider is a factor here too. Some only support particular CODECs, some will allow any CODEC. NodePhone VoIP allegedly will allow you to use whatever you like, but calls into and out of their network to non-VoIP targets may be restricted to narrow-band CODECs.
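
For what it’s worth, in pjsip.conf the CODECs an endpoint may use are listed with disallow/allow lines, in order of preference. A minimal example (the endpoint name is just a placeholder):

[Provider-endpoint]
type = endpoint
disallow = all   ; start from a clean slate
allow = alaw     ; G.711a, preferred here in Australia
allow = ulaw     ; G.711u as a fall-back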

Internode-specific gotcha: G.729

One gotcha I stumbled on the hard way was that sometimes SIP implementations do not play by the rules.

Specifically, I found once I got Asterisk installed, incoming calls would be immediately hung up on the moment I answered. Mobile or PSTN didn’t matter. I fired up tshark again and dug into the problem. The only clue I had was this error message:

[Apr  5 16:27:40] WARNING[-1][C-00000001] channel.c: Unable to find a codec translation path: (g729) -> (alaw)

Even if I specified to only use G.711a (alaw), somehow I’d still get that message. I asked about it on the Asterisk forum. I was seeing this pattern in my SIP traffic:

|Time     | ${PROVIDER}                           |
|         |                   | ${ASTERISK_BOX}   |                   
|0.000000 |         INVITE SDP (g711A g7          |SIP INVITE From: <sip:${MYPSTNNUM}@${PROVIDER}29;user=phone> To:"${PROVIDER_NAME}"<sip:${MYSIPNUM}@${PROVIDER_DOMAIN}> Call-ID:BW061858648050420-1965318496@${PROVIDER}   CSeq:612303213
|         |(5060)   ------------------>  (5060)   |
|0.003443 |         100 Trying|                   |SIP Status 100 Trying
|         |(5060)   <------------------  (5060)   |
|0.025416 |         180 Ringing                   |SIP Status 180 Ringing
|         |(5060)   <------------------  (5060)   |
|0.954015 |         200 OK SDP (g711A g7          |SIP Status 200 OK
|         |(5060)   <------------------  (5060)   |
|0.986287 |         RTP (g711A)                   |RTP, 6 packets. Duration: 0.099s SSRC: 0x4F53507
|         |(27642)  <------------------  (18024)  |
|1.092729 |         RTP (g729)                    |RTP, 3 packets. Duration: 0.041s SSRC: 0xCD74F85
|         |(27642)  ------------------>  (18024)  |
|1.097523 |         ACK       |                   |SIP Request INVITE ACK 200 CSeq:612303213
|         |(5060)   ------------------>  (5060)   |
|1.099552 |         BYE       |                   |SIP Request BYE CSeq:29850
|         |(5060)   <------------------  (5060)   |
|1.149521 |         200 OK    |                   |SIP Status 200 OK
|         |(5060)   ------------------>  (5060)   |

I was reliably informed that this was definitely against the rules. So another help-desk email informing them of the problem. If you’re an Internode NodePhone VoIP customer, and you see the above behaviour, these are your options:

  1. Purchase and install Digium’s G.729 CODEC: only an option if you are running Asterisk on an i386 or AMD64-based Linux machine.
  2. If, like me, you’re running Asterisk on something else, or you despise proprietary software, there is an open-source G.729 CODEC. On OpenBSD, install the asterisk-g729 package.
  3. Asterisk have added a work-around in later versions of their code. Patches exist for version 17, 16 and 13. It is also included in 13.32.0, 16.9.0 and 17.3.0. OpenBSD 6.7 will likely ship with a version of Asterisk that includes this work-around.
  4. You can also complain to their help-desk. We can work around their problem, but really, it’s their end that’s doing the wrong thing; they should fix it.
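
For the record, option 2 on OpenBSD is a one-liner (run as root):

# pkg_add asterisk-g729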

Dial Plans

The other big thing to consider is your dial-plan layout. Every SIP endpoint is assigned a context which is used in extensions.conf to determine what is meant when a particular sequence of digits is entered or how a call should be routed.

The pattern syntax is documented on their Pattern Matching page. Notably, extensions are “literal” if they do not begin with an underscore (_), and an X matches any numeric digit. So an example:

exten => 123456,1,DoSomething()
exten => 123987,1,DoSomethingDifferent()
exten => _123XXX,1,DoSomethingElse()

The non-pattern extensions take precedence. So the number 123456 would exactly match the first extension listed above and would call DoSomething(). The number 123987 would exactly match the second entry and would call DoSomethingDifferent(). Any other 6-digit number beginning with 123 would trigger DoSomethingElse().

Avoiding expensive mistakes

Before doing anything, it’s worth reading Asterisk’s page on Dial plan Security. You do NOT want someone to call your PBX, then dial outbound to expensive international numbers and run up a big phone bill! In particular, you want to keep the default context as lean as possible! It’s fine to put all your internal extensions in default, but anything that dials outside should be in a separate context reserved for your internal extensions.

Dialling more than one phone at a time

It is possible to have a dial-plan entry ring multiple phones in parallel by separating the endpoints with & symbols, but don’t put spaces around the & characters! So Dial(PJSIP/ep1&PJSIP/ep2&PJSIP/ep3), not Dial(PJSIP/ep1 & PJSIP…). The most obvious use case for this is when someone rings the home number and you want all phones to ring.

Incoming calls in the dial plan

When a call comes in from outside, the number dialled by the outside party (i.e. your number) appears as the extension dialled.

A simple option is to just ring everyone… so if your phone number was, say, 0735359696, you’d create a context in your extensions.conf like this:

; Incoming calls
[incoming]
exten => 0735359696,1,Dial(PJSIP/ep1&PJSIP/ep2&…)

… then in pjsip.conf, assign that context to the endpoint:

[Provider-endpoint]
type=endpoint
context = incoming
; … etc

Outgoing calls

Usually you want to be able to ring outbound too. Some guides suggested the pattern _X! can be used to “match all”, but I couldn’t get this to work.
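
One obvious fallback is to spell the patterns out explicitly. A sketch, untested against NodePhone (the endpoint name and patterns are examples only: adjust them to your provider and local numbering, and keep them out of the default context per the security notes above):

; extensions.conf: outbound dialling, for internal extensions only
[internal]
include => default
; 10-digit national numbers, e.g. 0735359696
exten => _0XXXXXXXXX,1,Dial(PJSIP/${EXTEN}@Provider-endpoint)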

Fax over IP

I mentioned that we had a fax machine we wanted to continue using. Whilst we don’t use it every day, it’s nice to know it is there if we want to use it.

Likewise with the dial-up modem. Even if I needed to do reduced-speed for it to work, it’d be better than nothing for situations where I need to use dial-up. It’ll never connect to the Internet again, but that doesn’t make it useless.

There are two ways to do Fax over IP. One is to use G.711u and hope for the best. The other is to use T.38, where the ATA basically “spoofs” the fax modem at the remote end, decodes the symbols being sent back to digital data, encodes that in UDP packets and transmits those over the Internet. The other end then reverses the process.
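
On the Asterisk side, T.38 is a per-endpoint setting in pjsip.conf. A sketch of the relevant options as I understand them (verify these against the sample configs shipped with your version):

[ata-endpoint]
type = endpoint
t38_udptl = yes            ; negotiate T.38 via re-INVITE
t38_udptl_ec = redundancy  ; error-correction scheme
fax_detect = yes           ; re-INVITE when fax tones are detected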

The good news is there are diagnostic tools out there for testing purposes. You don’t (yet) have to know of someone who has a working fax. Here in Australia, there’s FOLDS-B.

I’d recommend if you’ve got multiple fax-capable devices though, you plug two of them into separate ports on one or more ATAs and try faxing from one extension to the other. I tried calling FOLDS-B directly at first with no luck, then tried setting up Asterisk with an extension that calls SendFax to send a TIFF image, but had no luck — handshaking would be cut short after a second.

Using two internal endpoints saved a lot of phone calls externally and allowed me to “prove” the ATA could work for this.

Once you’ve got things working locally, you’re ready to hit the outside world. You’ll need a fairly complex image to send as it needs to be transmitting for ~70 seconds. I tried a few pages (e.g. this one) on the flatbed scanner of the WF-7510 without much luck… the fax would barely muster 20 seconds of data for them even at “fine” levels.

I had better luck using the 56kbps modem and a copy of efax-gtk. I took RCA’s test card image off Wikipedia as an SVG, loaded that up into The Gimp, rendered the SVG at 600dpi, extended the image size to A4-paper size, rotated it to portrait mode, then converted it to 1-bit-per-pixel monochrome with patterned dithering and saved it as an uncompressed TIFF.

I then passed this through tiff2ps which gave me this file:

Sending that at 14400bps took 72 seconds. Perfect! Then the reply came back. Fax reception is very much a work-in-progress, so rather than receive on the VoIP line, I decided to use the PSTN to receive the reports. This was accomplished by specifying my PSTN phone number in the fax identity (FOLDS-B evidently doesn’t check this against caller ID).

I tried again, and this time, received the report. FOLDS-B got a garbled mess apparently. The suggestion was to set the speed to 4800bps. In efax this is accomplished by setting the modem capabilities:

CAPABILITIES
       The capabilities of the local hardware and software can be set using a string of 8 digits separated by commas:

       vr,br,wd,ln,df,ec,bf,st

       where:

       vr  (vertical resolution) =
                0 for 98 lines per inch
                1 for 196 lpi

       br  (bit rate) =
                0 for 2400 bps
                1 for 4800
                2 for 7200
                3 for 9600
                4 for 12000 (V.17)
                5 for 14400 (V.17)

So in this case, I should set the capabilities to 1,1,0,2,0,0,0,0. I tried this, but then found the call just didn’t negotiate either. On a whim again I tried 7200bps (that’s 1,2,0,2,0,0,0,0). Eureka! I got this:

A pretty much “perfect” FOLDS-B report at 7200bps.

The transmission level and SNR are pretty much spot-on. Emboldened by this, I tried again faxing the same image (which I had printed out) from the WF-7510. I chose “Photo” quality and scanned it on the flat-bed. This didn’t work so well — evidently some detail was missed and it only transmitted for 60 seconds so I tried again, and faxed something else for the second sheet.

FOLDS-B wasn’t happy about the second page, but at least it had something to go on. The fax modem in the WF-7510 doesn’t appear to have a speed control other than turning V.34 mode (33.6kbps) on/off. It appears it tried sending at 14400bps. FOLDS-B tells a sorry tale:

An ugly 14400bps transmission

Now it is possible to get near perfect results at 14400bps through an ATA that supports T.38. Maybe a longer transmission on the first page might have helped, but it’s fair to say the speed is not helping matters.

Transmit level is very quiet, and I can’t see a way to adjust this on the WF-7510. Nor can I force it to V.29 (7200bps) mode.

There’s allegedly a “reference” test sheet you can get by “receive polling” 019725112. When I ring this number (with a fax or telephone) I get a recorded message saying the number is not valid. If anyone knows what the correct number is, or how to get a copy of this reference sheet, it’d be greatly appreciated.

So yes, Fax over IP is doable… a lot of pissing about with ATA settings… and you better hope the fax machine itself has lots of knobs and dials. Hylafax + t38modem will probably crack this nut. Then again, so will email.

Configuration Settings

Asterisk Configuration

So whilst my set-up is still a work-in-progress, I figured I’d post what I have here.

Most of these are defaults shipped with OpenBSD 6.6. Specifically of interest would be pjsip.conf and extensions.conf.

pf firewall rules

You’re going to want to pierce your firewall appropriately to expose yourself just enough for your VoIP service to work.

# Define some interfaces
internal=em0

# SIP addresses
sip_provider=203.2.134.1  # sip.internode.on.net
sip_endpoints="{ 10.0.0.3, 10.0.0.4 }"

# Allow SIP from router to SIP devices/softphones and vice versa
pass in on $internal proto udp from $sip_endpoints to self port { 5060, 10000:30000 }
pass out on $internal proto udp from self port { 5060, 10000:30000 } to $sip_endpoints

# Allow SIP from Internode to self and vice versa.  This could be
# tightened up a lot further.  Maybe try some calls both ways and log
# the traffic to see which specific ports.  "All UDP ports" is in line
# with Internode recommendations:
# https://www.internode.on.net/support/faq/phone_and_voip/nodephone/troubleshooting_nodephone/#Do_I_have_to_open_up_any_ports_i
pass in on egress inet proto udp from $sip_provider to self
pass out on egress inet proto udp from self to $sip_provider

Grandstream HT814 settings

Some notable settings first before I dump full screenshots…

Ring Cadence Settings

A special thank-you to @scottsip on the Grandstream forums for pointing me to this document from the ITU (Page 4 has the settings for Australia). Also worth mentioning is this Whirlpool forum thread and this review/teardown of the Grandstream HT802. The ones marked (?) I have no idea what they’re for.

  • System Ring Cadence: c=400/200-400/2000;
  • Dial Tone: f1=400@-12,f2=425@-12;
  • Busy Tone: f1=425,c=38/38;
  • Reorder Tone: f1=480,f2=620,c=25/25; (?)
  • Confirmation Tone: f1=350@-11,f2=440@-11,c=100/100-100/100-100/100; (?)
  • Call Waiting Tone: f1=425,c=20/20-20/440;
  • Prompt Tone: f1=350@-17,f2=440@-17,c=0/0; (?)
  • Conference Party Hangup Tone: f1=425@-15,c=600/600;

Notable fax-related settings

I changed lots of these trying to get something working, so if you set this and still don’t get any joy, check the screenshots below.

  • Fax Mode: T.38
  • Re-INVITE After Fax Tone Detected: Enabled
  • Jitter Buffer Type: Fixed
  • Jitter Buffer Length: Low (note, I can get away with this because it’s Ethernet LAN from ATA to Asterisk box)
  • Gain:
    • TX: 0dB
    • RX: 0dB
  • Disable Line Echo Canceller: Yes
  • Disable Network Echo Suppressor: Yes

Screenshots

Grandstream GXP1615 Settings

These are much the same as the HT814 above. The cadence settings use a slightly different syntax (they don’t have the @nn parts) and there are more tone settings. Again, the ones marked with (?) have an unknown purpose.

  • System Ringtone: c=400/200-400/2000;
  • Dial Tone: f1=400,f2=425;
  • Second Dial Tone: f1=450,f2=425; (?)
  • Message Waiting: f1=525;
  • Ring Back Tone: f1=1209,f2=852,c=20/20-20/200;
  • Call-Waiting Tone: f1=425,c=20/20-20/440;
    • Gain: Low
  • Busy Tone: f1=425,c=38/38;
  • Reorder Tone: f1=480,f2=620,c=25/25; (?)
GXP1615 ringtone settings

The steps forward

Things I need to figure out with this system:

  • Feature Codes: these are used to signal events like “transferring calls”, “park”, and other features you might want to initiate in the PBX. They are normally signalled using DTMF tone sequences, however this can give problems if your tones clash with some IVR you’re interacting with. I’m currently researching this area.
  • OPUS support: I mentioned the HT814 supports it, and as it’s an open CODEC I’d like to support it. There is this project which provides OPUS support to Asterisk. I’ll probably look at writing an OpenBSD port for it.
  • Voice mail and IVR… I’m thinking something along the lines of “Dial the year of birth of the person you wish to reach”… then that can ring the relevant phones or offer to leave a message.
  • I’d like to test wideband audio at some point. Work seems to have G.722 on their phones (or maybe it’s OPUS?); I should see if wideband pass-through in fact works.
Mar 02 2020

This is just a short note… today we finally made the jump across to the NBN. We’re running hybrid-fibre coax here in The Gap, and right now the HFC NTD is running off mains power (getting that onto solar will be a future project).

I already had my OpenBSD 6.6 router terminating the PPPoE link over the ADSL service. The router itself is a PC Engines APU2, which features 3 Ethernet ports. em0 faces my DMZ and internal network, em1 is presently plugged into the ADSL modem/router, and em2 was spare.

Thus, my interface configuration looked like this:

# /etc/hostname.em0
inet 10.20.50.254 255.255.255.0
inet6 alias 2001:44b8:21ac:70f9::fe 64
# … and a stack of !route commands for the internal subnets

# /etc/hostname.em1
up

# /etc/hostname.pppoe0
# pppoedev em1: ADSL
# pppoedev vlan2: NBN
inet 0.0.0.0 255.255.255.255 NONE \
        pppoedev em1 authproto chap \
        authname user@example.com \
        authkey mypassword up

… and of course /etc/pf.conf was configured with appropriate rules for my network. For the NBN, I read up that VLAN #2 was required, so I set up the following:

# /etc/hostname.em2  
up

# /etc/hostname.vlan2                                                                                                
vnetid 2
parent em2
up 

I then changed /etc/hostname.pppoe0 to point to vlan2 instead of em1. When the NBN NTD got installed, I tried this out… no dice: there were PADI frames being sent, but nada, nothing coming back.

Digging around, I needed to set the transmit priority, so I amended /etc/hostname.vlan2:

# /etc/hostname.vlan2                                                                                                
vnetid 2
parent em2
txprio 1    # ← ADD THIS
up 

Bingo! I was now seeing PPPoE traffic. However I wasn’t out of the woods: nothing behind the router was able to get to the Internet. Turns out pf needs to be told what transmit priority to use. I amended my /etc/pf.conf:

# Scrub incoming traffic
match in all scrub (no-df)

# Set pppoe0 priority
match out on $external set prio 1  # ← ADD THIS

# Block all traffic by default (paranoia)
block log all
#block all

That was sufficient to get traffic working. I’m now getting the following out of SpeedTest.

~25Mbps down / ~18Mbps up on Internode HFC NBN

The link is theoretically supposed to be 50Mbps… but whatever. The primary concern is that it didn’t suddenly drop to 0Mbps when the plug got pulled in September. I’ll check again when things are “quieter” (it’ll be peak periods now), but as far as I’m concerned, this is a matter of ensuring continued service.

It already outperforms the ADSL2+ link (which was about 15Mbps / 2Mbps). Next stop will be to port the old telephone number over, but that can wait another day!

Nov 28 2019

This is more of a brain dump for yours truly, since in my day job I’m dealing with OpenThread a lot, and there are lots of little tricks out there for building it on various platforms and configurations. The following is not so much a how-to guide, but a quick brain dump of different things I’ve learned over the past year or so of messing with OpenThread.

Verbose builds

To get a print out of every invocation of the toolchain with all flags, specify V=1 on the call to make:

$ make -f examples/Makefile-${PLATFORM} …${ARGS}… V=1

Running one step at a time

To disable the parallel builds when debugging the build system, append BuildJobs=1 to your make call:

$ make -f examples/Makefile-${PLATFORM} …${ARGS}… BuildJobs=1

Building a border router NCP image

General case

$ make -f examples/Makefile-${PLATFORM} …${ARGS}… BORDER_AGENT=1 BORDER_ROUTER=1 COMMISSIONER=1 UDP_FORWARD=1

For TI CC2538

# Normal CC2538 (e.g. Zolertia Firefly)
$ make -f examples/Makefile-cc2538 BORDER_AGENT=1 BORDER_ROUTER=1 COMMISSIONER=1 UDP_FORWARD=1

# CC2538 + CC2592
$ make -f examples/Makefile-cc2538 BORDER_AGENT=1 BORDER_ROUTER=1 COMMISSIONER=1 UDP_FORWARD=1 CC2592=1

If you want to run the latter image on a WideSky Hub, you’ll need to edit examples/platform/cc2538/uart.c and comment out line 117 before compiling, as the hub uses a simple diode for 5V to 3.3V level shifting, and this requires the internal pull-up to be enabled:

115     // rx pin
116     HWREG(IOC_UARTRXD_UART0) = IOC_PAD_IN_SEL_PA0;
117     HWREG(IOC_PA0_OVER)      = IOC_OVERRIDE_DIS; // ← comment out this to allow UART RX to work
118     HWREG(GPIO_A_BASE + GPIO_O_AFSEL) |= GPIO_PIN_0;

For Nordic nRF52840

# Nordic development board (PCA10056) via J2 (near battery)
$ make -f examples/Makefile-nrf52840 BORDER_AGENT=1 BORDER_ROUTER=1 COMMISSIONER=1 UDP_FORWARD=1

# Ditto, but instead using J3 (near middle of bottom edge)
$ make -f examples/Makefile-nrf52840 BORDER_AGENT=1 BORDER_ROUTER=1 COMMISSIONER=1 UDP_FORWARD=1 USB=1

# Nordic dongle (PCA10059)
$ make -f examples/Makefile-nrf52840 BORDER_AGENT=1 BORDER_ROUTER=1 COMMISSIONER=1 UDP_FORWARD=1 USB=1 BOOTLOADER=USB

I’m working on what needs to be done for the Fanstel BT840X… watch this space.

Building certification test images

For CC2538

$ make -f examples/Makefile-cc2538 BORDER_ROUTER=1 COMMISSIONER=1 DHCP6_CLIENT=1 JOINER=1

Running CI tests outside of Travis CI

You will need:

  • Python 3.5 or later. 3.6 recommended.
  • pycryptodome (not pycrypto: if you get an AttributeError referencing AES.MODE_CCM, that’s why!)
  • enum34 (for now… I suspect this will disappear once the Python 2.7 requirement is dropped for Android tools)
  • ipaddress
  • pexpect

The test suites work by running the POSIX ot-cli-${TYPE} (where ${TYPE} is ftd or mtd).

Running tests

From the root of the OpenThread tree:

$ make -f examples/Makefile-posix check

Making tests

The majority of the tests lurk under tests/scripts/thread-cert. Don’t be fooled by the name: lots of non-Thread tests live there.

The file node.py wraps pexpect up in an object with methods for calling the various CLI commands or waiting for things to be printed to the console.

The tests themselves are written using the standard Python unittest framework, with setUp creating a few Node objects (node.py) and tearDown cleaning them up. The network is simulated.

The test files must be executable, and call unittest.main() after checking if the script is called directly.
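
Putting those pieces together, the skeleton of a new test looks something like this. It’s a sketch from memory: check node.py in your tree for the actual constructor and method names.

#!/usr/bin/env python3
# Skeleton of a thread-cert style test.  ASSUMPTION: node.Node(nodeid)
# and Node.destroy() as per node.py; verify against your checkout.
import unittest

import node

LEADER = 1
ROUTER = 2

class TestSomething(unittest.TestCase):
    def setUp(self):
        # One simulated node per role
        self.nodes = {n: node.Node(n) for n in (LEADER, ROUTER)}

    def tearDown(self):
        for n in self.nodes.values():
            n.destroy()

    def test_something(self):
        # Drive the CLI through the Node helper methods here.
        pass

if __name__ == '__main__':
    unittest.main()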

Logs during the test runs

During the tests, the logs are stashed in build/${CHOST}/tests/scripts/thread-cert and will be named ${SCRIPTNAME}.log (e.g. test_coap_observe.py.log). Also present is the .pcap file (packet dump).

Logs after the tests

The test suite will report the logs are at tests/scripts/thread-cert/test-suite.log. This is relative to the build directory… so look in build/${CHOST}/tests/scripts/thread-cert/test-suite.log.

make pretty with clang 8.0

Officially this isn’t supported, but you can “fool” OpenThread’s build system into using clang 8.0 anyway:

$ cat ~/bin/clang-format-6.0 
#!/bin/bash

if [ "$1" == "--version" ]; then
        echo "clang-format version 6.0"
else
        clang-format "$@"
fi

Put that file somewhere in your ${PATH} and make it executable; OpenThread will then think you’ve got clang-format version 6 installed. This appears to work without ill effect, so maybe a future release of OpenThread will support it officially.

Jul 182019
 

So, a few months back I had the failure of one of my storage nodes. Since I need three storage nodes to operate but can get away with a single compute node, I did a board shuffle: I evacuated all the virtual machines off lithium, slapped the SSD, HDD and cover from hydrogen in/on it, and it became the new storage node.

Actually, I took the opportunity to upgrade to 2TB HDDs at the same time, as well as adding two new storage nodes (Intel NUCs). I then ordered a new motherboard to get lithium back up again. Again, there was an opportunity to upgrade, so ~$1500 later I ordered a SuperMicro A2SDi-16C-HLN4F: 16 cores, and it takes full-size DDR4 DIMMs, which are much easier to get bits for. It also takes M.2 SATA.

The new board arrived a few weeks ago, but I was heavily snowed under with activities surrounding Brisbane Area WICEN Group and their efforts to assist the Stirling’s Crossing Endurance Club running the Tom Quilty 2019. So it got shoved to the side with the RAM I had purchased to be dealt with another day.

I found time on Monday to assemble the hardware, then had fun and games with the UEFI firmware on this board. Put simply, the legacy BIOS support on this board is totally and utterly broken. The UEFI shell is also riddled with bugs (e.g. ifconfig help describes how to bring up an interface via DHCP or statically, but doing so fails). And of course, PXE is not PXE when UEFI is involved.

I ended up using Ubuntu’s GRUB binary and netboot image to boot-strap the machine, after which I could copy my Gentoo install back in. I now have the machine back in the rack, and whilst I haven’t deployed any VMs to it yet, I will do so soon. I did, however, give it a burn-in test by updating the kernel:

  LD [M]  security/keys/encrypted-keys/encrypted-keys.ko
  MKPIGGY arch/x86/boot/compressed/piggy.S
  AS      arch/x86/boot/compressed/piggy.o
  LD      arch/x86/boot/compressed/vmlinux
ld: arch/x86/boot/compressed/head_64.o: warning: relocation in read-only section `.head.text'
ld: warning: creating a DT_TEXTREL in object.
  ZOFFSET arch/x86/boot/zoffset.h
  OBJCOPY arch/x86/boot/vmlinux.bin
  AS      arch/x86/boot/header.o
  LD      arch/x86/boot/setup.elf
  OBJCOPY arch/x86/boot/setup.bin
  BUILD   arch/x86/boot/bzImage
Setup is 16444 bytes (padded to 16896 bytes).
System is 6273 kB
CRC ca5d7cb3
Kernel: arch/x86/boot/bzImage is ready  (#1)

real    7m7.727s
user    62m6.396s
sys     5m8.970s
lithium /usr/src/linux-stable # git describe
v5.1.11

7m for make -j 17 to build a current Linux kernel is not bad at all!

May 292019
 

It’s been on my TO-DO list now for a long time to wire in some current shunts to monitor the solar input, replace the near useless Powertech solar controller with something better, and put in some more outlets.

Saturday, I finally got around to doing exactly that. I also meant to add a low-voltage disconnect to the rig… I’ve got the parts for this but haven’t yet built or tested it, and I’d have liked to wait until I’d done both, but I needed the power capacity. So I’m running a risk without the over-discharge protection, but I think I’ll take that gamble for now.

Right now:

  • The Powertech MP-3735 is permanently out, the Redarc BCDC-1225 is back in.
  • I have nearly a dozen spare 12V outlet points now.
  • There are current shunts on:
    • Raw solar input (50A)
    • Solar controller output (50A)
    • Battery (100A)
    • Load (100A)
  • The Meanwell HEP-600C-12 is mounted to the back of the server rack, freeing space from the top.
  • The janky spade lugs and undersized cable connecting the HEP-600C-12 to the battery have been replaced with a more substantial cable.

This is what it looks like now around the back:

Rear of the rack, after re-wiring

What difference has this made? I’ll let the graphs speak. This was the battery voltage this time last week:

Battery voltage for 2019-05-22

… and this was today…

Battery voltage 2019-05-29

Chalk-and-bloody-cheese! The weather has been quite consistent, and the solar output has greatly improved just replacing the controller. The panels actually got a bit overenthusiastic and overshot the 14.6V maximum… but not by much thankfully. I think once I get some more nodes on, it’ll come down a bit.

I’ve gone from about 8 hours off-grid to nearly 12! Expanding the battery capacity is an option, and could see the cluster possibly run overnight.

I need to get the two new nodes (the two new NUCs) and the Netgear switch onto battery power. Actually, I’m waiting on a rack-mount kit for the Netgear as I have misplaced the one it came with; failing that, I’ll hack one up out of aluminium angle — it doesn’t look hard!

A new motherboard is coming for the downed node, that will bring me back up to two compute nodes (one with 16 cores), and I have new 2TB HDDs to replace the aging 1TB drives. Once that’s done I’ll have:

  • 24 CPU cores and 64GB RAM in compute nodes
  • 28 CPU cores and 112GB RAM in storage nodes
  • 10TB of raw disk storage

I’ll have to pull my finger out on the power monitoring: all the shunts are in place now, so I have no excuse but to make up those INA-219 boards and get everything going.
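
For what it’s worth, the reading side of that job is simple enough. Here’s a minimal sketch of the idea (not the final code): it assumes the INA-219 boards hang off I2C bus 1 at the default 0x40 address, uses the smbus2 Python library, and guesses at a 100A/75mV shunt, so check the actual shunt ratings before trusting the numbers:

#!/usr/bin/env python3
# Minimal INA-219 read sketch. Assumptions: smbus2 library, I2C bus 1,
# default device address 0x40, 100A/75mV shunt. Uncalibrated.
from smbus2 import SMBus

INA219_ADDR = 0x40
REG_SHUNT_VOLTAGE = 0x01   # signed, 10uV per LSB
REG_BUS_VOLTAGE = 0x02     # bits 15..3, 4mV per LSB

def read_reg16(bus, reg):
    # SMBus returns the low byte first but the INA-219 sends MSB first,
    # so swap the bytes.
    raw = bus.read_word_data(INA219_ADDR, reg)
    return ((raw << 8) | (raw >> 8)) & 0xffff

with SMBus(1) as bus:
    shunt = read_reg16(bus, REG_SHUNT_VOLTAGE)
    if shunt & 0x8000:               # sign-extend negative readings
        shunt -= 0x10000
    shunt_mv = shunt * 0.01          # 10uV/LSB -> mV
    bus_v = ((read_reg16(bus, REG_BUS_VOLTAGE) >> 3) * 4) / 1000.0
    amps = shunt_mv / 75.0 * 100.0   # assumes a 100A/75mV shunt
    print('Bus: %.2fV, shunt: %.2fmV, approx %.1fA' % (bus_v, shunt_mv, amps))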

May 252019
 

So recently I was musing about how I might go about expanding the storage on the cluster. This was largely driven by the fact that the cluster was about 80% full, and thus I needed to increase capacity somehow.

I also was noting that the 5400RPM HDDs (HGST HTS541010A9E680), now with a bit of load, were starting to show signs of not keeping up. The cases I have can take two 2.5″ SATA HDDs, one spot is occupied by a boot drive (120GB SSD) and the other a HDD.

A few weeks ago, I had a node fail. That really did send the cluster into a spin: due to space constraints, things weren’t as “redundant” as I would have liked, and with one disk down, I/O throughput (already rivalling Microsoft Azure levels of slow) took a bad downward turn.

I hastily bought two NUCs, which I’m working towards deploying… with those I also bought two 120GB M.2 SSDs (for boot drives) and two 2TB HDDs (WD Blues).

It was at that point I noticed that some of the working drives were giving off the odd read error, which was throwing Ceph off and causing “inconsistent” placement groups. So I decided I’d actually deploy one of the new drives (the old drive was connected to another node, so I had nothing to lose), and I’ll probably deploy the other shortly. The WD Blue 2TB drives are also 5400RPM, but unlike the 1TB Hitachis I was using before, they have 128MB of cache vs just 8MB.

That should boost the read performance just a little bit. We’ll see how they go. I figured this isn’t mutually exclusive to the plans of external storage upgrades, I can still buy and mod external enclosures like I planned, but perhaps with a bit more breathing room, the immediate need has passed.

I’ve since ordered another 3 of these drives, two will replace the existing 1TB drives, and a third will go back in the NUC I stole a 2TB drive from.

Thinking about the problem more, one big issue is that I don’t have room inside the case for 3 2.5″ HDDs, and the motherboards I have do not feature mSATA or M.2 SATA. I might cram a PCIe SSD in, but those are pricey.

The 120GB SSD is only there as a boot drive. If I could move that off to some other medium, I could possibly move to a bigger SSD in place of the 120GB SSD, maybe a ~500GB unit. These are reasonably priced. The issue is then where to put the OS.

An unattractive option is to shove a USB stick in and boot off that. There are no internal USB ports, but there are two front USB ports in the case that I could rig up to an internal header, so they’re not sticking out like a sore thumb(-drive) begging to be broken off by a side-wards slap. The flash memory in these is usually the cheapest variety, so if I went this route, I’d maybe buy two: one for the root FS, the other for swap/logs.

The other option is a Disk-on-Module. The motherboards provide the necessary DC power connector for running these things, and there’s a chance I could cram one in there. They’re pricey, but not as bad as going NVMe SSDs, and there’s a greater chance of success squeezing this in.

Right now I’ve just bought a replacement motherboard and some RAM for it… this time the 16-core model, and it takes full-size DIMMs. It’ll go back in as a compute node with 32GB RAM (I can take it all the way to 256GB if I want to). Between that and the HDD purchases, I think I’ll let the bank account cool off before I go splurging more. 🙂