May 27, 2017

So these are some completely untested notes on how I might achieve auto-detection of peripherals and I²C devices.

The idea here is that a peripheral can be plugged into any of the 8 ports on the device. It won’t matter, because there’ll be a standard boot-up procedure, consisting of:

  1. Deriving an I²C bus address (possibly by sampling ADC LSB bits to generate a random address) and verifying that there are no clashes by attempting to poll the derived address a few times.
  2. Performing a read of the bus roll-call I²C address to enumerate the IDs of all automatically-addressed I²C devices on the bus. During this step, the GPIOs are monitored: the slave should pulse the GPIO it is connected to as it “calls out” its ID.
  3. Measuring ADC readings at the GPIO pins and observing power-up state (common GPIO pin in high-impedance state)
  4. Pulling the common GPIO pin (MISO) high and seeing which GPIO pins go high.
  5. Pulling the common GPIO pin (MISO) low and seeing which GPIO pins go low.

The peripherals may be connected in a number of ways:

The boot-up procedure is used to determine what is attached, by performing some simple actions and observing the GPIO pin which, thanks to the ’4066s, also doubles as an ADC input.

As for I²C addressing… I do not want to have to program a unique address into each device. The concept here is instead that each device generates its own address, waits for a clear moment on the bus, then tries polling its proposed address to see if it gets an ACK. In the case of a time-out, it waits a random length of time and tries again when the bus is clear, and after a few times, can safely assume it has the address.
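
A minimal, completely untested sketch of that claim-and-back-off loop, in the same spirit as the notes above; the `i2c.probe()` call and the ADC-noise seed are stand-ins for whatever the real firmware provides:

```python
import random
import time

USABLE_ADDRESSES = range(0x08, 0x78)   # 7-bit addresses outside the reserved ranges
PROBE_ATTEMPTS = 3

def claim_address(i2c, adc_noise_seed):
    """Pick a random candidate address and keep probing it; if nobody ACKs
    after a few attempts, assume it is ours.  `i2c.probe(addr)` is a
    placeholder returning True when some other device ACKs that address."""
    rng = random.Random(adc_noise_seed)            # seeded from sampled ADC LSBs
    while True:
        candidate = rng.choice(USABLE_ADDRESSES)
        for _ in range(PROBE_ATTEMPTS):
            if i2c.probe(candidate):               # ACK received: address is taken
                break                              # pick a new candidate
            time.sleep(rng.uniform(0.001, 0.010))  # random back-off, wait for a clear bus
        else:
            return candidate                       # repeated time-outs: claim it
```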

This gives rise to a problem though: knowing what the addresses of the neighbours are.

The idea here is to use a roll-call address. This is common to all units implementing the auto-addressing protocol. When a device hears the roll-call address, it tries to send its ID in reply. When another device with a numerically lower ID sends at the same time, the device sees the collision and tries again on the next byte. As each calls out its ID, it pulses the interrupt pin low (if it uses one).

The result from the master’s perspective is it sees a stream of IDs sorted from numerically lowest to highest, terminated by a stream of 0xff bytes. Direct peers are able to then detect who their peers are.
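
From the master's perspective, reading the roll-call back might look like this sketch, where `read_byte()` is a placeholder for however the bus driver actually clocks bytes out of the roll-call address:

```python
ROLL_CALL_TERMINATOR = 0xFF

def enumerate_ids(read_byte, limit=128):
    """Read IDs from the roll-call address until the all-ones terminator.
    Thanks to the lowest-ID-wins arbitration, they arrive in ascending order."""
    ids = []
    for _ in range(limit):                  # hard cap so a stuck bus can't hang us
        value = read_byte()
        if value == ROLL_CALL_TERMINATOR:
            break
        ids.append(value)
    return ids
```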

Timing of all of this will be the challenge. We want to ensure everyone has a unique ID by the time the first roll-call is performed so we can move on to detecting the dumb devices (LEDs and buttons). This network feature will be a stretch-goal of the project though, so there is plenty of time to think about that.

Apr 29, 2017

I’ve been doing quite a bit of thinking on this. Solid-state works but suffers from voltage drop. Relays work, but the coil needs to be energised constantly (~1W load) unless you look for latching relays, and 30A latching units are hard to come by.

These look promising though. A latching relay is nice since I only need to pulse the coil, not hold it on indefinitely.

That got me thinking: what else can I use to switch power? The ideal for me is something that has practically no voltage drop and remembers its state without power. A latching relay fits this requirement. So does a uniselector or stepping switch. Those were commonplace in telephone exchanges years ago, but have since gone the way of the dodo as semiconductor technology replaced them.

The nice thing about a uniselector though for my application is you can switch between N points, instead of just two like a regular relay. So if I buy a third battery, I can wire it up to the uniselector, and have it switch the compute load between the batteries. Likewise, I can connect a charger to the battery most in need of a charge. MCU measures battery voltages, picks the battery with highest voltage to run the load, and the lowest voltage to get a charge. Easy.
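
The decision logic is trivial; as a sketch (the function name and the `voltages` list of per-battery readings are just illustrative):

```python
def pick_batteries(voltages):
    """Return (index_to_run_the_load, index_to_charge): the fullest battery
    carries the compute load, the emptiest one gets the charger."""
    load_idx = max(range(len(voltages)), key=lambda i: voltages[i])
    charge_idx = min(range(len(voltages)), key=lambda i: voltages[i])
    return load_idx, charge_idx

# Three batteries at 12.9V, 12.1V and 12.6V: run from #0, charge #1.
assert pick_batteries([12.9, 12.1, 12.6]) == (0, 1)
```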

That got me thinking… can I make a uniselector? Well of course I can! I basically need to make a rotary switch that can revolve around indefinitely. The shaft of the switch would then be turned by a DC motor.

The stator of an N-way switch would have N+1 pads, one of which is the “common”; the other N would go to each selection. The common pad would be a 180° arc, the others would each span 180°/N.

The rotor would feature two brushes 180° apart with a wire connecting them. It is free to move vertically, but must rotate with the shaft; a spring between a nut on the end of the shaft and the rotor applies tension to keep the rotor pressed firmly against the stator.

The interface between rotor and stator features some triangular grooves, so that when the rotor is turned, the grooves push it away from the stator, breaking contact. When the rotor passes a critical point, the spring pressing the rotor against these grooves makes the rotor “want” to continue turning until it hits the bottom of the groove, at which point it “sinks” down towards the stator and eventually makes contact again.

Visually, it looks like this:

A small microswitch mounted on the stator could tell us when touch-down takes place, if we use the normally-closed contact to power the motor it will automatically stop the motor when the next position is reached. We then just need to override that open switch by applying a pulse to get things moving.

Power is only needed when we want to change the selector switch. This should be simple enough to fabricate here out of plywood. I don’t have a 3D printer, but you could do it with one of those very easily.

The nature of this switch makes it a break-before-make switch, which has a downside when using it to select which battery to use: there’s a momentary break in power.

I can use diodes to carry the current temporarily: run a high-current diode from each battery to the output via a current sensor. If the current sensor measures current flowing through the diodes whilst a battery is selected, then we know that battery is lower than the others by at least the diode voltage drop, and we should consider switching.

Mar 26, 2017

Yesterday’s post was rather long, but was intended for mostly technical audiences outside of amateur radio.  This post serves as a brain dump of volatile memory before I go to sleep for the night.  (Human conscious memory is more like D-RAM than one might realise.)

Radio interface

So, many in our group use packet radio TNCs already, with a good number using the venerable Kantronics KPC3.  These have a DB9 port that connects to the radio and a second DB25 RS-232 port that connects to the computer.

My proposal: we make an audio interface that either plugs into that DB9 port and re-uses the interface cables we already have, or directly into the radio’s data port.

This should connect to an audio interface on the computer.

For EMI’s sake, I’d recommend a USB sound dongle like this, or these, or this as that audio interface.  I looked on Jaycar and did see this one, which would also work (and burn a hole in your wallet!).

If you walk in and the asking price is more than $30, I’d seriously consider these other options.  Of those options, U-Mart are here in Brisbane; go to their site, order a dongle then tell the site you’ll come and pick it up.  They’ll send you an email with an order number when it’s ready, you just need to roll up to the store, punch that number into a terminal in the shop, then they’ll call your name out for you to collect and pay for it.

Scorptec are in Melbourne, so you’ll have to have items shipped, but are also worth talking to.  (They helped me source some bits for my server cluster when U-Mart wouldn’t.)

USB works over two copper pairs; one delivers +5V and 0V, the other is a differential pair for data.  In short, the USB link should be pretty immune from EMI issues.

At worst, you should be able to deal with it with judicious application of ferrite beads to knock down the common mode current and using a combination of low-ESR electrolytic and ceramic capacitors across the power rails.

If you then keep the analogue cables as short as absolutely possible, you should have little opportunity for RF to get in.

I don’t recommend the TigerTronics Signalink interfaces, they use cheap and nasty isolation transformers that lead to serious performance issues.

Receive audio

For the receive audio, we take the audio from the radio and feed it via a potentiometer to the tip of a 3.5mm TRS (“phone”) plug, with the sleeve going to common.  This plugs into the Line-In or Microphone input on the sound device.

Push to Talk and Transmit audio

I’ve bundled these together for a good reason.  The conventional way for computers to drive PTT is via an RS-232 serial port.

We can do that, but we won’t unless we have to.

Unless you’re running an original SoundBLASTER card, your audio interface is likely stereo.  We can get PTT control via an envelope detector forming a minimal-latency VOX control.

Another 3.5mm TRS plug connects to the “headphone” or “line-out” jack on our sound device and breaks out the left and right channels.

The left and right channels from the sound device should be fed into the “throw” contacts on two single-pole double-throw toggle switches.

The select pin (mechanically operated by the toggle handle) on each switch thus is used to select the left or right channel.

One switch’s select pin feeds into a potentiometer, then to the radio’s input.  We will call that the “modulator” switch; it selects which channel “modulates” our audio.  We can again adjust the gain with the potentiometer.

The other switch first feeds through a small Schottky diode then across a small electrolytic capacitor (to 0V) then through a small resistor before finally into the base of a small NPN signal transistor (e.g. BC547).  The emitter goes to 0V, the collector is our PTT signal.

This is the envelope detector we all know and love from our old experiments with crystal sets.  In theory, we could hook a speaker between the collector and a power source and listen to AM radio stations, but in this case, we’ll be sending a tone down this channel to turn the transistor, and thus our PTT, on.

The switch feeding this arrangement we’ll call the “PTT” switch.

By using this arrangement, we can use either audio channel for modulation or PTT control, or we can use one channel for both.  1200-baud AFSK, FreeDV, etc, should work fine with both on the one channel.
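
To make the channel arrangement concrete, here is a rough sketch of what the software end might emit: the “modulator” channel carries the transmit audio, the “PTT” channel carries a steady tone for the envelope detector to rectify. The sample rate, tone frequency and channel assignment here are arbitrary choices of mine, not part of any particular package:

```python
import numpy as np

FS = 48000            # sound card sample rate, Hz
PTT_TONE = 1000.0     # any audible tone will do; the detector only sees its envelope

def make_stereo_frame(modulation, ptt_active, ptt_channel="right"):
    """Pack mono modulation audio and an optional PTT tone into a stereo buffer.
    `modulation` is a float array in [-1, 1]; returns an (N, 2) array."""
    n = len(modulation)
    t = np.arange(n) / FS
    tone = 0.8 * np.sin(2 * np.pi * PTT_TONE * t) if ptt_active else np.zeros(n)
    left, right = (modulation, tone) if ptt_channel == "right" else (tone, modulation)
    return np.column_stack((left, right))

# One second of 700 Hz test audio with PTT held:
frame = make_stereo_frame(0.5 * np.sin(2 * np.pi * 700 * np.arange(FS) / FS), True)
```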

If we just want to pass through analogue audio, then we probably want modulation separate, so we can hold the PTT open during speech breaks without having an annoying tone superimposed on our signal.

It may be prudent to feed a second resistor into the base of that NPN, running off to the RTS pin on an RS-232 interface.  This will let us use software that relies on RS-232 PTT control, which can be added by way of a USB-RS232 dongle.

The cheap Prolific PL-2303 ones sold by a few places (including Jaycar) will work for this.  (If your software expects a 16550 UART interface on port 0x3f8 or similar, consider running it in a virtual machine.)

Ideally though, this should not be needed, and if added, can be left disconnected without harm.

Software

There are a few “off-the-shelf” packages that should work fine with this arrangement.

AX.25 software

AGWPE on Windows provides a software TNC.  On Linux, there’s soundmodem (which I have used, and presently mirror) and Direwolf.

These shouldn’t need a separate PTT channel; it should be sufficient to make the pre-amble long enough to engage PTT and rely on the envelope detector recognising the packet.

Digital Voice

FreeDV provides an open-source digital voice platform for Windows, Linux and Mac OS X.

This tool also lets us send analogue voice.  Digital voice should be fine, the first frame might get lost but as a frame is 40ms, we just wait before we start talking, like we would for regular analogue radio.

For the analogue side of things, we would want tone-driven PTT.  Not sure if that’s supported, but hey, we’ve got the source code, and yours truly has worked with it, so it shouldn’t be hard to add.

Slow-scan television

The two to watch here would be QSSTV (Linux) and EasyPal (Windows).  QSSTV is open-source, so if we need to make modifications, we can.

Not sure who maintains EasyPal these days, not Eric VK4AES as he’s no longer with us (RIP and thank-you).  Here, we might need an RS-232 PTT interface, which as discussed, is not a hard modification.

Radioteletype

Most is covered by FLDigi.  Modes with a fairly consistent duty cycle will work fine with the VOX PTT, and once again, we have the source, we can make others work.

Custom software ideas

So we can use a few off-the-shelf packages to do basic comms.

  • We need auditability of our messaging system.  Analogue FM, we can just use a VOX-like function on the computer to record individual received messages, and to record outgoing traffic.  Text messages and files can be logged.
  • Ideally, we should have some digital signing of logs to make them tamper-resistant.  Then we can mathematically prove what was sent.
  • In a true emergency, it may be necessary to encrypt what we transmit.  This is fine, we’re allowed to do this in such cases, and we can always turn over our audited logs to authorities anyway.
  • Files will be sent as blocks which are forward-error corrected (or forward-erasure coded).  We can use a block cipher such as AES-256 to encrypt these blocks before FEC.  OpenPGP would work well here rather than doing it from scratch; just send the OpenPGP output using FEC blocks.  It should be possible to pick out the symmetric key used at the receiving end for auditing; this would be done if asked for by Government.  DIY is not necessary, the building blocks are there.
  • Digital voice is a stream, we can use block ciphers but this introduces latency and there’s always the issue of bit errors.  Stream ciphers on the other hand, work by generating a key stream, then XOR-ing that with the data.  So long as we can keep sync in the face of bit errors, use of a stream cipher should not impair noise immunity.
  • Signal fade is a worse problem; I suggest a cleartext (3-bit, 4-bit?) Gray-code sync field for synchronisation (see the sketch after this list).  The receiver can time the length of a fade, estimate the number of lost frames, then use the field to re-sync.
  • There’s more than a dozen stream ciphers to choose from.  Some promising ones are ACHTERBAHN-128, Grain 128a, HC-256, Phelix, Py, the Salsa20 family, SNOW 2/3G, SOBER-128, Scream, Turing, MUGI, Panama, ISAAC and Pike.
  • Most (all?) stream ciphers are symmetric.  We would have to negotiate/distribute a key somehow, either use Diffie-Hellman or send a generated key as an encrypted file transfer (see above).  The key and both encrypted + decrypted streams could be made available to Government if needed.
  • The software should be capable of:
    • Real-time digital voice (encrypted and clear; the latter being compatible with FreeDV)
    • File transfer (again, clear and encrypted using OpenPGP, and using good FEC, files will be cryptographically signed by sender)
    • Voice mail and SSTV, implemented using file transfer.
    • Radioteletype modes (perhaps PSK31, Olivia, etc), with logs made.
    • Analogue voice pass-through, with recordings made.
    • All messages logged and time-stamped, received messages/files hashed, hashes cryptographically signed (OpenPGP signature)
    • Operation over packet networks (AX.25, TCP/IP)
    • Standard message forms with some basic input validation.
    • Ad-hoc routing between interfaces (e.g. SSB to AX.25, AX.25 to TCP/IP, etc) should be possible.
  • The above stack should ideally work on low-cost single-board computers that are readily available and are low-power.  Linux support will be highest priority, Windows/MacOS X/BSD is a nice-to-have.
  • GNU Radio has building blocks that should let us do most of the above.
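
As a tiny illustration of the Gray-code sync field floated above: consecutive frame numbers differ in exactly one bit, so a single bit error in the field puts the receiver at most one count out. This is just the standard binary-reflected Gray code, nothing protocol-specific:

```python
def gray_encode(n):
    """Binary-reflected Gray code: consecutive values differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Decode by XOR-ing together every right shift of the code word."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# A 3-bit sync field counts 0, 1, 3, 2, 6, 7, 5, 4 and then wraps around.
sync_sequence = [gray_encode(i % 8) for i in range(10)]
assert all(gray_decode(gray_encode(i)) == i for i in range(256))
```
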
Mar 25, 2017

So, there’s been a bit of discussion lately about our communications infrastructure. I’ve been doing quite a bit of thinking about the topic.

The situation today

Here in Australia, a lot of people are being moved over to the National Broadband Network… with the analogue fixed line phone (if it hasn’t disappeared already) being replaced with a digital service.

For many, their cellular “mobile” phone is their only means of contact. More than the over-glorified two-way radios that were the pre-cellular car phones used by the social elites in the early 70s, or the slightly more sophisticated and tennis-elbow-inducing AMPS hand-held mobile phones that we saw in the 80s, mobile phones today are truly versatile and powerful hand-held computers.

In fact, they are more powerful than the teen-aged computer I am typing this on. (And yes, I have upgraded it; 1GB RAM, 250GB mSATA SSD, Linux kernel 4.0… this 2GHz P4 still runs, and yes I’ll update that kernel in a moment. Now, how’s that iPhone 3G going, still running well?)

All of these devices are able to provide data communications throughput in the order of millions of bits per second, and outside of emergencies, are generally, very reliable.

It is easy to forget just how much needs to work properly in order for you to receive that funny cat picture.

Mobile networks

One thing that is not clear about the NBN, is what happens when the power is lost. The electricity grid is not infallible, and requires regular maintenance, so while reliability is good, it is not guaranteed.

For FTTP users, battery backup is an optional extra. If you haven’t opted in, then your “land line” goes down when the power goes out.

This is not a fact that people think about. Most will say, “that’s fine, I’ve got my mobile” … but do you? The typical mobile phone cell tower has several hours battery back-up, and can be overwhelmed by traffic even in non-emergencies. They are fundamentally engineered to a cost, thus compromises are made on how long they can run without back-up power, and how much call capacity they carry.

In the 2008 storms that hit The Gap, I had no mobile telephone coverage for 2 days. My Nokia 3310 would occasionally pick up a signal from a tower in a neighbouring suburb such as Keperra, Red Hill or Bardon, and would thus occasionally receive the odd text message… but rarely could muster the effective radiated power to be able to reply back or make calls. (Yes, and Nokia did tell me that internal antennas surpassed the need for external ones. An 850MHz Yagi might’ve worked!)

Emergency Services

Now, you tell yourself, “Well, the emergency services have their own radios…”, and this is correct. They do have their own radio networks. They too are generally quite reliable, but they have their problems. The Emergency Alerting System employed in Victoria was having capacity problems as far back as 2006 (emphasis mine):

A high-priority project under the Statewide Integrated Public Safety Communications Strategy was establishing a reliable statewide paging system; the emergency alerting system. The EAS became operational in 2006 at a cost of $212 million. It provides coverage to about 96 per cent of Victoria through more than 220 remote transmitter sites. The system is managed by the Emergency Services Telecommunications Agency on behalf of the State and is used by the CFA, VICSES and Ambulance Victoria (rural) to alert approximately 37,400 personnel, mostly volunteers, to an incident. It has recently been extended to a small number of DSE and MFB staff.

Under the EAS there are three levels of message priority: emergency, non-emergency, and administrative. Within each category the system sends messages on a first-in, first-out basis. This means queued emergency messages are sent before any other message type and non-emergency messages have priority over administrative messages.

A problem with the transmission speed and coverage of messages was identified in 2006. The CFA expressed concern that areas already experiencing marginal coverage would suffer additional message loss when the system reached its limits during peak events.

To ensure statewide coverage for all pagers, in November 2006 EAS users decided to restrict transmission speed and respond to the capacity problems by upgrading the system. An additional problem with the EAS was caused by linking. The EAS can be configured to link messages by automatically sending a copy of a message to another pager address. If multiple copies of a message are sent the overall load on the system increases.

By February 2008 linking had increased by 25 per cent.

During the 2008 windstorm in Victoria the EAS was significantly short of delivery targets for non-emergency and administrative messages. The Emergency Services Telecommunications Agency subsequently reviewed how different agencies were using the system, including their message type selection and message linking. It recommended that the agencies establish business rules about the use of linking and processes for authorising and monitoring de-linking.

The planned upgrade was designed to ensure the EAS could cope better with more messages without the use of linking.

The upgrade was delayed several times and rescheduled for February 2009; it had not been rolled out by the time of Black Saturday. Unfortunately this affected the system on that day, after which the upgrade was postponed indefinitely.

I can find mention of this upgrade taking place around 2013. From what I gather, it did eventually happen, but it took a roasting from mother nature to make it happen. The lesson here is that even purpose built networks can fall over, and thus particularly in major incidents, it is prudent to have a back-up plan.

Alternatives

For the lay person, CB radio can be a useful tool for short-range (longer-than-yelling-range) voice communications. UHF CB will cover a few kilometres in urban environments and can achieve quite long distances if good line-of-sight is maintained. They require no apparatus license, and are relatively inexpensive.

It is worth having a couple of cheap ones, a small torch and a packet of AAA batteries (stored separately) in the car or in a bag you take with you. You can’t use them if they’re in a cupboard at home and you’re not there.

The downside with the hand-helds, particularly the low end ones, is effective radiated power. They will have small “rubber ducky” antennas, optimised for size, and will typically have limited transmit power, some can do the 5W limit, but most will be 1W or less.

If you need a bit more grunt, a mobile UHF CB set and magnetic mount antenna could be assembled and fitted to most cars, and will provide 5W transmit power, capable of about 5-10km in good conditions.

HF (27MHz) CB can go further, and with 12W peak envelope power, it is possible to get across town with one, even interstate or overseas when conditions permit. These too, are worth looking at, and many can be had cheaply second-hand. They require a larger antenna however to be effective, and are less common today.

Beware of fakes though… A CB radio must meet “type approval”, just being technically able to transmit in that band doesn’t automatically make it a CB, it must meet all aspects of the Citizens Band Radio Service Class License to be classified a CB.

If it does more than 5W on UHF, it is not a UHF CB. If it mentions a transmit range outside of 476-478MHz, it is not a UHF CB.  Programming it to do UHF channels doesn’t change this.

Similarly, if your HF CB radio can do 26MHz (NZ CB, not Australia), uses FM instead of SSB/AM (UK CB, again not Australia), does more than 12W, or can do 28-30MHz (10m amateur), it doesn’t qualify as being a CB under the class license.

Amateur radio licensing

If you’ve got a good understanding of high-school mathematics and physics, then a Foundation amateur radio license is well within reach.  In fact, I’d strongly recommend it for anyone doing first year Electrical Engineering … as it will give you a good practical grounding in electrical theory.

Doing so, you get to use up to 10W of power (double what UHF CB gives you; 3dB can matter!) and access to four HF, one VHF and one UHF band using analogue voice or hand-keyed Morse code.

You can then use those “CB radios” that sell on eBay/DealExtreme/BangGood/AliExpress…etc, without issue, as being un-modified “commercial off-the-shelf”, they are acceptable for use under the Foundation license.

Beyond Voice: amateur radio digital modes

Now, all good and well being able to get voice traffic across a couple of suburban blocks. In a large-scale disaster, it is often necessary to co-ordinate recovery efforts, which often means listings of inventory and requirements, welfare information, etc, needs to be broadcast.

You can broadcast this by voice over radio… very slowly!

You can put a spreadsheet on a USB stick and drive it there. You can deliver photos that way too. During an emergency event, roads may be impassable, or they may be congested. If the regular communications channels are down, how does one get such files across town quickly?

Amateur radio requires operators who have undergone training and hold current apparatus licenses, but this service does permit the transmission of digital data (for standard and advanced licensees), with encryption if needed (“intercommunications when participating in emergency services operations or related training exercises”).

Amateur radio is by its nature, experimental. Lots of different mechanisms have been developed through experiment for intercommunication over amateur radio bands using digital techniques.

Morse code

The oldest by far is commonly known as “Morse code”, and while it is slower than voice, it requires simpler transmitting and receiving equipment, and concentrates the transmitted power over a very narrow bandwidth, meaning it can be heard reliably at times when more sophisticated modes cannot. However, not everybody can send or receive it (yours truly included).

I won’t dwell on it here, as there are more practical mechanisms for transmitting lots of data, but have included it here for completeness. I will point out though, due to its simplicity, it has practically no latency, thus it can be faster than SMS.

Radio Teletype

Okay, there are actually quite a few modes that can be described in this manner, and I’ll use this term to refer to the family of modes. Basically, you can think of it as two dumb terminals linked via a radio channel. When you type text into one, that text appears on the other in near real-time. The latency is thus very low, on par with Morse code.

The earliest of these is the RTTY mode, but more modern incarnations of the same idea include PSK31.

These are normally used as-is. With some manual copying and pasting of pieces of text at each end, it is possible to encode other forms of data as short runs of text and send files in short hand-crafted “packets”, which are then hand-deconstructed and decoded at the far end.

This can be automated to remove the human error component.

The method is slow, but these radioteletype modes are known for being able to “punch through” poor signal conditions.

When I was studying web design back in 2001, we were instructed to keep all photos below 30kB in size. At the time, dial-up Internet was common, and loading times were a prime consideration.

Thus instead of posting photos like this, we had to shrink them down, like this. Yes, some detail is lost, but it is good enough to get an “idea” of the situation.

The former photo is 2.8MB, the latter is 28kB. Via the above contrived transmission system, it would take about 20 minutes to transmit.
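
Working backwards from those figures (a sanity check, not a measurement), the implied throughput is in the low tens of bytes per second, which makes the full-size photo hopeless over the same link:

```python
small_size = 28e3        # bytes, the shrunken photo
large_size = 2.8e6       # bytes, the original photo
transfer_time = 20 * 60  # seconds, the ~20 minute figure quoted above

rate = small_size / transfer_time          # ~23 bytes/second effective
large_time = large_size / rate             # the same rate applied to the big file

print(f"effective rate : {rate:.1f} B/s")
print(f"2.8MB photo    : {large_time / 3600:.1f} hours")   # ~33 hours
```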

The method would work well for anything that is text, particularly simple spread sheets, which could be converted to Comma Separated Values to strip all but the most essential information, bringing file sizes down into realms that would allow transmission times in the order of 5 minutes. Text also compresses well, thus in some cases, transmission time can be reduced.

To put this into perspective, a drive from The Gap where that photo was taken, into the Brisbane CBD, takes about 20 minutes in non-peak-hour normal circumstances. It can take an hour at peak times. In cases of natural disaster, the roads available to you may be more congested than usual, thus you can expect peak-hour-like trip times.

Radio Faximile and Slow Scan Television

This covers a wide variety of modes, ranging from the ancient, like Hellschreiber (which has its origins in the German military back in World War II), through various analogue slow-scan television modes, to the modern digital slow-scan television.

This allows the transmission of photos and visual information over radio. Some systems like EasyPAL and its ilk (based on HamDRM, a variant of Digital Radio Mondiale) are in fact general-purpose modems for transmitting files, and thus can transmit non-graphical data too.

Transmit times can vary, but the analogue modes take between 30 seconds and two minutes depending on quality. For the HamDRM-based systems, transmit speeds vary from 86Bps up to 795Bps depending on the settings used.

Packet Radio

Packet radio is the concept of implementing packet-switched networks over radio links. There are various forms of this, the most common in amateur radio being PACTOR, WINMOR, and the 300-baud AFSK, 1200-baud AFSK and 9600-baud FSK packet modes.

300-baud AFSK is normally used on HF links, and hails from experiments using surplus Bell 103 modems modified to work with radio. Similarly, on VHF and UHF FM radio, experiments were done with surplus Bell 202 modems, giving rise to the 1200-baud AFSK mode.

The 9600-baud FSK mode was the invention of James Miller G3RUH, and was one of the first packet radio modes actually developed by radio amateur operators for use on radio.

These are all general-purpose data modems, and while they can be used for radioteletype applications, they are designed with computer networking in mind.

These feature facilities like automatic repeating of lost messages, and in some cases support forward error correction.  PACTOR/WINMOR is actually used with the Winlink radio network, which provides email services.

The 300-baud, 1200-baud and 9600-baud versions generally use a networking protocol called AX.25, and by configuring stations with multiple such “terminal node controllers” (modems) connected and appropriate software, a station can operate as a router, relaying traffic received via one radio channel to a station that’s connected via another, or to non-AX.25 stations on Winlink or the Internet.

It is well suited to automatic stations, operating without human intervention.

AX.25 packet and PACTOR I are open standards; the later PACTOR modems are proprietary devices produced by SCS in Germany.

AX.25 packet is capable of transmit speeds between 15Bps (300 baud) and 1kBps (9600 baud). PACTOR varies between 5Bps and 650Bps.

In theory, it is possible to develop new modems for transmitting AX.25, the HamDRM modem used for slow-scan television and the FDMDV modem used in FreeDV being good starting points as both are proven modems with good performance.

These simply require an analogue interface between the computer sound card and radio, and appropriate software.  Such an interface made to link a 1200-baud TNC to a radio could be converted to link to a low-cost USB audio dongle for connection to a computer.

If someone is set up for 1200-baud packet, setting up for these other modes is not difficult.

High speed data

Going beyond standard radios, amateur radio also has some very high-speed data links available. D-Star Digital Data operates on the 23cm microwave band and can potentially transmit files at up to 16KBps, which approaches ADSL-lite speeds. Transceivers such as the Icom ID-1 provide this via an Ethernet interface for direct connection to a computer.

General Electric have a similar offering for industrial applications that operates on various commercial bands, some of which can reach amateur frequencies, thus would be usable on amateur bands. These devices offer transmit speeds up to 8KBps.

A recent experiment by amateurs using off-the-shelf 50mW 433MHz FSK modules and Realtek-based digital TV tuner receivers produced a high-speed data link capable of delivering data at up to 14KBps using a wideband (~230kHz) radio channel on the 70cm band.  They used it to send high definition photos from a high-altitude balloon.

The point?

We’ve got a lot of tools at our disposal for getting a message through, and collectively, 140 years of experience at our disposal. In an emergency situation, that means we have a lot of different options, if one doesn’t work, we can try another.

No, a 1200-baud VHF packet link won’t stream 4k HD video, but it has minimal latency and will take less than 20 minutes to transmit a 100kB file over distances of 10km or more.

A 1kB email will be at the other end before you can reach for your car keys.  Further experimentation and development means we can only improve.  Amateur radio is far from obsolete.

Feb 11, 2017

Okay, so now the searing heat of the day has dissipated a bit, I’ve dragged the cluster out and got everything running.

No homebrew charge controller, we have:

240V mains → 20A battery charger → battery → volt/current meter → cluster.

Here’s the volt meter, showing the battery voltage mid-charge:

… and this is what the IPMI sensor readings display…

Okay, the 3.3V and 5V rails are lower than I’d expect, but that’s the duty of the PSU/motherboard, not my direct problem.

The nodes also vary a bit… here’s another one. Same set-up, and this time I’ll show the thresholds (which are the same on all nodes):

Going forward… I need to get the cluster solar ready. Running it all from 12V is half the story, but I need to be able to manage switching between mains and solar.

The battery I am using at the moment is a second-hand 100Ah (so more realistically ~70Ah) AGM cell battery. I have made a simple charger for my LiFePO₄ packs that I use on the bicycle: there I just use an LM2576 switchmode regulator to put out a constant voltage at 3A and leave the battery connected to “trickle charge”. Crude, but it works. When at home, I use a former IBM laptop power supply to provide 16V 3A… when camping I use a 40W “12V” solar panel. I’m able to use either to charge my batteries.

The low output means I can just leave it running. 3A is well below the maximum inrush current capacity of the batteries I use (typically 10 or 20Ah) which can often handle more than 1C charging current.

Here, I’m using an off-the-shelf charger made by Xantrex, and it is significantly more sophisticated, using PWM control, multi-stage charging, temperature compensation, etc. It also puts out a good bit more power than my simple charger.

Consequently I see a faster rise in the voltage, and that is something my little controller will have to expect.

In short, I am going to need a more sophisticated state machine… one that leaves the cut-off voltage decision to the charger. One that notices the sudden drop from ~15V to ~14V and shuts off or switches only after it remains at that level for some time (or gets below some critical limit).
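
Roughly the kind of state machine I have in mind, as an untested sketch; the voltage thresholds and hold-off time are placeholders to be tuned against what the Xantrex actually does:

```python
ABSORPTION_V = 14.8     # above this we assume the charger is in its high-voltage stage
FLOAT_V = 14.2          # a drop below this, after absorption, suggests charging is done
CRITICAL_V = 11.0       # below this, switch immediately (placeholder)
HOLD_SECONDS = 300      # how long the drop must persist before we act

class ChargeWatcher:
    """Watch battery voltage samples; decide when the charger has stepped
    back from ~15V absorption to ~14V float and stayed there long enough."""
    def __init__(self):
        self.seen_absorption = False
        self.low_since = None

    def update(self, volts, now):
        if volts < CRITICAL_V:
            return "switch"                      # don't wait around at critical voltage
        if volts >= ABSORPTION_V:
            self.seen_absorption = True
            self.low_since = None
        elif self.seen_absorption and volts < FLOAT_V:
            if self.low_since is None:
                self.low_since = now             # start timing the drop
            elif now - self.low_since >= HOLD_SECONDS:
                return "switch"                  # the drop has persisted: charger is done
        else:
            self.low_since = None
        return "wait"
```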

Sep 10, 2016

So I’m doing some more thought exercises on this project. Yes, been a long time since I posted anything and I only just got around to having a look at the report into Phillip Hughes’ death.

In this case, the ball managed to strike a vertebral artery. There are typically two of these, and they flow to the basilar artery which feeds the brain stem. The other major route is via the carotid arteries. Both of these sets branch out into much finer blood vessels within the brain.

That, to my mind, would have triggered a sharp impulse in the blood pressure in those arteries. The arteries are somewhat elastic, but they probably don’t appreciate that level of abuse: they have pretty thin walls in that part of the body and are quite small in diameter as they branch out from the brain stem. In the report, they talk about “traumatic basal subarachnoid haemorrhage”.

According to Wikipedia, that means blood is leaking out of vessels and into the protective layers that surround the brain. We often joke about the smoke escaping from an electronic component when we put too much current through it.

This would seem to be the blood vessel equivalent: a minor artery or three went “pop”.

It would appear that this is definitely one mode of failure in the brain that should be considered when designing protective gear. It’s particularly an issue for soldiers with IEDs. In their case, the helmet can work “perfectly”, but it’s the pressure wave hitting the body causing pressure up the blood vessels in the neck that do damage, as well as pressure waves in the cerebral fluid itself.

The other mode of failure seems to be in the neurons and their connections.

Thinking about this latter point, we know that the brain is a very delicate organ when taken out of its casing. It is described as having the consistency of tofu in some articles.

This makes me wonder whether the brain should be considered a solid at all. To my way of thinking, perhaps a very dense mesh of interconnected nodes is a more accurate representation of what’s going on.

As the head moves through space (which is what seems to do much of the damage), Newton’s First Law does its worst. If you consider a single node in this mesh travelling through a linear path, the node experiences a tension force from each connection to its neighbours.

These connections have a limit to the amount of tension they can withstand. When they break, that’s when brain damage takes place.

Now, under constant velocity, or at rest, these tension forces are within tolerance, nothing bad happens. However, when a sudden blow is experienced, it is this additional tension as individual nodes get jostled around from their own inertia and the transferred tensional forces from neighbouring nodes that may trigger these connections to break.
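
Purely as a toy illustration of that reasoning (none of these numbers are calibrated to real tissue), a one-dimensional chain of masses joined by springs shows how suddenly stopping one end spikes the link tension well above its steady-state value:

```python
# Toy 1-D model: a chain of point masses joined by springs; the left end stops dead.
# Every number here is arbitrary; the interesting part is the transient tension spike.
N, MASS, K, DT, REST = 10, 0.01, 50.0, 1e-4, 0.01   # nodes, kg, N/m, s, m

x = [i * REST for i in range(N)]     # nodes start at their rest spacing
v = [2.0] * N                        # ...all moving at 2 m/s
v[0] = 0.0                           # ...except node 0, which has just stopped
peak_tension = 0.0

for _ in range(2000):
    force = [0.0] * N
    for i in range(N - 1):
        tension = K * ((x[i + 1] - x[i]) - REST)   # positive when the link is stretched
        peak_tension = max(peak_tension, abs(tension))
        force[i] += tension                        # the link pulls its two ends together
        force[i + 1] -= tension
    for i in range(1, N):                          # node 0 is held stopped
        v[i] += force[i] / MASS * DT
        x[i] += v[i] * DT

print(f"peak link tension: {peak_tension:.2f} N")  # far above the initial zero tension
```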

Modelling both of these situations is an interesting problem. The latter could be done with strain gauges, but they’d have to be sensitive ones. How sensitive? Not sure. 2AM isn’t a great hour to be answering such questions.

As for the vessels, perhaps piezo transducers at differing locations might give some insight into how much pressure is in different parts.

It may also be possible to have a tree-structure of tubes representing the arteries, and measure how much they expand as the pressure wave hits by measuring how much light is blocked between an LED and a photodiode with the “artery” running in the path: the fatter it is, the less light hits the diode.

More food for thought I guess. 🙂

May 01, 2016

So, after putting aside the charge controller for now, I’ve taken some time to see if I can get the software side of things into shape.

In the midst of my development, I found a small wiring fault that was responsible for blowing a couple of fuses. A small nick in the sheath of the positive wire in a power cable was letting the crimp part of a DC barrel connector contact +12V. A tweak of that crimp and things are back to normal. I’ve swapped all the 10A fuses for 5A ones, since the regulators are only rated at 7.5A.

The VLANs are assigned now, and I have bonding going between the two pairs of Ethernet devices. In spite of the switch only supporting 4 LAGs, it seems fine with me doing LACP on effectively 10 LAGs. I’ll see how it goes.

The switch has 5 ports spare after plugging in all 5 nodes and a 16-port switch for the IPMI subnet. One will be used for a management interface so I can plug a laptop in, and the others will be paired with LACP for linking to my two existing Cisco SG200-8s.

One of the goals of this project is to try and push the performance of Ceph. In the office, we tried bare Ceph, and found that, while it’s fine for sequential I/O, it suffers a bit with random read/writes, and Windows-based HyperV images like to do a lot of random reads/writes.

Putting FlashCache in the mix really helped, but I note now, it’s no longer maintained. EnhanceIO had only just forked when I tried FlashCache, now it seems that’s the official successor.

There are two alternatives to FlashCache/EnhanceIO: bcache and dm-cache.

I’ll rule out bcache now as it requires the backing image be “formatted” for use. In other words, the backing image is not a raw image, but some proprietary (to bcache) format. This isn’t unworkable, but it raises concerns with me about portability: if I migrate a VM, do I need to migrate its cache too, or is it sufficient to cleanly shut down and detach the bcache device before re-assembling it on the new host?

By contrast, dm-cache and EnhanceIO/FlashCache work with raw backing images, making them much more attractive. Flush the cache before migration or use writethru mode, and all should be fine. dm-cache does however require a separate metadata device: messy, but not unworkable. We can provision the cache-related devices we need using LVM2, and use the kernel-mode Rados block device as our backing image.
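
One practical wrinkle is sizing that metadata device. If I recall the kernel's dm-cache documentation correctly, it wants roughly 4MB plus 16 bytes per cache block; treat that formula as an assumption and check the docs for your kernel, but a rough estimate looks like:

```python
def dmcache_metadata_bytes(cache_bytes, block_bytes=256 * 1024):
    """Rough metadata size for dm-cache: ~4MiB + 16 bytes per cache block
    (figure taken from the kernel's device-mapper cache documentation)."""
    nr_blocks = cache_bytes // block_bytes
    return 4 * 1024 * 1024 + 16 * nr_blocks

# e.g. a 64GiB SSD cache carved into 256KiB blocks:
size = dmcache_metadata_bytes(64 * 1024**3)
print(f"{size / 1024**2:.1f} MiB of metadata")   # ~8 MiB
```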

So I think my caching subsystem is a two-horse race: dm-cache or EnhanceIO. I guess we’ll give them a try and see how they go.

For those following along at home, if you’re running kernel >4.3, you might want to use this fork of EnhanceIO due to changes in the kernel block I/O layer.

To manage the OpenNebula master node, I’ve installed corosync/pacemaker. Normally these are used with DRBD, however I figure Ceph can fulfil that role. The concepts are similar: it’s a shared block device. I’m not sure if it’ll be LXC, Docker or a VM at this point that “contains” the server, but whatever it is, it should be possible for it to have its root FS and data on Ceph.

I’m leaning towards LXC for this. Time for some more experimentation.

Apr 27, 2016

It seems good old “common courtesy” is absent without leave, as is “common sense”. Some would say it’s been absent for most of my lifetime, but to me it seems particularly so of late.

In particular, where it comes to the safety of one’s self, and to others, people don’t seem to actually think or care about what they are doing, and how that might affect others. To say it annoys me is putting it mildly.

In February, I lost a close work colleague in a bicycle accident. I won’t mention his name, as I do not have his family’s permission to do so.

I remember arriving at my workplace early on Friday the 12th before 6AM, having my shower, and about 6:15 wandering upstairs to begin my work day. Reaching my desk, I recall looking down at an open TS-7670 industrial computer and saying out aloud, “It’s just you and me, no distractions, we’re going to get U-Boot working”, before sitting down and beginning my battle with the machine.

So much for the “no distractions” however. At 6:34AM, the office phone rings. I’m the only one there and so I answer. It was a social worker looking for “next of kin” details for a colleague of mine. Seems they found our office details via a Cab Charge card they happened to find in his wallet.

Well, first thing I do is start scrabbling for the office directory to get his home number so I can pass the bad news onto his wife only to find: he’s only listed his mobile number. Great. After getting in contact with our HR person, we later discover there isn’t any contact details in the employee records either. He was around before such paperwork existed in our company.

Common sense would have dictated that one carry an “in case of emergency” number on a card in one’s wallet! At the very least let your boss know!

We find out later that morning that the crash happened on a particularly sharp bend of the Go Between Bridge, where the offramp sweeps left to join the Bicentennial bikeway. It’s a rather sharp bend that narrows suddenly, with handlebar-height handrails running along its length and “Bicycle Only” signs clearly signposted at each end.

Common sense and common courtesy would suggest you slow down on that bridge as a cyclist. Common sense and common courtesy would suggest you use the other side as a pedestrian. Common sense would question the utility of hand rails on a cycle path.

In the meantime our colleague is still fighting for his life, and we’re all holding out hope for him as he’s one of our key members. As for me, I had a network to migrate that weekend. Two of us worked the Saturday and Sunday.

Sunday evening, emotions hit me like a freight train as I realised I was in denial, and realised the true horror of the situation.

We later find out on the Tuesday, our colleague is in a very bad way with worst-case scenario brain damage as a result of the crash. From shining light to vegetable, he’d never work for us again.

Wednesday I took a walk down to the crash site to try and understand what happened. I took a number of photographs, and managed to speak to a gentleman who saw our colleague being scraped off the pavement. Even today, some months later, the marks on the railings (possibly from handlebar grips) and a large blood smear on the path itself, can still be seen.

It was apparent that our colleague had hit this railing at some significant speed. He wasn’t obese, but he certainly wasn’t small, and a fully grown adult does not ricochet off a metal railing and slide face-first for over a metre without some serious kinetic energy involved.

Common sense seems to suggest the average cyclist goes much faster than the 20km/hr collision the typical bicycle helmet is designed for under AS/NZS 2063:2008.

I took the Thursday and Friday off as time-in-lieu for the previous weekend, as I was an emotional wreck. The following Tuesday I resumed cycling to work, and that morning I tried an experiment to reproduce the crash conditions. The bicycle I ride wasn’t that much different to his, both bikes having 29″ wheels.

From what I could gather that morning, it seemed he veered right just prior to the bend then lost control, listing to the right at what I estimated to be about a 30° angle. What caused that? We don’t know. It’s consistent with him dodging someone or something on the path — but this is pure speculation on my part.

Mechanical failure? The police apparently have ruled that out. There’s not much in the way of CCTV cameras in the area, plenty on the pedestrian side, not so much on the cycle side of the bridge.

Common sense would suggest relying on a cyclist to remember what happened to them in a crash is not a good plan.

In any case, common sense did not win out that day. Our colleague passed away from his injuries a little over a fortnight after his crash, aged 46. He is sadly missed.

I’ve since made a point of taking my breakfast down to that point where the bridge joins the cycleway. It’s the point where my colleague had his last conscious thoughts.

Over the course of the last few months, I’ve noticed a number of things.

Most cyclists sensibly slow down on that bend, but a few race past at ludicrous speed. One morning, I nearly thought there’d be an encore performance as two construction workers on CityCycle bikes, sans helmets, came careening around the corner, one almost losing it.

Then I see the pedestrians. There’s a well lit, covered walkway, on the opposite side of the bridge for pedestrian use. It has bench seats, drinking fountains, good lighting, everything you’d want as a pedestrian. Yet, some feel it is not worth the personal exertion to take the 100m extra distance to make use of it.

Instead, they show a lack of courtesy by using the bicycle path. Walking on a bicycle path isn’t just dangerous to the pedestrian like stepping out onto a road, it’s dangerous for the cyclist too!

If a car hits a pedestrian or cyclist, the damage to the occupants of the car is going to be minimal to nonexistent, compared to what happens to the cyclist or pedestrian. If a cyclist or motorcyclist hits a pedestrian however, the rider surrounds the frame rather than the other way around, and thus hits the ground first. Possibly at significant speed.

Yet, pedestrians think it is acceptable to play Russian roulette with their own lives and the lives of every cycle user by continuing to walk where it is not safe for them to go. They’d never do it on a motorway, but somehow a bicycle path is considered fair game.

Most pedestrians are understanding, I’ve politely asked a number to not walk on the bikeway, and most oblige after I point out how they get to the pedestrian walkway.

Common sense would suggest some signage on where the pedestrian can walk would be prudent.

However, I have had at least two that ignored me, one this morning telling me to “mind my own shit”. Yes mate, I am minding “my own shit” as you put it: I’m trying to stop the hypothetical me from possibly crashing into the hypothetical you!

It’s this sort of reaction that seems symbolic of the whole “lack of common courtesy” that abounds these days.

It’s the same attitude that seems to hint to people that it’s okay to park a car so that it blocks the footpath: newsflash, it’s not! I know of one friend of mine who frequently runs into this problem. He’s in a wheelchair — a vehicle not known for its off-road capabilities or ability to squeeze past the narrow gap left by a car.

It seems the drivers think it’s acceptable to force footpath users of all types, including the elderly, the young and the disabled, to “step out” onto the road to avoid the car that they so arrogantly parked there. It makes me wonder how many people subsequently become disabled as a result of a collision caused by them having to step around such obstacles. Would the owner of the parked car be liable?

I don’t know, I’m no lawyer, but I should think they should carry some responsibility!

In Queensland, pedestrians have right-of-way on the footpath. That includes cyclists: cyclists of all ages are allowed there subject to council laws and signage — but once again, they need to give way. In other words, don’t charge down the path like a lunatic, and don’t block it!

No doubt, the people who I’m trying to convince are too arrogant to care about the above, and what their actions might have on others. Still, I needed to get the above off my chest!

Nothing will bring my colleague back, a fact that truly pains me, and I’ve learned some valuable lessons about the sort of encouragement I give people. I regret not telling him to slow down, 5 minutes longer wouldn’t have killed him, and I certainly did not want a race! Was he trying to race me so he could keep an eye on me? I’ll never know.

He was a bright person though; it is proof that even the intelligent among us are prone to doing stupid things. With thrills come spills, and one might question whether one’s commute to work is the appropriate venue for such thrills, or whether those can wait for another time.

I for one have learned that it does not pay to be the hare, thus I intend to just enjoy the ride for what it is. No need to rush, common sense tells me it just isn’t worth it!

Apr 09, 2016

One elephant in the room is how I’m going to store the system whilst in operation.

The obvious solution is some sort of metal cabinet with provision for 19″ rack mounting and DIN rail equipment. Question is, how big?

A big consideration here is thermal matters. When going flat out, there will be 100-150W of heat being dissipated in there. So room for convection currents is a must!

Some decent fans on the top to suck the hot air out would also be a good idea. Blowing up so that dust doesn’t get sucked down into the works.

I figured I’d sit everything sort-of in situ. I figured out that the DIN rail mounts don’t have to go on the bottom, with these cases, if you remove the front panel there’s four holes for mounting those same DIN rail mounts on the front. So that’s what I’ve done. I’ve now got a DIN rail spare for future expansion.

If I try to pack everything up as densely as possible (not wise), this is what it looks like:

There’s room there for possibly one more node to squeeze in there. I’d think that’d be pushing it however. 5 is probably a good number, meaning we can space the units out a bit to allow them to draw air in via the gaps.

On top of the units I have my two switches. The old Netcomm 24-port switch was retired from our network when a lightning strike to a neighbour’s tree took out an 8-port switch, my Yaesu FT-897D radio transceiver, some ports on a wireless 3G router/switch, and an ADSL router. It also damaged some ports on the big Netcomm switch, so in short, I know it has issues.

Replacing its 3.3V PSU with one that steps down from 12V would cost me the price of a 16-port 10/100Mbps switch brand new.

When we replaced the switch (paid for by insurance) we decided to buy a 8-port and 16-port switch. The 16-port switch, retired due to an upgrade to gigabit, is sitting on top, and takes 12V 1A input. It’ll be perfect for the IPMI VLAN, where speed is not important. It also accepts the DC plugs I bought by mistake.

The 8-port one takes 7.5V 1A, so a little less convenient for this task, I’d need to make a DC-DC converter for it. Maybe later if this works.

So considering a cabinet for this, we have:

  • 5 nodes measuring 190mm in height: ~5 RU
  • A 24-port switch: 1 RU
  • A 16-port switch: 1 RU
  • Some power distribution electronics: 3RU

Yes, the battery and its charger is external to the cabinet.

Judging from this, the cabinet probably needs to be a 10RU or 12RU cabinet to give us space for mounting everything cleanly and to ensure good ventilation. Using 8-port IPMI switches and 24+2-port comms switches, that leaves us with sufficient port space for the 5 nodes and gives us one port left for a small in-chassis monitoring device and 4 ports left on the main switch for an uplink trunk.
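
For the record, that tally works out as follows, assuming the standard 44.45mm rack unit and treating the 190mm node height as one stacked block:

```python
import math

RU_MM = 44.45                                  # one rack unit in millimetres

contents = {
    "nodes (190mm, stacked)": math.ceil(190 / RU_MM),   # ~5 RU
    "24-port switch": 1,
    "16-port switch": 1,
    "power distribution": 3,
}
total = sum(contents.values())
print(contents, "->", total, "RU")             # 10 RU before any breathing room
```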

You could conceptually then consider these as homogeneous building blocks for larger networks, using Ceph’s CRUSH maps to ensure copies get distributed amongst these “cabinets”.

Apr 09, 2016

So, I’ve been doing a bit of research about how I can stabilise the battery voltage which will drift between around 11V and 14.6V. It’s a deep-cycle type battery, so it’s actually capable of going down to 10V, but I really don’t want to push it that far.

Once I get below 12V, that’s the time to signal to the VM hosts to start hibernating the VMs and preparing for a blackout, until such time as the voltage picks back up again.

The rise above 13.5V is a challenge due to the PicoPSU limitations. @Vlad Conut rightly pointed out that the M3-ATX-HV PSUs sold by the same company would have been a better choice. For about $20 more each, or an extra $100 for the cluster, I’d have something that’d work from 6-30V. I’d still have to solve the problem with the switch, but it’d just be that one device, not 6.

Maybe it was because they were out of stock that I went the PicoPSU route, I also wasn’t sure about power demands, I knew the CPU needed 20W, but wasn’t sure about everything else. So I over-dimensioned everything. Hindsight is 20:20.

One option I considered was a regulator in front of each node. I had mentioned the LM7812 because I knew of it. I also knew it was a crap choice for this task: the 1.5V drop with a 5A load would result in about 7.5W dissipated thermally. So 20W would jump to nearly 28W — not good.

That of course assumes a 7812 would handle 5A, which it won’t.

LDOs were the way to go for anything linear, otherwise I was looking at a switchmode power supply. The LM2576 has similar requirements to the LM7812, but is much more efficient being a buck converter. If 1.5V was fine, I’d be looking for a 5A-capable equivalent.

The other option would be to have one single power supply regulate power for all nodes. I mentioned in my previous log about the Redarc DC-DC power supply, and that is certainly still worthy of consideration. It is a single point of failure however, but then again, Redarc aren’t known for making crap, and the unit has a 2 year warranty anyway. I’d have downtime, but would not lose data if this went down.

@K.C. Lee pointed me to one LDO that would be a good candidate though, and is cheap enough to be worth the experiment: the Micrel MIC29750. 7.5A current, and comes in an adjustable form up to 16V. I’d imagine if I set this near 13.5V, it’d dissipate maybe 2.5W max at 5A, or 1W at 2A. Much better.
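
Those dissipation figures are just P = (Vin − Vout) × I; a quick check, with the 14V input being an assumed mid-charge figure rather than anything measured:

```python
def linear_dissipation(v_in, v_out, current):
    """Power burnt in a linear regulator: the full voltage drop times the load."""
    return (v_in - v_out) * current

# The LM7812 case from above: 1.5V of drop at 5A.
print(linear_dissipation(13.5, 12.0, 5.0))   # 7.5 W

# The MIC29750 set to 13.5V output, with the battery assumed to sit around 14V:
print(linear_dissipation(14.0, 13.5, 5.0))   # 2.5 W at 5A
print(linear_dissipation(14.0, 13.5, 2.0))   # 1.0 W at 2A
```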

Not as good as Redarc’s solution of course, and that’s still an option, but cheap enough to try out.