Homebrew

Homebrewing notes

Cataloguing Chaos

So, these last two years I’ve been trying to keep multiple projects on the go, and then others come along and pile their own projects on top. It kinda makes a mess of one’s free time, including for things like keeping on top of where things have been put.

COVID-19 has not helped here: it’s meant I’ve lugged home a lot of gear that belongs to, or at, my workplace so I can use it here. This all needs tracking to ensure nothing is lost.

Years ago, I threw together a crude parts catalogue system. It was built on Django, django-mptt and PostgreSQL, and basically abused the admin part of Django to manage electronic parts storage.

I later re-purposed some of its code for an estate database for my late grandmother: I just wrote a front-end so that members of the family could be given login accounts, and “claim” certain items of the estate. In that sense, the concept was extremely powerful.

The overarching principle of how both these systems worked is that you had “items” stored within “locations”. Locations were in a tree structure (hence django-mptt) where a location could contain further “locations”… e.g. a root-level location might be a bedroom, within that might be a couple of wardrobes and chests of drawers, and there might be containers within those.

You could nest locations as deeply as you liked. In my parts database I didn’t consider rooms, but I’d have labelled boxes like “IC Parts 1” and “IC Parts 2”; these were Plano StowAway 932 boxes, which work okay, although I’ve since discovered you shouldn’t leave the inner boxes exposed to UV light: the plastic becomes brittle and falls apart.

The inner boxes themselves were labelled by their position within the outer box (row, column), and each “bin” inside the inner box was labelled by row and column.

IC tubes themselves were also labelled, so if I had several sitting in a box, I could identify them and their location. Some were small enough to fit inside these boxes; others were stored in large storage tubs (I have two).

If I wanted to know where I had put some LM311 comparators, I might look up the database and it’d tell me that there were 3 of them in IC Box 1/Row 2/Row 3/Column 5. If luck was on my side, I’d go to that box, pull out the inner box, open it up and find what I was looking for plugged into some anti-static foam or stashed in a small IC tube.

The parts themselves were fairly basic, just a description, a link to a data sheet, and some other particulars. I’d then have a separate table that recorded how many of each part was present, and in which location.
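
A minimal sketch of that sort of schema, using Django and django-mptt (the model names here are illustrative, not the original project’s code):

from django.db import models
from mptt.models import MPTTModel, TreeForeignKey

class Location(MPTTModel):
    # A storage location; "parent" makes this a tree, so a location can sit
    # inside another location to any depth.
    name = models.CharField(max_length=100)
    parent = TreeForeignKey('self', null=True, blank=True,
                            related_name='children',
                            on_delete=models.CASCADE)

class Part(models.Model):
    # A part type: a description and a link to its datasheet.
    description = models.CharField(max_length=200)
    datasheet_url = models.URLField(blank=True)

class Stock(models.Model):
    # How many of a given part sit in a given location.
    part = models.ForeignKey(Part, on_delete=models.CASCADE)
    location = models.ForeignKey(Location, on_delete=models.CASCADE)
    quantity = models.PositiveIntegerField(default=0)

Finding where a part lives is then a query across Stock, and django-mptt’s get_ancestors() gives the full path from the room down to the box.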

So from the locations perspective, it did everything I wanted, but parametric search was out of the question.

The place here looks like a tip now, so I really do need to get on top of what I have; so much so that I’m telling people: no more projects until I’ve catalogued what’s already here.

Other solutions exist. OpenERP had a warehouse inventory module, and I suspect Odoo continues this, but it’s a bit of a beast to try and figure out and it seems customisation has been significantly curtailed from the OpenERP days.

PartKeepr (if you can tolerate deliberate bad spelling) is another option. It seems to have very good parametric search of parts, but one downside is that it has a flat view of locations. There’s a proposal to enhance this, but it’s been languishing for 4 years now.

VRT used to have a semi-active track-and-trace business built on a tracking software package called P-Trak. P-Trak had some nice ideas (including a surprisingly modern message-passing back-end, even if a proprietary one), but it is overkill for my needs, and it’s a pain to try and deploy, even if I were licensed to do so.

That doesn’t mean, though, that I can’t borrow some ideas from it. It integrated barcode scanners into the user interface, something these open-source parts inventory packages seem to overlook. I don’t have a dedicated barcode scanner, but I do have a phone with a camera, and a webcam on my netbook. Libraries exist to do this from a web browser, such as this one for QR codes.

My big problem right now is the need to do a stock-take to see what I’ve still got and what I’ve added since, along with where it has gone. I’ve got a lot of “random boxes” now which are unlabelled and just have random items thrown in due to lack of time. It’s likely those items won’t remain there either. I need some frictionless way to record where things are getting put. It doesn’t matter exactly where something gets put, so long as I record that information for use later. If something is going to move to a new location, I want to be able to record that with as little fuss as possible.

So the thinking is this:

  • Print labels for all my storage locations with UUIDs stored as barcodes
  • Enter those storage locations into a database using the UUIDs allocated
  • Expand (or re-write) my parts catalogue database to handle these UUIDs (a sketch follows this list):
    • adding new locations (e.g. when a consignment comes in)
    • recording movement of containers between parent locations
    • sub-dividing locations (e.g. recording the content of a consignment)
    • (partial and complete) merging locations (e.g. picking parts from stock into a project-specific container)
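
As a rough sketch of the “recording movement” item, a move could be as simple as re-parenting the container’s Location row (re-using the hypothetical models above) while keeping an audit trail:

from django.db import models
from django.utils import timezone

class Movement(models.Model):
    # Audit trail: which container moved, from where, to where, and when.
    location = models.ForeignKey('Location', related_name='movements',
                                 on_delete=models.CASCADE)
    old_parent = models.ForeignKey('Location', null=True, blank=True,
                                   related_name='+', on_delete=models.SET_NULL)
    new_parent = models.ForeignKey('Location', null=True, blank=True,
                                   related_name='+', on_delete=models.SET_NULL)
    moved_at = models.DateTimeField(default=timezone.now)

def move_location(container, new_parent):
    # Scan the container's barcode, scan the destination's barcode, call this.
    Movement.objects.create(location=container,
                            old_parent=container.parent,
                            new_parent=new_parent)
    container.parent = new_parent
    container.save()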

The first step on this journey is to catalogue the storage containers I have now. Some are already entered into the old system, so I’ve grabbed a snapshot of that and can pick through it. Others are new boxes that have arrived since, and had additional things thrown in.

I looked at ways I could label the boxes. Previously that was a spirit pen hand-writing a label, but this does not scale. If I’m to do things efficiently, then a barcode seems the logical way to go since it uses what I already have.

Something new comes in? Put a barcode on the box, scan it, enter it into the system as a new location, then mark where that box is being stored by scanning the location barcode where I’ll put the box. Later, I’ll grab the box, open it up, and I might repeat the process with any IC tubes or packets of parts inside, marking them as being present inside that box.

Need something? Look up where it is, then “check it out” into my work area… now, ideally when I’m finished, it should go back there, but if I’m in a hurry, I just throw it in a box, somewhere, then record that I put it there. Next time I need it, I can look up where it is. Logical order isn’t needed up front, and can come later.

So, step 1 is to label all the locations. Since I’m doing this before the database is fully worked out and I want to avoid ID clashes, I’m using UUIDs to label all the locations. Initially I thought of QR codes, but then realised some of the “locations” are DIP IC storage tubes, which do not permit large square labels. I did some experiments with Code-128, but found it was near impossible to reliably encode a UUID that way: my phone had difficulty recognising the entire barcode.

I returned to the idea of QR codes, and found that my phone will scan a 10mm×10mm QR code printed on a page. That’s about the right height for the side of an IC tube. We had some inkjet labels kicking around: small 38.1×21.2mm labels arranged in a 5×13 grid of 65 per sheet (Avery J8651/L7651 layout). Could I make a script that generated a page full of QR codes?

Turns out, pylabels will do this. It is built on reportlab, which, amongst other things, embeds a barcode generator supporting various symbologies, including QR codes. @hugohadfield had contributed a pull request which demonstrated using this tool with QR codes. I just had to tweak this for my needs.

# This file is part of pylabels, a Python library to create PDFs for printing
# labels.
# Copyright (C) 2012, 2013, 2014 Blair Bonnett
#
# pylabels is free software: you can redistribute it and/or modify it under the
# terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# pylabels is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# pylabels.  If not, see <http://www.gnu.org/licenses/>.

import uuid

import labels
from reportlab.graphics.barcode import qr
from reportlab.lib.units import mm

# Create an A4 portrait (210mm x 297mm) sheet with 5 columns and 13 rows of
# labels. Each label is 38.1mm x 21.2mm with a 2mm rounded corner. The margins
# are specified explicitly to suit this label stock.
specs = labels.Specification(210, 297, 5, 13, 38.1, 21.2, corner_radius=2,
        left_margin=6.7, right_margin=3, top_margin=10.7, bottom_margin=10.7)

def draw_label(label, width, height, obj):
    # Each label gets a freshly-generated random UUID; the obj value supplied
    # by pylabels (the numbers passed to add_labels() below) is not used here.
    size = 12 * mm
    label.add(qr.QrCodeWidget(
            str(uuid.uuid4()),
            barHeight=height, barWidth=size, barBorder=2))

# Create the sheet.
sheet = labels.Sheet(specs, draw_label, border=True)

sheet.add_labels(range(1, 66))

# Save the file and we are done.
sheet.save('basic.pdf')
print("{0:d} label(s) output on {1:d} page(s).".format(sheet.label_count, sheet.page_count))

The alignment is slightly off, but not severely. I’ll fine tune it later. I’m already through about 30 of those labels. It’s enough to get me started.

For the larger J8165 2×4 sheets, the following specs work. (I can see this being a database table!)

# Specifications for Avery J8165 2×4 99.1×67.7mm
specs = labels.Specification(210, 297, 2, 4, 99.1, 67.7, corner_radius=3,
        left_margin=5.5, right_margin=4.5, top_margin=13.5, bottom_margin=12.5)
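
If these layouts did end up in a table, it might be nothing fancier than a mapping keyed by the Avery part number; the figures below are just the two quoted above, fed back into pylabels.

# (sheet width, height, columns, rows, label width, label height) plus margins
LABEL_SPECS = {
    'L7651': ((210, 297, 5, 13, 38.1, 21.2),
              dict(corner_radius=2, left_margin=6.7, right_margin=3,
                   top_margin=10.7, bottom_margin=10.7)),
    'J8165': ((210, 297, 2, 4, 99.1, 67.7),
              dict(corner_radius=3, left_margin=5.5, right_margin=4.5,
                   top_margin=13.5, bottom_margin=12.5)),
}

args, kwargs = LABEL_SPECS['J8165']
specs = labels.Specification(*args, **kwargs)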

Later when I get the database ready (standing up a new VM to host the database and writing the code) I can enter this information in and get back on top of my inventory once again.

6LoWHAM: ARQ over a “mesh” network

So earlier, I had mentioned that it’s really not desirable to have ARQ (automatic repeat request) on a link carrying TCP datagrams.  My comment is based on this observation:

http://sites.inka.de/bigred/devel/tcp-tcp.html

In that article, the discussion is about one TCP connection being tunnelled over another TCP connection.  Basically it comes down to the lower layer buffering and re-sending the TCP datagrams just as the upper layer gives up on hearing a reply and re-sends its own attempt.

Now, end-to-end ACKs have been done on long chains of AX.25 networks before.  It’s generally accepted to be an unreliable mechanism.  UDP for sure can benefit, but then many protocols that use UDP already do their own handling of lost messages.  CoAP for instance does its own ARQ, as does TFTP.

Gerald Wagenknecht, Markus Anwander and Torsten Braun discuss some of the impacts of this on an 802.15.4 network in their paper “Hop-to-Hop Reliability in IP-based Wireless Sensor Networks – a Cross-Layer Approach”.  In this, they talk about a variant of TCP called TSS: TCP Support for Sensor Networks.  This was discussed in depth in a thesis by Adam Dunkels, “Towards TCP/IP for Wireless Sensor Networks”.

This latter document was apparently the inspiration for 6LoWPAN.  Section 4.4.3 discusses the approaches to handling ARQ in TCP.  Section 9.6 goes into further detail on how ARQ might be handled elsewhere in the network.

Thankfully in our case, it’s only the network that’s constrained; the nodes themselves will be no smaller than a Raspberry Pi, which would have held its own against the PC that Adam Dunkels used to write that thesis!

In short, it looks as if just routing IP packets is not going to cut it; we need to actually handle the TCP side of things as well.  As for other protocols like CoAP, I guess the answer is to be patient.  The timeout settings defined in RFC-7252 are usually tuneable, and it may be desirable to back those off just a little for use over AX.25.
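
To get a feel for what “backing off” means in practice, the retransmission schedule falls out of three constants in RFC-7252 section 4.8 (ACK_TIMEOUT, ACK_RANDOM_FACTOR, MAX_RETRANSMIT).  The 10-second figure below is purely an illustration, not a recommendation:

def coap_retransmit_schedule(ack_timeout, ack_random_factor=1.5,
                             max_retransmit=4):
    # Worst-case wait before each retry of a Confirmable message; the timeout
    # doubles after every retransmission.
    timeout = ack_timeout * ack_random_factor
    schedule = []
    for _ in range(max_retransmit + 1):
        schedule.append(timeout)
        timeout *= 2
    return schedule

print(coap_retransmit_schedule(2))    # RFC-7252 defaults: 3, 6, 12, 24, 48 seconds
print(coap_retransmit_schedule(10))   # backed off for a slow AX.25 link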

6LoWHAM Research

So, doing some more digging here.  One question people might ask is what kind of applications would I use over this network?

Bear in mind that it’s running at 1200 baud!  If we use HTTP at all, tiny is the word!  No bloated images, and definitely no big heavy JavaScript frameworks like ReactJS, Angular, Dojo or jQuery.  You can forget watching Netflix in 4K over this link.

HTTP really isn’t designed for low-bandwidth links, as Steve Netting demonstrated:

The page itself is bad enough, but even then, it has only loaded after a minute.  The really slow bit is the 20kB GIF.

So yeah, slow-scan television, the ability to send weather radar images over, that is something I was thinking of, but not like that!

HTTP uses pretty verbose headers:

GET /qld/forecasts/brisbane.shtml?ref=hdr HTTP/1.1
Host: www.bom.gov.au
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-AU,en-GB;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://www.bom.gov.au/products/IDR664.loop.shtml
Cookie: bom_meteye_windspeed_units_knots=yes
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Pragma: no-cache
Cache-Control: no-cache

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Encoding: gzip
Content-Type: text/html; charset=UTF-8
Server: Apache
Vary: Accept-Encoding
Content-Length: 6321
Date: Sat, 20 Oct 2018 10:56:12 GMT
Connection: keep-alive

That request is 508 bytes and the response headers are 216 bytes.  It’d be inappropriate on 6LoWPAN, as you’d be fragmenting that packet left, right and centre in order to squeeze it into the 128-byte 802.15.4 frames.

In that video, ICMP echo requests were also demonstrated, and those weren’t bad!  Yes, a little slow, but workable.  So to me, it’s not the packet network that’s the problem; it’s that something big like HTTP is simply not appropriate for a 1200-baud radio link.

It might work on 9600 baud packet … maybe.  My Kantronics KPC3 doesn’t do 9600 baud over the air.

CoAP was designed for tight messages.  It is UDP based, so your TCP connection overhead disappears, and the “options” are encoded as individual bytes in many cases.  There are other UDP-based protocols that would work fine too, as well as older TCP protocols such as Telnet.

A request, and reply in CoAP look something like this:

Hex dump of request:
00000000  40 01 00 01 3b 65 78 61  6d 70 6c 65 2e 63 6f 6d   @...;exa mple.com
00000010  81 63 03 52 46 77 11 3c                            .c.RFw.< 

Hex dump of response:
00000000  60 45 00 01 c1 3c ff a1  1a 00 01 11 70 a1 01 a3   `E...<.. ....p...
00000010  04 18 64 02 6b 31 39 32  2e 31 36 38 2e 30 2e 31   ..d.k192 .168.0.1
00000020  03 64 65 74 68 30                                  .deth0

Or in more human readable form:

Request:
Constrained Application Protocol, Confirmable, GET, MID:1
    01.. .... = Version: 1
    ..00 .... = Type: Confirmable (0)
    .... 0000 = Token Length: 0
    Code: GET (1)
    Message ID: 1
    Opt Name: #1: Uri-Host: example.com
        Opt Desc: Type 3, Critical, Unsafe
        0011 .... = Opt Delta: 3
        .... 1011 = Opt Length: 11
        Uri-Host: example.com
    Opt Name: #2: Uri-Path: c
        Opt Desc: Type 11, Critical, Unsafe
        1000 .... = Opt Delta: 8
        .... 0001 = Opt Length: 1
        Uri-Path: c
    Opt Name: #3: Uri-Path: RFw
        Opt Desc: Type 11, Critical, Unsafe
        0000 .... = Opt Delta: 0
        .... 0011 = Opt Length: 3
        Uri-Path: RFw
    Opt Name: #4: Content-Format: application/cbor
        Opt Desc: Type 12, Elective, Safe
        0001 .... = Opt Delta: 1
        .... 0001 = Opt Length: 1
        Content-type: application/cbor
    [Uri-Path: coap://example.com/c/RFw]

Response:
Constrained Application Protocol, Acknowledgement, 2.05 Content, MID:1
    01.. .... = Version: 1
    ..10 .... = Type: Acknowledgement (2)
    .... 0000 = Token Length: 0
    Code: 2.05 Content (69)
    Message ID: 1
    Opt Name: #1: Content-Format: application/cbor
        Opt Desc: Type 12, Elective, Safe
        1100 .... = Opt Delta: 12
        .... 0001 = Opt Length: 1
        Content-type: application/cbor
    End of options marker: 255
    Payload: Payload Content-Format: application/cbor, Length: 31
        Payload Desc: application/cbor
        [Payload Length: 31]
Concise Binary Object Representation
    Map: (1 entries)
        Unsigned Integer: 70000
            Map: (1 entries)
                ...0 0001 = Unsigned Integer: 1
                    Map: (3 entries)
                        ...0 0100 = Unsigned Integer: 4
                            Unsigned Integer: 100
                        ...0 0010 = Unsigned Integer: 2
                            Text String: 192.168.0.1
                        ...0 0011 = Unsigned Integer: 3
                            Text String: eth0

That there also shows another tool for data packing: CBOR.  CBOR is basically binary JSON.  Just like JSON it is schemaless; it has objects, arrays, strings, booleans, nulls and numbers (CBOR differentiates between integers of various sizes and floats).  Unlike JSON, it is tight.  The CBOR blob in this response would look like this as JSON (in the most compact representation possible):

{70000:{1:{4:100,2:"192.168.0.1",3:"eth0"}}}

The entire exchange is 190 bytes, a fraction of the size of just the HTTP request alone.  I think that would work just fine over 1200-baud packet.  As a bonus, you can also multicast; try doing that with HTTP.
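
That payload length is easy to sanity-check from Python; assuming the cbor2 package, the blob above comes out at the 31 bytes shown in the dissection:

import cbor2

payload = cbor2.dumps({70000: {1: {4: 100, 2: "192.168.0.1", 3: "eth0"}}})
print(len(payload))     # 31 bytes, as per the dissection above
print(payload.hex())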

So you’d be writing higher-level services that would use this instead of JSON-REST interfaces.  There’s a growing number of libraries that can consume this sort of thing, and IoT is pushing that further.  I think it’s doable.

Now, on the routing front, I’ve been digging up a bit on Net/ROM.  Net/ROM is actually two parts: Net/ROM Level 3 does the routing and Level 4 does the circuit switching.  It’s the “Level 3” bit we want.

Coming up with a definitive specification of the protocol has been a bit tough; it doesn’t help that there is a company called NetROM, but I did manage to find this document.  In a way, if I could make my software behave like a Net/ROM node, I could piggy-back off that to discover neighbours.  Thus this protocol would co-exist alongside Net/ROM networks that may be completely oblivious to TCP/IP.

This is preferable to just re-inventing the wheel…yes I know non-circular wheels are so much fun!  Really, once Net/ROM L3 has figured out where everyone is, IP routing just becomes a matter of correctly addressing the AX.25 frame so the next hop receives the message.

VK4RZB at Mt. Coot-tha is one such node running TheNet.  It’s easy enough to do tests on, as it’s a mere stone’s throw away from my home QTH.

There’s a little consideration to make about how to label the AX.25 frame.  Obviously, it’ll be a UI frame, but what PID field should I use?  My instinct suggests that I should just label it as “ARPA Internet Protocol”, since it is Internet Protocol traffic, just IPv6 instead of v4.  Not all the codes are taken though, 0xc9 is free, so I could be cheeky and use that instead.  If the idea takes off, we can talk with the TAPR then.
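
For illustration only (the call-signs below are placeholders), a minimal AX.25 UI frame carrying an IPv6 datagram would look something like this at the byte level; the HDLC flags and FCS get added further down the stack.

def ax25_addr(callsign, ssid, last=False):
    # AX.25 address field: 6 callsign characters each shifted left one bit,
    # then an SSID byte with the reserved bits set and the extension bit
    # marking the final address in the list.
    field = bytearray((ord(c) << 1) for c in callsign.ljust(6).upper())
    field.append(0x60 | ((ssid & 0x0F) << 1) | (0x01 if last else 0x00))
    return bytes(field)

def ax25_ui_frame(dest, src, pid, payload):
    # UI frame: destination, source, control byte 0x03 (UI), PID, payload.
    return (ax25_addr(*dest) + ax25_addr(*src, last=True)
            + bytes([0x03, pid]) + payload)

PID_ARPA_IP = 0xCC   # assigned "ARPA Internet Protocol"
PID_UNUSED = 0xC9    # the currently-unassigned value floated above

frame = ax25_ui_frame(('VK4AAA', 0), ('VK4BBB', 7),
                      PID_ARPA_IP, b'...IPv6 datagram...')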

6LoWHAM Background

So, I’ll admit to looking at AX.25 with the typical modems available (the classical 1200-baud AFSK and the more modern G3RUH modem which runs at a blistering 9600 baud… look out 5G!) years ago and wondering “what’s the point”?

It was Brisbane Area WICEN’s involvement in the International Rally of Queensland that changed my view somewhat.  This was an event that, until CAMS knocked it on the head, ran annually in the Imbil State Forest up in the Sunshine Coast hinterland.

There, WICEN used packet radio for forwarding the scores of drivers as they passed through each stage of the rally.  A checkpoint would be at the start and finish of each stage, and a packet network would be set up with digipeaters in strategic locations and a base station, often located at the Imbil school.

The organisers of IRoQ did experiment with other ways of getting scores through, including hiring bandwidth on satellites, flying planes around in circles over the area, and other shenanigans.  Although these systems had faster throughput speeds, one thing they had that we did not have was latency.  Over packet, the score would arrive back at base long before the car had left the checkpoint.

This freed up the analogue FM network for reporting other more serious matters.

In addition to this kind of work, WICEN also helps out with horse endurance rides.  Traditionally we’ve just relied on good ol’ analogue FM radio, but in events such as the Tom Quilty, there has been a desire to use packet as a mechanism for reporting when horses arrive at given checkpoints, and perhaps to enable autonomous stations that can detect horses via RFID and report those “back to base” to deter riders from cheating.

The challenge of AX.25 is two-fold:

  1. With the exception of Linux, no mainstream OS has any kind of baked-in support for it, so writing applications that can interact with AX.25 means either implementing your own AX.25 stack or interfacing to some third-party stack such as BPQ.
  2. Due to the specialised stack, applications often have to run as privileged applications, can have problems with firewalling, etc.

The AX.25 protocol does do static routing.  It offers connected-mode links (like TCP) and a connectionless mode (like UDP), and there are at least two routing protocols I know of that allow for dynamic routing (ROSE, Net/ROM).  There is a standard for doing IPv4 over AX.25, but you still need to manage the allocation of addresses and other details; it isn’t plug-and-play.

Net/ROM would make an ideal way to forward 6LoWPAN traffic, except it only does connected mode, and doing IP over a “TCP-like” link is really a bad idea.  (Anything that does automatic repeat requests really messes with TCP/IP.)

I have no idea whether ROSE does the connectionless mode, but the idea of needing to come up with a 10-digit numeric “address” is a real turn-off.

If the address used can be derived off the call-sign of the operator, that makes life a lot easier.

The IPv6 address format has enough bits to do that.  To me the most obvious way would be to derive a MAC address from a call-sign and an arbitrarily chosen digit (0-7).  It would be reversible of course, and since the MAC address is used in SLAAC, you would see the station’s call-sign in the IPv6 address.
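
As a sketch of what “reversible” might look like (this is an illustration, not a settled scheme): pack the call-sign base-37, append the chosen digit, and set the locally-administered bit so the result can never collide with a real vendor MAC.  SLAAC then stretches that 48-bit MAC into the interface ID in the usual way.

ALPHABET = ' ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'

def callsign_to_mac(callsign, digit):
    # Up to 6 characters, base-37 packed, then 3 bits for the 0-7 digit.
    value = 0
    for ch in callsign.upper().ljust(6):
        value = value * 37 + ALPHABET.index(ch)
    value = (value << 3) | (digit & 0x07)
    mac = bytearray(value.to_bytes(6, 'big'))
    mac[0] |= 0x02            # locally administered, never a vendor OUI
    return bytes(mac)

def mac_to_callsign(mac):
    value = int.from_bytes(bytes([mac[0] & 0xFD]) + mac[1:], 'big')
    digit = value & 0x07
    value >>= 3
    chars = []
    for _ in range(6):
        value, idx = divmod(value, 37)
        chars.append(ALPHABET[idx])
    return ''.join(reversed(chars)).strip(), digit

print(callsign_to_mac('N0CALL', 0).hex())
print(mac_to_callsign(callsign_to_mac('N0CALL', 0)))   # ('N0CALL', 0)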

The thinking is that there’s a lot of problems that have been solved in 6LoWPAN.  Discovery of services for example is handled using mechanisms like mDNS and CoRE RD.  We don’t need to forward Internet traffic, although being able to pull up the Mt. Kanigan and Mt. Stapylton radars over such a network would be real handy at times (yes, I know it’ll be slow).

The OS will view the packet network like a VPN, and so writing applications that can talk over packet will be no different to writing any other kind of network software.  Any consumer desktop OS written in the last 16 years has the necessary infrastructure to support it (even Windows 2000 had a downloadable add-on for it).

Linking two separate “mesh” networks via point-to-point links is also trivial.  Each mesh will of course see the other as “external” but participants on both can nonetheless communicate.

The guts of 6LoWPAN is in RFC-4944.  This specifies details about how the IPv6 datagram is encoded as an IEEE 802.15.4 payload, and how the infrastructure within 802.15.4 is used to route IPv6.  Gnarly details, like how a 1280-byte IPv6 datagram is fragmented to fit into 802.15.4’s 128-byte maximum frame size, are handled here.  For what it’s worth, AX.25 allows 255 bytes (or was it 256?), so we’re ahead there.

Crucially, it is assumed that the 802.15.4 layer can figure out how to get from node A to node Z via B… C…, etc.  802.15.4 networks are managed by a PAN coordinator, which provides various services to the network.

AX.25 makes this “our problem”.  Yes, the sender of a frame can direct which digipeaters a frame should be passed to, but they have to figure that out first.  It’s like sending an email by UUCP: you need a map of the Internet to figure out what someone’s address is relative to your site.

Plain AX.25 digipeaters will of course be part of the mix, so having the ability to reach a node stuck on the far side of such a digipeater would be worth having.  Ultimately though, the aim here is to provide a route discovery mechanism that, knowing a few static digipeater routes, can figure out who is able to hear whom, and route traffic accordingly.

Digital Emergency comms ideas

Yesterday’s post was rather long, but was intended for mostly technical audiences outside of amateur radio.  This post serves as a brain dump of volatile memory before I go to sleep for the night.  (Human conscious memory is more like D-RAM than one might realise.)

Radio interface

So, many in our group use packet radio TNCs already, with a good number using the venerable Kantronics KPC3.  These have a DB9 port that connects to the radio and a second DB25 RS-232 port that connects to the computer.

My proposal: we make an audio interface that either plugs into that DB9 port and re-uses the interface cables we already have, or directly into the radio’s data port.

This should connect to an audio interface on the computer.

For EMI’s sake, I’d recommend a USB sound dongle like this, or these, or this as that audio interface.  I looked on Jaycar and did see this one, which would also work (and burn a hole in your wallet!).

If you walk in and the asking price is more than $30, I’d seriously consider these other options.  Of those options, U-Mart are here in Brisbane; go to their site, order a dongle then tell the site you’ll come and pick it up.  They’ll send you an email with an order number when it’s ready, you just need to roll up to the store, punch that number into a terminal in the shop, then they’ll call your name out for you to collect and pay for it.

Scorptec are in Melbourne, so you’ll have to have items shipped, but are also worth talking to.  (They helped me source some bits for my server cluster when U-Mart wouldn’t.)

USB works over two copper pairs; one delivers +5V and 0V, the other is a differential pair for data.  In short, the USB link should be pretty immune from EMI issues.

At worst, you should be able to deal with it with judicious application of ferrite beads to knock down the common mode current and using a combination of low-ESR electrolytic and ceramic capacitors across the power rails.

If you then keep the analogue cables as short as absolutely possible, you should have little opportunity for RF to get in.

I don’t recommend the Tigertronics SignaLink interfaces; they use cheap and nasty isolation transformers that lead to serious performance issues.

Receive audio

For the receive audio, we take the audio from the radio and feed it via a potentiometer to the tip of a 3.5mm TRS plug, with the sleeve going to common.  This plugs into the Line-In or Microphone input on the sound device.

Push to Talk and Transmit audio

I’ve bundled these together for a good reason.  The conventional way for computers to drive PTT is via an RS-232 serial port.

We can do that, but we won’t unless we have to.

Unless you’re running an original SoundBLASTER card, your audio interface is likely stereo.  We can get PTT control via an envelope detector forming a minimal-latency VOX control.

Another 3.5mm TRS plug connects to the “headphone” or “line-out” jack on our sound device and breaks out the left and right channels.

The left and right channels from the sound device should be fed into the “throw” contacts on two single-pole double-throw toggle switches.

The select pin (mechanically operated by the toggle handle) on each switch thus is used to select the left or right channel.

One switch’s select pin feeds into a potentiometer, then to the radio’s input.  We will call that the “modulator” switch; it selects which channel “modulates” our audio.  We can again adjust the gain with the potentiometer.

The other switch first feeds through a small Schottky diode then across a small electrolytic capacitor (to 0V) then through a small resistor before finally into the base of a small NPN signal transistor (e.g. BC547).  The emitter goes to 0V, the collector is our PTT signal.

This is the envelope detector we all know and love from our old experiments with crystal sets.  In theory, we could hook a speaker and a power source up to the collector and listen to AM radio stations, but in this case, we’ll be sending a tone down this channel to turn the transistor, and thus our PTT, on.

The switch feeding this arrangement we’ll call the “PTT” switch.

By using this arrangement, we can use either audio channel for modulation or PTT control, or we can use one channel for both.  1200-baud AFSK, FreeDV, etc, should work fine with both on the one channel.

If we just want to pass through analogue audio, then we probably want modulation separate, so we can hold the PTT open during speech breaks without having an annoying tone superimposed on our signal.

It may be prudent to feed a second resistor into the base of that NPN, running off to the RTS pin on an RS-232 interface.  This will let us use software that relies on RS-232 PTT control, which can be added by way of a USB-RS232 dongle.

The cheap Prolific PL-2303 ones sold by a few places (including Jaycar) will work for this.  (If your software expects a 16550 UART interface on port 0x3f8 or similar, consider running it in a virtual machine.)

Ideally though, this should not be needed, and if added, can be left disconnected without harm.

Software

There are a few “off-the-shelf” packages that should work fine with this arrangement.

AX.25 software

AGWPE on Windows provides a software TNC.  On Linux, there’s soundmodem (which I have used, and presently mirror) and Direwolf.

These shouldn’t need a separate PTT channel; it should be sufficient to make the pre-amble long enough to engage PTT and rely on the envelope detector holding it for the duration of the packet.

Digital Voice

FreeDV provides an open-source digital voice platform for Windows, Linux and MacOS X.

This tool also lets us send analogue voice.  Digital voice should be fine: the first frame might get lost, but as a frame is 40ms, we just wait a moment before we start talking, like we would for regular analogue radio.

For the analogue side of things, we would want tone-driven PTT.  I’m not sure if that’s supported, but hey, we’ve got the source code, and yours truly has worked with it, so it shouldn’t be hard to add.

Slow-scan television

The two to watch here would be QSSTV (Linux) and EasyPal (Windows).  QSSTV is open-source, so if we need to make modifications, we can.

Not sure who maintains EasyPal these days, not Eric VK4AES as he’s no longer with us (RIP and thank-you).  Here, we might need an RS-232 PTT interface, which as discussed, is not a hard modification.

Radioteletype

Most is covered by FLDigi.  Modes with a fairly consistent duty cycle will work fine with the VOX PTT, and once again, we have the source, we can make others work.

Custom software ideas

So we can use a few off-the-shelf packages to do basic comms.

  • We need auditability of our messaging system.  Analogue FM, we can just use a VOX-like function on the computer to record individual received messages, and to record outgoing traffic.  Text messages and files can be logged.
  • Ideally, we should have some digital signing of logs to make them tamper-resistant.  Then we can mathematically prove what was sent.
  • In a true emergency, it may be necessary to encrypt what we transmit.  This is fine: we’re allowed to do this in such cases, and we can always turn over our audited logs to the authorities anyway.
  • Files will be sent as blocks which are forward-error corrected (or forward-erasure coded).  We can use a block cipher such as AES-256 to encrypt these blocks before FEC.  OpenPGP would work well here rather than doing it from scratch; just send the OpenPGP output using FEC blocks.  It should be possible to pick out the symmetric key used, at the receiving end, for auditing; this would be done if asked for by Government.  DIY is not necessary; the building blocks are there.
  • Digital voice is a stream, we can use block ciphers but this introduces latency and there’s always the issue of bit errors.  Stream ciphers on the other hand, work by generating a key stream, then XOR-ing that with the data.  So long as we can keep sync in the face of bit errors, use of a stream cipher should not impair noise immunity.
  • Signal fade is a worse problem.  I suggest a cleartext (3-bit, 4-bit?) Gray-code sync field for synchronisation (see the sketch after this list).  The receiver can time the length of a fade, estimate the number of lost frames, then use the field to re-sync.
  • There’s more than a dozen stream ciphers to choose from.  Some promising ones are ACHTERBAHN-128, Grain 128a, HC-256, Phelix, Py, the Salsa20 family, SNOW 2/3G, SOBER-128, Scream, Turing, MUGI, Panama, ISAAC and Pike.
  • Most (all?) stream ciphers are symmetric.  We would have to negotiate/distribute a key somehow, either use Diffie-Hellman or send a generated key as an encrypted file transfer (see above).  The key and both encrypted + decrypted streams could be made available to Government if needed.
  • The software should be capable of:
    • Real-time digital voice (encrypted and clear; the latter being compatible with FreeDV)
    • File transfer (again, clear and encrypted using OpenPGP, and using good FEC, files will be cryptographically signed by sender)
    • Voice mail and SSTV, implemented using file transfer.
    • Radioteletype modes (perhaps PSK31, Olivia, etc), with logs made.
    • Analogue voice pass-through, with recordings made.
    • All messages logged and time-stamped, received messages/files hashed, hashes cryptographically signed (OpenPGP signature)
    • Operation over packet networks (AX.25, TCP/IP)
    • Standard message forms with some basic input validation.
    • Ad-hoc routing between interfaces (e.g. SSB to AX.25, AX.25 to TCP/IP, etc) should be possible.
  • The above stack should ideally work on low-cost single-board computers that are readily available and are low-power.  Linux support will be highest priority, Windows/MacOS X/BSD is a nice-to-have.
  • GNU Radio has building blocks that should let us do most of the above.
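
The Gray-code sync idea above is small enough to sketch outright: adjacent frame numbers differ in exactly one bit, and the receiver can compare the last sync value it decoded against the next one to estimate how many frames a fade swallowed (3-bit case shown, all of it illustrative).

def gray_encode(n):
    return n ^ (n >> 1)

def gray_decode(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def frames_lost(prev_sync, new_sync, modulo=8):
    # How many frames went missing between two successfully decoded sync fields.
    return (gray_decode(new_sync) - gray_decode(prev_sync) - 1) % modulo

for frame in range(8):
    print(frame, format(gray_encode(frame), '03b'))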

Getting the UDRC-II to play nice with the Yaesu FT-897D

The Yaesu FT-897D has the de-facto standard 6-pin Mini-DIN data jack on the back to which you can plug a digital modem.  Amongst the pins it provides is a squelch status pin, and in the past I’ve tried using that to drive (via transistors) the carrier detect pin on various computer interfaces to enable the modem to detect when a signal is incoming.

The FT-897D is fussy however.  Any load at all pulling this pin down, and you get no audio.  Any load.  One really must be careful about that.

Last week when I tried the UDRC-II, I hit the same problem.  I was able to prove it was the UDRC-II by construction of a crude adapter cable that hooked up to the DB15-HD connector, converting that to Mini-DIN6: by avoiding the squelch status pin, I avoided the problem.

One possible solution was to cut the supplied Mini-DIN6 cable open, locate the offending wire and cut it.  Not a solution I relish doing.  The other was to try and fix the UDRC-II.

Discussing this on the list, it was suggested by Bryan Hoyer that I use a 4.7k pull-up resistor on the offending pin to 3.3V.  He provided a diagram that indicated where to find the needed signals to tap into.

With that information, I performed the following modification.  A 1206 4.7k resistor is tacked onto the squelch status pin, and a small wire run from there to the 3.3V pin on a spare header.

UDRC-II modification for Yaesu FT-897D

I’m in two minds whether this should be a diode instead: just in case a radio asserts +12V on this line, I don’t want +12V frying the SoC in the Raspberry Pi.  On the other hand, this is working; it isn’t “broke”.

Doing the above fixed the squelch drive issue and now I’m able to transmit and receive using the UDRC-II.  Many thanks to Bryan Hoyer for pointing this modification out.

A DIY computer

Well, I’ve been thinking a lot lately about single board computers. There’s a big market out there. Since the Raspberry Pi, there’s been a real explosion of options available to the small end of town: the individual. Prior to this, development boards were mostly in the four-figure price range.

So we’re now rather spoiled for choice. I have a Raspberry Pi. There’s also the BeagleBone Black, Banana Pi, and several others. One gripe I have with the Raspberry Pi is the complete absence of any kind of analogue input. There’s an analogue line out, you can interface some USB audio devices (although I hear two is problematic), or you can get an I2S module.

There’s a GPU in there that’s capable of some DSP work, and a CLKOUT pin that can generate a wide range of frequencies. That sounds like the beginnings of a decent SDR; however, there’s one glitch: while I can use the CLKOUT pin to drive a mixer and the GPIOs to do band selection, there’s nothing that will take that analogue signal and sample it.

If I want something wider than audio frequencies (and even a 192kHz audio CODEC is not guaranteed above ~20kHz) I have to interface to SPI, and the pickings are somewhat slim. Then I read this article on a DIY single board computer.

That got me thinking about whether I could do my own. At work we use the Technologic Systems TS-7670 single-board computers, and as nice as those machines are, they’re a little slow and RAM-limited. Something that could work as a credible replacement there too would be nice, the key needs there being RS-485, Ethernet and an 85°C temperature rating.

Form factor is a consideration here, and I figured something modular, using either header pins or edge connectors would work. That would make the module easily embeddable in hobby projects.

Since all the really nice SoCs are BGA packages, I figured I’d first need to know how easily I could work with them. We’ve got a stack of old motherboards sitting in a cupboard that I figured I could raid for BGAs to play with, just to see first-hand how fine the pins were. A crazy thought came to me: maybe, for prototyping, I could do it dead-bug style?

The key thing here is being able to solder directly to a ball securely, then route the wire to its destination. I may need to glue it to a bit of grounded foil to keep the capacitance in check. So the first step, I figured, would be to try removing some components from the boards I had laying around to see this first-hand.

In amongst the boards I came across was one old 386 motherboard that I initially mistook for a 286 minus the CPU. The empty (PLCC) socket is for an 80387 math co-processor. The board was in the cupboard for a good reason, corrosion from the CMOS battery had pretty much destroyed key traces on one corner of the board.

Corrosion on a motherboard caused by a CMOS battery

I decided to take to it with the heat gun first. The above picture was taken post-heatgun, but you can see just how bad the corrosion was. The ISA slots were okay, and so were a stack of other useful IC sockets, ICs, passive components, etc.

With the heat gun at full blast, I’d just wave it over an area of interest until the board started to de-laminate, then with needle-nose pliers, pull the socket or component from the board. Sometimes the component simply dropped out.

At one point I heard a loud “plop”. Looking under the board, one of the larger surface-mounted chips had fallen off. That gave me an idea, could the 386 chip be de-soldered? I aimed the heat-gun directly at the area underneath. A few seconds later and it too hit the deck.

All in all, it was a successful haul.

Parts off the 386 motherboard

I also took apart an 8-bit ISA joystick card. It had some nice looking logic chips that I figured could be re-purposed. The real star though was the CPU itself:

Intel NG80386SX-20

The question comes up, what does one do with a crusty old 386 that’s nearly as old as I am? A quick search turned up this scanned copy of the Intel 80386SX datasheet. The chip has a 16-bit bus with 23 bits worth of address lines (bit 0 is assumed to be zero). It requires a clock that is double the chip’s operating frequency (there’s an internal divide-by-two). This particular chip runs internally at 20MHz. Nothing jumped out as being scary. Could I use this as a practice run for making an ARM computer module?

A dig around dug up some more parts:

More parts

In this pile we have…

I also have some SIMMs laying around, but the SDRAM modules look easier to handle since the controllers on board synchronise with what would otherwise be the front-side bus.  The datasheet does not give a minimum clock (although clearly this is not DC; DRAM does need to be refreshed) and mentions a clock frequency of 33MHz when set to run at a CAS latency of 1.  It just so happens that I have a 33MHz oscillator.  There’s a couple of nits in this plan though:

  • the SDRAM modules are 3.3V, the CPU is 5V: no problem, there are level conversion chips out there.
  • the SDRAM modules are 64 bits wide.  We’ll have to buffer the output through eight 8-bit registers.  Writes do a read-modify-write cycle, and we use a 2-to-4 decoder, driven by address bits 1 and 2 from the CPU, to select the CE pins on two of the registers.
  • Each SDRAM module holds 32MB.  We have a 23-bit address bus, which with 16-bit words gives us a total address space of 16MB.  Solution: the old 8-bit computers of yesteryear used bank-switching to address more RAM/ROM than they had address lines for, we can interface an 8-bit register at I/O address 0x0000 (easily decoded with a stack of Schottky diodes and a NOT gate) which can hold the remaining address bits mapping the memory to the lower 8MB of physical memory.  We then hijack the 386’s MMU to map the 8MB chunks and use the page faults to switch memory banks.  (If we put the SRAM and ROM up in the top 1MB, this gives us ~7MB of memory-mapped I/O to play with.)
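
A back-of-envelope sketch of that bank-switching arithmetic (all numbers illustrative): the CPU only ever sees an 8MB window, and the register at I/O 0x0000 supplies the bits above it.

WINDOW = 8 * 1024 * 1024    # 8MB of the CPU's physical space mapped to SDRAM

def sdram_address(bank_register, cpu_address):
    # Physical SDRAM byte address for an access that lands in the window.
    return (bank_register * WINDOW) + (cpu_address % WINDOW)

print(hex(sdram_address(0x07, 0x123456)))   # 0x3923456: bank 7, offset 0x123456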

So, not show stoppers.  There’s an example circuit showing interfacing an ATMega8515 to a single SDRAM chip for driving a VGA interface, and some example code, with comments in German. Unfortunately you’d learn more German in an episode of Hogan’s Heroes than what I know, but I can sort-of figure out the sequence used to read and write from/to the SDRAM chip. Nothing looks scary there either.  This SDRAM tutorial seems to be a goldmine.

Thus, it looks like I’ve got enough bits to have a crack at it.  I can run the 386 from that 33MHz brick, which will give me a chip running at 16.5MHz.  Somewhere I’ve got the 40MHz brick laying around from the motherboard (I liberated that some time ago), but that can wait.

A first step would be to try interfacing the 386 chip to an AVR, and feed it instructions one step at a time, check that it’s still alive.  Then, the next steps should become clear.

Implementing EEPROM emulation on the SM1000

Well, lately I’ve been doing a bit of work hacking the firmware on the Rowetel SM1000 digital microphone.  For those who don’t know it, this is a hardware (microcontroller) implementation of the FreeDV digital voice mode: it’s a modem that plugs into the microphone/headphone ports of any SSB-capable transceiver and converts between analogue voice and FreeDV modem tones.

I plan to set this unit of mine up on the bicycle, but there’s a few nits that I had.

  • There’s no time-out timer
  • The unit is half-duplex

If there’s no timeout timer, I really need to hear the tones coming from the radio to tell me it has timed out.  Others might find a VOX feature useful, and there’s active experimentation in the FreeDV 700B mode (the SM1000 currently only supports FreeDV 1600) which has been very promising to date.

Long story short, the unit needed a more capable UI, and importantly, it also needed to be able to remember settings across power cycles.  There’s no EEPROM chip on these things, and while the STM32F405VG has a pin for providing backup-battery power, there’s no battery or supercapacitor, so the SM1000 forgets everything on shut down.

ST do have an application note on their website on precisely this topic.  AN3969 (and its software sources) discuss a method for using a portion of the STM32’s flash for this task.  However, I found their “license” confusing.  So I decided to have a crack myself.  How hard can it be, right?

There’s 5 things that a virtual EEPROM driver needs to bear in mind:

  • The flash is organised into sectors.
  • These sectors when erased contain nothing but ones.
  • We store data by programming zeros.
  • The only way to change a zero back to a one is to do an erase of the entire sector.
  • The sector may be erased a limited number of times.

So on this note, a virtual EEPROM should aim to do the following:

  • It should keep tabs on what parts of the sector are in use.  For simplicity, we’ll divide this into fixed-size blocks.
  • When a block of data is to be changed, if the change can’t be done by changing ones to zeros, a copy of the entire block should be written to a new location, and a flag set (by writing zeros) on the old block to mark it as obsolete.
  • When a sector is full of obsolete blocks, we may erase it.
  • We try to put off doing the erase until such time as the space is needed.

Step 1: making room

The first step is to make room for the flash variables.  They will be directly accessible in the same manner as variables in RAM, however from the application point of view, they will be constant.  In many microcontroller projects, there’ll be several regions of memory, defined by memory address.  This comes from the datasheet of your MCU.

An example, taken from the SM1000 firmware, prior to my hacking (stm32_flash.ld at r2389):

/* Specify the memory areas */
MEMORY
{
  FLASH (rx)      : ORIGIN = 0x08000000, LENGTH = 1024K
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

The MCU here is the STM32F405VG, which has 1MB of flash starting at address 0x08000000. This 1MB is divided into (in order):

  • Sectors 0…3: 16kB each, starting at 0x08000000
  • Sector 4: 64kB, starting at 0x08010000
  • Sectors 5 onwards: 128kB each, starting at 0x08020000

We need at least two sectors, as when one fills up, we will swap over to the other. Now it would have been nice if the arrangement were reversed, with the smaller sectors at the end of the device.

The Cortex-M4 CPU is basically hard-wired to boot from address 0; the BOOT pins on the STM32F4 decide how that address gets mapped. The very first thing in the image is the interrupt vector table, and it MUST be the thing the CPU sees first. Unless told to boot from external memory or system memory, address 0 is aliased to 0x08000000, i.e. flash sector 0. Thus, if you are booting from internal flash, you have no choice: the vector table MUST reside in sector 0.

Normally code and the interrupt vector table live together as one happy family. We could use a couple of 128kB sectors, but 256kB is rather a lot for just an EEPROM storing maybe 1kB of data tops. Two 16kB sectors is just dandy; in fact, we’ll throw in a third one for free since we’ve got plenty to go around.

However, the first 16kB sector will have to be reserved for the interrupt vector table, which will have that space to itself.

So here’s what my new memory regions look like (stm32_flash.ld at 2390):

/* Specify the memory areas */
MEMORY
{
  /* ISR vectors *must* be placed here as they get mapped to address 0 */
  VECTOR (rx)     : ORIGIN = 0x08000000, LENGTH = 16K
  /* Virtual EEPROM area, we use the remaining 16kB blocks for this. */
  EEPROM (rx)     : ORIGIN = 0x08004000, LENGTH = 48K
  /* The rest of flash is used for program data */
  FLASH (rx)      : ORIGIN = 0x08010000, LENGTH = 960K
  /* Memory area */
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  /* Core Coupled Memory */
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

This is only half the story, we also need to create the section that will be emitted in the ELF binary:

SECTIONS
{
  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >FLASH

  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */
    *(.rodata)         /* .rodata sections (constants, strings, etc.) */
    *(.rodata*)        /* .rodata* sections (constants, strings, etc.) */
    *(.glue_7)         /* glue arm to thumb code */
    *(.glue_7t)        /* glue thumb to arm code */
	*(.eh_frame)

    KEEP (*(.init))
    KEEP (*(.fini))

    . = ALIGN(4);
    _etext = .;        /* define a global symbols at end of code */
    _exit = .;
  } >FLASH…

There’s rather a lot here, and so I haven’t reproduced all of it, but this is the same file as before at revision 2389, but a little further down. You’ll note the .isr_vector is pointed at the region called FLASH which is most definitely NOT what we want. The image will not boot with the vectors down there. We need to change it to put the vectors in the VECTOR region.

Whilst we’re here, we’ll create a small region for the EEPROM.

SECTIONS
{
  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >VECTOR


  .eeprom :
  {
    . = ALIGN(4);
    *(.eeprom)         /* special section for persistent data */
    . = ALIGN(4);
  } >EEPROM


  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */

THAT’s better! Things will boot now. However, there is still a subtle problem that initially caught me out here. Sure, the shiny new .eeprom section is unpopulated, BUT the linker has helpfully filled it with zeros. We cannot program zeroes back into ones! Either we have to erase it in the program, or we tell the linker to fill it with ones for us. Thankfully, the latter is easy (stm32_flash.ld at 2395):

  .eeprom :
  {
    . = ALIGN(4);
    KEEP(*(.eeprom))   /* special section for persistent data */
    . = ORIGIN(EEPROM) + LENGTH(EEPROM) - 1;
    BYTE(0xFF)
    . = ALIGN(4);
  } >EEPROM = 0xff

Credit: Erich Styger

We have to do two things. One, we need to tell the linker that we want the region filled with the pattern 0xff. Two, we need to make sure it actually gets filled with ones by telling the linker to write a 0xFF byte as the very last byte. Otherwise, it’ll think, “Huh? There’s nothing here, I won’t bother!” and leave it as a string of zeros.

Step 2: Organising the space

Having made room, we now need to decide how to break this data up.  We know the following:

  • We have 3 sectors, each 16kB
  • The sectors have an endurance of 10000 program-erase cycles

Give some thought as to what data you’ll be storing.  This will decide how big to make the blocks.  If you’re storing only tiny bits of data, more blocks makes more sense.  If however you’ve got some fairly big lumps of data, you might want bigger blocks to reduce overheads.

I ended up dividing the sectors into 256-byte blocks.  I figured that was a nice round (binary sense) figure to work with.  At the moment, we have 16 bytes of configuration data, so I can do with a lot less, but I expect this to grow.  The blocks will need a header to tell you whether or not the block is being used.  Some checksumming is usually not a bad idea either, since that will clue you in to when the sector has worn out prematurely.  So some data in each block will be header data for our virtual EEPROM.

If we didn’t care about erase cycles, we could just make all blocks data blocks; however, it’d be wise to track wear so we can avoid erasing and attempting to use a depleted sector, so we need somewhere to record that.  256 bytes gives us enough space to stash an erase counter and a map of which blocks are in use within that sector.

So we’ll reserve the first block in the sector to act as this index for the entire sector.  This gives us enough room to have 16-bits worth of flags for each block stored in the index.  That gives us 63 blocks per sector for data use.

It’d be handy to be able to use this flash region for a few virtual EEPROMs, so we’ll allocate some space to give us a virtual ROM ID.  It is prudent to do some checksumming, and the STM32F4 has a CRC32 module, so in that goes, and we might choose to not use all of a block, so we should throw in a size field (8 bits, since the size can’t be bigger than 255).  If we pad this out a bit to give us a byte for reserved data, we get a header with the following structure:

      | 15 14 13 12 11 10  9  8 |  7  6  5  4  3  2  1  0 |
  +0  | CRC32 Checksum                                     |
  +2  | CRC32 Checksum (continued)                         |
  +4  | ROM ID                  | Block Index              |
  +6  | Block Size              | Reserved                 |

So that subtracts 8 bytes from the 256 bytes, leaving us 248 for actual program data. If we want to store 320 bytes, we use two blocks, block index 0 stores bytes 0…247 and has a size of 248, and block index 1 stores bytes 248…319 and has a size of 72.
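
In code, that’s an 8-byte header on the front of each 256-byte block.  A quick illustration with Python’s struct module (the byte order within each 16-bit word is a guess here; the authoritative layout is the C source linked at the end):

import struct

BLOCK_HEADER = struct.Struct('<LBBBB')   # CRC32, ROM ID, block index, size, reserved
VROM_BLOCK_SZ = 256
VROM_DATA_SZ = VROM_BLOCK_SZ - BLOCK_HEADER.size    # 248 bytes of payload

header = BLOCK_HEADER.pack(0xDEADBEEF, 1, 0, VROM_DATA_SZ, 0xFF)
assert len(header) == 8 and VROM_DATA_SZ == 248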

I mentioned there being a sector header, it looks like this:

      | 15 14 13 12 11 10  9  8 |  7  6  5  4  3  2  1  0 |
  +0  | Program Cycles Remaining                           |
  +2  | Program Cycles Remaining (continued)               |
  +4  | Program Cycles Remaining (continued)               |
  +6  | Program Cycles Remaining (continued)               |
  +8  | Block 0 flags                                      |
  +10 | Block 1 flags                                      |
  +12 | Block 2 flags                                      |

No checksums here, because this header is constantly changing.  We can’t re-write a CRC without erasing the entire sector, and we don’t want to do that unless we have to.  The flags for each block are currently allocated as follows:

      | 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1 |  0     |
  +0  | Reserved                                      | In use |

When the sector is erased, all blocks show up as having all flags set to ones, so the flags are considered “inverted”.  When we come to use a block, we mark the “in use” bit with a zero, leaving the rest as ones.  When we obsolete a block, we mark its entire flags field as zeros.  We can set other bits here as we need for accounting purposes.

Thus we have now a format for our flash sector header, and for our block headers.  We can move onto the algorithm.

Step 3: The Code

This is the implementation of the above ideas.  Our code needs to worry about 3 basic operations:

  • reading
  • writing
  • erasing

This is good enough if the size of a ROM image doesn’t change (normal case).  For flexibility, I made my code so that it works crudely like a file, you can seek to any point in the ROM image and start reading/writing, or you can blow the whole thing away.

Constants

It is bad taste to leave magic numbers everywhere, so constants should be used to represent some quantities:

  • VROM_SECT_SZ=16384:
    The virtual ROM sector size in bytes.  (Those watching Codec2 Subversion will note I cocked this one up at first.)
  • VROM_SECT_CNT=3:
    The number of sectors.
  • VROM_BLOCK_SZ=256:
    The size of a block
  • VROM_START_ADDR=0x08004000:
    The address where the virtual ROM starts in Flash
  • VROM_START_SECT=1:
    The base sector number where our ROM starts
  • VROM_MAX_CYCLES=10000:
    Our maximum number of program-erase cycles

Our programming environment may also define some, for example UINTx_MAX.

Derived constants

From the above, we can determine:

  • VROM_DATA_SZ = VROM_BLOCK_SZ - sizeof(block_header):
    The amount of data per block.
  • VROM_BLOCK_CNT = VROM_SECT_SZ / VROM_BLOCK_SZ:
    The number of blocks per sector, including the index block
  • VROM_SECT_APP_BLOCK_CNT = VROM_BLOCK_CNT - 1
    The number of application blocks per sector (i.e. total minus the index block)
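
Putting numbers to those, with the figures used here:

VROM_SECT_SZ = 16384
VROM_BLOCK_SZ = 256
BLOCK_HEADER_SZ = 8    # per the block header layout above

VROM_DATA_SZ = VROM_BLOCK_SZ - BLOCK_HEADER_SZ     # 248 bytes of data per block
VROM_BLOCK_CNT = VROM_SECT_SZ // VROM_BLOCK_SZ     # 64 blocks per sector
VROM_SECT_APP_BLOCK_CNT = VROM_BLOCK_CNT - 1       # 63 blocks left for data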

CRC32 computation

I decided to use the STM32’s CRC module for this, which takes its data in 32-bit words.  There’s also the complexity of checking the contents of a structure that includes its own CRC.  I played around with Python’s crcmod module, but couldn’t find an arrangement that would let the stored CRC stay in place while checking.

So I copy the entire block, headers and all to a temporary copy (on the stack), set the CRC field to zero in the header, then compute the CRC. Since I need to read it in 32-bit words, I pack 4 bytes into a word, big-endian style. In cases where I have less than 4 bytes, the least-significant bits are left at zero.
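
For reference, what I was chasing with crcmod: the STM32F4’s CRC unit behaves (as far as I can tell) like CRC-32/MPEG-2, i.e. polynomial 0x04C11DB7, initial value 0xFFFFFFFF, no reflection and no final XOR, fed 32-bit words.  Packing the bytes big-endian into words and running them through that is the same as running the byte stream straight through, so a host-side check might look like this sketch:

import crcmod

# Matches the STM32F4 hardware CRC unit (CRC-32/MPEG-2), assuming the bytes
# are packed big-endian into the 32-bit words fed to the peripheral.
stm32_crc = crcmod.mkCrcFun(0x104C11DB7, initCrc=0xFFFFFFFF,
                            rev=False, xorOut=0)

def block_crc(block):
    # CRC of a block with its 4-byte CRC field zeroed, padded out to whole
    # 32-bit words with the trailing (least-significant) bytes left at zero.
    scratch = bytearray(block)
    scratch[0:4] = b'\x00\x00\x00\x00'
    scratch += b'\x00' * (-len(scratch) % 4)
    return stm32_crc(bytes(scratch))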

Locating blocks

We identify each block in an image by the ROM ID and the block index.  We need to search for these when requested, as they can be located literally anywhere in flash.  There are probably cleverer ways to do this, but I chose the brute force method.  We cycle through each sector and block, see if the block is allocated (in the index), see if the checksum is correct, see if it belongs to the ROM we’re looking for, then look and see if it’s the right index.

Reading data

To read from the above scheme, having been told a ROM ID (rom), a start offset and a size, the latter two being in bytes, and given a buffer we’ll call out, we first need to translate the start offset to a block index and a block offset.  This is simple integer division and modulus.

We’ll probably only read part of the first and last blocks of our read; the rest we’ll read in as entire blocks.  The block offset is only relevant for the first block.

So we start at the block we calculate to have the start of our data range.  If we can’t find it, or it’s too small, then we stop there, otherwise, we proceed to read out the data.  Until we run out of data to read, we increment the block index, try to locate the block, and if found, copy its data out.
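Roughly, the read path looks like this, building on the sketches above (the real function in the repository may differ in signature and detail):

    /* Read `size` bytes of ROM `rom` starting at `offset` into `out`.
     * Returns the number of bytes actually read. */
    static int vrom_read(uint8_t rom, uint32_t offset, void *out, uint32_t size)
    {
        uint8_t *dst   = out;
        uint8_t  idx   = offset / VROM_DATA_SZ;   /* block holding the first byte */
        uint32_t boff  = offset % VROM_DATA_SZ;   /* offset within that block     */
        uint32_t total = 0;

        while (size) {
            const struct vrom_block *b = vrom_find_block(rom, idx);
            uint32_t chunk;

            if (!b || b->size <= boff)
                break;                            /* block missing or too small   */

            chunk = b->size - boff;
            if (chunk > size)
                chunk = size;
            memcpy(dst, &b->data[boff], chunk);

            dst   += chunk;
            total += chunk;
            size  -= chunk;
            boff   = 0;        /* the offset only applies to the first block */
            idx++;
        }
        return (int)total;
    }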

Writing and Erasing

Writing is a similar affair.  We look for each block; if we find one, we “overwrite” it by copying the old data to a temporary buffer, copying our new data in over the top, then marking the old block as obsolete before writing the new one out with a new checksum.

The trickery is in invoking the wear-levelling algorithm on an as-needed basis.  We mark a block obsolete by setting its header fields to zero, but when we run out of free blocks, we go looking for sectors that are full of obsolete blocks waiting to be erased.  When we encounter a sector that has been erased, we write a new header at the start and proceed to use its first data block.

In the case of erasing, we don’t bother writing anything out, we just mark the blocks as obsolete.
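Put together, the write and erase paths look roughly like this.  vrom_mark_obsolete(), vrom_alloc_block() and vrom_program_block() are assumed helpers: the allocator is where the wear levelling lives, erasing a sector full of obsolete blocks only when it has run out of free ones.  Offset and size handling are simplified for the sketch.

    extern void vrom_mark_obsolete(const struct vrom_block *blk);
    extern struct vrom_block *vrom_alloc_block(void);
    extern int vrom_program_block(struct vrom_block *dest, const struct vrom_block *src);

    /* Replace (or create) one block of a ROM image. */
    static int vrom_write_block(uint8_t rom, uint8_t idx,
                                const void *data, uint16_t size)
    {
        const struct vrom_block *old = vrom_find_block(rom, idx);
        struct vrom_block blk;

        memset(&blk, 0xff, sizeof(blk));                    /* start from "erased" */
        if (old)
            memcpy(blk.data, old->data, sizeof(blk.data));  /* keep the old data   */
        memcpy(blk.data, data, size);                       /* overlay new data    */

        blk.rom   = rom;
        blk.idx   = idx;
        blk.size  = size;
        blk.crc32 = 0;
        blk.crc32 = vrom_block_crc(&blk);                   /* fresh checksum      */

        if (old)
            vrom_mark_obsolete(old);      /* zero the old block's header first */

        return vrom_program_block(vrom_alloc_block(), &blk);
    }

    /* Erasing an image just marks its blocks obsolete; flash sectors are
     * only physically erased later, when the allocator needs the space. */
    static void vrom_erase(uint8_t rom)
    {
        const struct vrom_block *b;
        uint8_t idx = 0;

        while ((b = vrom_find_block(rom, idx++)) != NULL)
            vrom_mark_obsolete(b);
    }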

Implementation

The full C code is in the Codec2 Subversion repository.  For those who prefer Git, I have a git-svn mirror (yes, I really should move it off that domain).  The code is available under the Lesser GNU General Public License v2.1 and may be ported to run on any CPU you like, not just ST’s.

Experimentations with supercaps

The Problem

I’ve been running a station from the bicycle for some time now and I suppose I’ve tried a few different battery types on the station.

Originally I ran 9Ah 12V gel cells, which work fine for about 6 months, then the load of the radio gets a bit much and I find myself taking two with me on a journey to work because one no longer lasts the day.  I replaced these with a 40Ah Thundersky LiFePO4 pack which I bought from EVWorks; while good, it weighed 8kg!  That’s a lot lighter than an equivalent lead-acid, gel-cell or AGM battery, but it’s still a hefty load for a bicycle.

At the time that was the smallest I could get.  Eventually I found a mob that sold 10Ah packs.  These particular cells were made by LiFeBatt, and while pricey, they’ve pretty much paid for themselves.  (I’d have bought and disposed of about 16 gel-cell batteries in this time at $50 each, versus $400 for one of these.)  These are what I’ve been running now since about mid 2011, and they’ve been pretty good for my needs.  They handle the load of the FT-857 okay on 2m FM, which is what I use most of the time.

A week or two back though, I was using one of these packs outside with the home base in a “portable” set-up with my FT-897D.  Tuned up on the 40m WICEN net on 7075kHz, a few stations reported that I had scratchy audio.  Odd: the radio was known to be good, and I’d operated from the back deck before without problems, so what changed?

The one and only thing different is I was using one of these 10Ah packs.  I’ve had fun with RF problems on the bicycle too.  On transmit, the battery was hovering around the 10.2V mark, perhaps a bit low.  Could it be the radio is distorting on voice peaks due to input current starvation?  I tried after the net swapping it for my 40Ah pack, which improved things.  Not totally cleared up, but it was better, and the pack hadn’t been charged in a while so it was probably a little low too.

The idea

I thought about the problem for a bit.  SSB requires full power on voice peaks.  For a 100W radio, that’s a load of around 20A right there.  Batteries don’t like this.  Perhaps there was a bit of internal resistance from age and the nature of the cells?  Could I do something to give it a little hand?

Supercapacitors are basically very high-capacity capacitors with a low breakdown voltage, normally in the order of a few volts, and capacitances of a farad or more.  They are good for temporarily storing charge that needs to be dumped into a load in a hurry.  Could this help?
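As a rough back-of-envelope (my own numbers, not measurements from the pack): the voltage droop on a capacitor asked to supply an extra current for a short burst is

    dV = dI x t / C
       = 5 A x 0.02 s / 1 F
       = 0.1 V

so a 1F capacitor across each cell can plausibly smooth out syllable-length transients, though it won’t hold the pack up through a sustained over-current.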

My cells are in a series bank of 4: ~3.3V/cell with 4 cells gives me 13.2V.  There’s a battery balancer already present.  If a cell gets above 4V, that cell is toast, so the balancer is there to try to prevent that from happening.  I could buy these 1F 5.5V capacitors for only a few dollars each, so I thought, “what the hell, give it a try”.  I don’t have much information on them other than that Elna Japan made them.  The plan was to make some capacitor “modules” that would hook in parallel with each cell.

My 13.2V battery pack, out of its case

Supercapacitors

For my modules, the construction was simple, two reasonably heavy gauge wires tacked onto the terminals, the whole capacitor then encased in heatshrink tubing and ring lugs crimped to the leads. I was wondering whether I should solder a resistor and diode in parallel and put that in series with the supercap to prevent high in-rush current, but so far that hasn’t been necessary.

The re-assembled pack

I’ve put the pack back together and so far, it has charged up and is ready to face its first post-retrofit challenge.  I guess I’ll be trying out the HF station tomorrow to see how it goes.

Assembled pack

The Verdict

It’s not a complete solution to the RF feedback, but it seems to help in other ways.  I did a quick test in the driveway first with the standard Yaesu hand mic and then with the headset.  The headset still faces interference problems on HF, but I can wind the power up to about 30~40W now instead of 20W.

More pondering to come but we’ll see what the other impacts are.

A horn for the bicycle

I’ve been riding on the road now for some years, and while I normally try to avoid it, I do sometimes find myself riding on the road itself rather than on the footpath or bicycle path.

Most of the time, the traffic is fine.  I’m mindful of where everyone is, and there aren’t any problems, but I have had a couple of close calls from time to time.  Close calls that have me saying “ode for a horn”.

By law we’re required to have a bell on our bikes.  No problem there, I have a mechanical one which is there purely for legal purposes.  If I get pulled over by police, and they ask, I can point it out and demonstrate it.  Requirement met?  Tick to that.

It’s of minimal use with pedestrians, and utterly useless in traffic.

Early on with my riding I developed a lighting system which included indicators.  Initially this was silent; I figured I’d see the lights flashing, but after a few occasions forgetting to turn the indicators off, I fitted a piezo buzzer.  This was an idea inspired by the motorcycles ridden by Australia Post contractors, which have a very audible buzzer.  Jaycar sell an 85dB buzzer that’s waterproof, overkill in the audio department but fit for purpose.  It lets me know I have indicators on and alerts people to my presence.

That is, if they equate the loud beep to a bicycle.  Some do not.  And of course, it’s still utterly useless on the road.

I figured a louder alert system was in order.  Something that I could adjust the volume on, but loud enough to give a pedestrian a good 30 seconds warning.  That way they’ve got plenty of time to take evasive action while I also start reducing speed.  It’s not that I’m impatient, I’ll happily give way, but I don’t want to surprise people either.  Drivers on the other hand, if they do something stupid it’d be nice to let them know you’re there!

My workplace looks after a number of defence bases in South-East Queensland, one of which has a railway crossing for driver training.  This particular boom gate assembly copped a whack from a lightning strike, which damaged several items of equipment, including the electronic “bells” on the boom gate itself.  These “bells” consisted of a horn speaker with a small potted PCB mounted to the back, which contained an amplifier and bell sound generator.  Apply +12V and the units would make a very loud dinging noise.  That’s the theory; in practice, all that happened was a TO-220 transistor got hot.  Either the board or the speaker (or both) was faulty.

It was decided these were a write-off, and after disassembly I soon discovered why: the voice coils in the horn speakers had been burnt out.  A little investigation, and I figured I could replace the blown out compression drivers and get the speakers themselves working again, building my own horn.

A concept formed: the horn would have two modes, a “bell” mode with a sound similar to a bicycle bell, and a “horn” mode for use in traffic.  I’d build the circuit in parts, the first being the power amplifier then interface to it the sound effect generator.

To make life easier testing, I also decided to add a line-in/microphone-in feature which would serve to debug construction issues in the power amplifier and add a megaphone function.  (Who knows, might be handy with WICEN events.)

Replacing the compression drivers

Obviously it’d be ideal to replace it with the correct part, but looking around, I couldn’t see anything that would fit the housing.  That, and what I did see, was more expensive than buying a whole new horn speaker.

There was a small aperture in the back, about 40mm in diameter.  The original drivers were 8 ohms, probably rated at around 30W, and had a convex diaphragm which matched the concave geometry in the back of the horn assembly.

Looking around, I saw these 2W mylar-cone speakers.  Not as good as a compression driver, but maybe good enough?  They were cheap enough to experiment with, so I bought two to try out.

I got them home, tacked some wires onto one of them and plugged it into a radio.  On its own, not very loud, but when I held it against the back of a horn assembly, the amplification was quite apparent.  Good enough to go further.  I did some experiments with how to mount the speakers to the assembly, which required some modifications to be made.

I soon settled on mounting the assembly to an aluminium case with some tapped holes for clamping the speaker in place.  There was ample room for a small amplifier which would be housed inside the new case, which would also serve as a means of mounting the whole lot to the bike.

Bell generator

I wasn’t sure what to use for this, I had two options: build an analogue circuit to make the effect, or program a microcontroller.  I did some experiments with an ATMega8L, did manage to get some sound out of it using the PWM output, but 8kB of flash just wasn’t enough for decent audio.

A Freetronics LeoStick proved to be the ticket.  32kB flash, USB device support, small form factor, what’s not to like?  I ignored the Arduino-compatible aspect and programmed the device directly.  Behind the novice-friendly pin names, they’re an ATMega32U4 with a 16MHz crystal.  I knocked up a quick prototype that just played a sound repeatedly.  It sounded a bit like a crowbar being dropped, but who cares, it was sufficient.

Experimenting with low-pass filters I soon discovered that a buffer-amp would be needed, as any significant load on the filter would render it useless.

A 2W power amplifier

Initially I was thinking along the lines of an LM386, but after reading the datasheet I soon learned that this would not cut it.  They are okay for 500mW, but not 2W.  I didn’t have any transistors on hand that would do it and still fit in the case, then I stumbled on the TDA1905.  These ICs are actually capable of 5W into 4 ohms if you feed them with a 14V supply.  With 9V they produce 2.5W, which is about what I’m after.

I bought a couple then set to work with the breadboard.  A little tinkering and I soon had one of the horn speakers working with this new amplifier.  Plugged into my laptop, I found the audio output to be quite acceptable, in fact turned up half-way, it was uncomfortable to sit in front of.

I re-built the circuit to try and make use of the muting feature.  For whatever reason, I couldn’t get this to work, but the alternate circuit provided a volume control which was useful in itself.

The pre-amplifier

For the line-level audio, there’s no need for anything more fancy than a couple of resistors to act as a passive summation of the left and right channels, however for a microphone and for the LeoStick, I’d need a preamp.  I grabbed a LM358 and plugged that into my breadboard alongside the TDA1905.

Before long, I had a microphone preamp working using one half of the LM358, based on a circuit I found.  I experimented with the resistor values and found I got reasonable amplification if I upped some of them to dial the gain back a little; otherwise I got feedback.

For the LeoStick, it already puts out 5V TTL, so a unity-gain voltage follower was all that was needed.  The second half of the LM358 provided this.  A passive summation network consisting of two resistors and a DC-blocking capacitor allowed me to combine these outputs for the TDA1905.

One thing I found necessary: the TDA1905 and LM358 misbehave badly unless there’s a decent-sized capacitor on the 9V rail.  I found a 330uF electrolytic helped, in addition to the datasheet-prescribed 100nF ceramics.

Power supply

Since I’m running on batteries with no means of generating power, it’s important that the circuit does not draw power when idle.  Ideally, the circuit should power on when I:

  • plug the USB cable in (for firmware update/USB audio)
  • toggle the external source switch
  • press the bell button

We also need two power rails: a 9V one for the analogue electronics, and a 5V one for the LeoStick.  An LM7809 and an LM7805 proved to be the easiest way to achieve this.

To allow software control of the power, an IRF9540N MOSFET is connected to the 12V input and supplies the LM7809.  Its gate pin is connected to a wired-OR bus.  The bell button and external source switch connect to this bus with signal diodes that pull down on the gate.

Two BC547s also have collectors wired up to this bus, one driven from the USB +5V supply, and the other from a pin on the LeoStick.  Pressing the Bell button would power the entire circuit up, at which point the LeoStick would assert its power on signal (turning on one of the BC547s) then sample the state of the bell button and start playing sound.  When it detects the button has been released, it finishes its playback and turns itself off by releasing the power on signal.
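The firmware side of that power-hold arrangement might look something like this; the pin assignments and helper names here are hypothetical, not taken from the actual project:

    /* Soft power-latch sketch for the ATMega32U4: hold our own power on
     * via a BC547 pulling the P-channel MOSFET's gate low, and release
     * it when we're done.  Pin choices are hypothetical. */
    #include <avr/io.h>

    #define POWER_HOLD  PD4   /* drives the BC547 base (assumed)          */
    #define BELL_BTN    PD0   /* bell button input, active low (assumed)  */

    extern void play_bell(void);
    extern void finish_playback(void);

    int main(void)
    {
        DDRD  |= _BV(POWER_HOLD);
        PORTD |= _BV(POWER_HOLD);        /* assert power-on: rail stays up  */
        PORTD |= _BV(BELL_BTN);          /* enable pull-up on the button    */

        while (!(PIND & _BV(BELL_BTN)))  /* button still held: keep ringing */
            play_bell();

        finish_playback();               /* let the current sound finish    */
        PORTD &= ~_BV(POWER_HOLD);       /* release: MOSFET switches off    */
        for (;;)
            ;                            /* wait for the rail to collapse   */
    }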

Sound effect generator

Earlier I had prototyped a bell generator, however it wasn’t much use as it just repeatedly made a bell noise regardless of the inputs.  To add insult to injury, I had lost the source code I used.  I had a closer look at the MCU datasheet, deciding to start from a clean slate.

The LeoStick provides its audio on pin D11, which is wired up to Port B Pin 7.  Within the chip, two possible timers hook up to this pin: Timer 0, which is an 8-bit timer, and Timer 1, which is 16 bits.  Both are fed from the 16MHz system clock.  The bit depth affects the PWM carrier frequency we can generate: the higher the resolution, the slower the PWM runs.  You want the PWM frequency as high as possible, ideally well above 20kHz so that it’s not audible in the audio output, and obviously well above the audio sampling rate.

At 16MHz, a 16-bit timer’s PWM carrier would barely exceed 240Hz, which is utterly useless for audio.  A 10-bit timer fares better at around 15kHz; older people may not hear that, but I certainly can.  That leaves us with 8 bits, which gets us up to 62kHz.  There’s no point using Timer 1 if we’re only going to use 8 bits of it, so we might as well use Timer 0.
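For the record, setting Timer 0 up for 8-bit fast PWM on OC0A (PB7) at the full 62.5kHz looks something like this; the register settings are from the ATMega32U4 datasheet, but this exact snippet is my own sketch rather than the project’s code:

    #include <avr/io.h>

    /* 8-bit fast PWM on OC0A (PB7), no prescaler: 16MHz / 256 = 62.5kHz carrier. */
    static void pwm_audio_init(void)
    {
        DDRB  |= _BV(PB7);                               /* audio pin as output      */
        TCCR0A = _BV(COM0A1) | _BV(WGM01) | _BV(WGM00);  /* non-inverting fast PWM   */
        TCCR0B = _BV(CS00);                              /* clk/1                    */
        OCR0A  = 0x80;                                   /* idle at mid-rail         */
    }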

Some of you familiar with this chip may know of Timer 4, which is a high-speed 10-bit timer fed by a separate 64MHz PLL.  It’s possible to do better quality audio from here, either running at 10-bits with a 62kHz carrier, or dropping to 8-bits and ramping the frequency to 250kHz.  Obviously it’d have been nice, but I had already wired things up by this stage, so it was too late to choose another pin.

Producing the output voltage is only half the equation though: once started, the PWM pin will just output a steady stream of pulses, which, when low-passed, produces a DC offset.  In order to play sound, we need to continually update the timer’s Output Compare register with each new sample at a steady rate.

The most accurate way to do this is to use another timer.  Timer 3 is another 16-bit timer unit, with just one output compare pin available (on Port C).  Since we don’t need any external output from it, it’s an ideal candidate, so it gets the job of updating the PWM output compare value with new samples.
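That boils down to a CTC interrupt that shovels the next sample into OCR0A at the sample rate; next_sample() here stands in for the buffered playback described below:

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <stdint.h>

    extern uint8_t next_sample(void);

    /* Timer 3 in CTC mode, interrupting at the audio sample rate. */
    static void sample_timer_init(uint16_t sample_rate)
    {
        TCCR3A = 0;
        TCCR3B = _BV(WGM32) | _BV(CS30);            /* CTC, clk/1               */
        OCR3A  = (uint16_t)(16000000UL / sample_rate - 1);
        TIMSK3 = _BV(OCIE3A);                       /* enable compare interrupt */
        sei();
    }

    ISR(TIMER3_COMPA_vect)
    {
        OCR0A = next_sample();                      /* hand the PWM its next sample */
    }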

Timer 1 is connected to pins that drive two of the three LEDs on the LeoStick, with Timer 4 driving the remaining one, so if I wanted, I could have the LEDs fade in and out instead of just blinking.  However, my needs are basic: I just need something to debounce switches and visibly blink LEDs, so I use Timer 1 with a nice long period to give me a 10Hz tick.

Here is the source code.  I’ll add schematics and other notes to it with time, but the particular bits of interest for those wanting to incorporate PWM-generated sound in their AVR projects are the interrupt routine and the sound control functions.

To permit gapless playback, I define two buffers which I alternate between, so while one is being played back, the other can be filled up with samples.  I define these on line 139 with the functions starting at line 190.  The interrupt routine that orchestrates the playback is at line 469.

When sound is to be played, the first thing that needs to happen is for the initial buffer to be loaded with samples using the write_audio function.  This can either read from a separate buffer in RAM (e.g. from USB) or from program memory.  One of the options permits looping of audio.  Having loaded some initial data in, we can then call start_audio to set up the two timers and get audio playback rolling.  start_audio needs the sample rate to configure the sample rate timer, and can accept any sample rate that is a factor of 16MHz (so 8kHz, 16kHz up to 32kHz).
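The gist of the double-buffering is no more than this; buffer size and names are illustrative, and the real code in the repository also handles looping and end-of-sound:

    #include <stdint.h>

    #define AUDIO_BUF_SZ 64

    static volatile uint8_t audio_buf[2][AUDIO_BUF_SZ];  /* ping-pong buffers       */
    static volatile uint8_t play_buf;                    /* buffer being played     */
    static volatile uint8_t play_pos;                    /* position in that buffer */
    static volatile uint8_t buf_free[2];                 /* flags: safe to refill?  */

    /* Called from the sample-rate interrupt. */
    uint8_t next_sample(void)
    {
        uint8_t s = audio_buf[play_buf][play_pos++];

        if (play_pos >= AUDIO_BUF_SZ) {   /* drained this buffer        */
            buf_free[play_buf] = 1;       /* hand it back for refilling */
            play_buf ^= 1;                /* flip to the other buffer   */
            play_pos  = 0;
        }
        return s;
    }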

The audio in this application is statically compiled in, taking the form of an array of uint8_t values in PROGMEM.

Creating the sounds

I initially had a look around to see if I could get a suitable sound effect.  This proved futile: I was ideally looking for a simple, openly-licensed audio file.  Lots of places offered something, but then wanted you to sign up or pay money.  Fine, I can understand the need to make a quid, and if I were doing this a lot, I’d pay up, but this is a once-off.

Eventually, I found some recordings which were sort of what I was after, but not quite.  So I downloaded these then fired up Audacity to have a closer look.

The bicycle bell

Bicycle bells have a very distinctive sound to them, and are surprisingly complicated.  I initially tried to model it as an exponentially decaying sinusoid of different frequencies, but nothing sounded quite right.

The recording I had told me that the fundamental frequency was just over 2kHz.  Moreover, the envelope was amplitude-modulated by a second sinusoid, this one at about 15Hz.  As soon as I plugged this second term in, things sounded better.  This script was the end result.  The resulting bell sounds like this:

So, somewhat bell-like.  To reduce the space needed, I use a sample rate of 6.4kHz.  I did try a 4kHz sample rate, but was momentarily miffed at the result until I realised what was going on: the bell’s fundamental was above the Nyquist frequency (2kHz) at a 4kHz sample rate.  6.4kHz is the minimum practical rate that reproduces the audio.

I used Audacity to pick a point in the waveform for looping purposes, to make it sound like a bell being repeatedly struck.
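The original synthesis was a Python script; a rough C re-creation of the same idea (an exponentially decaying sinusoid just over 2kHz, with its envelope modulated at about 15Hz, written out as unsigned 8-bit samples) would be something like the following.  The exact frequencies, decay rate and modulation depth here are my guesses:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TWO_PI  6.283185307179586
    #define FS      6400.0   /* sample rate, Hz             */
    #define F_BELL  2100.0   /* fundamental, just over 2kHz */
    #define F_MOD   15.0     /* envelope modulation, Hz     */
    #define DECAY   4.0      /* exponential decay, 1/s      */

    int main(void)
    {
        /* One second of raw unsigned 8-bit samples on stdout. */
        for (int n = 0; n < (int)FS; n++) {
            double t   = n / FS;
            double env = exp(-DECAY * t) * (0.7 + 0.3 * sin(TWO_PI * F_MOD * t));
            double s   = env * sin(TWO_PI * F_BELL * t);

            putchar((uint8_t)(128.0 + 127.0 * s));
        }
        return 0;
    }

Piping the raw output into something like aplay -f U8 -r 6400 -c 1 gives a quick listen without involving the hardware at all.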

The horn

I wanted something that sounded a little gutsy, like an air-horn on a truck.  Once again, I hit the web, and found a recording of a train horn.  Close enough, but not long enough, and a bit noisy.  However, opening it up in Audacity and doing a spectrum analysis, I saw there were about 5 tones involved.  I plugged these straight into a Python script and decided to generate those tones directly.  Using a raised-cosine filter to shape the envelope at the start and end, I soon had my horn effect.  This script generates the horn.  The audio sounds like this:

Using other sound files

If you really wanted, you could use your own sound recordings.  Just keep in mind the constraints of the ATMega32U4, namely that 32kB of flash has to hold both code and recordings.  An ATMega64 would do better.  The audio should be mono, 8-bit and unsigned, with as low a sample rate as you can get away with.  (6.4kHz proved to be sufficient for my needs.)

Your easiest bet would be to either figure out how to read WAV files (in Python: wave module), or alternatively, convert to raw headerless audio files, then code up a script that reads the file one byte at a time. The Python scripts I’ve provided might be a useful starting point for generating the C files.
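For instance, a throwaway helper along those lines (mine, not part of the project) that turns raw headerless unsigned 8-bit audio on stdin into a C array suitable for PROGMEM:

    #include <stdio.h>

    int main(void)
    {
        int c, n = 0;

        printf("#include <stdint.h>\n");
        printf("#include <avr/pgmspace.h>\n\n");
        printf("const uint8_t sound_data[] PROGMEM = {\n");
        while ((c = getchar()) != EOF) {
            printf("%s0x%02x,", (n % 12) ? " " : "    ", c);
            if (++n % 12 == 0)
                printf("\n");
        }
        printf("\n};\n\nconst uint16_t sound_len = %d;\n", n);
        return 0;
    }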

Alternatively, you can try interfacing an SD card and embedding a filesystem driver and audio file parser (not sure about WAVE, but Sun Audio is easily parsed); this is left as an exercise for the adventurous.

Finishing up

I’ll put schematics and pictures up soonish.  I’m yet to try mounting the whole setup, but so far the amplifier is performing fine on the bench.