Nov 20 2016

The Yaesu FT-897D has the de-facto standard 6-pin Mini-DIN data jack on the back into which you can plug a digital modem.  Amongst the pins it provides is a squelch status pin, and in the past I’ve tried using that to drive (via transistors) the carrier detect pin on various computer interfaces so the modem can detect when a signal is incoming.

The FT-897D is fussy, however: any load at all pulling this pin down and you get no audio.  Any load.  One really must be careful about that.

Last week when I tried the UDRC-II, I hit the same problem.  I was able to prove it was the UDRC-II by constructing a crude adapter cable that hooked up to the DB15-HD connector and converted it to Mini-DIN6: by avoiding the squelch status pin, I avoided the problem.

One possible solution was to cut the supplied Mini-DIN6 cable open, locate the offending wire and cut it.  Not a solution I relished.  The other was to try and fix the UDRC-II.

Discussing this on the list, it was suggested by Bryan Hoyer that I use a 4.7k pull-up resistor on the offending pin to 3.3V.  He provided a diagram that indicated where to find the needed signals to tap into.

With that information, I performed the following modification.  A 1206 4.7k resistor is tacked onto the squelch status pin, and a small wire run from there to the 3.3V pin on a spare header.

UDRC-II modification for Yaesu FT-897D

I’m in two minds whether this should be a diode instead: just in case a radio asserts +12V on this line, I don’t want +12V frying the SoC in the Raspberry Pi.  On the other hand, this is working; it isn’t “broke”.

Doing the above fixed the squelch drive issue and now I’m able to transmit and receive using the UDRC-II.  Many thanks to Bryan Hoyer for pointing this modification out.

Nov 12 2016

So, recently, the North West Digital Radio group generously donated a UDRC II radio control board in thanks for my initial work on an audio driver for the Texas Instruments TLV320AIC3204 (yes, a mouthful).

This board looks like it might support the older Pi model B I had, but I thought I’d play it safe and buy the later revision, so I bought version 3 of the Pi and the associated 7″ touch screen.  Thus, an order went to RS for a whole pile of parts, including one Raspberry Pi3 computer, a blank 8GB MicroSD card, a power supply, the touch screen kit and a case.

Fitting the UDRC

To fit the UDRC, the case will need some of the plastic cut away: a rectangular section out of the main body and a similarly sized portion out of the back cover.

Modifications to the case

When assembled, the cut-away section will allow the DB15-HD and Mini-DIN6 connectors to protrude out slightly.

Case assembled with modifications

The UDRC needs some minor modifications too for the touch screen.  Probe around, and you’ll find a source of 5V on one of the unpopulated headers.  You’ll want to solder a two-pin header to here and hook that to the LCD control board using the supplied jumper leads.  If you’ve got one, use a right-angled header, otherwise just bend a regular one like I did.

5V supply for the LCD on the UDRC

You’ll note I’ve marked the DB15-HD with a warning: a monitor does NOT plug in here.

From here, you should be ready to load up a SD card.  NWDR recommend the use of Compass Linux, which is a Raspbian fork configured for use with the UDRC.  I used the lite version, since it was smaller and I’m comfortable with command lines.

Configuring screen rotation

If you try to boot your freshly prepared SD card, the first thing you’ll notice is that the screen is upside-down.  Clearly a few people didn’t communicate with each other about which way was up on this thing.

Before you pull the SD card out, it is worth mounting the first partition on the SD card and editing config.txt in the root directory of that partition. If doing this on a Windows computer, ensure your text editor respects Unix line endings! (Blame Microsoft. If you’re doing this on a Mac, Linux, BSD or other Unix-ish computer, you have nothing to worry about.)
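On a Linux box, that might look something like this (the device name is an assumption on my part; check lsblk or dmesg to see where your card reader shows up):

sudo mount /dev/mmcblk0p1 /mnt
sudo nano /mnt/config.txt    # or your editor of choice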

Add the following to the end of the file (or anywhere really):

# Rotate the screen the "right way up"
lcd_rotate=2

Now save the file, unmount the SD card, and put it in the Pi before assembling the case proper.

Setting up your environment

Now, if you chose the lite option like I did, there’ll be no GUI, and the touch aspect of the touchscreen is useless.  You’ll need a USB keyboard.

Log in as pi (password raspberry), run passwd to change your password, then run sudo -s to gain a root shell.

You might choose like I did to run passwd again here to set root‘s password too.

After that, you’ll want to install some software.  Your choice of desktop environment is entirely up to you; I prefer something lightweight and have been using FVWM for years, but there are plenty of choices in Debian, including the usual suspects (KDE, Gnome, XFCE…).

For the display manager, I’ll choose lightdm. We also need an on-screen keyboard. I tried a couple, including matchbox-keyboard and the rather ancient xvkbd. Despite its age, I found xvkbd to be the most usable.

Once you’ve decided what you want, run apt-get install with your list of packages, making sure to include xvkbd and lightdm in your list.  Other applications I included here were network-manager-gnome, qasmixer, pasystray, stalonetray and gkrellm.
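As an example, my selection above boils down to something like this, run from the root shell opened earlier (swap the desktop bits for whatever you chose):

apt-get update
apt-get install lightdm xvkbd fvwm network-manager-gnome \
        qasmixer pasystray stalonetray gkrellm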

Enabling the on-screen keyboard in lightdm

Having installed lightdm and xvkbd, you can now configure lightdm to enable the accessibility options.

Open up /etc/lightdm/lightdm-gtk-greeter.conf, look for the line show-indicators and tack ;~a11y on the end.

Further down, look for the commented-out keyboard setting and change that to keyboard=xvkbd. Save and close the file, then run /etc/init.d/lightdm restart.
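For reference, after those two edits the relevant part of my lightdm-gtk-greeter.conf looked something like the following; your existing indicator list will differ, the important bits are the ;~a11y on the end and the keyboard line.

[greeter]
show-indicators=~language;~session;~power;~a11y
keyboard=xvkbd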

You should find yourself staring at the log-in screen, and lo and behold, there should be a new icon up the top-right. Tapping it should bring up a 3 line menu, the bottom of which is the on-screen keyboard.

On-screen keyboard in lightdm

The button marked Focus is what you hit to tell the keyboard which application is to receive the keyboard events.  Tap that, then the application you want.  To log in, tap Focus then the password field.  You should be able to tap your password in followed by either the Return button on the virtual keyboard or the Log In button on the form.

Making FVWM touch-friendly

I have a pretty old FVWM configuration that has evolved over the last 10 years, built around keyboard-centric operation and screen real-estate preservation.  This configuration mainly needed two changes:

  • Menus and title bar text enlarged to make the corresponding UI elements finger-friendly
  • Adjusting the size of the FVWM BarButtons to suit the 800×480 display

Rather than showing how to do it from scratch, I’ll just link to the configuration tarball, which you are welcome to play with.  It uses xcalendar, which isn’t in the Debian repositories any more but is available on Gentoo mirrors and can be built from source (you’ll want to install xutils-dev for xmake); stalonetray and gkrellm are both in the standard Debian repositories.
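For the curious, the kinds of tweaks involved look something like this; it’s just a sketch, the exact fonts and geometry in the tarball differ.

# Enlarge menu and window title fonts so they're finger-friendly
MenuStyle * Font "xft:Sans:Bold:size=16"
Style * Font "xft:Sans:size=14"

# Size the BarButtons (FvwmButtons) panel to suit the 800x480 screen
*BarButtons: Geometry 800x60+0-0
*BarButtons: Font "xft:Sans:size=12"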

FVWM on the Raspberry Pi

Enabling the right-click

This took a bit of hunting to figure out.  There is a method that works with Debian Wheezy which allows right-clicks by way of long presses, but this broke in Jessie, and the 2016-05-23 release of Compass Linux is built on the latter.  So another solution is needed.

Philipp Merkel, however, wrote a little daemon called twofing.  Once installed, doing a right click is simply a two-fingered tap on the screen; there’s support for other two-fingered gestures such as pinching and rotation as well.  It is available on Github, and I have forked it, adding some udev rules and scripts to integrate it into the Raspberry Pi.

The resulting Debian package is here.  Download the .deb, run dpkg -i on it, and then re-start the Raspberry Pi (or you can try running udevadm trigger and re-starting X).  The udev rules should create a /dev/twofingtouch symbolic link and the installed Xsession.d/Xreset.d scripts should take care of starting it with X and shutting it down afterwards.
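In other words, something along these lines (the .deb filename is a guess on my part; substitute whatever you downloaded):

dpkg -i twofing_*.deb
udevadm trigger
/etc/init.d/lightdm restart    # or just reboot the Pi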

Having done this, when you log in you should find that twofing is running, and that right clicks can be performed using a two-fingered prod.

Finishing up

Having done the configuration, you should now have a usable workhorse for numerous applications.  The UDRC shows up as a second sound card and is accessible via ALSA.  I haven’t tried it out yet, but it at least shows up in the mixer application, so the signs are there.  I’ll be looking to add LinBPQ and FreeDV into the mix yet, to round the software stack off to make this a general purpose voice/data radio station for emergency communications.

Nov 06 2016

Sometimes, it is desirable to have a TLS-based VPN tunnel for those times when you’re stuck behind an oppressive firewall and need to have secure communications to the outside world.  Maybe you’re visiting China, maybe you’re making an IoT device and don’t want to open your customers’ networks to world+dog by making your device easy to compromise (or have it pick on Brian Krebs).

OpenVPN is able to share a port with a non-OpenVPN server.  When a tunnel is established, it looks almost identical to HTTPS traffic because both use TLS.  The only dead giveaway would be that the OpenVPN session lasts longer, but then again, in this day of websockets and long polling, who knows how valid that assumption will be?

The lines needed to pull this magic off?  Here, we have sniproxy listening on port 65443. You can use nginx, Apache, or any other HTTPS web server here.  It need only listen on the IPv4 loopback interface (127.0.0.1), since all connections will come from OpenVPN.

port 443
port-share localhost 65443
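If you went with nginx rather than sniproxy, the corresponding listener might look something like this (certificate paths here are placeholders, not from my actual set-up):

server {
    listen 127.0.0.1:65443 ssl;
    ssl_certificate     /etc/ssl/private/example-fullchain.pem;
    ssl_certificate_key /etc/ssl/private/example-privkey.pem;
    root /var/www/html;
}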

There’s one downside.  OpenVPN will not listen on both IPv4 and IPv6.  In fact, it takes a ritual sacrifice to get it to listen on an IPv6 socket at all.  On UDP, it’s somewhat understandable, and yes, they’re working on it.  On TCP, it’s inexcusable: the problems that plague dual-stack sockets on UDP mostly aren’t a problem on TCP.

It’s also impossible to selectively monitor ports.  There’s a workaround however.  Two, in fact.  Both involve deploying a “proxy” to redirect the traffic.  So to start with, change that “port 443” to another port number, say 65444, and whilst you’re there, you might as well bind OpenVPN to loopback:

local 127.0.0.1
port 65444
port-share localhost 65443

Port 443 is now unbound and you can now set up your proxy.

Workaround 1: redirect using xinetd

The venerable xinetd superserver has a rather handy port redirection feature.  This has the bonus that the endpoint need not be on the same machine, or be dual-stack.


service https_port_forward
{
flags = IPv6               # Use AF_INET6 as the protocol family
disable = no               # Enable this service
type = UNLISTED            # Not listed in standard system file
socket_type = stream       # Use "stream" socket (aka TCP)
protocol = tcp             # Protocol used by the service
user = nobody              # Run proxy as user 'nobody'
wait = no                  # Do not wait for close, spawn a thread instead
redirect = 127.0.0.1 65444 # Where OpenVPN is listening
only_from = ::/0 0.0.0.0/0 # Allow world + dog
port = 443                 # Listen on port 443
}

Workaround 2: socat and supervisord

socat is a Swiss Army knife of networking, able to tunnel just about anything to anything else.  I was actually going to go down that route, but whilst I was waiting for socat and supervisord to install, I decided to explore xinetd‘s capabilities.  Both will do the job however.

There is a catch though: socat does not daemonise. So you need something that will start it automatically and re-start it if it fails. You might be able to achieve this with systemd; here I’ll use supervisord for that task.

The command to run is:
socat TCP6-LISTEN:443,fork TCP4:127.0.0.1:65444

and in supervisord you configure this accordingly:

[program:httpsredirect]
directory=/var/run
command=socat TCP6-LISTEN:443,fork TCP4:127.0.0.1:65444
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
autorestart=true
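Assuming supervisord is already running, something along these lines should pick up the new program definition:

supervisorctl reread
supervisorctl update
supervisorctl status httpsredirect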

Oct 13 2016

Well, today’s mail had a surprise.  Back about 6 years ago, I was sub-contracted to Jacques Electronics to help them develop some device drivers for their video intercom system.  At the time, they were using TI’s TLV320AIC3204 and system-on-modules based on the Freescale i.MX27 SoC.

No driver existed in the ALSA tree for this particular audio CODEC, and while TI did have one available under NDA, the driver was only licensed for use with a TI OMAP SoC.  I did what just about any developer would do: grabbed the closest-looking existing ALSA SoC driver, ripped it apart and started hacking.  Thus I wound up getting to grips with the I²S infrastructure within the i.MX27 and taming the little beast that is the TLV320AIC3204, producing this patch.

As it was a derivative work, the code was automatically going to be under the GPLv2 and thus was posted on the ALSA SoC mailing list for others to use.  This would help protect Jacques from any possible GPL infringement regarding the use of that driver.  I was able to do this as it was a clean-room implementation using only material in TI’s data sheet, and thus did not contain any intellectual property of my then-employer.

About that time I recall one company using the driver in their IP camera product, but the driver itself never made it into the mainline kernel.  About 6 months later, another driver for the TLV320AIC3204 and 3254 did get accepted there; I suspect this too was a clean-room implementation.

Fast forward to late August: I receive an email from Jeremy McDermond on behalf of Northwest Digital Radio.  They had developed the Universal Digital Radio Controller board for the Raspberry Pi series of computers based around this same CODEC chip.  Interestingly, it was the ‘AIC3204 driver that I developed all that time before that proved to be the code they needed to get the chip working.  The chip in question can be seen up the top-right corner of the board.

Universal Digital Radio Controller

Timely, as there’s a push at the moment within Brisbane Area WICEN Group to investigate possible alternatives to our aging packet radio system and software stack.  These boards, essentially being radio-optimised sound cards, have been used successfully for implementing various digital modes including AX.25 packet and D-Star, and could potentially do FreeDV and other digital modes.

So, looks like I’ll be chasing up a supplier for a newer Raspberry Pi board, and seeing what I can do about getting this device talking to the world.

Many thanks to the Northwest Digital Radio company for their generous donation! 🙂

Aug 13 2016

Sometimes I wonder.  Take this evening for example.

I recently purchased some microcontrollers to evaluate for a project, some Atmel ATTiny85s, because they have a rather nice PLL function which means they can do VHF-speed PWM, and some NXP LPC810s, because they happen to be the only DIP-package ARM chip on the market I know of.

The project I’m looking at is a re-work of my bicycle horn… the ATMega32U4 works well, but the LeoStick boards are expensive compared to a bare DIP MCU, and the wiring inside the original prototype is a mess.  I also never got USB working on them, so there’s no point in a USB-capable MCU.

I initially got ATMega1284s owing to the flash storage, but these, being 40-pin DIPs, are bigger than anticipated, and given they’ve got dual USARTs, lots of GPIOs and plenty of storage space, I figured I’d put them aside for another project.

What to use?  Well I have some AT89C2051s from way back (but no programmer for them), some ATTiny24As which I bought for my solar cluster project, an ATMega8L from another project, and a LeoStick (Arduino Leonardo clone).  The LeoStick I’m in the process of turning into a debugWire debugger so that I can figure out what the ADCs are doing in my cluster’s power controller (ATTiny24A).

I started building a programmer for the ‘2051s using my ATMega8L last weekend.  The MAX232 IC I grabbed for serial I/O was giving me gibberish, and today I confirmed it was misbehaving.  The board in general is misbehaving too, in that after flashing the MCU, it seems to stay in reset, so I’ve got more work to do.  If I got that going, I was thinking I could have PCM recordings in an I²C EEPROM and use port 1 on the ‘2051 with an R2R ladder DAC to play sound.  (These chips do not feature PWM.)

Thinking this morning, I thought the LPC810 might be worth a shot.  It only has 4kB of flash, half that of the ATTiny85, and doesn’t have as impressive PWM capabilities, but it’s good enough.  I really need about 16kB to store the waveforms in flash.  I do have some I²C EEPROMs, mostly <2kB ones sourced off old motherboards, but also a handful of 32kB ones that I had just bought especially for this… but then left behind on my desk at work.

I considered audio compression and, after experimenting with ADPCM-style techniques, came to the conclusion that I didn’t like the reduced audio quality.  It really sounded harsh.  (Okay, I realise 4 bits per sample is never going to win over the audiophiles!)

Maybe instead of PCM, I could do a crude polyphonic synthesizer?  My horn effect is in fact synthesized using a Python script: the same can be done in C, and the chip probably has the CPU grunt to do it.  It’d save the flash space as I’d be basically doing “poor man’s MIDI” on the thing.  Similar has been done before on lesser hardware.

I did some rough design of data structures.  I figured out a data structure that would allow me to store the state of a “voice” in 8 bytes, and could describe note and timing events in 8-byte blocks.  So in a 2kB EEPROM, I’d store 256 notes, and could easily accommodate 8 or 16 voices in RAM, provided the CPU could keep up at 30MHz.
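A rough sketch of what I have in mind is below; the field names and exact layout here are guesses for illustration, not the final design.

#include <stdint.h>

/* One synthesizer "voice": 8 bytes of live state. */
struct voice {
    uint16_t phase;      /* phase accumulator */
    uint16_t step;       /* phase increment per sample; sets the pitch */
    uint16_t remaining;  /* samples left before this note ends */
    uint8_t  amplitude;  /* current output level */
    uint8_t  flags;      /* active flag, envelope state, etc. */
};

/* One note/timing event: 8 bytes, so a 2kB EEPROM holds 256 of them. */
struct event {
    uint16_t delay;      /* ticks to wait after the previous event */
    uint16_t duration;   /* how long the note sounds, in ticks */
    uint16_t pitch;      /* phase step, or a note number to look up */
    uint8_t  voice;      /* which voice slot plays this note */
    uint8_t  velocity;   /* initial amplitude */
};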

So, I pull a chip out, slap it in my breadboard, and start hooking it up to power, and to my shiny new USB-TTL serial cable.  Fire up lpc21isp and, nothing, no response from the chip.  Huh?  Check wiring, probe around, still nothing.  Tried different baud rates, etc.  No dice.

This stubborn chip was not going to talk to lpc21isp.  Okay, let’s see if it’ll do SWD.  I dig out my STLink/V2 and hook that up.

OpenOCD reports no response from the device.

Great, maybe a dud chip.  After a good hour or so of fruitless poking and prodding, I pull it out of the breadboard and go to get another from the tube it came from when I notice “Atmel” written on the tube.

I look closer at the chip: it was an ATTiny85!  Different pin-out, different ISP procedure, and even if the .hex file had uploaded, it almost certainly would not have executed.

Swap the chip for an actual LPC810, and OpenOCD reports:

Open On-Chip Debugger 0.10.0-dev-00120-g7a8915f (2015-11-25-18:49)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 10 kHz
adapter_nsrst_delay: 200
Info : Unable to match requested speed 10 kHz, using 5 kHz
Info : Unable to match requested speed 10 kHz, using 5 kHz
Info : clock speed 5 kHz
Info : STLINK v2 JTAG v23 API v2 SWIM v4 VID 0x0483 PID 0x3748
Info : using stlink api v2
Info : Target voltage: 2.979527
Warn : UNEXPECTED idcode: 0x0bc11477
Error: expected 1 of 1: 0x0bb11477
in procedure 'init'
in procedure 'ocd_bouncer'

I haven’t figured out the cause of this yet; perhaps the ST programmer doesn’t like talking to a competitor’s part. It’d be nice to get SWD going, since single-stepping code and peering into memory really spoils a developer like myself. I try lpc21isp again.

Success!  I see a LED blinking, consistent with the demo .hex file I loaded.  Of course now the next step is to try building my own, but at least I can load code onto the device now.

Jul 22 2016

Seems spying on citizens is the new black these days, most government “intelligence” agencies are at it in one form or another. Then the big software companies feel left out, so they join in the fun as well, funneling as much telemetry into their walled garden as possible. (Yes, I’m looking at you, Microsoft.)

This is something I came up with this morning. It’s incomplete, but maybe I can finish it off at some point. I wonder if Cortana has a singing voice?

Partial lyrics for the ASIO/GCHQ/NSA song book

Jul 17 2016

A little trick I just learned today. First, the scenario.

I have a driver for a USART port, the USART on the ATMega32U4 in fact. It uses a FIFO interface to represent the incoming and outgoing data.

I have a library that also uses a FIFO to represent the data to be sent and received on a USART.

I have an application that will configure the USART and pipe between that and the library.

Now, I could have each component implement its own FIFOs, and have the main application shovel data between them. That could work. But I don’t want to do this. I could have the user pass in a pointer to the FIFOs in initialisation functions for the USART driver and the library, but I don’t want to store the extra pointers or incur the additional overheads.

Turns out, you can define a symbol somewhere, then alias it to make two variables appear in the same place. This is done with the alias attribute, and it requires that the target is defined with the nocommon attribute.

In the USART driver, I’ve simply declared the FIFOs as extern entities. This tells the C compiler what to expect in terms of data type but does not define a location in memory. Within the driver, use the symbols as normal.

/* usart.h */

/*! FIFO buffer for USART receive data */
extern struct fifo_t usart_fifo_rx;

/*! FIFO buffer for USART transmit data */
extern struct fifo_t usart_fifo_tx;

/* usart.c */
static void usart_send_next() {
  /* Ready to send next byte */
  int16_t byte = fifo_read_one(&usart_fifo_tx);
  if (byte >= 0)
    UDR1 = byte;
}

ISR(USART1_RX_vect) {
  fifo_write_one(&usart_fifo_rx, UDR1);
}

I can do the same for the protocol library.

/*! External FIFO to host UART */
extern struct fifo_t proto_host_uart_rx, proto_host_uart_tx;

/*! External FIFO to target UART */
extern struct fifo_t proto_target_uart_rx, proto_target_uart_tx;

Now how do I link the two? They go by different names. I create aliases, that’s how.

/*
 * FIFO buffers for target communications.
 */
static struct fifo_t target_fifo_rx __attribute__((nocommon));
static uint8_t target_fifo_rx_buffer[128];
extern struct fifo_t usart_fifo_rx __attribute__((alias ("target_fifo_rx")));
extern struct fifo_t proto_target_uart_rx __attribute__((alias ("target_fifo_rx")));

static struct fifo_t target_fifo_tx __attribute__((nocommon));
static uint8_t target_fifo_tx_buffer[128];
extern struct fifo_t usart_fifo_tx __attribute__((alias ("target_fifo_tx")));
extern struct fifo_t proto_target_uart_tx __attribute__((alias ("target_fifo_tx")));

/*
 * FIFO buffers for host communications.
 */
static struct fifo_t host_fifo_rx __attribute__((nocommon));
static uint8_t host_fifo_rx_buffer[128];
extern struct fifo_t proto_host_uart_rx __attribute__((alias ("host_fifo_rx")));
static struct fifo_t host_fifo_tx __attribute__((nocommon));
static uint8_t host_fifo_tx_buffer[128];
extern struct fifo_t proto_host_uart_tx __attribute__((alias ("host_fifo_tx")));

Now a quick check with nm should reveal these to all be at the same locations:

RC=0 stuartl@vk4msl-mb ~/projects/debugwire/firmware $ avr-nm leodebug.elf \
     | grep '\(proto_.*_uart_.x\|host_fifo_.x\|target_fifo_.x\)'
0080022c b host_fifo_rx
008001ac b host_fifo_rx_buffer
0080019c b host_fifo_tx
0080011c b host_fifo_tx_buffer
0080022c B proto_host_uart_rx
0080019c B proto_host_uart_tx
0080034c B proto_target_uart_rx
008002bc B proto_target_uart_tx
0080034c b target_fifo_rx
008002cc b target_fifo_rx_buffer
008002bc b target_fifo_tx
0080023c b target_fifo_tx_buffer

Apr 27 2016

It seems good old “common courtesy” is absent without leave, as is “common sense”. Some would say it’s been absent for most of my lifetime, but to me it seems particularly so of late.

In particular, where it comes to the safety of one’s self, and to others, people don’t seem to actually think or care about what they are doing, and how that might affect others. To say it annoys me is putting it mildly.

In February, I lost a close work colleague in a bicycle accident. I won’t mention his name, as I do not have his family’s permission to do so.

I remember arriving at my workplace early on Friday the 12th before 6AM, having my shower, and about 6:15 wandering upstairs to begin my work day. Reaching my desk, I recall looking down at an open TS-7670 industrial computer and saying out aloud, “It’s just you and me, no distractions, we’re going to get U-Boot working”, before sitting down and beginning my battle with the machine.

So much for the “no distractions” however. At 6:34AM, the office phone rings. I’m the only one there and so I answer. It was a social worker looking for “next of kin” details for a colleague of mine. Seems they found our office details via a Cab Charge card they happened to find in his wallet.

Well, the first thing I do is start scrabbling for the office directory to get his home number so I can pass the bad news on to his wife, only to find he’s only listed his mobile number. Great. After getting in contact with our HR person, we later discover there aren’t any contact details in the employee records either. He was around before such paperwork existed in our company.

Common sense would have dictated that one carry an “in case of emergency” number on a card in one’s wallet! At the very least let your boss know!

We find out later that morning that the crash happened on a particularly sharp bend of the Go Between Bridge, where the offramp sweeps left to join the Bicentennial bikeway. It’s a rather sharp bend that narrows suddenly, with handlebar-height handrails running along its length and “Bicycle Only” signs clearly signposted at each end.

Common sense and common courtesy would suggest you slow down on that bridge as a cyclist. Common sense and common courtesy would suggest you use the other side as a pedestrian. Common sense would question the utility of hand rails on a cycle path.

In the meantime our colleague is still fighting for his life, and we’re all holding out hope for him as he’s one of our key members. As for me, I had a network to migrate that weekend. Two of us worked the Saturday and Sunday.

Sunday evening, emotions hit me like a freight train as I realised I was in denial, and realised the true horror of the situation.

We later find out on the Tuesday, our colleague is in a very bad way with worst-case scenario brain damage as a result of the crash. From shining light to vegetable, he’d never work for us again.

Wednesday I took a walk down to the crash site to try and understand what happened. I took a number of photographs, and managed to speak to a gentleman who saw our colleague being scraped off the pavement. Even today, some months later, the marks on the railings (possibly from handlebar grips) and a large blood smear on the path itself, can still be seen.

It was apparent that our colleague had hit this railing at some significant speed. He wasn’t obese, but he certainly wasn’t small, and a fully grown adult does not ricochet off a metal railing and slide face-first for over a metre without some serious kinetic energy involved.

Common sense seems to suggest the average cyclist goes much faster than the 20km/h collision speed the typical bicycle helmet is designed for under AS/NZS 2063:2008.

I took the Thursday and Friday off as time-in-lieu for the previous weekend, as I was an emotional wreck. The following Tuesday I resumed cycling to work, and that morning I tried an experiment to reproduce the crash conditions. The bicycle I ride wasn’t that much different to his, both bikes having 29″ wheels.

From what I could gather that morning, it seemed he veered right just prior to the bend then lost control, listing to the right at what I estimated to be about a 30° angle. What caused that? We don’t know. It’s consistent with him dodging someone or something on the path — but this is pure speculation on my part.

Mechanical failure? The police apparently have ruled that out. There’s not much in the way of CCTV cameras in the area, plenty on the pedestrian side, not so much on the cycle side of the bridge.

Common sense would suggest relying on a cyclist to remember what happened to them in a crash is not a good plan.

In any case, common sense did not win out that day. Our colleague passed away from his injuries a little over a fortnight after his crash, aged 46. He is sadly missed.

I’ve since made a point of taking my breakfast down to that point where the bridge joins the cycleway. It’s the point where my colleague had his last conscious thoughts.

Over the course of the last few months, I’ve noticed a number of things.

Most cyclists sensibly slow down on that bend, but a few race past at ludicrous speed. One morning, I nearly thought there’d be an encore performance as two construction workers on CityCycle bikes, sans helmets, came careening around the corner, one almost losing it.

Then I see the pedestrians. There’s a well lit, covered walkway, on the opposite side of the bridge for pedestrian use. It has bench seats, drinking fountains, good lighting, everything you’d want as a pedestrian. Yet, some feel it is not worth the personal exertion to take the 100m extra distance to make use of it.

Instead, they show a lack of courtesy by using the bicycle path. Walking on a bicycle path isn’t just dangerous to the pedestrian like stepping out onto a road, it’s dangerous for the cyclist too!

If a car hits a pedestrian or cyclist, the damage to the occupants of the car is going to be minimal to nonexistent, compared to what happens to the cyclist or pedestrian. If a cyclist or motorcyclist hits a pedestrian, however, the rider surrounds the frame rather than the other way around, and thus hits the ground first. Possibly at significant speed.

Yet, pedestrians think it is acceptable to play Russian roulette with their own lives and the lives of every cycle user by continuing to walk where it is not safe for them to go. They’d never do it on a motorway, but somehow a bicycle path is considered fair game.

Most pedestrians are understanding, I’ve politely asked a number to not walk on the bikeway, and most oblige after I point out how they get to the pedestrian walkway.

Common sense would suggest some signage on where the pedestrian can walk would be prudent.

However, I have had at least two that ignored me, one this morning telling me to “mind my own shit”. Yes mate, I am minding “my own shit” as you put it: I’m trying to stop the hypothetical me from possibly crashing into the hypothetical you!

It’s this sort of reaction that seems symbolic of the whole “lack of common courtesy” that abounds these days.

It’s the same attitude that seems to hint to people that it’s okay to park a car so that it blocks the footpath: newsflash, it’s not! I know of one friend of mine who frequently runs into this problem. He’s in a wheelchair — a vehicle not known for its off-road capabilities or ability to squeeze past the narrow gap left by a car.

It seems the drivers think it’s acceptable to force footpath users of all types, including the elderly, the young and the disabled, to “step out” onto the road to avoid the car that they so arrogantly parked there. It makes me wonder how many people subsequently become disabled as a result of a collision caused by them having to step around such obstacles. Would the owner of the parked car be liable?

I don’t know, I’m no lawyer, but I should think they should carry some responsibility!

In Queensland, pedestrians have right-of-way on the footpath. That includes cyclists: cyclists of all ages are allowed there subject to council laws and signage — but once again, they need to give way. In other words, don’t charge down the path like a lunatic, and don’t block it!

No doubt, the people who I’m trying to convince are too arrogant to care about the above, and what their actions might have on others. Still, I needed to get the above off my chest!

Nothing will bring my colleague back, a fact that truly pains me, and I’ve learned some valuable lessons about the sort of encouragement I give people. I regret not telling him to slow down, 5 minutes longer wouldn’t have killed him, and I certainly did not want a race! Was he trying to race me so he could keep an eye on me? I’ll never know.

He was a bright person, though it is proof that even the intelligent among us are prone to doing stupid things. With thrills come spills, and one might question whether one’s commute to work is the appropriate venue for such thrills, or whether those can wait for another time.

I for one have learned that it does not pay to be the hare, thus I intend to just enjoy the ride for what it is. No need to rush, common sense tells me it just isn’t worth it!

Feb 12 2016

Hi all,

This is a bit of a brain dump so that I don’t forget this little tidbit in future.

Scenario

You have a shiny new Samba 4 Active Directory domain controller (or two) responsible for the domain ad.youroffice.example.com.  You have a couple of DNS servers that are responsible for non-AD parts of the domain and the parent youroffice.example.com.  To have everything go through one place, you’ve set up these servers with slave zones for ad.youroffice.example.com.

Joining your first Windows 7 client yields a message like this one.  You’re able to resolve yourdc.ad.youroffice.example.com on the client but not the _msdcs subdomain.

The fix

Configure your slaves to also sync _msdcs.ad.youroffice.example.com.

Example using bind

zone "vrtad.youroffice.example.com" {
        type slave;
        file "/var/lib/bind/db.ad.youroffice.example.com";
        masters { 10.20.30.1; 10.20.30.2; };
        allow-notify { 10.20.30.1; 10.20.30.2; };
};

zone "_msdcs.ad.youroffice.example.com" {
        type slave;
        file "/var/lib/bind/db._msdcs.ad.youroffice.example.com";
        masters { 10.20.30.1; 10.20.30.2; };
        allow-notify { 10.20.30.1; 10.20.30.2; };
};
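Then tell bind to pick up the new zones; on most systems that’s simply:

rndc reload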

Dec 06 2015

Recently, I learned about the IceStorm project, which is an effort to reverse engineer the Lattice iCE40-series of FPGAs.  I had run across FPGAs in my time before, but never really got to understand them.  This is for a few reasons:

  • The tools tended to be proprietary, with highly (unnecessarily?) restrictive licensing
  • FPGA boards were hellishly expensive

I wasn’t interested in doing the proprietary toolchain dance; I did enough of that with some TI stuff years ago.  There, it was the MSP430, and one of their DSPs.  For the former I could use gcc, but still needed a proprietary build of gdbproxy to program and debug the device, and that needed Windows.  The latter could only be programmed using TI’s Code Composer Studio.

FPGAs were ten times worse.  Not only was the toolchain huge, occupying gigabytes, but the license was locked to the hardware.  The one project I did that involved an FPGA used an Altera part, and getting Quartus II to work was nothing short of a nightmare.  I gave up, and vowed never to touch FPGAs.

Fast forward 6 years, and things have changed.  We now have a Verilog synthesiser.  We now have a place-and-route tool.  We have tools for generating a bitstream for the iCE40 FPGAs.  We can now buy FPGA boards for well under $100.  Heck, you can buy them for $5.

Lattice can do one of three things at this point:

  • They can actively try to stomp it out (discontinuing the iCE40 family, filing law suits, …etc)
  • They can pretend it doesn’t exist
  • They can partner with us and help build a hobby market for their FPGAs

Time will tell as to what they choose.  I’m hoping it’s the latter, but ignoring us is workable too.

So recently I bought an iCE40-HX8K breakout board.  This $80 board is pretty minimal: you get 8 LEDs, an FTDI serial-USB controller (which serves as the programmer), a small serial flash EEPROM (for configuration), a linear regulator, a 12MHz oscillator and 4 40-pin headers for GPIOs.

The FPGA on this board is the iCE40HX8K-CT256.  At the time of writing, that’s the top of that particular series with 7680 look-up tables, two PLLs, and some integrated SPI/I²C smarts.

There’s not a lot in the way of tutorials for this particular board; most focus on the iCEstick, which uses the lesser iCE40HX1K-TQ144, has only a small handful of GPIOs exposed and has no configuration EEPROM (it’s one-time programmable).

Through some trial and error, and poring over the schematics, I managed to port Al Williams’ tutorial on Hackaday, at least in part, to the iCE40-HX8K board.  The code for this is on Github.
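For the record, the open-source flow for this board goes something like the following; the file names are hypothetical, but the -d 8k and -P ct256 options match the iCE40HX8K-CT256 fitted to this board.

yosys -p "synth_ice40 -blif top.blif" top.v
arachne-pnr -d 8k -P ct256 -p top.pcf -o top.asc top.blif
icepack top.asc top.bin
iceprog top.bin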

Pretty much everything works on this board, even PLLs and block RAM.  There’s an example using the PLL on the iCEstick in this VGA demo project.

Some things I’ve learned:

  • If you open jumper J7, and rotate the jumpers on J6 to run horizontally (strapping pins 1-2 and 3-4), specifying -S to iceprog will program the CRAM without touching the SPI flash chip.
  • The PLL ceases to lock when REFCLK/(1+DIV_R) drops to 10MHz or below; with this board’s 12MHz oscillator, that effectively means DIV_R must be 0 (DIV_R=1 would give only 6MHz).

FILTER_RANGE is a mystery though.  Haven’t figured out what the values correspond to.

It’s likely this particular board is destined to become a DRAM/Interrupt/DMA controller for my upcoming 386, but we’ll see.  In the meantime, I’m playing with a new toy. 🙂