May 07 2017

So, in amongst my pile of crusty old hardware is the old netbook I used in the latter part of my university days. It is a Lemote Yeeloong, and sports a ~700MHz Loongson 2F CPU (MIPS III little-endian ISA) and 1GB RAM.

Back in the day it was a brilliant little machine. It came out of the box running a localised (for China) version of Debian, and had pretty much everything you’d need. I naturally repartitioned the machine, setting up Gentoo on one partition and keeping Debian on another, so I could dual-boot between them.

Fast forward 10 years: the machine runs, but the battery is dead, and Debian is dropping support for MIPS-III machines. Debian Jessie supports them, but Stretch, likely due for release some time this year, will not. If you haven’t got a CPU that supports mips32r2 or mips64r2, you’re stuffed.

I don’t want to throw this machine away.  Being as esoteric as it is, it is an unlikely target for theft: to the casual observer, it’ll just be “some crappy netbook”.  If someone were to try and steal it, there’s a very high probability I’ll recover it with my data, because the day its PMON2000 boot firmware successfully boots an x86-64 OS like Ubuntu or Windows without the assistance of a VM of some kind would be the day Satan puts a requisition order in for anti-freeze and winter mittens.

My use case is for a machine I can take with me on the bicycle.  My needs aren’t huge: I won’t be playing video on this thing, it’ll be largely for web browsing and email.  The web browser needs to support JavaScript, so that rules out options like ELinks or Dillo.  My preferred browser is Firefox, but I’ll settle for something WebKit-based if that’s all that’s out there.

So what operating systems do I have for a machine that sports a MIPS-III CPU and 1GB RAM?  Fedora has a MIPS port, but that, like Debian, is for the newer MIPS systems.  Arch Linux too is for newer architectures.

I could bootstrap Alpine Linux… and maybe that’s worth looking into; they seem to be doing some nice work in producing a small and capable Linux distribution.  They don’t yet support MIPS though.

Linux From Scratch is an option, if a little labour intensive.  (Been there, done that.)

OpenBSD directly supports this machine, and so I gave OpenBSD 6.0 a try.  It’s a very capable OS, and while it isn’t Linux, there isn’t much that an experienced Linux user like myself needs to adapt to in order to effectively use the OS.  pkgsrc is a great asset to OpenBSD, with a large selection of pre-built packages already available.  Using that, it is possible to get a workable environment up and running very quickly.  OpenBSD/loongson uses the n64 ABI.

Due to licensing worries, they use a particularly old version of binutils as their linker and assembler.  The plan seems to be they wish to wean themselves off the GNU toolchain in favour of LLVM.  At this time though, much of the system is built using the GNU toolchain with some custom patches.  I found that, on the Yeeloong, 1GB RAM was not sufficient for compiling LLVM, even after adding additional swap files, and some packages I needed weren’t available in pkgsrc, nor would they build with the version of GNU tools available.

Maybe as they iron out the kinks in their build environment with LLVM, this will be worth re-visiting.  They’ve done a nice job so far, but it’s not quite up to where I need it to be.

Gentoo actually gives me the choice of two possible ABIs: o32 and n32.

o32 is the old 32-bit ABI. It suffers a number of performance problems, but generally works. It’s what Debian Jessie and earlier supply, and what Debian’s mips32 port will produce from Stretch onwards.

n32 is the MIPS equivalent of what some of you may know as x32 on AMD64 platforms: a 32-bit environment running on a 64-bit CPU.  The idea is that very few applications actually benefit from 64-bit pointers and data types, so the usual quantities like int, long and pointers remain the same size as they’d be on o32, saving memory.  The long long data type gets a boost because, although the environment is “32-bit”, the CPU’s 64-bit operations are still available for use.
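To make the difference concrete, here’s a minimal C check you can compile with each toolchain and run; the figures in the comment are the standard o32/n32/n64 type sizes:

#include <stdio.h>

/* Expected results per ABI:
 *   o32: int=4, long=4, pointer=4, long long=8
 *   n32: int=4, long=4, pointer=4, long long=8 (64-bit registers available)
 *   n64: int=4, long=8, pointer=8, long long=8
 */
int main(void)
{
    printf("int=%zu long=%zu pointer=%zu long long=%zu\n",
           sizeof(int), sizeof(long), sizeof(void *), sizeof(long long));
    return 0;
}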

The trouble is, some applications have problems with this mode.  Either the code sees “mips64” in the CHOST and assumes a full 64-bit system (aka n64), or it assumes pointers are 64 bits wide like the registers, or the build system makes silly assumptions as to where things get put.  (virtualenv comes to mind, which is what started me on this journey.  The same problem affects x32 on AMD64.)

So I thought I’d give n64 a try: see if I could build a cross-compiler on my AMD64 host, and bootstrap Gentoo from that.

Step 1: Cross-compiler

For the cross-compiler, Gentoo has a killer feature that I have not seen in many other distributions: crossdev.  This is a toolchain build tool that can generate cross-compiler toolchains for most processor architectures and environments.

This is installed by running emerge sys-devel/crossdev.

A gotcha with hardened

I run “hardened” AMD64 stages on my machines, and there’s a little gotcha to be aware of: the hardened USE flag gets set by crossdev, and that can cause fun and games if, as on MIPS, the hardening features haven’t been ported.  My first attempt at this produced an n64 userland where pretty much everything generated a segmentation fault, the one exception being Python 2.7.  If I booted with init=/bin/bash (or init=/bin/bb), my virtual environment died; if I booted with init=/usr/bin/python2.7, I’d be dropped straight into a Python shell, where I could import the subprocess module and try to run things.

Cleaning up, and forcing crossdev to leave off hardened support, got things working.

Building the toolchain

With the above gotcha in mind:

# crossdev --abis n64 \
           --env 'USE="-hardened"' \
           -s4 -t mips64el-unknown-linux-gnu

The --abis n64 tells crossdev you want an n64 ABI toolchain, and the --env will hopefully keep the hardened flag unset. Failing that, try this:

# cat > /etc/portage/package.use/mips64 <<EOF
cross-mips64el-unknown-linux-gnu/binutils -hardened
cross-mips64el-unknown-linux-gnu/gcc -hardened
cross-mips64el-unknown-linux-gnu/glibc -hardened
EOF

If you want a known-good combination of toolchain components to try, I’m using:

  • Binutils: 2.28
  • GCC: 5.4.0-r3
  • glibc: 2.25
  • headers: 4.10

Step 2: Checking our toolchain

This is where I went wrong the first time: I tried building the entire OS, only to discover I had wasted hours of CPU time building non-functional binaries. Save yourself some frustration: start with a small binary to test.

A good target for this is busybox. Run mips64el-unknown-linux-gnu-emerge busybox, and wait for a bit.

When it completes, you should hopefully have a busybox binary:

RC=0 stuartl@beast ~ $ file /usr/mips64el-unknown-linux-gnu/bin/busybox 
/usr/mips64el-unknown-linux-gnu/bin/busybox: ELF 64-bit LSB executable, MIPS, MIPS-III version 1 (SYSV), statically linked, for GNU/Linux 3.2.0, stripped

Testing busybox

There is qemu-user-mips64el, but last time I tried it, I found it broken. So an easier option is to use real hardware, or QEMU emulating a full system. In either case, you’ll want your system-of-choice running a working 64-bit kernel already; if your real hardware isn’t running a 64-bit Linux kernel, use QEMU.

For QEMU, the path-of-least-resistance I found was to use Debian. Aurélien Jarno has graciously provided QEMU images and corresponding kernels for a good number of ports, including little-endian MIPS.

Grab the Wheezy disk image and the corresponding kernel, then run the following command:

# qemu-system-mips64el -M malta \
    -kernel vmlinux-3.2.0-4-5kc-malta \
    -hda debian_wheezy_mipsel_standard.qcow2 \
    -append "root=/dev/sda1 console=ttyS0,115200" \
    -serial stdio -nographic -net nic -net user

Let it boot up, then log in with username root, password root.

Install openssh-client and rsync (these do not ship with the image):

# apt-get update
# apt-get install openssh-client rsync

Now, you can create a directory, and pull the relevant files from your host, then try the binary out:

# mkdir gentoo
# rsync -aP 10.0.2.2:/usr/mips64el-unknown-linux-gnu/ gentoo/
# chroot gentoo bin/busybox ash

With luck, you should be in the chroot now, using Busybox.

Step 3: Building the system

Having done a “hello world” test, we’re now ready to build everything else. Start by tweaking your /usr/mips64el-unknown-linux-gnu/etc/portage/make.conf to your liking, then adjust /usr/mips64el-unknown-linux-gnu/etc/portage/make.profile to point to one of the MIPS profiles. For reference, on my system:

RC=0 stuartl@beast ~ $ ls -l /usr/mips64el-unknown-linux-gnu/etc/portage/make.profile
lrwxrwxrwx 1 root root 49 May  1 09:26 /usr/mips64el-unknown-linux-gnu/etc/portage/make.profile -> /usr/portage/profiles/default/linux/mips/13.0/n64
RC=0 stuartl@beast ~ $ cat /usr/mips64el-unknown-linux-gnu/etc/portage/make.conf 
CHOST=mips64el-unknown-linux-gnu
CBUILD=x86_64-pc-linux-gnu
ARCH=mips

HOSTCC=x86_64-pc-linux-gnu-gcc

ROOT=/usr/${CHOST}/

ACCEPT_KEYWORDS="mips ~mips"

USE="${ARCH} -pam"

CFLAGS="-O2 -pipe -fomit-frame-pointer"
CXXFLAGS="${CFLAGS}"

FEATURES="-collision-protect sandbox buildpkg noman noinfo nodoc"
# Be sure we don't overwrite pkgs from another repo.
PKGDIR=${ROOT}packages/
PORTAGE_TMPDIR=${ROOT}tmp/

ELIBC="glibc"

PKG_CONFIG_PATH="${ROOT}usr/lib/pkgconfig/"
#PORTDIR_OVERLAY="/usr/portage/local/"

Now, you should be ready to start building:

# mips64el-unknown-linux-gnu-emerge -e \
    --keep-going -j6 --load-average 12.0 @system

Now, go away, and do something else for several hours.  It’ll take that long, depending on the speed of your machine.  In my case, the machine is an AMD Phenom II x6 with 8GB RAM, which was brand new in 2010.  It took a good day or so.

Step 4: Testing our system

We should have enough that we can boot our QEMU VM with this image instead.  One way of trying it would be to copy across the userland tree the same way we did for pulling in busybox and chrooting back in again.

In my case, I took the opportunity to build a kernel specifically for the VM that I’m using, and made up a disk image using the new files.

Building a kernel

Your toolchain should be able to cross-build a kernel for the virtual machine.  To get you started, here’s a kernel config file.  Download it, decompress it, then drop it into your kernel source tree as .config.

Having done that, run make olddefconfig ARCH=mips to set the defaults, then make menuconfig ARCH=mips and customise to your heart’s content. When finished, run make -j6 vmlinux modules CROSS_COMPILE=mips64el-unknown-linux-gnu- to build the kernel and modules.

Finally, run make modules_install firmware_install INSTALL_MOD_PATH=$PWD/modules CROSS_COMPILE=mips64el-unknown-linux-gnu- to install the kernel modules and firmware into a convenient place.

Making a root disk

Create a blank, raw disk image using qemu-img, then partition it as you like and mount it as a loopback device:

# qemu-img create -f raw gentoo.raw 8G
# fdisk gentoo.raw
(do your partitioning here)
# losetup -P /dev/loop0 $PWD/gentoo.raw

Now you can format the partitions /dev/loop0pX as you see fit, then mount them in some convenient place. I’ll assume that’s /mnt/vm for now. For example, with a single root partition (a sketch; adjust to your partition layout):
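# mkfs.ext4 /dev/loop0p1
# mkdir -p /mnt/vm
# mount /dev/loop0p1 /mnt/vm

You’re ready to start copying everything in: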

# rsync -aP /usr/mips64el-unknown-linux-gnu/ /mnt/vm/
# rsync -aP /path/to/kernel/tree/modules/ /mnt/vm/

You can use this opportunity to make some tweaks to configuration files, like updating etc/fstab, tweaking etc/portage/make.conf (changing ROOT, removing CBUILD), and setting up a getty on ttyS0. I also used to symlink lib to lib64 in non-multilib environments such as this, but don’t symlink lib and lib64! See below for why; the steps that follow are kept only for the record.

# cd /mnt/vm
# mv lib/* lib64
# rmdir lib
# ln -s lib64 lib
# cd usr
# mv lib/* lib64
# rmdir lib
# ln -s lib64 lib

When you’re done, unmount.

First boot

Run QEMU with the following arguments:

# qemu-system-mips64el -M malta \
    -kernel /path/to/your/kernel/vmlinux \
    -hda /path/to/your/gentoo.raw \
    -append "root=/dev/sda1 console=ttyS0,115200 init=/bin/bash" \
    -serial stdio -nographic -net nic -net user

It should boot straight to a bash prompt. Mount the root read/write, and then you can make any edits you need to do before boot, such as changing the root password. When done, re-mount the root as read-only, then exec /sbin/init.

# mount / -o rw,remount
# passwd
… etc
# mount / -o ro,remount
# exec /sbin/init

With luck, it should boot to completion.

Step 5: Making the VM a system service

Now, it’d be real nice if libvirt actually supported MIPS VMs, but it doesn’t appear that it does, or at least I couldn’t get it to work.  virt-manager certainly doesn’t support it.

No matter, we can make do with a telnet console (on loopback), and supervisord to daemonise QEMU.  I use the following supervisord configuration file to start my VMs:

[unix_http_server]
file=/tmp/supervisor.sock   ; (the path to the socket file)

[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10           ; (num of main logfile rotation backups;default 10)
loglevel=info                ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false               ; (start in foreground if true;default false)
minfds=1024                  ; (min. avail startup file descriptors;default 1024)
minprocs=200                 ; (min. avail process descriptors;default 200)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket

[program:qemu-mips64el]
command=/usr/bin/qemu-system-mips64el -cpu MIPS64R2-generic -m 2G -spice disable-ticketing,port=5900 -M malta -kernel /home/stuartl/kernels/qemu-mips/vmlinux -hda /var/lib/libvirt/images/gentoo-mips64el.raw -append "mem=256m@0x0 mem=1792m@0x90000000 root=/dev/sda1 console=ttyS0,115200" -chardev socket,id=char0,port=65223,host=::1,server,telnet,nowait -chardev socket,id=char1,port=65224,host=::1,server,telnet,nowait -serial chardev:char0 -mon chardev=char1,mode=readline -net nic -net bridge,helper=/usr/libexec/qemu-bridge-helper,br=br0

This configuration creates two telnet sockets: port 65223 is the VM’s console, port 65224 the QEMU control console. The VM has the maximum 2GB RAM possible and uses bridged networking via the network bridge br0. There is a graphical console available via SPICE.

All telnet and SPICE interfaces are bound to loopback, so one must use SSH tunnelling to reach those ports from another host. You can change the above command line to use VNC if that’s what you prefer.
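For example, to reach the VM’s serial console from a workstation (user and host names here are hypothetical):

$ ssh -L 65223:localhost:65223 user@vm-host
$ telnet localhost 65223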

At this point, the VM should be able to boot on its own. I’d start with installing some basic packages and move on from there. You’ll find the environment is very sparse (my build had no Perl binary, for example), but the basics for building everything should be there.

You may also find that what is there isn’t quite installed right… I found that sshd wasn’t functional due to missing users… a problem soon fixed by running emerge -K openssh (the earlier step will have produced binary packages).

In my case, that’s installing a decent text editor (vim) and GNU screen so I can start a build, then detach.  Lastly, I’ll need catalyst, which is Gentoo’s release engineering tool.

At the moment, this is where I’m at.  GNU screen has indirectly pulled in Perl as a dependency, and that is building as I type this.  It is building faster than the little netbook does, and I have the bonus that I can throw more RAM at the problem than I can on the real hardware. The plan from here:

  1. emerge -ek @system, to build everything that got missed before.
  2. ROOT=/tmp/seed emerge -eK @system, to bundle everything up into a staging area
  3. populating /tmp/seed/dev with device files
  4. tar-ing up /tmp/seed to make my initial “seed” stage for catalyst.
  5. building the first n64 stages for Gentoo using catalyst
  6. building the packages I want for the netbook in a chroot
  7. transferring the chroot to the netbook

Symlinking lib and lib64… don’t do it!

So, I was doing this years ago when n32 was experimental.  I recall it being necessary then, as this was before Portage had proper multilib support.  The earlier mipsel n32 stages I built, which started out from kanaka’s even more experimental multilib stages, required this kludge to work around the lack of support in Portage.

Portage has changed: it now properly handles multilib, and so the symlink kludge is not only unnecessary, it breaks things rather badly, as I discovered.  When packages merge files to /lib, rather than following the symlink, they’ll replace it with a directory.  At that point, all hell breaks loose, because stuff that “appeared” in /lib before is no longer there.

I was able to recover by rsync-ing /lib64 to /lib, which isn’t a pretty solution, but it’ll be enough to get an initial “seed” stage.  Running that seed stage through Catalyst will clean up the remnants of that bungle.

Nov 12 2016

So, recently, the North West Digital Radio group generously donated a UDRC II radio control board in thanks for my initial work on an audio driver for the Texas Instruments TLV320AIC3204 (yes, a mouthful).

This board looks like it might support the older Pi model B I had, but I thought I’d play it safe and buy the later revision: version 3 of the Pi and the associated 7″ touch screen.  Thus, an order went to RS for a whole pile of parts: one Raspberry Pi 3 computer, a blank 8GB MicroSD card, a power supply, the touch screen kit and a case.

Fitting the UDRC

To fit the UDRC, the case will need some of the plastic cut away: a rectangular section out of the main body and a similarly sized portion out of the back cover.

Modifications to the case

When assembled, the cut-away section will allow the DB15-HD and Mini-DIN6 connectors to protrude slightly.

Case assembled with modifications

The UDRC needs some minor modifications too for the touch screen.  Probe around, and you’ll find a source of 5V on one of the unpopulated headers.  You’ll want to solder a two-pin header to here and hook that to the LCD control board using the supplied jumper leads.  If you’ve got one, use a right-angled header, otherwise just bend a regular one like I did.

5V supply for the LCD on the UDRC

You’ll note the warning I’ve written on the DB15-HD: a monitor does NOT plug in here.

From here, you should be ready to load up an SD card.  NWDR recommend the use of Compass Linux, which is a Raspbian fork configured for use with the UDRC.  I used the lite version, since it was smaller and I’m comfortable with command lines.

Configuring screen rotation

If you try to boot your freshly prepared SD card, the first thing you’ll notice is that the screen is upside-down.  Clearly a few people didn’t communicate with each other about which way was up on this thing.

Before you pull the SD card out, it is worth mounting the first partition of the SD card and editing config.txt in the root directory of that partition. If doing this on a Windows computer, ensure your text editor respects Unix line endings! (Blame Microsoft. If you’re doing this on a Mac, Linux, BSD or other Unix-ish computer, you have nothing to worry about.)

Add the following to the end of the file (or anywhere really):

# Rotate the screen the "right way up"
lcd_rotate=2

Now save the file, unmount the SD card, and put it in the Pi before assembling the case proper.

Setting up your environment

Now, if you chose the lite option like I did, there’ll be no GUI, and the touch aspect of the touchscreen is useless.  You’ll need a USB keyboard.

Log in as pi (password raspberry), run passwd to change your password, then run sudo -s to gain a root shell.

You might choose, like I did, to run passwd again here to set root’s password too.

After that, you’ll want to install some software.  Your choice of desktop environment is entirely up to you; I prefer something lightweight, and have been using FVWM for years, but there are plenty of choices in Debian, including the usual suspects (KDE, Gnome, XFCE…).

For the display manager, I’ll choose lightdm. We also need an on-screen keyboard. I tried a couple, including matchbox-keyboard and the rather ancient xvkbd. Despite its age, I found xvkbd to be the most usable.

Once you’ve decided what you want, run apt-get install with your list of packages, making sure to include xvkbd and lightdm in your list.  Other applications I included here were network-manager-gnome, qasmixer, pasystray, stalonetray and gkrellm.
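In my case, that worked out to something like the following (season the package list to taste):

# apt-get install lightdm xvkbd fvwm network-manager-gnome \
      qasmixer pasystray stalonetray gkrellm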

Enabling the on-screen keyboard in lightdm

Having installed lightdm and xvkbd, you can now configure lightdm to enable the accessibility options.

Open up /etc/lightdm/lightdm-gtk-greeter.conf, look for the line show-indicators and tack ;~a11y on the end.

Now down further, look for the commented out keyboard setting and change that to keyboard=xvkbd. Save and close the file, then run /etc/init.d/lightdm restart.
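For reference, the relevant part of the file should end up looking something like this (the exact indicator list will vary with your lightdm-gtk-greeter version; the important bits are the trailing ~a11y and the keyboard setting):

[greeter]
show-indicators=~language;~session;~power;~a11y
keyboard=xvkbd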

You should find yourself staring at the log-in screen, and lo and behold, there should be a new icon up the top-right. Tapping it should bring up a 3 line menu, the bottom of which is the on-screen keyboard.

On-screen keyboard in lightdm

The button marked Focus is what you hit to tell the keyboard which application is to receive the keyboard events.  Tap that, then the application you want.  To log in, tap Focus then the password field.  You should be able to tap your password in followed by either the Return button on the virtual keyboard or the Log In button on the form.

Making FVWM touch-friendly

I have a pretty old FVWM configuration that has evolved over the last 10 years, built around keyboard-centric operation and screen real-estate preservation.  This configuration mainly needed two changes:

  • Menus and title bar text enlarged to make the corresponding UI elements finger-friendly
  • Adjusting the size of the FVWM BarButtons to suit the 800×480 display

Rather than showing how to do it from scratch, I’ll just link to the configuration tarball, which you are welcome to play with.  It uses xcalendar, which isn’t in the Debian repositories any more, but is available on Gentoo mirrors and can be built from source (you’ll want to install xutils-dev for xmake); stalonetray and gkrellm are both in the standard Debian repositories.

FVWM on the Raspberry Pi

Enabling the right-click

This took a bit of hunting to figure out.  There is a method that works with Debian Wheezy which allows right-clicks by way of long presses, but this broke in Jessie, and the 2016-05-23 release of Compass Linux is built on the latter.  So another solution is needed.

Philipp Merkel, however, wrote a little daemon called twofing.  Once installed, doing a right click is simply a two-fingered tap on the screen; there’s support for other two-fingered gestures such as pinching and rotation as well.  It is available on GitHub, and I have forked it, adding some udev rules and scripts to integrate it into the Raspberry Pi.

The resulting Debian package is here.  Download the .deb, run dpkg -i on it, and then re-start the Raspberry Pi (or you can try running udevadm trigger and re-starting X).  The udev rules should create a /dev/twofingtouch symbolic link and the installed Xsession.d/Xreset.d scripts should take care of starting it with X and shutting it down afterwards.

Having done this, when you log in you should find that twofing is running, and that right clicks can be performed using a two-fingered prod.

Finishing up

Having done the configuration, you should now have a usable workhorse for numerous applications.  The UDRC shows up as a second sound card and is accessible via ALSA.  I haven’t tried it out yet, but it at least shows up in the mixer application, so the signs are there.  I’ll be looking to add LinBPQ and FreeDV into the mix yet, to round the software stack off to make this a general purpose voice/data radio station for emergency communications.

Sep 27 2015

Well, lately I’ve been doing a bit of work hacking the firmware on the Rowetel SM1000 digital microphone.  For those who don’t know it, this is a hardware (microcontroller) implementation of the FreeDV digital voice mode: it’s a modem that plugs into the microphone/headphone ports of any SSB-capable transceiver and converts between FreeDV modem tones and analogue voice.

I plan to set this unit of mine up on the bicycle, but there are a few nits that I had.

  • There’s no time-out timer
  • The unit is half-duplex

If there’s no timeout timer, I really need to hear the tones coming from the radio to tell me it has timed out.  Others might find a VOX feature useful, and there’s active experimentation in the FreeDV 700B mode (the SM1000 currently only supports FreeDV 1600) which has been very promising to date.

Long story short, the unit needed a more capable UI, and importantly, it also needed to be able to remember settings across power cycles.  There’s no EEPROM chip on these things, and while the STM32F405VG has a pin for providing backup-battery power, there’s no battery or supercapacitor, so the SM1000 forgets everything on shutdown.

ST do have an application note on their website on precisely this topic.  AN3969 (and its software sources) discuss a method for using a portion of the STM32’s flash for this task.  However, I found their “license” confusing.  So I decided to have a crack myself.  How hard can it be, right?

There are five things that a virtual EEPROM driver needs to bear in mind:

  • The flash is organised into sectors.
  • These sectors when erased contain nothing but ones.
  • We store data by programming zeros.
  • The only way to change a zero back to a one is to do an erase of the entire sector.
  • The sector may be erased a limited number of times.

So on this note, a virtual EEPROM should aim to do the following:

  • It should keep tabs on what parts of the sector are in use.  For simplicity, we’ll divide this into fixed-size blocks.
  • When a block of data is to be changed, if the change can’t be done by changing ones to zeros, a copy of the entire block should be written to a new location, and a flag set (by writing zeros) on the old block to mark it as obsolete.
  • When a sector is full of obsolete blocks, we may erase it.
  • We try to put off doing the erase until such time as the space is needed.

Step 1: making room

The first step is to make room for the flash variables.  They will be directly accessible in the same manner as variables in RAM; however, from the application’s point of view, they will be constant.  In many microcontroller projects, there’ll be several regions of memory, defined by memory address.  This comes from the datasheet of your MCU.

An example, taken from the SM1000 firmware, prior to my hacking (stm32_flash.ld at r2389):

/* Specify the memory areas */
MEMORY
{
  FLASH (rx)      : ORIGIN = 0x08000000, LENGTH = 1024K
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

The MCU here is the STM32F405VG, which has 1MB of flash starting at address 0x08000000. This 1MB is divided into (in order):

  • Sectors 0…3: 16kB each, starting at 0x08000000
  • Sector 4: 64kB, starting at 0x08010000
  • Sectors 5 onwards: 128kB each, starting at 0x08020000

We need at least two sectors, as when one fills up, we will swap over to the other. Now it would have been nice if the arrangement were reversed, with the smaller sectors at the end of the device.

The Cortex M4 CPU is basically hard-wired to boot from address 0; the BOOT pins on the STM32F4 decide how that address gets mapped. The very first thing there is the interrupt vector table, and it MUST be the thing the CPU sees first. Unless told to boot from system memory or external memory, address 0 is aliased to 0x08000000, i.e. flash sector 0. Thus, if you are booting from internal flash, you have no choice: the vector table MUST reside in sector 0.

Normally code and interrupt vector table live together as one happy family. We could use a couple of 128k sectors, but 256k is rather a lot for just an EEPROM storing maybe 1kB of data tops. Two 16kB sectors is just dandy, in fact, we’ll throw in the third one for free since we’ve got plenty to go around.

However, the first 16kB sector will have to be reserved for the interrupt vector table, which will have that space all to itself.

So here’s what my new memory regions look like (stm32_flash.ld at 2390):

/* Specify the memory areas */
MEMORY
{
  /* ISR vectors *must* be placed here as they get mapped to address 0 */
  VECTOR (rx)     : ORIGIN = 0x08000000, LENGTH = 16K
  /* Virtual EEPROM area, we use the remaining 16kB blocks for this. */
  EEPROM (rx)     : ORIGIN = 0x08004000, LENGTH = 48K
  /* The rest of flash is used for program data */
  FLASH (rx)      : ORIGIN = 0x08010000, LENGTH = 960K
  /* Memory area */
  RAM (rwx)       : ORIGIN = 0x20000000, LENGTH = 128K
  /* Core Coupled Memory */
  CCM (rwx)       : ORIGIN = 0x10000000, LENGTH = 64K
}

This is only half the story: we also need to create the section that will be emitted in the ELF binary:

SECTIONS
{
  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >FLASH

  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */
    *(.rodata)         /* .rodata sections (constants, strings, etc.) */
    *(.rodata*)        /* .rodata* sections (constants, strings, etc.) */
    *(.glue_7)         /* glue arm to thumb code */
    *(.glue_7t)        /* glue thumb to arm code */
	*(.eh_frame)

    KEEP (*(.init))
    KEEP (*(.fini))

    . = ALIGN(4);
    _etext = .;        /* define a global symbols at end of code */
    _exit = .;
  } >FLASH…

There’s rather a lot here, so I haven’t reproduced all of it; this is the same file as before at revision 2389, just a little further down. You’ll note the .isr_vector is pointed at the region called FLASH, which is most definitely NOT what we want. The image will not boot with the vectors down there. We need to change it to put the vectors in the VECTOR region.

Whilst we’re here, we’ll create a small region for the EEPROM.

SECTIONS
{
  .isr_vector :
  {
    . = ALIGN(4);
    KEEP(*(.isr_vector))
    . = ALIGN(4);
  } >VECTOR


  .eeprom :
  {
    . = ALIGN(4);
    *(.eeprom)         /* special section for persistent data */
    . = ALIGN(4);
  } >EEPROM


  .text :
  {
    . = ALIGN(4);
    *(.text)           /* .text sections (code) */
    *(.text*)          /* .text* sections (code) */

THAT’s better! Things will boot now. However, there is still a subtle problem that initially caught me out here. Sure, the shiny new .eeprom section is unpopulated, BUT the linker has helpfully filled it with zeros. We cannot program zeroes back into ones! Either we have to erase it in the program, or we tell the linker to fill it with ones for us. Thankfully, the latter is easy (stm32_flash.ld at 2395):

  .eeprom :
  {
    . = ALIGN(4);
    KEEP(*(.eeprom))   /* special section for persistent data */
    . = ORIGIN(EEPROM) + LENGTH(EEPROM) - 1;
    BYTE(0xFF)
    . = ALIGN(4);
  } >EEPROM = 0xff

Credit: Erich Styger

We have to do two things. One: tell the linker that we want the region filled with the pattern 0xff. Two: make sure it actually gets filled with ones by telling the linker to write a one as the very last byte. Otherwise, it’ll think, “Huh? There’s nothing here, I won’t bother!” and leave it as a string of zeros.

Step 2: Organising the space

Having made room, we now need to decide how to break this data up.  We know the following:

  • We have 3 sectors, each 16kB
  • The sectors have an endurance of 10000 program-erase cycles

Give some thought as to what data you’ll be storing.  This will decide how big to make the blocks.  If you’re storing only tiny bits of data, more blocks makes more sense.  If however you’ve got some fairly big lumps of data, you might want bigger blocks to reduce overheads.

I ended up dividing the sectors into 256-byte blocks.  I figured that was a nice round (binary sense) figure to work with.  At the moment, we have 16 bytes of configuration data, so I can do with a lot less, but I expect this to grow.  The blocks will need a header to tell you whether or not the block is being used.  Some checksumming is usually not a bad idea either, since that will clue you in to when the sector has worn out prematurely.  So some data in each block will be header data for our virtual EEPROM.

If we didn’t care about erase cycles, this would be fine: we could just make all blocks data blocks.  However, it’d be wise to track wear so we can avoid erasing and attempting to use a depleted sector, and we need somewhere to record it.  256 bytes gives us enough space to stash an erase counter and a map of what blocks are in use within that sector.

So we’ll reserve the first block in the sector to act as this index for the entire sector.  This gives us enough room to have 16-bits worth of flags for each block stored in the index.  That gives us 63 blocks per sector for data use.

It’d be handy to be able to use this flash region for a few virtual EEPROMs, so we’ll allocate some space for a virtual ROM ID.  It is prudent to do some checksumming, and the STM32F4 has a CRC32 module, so in that goes.  We might also choose not to use all of a block, so we should throw in a size field (8 bits, since the size can’t be bigger than 255).  If we pad this out a bit to give us a byte for reserved data, we get a header with the following structure:

Offset   Bits 15…8        Bits 7…0
+0…+2    CRC32 Checksum (32 bits)
+4       ROM ID           Block Index
+6       Block Size       Reserved

So that subtracts 8 bytes from the 256 bytes, leaving us 248 for actual program data. If we want to store 320 bytes, we use two blocks, block index 0 stores bytes 0…247 and has a size of 248, and block index 1 stores bytes 248…319 and has a size of 72.

I mentioned there being a sector header; it looks like this:

Offset   Contents
+0…+6    Program Cycles Remaining (64 bits)
+8       Block 0 flags
+10      Block 1 flags
+12      Block 2 flags
…        (one 16-bit flags word per block)

No checksums here, because it’s constantly changing.  We can’t re-write a CRC without erasing the entire sector, and we don’t want to do that unless we have to.  The flags for each block are currently allocated accordingly:

Bits 15…1   Reserved
Bit 0       In use

When the sector is erased, all blocks show up as having all flags set as ones, so the flags are considered “inverted”.  When we come to use a block, we mark the “in use” bit with a zero, leaving the rest as ones.  When we erase, we mark the entire flags block as zeros.  We can set other bits here as we need for accounting purposes.

Thus we now have a format for our flash sector header, and for our block headers.  We can move on to the algorithm.

Step 3: The Code

This is the implementation of the above ideas.  Our code needs to worry about three basic operations:

  • reading
  • writing
  • erasing

This is good enough if the size of a ROM image doesn’t change (the normal case).  For flexibility, I made my code work crudely like a file: you can seek to any point in the ROM image and start reading/writing, or you can blow the whole thing away.

Constants

It is bad taste to leave magic numbers everywhere, so constants should be used to represent some quantities:

  • VROM_SECT_SZ=16384:
    The virtual ROM sector size in bytes.  (Those watching Codec2 Subversion will note I cocked this one up at first.)
  • VROM_SECT_CNT=3:
    The number of sectors.
  • VROM_BLOCK_SZ=256:
    The size of a block
  • VROM_START_ADDR=0x08004000:
    The address where the virtual ROM starts in Flash
  • VROM_START_SECT=1:
    The base sector number where our ROM starts
  • VROM_MAX_CYCLES=10000:
    Our maximum number of program-erase cycles

Our programming environment may also define some, for example UINTx_MAX.

Derived constants

From the above, we can determine:

  • VROM_DATA_SZ = VROM_BLOCK_SZ - sizeof(block_header):
    The amount of data per block.
  • VROM_BLOCK_CNT = VROM_SECT_SZ / VROM_BLOCK_SZ:
    The number of blocks per sector, including the index block.
  • VROM_SECT_APP_BLOCK_CNT = VROM_BLOCK_CNT - 1:
    The number of application blocks per sector (i.e. total minus the index block).

CRC32 computation

I decided to use the STM32’s CRC module for this, which takes its data in 32-bit words.  There’s also the complexity of checking the contents of a structure that includes its own CRC.  I played around with Python’s crcmod module, but couldn’t find an arrangement that would let the CRC field remain in place during the computation.

So I copy the entire block, headers and all, to a temporary copy (on the stack), set the CRC field to zero in the header, then compute the CRC. Since I need to read it in 32-bit words, I pack 4 bytes into a word, big-endian style. In cases where I have fewer than 4 bytes, the least-significant bits are left at zero.
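In code, the idea looks something like this. It’s a sketch only: the crc_* helpers stand in for access to the STM32F4 CRC peripheral (hypothetical names, not the actual Codec2 or ST API), and VROM_BLOCK_SZ is the constant defined earlier.

#include <stdint.h>
#include <string.h>

#define VROM_BLOCK_SZ 256       /* from the constants above */

/* Hypothetical wrappers around the STM32F4 CRC peripheral. */
extern void crc_reset(void);
extern void crc_process_word(uint32_t word);
extern uint32_t crc_result(void);

/* Compute the CRC32 of a block whose header embeds its own checksum. */
uint32_t vrom_block_crc(const void *block)
{
    uint8_t copy[VROM_BLOCK_SZ];
    uint32_t word;
    int i, j;

    /* Work on a temporary copy with the CRC field (at offset 0) zeroed. */
    memcpy(copy, block, VROM_BLOCK_SZ);
    memset(copy, 0, sizeof(uint32_t));

    crc_reset();
    for (i = 0; i < VROM_BLOCK_SZ; i += 4) {
        /* Pack 4 bytes into a word, big-endian style; a short tail
         * leaves the least-significant bits at zero. */
        word = 0;
        for (j = 0; j < 4 && (i + j) < VROM_BLOCK_SZ; j++)
            word |= (uint32_t)copy[i + j] << (24 - 8 * j);
        crc_process_word(word);
    }
    return crc_result();
}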

Locating blocks

We identify each block in an image by the ROM ID and the block index.  We need to search for these when requested, as they can be located literally anywhere in flash.  There are probably cleverer ways to do this, but I chose the brute-force method: cycle through each sector and block, see if the block is allocated (in the index), see if the checksum is correct, see if it belongs to the ROM we’re looking for, then see if it’s the right index.
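In C, that search looks roughly like the following. This is a sketch against the header layout and constants assumed in this post, not the actual Codec2 code; vrom_block_crc is the hypothetical helper from the previous section, and the flags indexing (flags[0] covering the first data block) is an assumption.

#include <stdint.h>
#include <stddef.h>

#define VROM_START_ADDR ((const uint8_t *)0x08004000)
#define VROM_SECT_CNT   3
#define VROM_SECT_SZ    16384
#define VROM_BLOCK_SZ   256
#define VROM_BLOCK_CNT  (VROM_SECT_SZ / VROM_BLOCK_SZ)

struct vrom_block_header {
    uint32_t crc32;     /* checksum, computed with this field zeroed */
    uint8_t  rom_id;    /* which virtual ROM image this block belongs to */
    uint8_t  index;     /* block index within that image */
    uint8_t  size;      /* bytes of application data in use */
    uint8_t  reserved;
};

extern uint32_t vrom_block_crc(const void *block);

/* Find the live copy of block `index` of ROM `rom`, or NULL if absent. */
static const struct vrom_block_header *vrom_find(uint8_t rom, uint8_t index)
{
    int sect, blk;

    for (sect = 0; sect < VROM_SECT_CNT; sect++) {
        const uint8_t *base = VROM_START_ADDR + (sect * VROM_SECT_SZ);
        /* Per-block flags live in the index block, after the 8-byte
         * erase counter. */
        const uint16_t *flags = (const uint16_t *)(base + 8);

        for (blk = 1; blk < VROM_BLOCK_CNT; blk++) {
            const struct vrom_block_header *hdr;

            /* Flags are inverted: the in-use bit reads 0 when allocated. */
            if (flags[blk - 1] & 1)
                continue;

            hdr = (const struct vrom_block_header *)
                  (base + (blk * VROM_BLOCK_SZ));
            if (hdr->crc32 != vrom_block_crc(hdr))
                continue;       /* obsolete or corrupt block */
            if (hdr->rom_id == rom && hdr->index == index)
                return hdr;
        }
    }
    return NULL;
}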

Reading data

To read from the above scheme, having been told a ROM ID (rom), a start offset and a size (the latter two being in bytes), and given a buffer we’ll call out, we first need to translate the start offset to a block index and block offset.  This is simple integer division and modulus.

We’ll probably only read part of the first and last blocks of our read; the rest we’ll read in as entire blocks.  The block offset is only relevant for the first block.

So we start at the block we calculate to hold the start of our data range.  If we can’t find it, or it’s too small, we stop there; otherwise, we proceed to read out the data.  Until we run out of data to read, we increment the block index, try to locate the block, and, if found, copy its data out.
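Or, as a sketch in code (VROM_DATA_SZ being the 248 bytes of data per block derived earlier):

#define VROM_DATA_SZ 248    /* 256-byte block minus the 8-byte header */

/* Translate a byte offset within a ROM image into the starting block
 * index and the offset within that block's data area. */
static void vrom_locate(uint32_t offset,
                        uint32_t *blk_index, uint32_t *blk_offset)
{
    *blk_index  = offset / VROM_DATA_SZ;    /* which block to look for */
    *blk_offset = offset % VROM_DATA_SZ;    /* where within its data */
}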

Writing and Erasing

Writing is a similar affair.  We look for each block; if we find one, we overwrite it by copying the old data to a temporary buffer, copying our new data in over the top, then marking the old block as obsolete before writing the new one out with a new checksum.

The trickery is in invoking the wear-levelling algorithm on an as-needed basis.  We mark a block obsolete by setting its header fields to zero, but when we run out of free blocks, we go looking for sectors that are full of obsolete blocks waiting to be erased.  When we encounter a sector that has been erased, we write a new header at the start and proceed to use its first data block.

In the case of erasing, we don’t bother writing anything out; we just mark the blocks as obsolete.

Implementation

The full C code is in the Codec2 Subversion repository.  For those who prefer Git, I have a git-svn mirror (yes, I really should move it off that domain).  The code is available under the GNU Lesser General Public License v2.1 and may be ported to run on any CPU you like, not just ST’s.

May 21 2015

This is more a quick dump of some proof-of-concept code.  We’re in the process of writing communications drivers for an energy management system, many of which need to communicate with devices like Modbus energy meters.

Traditionally I’ve just used the excellent pymodbus library with its synchronous interface for batch-processing scripts, but this time I need real-time and I need to do things asynchronously.  I can either run the synchronous client in a thread, or, use the Twisted interface.

We’re actually using Tornado for our core library, and thankfully there’s an adaptor module to allow you to use Twisted applications.  But how do you do it?  Twisted code requires quite a bit of getting used to, and I’ve still not got my head around it.  I haven’t got my head fully around Tornado either.

So how does one combine these?

The following code pulls out the first couple of registers out of a CET PMC330A energy meter that’s monitoring a few circuits in our office. It is a stripped down copy of this script.

#!/usr/bin/env python
'''
Pymodbus Asynchronous Client Examples -- using Tornado
--------------------------------------------------------------------------

The following is an example of how to use the asynchronous modbus
client implementation from pymodbus.
'''
#---------------------------------------------------------------------------# 
# import needed libraries
#---------------------------------------------------------------------------# 
import tornado
import tornado.ioloop          # explicitly import the IOLoop module
import tornado.platform.twisted
tornado.platform.twisted.install()
from twisted.internet import reactor, protocol
from pymodbus.constants import Defaults

#---------------------------------------------------------------------------# 
# choose the requested modbus protocol
#---------------------------------------------------------------------------# 
from pymodbus.client.async import ModbusClientProtocol
#from pymodbus.client.async import ModbusUdpClientProtocol

#---------------------------------------------------------------------------# 
# configure the client logging
#---------------------------------------------------------------------------# 
import logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(logging.DEBUG)

#---------------------------------------------------------------------------# 
# example requests
#---------------------------------------------------------------------------# 
# simply call the methods that you would like to use. An example session
# is displayed below along with some assert checks. Note that unlike the
# synchronous version of the client, the asynchronous version returns
# deferreds which can be thought of as a handle to the callback to send
# the result of the operation.  We are handling the result using the
# deferred assert helper(dassert).
#---------------------------------------------------------------------------# 
def beginAsynchronousTest(client):
    io_loop = tornado.ioloop.IOLoop.current()

    def _dump(result):
        logging.info('Register values: %s', result.registers)
    def _err(result):
        logging.error('Error: %s', result)

    rq = client.read_holding_registers(0, 4, unit=1)
    rq.addCallback(_dump)
    rq.addErrback(_err)

    #-----------------------------------------------------------------------# 
    # close the client at some time later
    #-----------------------------------------------------------------------# 
    io_loop.add_timeout(io_loop.time() + 1, client.transport.loseConnection)
    io_loop.add_timeout(io_loop.time() + 2, io_loop.stop)

#---------------------------------------------------------------------------# 
# choose the client you want
#---------------------------------------------------------------------------# 
# make sure to start an implementation to hit against. For this
# you can use an existing device, the reference implementation in the tools
# directory, or start a pymodbus server.
#---------------------------------------------------------------------------# 
defer = protocol.ClientCreator(reactor, ModbusClientProtocol
        ).connectTCP("10.20.30.40", Defaults.Port)
defer.addCallback(beginAsynchronousTest)
tornado.ioloop.IOLoop.current().start()
Mar 19 2015

Hi all,

This is more a note to self for future reference.  Qt has a nice handy reference counting memory management system by means of QSharedPointer and QWeakPointer.  The system is apparently thread-safe and seems to be totally transparent.

One gotcha though, is that two QSharedPointer objects cannot share the same raw pointer unless one is cloned from the other (either directly or via QWeakPointer).  The other is that you must leave deletion of the object to QSharedPointer.  You’ve given it your precious pointer; it has adopted it, and while you may still use the object, it is no longer yours, so don’t go deleting it.

So you create an object, and you want to pass a reference to yourself to some other object.  How?  Like this?

QSharedPointer<MyClass> MyClass::ref() {
    return QSharedPointer<MyClass>(this); /* NO! */
}

No, not like that! That will create independent QSharedPointer instances left, right and centre; not what you want to do at all. What you need to do is create the initial reference, but then store a weak reference to it. On all future calls, you simply call the toStrongRef method of the weak reference to get a QSharedPointer that’s linked to the first one you handed out.

Then, having done this, when you create your new object, you create it with the new keyword as normal, take a QSharedPointer reference to it, then forget all about the original pointer! You can get it back by calling the data method of the pointer object.

To make it simple, here’s a base class you can inherit to do this for you.

    #include <QWeakPointer>
    #include <QSharedPointer>

    /*!
     * Self-Reference helper.  This allows for objects to maintain
     * references to "this" via the QSharedPointer reference-counting
     * smart pointers.
     */
    template<typename T>
    class SelfRef {
        public:
            /*!
             * Get a strong reference to this object.
             */
            QSharedPointer<T>    ref()
            {
                QSharedPointer<T> this_ref(this->this_weak);
                if (this_ref.isNull()) {
                    this_ref = QSharedPointer<T>((T*)this);
                    this->this_weak = this_ref.toWeakRef();
                }
                return this_ref;
            }

            /*!
             * Get a weak reference to this object.
             */
            QWeakPointer<T>        weakRef() const
            {
                return this->this_weak;
            }
        private:
            /*! A weak reference to this object */
            QWeakPointer<T>        this_weak;
    };

Example usage:

#include <iostream>
#include <stdexcept>
#include "SelfRef.h"

class Test : public SelfRef<Test> {
        public:
                Test()
                {
                        std::cout << __func__ << std::endl;
                        this->freed = false;
                }
                ~Test()
                {
                        std::cout << __func__ << std::endl;
                        this->freed = true;
                }

                void test() {
                        if (this->freed)
                                throw std::runtime_error("Already freed!");
                        std::cout
                                << "Test object is at "
                                << (void*)this
                                << std::endl;
                }

                bool                    freed;
                QSharedPointer<Test>    another;
};

int main(void) {
        Test* a = new Test();
        if (a != NULL) {
                QSharedPointer<Test> ref1 = a->ref();
                if (!ref1.isNull()) {
                        QSharedPointer<Test> ref2 = a->ref();
                        ref2->test();
                }
                ref1->test();
        }
        a->test();
        return 0;
}

Note that the line before the return is a deliberate use-after-free bug, to prove the pointer really was freed.  Also note that the boolean flag (cleared in the constructor, set in the destructor) only catches the use-after-free here because nothing overwrites that memory between the destructor running and the final call.  Don’t rely on this trick to see if your object is being called after destruction.  This is what the output session from gdb looks like:

RC=0 stuartl@rikishi /tmp/qtsp $ make CXXFLAGS=-g
g++ -c -g -I/usr/share/qt4/mkspecs/linux-g++ -I. -I/usr/include/qt4/QtCore -I/usr/include/qt4/QtGui -I/usr/include/qt4 -I. -I. -o test.o test.cpp
g++ -Wl,-O1 -o qtsp test.o    -L/usr/lib64/qt4 -lQtGui -L/usr/lib64 -L/usr/lib64/qt4 -L/usr/X11R6/lib -lQtCore -lgthread-2.0 -lglib-2.0 -lpthread 
RC=0 stuartl@rikishi /tmp/qtsp $ gdb ./qtsp 
GNU gdb (Gentoo 7.7.1 p1) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://bugs.gentoo.org/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./qtsp...done.
(gdb) r
Starting program: /tmp/qtsp/qtsp 
warning: Could not load shared library symbols for linux-vdso.so.1.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Test
Test object is at 0x555555759c90
Test object is at 0x555555759c90
~Test
terminate called after throwing an instance of 'std::runtime_error'
  what():  Already freed!

Program received signal SIGABRT, Aborted.
0x00007ffff5820775 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00007ffff5820775 in raise () from /lib64/libc.so.6
#1  0x00007ffff5821bf8 in abort () from /lib64/libc.so.6
#2  0x00007ffff610cd75 in __gnu_cxx::__verbose_terminate_handler() ()
   from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6
#3  0x00007ffff6109ec8 in ?? () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6
#4  0x00007ffff6109f15 in std::terminate() () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6
#5  0x00007ffff610a2e9 in __cxa_throw () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6
#6  0x0000555555555cea in Test::test (this=0x555555759c90) at test.cpp:20
#7  0x0000555555555315 in main () at test.cpp:41
(gdb) up
#1  0x00007ffff5821bf8 in abort () from /lib64/libc.so.6
(gdb) up
#2  0x00007ffff610cd75 in __gnu_cxx::__verbose_terminate_handler() ()
   from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6
(gdb) up
#3  0x00007ffff6109ec8 in ?? () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6
(gdb) up
#4  0x00007ffff6109f15 in std::terminate() () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6
(gdb) up
#5  0x00007ffff610a2e9 in __cxa_throw () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.3/libstdc++.so.6
(gdb) up
#6  0x0000555555555cea in Test::test (this=0x555555759c90) at test.cpp:20
20                                      throw std::runtime_error("Already freed!");
(gdb) up
#7  0x0000555555555315 in main () at test.cpp:41
41              a->test();
(gdb) quit
A debugging session is active.

        Inferior 1 [process 17906] will be killed.

Quit anyway? (y or n) y

You’ll notice it fails right on that second-last line, because the last QSharedPointer went out of scope before this point.  This is why you forget all about the pointer once you create the first QSharedPointer.

To remove the temptation to use the pointer directly, you can make all your constructors protected (or private) and use a factory that returns a QSharedPointer to your new object.

A useful macro for doing this:

/*!
 * Create an instance of ClassName with the given arguments
 * and immediately return a reference to it.
 *
 * @returns	QSharedPointer<ClassName> object
 */
#define newRef(ClassName, args ...)	\
	((new ClassName(args))->ref().dynamicCast<ClassName>())
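
For instance, a class using the macro as its factory might look like this (a hypothetical sketch; Safe is made up to show the pattern):

#include <QSharedPointer>
#include "SelfRef.h"

class Safe : public SelfRef<Safe> {
    public:
        // The only way to obtain a Safe: callers never see the raw pointer.
        static QSharedPointer<Safe> create()
        {
            return newRef(Safe);
        }
    protected:
        Safe() {}   // protected: no `new Safe()` outside the class
};

// Usage:
//   QSharedPointer<Safe> s = Safe::create();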
Dec 04 2014

Just recently I’ve been looking into asynchronous programming.

Previously I had an aversion to asynchronous code due to the ugly twisted web of callback functions that it can turn into. However, after finding that having a large number of threads blocking on locks and semaphores still manages to thrash a machine, I’ve come to the conclusion that I should put aside my feelings and try it anyway.

Our codebase is written in Python 2.7, which sadly is not new enough to have asyncio. However, we do plan to eventually move to Python 3.x when things are a bit more stable in the Debian/Ubuntu department (Ubuntu 12.04 didn’t support it, and there are a few sites that still run that; one or two still run 10.04).

That said, there’s thankfully a port of what became asyncio in the form of Trollius.

Reading through the examples still had me lost, and the documentation is not exactly extensive, in particular on coroutines and yielding. The yield keyword is not new; it’s been in Python for some time, but until now I never really understood it or how it was useful in co-operative programming.
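For the record, here’s about the smallest Trollius coroutine I could come up with (a sketch using Trollius’ documented API: it spells Python 3’s yield from as yield From(...), and returns values by raising Return):

import trollius
from trollius import From, Return

@trollius.coroutine
def fetch(delay):
    # Control returns to the event loop while we "wait".
    yield From(trollius.sleep(delay))
    raise Return('done after %.1fs' % delay)

loop = trollius.get_event_loop()
print(loop.run_until_complete(fetch(0.5)))
loop.close()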

Thankfully, Sahand Saba has written a guide on how this all works:
http://sahandsaba.com/understanding-asyncio-node-js-python-3-4.html

I might put some more notes up as I learn more, but that guide explained a lot of the fundamentals behind a lot of event loop frameworks including asyncio.

Sep 29 2014

Well, it’s been a busy year so far for security vulnerabilities in open-source projects.  HeartBleed and ShellShock haven’t been the only bugs; they’re just two high-profile ones that are getting a lot of media attention.

Now, a number of us do take sheer delight in pointing and laughing when one of the big boys, whether they be based in Redmond or California, makes a security balls-up on a big scale.  After all, people pay big dollars to use some of that software, and many are dependent on it for their livelihoods.

The question does get raised though: what do you trust more?  A piece of software whose code is a complete secret, or a piece of software anyone can audit?  Some argue the former, because anyone can find the holes in the latter and exploit them.  Some argue the latter, since anyone can find the holes and fix them.  Not being able to see the code doesn’t guarantee a lack of security issues however, and these last two headline-making bugs are definitely evidence that having the code isn’t a guarantee of a bug-free utopia.

There is no guarantee either way.

I’ve seen both open-source systems and high-end commercial systems both perform well and I’ve seen both make a dismal failure.  Bad code is bad code, no matter what the license, and even having the source available doesn’t mean you can fix it, as first one must be able to understand what its intent is.  Information Technology in particular seems to attract the technologically inept but socially capable types that are able to talk their way into nearly any position, and so you wind up with the monstrosities that you might see on The Daily WTF.  These same people lurk amongst open-source circles too, and there are those who just make an honest mistake.  Security is hard, and it can be easy to overlook a possible hole.

I run Gentoo here, have done so now since 2004 (damn, 10 years already, but I digress…).  I’ve been building my own stage 3 tarballs from scratch since 2010.  July 2010 I bought my current desktop, a 6-core AMD Phenom machine, and so combined with the 512Kbps ADSL I had at the time, it was faster for me to compile stage 3 tarballs for the various systems (i386, AMD64 and about 6 different MIPS builds) than to download the sources.  If I wanted an up-to-date stage 3, I just took my last build, ran it through Gentoo Catalyst, and out came a freshly built tarball.

I still obtain my operating systems that way.  Even though I’ve upgraded the ADSL, I still use the same scripts that used to produce the official Gentoo/MIPS media.

This means I could audit every piece of software that forms my core system.  I have the source code there, all of it.  Not many Linux users have this; most have it at arm’s reach (i.e. an apt-get source ${PACKAGE} away), or at worst, a polite email/letter to their supplier away (e.g. Netcomm will supply sources for their routers for a ~AU$10 fee).  However, I already have it.

So did I do any audits?  Have I done any audits?  No.  Ultimately I just blindly trust what comes down the wire, and to some, that is arguably no better than just blindly trusting what Apple and Microsoft produce.

Those who say that do have a point.  I didn’t pick up on HeartBleed, nor on ShellShock, and I probably haven’t spotted what will become the next headline-grabbing bug.  There’s a lot of source code that goes into a GNU/Linux system, and if I were to sit there and audit it all myself, it’d take me a lifetime.  It’d cost me a fortune to pay a team to analyse it.

However, I at least have the choice of auditing parts of it.  I’ll never be able to audit the copies of Microsoft Windows, or the one copy of Apple MacOS X I have.  For those, I’m reliant on the upstream vendors to audit, test and patch their code, I cannot do it myself.

For the open-source software though, it’s ultimately my choice.  I can do it myself, I can also pay someone to do it, I’ve simply chosen not to at this time.  This is an important distinction that the anti-open-source camp seem to forget.

As for the quality factor: I’ve spent plenty of time arguing with some piece of proprietary software, having trouble getting it to do something I need, or fixing up some cock-up caused by a bug in said software.  One option: I spend hours arguing with the software to make it work, and pay good money for the privilege.  The other: the money stays in my pocket, and in theory I can re-build the software to make it work if needed.  One will place arbitrary restrictions on how I use the software as an end user, forcing me to spend money on more expensive licenses; the other will happily let me keep pushing it until I hit my system’s technical limits.

Neither offers me any kind of warranty regarding losses I might suffer as a result of their software (I’m sorry, but US$5.00 is as good as worthless), so the money might as well stay in my pocket while I learn something about the software I use.

I remain in control of my destiny that way, and that is the way I’d like to keep it.

Apr 032014
 

Well, lately I’ve been doing some development work with OpenNebula.

We’ve recently deployed a 3-node Ceph cluster which we intend to use as our back-end storage for numerous things, among them VM storage.  Initially I thought the throughput would be “good enough”: 3 hosts, each with gigabit links, supplying VM hosts over gigabit backhaul links.

It’d be comparable to typical HDDs, or so I thought.  What I didn’t count on was the random-read latency introduced by network round-trips and protocol overheads.  When I tried Ceph with just libvirt, things weren’t too bad; I was close to saturating my 1Gbps link.  Put two VMs on and again, things hummed along.  Not blisteringly fast, mind you, but reasonable.

I got OpenNebula talking to it easily enough.  We’re running the stable version: 4.4.  There are a few things I learned about the way OpenNebula uses Ceph:

  • OpenNebula uses v1-format RBDs (the Ceph default, actually)
  • Since v1 RBDs don’t support copy-on-write (COW) clones, instance images are copied in full (see the sketch after this list)
  • Copying a 160GB image in triplicate over gigabit Ethernet takes a while, and brought our little cluster to a crawl.
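
For contrast, v2-format RBDs support copy-on-write clones taken off a protected snapshot, so an instance image need not be copied at all; roughly (pool, image and snapshot names here are hypothetical):

# Create a format-2 image (sizes are in MB), snapshot it, protect the snapshot:
rbd create --image-format 2 --size 163840 one/base-image
rbd snap create one/base-image@base
rbd snap protect one/base-image@base
# Each new instance is then a near-instant COW clone:
rbd clone one/base-image@base one/one-42-disk-0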

Naturally, we’re looking into beefing up the network links and CPUs on the storage nodes, but I’ve also been looking at ways to reduce the load on the back-end cluster.  One is through caching.  There are a couple of projects out there which allow you to combine two types of storage, using a smaller, faster block device to act as a cache for a larger, slower device.  Two which immediately come to mind: FlashCache and bcache.

bcache is on the TODO list; it has a few more knobs and dials to play with, and can share a single cache device between multiple back-end devices, so it might yet be worth investing time in.

Sébastien Han posted a guide on doing RBD caching using FlashCache, and my work has largely been based on his.  I’ve been hacking up an OpenNebula datastore-management and transfer-management driver which harnesses FlashCache and the newer v2 RBD format to produce a flexible storage subsystem for OpenNebula.

The basic concept is simple enough (a rough command sketch follows the list):

  • The Logical Volume Manager (LVM) is used to allocate slices of an SSD to use as cache for back-end RBDs.
  • For non-persistent images, a new copy-on-write clone of the base image is created
  • A flashcache composite device is produced using the LVM volume as cache and the RBD as the backend
  • KVM/QEMU/Xen uses this composite device like a regular disk
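
Done by hand, the equivalent steps look roughly like this (all device, pool and volume names here are hypothetical):

# 1. Carve an 8GB cache slice out of the SSD volume group:
lvcreate -L 8G -n cache-one-42 ssdvg
# 2. Clone the base image and map the clone as a local block device:
rbd clone one/base-image@base one/one-42-disk-0
rbd map one/one-42-disk-0     # appears as e.g. /dev/rbd/one/one-42-disk-0
# 3. Combine cache and backend into a flashcache composite device
#    (-p thru for write-through, -p back for write-back):
flashcache_create -p thru one-42 /dev/ssdvg/cache-one-42 \
        /dev/rbd/one/one-42-disk-0
# 4. The hypervisor then uses /dev/mapper/one-42 as the VM's disk.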

The initial attempt worked well for Linux VMs: initial read performance would be between 20MB/sec and 120MB/sec depending on network/storage-cluster load, and subsequent reads would then exceed 240MB/sec.  Write performance was limited to what the cluster could do, unless you used write-back mode, at which point speed picked up dramatically.

Windows proved to be a puzzle: it seems some Windows images have an odd way of accessing the disk, and this impacts performance badly.  In many cases the images were sparse in nature, with most of the content being in the first 8GB.  So I made sure to allocate 8GB chunks of my SSD, and performed what I call pre-caching: seeding the SSD with the initial 8GB (or however big the SSD partition is) of the image.

That picks up the initial boot performance by a big margin, at the cost of the image taking a little longer to deploy in the PROLOG stage.
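
The pre-caching step itself can be as simple as reading the start of the composite device, since flashcache caches blocks as they are read (device name hypothetical, as before):

# Read the first 8GB through the composite device to seed the SSD cache:
dd if=/dev/mapper/one-42 of=/dev/null bs=4M count=2048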

For those who are interested, some early code is available via git.

bcache might be worth a look-in, as it has read-ahead caching, but I haven’t tried it yet.  I’d like to split the caching subsystem out and have cache drivers, much as we have drivers for datastore managers and transfer managers.  The same concept would work for iSCSI/CLVM or Gluster storage just as it does for Ceph.

Feb 252014
 

Hi all,

This is more a note to myself on how to configure stgt to talk to a Ceph rbd. Everyone seems to recommend patching tgt-admin: this is simply not necessary. The challenge is the lax way that tgt-admin parses the configuration file.

My scenario: a VMware ESXi virtual machine host, needing to use storage on Ceph.
I have 3 storage nodes running ceph-mon and ceph-osd daemons. They also have a version of tgtd that supports Ceph. (See the ceph-extras repository.)

Create the configuration file /etc/tgt/conf.d/${CLIENT}.conf.  (I’m putting all the targets for ${CLIENT} here.)

# Target naming: iqn.yyyy-mm.backwards.domain.your:client.target
# where yyyy-mm: year and month of target creation
# backwards.domain.your: Your domain name; written backwards.
# client.target: A name for the target, since it's for one client here I name it
# as the client's host name then give the rest some descriptive title.
<target iqn.2014-02.domain.my:my-client.my-target-name>
    driver iscsi
    bs-type rbd
    backing-store pool-name/rbd-name
    initiator-address ip.of.my.client
</target>

For better or worse, I run the tgt daemon on the Ceph nodes themselves.  Multipath I’m not sure about at this point; I’ve set up the targets on all of my Ceph nodes so I can connect to any of them, but I have not tested this yet.

To enable that target:

# tgt-admin -v -e

Then to verify:

# tgt-admin -s

You should see your LUNs listed.

Nov 122013
 

Hi all,

It’s not often I have a whinge about something, but this problem has been bugging me more than somewhat of late.  I’m in the process of setting up an OpenStack cluster at work.  As the underlying OS we’ve chosen Ubuntu Linux, which is fine: Ubuntu is a quite stable, reliable and well-supported platform.

One of my pet peeves though, is when some package manager decides to get lazy.  Now, those of us who have been around the Linux scene have probably discovered RPM dependency hell… and the smug Debian users who tell us that Debian doesn’t do this.

Ho ho, errm… no, when APT wants to go into dummy mode, it does so with style:

Nov 12 05:32:27 in-target: Setting up python3-update-manager (1:0.186.2) ...
Nov 12 05:32:27 in-target: Setting up python3-distupgrade (1:0.192.13) ...
Nov 12 05:32:27 in-target: Setting up ubuntu-release-upgrader-core 
(1:0.192.13) ...
Nov 12 05:32:27 in-target: Setting up update-manager-core (1:0.186.2) ...
Nov 12 05:32:27 in-target: Processing triggers for libc-bin ...
Nov 12 05:32:27 in-target: ldconfig deferred processing now taking place
Nov 12 05:32:27 in-target: Processing triggers for initramfs-tools ...
Nov 12 05:32:27 in-target: Processing triggers for ca-certificates ...
Nov 12 05:32:27 in-target: Updating certificates in /etc/ssl/certs... 
Nov 12 05:32:29 in-target: 158 added, 0 removed; done.
Nov 12 05:32:29 in-target: Running hooks in /etc/ca-certificates/update.d....
Nov 12 05:32:29 in-target: done.
Nov 12 05:32:29 in-target: Processing triggers for sgml-base ...
Nov 12 05:32:29 pkgsel: installing additional packages
Nov 12 05:32:29 in-target: Reading package lists...
Nov 12 05:32:29 in-target: 
Nov 12 05:32:29 in-target: Building dependency tree...
Nov 12 05:32:30 in-target: 
Nov 12 05:32:30 in-target: Reading state information...
Nov 12 05:32:30 in-target: 
Nov 12 05:32:30 in-target: openssh-server is already the newest version.
Nov 12 05:32:30 in-target: Some packages could not be installed. This may 
mean that you have
Nov 12 05:32:30 in-target: requested an impossible situation or if you are 
using the unstable
Nov 12 05:32:30 in-target: distribution that some required packages have not 
yet been created
Nov 12 05:32:30 in-target: or been moved out of Incoming.
Nov 12 05:32:30 in-target: The following information may help to resolve the 
situation:
Nov 12 05:32:30 in-target: 
Nov 12 05:32:30 in-target: The following packages have unmet dependencies:
Nov 12 05:32:30 in-target:  mariadb-galera-server : Depends: 
mariadb-galera-server-5.5 (= 5.5.33a+maria-1~raring) but it is not going to 
be installed

Mmmm, great, not going to be installed. May I ask why not? No, I’ll just drop to a shell and do it myself then.

Nov 12 05:32:30 in-target: E: Unable to correct problems, you have held 
broken packages.

Now this is probably one of my most hated things in computing: when a piece of software accuses YOU of doing something that you haven’t.  Excuse me… I have held broken packages?  I simply performed a fresh install, then told you to do an install!

So let’s have a closer look.

Nov 12 05:32:30 main-menu[20801]: WARNING **: Configuring 'pkgsel' failed 
with error code 100
Nov 12 05:32:30 main-menu[20801]: WARNING **: Menu item 'pkgsel' failed.
Nov 12 05:37:38 main-menu[20801]: INFO: Modifying debconf priority limit from 
'high' to 'medium'
Nov 12 05:37:38 debconf: Setting debconf/priority to medium
Nov 12 05:37:38 main-menu[20801]: DEBUG: resolver (ext2-modules): package 
doesn't exist (ignored)
Nov 12 05:37:40 main-menu[20801]: INFO: Menu item 'di-utils-shell' selected
~ # chroot /target
chroot: can't execute '/bin/network-console': No such file or directory
~ # chroot /target bin/bash

We give it a shot ourselves to see the error more clearly.

root@test-mgmt0:/# apt-get install mariadb-galera-server
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 mariadb-galera-server : Depends: mariadb-galera-server-5.5 (= 
5.5.33a+maria-1~raring) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

Fine, so we’ll try installing that instead then.

root@test-mgmt0:/# apt-get install mariadb-galera-server-5.5
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 mariadb-galera-server-5.5 : Depends: mariadb-client-5.5 (>= 
5.5.33a+maria-1~raring) but it is not going to be installed
                             Depends: libmariadbclient18 (>= 
5.5.33a+maria-1~raring) but it is not going to be installed
                             PreDepends: mariadb-common but it is not going 
to be installed
E: Unable to correct problems, you have held broken packages.

Okay, closer, so we need to install those too.  But hang on, isn’t that apt’s responsibility to know this stuff?  (Which it clearly does.)

Also note we don’t get told why they aren’t going to be installed.  apt refuses to install the packages “just because”; no reason given.
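
(In hindsight, apt-cache is the quickest way to drag an answer out of it; for each package this prints the installed version, the candidate version, and which repository each version comes from:)

apt-cache policy mysql-common libmysqlclient18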

We try adding in the deps to our list.

root@test-mgmt0:/# apt-get install mariadb-galera-server-5.5 
mariadb-client-5.5
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 mariadb-client-5.5 : Depends: libdbd-mysql-perl (>= 1.2202) but it is not 
going to be installed
                      Depends: mariadb-common but it is not going to be 
installed
                      Depends: libmariadbclient18 (>= 5.5.33a+maria-1~raring) 
but it is not going to be installed
                      Depends: mariadb-client-core-5.5 (>= 
5.5.33a+maria-1~raring) but it is not going to be installed
 mariadb-galera-server-5.5 : Depends: libmariadbclient18 (>= 
5.5.33a+maria-1~raring) but it is not going to be installed
                             PreDepends: mariadb-common but it is not going 
to be installed
E: Unable to correct problems, you have held broken packages.

Okay, some more deps, we’ll add those…

root@test-mgmt0:/# apt-get install mariadb-galera-server-5.5 
mariadb-client-5.5 libmariadbclient18
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libmariadbclient18 : Depends: mariadb-common but it is not going to be 
installed
                      Depends: libmysqlclient18 (= 5.5.33a+maria-1~raring) 
but it is not going to be installed
 mariadb-client-5.5 : Depends: libdbd-mysql-perl (>= 1.2202) but it is not 
going to be installed
                      Depends: mariadb-common but it is not going to be 
installed
                      Depends: mariadb-client-core-5.5 (>= 
5.5.33a+maria-1~raring) but it is not going to be installed
 mariadb-galera-server-5.5 : PreDepends: mariadb-common but it is not going 
to be installed
E: Unable to correct problems, you have held broken packages.

Wash-rinse-repeat!

root@test-mgmt0:/# apt-get install mariadb-galera-server-5.5 
mariadb-client-5.5 libmariadbclient18 mariadb-common
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libmariadbclient18 : Depends: libmysqlclient18 (= 5.5.33a+maria-1~raring) 
but it is not going to be installed
 mariadb-client-5.5 : Depends: libdbd-mysql-perl (>= 1.2202) but it is not 
going to be installed
 mariadb-common : Depends: mysql-common but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
root@test-mgmt0:/# apt-get install mariadb-galera-server-5.5 
mariadb-client-5.5 libmariadbclient18 mariadb-common libdbd-mysql-perl 
mysql-common
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libmariadbclient18 : Depends: libmysqlclient18 (= 5.5.33a+maria-1~raring) 
but 5.5.34-0ubuntu0.13.04.1 is to be installed
 mariadb-client-5.5 : Depends: mariadb-client-core-5.5 (>= 
5.5.33a+maria-1~raring) but it is not going to be installed
 mysql-common : Breaks: mysql-client-5.1
                Breaks: mysql-server-core-5.1
E: Unable to correct problems, you have held broken packages.

Aha, so there’s a newer version in the Ubuntu repository that’s overriding ours. Brilliant. Ohh, and there’s a mysql-client binary too, but it won’t tell me what version it’s trying for.

Looking in the repository myself I spot a package named mysql-common_5.5.33a+maria-1~raring_all.deb. That is likely our culprit, so I try version 5.5.33a+maria-1~raring.

root@test-mgmt0:/# apt-get install mariadb-galera-server-5.5 
mariadb-client-5.5 libmariadbclient18 mariadb-common libdbd-mysql-perl 
mysql-common=5.5.33a+maria-1~raring libmysqlclient18=5.5.33a+maria-1~raring 
mariadb-client-core-5.5
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  galera libaio1 libdbi-perl libhtml-template-perl libnet-daemon-perl 
libplrpc-perl

Bingo!
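
As an aside: on an already-installed system, pinning the MariaDB repository should avoid this tug-of-war entirely, with something like the following in /etc/apt/preferences.d/mariadb (match the origin host to whatever apt-cache policy reports for the repository):

# Prefer the MariaDB mirror's packages over Ubuntu's, even when older:
Package: *
Pin: origin mirror.aarnet.edu.au
Pin-Priority: 1001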

So, for those wanting to pre-seed MariaDB Cluster 5.5, I used the following in my preseed file:

# MariaDB 5.5 repository list - created 2013-11-12 05:20 UTC
# http://mariadb.org/mariadb/repositories/
d-i apt-setup/local3/repository string \
        deb http://mirror.aarnet.edu.au/pub/MariaDB/repo/5.5/ubuntu raring main
d-i apt-setup/local3/comment string \
        "MariaDB repository"
d-i pkgsel/include string mariadb-galera-server-5.5 \
        mariadb-client-5.5 libmariadbclient18 mariadb-common \
        libdbd-mysql-perl mysql-common=5.5.33a+maria-1~raring \
        libmysqlclient18=5.5.33a+maria-1~raring mariadb-client-core-5.5 \
        galera

# For unattended installation, we set the password here
mysql-server mysql-server/root_password select DatabaseRootPassword
mysql-server mysql-server/root_password_again select DatabaseRootPassword

So yeah, next time someone mentions this:

Gentoo: Increasing blood pressure since 1999.

…remember: it doesn’t just apply to Gentoo!