Nov 28 2018

The binutils linker can generate a map file when it links your binaries (pass -Wl,-Map=firmware.map to gcc, or -Map firmware.map to ld directly).  This provides a lot of detail on how the functions and variables have been arranged in the program memory space, which is crucial information when dealing with embedded devices.

Unfortunately, looking around I didn’t see any decent tools for extracting this information, so I wound up cooking up my own Python script to do it.  It’s very crude: it just takes a map file on standard input, and dumps a report to standard output.  It seems to work okay with ARM, and sorta works with AVR, but might need some more work.

import re
from sys import stdin, stdout

WIDTH = 8
SPARSE_SKIP = 4*WIDTH
SYMBOL_ONLY_RE = re.compile(\
        r'^ \.([a-zA-Z0-9]+)\.([a-zA-Z0-9_\.]+)$')
ADDR_ONLY_RE = re.compile(\
        r'^ {16}(0x[0-9a-f]+) +(0x[0-9a-f]+) (.*)$')
ADDR_CXXSYM_RE = re.compile(\
        r'^ {16}(0x[0-9a-f]+) {16}([a-zA-Z_][a-zA-Z0-9_:()*\[\]\.]+)$')
SYMBOL_ADDR_RE = re.compile(\
        r'^ \.([a-zA-Z0-9]+)\.([a-zA-Z0-9_\.]+) +(0x[0-9a-f]+) +(0x[0-9a-f]+) (.*)$')
FILL_RE = re.compile(r'^ \*fill\* +(0x[0-9a-f]+) +(0x[0-9a-f]+) +([0-9a-f]+)$')
REGION_RE = re.compile(r'^([a-zA-Z0-9_]+) +(0x[0-9a-f]+) (0x[0-9a-f]+) ([rwx]+)$')

regions = []
last = None
objects = []

def on_symbol_only(match):
    global last
    if match:
        (section, symbol) = match.groups()
        last = {
            'type': 'symbol',
            'section': section,
            'symbol': symbol
        }
        objects.append(last)
    return match


def on_addr_only(match):
    if match:
        (address, size, loc) = match.groups()
        if last is None:
            return match

        assert last['type'] == 'symbol'
        assert 'address' not in last
        assert 'size' not in last
        assert 'loc' not in last
        last['address'] = int(address, base=16)
        last['size'] = int(size, base=16)
        last['loc'] = loc
    return match


def on_symbol_addr(match):
    global last
    if match:
        (section, symbol, address, size, loc) = match.groups()
        last = {
            'type': 'symbol',
            'section': section,
            'symbol': symbol,
            'address': int(address, base=16),
            'size': int(size, base=16),
            'loc': loc
        }
        objects.append(last)
    return match


def on_addr_cxxsym(match):
    if match:
        (address, cxxsym) = match.groups()
        if last is None:
            return match

        assert last['type'] == 'symbol'
        if last['address'] != int(address, base=16):
            return match
        if 'cxxsyms' not in last:
            last['cxxsyms'] = set()
        last['cxxsyms'].add(cxxsym)
    return match


def on_fill(match):
    if match:
        (address, size, data) = match.groups()
        objects.append({
            'type': 'fill',
            'address': int(address, base=16),
            'size': int(size, base=16),
            'data': int(data, base=16)
        })
    return match

def on_region(match):
    if match:
        (region, origin, length, attrs) = match.groups()
        regions.append({
            'region': region,
            'address': int(origin, base=16),
            'size': int(length, base=16),
            'attrs': attrs
        })
    return match


for line in stdin:
    line = line.rstrip()

    try:
        if line == 'Memory Configuration':
            break
    except:
        print ('# Failed at line %r' % line)
        raise


for line in stdin:
    line = line.rstrip()

    try:
        if line == 'Linker script and memory map':
            break
        if on_region(REGION_RE.match(line)):
            continue
    except:
        print ('# Failed at line %r' % line)
        raise


for line in stdin:
    line = line.rstrip()
    try:
        if on_symbol_only(SYMBOL_ONLY_RE.match(line)):
            continue

        if on_addr_only(ADDR_ONLY_RE.match(line)):
            continue

        if on_addr_cxxsym(ADDR_CXXSYM_RE.match(line)):
            continue

        if on_fill(FILL_RE.match(line)):
            continue

        last = None
    except:
        print ('Failure context:')
        print ('# last = %r' % last)
        print ('# line = %r' % line)
        raise

for region in regions:
    region['end'] = region['address'] + region['size']
regions.sort(key=lambda r : r['address'])

for obj in objects:
    if 'cxxsyms' in obj:
        obj['cxxsyms'] = list(sorted(obj['cxxsyms']))

    if ('address' in obj) and ('size' in obj):
        obj['end'] = obj['address'] + obj['size']

        for region in regions:
            if (obj['end'] <= region['end']) and \
                    (obj['address'] >= region['address']):
                obj['region'] = region['region']
                break
objects.sort(key=lambda o : o.get('address', -1))

for region in regions:
    address = region['address']
    sym_idx = 0
    row_rem = 0
    row_syms = []
    seen = set()

    region_objects = list(filter(
        lambda obj : obj.get('region') == region['region'],
        objects))
    if not region_objects:
        continue

    for obj in region_objects:
        if obj['type'] == 'symbol':
            sym = '%02d' % (sym_idx % 100)
            sym_idx += 1
        elif obj['type'] == 'fill':
            sym = '--'
        else:
            sym = '??'

        while address < obj['address']:
            if not row_rem:
                if (obj['address'] - address) > SPARSE_SKIP:
                    end = obj['address'] - (obj['address'] % SPARSE_SKIP)
                    stdout.write('\n%16s 0x%08x -- 0x%08x (%d bytes)' % (
                        region['region'], address, end, end - address))
                    address = end
                    continue

                stdout.write('\n%16s 0x%08x: ' % (region['region'], address))
                row_rem = WIDTH

            stdout.write(' ..')
            row_rem -= 1
            address += 1

        while address < obj['end']:
            if not row_rem:
                if row_syms:
                    stdout.write(' | %s\n' % row_syms.pop(0))
                else:
                    stdout.write('\n')
                stdout.write('%16s 0x%08x: ' % (region['region'], address))
                row_rem = WIDTH

            stdout.write(' %s' % sym)
            row_rem -= 1
            address += 1
            if ('symbol' in obj) and (obj['symbol'] not in seen):
                row_syms.append('%s: %s' % (sym, obj['symbol']))
                seen.add(obj['symbol'])

            if not row_rem:
                if row_syms:
                    stdout.write(' | %s\n' % row_syms.pop(0))
                else:
                    stdout.write('\n')
                stdout.write('%16s 0x%08x: ' % (region['region'], address))
                row_rem = WIDTH

    while row_rem:
        stdout.write(' ..')
        row_rem -= 1
        address += 1

    stdout.write('\n%16s 0x%08x -- 0x%08x (%d bytes)\n' % (
        region['region'], address, region['end'], region['end'] - address))
    stdout.write('%16s %d bytes remaining\n' % (
        region['region'], region['end'] - address))


Jul 23 2018

Lately, I’ve been doing a lot of development work on Tridium Niagara kit.  The Tridium platform is fundamentally built on Sun^WOracle’s Java environment, and is very popular in the building management industry.  There’s an estimate of over 600,000 JACE devices (building management controllers) deployed worldwide, so I can fully understand why my workplace is chasing them.

That means coming to grips with their environment, and getting it to talk to ours.  Officially, VRT is a Debian/Ubuntu shop.  They used to dabble with Red Hat years ago, back when VRT and Red Hat were next-door neighbours (in Gardner Close, Milton), but VRT switched to Ubuntu around 2008 after a brief flirt with Gentoo.  Thus, most of our tooling assumes a Debian-based system.

Docker CE on Debian and Ubuntu is a snap.  However, Tridium, it would seem, are Red Hat fans, and only support their development environment on Microsoft Windows (yes, shudder) or Red Hat Enterprise Linux.  Thus, we have a RHEL 7.3 VM we pass around when we’re doing VM development.  I figured since we’re trying to link Niagara to WideSky, it would be nice to be able to deploy WideSky on RHEL.

WideSky uses Docker as the basis for its deployment, so this sounded simple enough.  Install Docker and docker-compose, throw a bog-standard deployment in there, docker-compose up -d, off we go.

Not so fast.

While there’s Docker EE for RHEL, budget is tight and we really don’t need the support as this isn’t a “production” instance as such.  If the VM gets sick we just roll it back to a known good version and continue from there.  It doesn’t make sense to spend money on purchasing Docker EE.  There’s a CentOS version of Docker CE, and even unofficial instructions on how to shoehorn this into RHEL.  I dutifully followed these, but then hit a road-block with container-selinux: the repository no longer has that version.

Rather than looking for what version they have now, or playing Russian Roulette hunting for a random RPM from some mirror site (been there, done that many moons ago, before I knew better)… a better plan was to grab the sources and sic rpmbuild onto them so we get a RHEL-native binary.

Building container-selinux on RHEL

  1. Begin by installing dependencies:
    # yum install -y selinux-policy selinux-policy-devel rpm-build rpm-devel git
  2. Download the sources for the RPM:
    $ git clone https://git.centos.org/r/rpms/container-selinux.git
    $ cd container-selinux
    $ git checkout c7-alt
    $ cd SPECS
  3. Have a look at the .spec file to see where it expects to source the sources from; near the top of the file I downloaded, I saw:
    %global git0 https://github.com/projectatomic/%{name}
    %global commit0 dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  4. Fetch the sources, then check out that commit:
    $ git clone https://github.com/projectatomic/container-selinux
    $ cd container-selinux
    $ git checkout dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  5. Rename the check-out directory as container-selinux-${GIT_COMMIT_ID}
    $ cd ..
    $ mv container-selinux container-selinux-dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  6. Package it up into a tarball, excluding the .git directory, and plop that file in ~/rpmbuild/SOURCES:
    $ tar --exclude container-selinux-dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca/.git \
    -czvf ~/rpmbuild/SOURCES/container-selinux-dfb449b.tar.gz \
    container-selinux-dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  7. Build!
    $ rpmbuild -ba container-selinux.spec

All going to plan, you should have a shiny new RPM file in ~/rpmbuild/RPMS.  Install that, then you can proceed with installing the CentOS version of Docker CE.  If you’re doing this for a production environment and absolutely must use Docker CE, I’d suggest taking the source RPMs for Docker CE and building those on RHEL rather than using raw CentOS binaries, but to each their own.

# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-693.11.1.el7.x86_64
Operating System: Red Hat Enterprise Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.702GiB
Name: localhost.localdomain
ID: YVHJ:UXQV:TBAS:E5MH:B4GL:VT2H:A2BW:MQMF:3AGA:FBBX:MINO:24Z6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Jul 22 2018

So, on the bike, I use a portable GPS to keep track of my speed and to track the mileage done on the bike so I know when to next put it in for service. Originally I just relied on the trip counter in the GPS, but then found that this could develop quite an error if left to tick over for a few months.

Thus, I wrote a simple CGI application in Perl and SQLite3 that would track the odometer readings. Plain, simple, and it’s worked quite well, but remembering to punch in the current odometer reading is a chore, and my stats are only as granular as I submit: if I want to see what distance I did on a particular day, I either have to have had the foresight to store readings at the start and end of that day, or I’m stuffed.

I also keep the GPX tracklogs. While the Garmin 650 is not great at handling lots of tracklogs (and for some moronic reason, they name the files “Day DD-MMM-YY HH.MM.SS.gpx”, not something sensible like “YYYY-MM-DDTHH-MM-SS.gpx”), it’s good enough that I can periodically siphon off the track logs for storage on my laptop. I then have a record of where I’ve been.

Theoretically, this also has the distance travelled; I could make a service that just consumes the GPX files, and tallies up the distances that way (a rough sketch of that idea is below).  Maybe even visualise heat-map style, where I go most.  (No prizes for guessing “work” … but where else?)
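
For what it’s worth, a quick tally along those lines can be done with the gpxpy library; the tracklogs/ directory name here is hypothetical:

import glob
import gpxpy

total_m = 0.0
for path in sorted(glob.glob('tracklogs/*.gpx')):
    with open(path) as f:
        # length_3d() walks the track points and returns metres
        # (or None for an empty track).
        total_m += gpxpy.parse(f).length_3d() or 0.0

print('Total: %.1f km' % (total_m / 1000.0))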

The existing system uses SQLite, and specifically, its views, as poor man’s stored procedures. It’s hacky, inefficient, and sooner or later I’ll have performance problems. PostGIS is an extension onto PostgreSQL which supports a large number of spatial operations, including finding the length of a series of points, which is exactly the problem I’m trying to solve right now. The catch is, how do you import the data?

Enter GDAL

GDAL is a library of geographic functions for answering these kinds of questions. It ships with a utility ogr2ogr, which can take geographic information in a variety of formats, and convert to a variety of output formats. Crucially, this tool supports consuming GPX files and writing to a PostGIS database.

Loading one file is easy enough:

$ ogr2ogr -oo GPX_ELE_AS_25D=YES \
  -dim 3 \
  -gt 65536 \
  -lco GEOM_TYPE=geography \
  -preserve_fid \
  -f PostgreSQL \
  "PG:dbname=yourdb" yourfile.gpx \
  tracks track_points

The arguments here were found by trial-and-error.  Specifically, -oo GPX_ELE_AS_25D=YES and -dim 3 tell ogr2ogr to preserve the elevation in the point information (as well as keeping a copy of it in the ele column). -lco GEOM_TYPE=geography tells ogr2ogr to use the geography data type in PostGIS.

Look in the database, and you’ll see two tables, tracks and track_points.  Sadly, you don’t get to choose the names of these (not easily, anyway; there is -nln, but it will then create one table with the given name, put the tracks in it, then blow it away and replace it with a table of the same name containing points), and there are no foreign keys between the two.

The fun starts when you try to import a second GPX file.  Run that command again, and because of -preserve_fid, you’ll get a primary key clash.  If you drop -preserve_fid, then track_fid gets set to 0 for all points.  Useless.

Importing many GPX files

Out of the box, this just wasn’t going to fly, so we needed to do things a little different.  Firstly, I duplicated the schema that GDAL creates, creating my own tables which will ultimately store the data.  I then used a wrapper shell script that calls psql before and after ogr2ogr so I can re-map the primary keys to maintain relationships.

Schema SQL

CREATE SEQUENCE public.gpx_points_ogc_fid_seq
    INCREMENT 1
    START 1
    MINVALUE 1
    MAXVALUE 2147483647
    CACHE 1;

CREATE SEQUENCE public.gpx_tracks_ogc_fid_seq
    INCREMENT 1
    START 1
    MINVALUE 1
    MAXVALUE 2147483647
    CACHE 1;

CREATE TABLE public.gpx_tracks
(
    ogc_fid integer NOT NULL,
    name character varying COLLATE pg_catalog."default",
    cmt character varying COLLATE pg_catalog."default",
    "desc" character varying COLLATE pg_catalog."default",
    src character varying COLLATE pg_catalog."default",
    link1_href character varying COLLATE pg_catalog."default",
    link1_text character varying COLLATE pg_catalog."default",
    link1_type character varying COLLATE pg_catalog."default",
    link2_href character varying COLLATE pg_catalog."default",
    link2_text character varying COLLATE pg_catalog."default",
    link2_type character varying COLLATE pg_catalog."default",
    "number" integer,
    type character varying COLLATE pg_catalog."default",
    gpxx_trackextension character varying COLLATE pg_catalog."default",
    the_geog geography(MultiLineStringZ,4326),
    CONSTRAINT gpx_tracks_pkey PRIMARY KEY (ogc_fid)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

CREATE TABLE public.gpx_points
(
    ogc_fid integer NOT NULL,
    track_fid integer,
    track_seg_id integer,
    track_seg_point_id integer,
    ele double precision,
    "time" timestamp with time zone,
    magvar double precision,
    geoidheight double precision,
    name character varying COLLATE pg_catalog."default",
    cmt character varying COLLATE pg_catalog."default",
    "desc" character varying COLLATE pg_catalog."default",
    src character varying COLLATE pg_catalog."default",
    link1_href character varying COLLATE pg_catalog."default",
    link1_text character varying COLLATE pg_catalog."default",
    link1_type character varying COLLATE pg_catalog."default",
    link2_href character varying COLLATE pg_catalog."default",
    link2_text character varying COLLATE pg_catalog."default",
    link2_type character varying COLLATE pg_catalog."default",
    sym character varying COLLATE pg_catalog."default",
    type character varying COLLATE pg_catalog."default",
    fix character varying COLLATE pg_catalog."default",
    sat integer,
    hdop double precision,
    vdop double precision,
    pdop double precision,
    ageofdgpsdata double precision,
    dgpsid integer,
    the_geog geography(PointZ,4326),
    CONSTRAINT gpx_points_pkey PRIMARY KEY (ogc_fid),
    CONSTRAINT gpx_points_track_fid_fkey FOREIGN KEY (track_fid)
        REFERENCES public.gpx_tracks (ogc_fid) MATCH SIMPLE
        ON UPDATE RESTRICT
        ON DELETE RESTRICT
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

The wrapper script

#!/bin/sh

DB=tracklog

for f in "$@"; do
        psql tracklog <<EOF
DROP TABLE IF EXISTS tracks;
DROP TABLE IF EXISTS track_points;
EOF
        ogr2ogr -oo GPX_ELE_AS_25D=YES \
                -dim 3 \
                -gt 65536 \
                -lco SPATIAL_INDEX=FALSE \
                -lco GEOM_TYPE=geography \
                -overwrite \
                -preserve_fid \
                -f PostgreSQL \
                "PG:dbname=${DB}" "$f" \
                tracks track_points

        # Re-map FIDs then insert into real tables.
        psql tracklog <<EOF
        CREATE TEMPORARY TABLE track_fids AS
        SELECT  ogc_fid AS orig_fid,
                nextval('gpx_tracks_ogc_fid_seq') AS ogc_fid
        FROM    tracks;

        CREATE TEMPORARY TABLE point_fids AS
        SELECT  ogc_fid AS orig_fid,
                nextval('gpx_points_ogc_fid_seq') AS ogc_fid
        FROM    track_points;

        INSERT INTO gpx_tracks
        SELECT  track_fids.ogc_fid AS ogc_fid,
                tracks.name as name,
                tracks.cmt as cmt,
                tracks."desc" as "desc",
                tracks.src as src,
                tracks.link1_href as link1_href,
                tracks.link1_text as link1_text,
                tracks.link1_type as link1_type,
                tracks.link2_href as link2_href,
                tracks.link2_text as link2_text,
                tracks.link2_type as link2_type,
                tracks."number" as "number",
                tracks.type as type,
                tracks.gpxx_trackextension as gpxx_trackextension,
                tracks.the_geog as the_geog
        FROM    track_fids, tracks
        WHERE   track_fids.orig_fid=tracks.ogc_fid;

        INSERT INTO gpx_points
        SELECT  point_fids.ogc_fid AS ogc_fid,
                track_fids.ogc_fid AS track_fid,
                track_points.track_seg_id AS track_seg_id,
                track_points.track_seg_point_id AS track_seg_point_id,
                track_points.ele AS ele,
                track_points."time" AS "time",
                track_points.magvar AS magvar,
                track_points.geoidheight AS geoidheight,
                track_points.name AS name,
                track_points.cmt AS cmt,
                track_points."desc" AS "desc",
                track_points.src AS src,
                track_points.link1_href AS link1_href,
                track_points.link1_text AS link1_text,
                track_points.link1_type AS link1_type,
                track_points.link2_href AS link2_href,
                track_points.link2_text AS link2_text,
                track_points.link2_type AS link2_type,
                track_points.sym AS sym,
                track_points.type AS type,
                track_points.fix AS fix,
                track_points.sat AS sat,
                track_points.hdop AS hdop,
                track_points.vdop AS vdop,
                track_points.pdop AS pdop,
                track_points.ageofdgpsdata AS ageofdgpsdata,
                track_points.dgpsid AS dgpsid,
                track_points.the_geog AS the_geog
        FROM    track_points, track_fids, point_fids
        WHERE   point_fids.orig_fid=track_points.ogc_fid
        AND     track_fids.orig_fid=track_points.track_fid;

        DROP TABLE tracks;
        DROP TABLE track_points;
        DROP TABLE track_fids;
        DROP TABLE point_fids;
EOF
done

Getting the length of a track

Having imported all the data, we can do something like this:

SELECT ogc_fid, name,
  ST_Length(the_geog, false)/1000 as dist_in_km
FROM gpx_tracks order by ogc_fid desc limit 10;

and get this:

1754  Day 20-JUL-18 18:09:02   9.83689686312541
1753  Day 15-JUL-18 09:36:16   5.75919119415676
1752  Day 14-JUL-18 17:12:24   0.071734341651265
1751  Day 14-JUL-18 17:12:23   0.0729574875289383
1750  Day 13-JUL-18 08:13:32   9.88420745610283
1749  Day 06-JUL-18 09:00:32   9.81221316219109
1748  Day 30-JUN-18 01:11:26   9.77607205972035
1747  Day 23-JUN-18 05:02:04  19.6368592034475
1746  Day 22-JUN-18 18:03:37   9.91964760346248
1745  Day 12-JUN-18 21:22:26   0.0884092391531763

Visualisation with QGIS

Turns out, this is straightforward…

  1. In your workspace, there’s a tree with the different layer types you can add, including PostGIS… right-click on this and select New Connection… fill in the details for your PostgreSQL database.
  2. Below that is XYZ Tiles…; right-click again, select New Connection for OpenStreetMap, and use the URL https://a.tile.openstreetmap.org/{z}/{x}/{y}.png (also, see their policy).
  3. Drag the OpenStreetMap connection to your layers.
  4. Expand the PostGIS connection you just made, and look for the gpx_tracks table, drag this on top of your OpenStreetMap layer.

Below is everywhere I’ve been with the GPS tracklog running.  Much of what you see is the big loop a few of us did in 2012, including my trip to Ballarat for the 2012 LCA.

If I zoom in on Brisbane, unsurprisingly, some areas show up very clearly as being common haunts for me:

A bit of SQL voodoo, and I come up with this:

In orange is the territory covered on the Boulder (minus what was covered before I got the GPS), in blue the territory covered on the Talon 29 ER 0, and in red, on my current commuter (Toughroad SLR2).

Jun 04 2018

So, recently there was a task at my work to review enabling gzip compression on our nginx HTTP servers to compress the traffic.

Now, in principle it seemed like a good idea, but having been exposed to the security world a little bit, I was familiar with some of the issues with this, notably, CRIME, BEAST and BREACH.  Of these, only BREACH is unmitigated at the browser end.

The suggested mitigations, in order of effectiveness, are:

  1. Disabling HTTP compression
  2. Separating secrets from user input
  3. Randomizing secrets per request
  4. Masking secrets (effectively randomizing by XORing with a random secret per request)
  5. Protecting vulnerable pages with CSRF
  6. Length hiding (by adding random number of bytes to the responses)
  7. Rate-limiting the requests

Now, we’ve effectively been doing (1) by default… but (2), (3) and (4) make me wonder how protocols like OAuth2 are supposed to work.  That got me thinking about a little toy I was given for attending the 2011 linux.conf.au… it’s a YubiKey, one of the early model ones.  The way it operates is that Yubico’s servers, and your key, share a secret AES key (I think it’s AES-128), some static data, and a counter.  Each time you generate a one-time password with the key, it increments its counter, encrypts the counter value along with the static data, then encodes the output as a hexdump using a keyboard-agnostic encoding scheme to be “typed” into the computer.

Yubico receive this token, decrypt it, then compare the counter value.  If it checks out, and is greater than the existing counter value at their end, they accept it, and store that new counter value.

That made me wonder if the same idea could work for requests from a browser… that is, you agree on a shared secret over HTTPS, or using Diffie-Hellman.  You synchronise counters (either using your new shared secret, or over HTTPS at the same time as you make the shared key), then from there on, each request the browser makes to your API is accompanied by a one-time token, generated by encrypting the counter value and the static data, sent in the HTTP headers.
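
To pin the idea down, here’s a minimal sketch of both ends in Python (standing in for whatever the browser-side JavaScript would do), assuming a pre-shared AES-128 key and the pyca/cryptography library; the key, static data and token framing are all illustrative:

import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes(16)               # shared secret; agreed over HTTPS in practice
STATIC_ID = b'client01'       # 8 bytes of static data, as in the YubiKey

def make_token(counter):
    """Client: encrypt (static data || counter) into a hex token."""
    block = STATIC_ID + struct.pack('>Q', counter)    # one 16-byte AES block
    enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
    return (enc.update(block) + enc.finalize()).hex()

def check_token(token, last_counter):
    """Server: decrypt, check the static data, require the counter to advance."""
    dec = Cipher(algorithms.AES(KEY), modes.ECB()).decryptor()
    block = dec.update(bytes.fromhex(token)) + dec.finalize()
    static, counter = block[:8], struct.unpack('>Q', block[8:])[0]
    if static != STATIC_ID or counter <= last_counter:
        return None           # replayed, stale or forged token
    return counter            # store this as the new high-water mark

ECB is only tolerable here because exactly one block is ever encrypted per token, which, as I understand it, mirrors what the YubiKey itself does.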

There are a few libraries that do AES in the browser, such as JSAES (GPLv3) and aes-js (MIT).

This is going to be expensive to do, so a compromise might be to use this every N requests, where N is small enough that BREACH doesn’t have a sufficient number of requests from which it can derive a secret.  By the time it figures out that secret, the token is expired.  Or they could be bulk-generated at the browser end in the background so there’s a ready supply.

I haven’t gone through the full ins and outs of this, and I’m no security expert, but that’s just some initial thinking.

May 19 2018

Recently, a new project sprang up on the Hackaday.io site; it was for the KiteBoard, an open-source cellular development platform.  In a nutshell, this is a single-board-computer that embeds a full mobile system-on-chip and runs the Android operating system.  The project is seeking crowd funding for the second version of this platform.

With it, you can build smartphones (of course), tablets, tele-presence robots, or really, any project which can benefit from a beefy CPU with a built-in cellular modem.  It comes as a kit, which you then assemble yourself.  The level of difficulty in assembly is no greater than that of assembling a desktop PC: the circuit boards are pre-populated, you just need to connect them together.  In this version, some soldering of pushbuttons and wires is needed: all through-hole components.  No reflow ovens or solder paste is necessary here, an 8-year-old could do it.

The break-out board for the CPU card features, in addition to connections for all the usual cellular phone signals (e.g. earpiece, microphone, button inputs), a GPIO header that follows the de-facto standard “Raspberry Pi” interface, allowing many Raspberry Pi “hats” to plug directly into this board.

That lends itself greatly to expandability.  Want an eInk or OLED notification display on the back?  A scrolling LED display?  A piano?  A games console?  Knock yourself out!  You are the designer; you decide.  There are lots of options.

I for one, would consider an amateur radio transceiver, an external antenna socket and a beefier battery.  Presently, I get around with the ZTE T83 (“Telstra Dave”), which works okay, but as it runs an old version of Android (4.1), running newer applications on it is a problem.  I believe it could run something newer, but ZTE believe that their job was finished in 2013 when the first one rolled off the production line.

The box did not include a copy of the kernel sources or any link to where that could be obtained.  (GNU GPL v2 section 2b?  What’s that?)

The successor, the T84, is a little better; in fact it has pretty much the same hardware that’s in Kite, but it struggles in rural areas.  On a recent trip into the Snowy Mountains, my phone would be working fine while my father’s T84 would report “no service available”.  Clearly, someone at Telstra/ZTE screwed up the firmware on it, and so it fails to switch networks correctly.  Without the sources, we are unable to fix that.  Even something as simple as replacing a battery is nigh on impossible; they’re built like bombs: not designed to be taken apart.

I have no desire to spend money on a company that puts out poorly supported rubbish running pirated operating system kernels.  The story is similar elsewhere, and most devices while better in specs and operating system, lack the external antenna connection that I desire in a phone.

Kite represents a breath of fresh air in that regard.  It is to smart phones, what the Raspberry Pi is to single board computers in general.  It’s not only designed to be taken apart, it’s shipped to you as parts.  Apparently with Kite v2, there’ll be schematics available, so you’ll be able to look-up the datasheets of respective components and be able to make informed decisions about part substitutions.  All antenna connections are socketed, so you can substitute at will.

While the OS isn’t going to be as open as one might like (mobile chipset manufacturers like their black boxes), it’s a BIG step in the right direction.  There’s more scope for supporting this platform long-term, than contemporary ones.

As far as actually using Kite, Shree Kumar was generous enough to organise the loan of a Kite for me to test with the Australian networks.  The phone takes up to two micro-SIMs (about 15mm×12mm); one on the daughter card (this is SIM 1) and one on the CPU card (SIM 2).

For the sake of testing, I figured I’d try it out with the two major networks, Telstra and Optus.  As it happens, my Telstra SIM is too big (they call it a “full-size” SIM now; I remember full-size SIMs being credit-card sized), so rather than chopping up my existing SIM or getting it transferred, I bought and activated a prepaid service.  I also bought a SIM for Optus.  I bought $10 credit for each.

As it happens, the Optus one came with data, the Telstra did not.  No big deal in this case.  The phone does have a limitation in that it will talk to one 3G/4G network and one GSM (2G) network at a time.  Given both networks I chose have abandoned 2G, that pretty much means the dual-SIM functionality on this model is severely hobbled.  That said, either SIM can operate in 3G mode, and so it’s simple enough to switch one SIM into 2G mode then activate the other in 3G/4G mode.  So far, the Kite has spent most of its time on Optus.

Evidently Vodafone still have a 2G network… at least the Kite does see one 2G cell operated by them.  Long term, this is a problem that all dual-SIM phone chipset makers will have to deal with; a future Kite may well be able to do 3G simultaneously on both SIMs, but for me, this is not a show-stopper.

I’ve put together this review of the Kite.  It’s rare for me to be in front of a camera instead of behind it, and yes, the editing is very rough.  If there is time (there won’t be this weekend) I hope to take the phone out to a rural area and try it out with the more distant networks, but so far it seems happy enough to switch to 3G when I get home, and use 4G when I’m at work, so this I see as a promising sign.

The KickStarter is lagging behind quite a way in the funding goal, but alternate options are being considered for getting this project off-the-ground.  Here’s hoping that the project does get up, and that we get to see Kite v2 being developed and made for real, as I think the mobile phone industry really does need a viable open competitor.

Apr 28 2018

So, a few weeks ago I installed a new battery charger, and tweaked it so that the solar did most of the leg work during the day, and the charger kept the batteries topped up at night.

I also discussed the addition of a new industrial PC to perform routing and system monitoring functions… which was to run Gentoo Linux/musl. For now, that little PC is still running Debian Stretch, but for 45 days, it was rock solid. The addition of this box, and taking on the role of router to the management network meant I could finally achieve one of my long-term goals for the project: decommissioning the old server.

The old server is still set up with all my data and software… but now the back-up cron job calls /sbin/poweroff when it’s done, and the BIOS is set to wake the machine up in the evening ready to receive a back-up late at night.

In its place, a virtual machine clone of the box, handles my email and all the old functions of that server. This was all done just prior to my father and I leaving for a 3 week holiday in the Snowy Mountains.

I did have a couple of hiccups with Ceph OSDs crashing … but basically re-starting the daemons (done remotely whilst travelling through Cowra) got everything back up. A bit of placement group cleaning, and everything was back online again. I had another similar hiccup coming out of Maitland, but once again, re-starting the daemons fixed it. No idea why it crashed, that’s something I’ll have to investigate.

Other than that, the cluster itself has run well.

One thing that did momentarily kill the industrial PC though: I wandered down to the rack with a small bus-powered 2.5″ HDD with the intent of re-starting my Gentoo builds. This HDD had the same content as the 3.5″ HDD I had plugged in before. I figured being bus powered, I would not be dependent on mains, and it could just chug away to its heart’s content.

No such luck: the moment I plugged that drive in, the little machine took great umbrage to the spinning rust now vacuuming the electrons away from its core functions, and shut down abruptly.  I’ve now brought my 3.5″ drive and dock down, plugged that into the wall, and have my builds resuming.  If power goes off, hopefully the machine handles the loss of swap gracefully; if it does crash, the watchdog will take care of it.

Thus, I have the little TS-7670 first attempting a build of gcc, to see how we go.  Fingers crossed, our power should remain up.  There was at least one outage in the time we were away, but hopefully we should get through this next build!

The next step I think should be to add some control of the mains charger to allow the batteries to be boosted to full charge overnight. The thinking is a simple diode-OR arrangement. Many comparators such as the LM393 have an open-collector output, which gives us this for free.

The theory is this.

The battery bank powers a simple circuit which runs off a 5V regulator.  That regulator powers a dual comparator IC and provides a reference voltage.  The comparator draws bugger-all power, so I’m happy to use a linear PSU here.  It’s mainly there as a voltage reference.

Precision isn’t really the aim here, so adjustable pots will make life easier.

The voltages from the battery bank and the solar panel are fed through voltage dividers to bring the voltages down to below 5V, then those voltages are individually fed into separate pots that control the hysteresis. I can adjust all points of the system.

The idea is that should the batteries get too low, or the sun go down, one or the other (or both) comparators will go low and pull down on R2. If the batteries are high and the sun is up, nothing pulls on R2 so the REMOTE+ pin on the HEP-600C-12 is allowed to float to +5V, turning off the mains charger.

The advantage of this is there’s no programming of a microcontroller, it’s just analogue electronics. The LM393s are pretty hardy things, the datasheet says they’ll run at 36V and can accept a maximum voltage of VCC-1.5V; so if I run at 5V, 3.5V is my recommended maximum. The adjustment pots should let me set a threshold voltage that avoids going above this.
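
As a quick sanity check on that input limit, here’s the divider arithmetic; the 47k/10k ratio is one I’ve picked purely for illustration, not a tested design:

# With a 5V supply, the LM393 inputs want at most VCC - 1.5V = 3.5V.
R_TOP = 47e3
R_BOTTOM = 10e3

def divided(v_in):
    """Voltage at the divider tap for a given input voltage."""
    return v_in * R_BOTTOM / (R_TOP + R_BOTTOM)

print('19.0V solar   -> %.2fV' % divided(19.0))   # ~3.33V, under the limit
print('14.4V battery -> %.2fV' % divided(14.4))   # ~2.53V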

I mainly need 5V for the HEP-600C-12, and for providing that stable known voltage reference. The LM78C05 should be fine for this.

Once I’ve done that, I should be able to wind that charger back up to its factory setting of 14.4V, which will mean that overnight the batteries will be charged back to full charge.

Dec 08 2017

So, I have two compute nodes.  I’ll soon have 32GB RAM in each one; currently one has 32GB and the other has its original 8GB, with five 8GB modules on the way.

I’ve tested these, and they work fine in the nodes I have; they’ll even work alongside the Kingston modules I already have, so one storage node will have a mixture.  That RAM is expected to arrive on Monday.

Now, it’d be nice to have HA set up so that I can power down the still-to-be-upgraded compute node, and have everything automatically fire up on the other compute node.  OpenNebula supports this. BUT I have two instances that are being managed outside of OpenNebula that I need to handle: one being the core router, the other being OpenNebula itself.

My plan was to use corosync.  I have an identical libvirt config for both VMs, allowing me to move the VMs manually between the hosts.  VM Disk storage is using RBDs on Ceph.  Thus, HA by default.

As an experiment, I thought, what would happen if I fired up two instances of the VM that pointed to the same RBD image?  I was expecting one of two things to happen:

  • The image would be locked by the first started image, locking out the second.  One instance would boot, the other would fail to boot.
  • Both instances would boot… the split-brain scenario.

So, I created a libvirt domain on one node, slapped Ubuntu on there (I just wanted a basic OS for testing, so command line, nothing fancy).  As that was installing, I dumped out the “XML config” and imported that to the second node, but didn’t start it yet.

Once I had the new VM booted on node 1, I booted it on node 2.

To my horror, it started booting, and booted straight to a log-in prompt.  Great, I had manually re-created the split-brain scenario I specifically hoped to avoid.  Thankfully, it is a throw-away VM specifically for testing this behaviour.  To be sure, I logged in on both, then hard-reset one.  It boots to GRUB, then immediately GRUB goes into panic mode.  I hard-reset the other VM; it boots past GRUB, but then systemd goes into panic mode.  This is expected: the two VMs are stomping on each others’ data, oblivious to each others’ existence, a recipe for disaster.

So for this to work, I’m going to have to work on my fencing.  I need to ensure beyond all possible doubt, that the VM is running in one place and one place ONLY.

libvirt supports VM hooks to do this, and there’s an example here; however, this thread seems to suggest this is not a reliable way of doing things.  RBD locking is what I hoped libvirt would do implicitly, but it seems not, and it appears that the locks are not removed when a client dies, which could lead to other problems.

A distributed lock manager would handle this, and this is something I need to research.  Possibilities include HashiCorp Consul, Apache ZooKeeper, CoreOS etcd and Redis, among others.  I can also try to come up with my own, perhaps built on PAXOS or Raft.

The state needs to only be kept in memory; persistence on disk is not required.  It’s safe to assume that if the cluster doesn’t know about a VM, it isn’t running anywhere else.  Once told of that VM’s existence though, it should ensure only one instance runs at a time.

If a node loses contact with the remaining group, it should terminate everything it has, as it’s a fair bet, the others have noticed its absence and have re-started those instances already.
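
To give the idea some shape, here’s how the “one place ONLY” rule might look with Redis (one of the candidates above) as the lock manager.  The key names, host and TTLs are made up, and a single Redis instance is itself a single point of failure, so a real deployment would want a quorum-backed store:

import redis

r = redis.Redis(host='lockhost')
MY_NODE = b'node1'

def try_acquire(vm_name, ttl=30):
    """Claim the right to run vm_name; True if we got the lease."""
    return bool(r.set('vm-lock:' + vm_name, MY_NODE, nx=True, ex=ttl))

def heartbeat(vm_name, ttl=30):
    """Refresh our lease; if it's gone, the caller must fence itself
    (terminate the VM), assuming it has been re-started elsewhere."""
    if r.get('vm-lock:' + vm_name) == MY_NODE:
        r.expire('vm-lock:' + vm_name, ttl)
        return True
    return False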

There’s lots to think about here, so I’ll leave this post at this point and ponder this some more.

Nov 05 2017

So… with the new controller we’re able to see how much current we’re getting from the solar.  I note they omit the solar voltage, and I suspect the current is how much is coming out of the MPPT stage, but still, it’s more information than we had before.

With this, we noticed that on a good day, we were getting… 7A.

That’s about what we’d expect for one panel.  What’s going on?  Must be a wiring fault!

I’ll admit when I made the mounting for the solar controller, I didn’t account for the bend radius in the 6-gauge wire I was using, and found it was difficult to feed it into the controller properly.  No worries: this morning at 4AM I powered everything off, took the solar controller off, drilled 6 new holes a bit lower down, fed the wires through and screwed them back in.

Whilst it was all off, I decided I’d individually charge the batteries.  So, right-hand battery came first, I hook the mains charger directly up and let ‘er rip.  Less than 30 minutes later, it was done.

So, disconnect that, hook up the left hand battery.  45 minutes later the charger’s still grinding away.  WTF?

Feel the battery… it is hot!  Double WTF?

It would appear that this particular battery is stuffed.  I’ve got one good one though, so for now I pull the dud out and run with just the one.

I hook everything up, do some final checks, then power the lot back up.

Things seem to go well… I do my usual post-blackout dance of connecting my laptop up to the virtual instance management VLAN, waiting for the OpenNebula VM to fire up, then log into its interface (because we’re too kewl to have a command line tool to re-start an instance), see my router and gitea instances are “powered off”, and instruct the system to boot them.

They come up… I’m composing an email, hit send… “Could not resolve hostname”… WTF?  Wander downstairs, I note the LED on the main switch flashing furiously (as it does on power-up) and a chorus of POST beeps tells me the cluster got hard-power-cycled.  But why?  Okay, it’s up now, back up stairs, connect to the VLAN, re-start everything again.

About to send that email again… boompa!  Same error.  Sure enough, my router is down.  Wander downstairs, and as I get near, I hear the POST beeps again.  Battery voltage is good, about 13.2V.  WTF?

So, about to re-start everything, then I lose contact with my OpenNebula front-end.  Okay, something is definitely up.  Wander downstairs, and the hosts are booting again.  On a hunch I flick the off-switch to the mains charger.  Klunk, the whole lot goes off.  There’s no connection to the battery, and so when the charger drops its power to check the battery voltage, it brings the whole lot down.

WTF once more?  I jiggle some wires… no dice.  Unplug, plug back in, power blinks on then off again.  What is going on?

Finally, I pull right-hand battery out (the left-hand one is already out and cooling off, still very warm at this point), 13.2V between the negative terminal and positive on the battery, good… 13.2V between negative and the battery side of the isolator switch… unscrew the fuse holder… 13.2V between fuse holder terminal and the negative side…  but 0V between negative side on battery and the positive terminal on the SB50 connector.

No apparent loose connections, so I grab one of my spares, swap it with the existing fuse.  Screw the holder back together, plug the battery back in, and away it all goes.

This is the offending culprit.  It’s a 40A 5AG fuse.  Bought for its current carrying capacity, not for the “bling factor” (gold conductors).

If I put my multimeter in continuity test mode and hold a probe on each end cap, without moving the probes, I hear it go open-circuit, closed-circuit, open-circuit, closed-circuit.  Fuses don’t normally do that.

I have a few spares of these thankfully, but I will be buying a couple more to replace the one that’s now dead.  Ohh, and it looks like I’m up for another pair of batteries, and we will have a working spare 105Ah once I get the new ones in.

On the RAM front… the firm I bought the last lot through did get back to me, with some DDR3L ECC SO-DIMMs, again made by Kingston.  Sounded close enough; they were 20c apiece more (AU$855 for 6 vs AU$864.50).

Given that it was likely this would be an increasing problem, I thought I’d at least buy enough to ensure every node had two matched sticks in, so I told them to increase the quantity to 9 and to let me know what I owe them.

At first they sent me the updated invoice with the total amount (AU$1293.20).  No problems there.  It took a bit of back-and-forth before I finally confirmed they had the previous amount I sent them.  Great, so into the bank I trundle on Thursday morning with the updated invoice, and I pay the remainder (AU$428.70).

Friday, I get the email to say that product was no longer available.  They instead, suggested some Crucial modules which were $60 a piece cheaper.  Well, when entering a gold mine, one must prepare themselves for the shaft.

Checking the link, I found it: these were non-ECC.  1Gbit×64, not 1Gbit×72 like I had ordered.  In any case, I was over it; I fired back an email telling them to cancel the order and return the money.  I was in no mood for Internet shopper Russian Roulette.

It turns out I can buy the original sticks through other suppliers, just not in the quantities I’m after.  So I might be able to buy one or two from a supplier, I can’t buy 9.  Kingston have stopped making them and so what’s left is whatever companies have in stock.

So I’ll have to move to something else.  It’d be worth buying one stick of the original type so I can pair it with one of the others, but no more than that.  I’m in no mood to do this in a few years’ time when parts are likely to be even harder to source… so I think I’ll bite the bullet and go to 16GB modules.  Due to the limits on my debit card though, I’ll have to buy them two at a time (~AU$900 each go).  The plan is:

  1. Order in two 16GB modules and an 8GB module… take existing 8GB module out of one of the compute nodes and install the 16GB modules into that node.  Install the brand new 8GB module and the recovered 8GB module into two of the storage nodes.  One compute node now has 32GB RAM, and two storage nodes are now upgraded to 16GB each.  Remaining compute node and storage node each have 8GB.
  2. Order in two more 16GB modules… pull the existing 8GB module out of the other compute node, install the two 16GB modules.  Then install the old 8GB module into the remaining storage node.  All three storage nodes now have 16GB each, both compute nodes have 32GB each.
  3. Order two more 16GB modules, install into one compute node, it now has 64GB.
  4. Order in last two 16GB modules, install into the other compute node.

Yes, expensive, but sod it.  Once I’ve done this, the two nodes doing all the work will be at their maximum capacity.  The storage nodes are doing just fine with 8GB, so 16GB should mean there’s plenty of RAM for caching.

As for virtual machine management… I’m pretty much over OpenNebula.  Dealing with libvirt directly is no fun, but at least once configured, it works!  OpenNebula has a habit of not differentiating between a VM being powered off (as in, me logging into the guest and issuing a shutdown), and a VM being forcefully turned off by the host’s power getting yanked!

With the former, there should be some event fired off by libvirt to tell OpenNebula that the VM has indeed turned itself off.  With the latter, it should observe that one moment the VM is there, and the next it isn’t… the inference being that it should still be there, and that perhaps that VM should be re-started.

This could be a libvirt limitation too.  I’ll have to research that.  If it is, then the answer is clear: we ditch libvirt and DIY.  I’ll have to research how I can establish a quorum and schedule where VMs get put, but it should be doable without the hassle that OpenNebula has been so far, and without going to the utter tedium that is OpenStack.

Oct 28 2017

So, this morning I decided to shut the whole lot down and switch to the new solar controller.  There’s some clean-up work to be done, but for now, it’ll do.  The new controller is a Powertech MP3735.  Supposedly this one can deliver 30A, and has programmable float and bulk charge voltages.  A nice feature is that it’ll disconnect the load when it drops below 11V, so finding the batteries at 6V should be a thing of the past!  We’ll see how it goes.

I also put in two current shunts, one on the feed into/out of the battery, and one to the load.  Nothing is connected to monitor these as yet, but some research suggested that while in theory all that’s needed is an op-amp, that op-amp has to deal with microvolt differences and noise.

There are instrumentation amplifiers designed for that, and a handy little package is TI’s INA219B.  This incorporates the aforementioned amplifier, but also adds an ADC with an I²C interface.  The downside is that I’ll need an MCU to poll it; the upside is that by placing the ADC and instrumentation amp in one package, it should cut down noise, further reduced if I mount the chip on a board bolted to the current shunt concerned.  The ADC measures the bus voltage as well.  Getting this to work shouldn’t be hard.  (Yes, famous last words, I know.)
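
Eventually an MCU will do the polling, but for bench testing, something like this from a Linux box with I²C would read it.  This is a hedged sketch using the smbus2 library and the register map from the datasheet (0x01 = shunt voltage, 10µV per LSB; 0x02 = bus voltage, 4mV per LSB in bits 15..3); the bus number and 0x40 address are assumptions about the wiring:

from smbus2 import SMBus

INA219_ADDR = 0x40      # default address with the address pins grounded
REG_SHUNT = 0x01        # shunt voltage register (signed)
REG_BUS = 0x02          # bus voltage register

def read_be16(bus, reg):
    # INA219 registers are big-endian; read_word_data returns little-endian.
    raw = bus.read_word_data(INA219_ADDR, reg)
    return ((raw & 0xff) << 8) | (raw >> 8)

with SMBus(1) as bus:
    shunt = read_be16(bus, REG_SHUNT)
    if shunt & 0x8000:              # sign-extend: current can flow either way
        shunt -= 0x10000
    bus_mv = (read_be16(bus, REG_BUS) >> 3) * 4
    print('Shunt: %d uV  Bus: %d mV' % (shunt * 10, bus_mv))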

A few days ago, I also placed an order for some more RAM for the two compute nodes.  I had thought 8GB would be enough, and in a way it is, except I’ve found some software really doesn’t work properly unless it has 2GB RAM available (Gitea being one, although it is otherwise a fantastic Git repository manager).  By bumping both these nodes to 32GB each (4×8GB) I can be less frugal about memory allocations.

I can in theory go to 16GB modules in these boxes, but those were hideously expensive last time I looked, and had to be imported.  My debit card maxes out at $AU999.99, and there’s GST payable on anything higher anyway, so there goes that idea.  64GB would be nice, but 32GB should be enough.

The fun bit though: Kingston no longer make DDR3 ECC SO-DIMMs.  The mob I bought the last lot through informed me that the product is no longer available, after I had sent them the B-Pay payment.  Ahh well, I’ve tossed the question back asking what they have available that is compatible.

Searching for ECC SODIMMs is fun, because the search engines will see ECC and find ECC DIMMs (i.e. full-size).  When looking at one of these ECC SODIMM unicorns, they’ll even suggest the full-size version as similar.  I’d love to see the salespeople try to fit the suggested full-size DIMM into the SODIMM socket and make it work!

The other thing that happens is the search engine sees ECC and sees that it’s a sub-string of non-ECC.  Errm, yeah, if I meant non-ECC, I’d have said so, and I wouldn’t have put ECC there.

Crucial and Micron both make it though, here’s hoping mixing and matching RAM from different suppliers in the same bank won’t cause grief, otherwise the other option is I pull the Kingston sticks out and completely replace them.

The other thing I’m looking at is an alternative to OpenNebula.  Something that isn’t a pain in the arse to deploy (like OpenStack is, been there, done that), that is decentralised, and will handle KVM with a Ceph back-end.

A nice bonus would be being able to handle cross-architecture QEMU VMs, in particular for ARM and MIPS targets.  This is something that libvirt-based solutions do not do well.

I’m starting to think about ways I can DIY that solution.  Blockchain was briefly looked at, and ruled out on the basis that while it’d be good for an audit log, there’s no easy way to index it: reading current values would mean a full-scan of the blockchain, so not a solution on its own.

CephFS is stable now, but I’m not sure how file locking works on it.  Then there’s object storage itself, librados.  I’m not sure if there’s a database engine that can interface to that, or maybe to Amazon S3 storage (radosgw can emulate that), that’ll be the next step.  Lots to think about.

Oct 22 2017

So I’ve had the solar panels up for a month now… and so far, we’ve had a run of very overcast or wet days.

Figures… and we thought this was the “sunshine state”?

I still haven’t done the automatic switching, so right now the mains power supply powers the relay that switches solar to mains.  Thus the only time my cluster runs from solar is when either I switch off the mains power supply manually, or if there’s a power interruption.

The latter has not yet happened… mains electricity supply here is pretty good in this part of Brisbane; the only time I recall losing it for an extended period was back in 2008, and that was caused by pretty exceptional circumstances.

That said, the political football of energy costs is being kicked around, and you can bet they’ll screw something up, even if for now we are better off this side of the Tweed river.

A few weeks back, with predictions of a sunny day, I tried switching off the mains PSU in the early morning and letting the system run off the solar.  I don’t have any battery voltage logging or current logging as yet, but the system went fine during the day.  That evening, I turned the mains back on… but the charger, a Redarc BCDC1225, seemingly didn’t get that memo.  It merrily let both batteries drain out completely.

The IPMI BMCs complained bitterly about the sinking 12V rail at about 2AM, when I was sound asleep.  Luckily, I was due to get up at 4AM that day.  When I tried checking a few things on the Internet, I first noticed I didn’t have a link to the Internet.  Looking up at the switch in my room, I saw the link LED for the cluster was out.

At that point, some choice words were quietly muttered, and I wandered downstairs with multimeter in hand to investigate.  The batteries had been drained to 4.5V!!!

I immediately performed some load-shedding (ripped out all the nodes’ power leads) and power-cycled the mains PSU.  That woke the charger up from its slumber, and after about 30 seconds, there was enough power to bring the two Ethernet switches in the rack online.  I let the voltage rise a little more, then gradually started re-connecting power to the nodes, each one coming up as it was plugged in.

The virtual machine instances I had running outside OpenNebula came up just fine without any interaction from me, but it seems OpenNebula didn’t see fit to re-start the VMs it was responsible for.  Not sure if that is a misconfiguration, or if I need to look at an alternate solution.

Truth be told, I’m not a fan of libvirt either… overly complicated for starting QEMU VMs.  I might DIY a solution here as there’s lots of things that QEMU can do which libvirt ignores or makes more difficult than it should be.

Anyway… since that fateful night, I have on two occasions run the cluster from solar without incident.  On the off-chance though, I have an alternate charger which I might install at some point.  The downside is it doesn’t boost the 12V input like the other one, so I’d be back to using that Xantrex charger to charge from mains power.

Already, I’m thinking about the criteria for selecting a power source.  It would appear there are a few approaches I can take, I can either purely look at the voltages seen at the solar input and on the battery, or I can look at current flow.

Voltage-wise, I tried measuring the solar panel output whilst running the cluster today.  In broad daylight, I get 19V off the panels, and at dusk it’s about 16V.

Judging from that, having the solar “turn on” at 18V and “turn off” at 15V seems logical.  Using the comparator approach, I’d need to set a reference of 16.5V and tweak the hysteresis to give me that 3V window.

However, this ignores how much energy is actually being produced from solar in relation to how much is being consumed.  It is possible for a day to start off sunny, then for the weather to cloud over.  Solar voltage in that case might be sitting at the 16V mentioned.

If the current is too low though, the cluster will drain more power out than is going in, and this will result in the exact conditions I had a few weeks ago: a flat battery bank.  Thus I’m thinking of incorporating current shunts both on the “input” to the battery bank, and to the “output”.  If output is greater than input, we need mains power.

There’s plenty of literature about interfacing to current shunts.  I’ll have to do some research, but immediately I’m thinking an op-amp running from the battery configured as a non-inverting DC gain block with the inputs going to either side of the current shunt.

Combining the approaches is attractive.  So turn on when solar exceeds 18V, turn off when battery output current exceeds battery input current.  A dual op-amp, a dual comparator, two current shunts, a R-S flip-flop and a P-MOSFET for switching the relay, and no hysteresis calculations needed.
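
Written out as logic, so the intended behaviour of the comparators and flip-flop is unambiguous (thresholds as discussed above; this is just a sketch of the decision rule, not firmware for anything):

def select_solar(on_solar, v_solar, i_batt_in, i_batt_out):
    """Return True to run from solar, False to bring in the mains supply."""
    if not on_solar and v_solar >= 18.0:
        return True            # solar is strong enough: set the flip-flop
    if on_solar and i_batt_out > i_batt_in:
        return False           # bank draining faster than charging: reset
    return on_solar            # otherwise, hold the current state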