Mar 31 2018

So last post, I mentioned the installation of the new battery charger, which is fed from 240V mains. Over the last few days, this charger has held the batteries at a rock-solid 14.4V. Not once did the batteries drop below that voltage setpoint.

So good, in fact, that the solar charger does no work at all.

By the way, this is what the install looks like. I promised pictures last post.

That’s the DC end … and the nasty AC end is all sealed up…

I will eventually move this to a spot on the back of the rack, but it can sit here for now.

Ultimately, the proper fix to this will be to have the mains-powered charger power off when the sun is up. On the DC output connector, the two rightmost screw terminals go to an opto-isolator that, when powered, shuts off the charger, putting it into stand-by mode. This was one of the reasons I bought this particular unit. The other was the wide range of voltage adjustment.

The question is when to turn on, and when to go to stand-by. Basically if the following expression is true, then turn off the mains:

(V_{\mathrm{batt}} > 12.8) \wedge (V_{\mathrm{solar}} > 15)

We do not want to rely on solar alone if the battery is very low, as there’s a possibility that the solar output will not be sufficient.  Likewise, if the sun isn’t out, we need the mains to keep the battery topped up.

The solar output is nearly always above 15V when the sun is up, so there’s our first clue.  We can safely get down to 12.8V before things start going pear-shaped on the cluster, so that can be our low-voltage safety net.  If both of these conditions are met, it’s safe to turn off the mains power and rely on solar alone.

We need a +5V signal when both these conditions are met.  This very much sounds like a job for a dual comparator with diode-OR’d outputs pulling on a 5V pull-up.  Maybe a wee bit of hysteresis on those to prevent flapping, and we should be good.
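
Expressed in software (easier to show here than a schematic), the behaviour I’m after looks something like the Python sketch below.  The 12.8V and 15V thresholds are the ones above; the 0.2V hysteresis margins are just a guess for illustration.

def mains_should_be_off(v_batt: float, v_solar: float, currently_off: bool) -> bool:
    """Return True if the mains charger should be held in stand-by."""
    if v_batt > 12.8 and v_solar > 15.0:
        return True            # battery healthy and sun well up: solar only
    if v_batt < 12.6 or v_solar < 14.8:
        return False           # battery sagging or sun gone: mains back on
    return currently_off       # inside the hysteresis band: hold state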

Unfortunately, to do that, I need to unscrew terminals to feed some wires in.  I don’t feel like doing that just now… we’re packing up to go away for a while, and I think this sort of job can wait until we return.

In the meantime, I’ve done something of a hack.  I mentioned the PSU is adjustable.  I wound Vfloat back to 12V… thus Vboost has gone to 12.8V.  Right now, the mains PSU is showing a green LED, meaning it is in floating mode.

We have good sun right now, and the solar controller is currently boosting the battery.  When the battery gets low, the charging circuitry of the mains PSU should kick in, and bring the battery voltage up, holding it at 12.8V until the sun comes up.  I’ll leave it for now and see how this hack goes.

In other news… I might need to re-consider my NTP server arrangements.  I’m not sure if it’s a quirk of OpenBSD, or of the network here, but it seems OpenNTPD struggles to keep good time.  I’d never tried using the Advantech PC as an NTP server until now, and I’m also experimenting with using my VPS at Vultr as an NTP server.

http://www.pool.ntp.org/user/Redhatter

Both are drifting like crazy.  I have a GPS module lying around that I might consider hooking up to the TS-7670… perhaps make it a Stratum 1 NTP server on the NTP server pool, then the Advantech can sync to that.
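
If I do hook the GPS module up, the configuration should be fairly small, assuming ISC ntpd and its generic NMEA refclock driver.  This fragment is untested, and the fudge values are placeholders:

# Generic NMEA refclock (driver 20); ntpd reads the GPS via /dev/gps0.
server 127.127.20.0 mode 0 minpoll 4 prefer
fudge  127.127.20.0 flag1 1 time2 0.0   # flag1 enables PPS, if wired up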

This won’t help the VPS though, and I’m at a loss to explain why a Geode LX800 running on an ADSL link in my laundry outperforms a VPS in a nicely climate-controlled data centre with gigabit Internet.

But at least now that’s one less job for my ageing server.  I’ve also moved mail server duties off the old box onto a VM, so I’ll be looking at the BIOS settings there to see if I can get the box to wake up some time in the evening, let cron run the back-up jobs, then power the whole lot back down again, to save some juice.
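
If the BIOS proves uncooperative, rtcwake from util-linux can set the wake-up alarm from within Linux instead.  A rough, untested sketch of the nightly cycle (the back-up script name is made up):

# Run the backups, then power off with the RTC alarm set for tomorrow evening.
/usr/local/bin/run-backups.sh
rtcwake -m off -t "$(date -d 'tomorrow 19:00' +%s)"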

Aug 20 2017

OpenNebula is running now… I ended up re-loading my VM with Ubuntu Linux and throwing OpenNebula on that.  That works… and I can debug the issue with Gentoo later.

I still have to figure out corosync/heartbeat for two VMs: the one running OpenNebula, and the core router.  For now, the VMs are only set up to run on one node, but I can configure them on the other too… it’s then a matter of configuring libvirt to not start the instances at boot, and setting up the Linux-HA tools to figure out which node gets to fire up which VM.
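
The libvirt half of that is easy enough; autostart is a per-guest flag (the domain names here are placeholders):

# Let the cluster manager, not libvirt, decide where each guest runs.
virsh autostart --disable one-frontend
virsh autostart --disable core-router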

The VM hosts are still running Gentoo however, and so far I’ve managed to get them to behave with OpenNebula.  A big part of that was disabling authentication in libvirt; otherwise polkit generally made a mess of things from OpenNebula’s point of view.
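
For reference, this is the sort of thing I mean in /etc/libvirt/libvirtd.conf; turning off authentication on the UNIX sockets entirely, which is only sane on a trusted management network:

# Let local connections in without polkit getting involved.
auth_unix_ro = "none"
auth_unix_rw = "none"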

That, and firewalld had to be told to open up ports for VNC/SPICE… I allocated 5900-6900… I doubt I’ll have that many VMs.
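
That boils down to something like the following, assuming the firewall-cmd front-end:

# Open the console port range, then reload to apply.
firewall-cmd --permanent --add-port=5900-6900/tcp
firewall-cmd --reload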

Last weekend I replaced the border router… previously this was a function of my aging web server, but now I have an ex-RAAF-base Advantech UNO-1150G industrial PC which is performing the routing function.  I tried to set it up with Gentoo, and while it worked, I found it wasn’t particularly stable due to limited memory (it only has 256MB RAM).  In the end, I managed to get OpenBSD 6.1/i386 running sweetly, so for now, it’s staying that way.

While the AMD Geode LX800 is no speed demon, a nice feature of this machine is that it’s happy with any supply voltage between 9 and 32V.

The border router was also given responsibility for managing the domain: I did this by installing ISC BIND9 from ports and copying the config across from Linux.  This seemed to be working, and so I left it.  Big mistake: it turns out BIND9 didn’t think it was authoritative, and so refused to serve AXFRs to my slaves.
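
For reference, a zone the slaves can actually AXFR needs a named.conf stanza along these lines (the domain and addresses here are placeholders, not my real config):

zone "example.com" {
        type master;
        file "/var/named/master/example.com";
        allow-transfer { 203.0.113.10; };   // the slave servers
        also-notify    { 203.0.113.10; };
};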

I was using two different slave DNS providers, puck.nether.net and Roller Network, both free at the time I subscribed.  It turns out that when your DNS goes offline, puck.nether.net responds by disabling your domain, then emailing you about it.  I received that email Friday morning… and so I wound up in a mad rush trying to figure out why BIND9 didn’t consider itself authoritative.

Since I was in a rush, I decided to tell the border router to just port-forward to the old server, which got things going until I could look into it properly.  It took a bit of tinkering with pf.conf, but eventually I got that going, and the crisis was averted.  Re-enabling the domains on puck.nether.net worked, and they stayed enabled.
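
The band-aid itself is only a couple of lines of pf.conf once you find the right incantation; something like this, with the old server’s address made up for illustration:

# Redirect inbound DNS on the external interface to the old server.
old_server = "192.168.0.10"
pass in on egress inet proto { tcp, udp } from any to (egress) port 53 rdr-to $old_server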

It was at that time I discovered that Roller Network had decided to make their slave DNS a paid offering.  Fair enough, these things do cost money… At first I thought I’d just pay for an account with them, until I realised their personal plans were US$5/month.  My workplace uses Vultr for hosting instances of their WideSky platform for customers, and aside from the odd hiccup, they’ve been fine.  A US$5/month VPS that can run almost anything trumps US$5/month for secondary DNS alone, so out came the debit card for a new instance in their Sydney data centre.

Later I might use it to act as a caching front-end and as a secondary mail exchanger… but for now, it’s a DIY secondary DNS.  I used their ISO library to install an OpenBSD 6.1 server, and managed to nut out nsd to act as a secondary name server.
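
nsd turns out to be pleasantly terse about this; a secondary zone is just a stanza like the following in nsd.conf (the domain and master address are placeholders):

zone:
        name: "example.com"
        allow-notify: 203.0.113.1 NOKEY
        request-xfr: AXFR 203.0.113.1 NOKEY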

Getting that going this morning, I was able to figure out my DNS woes on the border router and get it running properly.  After removing the port-forward entries, I triggered the secondary DNS at Vultr to re-transfer the domain and debugged it until it worked.

With most of the physical stuff worked out, it was time to turn my attention to getting virtual instances working.  Up until now, everything on the VM hosts was hand-crafted VMs defined directly in libvirt.  This is painful and tedious… and for whatever reason, OpenNebula was not successfully deploying VMs.  It’d get part way, then barf trying to set up the 802.1Q network interfaces.

In the end, I knew OpenNebula worked fine with bridges that were already defined… but I didn’t want to have to hand-configure each VLAN… so I turned to another automation tool in my toolkit… Ansible:

- hosts: compute
  tasks:
  - name: Configure networking
    template: src=compute-net.j2 dest=/etc/conf.d/net
# …
- hosts: compute
  tasks:
# …
  - name: Add symbolic links (instance VLAN interfaces)
    file: src=net.lo dest=/etc/init.d/net.bond0.{{item}} state=link
    with_sequence: start=128 end=193
  - name: Add symbolic links (instance VLAN bridges)
    file: src=net.lo dest=/etc/init.d/net.vlan{{item}} state=link
    with_sequence: start=128 end=193
# …
  - name: Make services start at boot (instance VLAN bridges)
    command: rc-update add net.vlan{{item}} default
    with_sequence: start=128 end=193 

That’s a snippet of the playbook… and it basically creates symbolic links from Gentoo’s net.lo for all the VLAN ports and bridges, then sets them up to start at boot.

In the compute-net.j2 file referenced above, I put in the following to enumerate all the configuration bits.

# Instance VLANs
{% for vlan in range(128,193) %}
config_vlan{{vlan}}="null"
config_bond0_{{vlan}}="null"
rc_net_vlan{{vlan}}_need="net.bond0.{{vlan}}"
{% endfor %}
# …
vlans_bond0="5 8 10{% for vlan in range(128,193) %} {{vlan}} {% endfor %}248 249 250 251 252"
vlans_bond1="253"
# …
# Instance VLANs
{% for vlan in range(128,193) %}
bridge_vlan{{vlan}}="bond0.{{vlan}}"
{% endfor %} 

The start and end ranges are a little off (Jinja2’s range(128,193) stops at 192, whereas with_sequence’s end=193 is inclusive), but it saved a lot of work.

This naturally took a while for OpenRC to bring up… but it worked.  Going back to OpenNebula, I told it which bridges to use, and before long I had my first instance: an OpenBSD router to link my personal VLAN to the DMZ.
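
Telling OpenNebula about an existing bridge is just a small virtual-network template fed to onevnet create; something along these lines (the name and address-range size are made up):

# vlan128.net: hypothetical virtual network template
NAME   = "personal-vlan"
VN_MAD = "bridge"
BRIDGE = "vlan128"
AR     = [ TYPE = "ETHER", SIZE = "64" ]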

I spent a bit of time re-working my routing tables after that… in fact, my network is getting big enough now that I have to write some details down.  I spent a few hours documenting the effort:

That’s page 1 of about 15… yes, my hand is sore… but at least now, should I get run over by a bus, others have a fighting chance of doing anything with the network without my technical input.