DC-DC Converter: Driver designed

So, I’ve done the driver board.  This turned out bigger than I expected: at first I thought it’d just be the LVDS receiver, the MOSFET driver, a few capacitors and resistors, and the connectors.  Ideally, I wanted something that could slip over the pins of the MOSFETs, leaving the drain and source free to connect to wiring that could carry the current.

I laid everything out on a 5×3.5cm board, two-layer (so dirt cheap).  Nice and tidy.  For the receiver I ended up using the DS90C402: it was already in KiCad.  All looked good, until I saw this in the datasheet (highlighting mine):

In short, if the cable gets unplugged, the receiver will effectively drive both MOSFETs hard on 100%.  Kaboom!

So, I had to introduce an inverter into the circuit.  That’s a bit more propagation delay and another component, and it’s the biggest part on the board (they don’t make SOIC-8 inverters).  I’ve chosen a 74AHC-family part, as I did on the controller board, so it should have the speed needed.

I’m not sure if this is needed for LVDS, but I’ve added a number of pull-up and pull-down resistors as well as the 100R terminations.  These are underneath.

Likewise, I realised I had omitted doing the same on the controller.  There were already some for the I²C lines, but I’ve re-located these to the bottom side to make some room.  Better to do it now than find out I need them later.

Like the driver board, I’ve documented resistors used for pull-up, pull-down and termination.  I’m not exactly sure which ones are needed or what values they should be, but fixing a silk-screen isn’t a big issue.

The LVDS outputs have resistors too, you can see those near the relevant sockets.  I suspect the answer is they are needed at the receiver, not the driver.

Solar Cluster: Solar Testing

So the solar panels have now been up for a month… and so far, we’ve had a run of very overcast or wet days.

Figures… and we thought this was the “sunshine state”?

I still haven’t done the automatic switching, so right now the mains power supply powers the relay that switches solar to mains.  Thus the only time my cluster runs from solar is when either I switch off the mains power supply manually, or if there’s a power interruption.

The latter hasn’t happened yet… the mains supply is pretty good in this part of Brisbane; the only time I recall losing it for an extended period was back in 2008, and that was under pretty exceptional circumstances.

That said, the political football of energy costs is being kicked around, and you can bet they’ll screw something up, even if for now we are better off this side of the Tweed River.

A few weeks back, with predictions of a sunny day, I tried switching off the mains PSU in the early morning and letting the system run off the solar.  I don’t have any battery voltage or current logging as yet, but the system ran fine during the day.  That evening, I turned the mains back on… but the charger, a Redarc BCDC1225, seemingly didn’t get that memo.  It merrily let both batteries drain out completely.

The IPMI BMCs complained bitterly about the sinking 12V rail at about 2AM, while I was sound asleep.  Luckily, I was due to get up at 4AM that day.  When I went to check a few things on the Internet, I noticed I didn’t have a link.  Looking up at the switch in my room, I saw the link LED for the cluster was out.

At that point, some choice words were quietly muttered, and I wandered downstairs with multimeter in hand to investigate.  The batteries had been drained to 4.5V!!!

I immediately performed some load-shedding (ripped out all the nodes’ power leads) and power-cycled the mains PSU.  That woke the charger up from its slumber, and after about 30 seconds, there was enough power to bring the two Ethernet switches in the rack online.  I let the voltage rise a little more, then gradually started re-connecting power to the nodes, each one coming up as it was plugged in.

The virtual machine instances I had running outside OpenNebula came up just fine without any interaction from me, but it seems OpenNebula didn’t see fit to re-start the VMs it was responsible for.  Not sure if that is a misconfiguration, or if I need to look at an alternate solution.

Truth be told, I’m not a fan of libvirt either… overly complicated for starting QEMU VMs.  I might DIY a solution here as there’s lots of things that QEMU can do which libvirt ignores or makes more difficult than it should be.

Anyway… since that fateful night, I have on two occasions run the cluster from solar without incident.  On the off-chance though, I have an alternate charger which I might install at some point.  The downside is it doesn’t boost the 12V input like the other one, so I’d be back to using that Xantrex charger to charge from mains power.

Already, I’m thinking about the criteria for selecting a power source.  It would appear there are a few approaches I can take: I can either look purely at the voltages seen at the solar input and on the battery, or I can look at current flow.

Voltage wise, I tried measuring the solar panel output whilst running the cluster today.  In broad daylight, I get 19V off the panels, and at dusk it’s about 16V.

Judging from that, having the solar “turn on” at 18V and “turn off” at 15V seems logical.  Using the comparator approach, I’d need to set a reference of 16.5V and tweak the hysteresis to give me a ±1.5V swing.
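As a quick sanity check, that hysteresis behaviour can be sketched in a few lines of Python.  The 18V/15V thresholds are the ones above; the voltage sequence is just an illustrative day’s readings:

```python
def make_schmitt(v_on=18.0, v_off=15.0):
    """Comparator with hysteresis: select solar once the panel voltage
    exceeds v_on, and deselect it only after it drops below v_off."""
    state = {"solar": False}
    def step(v):
        if v >= v_on:
            state["solar"] = True
        elif v <= v_off:
            state["solar"] = False
        return state["solar"]
    return step

solar = make_schmitt()
decisions = [solar(v) for v in [14.0, 16.0, 18.5, 17.0, 15.5, 14.9, 16.0]]
# A 16V reading never flips the state by itself: it sits inside the
# hysteresis band, so the previous decision holds.
```

Note the two 16V readings give different answers depending on what came before them, which is exactly what the hysteresis buys.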

However, this ignores how much energy is actually being produced from solar in relation to how much is being consumed.  It is possible for a day to start off sunny, then for the weather to cloud over.  Solar voltage in that case might be sitting at the 16V mentioned.

If the current is too low though, the cluster will drain more power out than is going in, and this will result in the exact conditions I had a few weeks ago: a flat battery bank.  Thus I’m thinking of incorporating current shunts both on the “input” to the battery bank, and to the “output”.  If output is greater than input, we need mains power.

There’s plenty of literature about interfacing to current shunts.  I’ll have to do some research, but immediately I’m thinking of an op-amp running from the battery, configured as a non-inverting DC gain block, with its inputs going to either side of the current shunt.
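To get a feel for the numbers involved, here’s a rough gain calculation in Python.  The 75mV/20A shunt rating and the 5V full-scale output are hypothetical figures, not parts I’ve actually chosen:

```python
def required_gain(v_out_full_scale, shunt_ohms, i_max):
    """Gain needed so the shunt's full-scale drop spans the output range."""
    v_shunt = shunt_ohms * i_max      # drop across the shunt at max current
    return v_out_full_scale / v_shunt

# Hypothetical 75mV @ 20A shunt (3.75 milliohms), amplified to 5V:
gain = required_gain(5.0, 0.075 / 20.0, 20.0)
# For a non-inverting stage, gain = 1 + Rf/Rg:
rf_over_rg = gain - 1.0
```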

Combining the approaches is attractive.  So: turn on when solar exceeds 18V, turn off when battery output current exceeds battery input current.  A dual op-amp, a dual comparator, two current shunts, an R-S flip-flop and a P-MOSFET for switching the relay, and no hysteresis calculations needed.
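In software terms, that combined rule is a set/reset latch: SET when the panel voltage crosses the turn-on threshold, RESET when the battery is discharging overall.  A Python sketch of it (reset-dominant, thresholds as above, all figures assumed):

```python
class SourceSelector:
    """R-S latch model of the combined rule: SET on solar voltage,
    RESET on net battery discharge (output current > input current)."""
    def __init__(self, v_on=18.0):
        self.v_on = v_on
        self.solar = False
    def step(self, panel_v, i_in, i_out):
        if i_out > i_in:              # battery draining: reset (mains)
            self.solar = False
        elif panel_v >= self.v_on:    # strong sun: set (solar)
            self.solar = True
        return self.solar             # otherwise hold the last state

sel = SourceSelector()
sel.step(19.0, 5.0, 2.0)              # sunny and charging: solar
sel.step(16.0, 3.0, 2.0)              # clouding over, still charging: hold
on_solar = sel.step(16.0, 1.0, 4.0)   # net discharge: back to mains
```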

Toy Synthesizer: 74HC573 replaced with ‘574… I/O modules built

So… earlier in the week I received some 74HC574s (the right chip) to replace the 74HC573s I tried to use.

My removal technique was not pretty: I wound up just cutting the legs off the hapless 74HC573 (my earlier hack had busted a pin on it anyway) and removing it that way. Since the holes were full of solder, it was easier to just bend the pins on the ‘574 and surface-mount it.

This afternoon, I gave it a try, and lo and behold, it worked. I even tried hooking up one of the LED strings and driving that with the MOSFET… no problems at all.

I had only built up one of the modules at this stage, so I built another five on the same piece of stripboard.

The requirement is for 5 channels… this meets that and adds an extra one (the board was wide enough). For the full complement, and to have ICSP/networking via external connections, a second board like this could be made, omitting the MOSFETs on four of the channels to handle the ICSP control lines and reducing the capacitances/resistances to suit.

Somewhere I have some TVS diodes for this board, but of course, they have legs, upon which they got up and ran away. They haven’t resurfaced yet. I’m sure they will if I buy more though. The spare footprint on the top-left of the main board is where one TVS diode goes; the others go on the I/O board, two for each channel.

Toy Synthesizer: 74HC573, no substitute for the ‘574

So… when laying out this board, I decided I’d swap the 74HC374 for the much nicer ‘574 for managing the MOSFETs. No problem… well, one minor snag: neither Jaycar nor Altronics carries the ‘574.

They carry the ‘374… Jaycar is where I got mine originally. They also carry the very similar ‘573. I had intended to order some ‘574s for when the boards arrived, but the boards sort of turned up unexpectedly… so I didn’t get a chance.

The fundamental difference? Apart from the ‘574 being edge-triggered, the ‘573 is also active-high on the latch enable pin. I had wrongly assumed it was active-low.
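The behavioural difference is easy to show in a couple of lines of Python: the ‘573 is a transparent latch (Q follows D whenever LE is high), while the ‘574 only samples D on a rising clock edge:

```python
def latch_573(le, d, q):
    """'573-style transparent latch: Q follows D while LE is high,
    holds the previous value while LE is low."""
    return d if le else q

def ff_574(rising_edge, d, q):
    """'574-style D flip-flop: Q updates only on a rising clock edge."""
    return d if rising_edge else q

q_latch = latch_573(1, 1, 0)   # LE high: D passes straight through
q_ff = ff_574(False, 1, 0)     # no clock edge yet: output holds at 0
q_ff = ff_574(True, 1, q_ff)   # rising edge arrives: D is captured
```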

Naturally, I did try to hack around this fundamental difference:

I tried to get that leg in focus, but it’s difficult when the LCD of the camera is such low resolution (and the viewfinder is an even lower-resolution LCD). Basically, I nibbled the leg of the IC with my side-cutters and bent it up. To the pad, I soldered the gate of a 2N7000 MOSFET, bent the drain pin up to meet the now floating-in-air LE pin, and ran a resistor (a 3k3) to Vcc.

That works somewhat… it might be possible to introduce some more state machine cycles to handle the kludge… although the real fix is to use the correct part in the first place.

Solar Cluster: A no-microcontroller automatic battery selector

Another approach to selecting a source is to avoid microcontrollers altogether and just rely on non-programmable logic. This is inspired a bit by the Saturday Clock, or rather, my thinking of how it could be done without an MCU.

In selecting a source, we really only care about one thing: is the battery voltage high enough? If no, we need to hunt for one that is.

This question can be answered by a simple analogue comparator such as the LM311, a shift register, and a few other logic gates.

Here, we have such a circuit. Up the top is our shift register, set up as a ring counter. The buffers there are stand-ins for diodes: if any of them outputs a 1, the combined output is a 1 and the NOT gate on the input outputs a 0.

The outputs of the shift register are used to select a battery, each of which has its own comparator and select logic. The comparator is represented here by the D flip-flop at the extreme left: in essence I’m using it as a switch, since Logisim doesn’t provide one, only a momentary button. We need a signal that is high when the battery is above an acceptable voltage. We also need its inverse.

The select line from the shift register controls the gate on two tri-state buffers, allowing us to inhibit the comparator’s output. The buffered “good” signal is used to SET the “enable” D flip-flop that drives the switch turning the battery on. This same (buffered “good”) signal also passes through a diode-OR arrangement that indicates whether a source is “available”.

To emulate make-before-break, the inverted “select” signal and the “source available” signal pass through an AND gate and into the RESET of the “enable” flip-flop, so it gets turned off when another source is turned on.

Finally, the buffered “bad” signal from all modules is fed back on one shared line, inhibiting the clock until a battery drops below the minimum level.

A glitch here: if multiple batteries are initially turned on with none above the minimum voltage, multiple sources will be selected. This is not too hard to manage in software, and the solution might in fact be to implement this on an ATtiny24A as mentioned in the previous post; this logic circuit can be implemented quite easily in C, with comparators in hardware or using the ADC as a software comparator, as I’m doing in the charge controller.
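For what it’s worth, the hunt logic itself is tiny once written in software.  A Python sketch of it (the C on an ATtiny would look much the same): advance the ring counter while the selected battery’s comparator reads “bad”, and stop once one reads “good”:

```python
def hunt(good, start=0):
    """Ring-counter hunt: step the select line until the selected
    battery's comparator reports 'good', giving up after one full lap."""
    n = len(good)
    sel = start
    for _ in range(n):
        if good[sel]:
            return sel            # clock inhibited: stay on this source
        sel = (sel + 1) % n       # clock enabled: shift to the next one
    return None                   # no battery above the minimum voltage

winner = hunt([False, False, True])   # only battery 2 is healthy
```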

Interrupt controllers from logic gates

Well, in the last post I started considering building my own computer from a spare 386 CPU I had liberated from an old motherboard.

One of the issues I face is implementing the bus protocol that the 386 uses, and the decoding of interrupts.  The 386 expects an 8-bit interrupt request number that corresponds to the interrupting device.  I’m used to microcontrollers, where you use a single GPIO line, but in this case, the interrupts are multiplexed.

For basic needs, you could do it with a demux IC.  That will work for a small number of interrupt lines.  Suppose I wanted more though?  How feasible is it to support many interrupt lines without tying up lots of GPIO lines?

CANBus has an interesting way of handling arbitration.  The “zeros” are dominant, and thus overrule “ones”.  The CAN transceiver is a full-duplex device, so as a station transmits, it listens to the state of the bus.  When some nodes want to talk (they are, of course, oblivious to each other’s intentions), they start by sending a start bit (a zero), which synchronises all nodes, then begin sending an address.

While every node is sending the same bit value, the receiving nodes see that value.  When a node tries sending a 1 while the others are sending 0s, it sees the disparity and concludes that it has lost arbitration.  Eventually, you’re left with a single node, which then proceeds to send its CANBus frame.

Now, we don’t need the complexity of CANBus to do what we’re after.  We can keep synchronisation by simple virtue that we can distribute a common clock (the one the CPU runs at).  Dominant and recessive bits can be implemented with transistors pulling down on a pull-up resistor, or a diode-OR: this will give us a system where ‘1’s are dominant.  Good enough.
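A minimal model of that bus, with ‘1’ as the dominant value: the diode-OR means the line reads 1 if any node drives a 1, and a node that drove a recessive 0 while the line reads 1 knows it has lost:

```python
def bus_level(driven_bits):
    """Diode-OR bus: the line carries a '1' if any node drives a '1'."""
    return int(any(driven_bits))

def lost_arbitration(my_bit, bus):
    """A node driving recessive '0' that reads back a dominant '1'
    has been overruled by someone else."""
    return my_bit == 0 and bus == 1

level = bus_level([0, 0, 1])   # one dominant driver wins the line
```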

So I fired up Logisim to have a fiddle, and came up with this:

Interrupt controller using logic gates

interrupt.circ is the actual Logisim circuit if you want to have a fiddle; decompress it first.  Please excuse the mess in the schematic.

On the left is the host-side of the interrupt controller.  This would ultimately interface with the 386.  On the right, are two “devices”, one on IRQ channel 0x01, the other on 0x05.  The controller handles two types of interrupts: “DMA interrupts”, where the device just wants to tell the DMA controller to put data into memory, or “IRQ”s, where we want to interrupt the CPU.

The devices are provided with the following control signals from the interrupt controller:

Signal        Controlled by  Description
DMA           Devices        Informs the IRQ controller whether we’re interrupting for DMA purposes (high) or need to tell the CPU something (low).
IRQ           Devices        Informs the IRQ controller that we want its attention.
ISYNC         Controller     Informs the devices that they have the controller’s attention and should start transmitting address bits.
IRQBIT[2…0]   Controller     Instructs the devices which bit of their IRQ address to send (0 = MSB, 7 = LSB).
IDA           Devices        The inverted address bit value corresponding to the bit selected by IRQBIT.
IACK          Devices        Asserted by the device that wins arbitration.

Due to the dominant/recessive nature of the bits, a higher-numbered device wins over lower-numbered ones. IRQ requests also dominate DMA requests.
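The arbitration round itself is just a loop over the address bits, MSB first.  This Python sketch drives the bits directly onto a dominant-‘1’ bus (the real circuit sends them inverted via the IDA line), so in this simplified form the highest-numbered requester wins:

```python
def arbitrate(requesters, width=8):
    """MSB-first arbitration on a wired-OR bus where '1' is dominant.
    A node that drove a recessive '0' while the bus reads '1' drops
    out; after `width` bit-times a single requester remains."""
    contenders = set(requesters)
    for bit in range(width - 1, -1, -1):          # IRQBIT: MSB first
        bus = int(any((r >> bit) & 1 for r in contenders))
        # Only nodes whose driven bit matches the bus stay in:
        contenders = {r for r in contenders if (r >> bit) & 1 == bus}
    (winner,) = contenders
    return winner

winner = arbitrate([0x01, 0x05])   # the two devices in the schematic
```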

In the schematic, the devices each have two D-flip-flops that are not driven by any control signals.  These are my “switches” for toggling the state of the device as a user.  The ones feeding into the XOR gate control the DMA signal, the others control the IRQ line.

Down the bottom, I’ve wired up a counter to count how long between the ISYNC signal going high and the controller determining a result.  This controller manages to determine which device requested its attention within 10 cycles.  If clocked at the same 20MHz rate as the CPU core, this would be good enough for getting a decoded IRQ channel number to the data lines of the 386 CPU by the end of its second IRQ acknowledge cycle, and can handle up to 256 devices.

A logical next step would be to look at writing this in Verilog and trying it out on an FPGA.  Thanks to the excellent work of Clifford Wolf in producing the IceStorm project, it is now possible to do this with completely open tools.  So, I’ve got a Lattice iCE40HX-8K FPGA board coming.  This should make a pretty mean SDRAM controller, interrupt controller and address decoder all in one chip, and should be a great introduction into configuring FPGAs.