Aug 13 2016
 

Sometimes I wonder.  Take this evening for example.

I recently purchased some microcontrollers to evaluate for a project, some Atmel ATTiny85s, because they have a rather nice PLL function which means they can do VHF-speed PWM, and some NXP LPC810s, because they happen to be the only DIP-package ARM chip on the market I know of.

The project I’m looking at is a re-work of my bicycle horn… the ATMega32U4 works well, but the LeoStick boards are expensive compared to a bare DIP MCU, and the wiring inside the original prototype is a mess.  I also never got USB working on them, so there’s no point in a USB-capable MCU.

I initially got some ATMega1284s owing to their flash storage, but being 40-pin DIPs they’re bigger than anticipated, and given they’ve got dual USARTs, lots of GPIOs and plenty of storage space, I figured I’d put them aside for another project.

What to use?  Well I have some AT89C2051s from way back (but no programmer for them), some ATTiny24As which I bought for my solar cluster project, an ATMega8L from another project, a LeoStick (Arduino Leonardo clone).  The LeoStick I’m in the process of turning into a debugWire debugger so that I can figure out what the ADCs are doing in my cluster’s power controller (ATTiny24A).

I started building a programmer for the ‘2051s using my ATMega8L last weekend.  The MAX232 IC I grabbed for serial I/O was giving me gibberish, and today I confirmed it was misbehaving.  The board has other problems too: after flashing the MCU, it seems to stay in reset, so I’ve got more work to do.  If I get that going, I was thinking I could store PCM recordings in an I²C EEPROM and use port 1 on the ‘2051 with an R-2R ladder DAC to play sound.  (These chips do not feature PWM.)

This morning, I thought the LPC810 might be worth a shot.  It only has 4kB of flash, half that of the ATTiny85, and its PWM capabilities aren’t as impressive, but they’re good enough.  The catch is that I really need about 16kB to store the waveforms in flash.  I do have some I²C EEPROMs, mostly <2kB ones sourced off old motherboards, but also a handful of 32kB ones that I had just bought especially for this… and then left behind on my desk at work.

I considered audio compression, and after experimenting with ADPCM-style techniques, came to the conclusion that I didn’t like the reduced audio quality.  It really sounded harsh.  (Okay, I realise 4 bits per sample is never going to win over the audiophiles!)

Maybe instead of PCM, I could do a crude polyphonic synthesizer?  My horn effect is in fact synthesized using a Python script: the same can be done in C, and the chip probably has the CPU grunt to do it.  It’d save the flash space as I’d be basically doing “poor man’s MIDI” on the thing.  Similar has been done before on lesser hardware.

I did some rough design of the data structures, and figured out a layout that would let me store the state of a “voice” in 8 bytes and describe note and timing events in 8-byte blocks.  So in a 2kB EEPROM I’d store 256 notes, and could easily accommodate 8 or 16 voices in RAM, provided the CPU could keep up at 30MHz.
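To make that concrete, here is the sort of 8-byte layout I have in mind.  This is only a sketch: the struct and field names and the exact field widths are illustrative guesses, not the final design.

/*
 * Hypothetical sketch of the 8-byte voice and event layouts.
 * Field names and widths are guesses for illustration only.
 */
#include <stdint.h>

/*! Run-time state of one synthesizer voice: 8 bytes */
struct voice_state {
  uint16_t frequency;   /* Oscillator pitch (phase increment per tick) */
  uint16_t phase;       /* Current phase accumulator value */
  uint8_t  amplitude;   /* Current output level */
  uint8_t  waveform;    /* Square, sawtooth, noise, etc. */
  uint16_t remaining;   /* Ticks left before this note ends */
};

/*! One note/timing event as stored in the EEPROM: 8 bytes */
struct note_event {
  uint16_t delay;       /* Ticks to wait after the previous event */
  uint8_t  voice;       /* Which voice this event drives */
  uint8_t  waveform;    /* Waveform select for that voice */
  uint16_t frequency;   /* Note pitch */
  uint8_t  amplitude;   /* Note level */
  uint8_t  duration;    /* Note length in ticks */
};

A 2kB EEPROM split into 8-byte blocks gives the 256 notes mentioned above, and 16 voices at 8 bytes each is only 128 bytes of RAM.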

So, I pull a chip out, slap it in my breadboard, and start hooking it up to power, and to my shiny new USB-TTL serial cable.  Fire up lpc21isp and, nothing, no response from the chip.  Huh?  Check wiring, probe around, still nothing.  Tried different baud rates, etc.  No dice.

This stubborn chip was not going to talk to lpc21isp.  Okay, let’s see if it’ll do SWD.  I dig out my STLink/V2 and hook that up.

OpenOCD reports no response from the device.

Great, maybe a dud chip.  After a good hour or so of fruitless poking and prodding, I pull it out of the breadboard and go to get another from the tube it came from when I notice “Atmel” written on the tube.

I look closer at the chip: it was an ATTiny85!  Different pin-out, different ISP procedure, and even if the .hex file had uploaded, it almost certainly would not have executed.

Swap the chip for an actual LPC810, and OpenOCD reports:

Open On-Chip Debugger 0.10.0-dev-00120-g7a8915f (2015-11-25-18:49)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
adapter speed: 10 kHz
adapter_nsrst_delay: 200
Info : Unable to match requested speed 10 kHz, using 5 kHz
Info : Unable to match requested speed 10 kHz, using 5 kHz
Info : clock speed 5 kHz
Info : STLINK v2 JTAG v23 API v2 SWIM v4 VID 0x0483 PID 0x3748
Info : using stlink api v2
Info : Target voltage: 2.979527
Warn : UNEXPECTED idcode: 0x0bc11477
Error: expected 1 of 1: 0x0bb11477
in procedure 'init'
in procedure 'ocd_bouncer'

I haven’t figured out the cause of this yet; perhaps the ST programmer doesn’t like talking to a competitor’s part. It’d be nice to get SWD going, since single-stepping code and peering into memory really spoils a developer like myself. I try lpc21isp again.

Success!  I see a LED blinking, consistent with the demo .hex file I loaded.  Of course now the next step is to try building my own, but at least I can load code onto the device now.

Jul 22 2016
 

Seems spying on citizens is the new black these days, most government “intelligence” agencies are at it in one form or another. Then the big software companies feel left out, so they join in the fun as well, funneling as much telemetry into their walled garden as possible. (Yes, I’m looking at you, Microsoft.)

This is something I came up with this morning. It’s incomplete, but maybe I can finish it off at some point. I wonder if Cortana has a singing voice?

Partial lyrics for the ASIO/GCHQ/NSA song book

Jul 17 2016
 

A little trick I just learned today. First, the scenario.

I have a driver for a USART port, the USART on the ATMega32U4 in fact. It uses a FIFO interface to represent the incoming and outgoing data.

I have a library that also uses a FIFO to represent the data to be sent and received on a USART.

I have an application that will configure the USART and pipe between that and the library.

Now, I could have each component implement its own FIFOs, and have the main application shovel data between them. That could work. But I don’t want to do this. I could have the user pass in a pointer to the FIFOs in initialisation functions for the USART driver and the library, but I don’t want to store the extra pointers or incur the additional overheads.

Turns out, you can define a symbol somewhere, then alias it to make two variables appear in the same place. This is done with the alias attribute, and it requires that the target is defined with the nocommon attribute.

In the USART driver, I’ve simply declared the FIFOs as extern entities. This tells the C compiler what to expect in terms of data type but does not define a location in memory. Within the driver, use the symbols as normal.

/* usart.h */

/*! FIFO buffer for USART receive data */
extern struct fifo_t usart_fifo_rx;

/*! FIFO buffer for USART transmit data */
extern struct fifo_t usart_fifo_tx;

/* usart.c */
static void usart_send_next() {
  /* Ready to send next byte */
  int16_t byte = fifo_read_one(&usart_fifo_tx);
  if (byte >= 0)
    UDR1 = byte;
}

ISR(USART1_RX_vect) {
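  /* Receive-complete interrupt: push the incoming byte straight into the RX FIFO */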
  fifo_write_one(&usart_fifo_rx, UDR1);
}

I can do the same for the protocol library.

/* External FIFO to host UART */
extern struct fifo_t proto_host_uart_rx, proto_host_uart_tx;

/*! External FIFO to target UART */
extern struct fifo_t proto_target_uart_rx, proto_target_uart_tx;

Now how do I link the two? They go by different names. I create aliases, that’s how.

/*
 * FIFO buffers for target communications.
 */
static struct fifo_t target_fifo_rx __attribute__((nocommon));
static uint8_t target_fifo_rx_buffer[128];
extern struct fifo_t usart_fifo_rx __attribute__((alias ("target_fifo_rx")));
extern struct fifo_t proto_target_uart_rx __attribute__((alias ("target_fifo_rx")));

static struct fifo_t target_fifo_tx __attribute__((nocommon));
static uint8_t target_fifo_tx_buffer[128];
extern struct fifo_t usart_fifo_tx __attribute__((alias ("target_fifo_tx")));
extern struct fifo_t proto_target_uart_tx __attribute__((alias ("target_fifo_tx")));

/*
 * FIFO buffers for host communications.
 */
static struct fifo_t host_fifo_rx __attribute__((nocommon));
static uint8_t host_fifo_rx_buffer[128];
extern struct fifo_t proto_host_uart_rx __attribute__((alias ("host_fifo_rx")));
static struct fifo_t host_fifo_tx __attribute__((nocommon));
static uint8_t host_fifo_tx_buffer[128];
extern struct fifo_t proto_host_uart_tx __attribute__((alias ("host_fifo_tx")));

Now a quick check with nm should reveal these to all be at the same locations:

RC=0 stuartl@vk4msl-mb ~/projects/debugwire/firmware $ avr-nm leodebug.elf \
     | grep '\(proto_.*_uart_.x\|host_fifo_.x\|target_fifo_.x\)'
0080022c b host_fifo_rx
008001ac b host_fifo_rx_buffer
0080019c b host_fifo_tx
0080011c b host_fifo_tx_buffer
0080022c B proto_host_uart_rx
0080019c B proto_host_uart_tx
0080034c B proto_target_uart_rx
008002bc B proto_target_uart_tx
0080034c b target_fifo_rx
008002cc b target_fifo_rx_buffer
008002bc b target_fifo_tx
0080023c b target_fifo_tx_buffer
Apr 27 2016
 

It seems good old “common courtesy” is absent without leave, as is “common sense”. Some would say it’s been absent for most of my lifetime, but to me it seems particularly so of late.

In particular, where it comes to the safety of one’s self, and to others, people don’t seem to actually think or care about what they are doing, and how that might affect others. To say it annoys me is putting it mildly.

In February, I lost a close work colleague in a bicycle accident. I won’t mention his name, as I do not have his family’s permission to do so.

I remember arriving at my workplace early on Friday the 12th before 6AM, having my shower, and about 6:15 wandering upstairs to begin my work day. Reaching my desk, I recall looking down at an open TS-7670 industrial computer and saying out aloud, “It’s just you and me, no distractions, we’re going to get U-Boot working”, before sitting down and beginning my battle with the machine.

So much for the “no distractions” however. At 6:34AM, the office phone rings. I’m the only one there and so I answer. It was a social worker looking for “next of kin” details for a colleague of mine. Seems they found our office details via a Cab Charge card they happened to find in his wallet.

Well, the first thing I do is start scrabbling for the office directory to get his home number so I can pass the bad news on to his wife, only to find he’s only listed his mobile number.  Great.  After getting in contact with our HR person, we later discover there aren’t any contact details in the employee records either.  He was around before such paperwork existed in our company.

Common sense would have dictated that one carry an “in case of emergency” number on a card in one’s wallet! At the very least let your boss know!

We find out later that morning that the crash happened on a particularly sharp bend of the Go Between Bridge, where the offramp sweeps left to join the Bicentennial Bikeway.  The bend narrows suddenly, with handlebar-height handrails running along its length and “Bicycle Only” signs clearly posted at each end.

Common sense and common courtesy would suggest you slow down on that bridge as a cyclist. Common sense and common courtesy would suggest you use the other side as a pedestrian. Common sense would question the utility of hand rails on a cycle path.

In the meantime our colleague is still fighting for his life, and we’re all holding out hope for him as he’s one of our key members. As for me, I had a network to migrate that weekend. Two of us worked the Saturday and Sunday.

Sunday evening, emotions hit me like a freight train as I realised I was in denial, and realised the true horror of the situation.

We later find out on the Tuesday, our colleague is in a very bad way with worst-case scenario brain damage as a result of the crash. From shining light to vegetable, he’d never work for us again.

Wednesday I took a walk down to the crash site to try and understand what happened. I took a number of photographs, and managed to speak to a gentleman who saw our colleague being scraped off the pavement. Even today, some months later, the marks on the railings (possibly from handlebar grips) and a large blood smear on the path itself can still be seen.

It was apparent that our colleague had hit this railing at some significant speed. He wasn’t obese, but he certainly wasn’t small, and a fully grown adult does not ricochet off a metal railing and slide face-first for over a metre without some serious kinetic energy involved.

Common sense seems to suggest the average cyclist travels much faster than the 20km/h impact the typical bicycle helmet is designed for under AS/NZS 2063:2008.

I took the Thursday and Friday off as time-in-lieu for the previous weekend, as I was an emotional wreck. The following Tuesday I resumed cycling to work, and that morning I tried an experiment to reproduce the crash conditions. The bicycle I ride wasn’t that much different to his, both bikes having 29″ wheels.

From what I could gather that morning, it seemed he veered right just prior to the bend then lost control, listing to the right at what I estimated to be about a 30° angle. What caused that? We don’t know. It’s consistent with him dodging someone or something on the path — but this is pure speculation on my part.

Mechanical failure? The police apparently have ruled that out. There’s not much in the way of CCTV cameras in the area, plenty on the pedestrian side, not so much on the cycle side of the bridge.

Common sense would suggest relying on a cyclist to remember what happened to them in a crash is not a good plan.

In any case, common sense did not win out that day. Our colleague passed away from his injuries a little over a fortnight after his crash, aged 46. He is sadly missed.

I’ve since made a point of taking my breakfast down to the spot where the bridge joins the cycleway.  It’s the place where my colleague had his last conscious thoughts.

Over the course of the last few months, I’ve noticed a number of things.

Most cyclists sensibly slow down on that bend, but a few race past at ludicrous speed. One morning, I thought there’d nearly be an encore performance as two construction workers on CityCycle bikes, sans helmets, came careening around the corner, one almost losing it.

Then I see the pedestrians. There’s a well-lit, covered walkway on the opposite side of the bridge for pedestrian use. It has bench seats, drinking fountains, good lighting, everything you’d want as a pedestrian. Yet some feel it is not worth the personal exertion to walk the extra 100m to make use of it.

Instead, they show a lack of courtesy by using the bicycle path. Walking on a bicycle path isn’t just dangerous to the pedestrian like stepping out onto a road, it’s dangerous for the cyclist too!

If a car hits a pedestrian or cyclist, the damage to the occupants of the car is going to be minimal to nonexistent, compared to what happens to the cyclist or pedestrian. If a cyclist or motorcyclist hits a pedestrian however, the rider surrounds the frame rather than being surrounded by it, and thus hits the ground first. Possibly at significant speed.

Yet, pedestrians think it is acceptable to play Russian roulette with their own lives and the lives of every cycle user by continuing to walk where it is not safe for them to go. They’d never do it on a motorway, but somehow a bicycle path is considered fair game.

Most pedestrians are understanding, I’ve politely asked a number to not walk on the bikeway, and most oblige after I point out how they get to the pedestrian walkway.

Common sense would suggest some signage on where the pedestrian can walk would be prudent.

However, I have had at least two that ignored me, one this morning telling me to “mind my own shit”. Yes mate, I am minding “my own shit” as you put it: I’m trying to stop the hypothetical me from possibly crashing into the hypothetical you!

It’s this sort of reaction that seems symbolic of the whole “lack of common courtesy” that abounds these days.

It’s the same attitude that seems to hint to people that it’s okay to park a car so that it blocks the footpath: newsflash, it’s not! One friend of mine frequently runs into this problem. He’s in a wheelchair — a vehicle not known for its off-road capabilities or its ability to squeeze through the narrow gap left by a car.

It seems the drivers think it’s acceptable to force footpath users of all types, including the elderly, the young and the disabled, to “step out” onto the road to avoid the car that they so arrogantly parked there. It makes me wonder how many people subsequently become disabled as a result of a collision caused by them having to step around such obstacles. Would the owner of the parked car be liable?

I don’t know, I’m no lawyer, but I should think they should carry some responsibility!

In Queensland, pedestrians have right-of-way on the footpath. That includes cyclists: cyclists of all ages are allowed there subject to council laws and signage — but once again, they need to give way. In other words, don’t charge down the path like a lunatic, and don’t block it!

No doubt the people I’m trying to convince are too arrogant to care about the above, or about what effect their actions might have on others. Still, I needed to get this off my chest!

Nothing will bring my colleague back, a fact that truly pains me, and I’ve learned some valuable lessons about the sort of encouragement I give people. I regret not telling him to slow down, 5 minutes longer wouldn’t have killed him, and I certainly did not want a race! Was he trying to race me so he could keep an eye on me? I’ll never know.

He was a bright person, but it is proof that even the intelligent among us are prone to doing stupid things. With thrills come spills, and one might question whether one’s commute to work is the appropriate venue for such thrills, or whether those can wait for another time.

I for one have learned that it does not pay to be the hare, thus I intend to just enjoy the ride for what it is. No need to rush, common sense tells me it just isn’t worth it!

Feb 12 2016
 

Hi all,

This is a bit of a brain dump so that I don’t forget this little tidbit in future.

Scenario

You have a shiny new Samba 4 Active Directory domain controller (or two) responsible for the domain ad.youroffice.example.com.  You have a couple of DNS servers that are responsible for non-AD parts of the domain and the parent youroffice.example.com.  To have everything go through one place, you’ve set up these servers with slave zones for ad.youroffice.example.com.

Joining your first Windows 7 client yields a message like this one.  You’re able to resolve yourdc.ad.youroffice.example.com on the client but not the _msdcs subdomain.

The fix

Configure your slaves to also sync _msdcs.ad.youroffice.example.com.

Example using bind

zone "ad.youroffice.example.com" {
        type slave;
        file "/var/lib/bind/db.ad.youroffice.example.com";
        masters { 10.20.30.1; 10.20.30.2; };
        allow-notify { 10.20.30.1; 10.20.30.2; };
};

zone "_msdcs.ad.youroffice.example.com" {
        type slave;
        file "/var/lib/bind/db._msdcs.ad.youroffice.example.com";
        masters { 10.20.30.1; 10.20.30.2; };
        allow-notify { 10.20.30.1; 10.20.30.2; };
};
Dec 06 2015
 

Recently, I learned about the IceStorm project, which is an effort to reverse engineer the Lattice iCE40-series of FPGAs.  I had run across FPGAs in my time before, but never really got to understand them.  This is for a few reasons:

  • The tools tended to be proprietary, with highly (unnecessarily?) restrictive licensing
  • FPGA boards were hellishly expensive

I wasn’t interested in doing the proprietary toolchain dance; I did enough of that with some TI stuff years ago.  There, it was the MSP430, and one of their DSPs.  For the former I could use gcc, but I still needed a proprietary build of gdbproxy to program and debug the device, and that needed Windows.  The latter could only be programmed using TI’s Code Composer Studio.

FPGAs were ten times worse.  Not only was the toolchain huge, occupying gigabytes, but the license was locked to the hardware.  The one project I did that involved an FPGA used an Altera part, and getting Quartus II to work was nothing short of a nightmare.  I gave up, and vowed never to touch FPGAs.

Fast forward 6 years, and things have changed.  We now have a Verilog synthesiser.  We now have a place-and-route tool.  We have tools for generating a bitstream for the iCE40 FPGAs.  We can now buy FPGA boards for well under $100.  Heck, you can buy them for $5.

Lattice can do one of three things at this point:

  • They can actively try to stomp it out (discontinuing the iCE40 family, filing law suits, …etc)
  • They can pretend it doesn’t exist
  • They can partner with us and help build a hobby market for their FPGAs

Time will tell as to what they choose.  I’m hoping it’s the last of these, but ignoring us is workable too.

So recently I bought an iCE40-HX8K breakout board.  This $80 board is pretty minimal: you get 8 LEDs, an FTDI USB-serial controller (which serves as the programmer), a small serial flash EEPROM (for configuration), a linear regulator, a 12MHz oscillator and four 40-pin headers for GPIOs.

The FPGA on this board is the iCE40HX8K-CT256.  At the time of writing, that’s the top of that particular series with 7680 look-up tables, two PLLs, and some integrated SPI/I²C smarts.

There’s not a lot in the way of tutorials for this particular board; most focus on the iCEstick, which uses the lesser iCE40HX1K-TQ144, has only a small handful of GPIOs exposed and has no configuration EEPROM (it’s one-time programmable).

Through some trial-and-error, and poring over the schematics, I managed to port Al Williams’ tutorial on Hackaday, at least in part, to the iCE40-HX8K board.  The code for this is on GitHub.

Pretty much everything works on this board, even PLLs and block RAM.  There’s an example using the PLL on the iCEstick in this VGA demo project.

Some things I’ve learned:

  • If you open jumper J7, and rotate the jumpers on J6 to run horizontally (strapping pins 1-2 and 3-4), specifying -S to iceprog will program the CRAM without touching the SPI flash chip.
  • The PLL ceases to lock when REFCLK/(1+DIV_R) drops to 10MHz or below.  (With this board’s 12MHz oscillator, that means DIV_R = 0 is the only setting that keeps that figure above 10MHz.)

FILTER_RANGE is a mystery though.  Haven’t figured out what the values correspond to.

It’s likely this particular board is destined to become a DRAM/Interrupt/DMA controller for my upcoming 386, but we’ll see.  In the meantime, I’m playing with a new toy. 🙂

Nov 24 2015
 

Some time back, Lenovo made the news with the Superfish fiasco.  Superfish was a piece of software that intercepted HTTPS connections by way of a trusted root certificate installed on the machine.  When the software detected a browser attempting to make a HTTPS connection, it would intercept it and connect on that software’s behalf.

When Superfish negotiated the connection, it would then generate on-the-fly a certificate for that website which it would then present to the browser.  This allowed it to spy on the web page content for the purpose of advertising.

Now Dell have been caught shipping an eDellRoot certificate on some of its systems.  Both laptops and desktops are affected.  This morning I checked the two newest computers in our office, both Dell XPS 8700 desktops running Windows 7.  Both had been built on the 13th of October, and shipped to us.  They both arrived on the 23rd of October, and they were both taken out of their boxes, plugged in, and duly configured.

I pretty much had two monitors and two keyboards in front of me, performing the same actions on both simultaneously.

Following configuration, one was deployed to a user, the other was put back in its box as a spare.  This morning I checked both for this certificate.  The one in the box was clean, the deployed machine had the certificate present.

Dell’s dodgy certificate in action

How do you check on a Dell machine?

A quick way is to hit Logo+R (Logo = “Windows Key”, “Command Key” on Mac, or whatever it is on your keyboard, some have a penguin) then type certmgr.msc and press ENTER. Under “Trusted Root Certification Authorities”, look for “eDellRoot”.

Another way is, using IE or Chrome, try one of the following websites:

(Don’t use Firefox: it has its own certificate store, thus isn’t affected.)

Removal

Apparently just deleting the certificate causes it to be re-installed after reboot.  qasimchadhar posted some instructions for removal, I’ll be trying these shortly:

You get rid of the certificate by performing following actions:

  1. Stop and Disable Dell Foundations Service
  2. Delete eDellRoot CA registry key here
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\98A04E4163357790C4A79E6D713FF0AF51FE6927
  3. Then reboot and test.

Future recommendations

It is clear that the manufacturers do not have their users’ interests at heart when they ship Windows with new computers.  Microsoft has recognised this and now promotes signature edition computers, which is a move I happen to support.  However, this should be standard, not an option.

There are two reasons why third-party software should not be bundled with computers:

  1. The user may not have a need or use for the said software, either not requiring its functionality or preferring an alternative.
  2. All non-trivial software is a potential security attack vector and must be kept up to date.  The version released on the OEM image is guaranteed to be at least months old by the time your machine arrives at your door, and will almost certainly be out-of-date when you come to re-install.

So we wind up either spending hours uninstalling unwanted or out-of-date crap, or we spend hours obtaining a fresh clean non-OEM installation disc, installing the bare OS, then chasing up drivers, etc.

This assumes the OEM image is otherwise clean.  It is apparent though that more than just demo software is being loaded on these machines, malware is being shipped.

With Dell and Lenovo both in on this act, it’s now a question of whether we can trust OEM installs.  Evidence seems to suggest that no, we can no longer trust such images, and have to consider all OS installations not done by the end user as suspect.

The manufacturers have abused our trust.  As far as convenience goes, we have been had.  It is clear that an OEM-supplied operating system does not offer any greater convenience to the end user, and instead, puts them at greater risk of malware attack.  I think it is time for this practice to end.

If manufacturers are unwilling to provide machines with images that would comply with Microsoft’s signature edition requirements, then they should ship the computer with a completely blank hard drive (or SSD) and unmodified installation media for a technically competent person (of the user’s choosing) to install.

Nov 21 2015
 

Well, in the last post I started to consider the thoughts of building my own computer from a spare 386 CPU I had liberated from an old motherboard.

One of the issues I face is implementing the bus protocol that the 386 uses, and decoding of interrupts.  The 386 expects an 8-bit interrupt request number that corresponds to the interrupting device.  I’m used to microcontrollers where you use a single GPIO line, but in this case, the interrupts are multiplexed.

For basic needs, you could do it with a demux IC.  That will work for a small number of interrupt lines.  Suppose I wanted more though?  How feasible is it to support many interrupt lines without tying up lots of GPIO lines?

CANBus has an interesting way of handling arbitration.  The “zeros” are dominant, and thus overrule “ones”.  The CAN transceiver is a full-duplex device, so as the station is transmitting, it listens to the state of the bus.  When some nodes want to talk (they are, of course, oblivious to each-others’ intentions), they start sending a start-bit (a zero) which synchronises all nodes, then begin sending an address.

While each node is sending the same “bit value”, the receiving nodes see that value.  As each node tries sending a 1 while the others are sending 0’s, it sees the disparity, and concludes that it has lost arbitration.  Eventually, you’re left with a single node that then proceeds to send its CANBus frame.

Now, we don’t need the complexity of CANBus to do what we’re after.  We can keep synchronisation by simple virtue that we can distribute a common clock (the one the CPU runs at).  Dominant and recessive bits can be implemented with transistors pulling down on a pull-up resistor, or a diode-OR: this will give us a system where ‘1’s are dominant.  Good enough.

So I fired up Logisim to have a fiddle, and came up with this:

Interrupt controller using logic gates

interrupt.circ is the actual LogiSim circuit if you wanted to have a fiddle; decompress it.  Please excuse the mess regarding the schematic.

On the left is the host-side of the interrupt controller.  This would ultimately interface with the 386.  On the right, are two “devices”, one on IRQ channel 0x01, the other on 0x05.  The controller handles two types of interrupts: “DMA interrupts”, where the device just wants to tell the DMA controller to put data into memory, or “IRQ”s, where we want to interrupt the CPU.

The devices are provided with the following control signals from the interrupt controller:

Signal        Controlled by  Description
DMA           Devices        Informs the IRQ controller if we’re interrupting for DMA purposes (high) or if we need to tell the CPU something (low).
IRQ           Devices        Informs the IRQ controller we want its attention.
ISYNC         Controller     Informs the devices that they have the controller’s attention and to start transmitting address bits.
IRQBIT[2…0]   Controller     Instructs the devices what bit of their IRQ address to send (0 = MSB, 7 = LSB).
IDA           Devices        The inverted address bit value corresponding to the bit pointed to by IRQBIT.
IACK          Devices        Asserted by the device that wins arbitration.

Due to the dominant/recessive nature of the bits, the highest numbered device wins over lesser devices. IRQ requests also dominate over DMA requests.
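To convince myself this behaves as intended, it helps to model the arbitration in software.  The following is a rough C sketch of the wired-OR arbitration only, not of the Logisim circuit: each contender drives its channel number MSB-first onto a bus where a ‘1’ is dominant, and a contender that drives ‘0’ but reads back ‘1’ drops out.  The active-low IDA signalling and the DMA/IRQ distinction are glossed over here.

/*
 * Software model of the wired-OR arbitration, for illustration only.
 * Each contender drives its channel number MSB-first onto a bus where
 * a '1' is dominant; a contender that drives '0' but reads back '1'
 * has lost and drops out.  The highest channel number wins.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_DEVICES 4

int main(void) {
  /* Channel numbers of the devices currently requesting attention */
  uint8_t address[NUM_DEVICES] = { 0x01, 0x05, 0x03, 0x7f };
  bool contending[NUM_DEVICES] = { true, true, true, true };
  int8_t bit;
  int dev;

  for (bit = 7; bit >= 0; bit--) {
    /* Wired-OR: the bus reads '1' if any contender drives a '1' */
    uint8_t bus = 0;
    for (dev = 0; dev < NUM_DEVICES; dev++)
      if (contending[dev])
        bus |= (address[dev] >> bit) & 1;

    /* Anyone who drove '0' but sees '1' has lost arbitration */
    for (dev = 0; dev < NUM_DEVICES; dev++)
      if (contending[dev] && bus && !((address[dev] >> bit) & 1))
        contending[dev] = false;
  }

  for (dev = 0; dev < NUM_DEVICES; dev++)
    if (contending[dev])
      printf("Won arbitration: channel 0x%02x\n", address[dev]);
  return 0;
}

Run with the channel numbers above, 0x7F wins, consistent with the highest numbered device winning in the circuit.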

In the schematic, the devices each have two D-flip-flops that are not driven by any control signals.  These are my “switches” for toggling the state of the device as a user.  The ones feeding into the XOR gate control the DMA signal, the others control the IRQ line.

Down the bottom, I’ve wired up a counter to count how long between the ISYNC signal going high and the controller determining a result.  This controller manages to determine which device requested its attention within 10 cycles.  If clocked at the same 20MHz rate as the CPU core, this would be good enough for getting a decoded IRQ channel number to the data lines of the 386 CPU by the end of its second IRQ acknowledge cycle, and can handle up to 256 devices.

A logical next step would be to look at writing this in Verilog and trying it out on an FPGA.  Thanks to the excellent work of Clifford Wolf in producing the IceStorm project, it is now possible to do this with completely open tools.  So, I’ve got a Lattice iCE40HX-8K FPGA board coming.  This should make a pretty mean SDRAM controller, interrupt controller and address decoder all in one chip, and should be a great introduction into configuring FPGAs.

Nov 07 2015
 

Well, I’ve been thinking a lot lately about single board computers. There’s a big market out there. Since the Raspberry Pi, there’s been a real explosion available to the small-end of town, the individual. Prior to this, development boards were mostly in the 4-figures sort of price range.

So we’re now rather spoiled for choice. I have a Raspberry Pi. There’s also the BeagleBone Black, Banana Pi, and several others. One gripe I have with the Raspberry Pi is the complete absence of any kind of analogue input. There’s an analogue line out, you can interface some USB audio devices (although I hear two is problematic), or you can get an I2S module.

There’s a GPU in there that’s capable of some DSP work and a CLKOUT pin that can generate a wide range of frequencies. That sounds like the beginnings of a decent SDR; however, there’s one glitch: while I can use the CLKOUT pin to drive a mixer and the GPIOs to do band selection, there’s nothing that will take that analogue signal and sample it.

If I want something wider than audio frequencies (and even a 192kHz audio CODEC is not guaranteed above ~20kHz) I have to interface to SPI, and the pickings are somewhat slim. Then I read this article on a DIY single board computer.

That got me thinking about whether I could do my own. At work we use the Technologic Systems TS-7670 single-board computers, and as nice as those machines are, they’re a little slow and RAM-limited. Something that could work as a credible replacement there too would be nice, the key needs being RS-485, Ethernet and an 85°C temperature rating.

Form factor is a consideration here, and I figured something modular, using either header pins or edge connectors would work. That would make the module easily embeddable in hobby projects.

Since all the really nice SoCs are BGA packages, I figured I’d first need to know how easy I could work with them. We’ve got a stack of old motherboards sitting in a cupboard that I figured I could raid for BGAs to play with, just to see first-hand how fine the pins were. A crazy thought came to me: maybe for prototyping, I could do it dead-bug style?

Key thing here being able to solder directly to a ball securely, then route the wire to its destination. I may need to glue it to a bit of grounded foil to keep the capacitance in check. So, the first step I figured, would be to try removing some components from the boards I had laying around to see this first-hand.

In amongst the boards I came across was one old 386 motherboard that I initially mistook for a 286 minus the CPU. The empty (PLCC) socket is for an 80387 math co-processor. The board was in the cupboard for a good reason, corrosion from the CMOS battery had pretty much destroyed key traces on one corner of the board.

Corrosion on a motherboard caused by a CMOS battery

I decided to take to it with the heat gun first. The above picture was taken post-heatgun, but you can see just how bad the corrosion was. The ISA slots were okay, and so were a stack of other useful IC sockets, ICs, passive components, etc.

With the heat gun at full blast, I’d just wave it over an area of interest until the board started to de-laminate, then with needle-nose pliers, pull the socket or component from the board. Sometimes the component simply dropped out.

At one point I heard a loud “plop”. Looking under the board, one of the larger surface-mounted chips had fallen off. That gave me an idea, could the 386 chip be de-soldered? I aimed the heat-gun directly at the area underneath. A few seconds later and it too hit the deck.

All in all, it was a successful haul.

Parts off the 386 motherboard

I also took apart an 8-bit ISA joystick card. It had some nice looking logic chips that I figured could be re-purposed. The real star though was the CPU itself:

Intel NG80386SX-20

The question comes up, what does one do with a crusty old 386 that’s nearly as old as I am? A quick search turned up this scanned copy of the Intel 80386SX datasheet. The chip has a 16-bit bus with 23 bits worth of address lines (bit 0 is assumed to be zero). It requires a clock that is double the chip’s operating frequency (there’s an internal divide-by-two). This particular chip runs internally at 20MHz. Nothing jumped out as being scary. Could I use this as a practice run for making an ARM computer module?

A dig around dug up some more parts:

More parts

In this pile we have…

I also have some SIMMs laying around, but the SDRAM modules look easier to handle since the controllers on board synchronise with what would otherwise be the front-side bus.  The datasheet does not give a minimum clock (although clearly this is not DC; DRAM does need to be refreshed) and mentions a clock frequency of 33MHz when set to run at a CAS latency of 1.  It just so happens that I have a 33MHz oscillator.  There’s a couple of nits in this plan though:

  • the SDRAM modules are 3.3V, the CPU is 5V: no problem, there are level conversion chips out there.
  • the SDRAM modules are 64 bits wide.  We’ll have to buffer the output into eight 8-bit registers.  Writes do a read-modify-write cycle, and we use a 2-to-4 decoder to select the CE pin on two of the registers from address bits 1 and 2 from the CPU.
  • Each SDRAM module holds 32MB.  We have a 23-bit address bus, which with 16-bit words gives us a total address space of 16MB.  Solution: the old 8-bit computers of yesteryear used bank-switching to address more RAM/ROM than they had address lines for.  We can interface an 8-bit register at I/O address 0x0000 (easily decoded with a stack of Schottky diodes and a NOT gate) which holds the remaining address bits, mapping the selected bank into the lower 8MB of physical memory.  We then hijack the 386’s MMU to map the 8MB chunks and use the page faults to switch memory banks.  (If we put the SRAM and ROM up in the top 1MB, this gives us ~7MB of memory-mapped I/O to play with.)  A sketch of the intended address decode follows this list.
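To illustrate the bank-switching idea, here is a software model of the address decode.  It’s purely a sketch: the window size, register width and all the names are my assumptions, not a settled design.

/*
 * Hypothetical model of the bank-switching address decode.  An 8-bit
 * latch at I/O port 0x0000 supplies the upper SDRAM address bits; the
 * CPU sees an 8MB window in the low part of its physical address space.
 */
#include <stdint.h>

#define BANK_WINDOW_BITS 23                     /* 8MB window = 2^23 bytes */
#define BANK_WINDOW_MASK ((1UL << BANK_WINDOW_BITS) - 1)

static uint8_t bank_register;                   /* latch at I/O 0x0000 */

/* Translate a CPU physical address inside the window into an SDRAM address */
uint32_t sdram_address(uint32_t cpu_address)
{
  return ((uint32_t)bank_register << BANK_WINDOW_BITS)
       | (cpu_address & BANK_WINDOW_MASK);
}

With two 32MB modules, only the low three bits of the bank register would actually be wired up; the page fault handler’s job would then just be to update the latch and retry the access.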

So, no show stoppers.  There’s an example circuit showing how to interface an ATMega8515 to a single SDRAM chip for driving a VGA interface, and some example code, with comments in German. Unfortunately you’d learn more German in an episode of Hogan’s Heroes than what I know, but I can sort of figure out the sequence used to read from and write to the SDRAM chip. Nothing looks scary there either.  This SDRAM tutorial seems to be a goldmine.
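For my own notes, the command sequence seems to boil down to something like the following.  This is only an outline of my current understanding: sdram_command(), sdram_sample_data(), wait_us() and wait_cycles() are hypothetical placeholders for whatever ends up driving RAS#/CAS#/WE# and the address lines, the mode register value is left as a placeholder, and the exact ordering and counts of the initialisation steps vary between datasheets.

/*
 * Rough outline of SDRAM initialisation and a single read, based on my
 * reading so far.  All helpers and timings here are hypothetical; check
 * the datasheet before trusting any of it.
 */
enum sdram_cmd { CMD_NOP, CMD_PRECHARGE_ALL, CMD_AUTO_REFRESH,
                 CMD_LOAD_MODE, CMD_ACTIVE, CMD_READ };

#define SDRAM_MODE 0x00   /* placeholder: encodes CAS latency, burst length, etc. */

void sdram_command(enum sdram_cmd cmd, unsigned row, unsigned col);
unsigned sdram_sample_data(void);   /* hypothetical: latch the data bus */
void wait_us(unsigned us);
void wait_cycles(unsigned cycles);

void sdram_init(void)
{
  wait_us(200);                             /* power-up delay: CKE high, NOPs */
  sdram_command(CMD_PRECHARGE_ALL, 0, 0);   /* close any open rows */
  for (int i = 0; i < 8; i++)               /* at least two refreshes; many parts want eight */
    sdram_command(CMD_AUTO_REFRESH, 0, 0);
  sdram_command(CMD_LOAD_MODE, 0, SDRAM_MODE);
}

unsigned sdram_read(unsigned row, unsigned col)
{
  sdram_command(CMD_ACTIVE, row, 0);        /* open the row */
  wait_cycles(2);                           /* tRCD */
  sdram_command(CMD_READ, 0, col);          /* column address */
  wait_cycles(1);                           /* CAS latency */
  return sdram_sample_data();
}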

Thus, it looks like I’ve got enough bits to have a crack at it.  I can run the 386 from that 33MHz brick, which will give me a chip running at 16.5MHz.  Somewhere I’ve got the 40MHz brick lying around from the motherboard (I liberated that some time ago), but that can wait.

A first step would be to try interfacing the 386 chip to an AVR, and feed it instructions one step at a time, check that it’s still alive.  Then, the next steps should become clear.

Oct 31 2015
 

Well, it seems the updates to Microsoft’s latest aren’t going as its maker planned. A few people have asked me about my personal opinion of this OS, and I’ll admit, I have no direct experience with it.  I also haven’t had much contact with Windows 8 either.

That said, I do keep up with the news, and a few things do concern me.

The good news

It’s not all bad of course.  Windows 8 saw a big shrink in the footprint of a typical Windows install, and Windows 10 continues to be fairly lightweight.  The UI disaster from Windows 8 has been somewhat pared back to provide a more traditional desktop with a start menu that combines features from the start screen.

There are some limitations with the new start menu, but from what I understand, it behaves mostly like the one from Windows 7.  The tiled section still has some rough edges though, something that is likely to be addressed in future updates of Windows 10.

If this is all that had changed though, I’d be happily accepting it.  Sadly, this is not the case.

Rolling-release updates

Windows has, since day one, been on a long-term support release model.  That is, they bring out a release, then they support it for X years.  Windows XP was released in 2001 and was supported until last year, for example.  Windows Vista is still on extended support, and Windows 7 will enter extended support soon.

Now, in the Linux world, we’ve had both long-term support releases and rolling release distributions for years.  Most of the current Linux users know about it, and the distribution makers have had many years to get it right.  Ubuntu have been doing this since 2004, Debian since 1998 and Red Hat since 1994.  Rolling releases can be a bumpy ride if not managed correctly, which is why the long-term support releases exist.  The community has recognised the need, and meets it accordingly.

Ubuntu are even predictable with their releases.  They release on a schedule.  Anything not ready for release is pushed back to the next release.  They do a release every 6 months, in April and October and every 2 years, the April release is a long-term support release.  That is; 8.04, 10.04, 12.04, 14.04 are all LTS releases.  The LTS releases get supported for about 3 years, the regular releases about 18 months.

Debian releases are basically LTS, unless you run Debian Testing or Debian Unstable.  Then you’re running rolling-release.

Some distributions like Gentoo are always rolling-release.  I’ve been running Gentoo for more than 10 years now, and I find the rolling releases rarely give me problems.  We’ve had our hiccups, but these days, things are smooth.  Updating an older Gentoo box to the latest release used to be a fight, but these days, is comparatively painless.

It took most of that 10 years to get to that point, and this is where I worry about Microsoft forcing the vast majority of Windows users onto a rolling-release model, as they will be doing this for the first time.  As I understand it, there will be four branches:

  1. Windows Insiders programme is like Debian Unstable.  The very latest features are pushed out to them first.  They are effectively running a beta version of Windows, and can expect many updates, many breakages, lots of things changing.  For some users, this will be fine, others it’ll be a headache.  There’s no option to skip updates, but you probably will have the option of resigning from the Windows Insiders programme.
  2. Home users basically get something like Debian Testing.  After updates have been thrashed out by the insiders, it gets force-fed to the general public.  The Home version of Windows 10 will not have an option to defer an update.
  3. Professional users get something more like the standard releases of Debian.  They’ll have the option of deferring an update for up to 30 days, so things can change less frequently.  It’s still rolling-release, but they can at least plan their updates to take place once a month, hopefully without disrupting too much.
  4. Enterprise users get something like the old-stable release of Debian.  Security updates, and they have the option to defer updates for a year.

Enterprise isn’t available unless you’re a large company buying lots of licenses.  If people must buy a Windows 10 machine, my recommendation would be to go for the Professional version; then you have some right of veto, as not all the updates are purely security-related, some will be changing the UI and adding/removing features.

I can see this being a major headache though for anyone who has to support hardware or software on Windows 10 however, since it’s essentially the build number that becomes important: different release builds will behave differently.  Possibly different enough that things need much more testing and maintenance than what vendors are used to.

Some are very poor at supporting Linux right now due to the rolling-release model of things like the Linux kernel, so I can see Windows 10 being a nightmare for some.

Privacy concerns

One of the big issues to be raised with Windows 10 is the inclusion of telemetry to “improve the user experience” and other features that are seen as an invasion of privacy.  Many things can be turned off, but it will take someone who’s familiar with the OS or good at researching the problem to turn them off.

Probably the biggest concern from my perspective as a network administrator is the WiFi Sense feature.  This is a feature in Windows 10 (and Windows 8 Phone), turned on by default, that allows you to share WiFi passwords with other contacts.

If one of that person’s contacts then comes into range of your AP, their device contacts Microsoft’s servers which have the password on file, and can provide it to that person’s device (hopefully in a secured manner).  The password is never shown to the user themselves, but I believe it’s only a matter of time before someone figures out how to retrieve that password from WiFi Sense.  (A rogue AP would probably do the trick.)

We have discussed this at work, where we have two WiFi networks: a WPA2 Enterprise one for staff, and a WPA2 Personal one for guests.  Since we cannot control whether the users have this feature turned on or not, or whether they might accidentally “share” the password with world + dog, we’re considering two options:

  1. Banning the use of Windows 10 devices (and Windows 8 Phone) from being used on our guest WiFi network.
  2. Implementing a cron job to regularly change the guest WiFi password.  (The Cisco AP we have can be hit with SSH; automating this shouldn’t be difficult.)

There are some nasty points in the end user license agreement too that seem to give Microsoft free rein to make copies of any of the data on the system.  They say personal information will be removed, but even with the best of intentions, it is likely that some personal information will get caught in the net cast by telemetry software.

Forced “upgrades” to Windows 10

This is the bit about Windows 10 that really bugs me.  Okay, Microsoft is pushing a deal where they’ll provide it to you for free for a year.  Free upgrades, yaay!  But wait: how do you know if your hardware and software is compatible?  Maybe you’re not ready to jump on the bandwagon just yet, or maybe you’ve heard news about the privacy issues or rolling release updates and decided to hold back.

Many users of Windows 7, 8 and 8.1 are now being force-fed the new release, whether we asked for it or not.

Now the problem with this is it completely ignores the fact that some do not run with an always-on Internet connection with a large quota.  I know people who only have a 3G connection, with a very small (1GB) quota.  Windows 10 weighs in at nearly 3GB, so for them, they’ll be paying for 2GB worth of overuse charges just for the OS, never mind what web browsing, emailing and other things they might have actually bought their Internet connection for.

Microsoft employees have been outed for showing such contempt before.  It seems so many there are used to the idea of an Internet connection that is always there and has a big enough quota to be considered “unlimited” that they have forgotten that some parts of the world do not have such luxuries.  The computer and the Internet are just tools: we do not buy an Internet connection just for the sake of having one.

Stopping updates

There are a couple of tools that exist for managing this.  I have not tested any of them, and cannot vouch for their safety or reliability.

  • BlockWindows (github link) is a set of scripts that, when executed, uninstall and disable most of the Windows 10-related updates on Windows 7 and 8/8.1.
  • GWX Control Panel is a (proprietary?) tool for controlling the GWX process.  The download is here.

My recommendation is to keep good backups.  Find a tool that will do a raw back-up of your Windows partition, and keep your personal files on a separate partition.  Then, if Microsoft does come a-knocking, you can easily roll back.  Hopefully after the “free upgrade” offer has expired (about this time next year), they will cease and desist from this practice.