November 2015

It is time we stopped trusting OEM-bundled operating systems

Some time back, Lenovo made the news with the Superfish fiasco.  Superfish was a piece of software that intercepted HTTPS connections by way of a trusted root certificate installed on the machine.  When it detected a browser attempting to make an HTTPS connection, it would intercept the connection and connect to the remote site on the browser’s behalf.

When Superfish negotiated the connection, it would generate a certificate for that website on the fly and present it to the browser.  This allowed it to spy on the web page content for the purpose of advertising.

Now Dell have been caught shipping an eDellRoot certificate on some of their systems.  Both laptops and desktops are affected.  This morning I checked the two newest computers in our office, both Dell XPS 8700 desktops running Windows 7.  Both had been built on the 13th of October and shipped to us; both arrived on the 23rd of October, and both were taken out of their boxes, plugged in, and duly configured.

I pretty much had two monitors and two keyboards in front of me, and performed the same actions on both machines simultaneously.

Following configuration, one was deployed to a user; the other was put back in its box as a spare.  This morning I checked both for this certificate.  The one in the box was clean; the deployed machine had the certificate present.

Dell’s dodgy certificate in action

How do you check on a Dell machine?

A quick way is to hit Logo+R (Logo = “Windows Key”, “Command Key” on Mac, or whatever it is on your keyboard, some have a penguin), then type certmgr.msc and press ENTER.  Under “Trusted Root Certification Authorities”, look for “eDellRoot”.
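
If you’d rather script the check, here’s a minimal Python sketch.  It is Windows-only (ssl.enum_certificates() wraps the Windows certificate store), and it looks for eDellRoot by its SHA-1 thumbprint, taken from the registry path quoted in the removal instructions below.

    import hashlib
    import ssl  # ssl.enum_certificates() is only available on Windows

    # SHA-1 thumbprint of eDellRoot, from the registry key name quoted
    # in the removal instructions below.
    EDELLROOT_SHA1 = "98A04E4163357790C4A79E6D713FF0AF51FE6927"

    found = any(
        hashlib.sha1(cert).hexdigest().upper() == EDELLROOT_SHA1
        for cert, encoding, trust in ssl.enum_certificates("ROOT")
    )
    print("eDellRoot present!" if found else "eDellRoot not found.")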

Another way is to visit one of the following websites using IE or Chrome:

(Don’t use Firefox: it has its own certificate store, and thus isn’t affected.)

Removal

Apparently just deleting the certificate causes it to be re-installed after a reboot.  qasimchadhar posted some instructions for removal; I’ll be trying these shortly:

You get rid of the certificate by performing following actions:

  1. Stop and Disable Dell Foundations Service
  2. Delete eDellRoot CA registry key here
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\98A04E4163357790C4A79E6D713FF0AF51FE6927
  3. Then reboot and test.
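
For what it’s worth, here’s the same procedure as a hedged Python sketch — run it from an elevated prompt, and treat the service name as an assumption (verify it in services.msc first):

    import subprocess
    import winreg

    # 1. Stop and disable the Dell service.  The name here is taken from
    #    the instructions above and may not match exactly; check services.msc.
    subprocess.run(["sc", "stop", "Dell Foundation Services"], check=False)
    subprocess.run(["sc", "config", "Dell Foundation Services",
                    "start=", "disabled"], check=False)

    # 2. Delete the eDellRoot certificate's registry key (it holds only a
    #    "Blob" value, so DeleteKey's no-subkeys restriction is satisfied).
    winreg.DeleteKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates"
        r"\98A04E4163357790C4A79E6D713FF0AF51FE6927",
    )

    # 3. Reboot, then re-run the check above to confirm it stayed gone.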

Future recommendations

It is clear that the manufacturers do not have their users’ interests at heart when they ship Windows with new computers.  Microsoft has recognised this and now promotes Signature Edition computers, a move I happen to support.  However, this should be the standard, not an option.

There are two reasons why third-party software should not be bundled with computers:

  1. The user may have no need or use for the software in question, either not requiring its functionality or preferring an alternative.
  2. All non-trivial software is a potential security attack vector and must be kept up to date.  The version released on the OEM image is guaranteed to be at least months old by the time your machine arrives at your door, and will almost certainly be out-of-date when you come to re-install.

So we wind up either spending hours uninstalling unwanted or out-of-date crap, or spending hours obtaining a fresh, clean, non-OEM installation disc, installing the bare OS, then chasing up drivers, etc.

This assumes the OEM image is otherwise clean.  It is apparent, though, that more than just demo software is being loaded onto these machines: malware is being shipped.

With Dell and Lenovo now both in on this act, it’s now a question of whether we can trust OEM installs.  The evidence seems to suggest that no, we can no longer trust such images, and we have to consider all OS installations not done by the end user as suspect.

The manufacturers have abused our trust.  As far as convenience goes, we have been had.  It is clear that an OEM-supplied operating system does not offer any greater convenience to the end user, and instead, puts them at greater risk of malware attack.  I think it is time for this practice to end.

If manufacturers are unwilling to provide machines with images that would comply with Microsoft’s signature edition requirements, then they should ship the computer with a completely blank hard drive (or SSD) and unmodified installation media for a technically competent person (of the user’s choosing) to install.

Interrupt controllers from logic gates

Well, in the last post I started considering building my own computer around a spare 386 CPU I had liberated from an old motherboard.

One of the issues I face is implementing the bus protocol that the 386 uses, and decoding of interrupts.  The 386 expects an 8-bit interrupt request number that corresponds to the interrupting device.  I’m used to microcontrollers where you use a single GPIO line, but in this case, the interrupts are multiplexed.

For basic needs, you could do it with a priority-encoder IC, which will work for a small number of interrupt lines.  Suppose I wanted more though?  How feasible is it to support many interrupt lines without tying up lots of GPIO lines?
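
To make that concrete, here’s the encoder approach modelled in Python.  It behaves like a 74HC148-style 8-to-3 priority encoder: eight request lines in, the highest-numbered active line out.

    # Model of an 8-to-3 priority encoder: eight IRQ lines in, a 3-bit
    # channel number out, the highest-numbered active line winning.
    def priority_encode(irq_lines):
        """irq_lines: sequence of 8 booleans, index = IRQ number."""
        active = [i for i, line in enumerate(irq_lines) if line]
        return max(active) if active else None

    print(priority_encode([0, 1, 0, 0, 0, 1, 0, 0]))  # -> 5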

CANBus has an interesting way of handling arbitration.  The “zeros” are dominant, and thus overrule “ones”.  The CAN transceiver is a full-duplex device, so as a station transmits, it listens to the state of the bus.  When several nodes want to talk (they are, of course, oblivious to each others’ intentions), each sends a start bit (a zero), which synchronises all nodes, then begins sending an address.

While every node is sending the same bit value, the receiving nodes see that value.  The moment a node tries sending a 1 while another is sending a 0, it sees the disparity and concludes that it has lost arbitration.  Eventually you’re left with a single node, which then proceeds to send its CANBus frame.

Now, we don’t need the complexity of CANBus to do what we’re after.  We can keep synchronisation by simple virtue that we can distribute a common clock (the one the CPU runs at).  Dominant and recessive bits can be implemented with transistors pulling down on a pull-up resistor, or a diode-OR: this will give us a system where ‘1’s are dominant.  Good enough.
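
Here’s that losing-node check as a toy Python model of the scheme just described — a wired-OR bus where ‘1’ is dominant, so the higher address wins (the inversion on the real IDA line below is ignored for clarity):

    # Each node drives one bit per clock, MSB first, onto a wired-OR
    # bus ('1' dominant).  A node that drove 0 but saw 1 has lost.
    nodes = {"A": 0b001, "B": 0b101}       # 3-bit addresses for brevity
    active = dict(nodes)
    for bit in (2, 1, 0):                  # MSB first
        driven = {n: (addr >> bit) & 1 for n, addr in active.items()}
        level = max(driven.values())       # wired-OR: any 1 wins the bus
        active = {n: a for n, a in active.items() if driven[n] == level}
    print(active)                          # {'B': 5} - higher address won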

So I fired up Logisim to have a fiddle, and came up with this:

Interrupt controller using logic gates

interrupt.circ is the actual Logisim circuit if you want to have a fiddle (decompress it first).  Please excuse the messy schematic.

On the left is the host-side of the interrupt controller.  This would ultimately interface with the 386.  On the right are two “devices”, one on IRQ channel 0x01, the other on 0x05.  The controller handles two types of interrupts: “DMA interrupts”, where the device just wants to tell the DMA controller to put data into memory, and “IRQs”, where we want to interrupt the CPU.

The controller and devices communicate using the following control signals (listed with which side drives each):

  • DMA (driven by devices): informs the IRQ controller whether the device is interrupting for DMA purposes (high) or needs to tell the CPU something (low).
  • IRQ (driven by devices): informs the IRQ controller that the device wants its attention.
  • ISYNC (driven by controller): informs the devices that they have the controller’s attention and should start transmitting address bits.
  • IRQBIT[2…0] (driven by controller): instructs the devices which bit of their IRQ address to send (0 = MSB, 7 = LSB).
  • IDA (driven by devices): the inverted address bit value corresponding to the bit selected by IRQBIT.
  • IACK (driven by devices): asserted by the device that wins arbitration.

Due to the dominant/recessive nature of the bits, the highest numbered device wins over lesser devices. IRQ requests also dominate over DMA requests.

In the schematic, the devices each have two D-flip-flops that are not driven by any control signals.  These are my “switches” for toggling the state of the device as a user; the ones feeding into the XOR gate control the DMA signal, the others control the IRQ line.

Down the bottom, I’ve wired up a counter to measure how long it takes between the ISYNC signal going high and the controller determining a result.  This controller manages to determine which device requested its attention within 10 cycles.  If clocked at the same 20MHz rate as the CPU core, this would be good enough to get a decoded IRQ channel number onto the data lines of the 386 CPU by the end of its second IRQ acknowledge cycle, and it can handle up to 256 devices.
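
As a sanity check on that 10-cycle figure, here’s a rough cycle-count model of the handshake in Python (a model of the description above, not of the Logisim netlist): one cycle for ISYNC, eight IRQBIT slots, one for IACK.

    def poll(devices):
        """devices: {channel: requesting} map.  Returns (winner, cycles)."""
        cycles = 1                          # ISYNC asserted
        in_race = {ch for ch, req in devices.items() if req}
        winner = 0
        for irqbit in range(8):             # IRQBIT = 0 (MSB) ... 7 (LSB)
            cycles += 1
            bit = max((ch >> (7 - irqbit)) & 1 for ch in in_race)
            in_race = {ch for ch in in_race
                       if ((ch >> (7 - irqbit)) & 1) == bit}
            winner = (winner << 1) | bit
        cycles += 1                         # winning device asserts IACK
        return winner, cycles

    print(poll({0x01: True, 0x05: True}))   # -> (5, 10)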

A logical next step would be to look at writing this in Verilog and trying it out on an FPGA.  Thanks to the excellent work of Clifford Wolf on the IceStorm project, it is now possible to do this with completely open tools.  So, I’ve got a Lattice iCE40HX-8K FPGA board coming.  This should make a pretty mean SDRAM controller, interrupt controller and address decoder all in one chip, and should be a great introduction to configuring FPGAs.

A DIY computer

Well, I’ve been thinking a lot lately about single board computers. There’s a big market out there. Since the Raspberry Pi, there’s been a real explosion of options available to the small end of town: the individual. Prior to this, development boards were mostly in the four-figure price range.

So we’re now rather spoiled for choice. I have a Raspberry Pi. There’s also the BeagleBone Black, Banana Pi, and several others. One gripe I have with the Raspberry Pi is the complete absence of any kind of analogue input. There’s an analogue line out, you can interface some USB audio devices (although I hear two is problematic), or you can get an I2S module.

There’s a GPU in there that’s capable of some DSP work and a CLKOUT pin that can generate a wide range of frequencies. That sounds like the beginnings of a decent SDR. However, there’s one glitch: while I can use the CLKOUT pin to drive a mixer and the GPIOs to do band selection, there’s nothing that will take the resulting analogue signal and sample it.

If I want something wider than audio frequencies (and even a 192kHz audio CODEC is not guaranteed much above ~20kHz), I have to interface an ADC over SPI, and the pickings are somewhat slim. Then I read this article on a DIY single board computer.

That got me thinking about whether I could do my own. At work we use the Technologic Systems TS-7670 single-board computers, and as nice as those machines are, they’re a little slow and RAM-limited. Something that could work as a credible replacement there would be nice too, the key needs being RS-485, Ethernet, and an 85°C temperature rating.

Form factor is a consideration here, and I figured something modular, using either header pins or edge connectors would work. That would make the module easily embeddable in hobby projects.

Since all the really nice SoCs are BGA packages, I figured I’d first need to know how easily I could work with them. We’ve got a stack of old motherboards sitting in a cupboard that I figured I could raid for BGAs to play with, just to see first-hand how fine the pins were. A crazy thought came to me: maybe, for prototyping, I could do it dead-bug style?

The key thing here is being able to solder directly to a ball securely, then route the wire to its destination. I may need to glue the chip to a bit of grounded foil to keep the capacitance in check. So the first step, I figured, would be to try removing some components from the boards I had laying around to see this first-hand.

In amongst the boards, I came across one old 386 motherboard that I initially mistook for a 286 board minus its CPU. The empty (PLCC) socket is in fact for an 80387 math co-processor. The board was in the cupboard for a good reason: corrosion from the CMOS battery had pretty much destroyed key traces on one corner of the board.

Corrosion on a motherboard caused by a CMOS battery

I decided to take to it with the heat gun first. The above picture was taken post-heatgun, but you can see just how bad the corrosion was. The ISA slots were okay, and so were a stack of other useful IC sockets, ICs, passive components, etc.

With the heat gun at full blast, I’d just wave it over an area of interest until the board started to de-laminate, then with needle-nose pliers, pull the socket or component from the board. Sometimes the component simply dropped out.

At one point I heard a loud “plop”. Looking under the board, one of the larger surface-mounted chips had fallen off. That gave me an idea, could the 386 chip be de-soldered? I aimed the heat-gun directly at the area underneath. A few seconds later and it too hit the deck.

All in all, it was a successful haul.

Parts off the 386 motherboard

I also took apart an 8-bit ISA joystick card. It had some nice looking logic chips that I figured could be re-purposed. The real star though was the CPU itself:

Intel NG80386SX-20

The question comes up: what does one do with a crusty old 386 that’s nearly as old as I am? A quick search turned up this scanned copy of the Intel 80386SX datasheet. The chip has a 16-bit data bus with 23 address lines (bit 0 is assumed to be zero; byte-enable lines select which bytes of a word are accessed). It requires a clock at double the chip’s operating frequency (there’s an internal divide-by-two). This particular chip runs internally at 20MHz. Nothing jumped out as being scary. Could I use this as a practice run for making an ARM computer module?

A dig around turned up some more parts:

More parts

In this pile we have…

I also have some SIMMs laying around, but the SDRAM modules look easier to handle, since the controllers on board synchronise with what would otherwise be the front-side bus.  The datasheet does not give a minimum clock (although clearly this is not DC; DRAM does need to be refreshed) and mentions a clock frequency of 33MHz when set to run at a CAS latency of 1.  It just so happens that I have a 33MHz oscillator.  There are a couple of nits in this plan though:

  • the SDRAM modules are 3.3V, the CPU is 5V: no problem, there are level-conversion chips out there.
  • the SDRAM modules are 64-bits wide.  We’ll have to buffer the output into eight 8-bit registers.  Writes do a read-modify-write cycle, and we use a 2-to-4 decoder, driven by address bits 1 and 2 from the CPU, to select the CE pins on two of the registers.
  • Each SDRAM module holds 32MB.  We have a 23-bit address bus, which with 16-bit words gives us a total address space of 16MB.  Solution: the 8-bit computers of yesteryear used bank-switching to address more RAM/ROM than they had address lines for, so we can interface an 8-bit register at I/O address 0x0000 (easily decoded with a stack of Schottky diodes and a NOT gate) to hold the remaining address bits, mapping the selected bank into the lower 8MB of physical memory (see the sketch below).  We then hijack the 386’s MMU to map the 8MB chunks and use page faults to switch memory banks.  (If we put the SRAM and ROM up in the top 1MB, this gives us ~7MB of memory-mapped I/O to play with.)
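
To make the bank arithmetic concrete, here’s a Python sketch; the window size and register width come from the plan above, everything else is illustrative:

    # The CPU sees 16MB of physical space; the lower 8MB is a banked
    # window into SDRAM, selected by an 8-bit register at I/O port 0x0000.
    WINDOW_BITS = 23                          # 8MB window = 2**23 bytes

    def sdram_address(bank_reg: int, cpu_addr: int) -> int:
        """Translate a CPU address inside the window to an SDRAM address."""
        assert 0 <= bank_reg <= 0xFF          # 8-bit bank register
        assert cpu_addr < (1 << WINDOW_BITS), "outside the banked window"
        return (bank_reg << WINDOW_BITS) | cpu_addr

    # 256 banks x 8MB addresses up to 2GB, far more than the two 32MB
    # modules on hand (each module spans 4 banks).
    print(hex(sdram_address(0x03, 0x1234)))   # -> 0x1801234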

So, no show-stoppers.  There’s an example circuit showing how to interface an ATMega8515 to a single SDRAM chip for driving a VGA interface, along with some example code, with comments in German. Unfortunately you’d learn more German in an episode of Hogan’s Heroes than I know, but I can sort-of figure out the sequence used to read and write from/to the SDRAM chip. Nothing looks scary there either.  This SDRAM tutorial seems to be a goldmine.
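
For anyone following along with that code, the sequence it drives is the standard JEDEC one.  As a reference sketch (generic JEDEC encodings, nothing specific to these modules):

    # Standard JEDEC SDRAM commands, encoded on (nRAS, nCAS, nWE) with
    # nCS held low.  Read data appears CAS-latency clocks after READ.
    COMMANDS = {
        "NOP":          (1, 1, 1),
        "ACTIVE":       (0, 1, 1),  # open a row in a bank
        "READ":         (1, 0, 1),  # column address; data after CAS latency
        "WRITE":        (1, 0, 0),
        "PRECHARGE":    (0, 1, 0),  # close the open row
        "AUTO_REFRESH": (0, 0, 1),  # must be issued periodically
        "LOAD_MODE":    (0, 0, 0),  # set burst length, CAS latency, etc.
    }

    # A single read boils down to (NOPs pad out tRCD and CAS latency):
    read_sequence = ["ACTIVE", "NOP", "READ", "NOP", "NOP", "PRECHARGE"]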

Thus, it looks like I’ve got enough bits to have a crack at it.  I can run the 386 from that 33MHz brick, which will give me a chip running at 16.5MHz.  Somewhere I’ve got the 40MHz brick from the motherboard laying around (I liberated that some time ago), but that can wait.

A first step would be to try interfacing the 386 chip to an AVR, feeding it instructions one step at a time, and checking that it’s still alive.  Then the next steps should become clear.