
Teardown: Leader SN4PROv3

The other victims of the Brisbane 2022 flood at my workplace are a pile of Leader SN4PROv3 “NUC” clones, which, like the ruggedised Intel NUCs, were purchased as cheap “PLC”s. Out of the box, they run Windows 10, which seems insane given that, like the Intel NUCs, they only have 4GB of RAM and sport dual-core Celerons running at a whopping 1.1GHz. (Wow, let’s see that play Crysis!)

Most of these actually survived pretty well, with all but one booting. I had visually inspected each by opening the case and having a look inside, but obviously on this one specimen, I missed something. So let’s crack it open and have a look.

Leader SN4PROv3

First step is to move the rubber feet out of the way and remove the 4 case screws hiding beneath.

The four case screws hide beneath the rubber feet (which I have moved)

The Leader website claims these machines run eMMC, but all the units I have here appear to use M.2 SATA SSDs instead. I’d consider that an upgrade. My guess is that the first of this model had eMMC, but then the chip shortage bit, so these ones were endowed with SATA. The footprint for the eMMC looks to be just near the battery.

In any case, that SSD is in the way, making the screws to the left of it hard to reach, so let’s get it out of the way.

Removing the SSD to access board screws

The SSD here is a “Kston”-brand SSD… Not Kingston, don’t be fooled by the lettering…

The SSD… not pretending to be a “Kingston”, honest!!!

Anyway, having done that, the screws that hold the board down are now more accessible, so let’s get them out.

PCB screws

Finally, to actually get the board out, we’ll need to pop the rear cover out. There are four little plastic catches that we’ll need to push down and out to release the rear panel. A flat-blade screwdriver works for this.

Removing the rear panel

Having done this, you’ll note the board now rattles back and forth. Use the new opening to push the board out of its resting place.

Removing the board, pull it up in the direction shown.

As you do this, you’ll note the case will still interfere, but now at least you should be able to bend the plastic out of the way.

Bending the case to get the connectors past

Now, at this point you’ll note there are some coax feeds connecting two stick-on antennas to the case. The mainboard end is socketed, but the case end is just soldered on with no strain relief!

The stick-on WiFi/Bluetooth antennas

The heatsink/fan assembly can now be seen, and it too is held on by four screws.

Heatsink/Fan screws

Undo these, and we should be staring at the CPU.

Ohh, so that’s what was causing your POST issue?

We can see the culprit that caused the failed POST: there’s a tiny 8-pin chip in amongst that rust residue, and of course I’m fresh out of isopropyl alcohol spray… so we’ll try some circuit board cleaner on this and see if she goes afterwards.

Teardown: Intel NUC8CCHKRN2

So, following on from the fun and games of migrating a network, the other outcome of the flood is cleaning out hardware that wasn’t lucky enough to escape the flood waters. My workplace does a lot of industrial automation and meter integration work, and the tool of choice lately has been Linux machines with NodeRED (this is what WideSky Edge is built on).

Seeing a chip shortage coming, they bought up a big stockpile of these ruggedised Intel NUCs. These machines are no front-page news spec-wise (dual-core Intel Celeron, 4GB RAM, 64GB eMMC), but are good enough for the task. However, mother nature had other plans, and machines that were not IPX7 rated got a dunking no one anticipated. Miraculously, I’ve managed to get most of them going. Initially, when I got them home, I gave them all a rinse in tap water just to wash away any mud and other muck that may have been in the river water, then left them to dry for a few days.

This didn’t help matters, with units still refusing to power on, so I opened each one up and sprayed it with isopropyl alcohol, using an entire 300g can of the stuff across 14 rugged NUCs. I tried this on one at first… and found, to my amazement, that it booted!

She lives!!!

I did find, though, that there was no documentation on how to disassemble one of these. Intel’s docs don’t even tell you, and much of what I did find was YouTube videos… apparently people are incapable of taking still photos and documenting the process in plain text. Never mind, I figured it out, and will document the procedure here.

I’m not going to bother identifying chips, this isn’t iFixIt. I’m primarily concerned about cleaning off any muck that’s causing the boot failure.

Disassembly

So presumably you’ve got one of these NUCs in front of you.

The NUC8CCHKRN2

First step is to flip it over (front still facing you), and you’ll see 4 very obvious screws. These are captive screws, so no risk of them rattling loose.

The screws for opening the case.

Inside we see the WiFi/Bluetooth module, a space for an M.2 SSD/peripheral, the RTC battery and the top of the main board.

Inside the bottom cover

I found it easiest to first remove the screw and nut that hold the WiFi card in place. This is a spacer+screw assembly which does double duty, holding the WiFi card down and holding whatever full-length M.2 peripheral you put in there. Pliers work for rotating that nut. Removal will allow us to better get the coax antenna feeds out of the way to access the screw beneath.

The nut+screw assembly holding the WiFi card down

Now that we have that out of the way, we can remove the screws that actually hold the inner chassis down. There are two screws, one on each side.

The screws holding the inner chassis frame in place

Now, to finally release the inner frame, there are two catches, one each side of the case.

The plastic catches holding the chassis frame in place

You’ll want to push the plastic outwards whilst pulling upward on the USB / Ethernet connectors. Do one side, pulling the frame out by 5mm, then do the other.

Catches away!

Having done this, you’ll find that continuing to pull on these connectors draws the whole assembly out. From here, we can focus on extracting the PCB from the inner chassis frame. The PCB is fixed by two screws, which the silkscreen helpfully points out.

The inner chassis removed, two screws fix the PCB itself.

Undo these two screws, then flip the board over; you’ll find four more that bolt the heatsink to the CPU.

Heatsink screws

These screws are captive like the case screws in the initial step, so just undo each a few turns, move to the next screw and loosen it, and keep going around until the PCB comes loose.

…and we’re in!

The PCB should now come free and you’ll be able to see the CPU and RAM (which are soldered-on).

Network juju on the fly: migrating a corporate network to VPN-based connectivity

So, this week mother nature threw South East Queensland a curve-ball like none of us have seen in over a decade: a massive flood. My workplace, VRT Systems / WideSky.Cloud Pty Ltd resides at 38b Douglas Street, Milton, which is a low-lying area not far from the Brisbane River. Sister company CETA is just two doors down at no. 40. Mid-February, a rain depression developed in the Sunshine Coast Hinterland / Wide Bay area north of Brisbane.

That weather system crawled all the way south, hitting Brisbane with constant heavy rain for 5 days straight… eventually creeping down to the Gold Coast and over the border to the Northern Rivers part of NSW.

The result on our offices was devastating. (Copyright notice: these images are placed here for non-commercial use with permission of the original photographers… contact me if you wish to use these photos and I can forward your request on.)

Some of the stock still worked after the flood: the Siemens UH-40s pictured were still working (bar a small handful) and normally sell for high triple figures. The WideSky Hubs and CET meters all feature a conformal coating on the PCBs that makes them robust to water ingress, and the Wavenis meters are potted, sealed units. So it’s not a total loss, but there is a big pile of expensive EDMI meters (Mk7s and Mk10s) that are not economic to salvage due to approval requirements, and that is going to hurt!

Le Mans Motors, pictured in those photos, is an automotive workshop, so it would have had lots of lubricants, oils and grease in stock for servicing vehicles. Much of those contaminants were now across the street, so washing that stuff off the surviving stock was the order of the day for much of Wednesday, before demolition day on Friday.

As for the server equipment, knowing that this was a flood-prone area (which also, by the way, means insurance is non-existent), we deliberately put our server room on the first floor, well above the known flood marks of 1974 and 2011. This flood didn’t get that high, reaching about chest-height on the ground floor. Thus, aside from some desktops, laptops, a workshop (including a $7000 oscilloscope belonging to an employee), a new coffee machine (that hurt the coffee drinkers), and lots of furniture/fittings, most of the IT equipment came through unscathed. The servers “had the opportunity to run without the burden of electricity”.

We needed our stuff working, so the first job was to rescue the machines from the waterlogged building and set them up elsewhere. Elsewhere wound up being the homes of some of our staff with beefy NBN Internet connections. Okay, not beefy compared to the 500Mbps symmetric microwave link we had, but 50Mbps uplinks were not to be snorted at in this time of need.

The initial plan was this: the machines that once shared an Ethernet switch would now be in physically separate locations, but we still needed everything to look like the old network. We also didn’t want to run more VPN tunnels than necessary. Enter OpenVPN in L2 mode.

Establishing the VPN server

By this point, I had deployed a temporary VPN server as a VPS in a Sydney data centre. This was a plain-jane Ubuntu 20.04 box with a modest amount of disk and RAM, but hopefully decent CPU grunt for the amount of cryptographic operations it was about to do.

Most of our customer sites used OpenVPN tunnels, so I migrated those first; I had managed to grab a copy of the running server config as the waters rose, before the power tripped out. I copied that config over to the new server, started up OpenVPN, opened a UDP port to the world, then fiddled DNS to point the clients at the new box. They soon joined.
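
In practical terms that boiled down to something like the following on the new VPS (a rough sketch only, assuming the salvaged config is dropped in as /etc/openvpn/server.conf and that ufw is managing the firewall; the port shown is a placeholder for whatever the config actually specifies):

$ sudo apt install openvpn
$ sudo cp server.conf /etc/openvpn/server.conf    # the config rescued from the old server
$ sudo systemctl enable --now openvpn@server      # start the instance named after the config file
$ sudo ufw allow 1194/udp                         # placeholder port; match the config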

Connecting staff

Next problem was getting the staff linked — originally we used a rather aging Cisco router with its VPN client (or vpnc on Linux/BSD), but I didn’t feel like trying to experiment with an IPSec server to replicate that — so up came a second OpenVPN instance, on a new subnet. I got the Engineering team to run the following command to generate a certificate signing request (CSR):

openssl req -newkey rsa:4096 -nodes -keyout <name>.key -out <name>.req

They sent me their .req files, and I used EasyRSA v3 to manage a quickly-slapped-together CA to sign the requests. Downloading them via Slack required that I fish them out of the place where Slack decided to put them (without asking me) and place each one in the correct directory. Sometimes I had to rename the file too (it doesn’t ask you what you want to call it either) so that it had a .req extension. Having imported a request, I could sign it.

$ mv ~/Downloads/theclient.req pki/reqs/
$ ./easyrsa sign-req client theclient

A new file pki/issued/theclient.crt could then be sent back to the user. I also provided them with pki/ca.crt and a configuration file derived from the example configuration files. (My example came from OpenBSD’s OpenVPN package.)
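
For the curious, that client configuration amounted to something along these lines (a sketch only; the hostname, port and certificate file names below are placeholders rather than the real values):

# Staff VPN client config (illustrative placeholders throughout)
client
dev tun
proto udp
remote vpn.example.com 1195
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert theclient.crt
key theclient.key
remote-cert-tls server
cipher AES-256-CBC
verb 3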

They were then able to connect and see all the customer site VPNs, so they could do remote support. Great. So far so good. Now the servers.

Server connection VPN

For this, a third OpenVPN daemon was deployed on another port, this time in L2 mode (dev tap) rather than L3 mode. In addition, I had servers on two different VLANs, and I didn’t want to deploy yet more VPN servers and clients, so I decided to try tunnelling 802.1Q. This required boosting the MTU from the default of 1500 to 1518 to accommodate the 802.1Q VLAN tag.

The VPN server configuration looked like this:

port 1196
proto udp
dev tap
ca l2-ca.crt
cert l2-server.crt
key l2-server.key
dh data/dh4096.pem
server-bridge
client-to-client
keepalive 10 120
cipher AES-256-CBC
persist-key
persist-tun
status /etc/openvpn/l2-clients.txt
verb 3
explicit-exit-notify 1
tun-mtu 1518

In addition, we had to tell netplan to create some bridges, so we created /etc/netplan/vpn.yaml, which looked like this:

network:
    version: 2
    ethernets:
        # The VPN tunnel itself
        tap0:
            mtu: 1518
            accept-ra: false
            dhcp4: false
            dhcp6: false
    vlans:
        vlan10-phy:
            id: 10
            link: tap0
        vlan11-phy:
            id: 11
            link: tap0
        vlan12-phy:
            id: 12
            link: tap0
        vlan13-phy:
            id: 13
            link: tap0
    bridges:
        vlan10:
            interfaces:
                - vlan10-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.10.1/24
                - 2001:db8:10::1/64
        vlan11:
            interfaces:
                - vlan11-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.11.1/24
                - 2001:db8:11::1/64
        vlan12:
            interfaces:
                - vlan12-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.12.1/24
                - 2001:db8:12::1/64
        vlan13:
            interfaces:
                - vlan13-phy
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.13.1/24
                - 2001:db8:13::1/64

Those aren’t the real VLAN IDs or IP addresses, but you get the idea. Bridging on the cloud end isn’t strictly necessary, but it does mean we can do other forms of tunnelling if needed.
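
Once that YAML was in place, applying and sanity-checking it was just a matter of something like this (a sketch, assuming iproute2 is available; note the VLAN and bridge interfaces only appear once OpenVPN has brought tap0 up):

$ sudo netplan apply
$ ip -d link show vlan10-phy    # confirm the 802.1Q sub-interface exists on tap0
$ bridge link show              # confirm it has joined the vlan10 bridge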

On the clients, we did something very similar. OpenVPN client config:

client
dev tap
proto udp
remote vpn.example.com 1196
resolv-retry infinite
nobind
persist-key
persist-tun
ca l2-ca.crt
cert l2-client.crt
key l2-client.key
remote-cert-tls server
cipher AES-256-CBC
verb 3
tun-mtu 1518

and for netplan:

network:
    version: 2
    ethernets:
        tap0:
            accept-ra: false
            dhcp4: false
            dhcp6: false
    vlans:
        vlan10-eth:
            id: 10
            link: eth0
        vlan11-eth:
            id: 11
            link: eth0
        vlan12-eth:
            id: 12
            link: eth0
        vlan13-eth:
            id: 13
            link: eth0
        vlan10-vpn:
            id: 10
            link: tap0
        vlan11-vpn:
            id: 11
            link: tap0
        vlan12-vpn:
            id: 12
            link: tap0
        vlan13-vpn:
            id: 13
            link: tap0
    bridges:
        vlan10:
            interfaces:
                - vlan10-vpn
                - vlan10-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.10.2/24
                - 2001:db8:10::2/64
        vlan11:
            interfaces:
                - vlan11-vpn
                - vlan11-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.11.2/24
                - 2001:db8:11::2/64
        vlan12:
            interfaces:
                - vlan12-vpn
                - vlan12-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.12.2/24
                - 2001:db8:12::2/64
        vlan13:
            interfaces:
                - vlan13-vpn
                - vlan13-eth
            accept-ra: false
            link-local: [ ipv6 ]
            addresses:
                - 10.0.13.2/24
                - 2001:db8:13::2/64

I also tried using a Raspberry Pi running Debian; its /etc/network/interfaces config looked like this:

auto eth0
iface eth0 inet dhcp
        mtu 1518

auto tap0
iface tap0 inet manual
        mtu 1518

auto vlan10
iface vlan10 inet static
        address 10.0.10.2
        netmask 255.255.255.0
        bridge_ports tap0.10 eth0.10
iface vlan10 inet6 static
        address 2001:db8:10::2
        netmask 64

auto vlan11
iface vlan11 inet static
        address 10.0.11.2
        netmask 255.255.255.0
        bridge_ports tap0.11 eth0.11
iface vlan11 inet6 static
        address 2001:db8:11::2
        netmask 64

auto vlan12
iface vlan12 inet static
        address 10.0.12.2
        netmask 255.255.255.0
        bridge_ports tap0.12 eth0.12
iface vlan12 inet6 static
        address 2001:db8:12::2
        netmask 64

auto vlan13
iface vlan13 inet static
        address 10.0.13.2
        netmask 255.255.255.0
        bridge_ports tap0.13 eth0.13
iface vlan13 inet6 static
        address 2001:db8:13::2
        netmask 64
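
One caveat with the ifupdown approach: the dotted tap0.10 / eth0.10 names in the bridge_ports lines rely on the bridge-utils package (and typically the vlan package) being installed so the sub-interfaces get created on the fly. A quick sketch of the prerequisites:

$ sudo apt install bridge-utils vlan
$ sudo ifup vlan10    # brings up the bridge and its VLAN sub-interfaces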

Having done this, we had the ability to expand our virtual “L2” network by simply adding more clients on other home Internet connections; the bridges would allow all the servers to see each other as if they were connected to the same Ethernet switch.
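
A quick way to convince yourself the bridging is doing its job is to check that hosts at the other sites show up as direct neighbours on the bridge (addresses here follow the placeholder scheme above):

$ ping -c 3 10.0.10.1       # the cloud end's address on the vlan10 bridge in the examples above
$ ip neigh show dev vlan10  # its MAC address should appear as a neighbour entry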