The other victims of the Brisbane 2022 flood at my workplace are a pile of Leader SN4PROv3 “NUC” clones which, like the ruggedised Intel NUCs, were purchased as cheap “PLC”s. Out of the box, they run Windows 10, which seems insane given that, like the Intel NUCs, they only have 4GB RAM and sport dual-core Celerons running at a whopping 1.1GHz. (Wow, let’s see that play Crysis!)
Most of these actually survived pretty well, with all but one booting. I had visually inspected each by opening the case and having a look inside, but obviously on this one specimen, I missed something. So let’s crack it open and have a look.
First step is to move the rubber feet out of the way and remove the 4 case screws hiding beneath.
The Leader website claims these machines run eMMC. It appears none of the units I have here use eMMC; rather, they carry M.2 SATA SSDs. I’d consider that an upgrade. My guess is maybe the first of this model had eMMC, but then the chip shortage bit and so they endowed these ones with SATA. The footprint for the eMMC looks to be just near the battery.
In any case, that SSD is in the way making the screws to the left of it hard to reach, so let’s get it out of the way.
The SSD here is a “Kston”-brand SSD… Not Kingston, don’t be fooled by the lettering…
Anyway, having done that, the screws that hold the board down are now more accessible, so let’s get them out.
Finally, to actually get the board out, we’ll need to pop the rear cover out. There are four little plastic catches that we’ll need to push down and out to release the rear panel. A flat-blade screwdriver works for this.
Having done this, you’ll note the board now rattles back and forth. Use the new opening to push the board out of its resting place.
As you do this, you’ll note the case will still interfere, but now at least you should be able to bend the plastic out of the way.
Now, at this point you’ll note there are some coax feeds connecting two stick-on antennas to the case. The mainboard end is socketed, but the case ends are just soldered on with no strain-relief!
The heatsink/fan assembly can now be seen, and it too is held on by four screws.
Undo these, and we should be staring at the CPU.
We can see the culprit here that caused the failed POST… there’s a tiny 8-pin chip in amongst that rust residue, and of course I’m fresh out of the isopropyl alcohol spray… so we’ll try some circuit board cleaner on this and see if she goes afterwards.
So, following on from the fun and games of migrating a network, the other outcome of the flood is cleaning out hardware that wasn’t so lucky to escape the flood waters. My workplace does a lot of industrial automation and meter integration work, and the tool of choice lately has been Linux machines with NodeRED (this is what WideSky Edge is built on).
Seeing a chip shortage coming, they bought up a big stockpile of these ruggedised Intel NUCs. These machines are no front-page news spec-wise (dual-core Intel Celeron, 4GB RAM, 64GB eMMC), but are good enough for the task. However, mother nature had other plans, and machines that were not IPX7 rated got a dunking no one anticipated. Miraculously, I’ve managed to get most of them going. Initially, when I got them home, I gave them all a rinse in tap water to wash away any mud and other muck that may have been in the river water, then left them to dry for a few days.
The rinse alone didn’t help matters, with units still refusing to power on, so I opened each one up and sprayed them with some isopropyl alcohol spray… using an entire 300g can of the stuff on 14 rugged NUCs. I tried this on one at first… and found to my amazement, it booted!
I did find though, there was no documentation on how to disassemble one of these. Intel’s docs don’t even tell you. And much of what I did find was YouTube videos… apparently people are incapable of taking still photos and documenting the process in plain text. Never mind, I figured it out, and will document the procedure here.
I’m not going to bother identifying chips, this isn’t iFixIt. I’m primarily concerned about cleaning off any muck that’s causing the boot failure.
So presumably you’ve got one of these NUCs in front of you.
First step is to flip it over (front still facing you), and you’ll see 4 very obvious screws. These are captive screws, so no risk of them rattling loose.
Inside we see the WiFi/Bluetooth module, a space for an M.2 SSD/peripheral, the RTC battery and the top of the main board.
The first step I found easiest is to remove the screw and nut that holds the WiFi card in place. This is a spacer+screw assembly which does double-duty of holding the WiFi card down, and holding whatever full-length M.2 peripheral you put in there. Pliers work for rotating that nut. Removal will allow us to better get the coax antenna feeds out of the way to access the screw beneath.
Now we have that out of the way, we can remove the screws that actually hold the inner chassis down. There are two screws, one on each side.
Now, to finally release the inner frame, there are two catches, one each side of the case.
You’ll want to push the plastic outwards whilst pulling upward on the USB / Ethernet connectors. Do one side, pulling the frame out by 5mm, then do the other.
Having done this, you’ll find that continuing to pull on these connectors draws the whole assembly out. From here, we can now focus on extracting the PCB from the inner chassis frame. The PCB is fixed by two screws, which the silkscreen helpfully points out.
Undo these two screws, then flip the board over; you’ll find 4 more that bolt the heatsink to the CPU.
These screws are captive like the case screws in the initial step, so just undo each a few turns, move to the next screw and loosen it, etc, and keep going until the PCB comes loose.
The PCB should now come free and you’ll be able to see the CPU and RAM (which are soldered-on).
So, this week mother nature threw South East Queensland a curve-ball like none of us have seen in over a decade: a massive flood. My workplace, VRT Systems / WideSky.Cloud Pty Ltd resides at 38b Douglas Street, Milton, which is a low-lying area not far from the Brisbane River. Sister company CETA is just two doors down at no. 40. Mid-February, a rain depression developed in the Sunshine Coast Hinterland / Wide Bay area north of Brisbane.
That weather system crawled all the way south, hitting Brisbane with constant heavy rain for 5 days straight… eventually creeping down to the Gold Coast and over the border to the Northern Rivers part of NSW.
The result on our offices was devastating. (Copyright notice: these images are placed here for non-commercial use with permission of the original photographers… contact me if you wish to use these photos and I can forward your request on.)
Some of the stock still worked after the flood — the Siemens UH-40s pictured were still working (bar a small handful) and normally sell for high triple figures. The WideSky Hubs and CET meters all feature a conformal coating on the PCBs that makes them robust to water ingress, and the Wavenis meters are potted, sealed units. So it wasn’t all a loss — but there’s a big pile of expensive EDMI meters (Mk7s and Mk10s) that are not economic to salvage due to approval requirements, and that is going to hurt!
Le Mans Motors, pictured in those photos, is an automotive workshop, so would have had lots of lubricants, oils and greases in stock to service vehicles — much of that contamination ended up across the street, so washing it off the surviving stock was the order of the day for much of Wednesday, before demolition day Friday.
As for the server equipment, knowing that this was a flood-prone area (which also by the way means insurance is non-existent), we deliberately put our server room on the first floor, well above the known flood marks of 1974 and 2011. This flood didn’t get that high, getting to about chest-height on the ground floor. Thus, aside from some desktops, laptops, a workshop (including a $7000 oscilloscope belonging to an employee), a new coffee machine (that hurt the coffee drinkers), and lots of furniture/fittings, most of the IT equipment came through unscathed. The servers “had the opportunity to run without the burden of electricity”.
We needed our stuff working, so we needed to first rescue the machines from the waterlogged building and set them up elsewhere. Elsewhere wound up being at the homes of some of our staff with beefy NBN Internet connections. Okay, not beefy compared to the 500Mbps symmetric microwave link we had, but 50Mbps uplinks were not to be snorted at in this time of need.
The initial plan was the machines that once shared an Ethernet switch, now would be in physically separate locations — but we still needed everything to look like the old network. We also didn’t want to run more VPN tunnels than necessary. Enter OpenVPN L2 mode.
Establishing the VPN server
Up to this point, I had deployed a temporary VPN server as a VPS in a Sydney data centre. This was a plain-jane Ubuntu 20.04 box with a modest amount of disk and RAM, but hopefully decent CPU grunt for the amount of cryptographic operations it was about to do.
Most of our customer sites used OpenVPN tunnels, so I migrated those first — I had managed to grab a copy of the running server config as the waters rose, before the power tripped out. I copied that config over to the new server, started up OpenVPN, opened a UDP port to the world, then fiddled DNS to point the clients at the new box. They soon joined.
Next problem was getting the staff linked — originally we used a rather aging Cisco router with its VPN client (or vpnc on Linux/BSD), but I didn’t feel like trying to experiment with an IPSec server to replicate that — so up came a second OpenVPN instance, on a new subnet. I got the Engineering team to run the following command to generate a certificate signing request (CSR):
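The exact command hasn’t survived in this copy of the post, but a typical one-liner that produces a private key and matching CSR (the file names and the CN here are placeholders, not the originals) would be:

```shell
# Generate a fresh 2048-bit RSA key and a CSR for it in one step.
# "alice" is a placeholder -- each engineer substituted their own name.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout alice.key -out alice.req \
    -subj "/CN=alice"
```

The `-nodes` flag leaves the private key unencrypted on disk; drop it if you’d rather protect the key with a passphrase.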
They sent me their .req files, and I used EasyRSA v3 to manage a quickly-slapped-together CA to sign the requests. Downloading them via Slack required that I fish them out of the place where Slack decided to put them (without asking me) and move them to the correct directory. Sometimes I had to rename the file too (it doesn’t ask you what you want to call it either) so it had a .req extension. Having imported the request, I could sign it.
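For reference, the EasyRSA v3 import-and-sign dance looks roughly like this, run from within the CA directory (the name “alice” and the request path are placeholders):

```
./easyrsa import-req /path/to/alice.req alice
./easyrsa sign-req client alice
```

The `sign-req client` subcommand marks the certificate for client use, and writes the result to `pki/issued/alice.crt`.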
A new file pki/issued/theclient.crt could then be sent back to the user. I also provided them with pki/ca.crt and a configuration file derived from the example configuration files. (My example came from OpenBSD’s OpenVPN package.)
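A minimal client config along those lines (the server hostname, port and cipher defaults here are my assumptions — the real values came from that example file) might read:

```
# client.ovpn -- sketch only; remote host/port are assumed, not the originals
client
dev tun
proto udp
remote vpn.example.com 1194
nobind
persist-key
persist-tun
ca ca.crt
cert theclient.crt
key theclient.key
remote-cert-tls server
verb 3
```

The `remote-cert-tls server` line guards against a client certificate being used to impersonate the server.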
They were then able to connect, and see all the customer site VPNs, so could do remote support. Great. So far so good. Now the servers.
Server connection VPN
For this, a third OpenVPN daemon was deployed on another port, but this time in L2 mode (dev tap), not L3 mode. In addition, I had servers on two different VLANs, and I didn’t want to deploy yet more VPN servers and clients, so I decided to try tunnelling 802.1Q. This required boosting the MTU from the default of 1500 to 1518 bytes: 1500 bytes of payload, plus the 14-byte Ethernet header and the 4-byte 802.1Q VLAN tag.
The VPN server configuration looked like this:
keepalive 10 120
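Only the keepalive line survives from the original config in this copy of the post. As a sketch of what the rest of an L2 (tap) server config might have looked like — the port, certificate paths and `server-bridge` choice are my assumptions, not the original values:

```
# Sketch of an OpenVPN L2 server config; everything except
# "keepalive 10 120" is assumed, not taken from the original.
port 1196
proto udp
dev tap                # L2 mode: tunnel raw Ethernet frames
tun-mtu 1518           # 1500 payload + 14-byte Ethernet header + 4-byte 802.1Q tag
server-bridge          # bridged mode; IP addressing handled on the LAN side
ca ca.crt
cert server.crt
key server.key
dh dh.pem
keepalive 10 120
persist-key
persist-tun
```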
In addition, we had to tell netplan to create some bridges: we created a vpn.yaml in /etc/netplan that looked like this:
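The original file isn’t reproduced here; as a sketch (the interface names, bridge name and lack of addressing are my guesses), the bridge definition would be along these lines:

```
# /etc/netplan/vpn.yaml -- sketch; interface and bridge names assumed
network:
  version: 2
  ethernets:
    eno1: {}
  bridges:
    br0:
      interfaces:
        - eno1        # local trunk port carrying the server VLANs
        - tap0        # OpenVPN L2 tunnel endpoint, created at start-up
      dhcp4: no
```

Note the tap device is created by OpenVPN rather than netplan, so it only joins the bridge once the daemon has brought it up.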
Having done this, we had the ability to expand our virtual “L2” network by simply adding more clients on other home Internet connections; the bridges would allow all servers to see each other as if they were connected to the same Ethernet switch.