I’ve been working with VRT Systems for a few years now. Originally brought in as a software engineer, my role shifted to include network administration duties.
This of course does not faze me; I’ve done network administration work before, for charities. There are some differences of scale, of course: back then it was a single do-everything box running Gentoo, hosting a Samba-based NT domain for about 5 Windows XP workstations; now it’s about 20 Windows 7 workstations, a Samba-based NT domain backed by LDAP, and a number of servers.
Part of this has been to move our aging infrastructure to a more modern “private cloud” infrastructure. In the following series, I plan to detail my notes on what I’ve learned through this process, so that others may benefit from my insight. At this stage, I don’t have all the answers, and there are some things I may have wrong below.
The first stage with any such network development (this goes for “cloud”-like and traditional structures) is to consider how we want the network to operate, how it is going to be managed, and what skills we need.
Both my manager and I are Unix-oriented people. In my case, I’ll be honest: I have a definite bias towards open source, and I try to assess a solution on technical merit rather than via glossy brochures.
After looking at some commercial solutions, my manager more or less came to the conclusion that a lot of these highly expensive servers are not so magical; fundamentally they are just standard desktops in a small form factor. While we could buy a whole heap of 1U rack servers, we might be better served by more standard hardware.
The plan is to build a cluster of standard boxes, in as small form factor as practical, which would be managed at a higher level for load balancing and redundancy.
Hardware: first attempt
One key factor we wanted to reduce was power consumption. Our existing rack of hardware chews about 1.5kW, and since we want to run a lot of virtual machines, we want them to be as efficient as possible. We wanted a small building block that would handle a handful of VMs and store data across multiple nodes for redundancy.
After some research, we wound up with our first attempt at a compute node:
- Motherboard: Intel DQ77KB Mini ITX
- CPU: Intel Core i3-3220T 2.8GHz Dual-Core
- Storage: Intel 520S 240GB SSD
- Networking: Onboard dual gigabit for cluster, PCIe Realtek RTL8168 adaptor for client-facing network
The plan is that we’d have many of these, pooling their storage in a redundant fashion. The two on-board NICs would be bonded together using LACP, forming a back-end storage network over which the nodes share data. The single PCIe card would be the “public” face of the cluster, connecting it to the outside world using VLANs.
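On Ubuntu 12.04 the LACP bond can be described in /etc/network/interfaces using the ifenslave package. A minimal sketch, assuming the on-board NICs appear as eth0/eth1; the addressing is a hypothetical example:

```
# /etc/network/interfaces fragment: hedged sketch (requires the ifenslave
# package; interface names and addresses are hypothetical examples)
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.100.11
    netmask 255.255.255.0
    bond-mode 802.3ad     # LACP; the switch ports must be configured to match
    bond-miimon 100
    bond-slaves none
```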
For the OS, we threw on Ubuntu 12.04 LTS AMD64, and we ran the KVM hypervisor. We then tried throwing this on one of our power meters to see how much power the thing drew. At first my manager asked if the thing was even turned on … it was idling at 10W.
We loaded it up with virtual machines; eventually I had 6 VMs going on the thing: Linux, Windows 2000, Windows XP and a Windows 2008R2 P2V image for one of our customer projects.
The CPU load sat at about 6.0, and the power consumption did not budge above 30W. Our existing boxes drew 300W each, so in theory we could run 10 of these in the power budget of just one of our old servers.
Running QEMU VMs from bash scripts is all very well, but in this case we need to give non-technical users access to a subset of the cluster for their projects. I hardly expect them to write bash scripts that fire up KVM over SSH.
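For the curious, the scripts in question were not much more elaborate than this sketch (the VM name, disk path and sizing are all hypothetical; this version only prints the command so you can eyeball it first):

```shell
#!/bin/sh
# Hedged sketch of a manual KVM launcher. Prints the command it would run;
# remove the echo to actually launch the VM.
NAME=${1:-testvm}
DISK=/srv/vms/$NAME.qcow2
CMD="kvm -name $NAME -m 1024 -smp 2 \
 -drive file=$DISK,if=virtio \
 -net nic,model=virtio -net tap \
 -vnc :1 -daemonize"
echo "$CMD"
```

Multiply that by half a dozen VMs per node and the appeal of a proper management layer becomes obvious.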
We considered a few options: Ganeti, OpenNebula and OpenStack.
Ganeti looked good, but the lack of a template system and media library let it down for us, and OpenNebula proved a bit fiddly as well. OpenStack is a behemoth, however, and will take quite a bit of research.
One factor stood out like a sore thumb: our initial plan was to have all compute nodes, with shared storage between them. There were a couple of options for doing this, such as having the nodes in pairs with DRBD, or using Ceph or Sheepdog, but by far the most common approach was to have a storage backend on a SAN.
SANs get very expensive very quickly: nice hardware, but overkill and over budget. We figured we could plan for that eventuality should the need arise, but it’d be a later addition. We don’t need blistering speed; if we can sustain 160Mbps of throughput, that’d probably be fine for most things.
Reading the literature, Ceph looked far and away the best choice, but it had a catch: you can’t run Ceph server daemons and Ceph in-kernel clients on the same host. Doing so risks a deadlock, in much the same manner as NFS does when you mount from localhost.
OpenStack actually has 3 types of storage:
- Ephemeral storage
- Block storage
- Image storage
Ephemeral storage is specific to a given virtual machine. It often lives on the compute node with the VM, or on a back-end storage system, and stores data temporarily for the life of a virtual machine instance. When a VM instance is created, new copies of ephemeral block devices are created from images stored in image storage. Once the virtual machine is terminated, these ephemeral block devices are deleted.
Block storage is the persistent storage for a given VM. Say you were running a mail server: your OS and configuration might exist on an ephemeral device, but your mail would sit on a block device.
Image storage is simply raw images of block devices. It cannot be mounted as a block device directly; rather, the storage area is used as a repository which is read from when creating the other two types of storage.
Ephemeral storage in OpenStack is managed by the compute node itself, often using LVM on a local block device. There is no redundancy as it’s considered to be temporary data only.
For block storage, OpenStack provides a service called cinder. This, at its heart, seems to use LVM as well, and exports the block devices over iSCSI.
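For reference, the LVM/iSCSI path needs surprisingly little cinder configuration. A sketch, with the caveat that these are the option names as I understand them from the current documentation, and the volume group name and IP address are hypothetical:

```
# /etc/cinder/cinder.conf fragment: hedged sketch. Assumes an LVM volume
# group called "cinder-volumes" already exists on the node, and that
# 192.168.100.11 is the node's storage-network address.
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
iscsi_helper = tgtadm
iscsi_ip_address = 192.168.100.11
```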
For image storage, OpenStack has a redundant storage system called swift. The basis for this seems to be rsync, with a service called swift-proxy providing a REST interface over HTTP. swift-proxy is very network intensive, and benefits from high-speed networking hardware (e.g. 10Gbps Ethernet).
Hardware: second attempt
Having researched how storage works in OpenStack somewhat, it became clear that one single building block would not do. There would in fact be two other types of node: storage nodes, and management nodes.
The storage nodes would contain largish spinning disks, with software maintaining copies and load balancing between all nodes.
The management nodes would contain the high-speed networking, and would provide services such as Ceph monitors (if we use Ceph), swift-proxy and other core functions. RabbitMQ and the core database would run here for example.
Without the need for big storage, the compute nodes could be downsized in disk, and expanded in RAM. So we now had a network that looked like this:
Compute nodes:
- Motherboard: Intel DQ77KB Mini ITX
- CPU: Intel Core i3-3220T 2.8GHz Dual-Core
- RAM: 2*8GB SODIMM
- Storage: Intel 520S 60GB SSD
- Networking: Onboard dual gigabit for cluster, PCIe Realtek RTL8168 adaptor for client-facing network

Management nodes:
- Motherboard: Intel DQ77MH Micro ATX
- CPU: Intel Core i3-3220T 2.8GHz Dual-Core
- RAM: 2*4GB DIMM
- Storage: Intel 520S 60GB SSD
- Networking: Onboard dual gigabit for management, PCIe 10GbE for cluster communications

Storage nodes:
- Motherboard: Intel DQ77MH Micro ATX
- CPU: Intel Core i3-3220T 2.8GHz Dual-Core
- RAM: 2*4GB DIMM
- Storage: Intel 520S 60GB SSD for OS, 2*Seagate ST3000VX000-1CU1 3TB HDDs for data
- Networking: Onboard dual gigabit for cluster, PCIe Realtek RTL8168 adaptor for management
The management and storage nodes are slightly tweaked versions of what we use for compute nodes. The motherboard is basically the same chipset, but capable of taking larger PCIe cards and using a standard ATX power supply.
Since we’re not storing much on the compute nodes, we’ve gone for 60GB SSDs rather than 240GB SSDs to cut the cost down a little. We might have to look at 120GB SSDs in newer nodes, or maybe look at other options, as Intel seem to have discontinued the 60GB 520S … bless them! The Intel 520S SSDs were chosen due to the 5-year warranty offered.
The management and storage nodes, rather than going into small Mini-ITX media-centre style cases, are put in larger 2U rackmount cases. These cases have room for 4 HDDs, in theory.
For testing purposes, we got two of each node. This lets us test what would happen if a node went belly up (by yanking its power), and test load balancing when things are working properly.
We haven’t bought the 10GbE cards at this stage, as we’re not sure exactly which ones to get (we have a Cisco SG500X switch to plug them into) and they’re expensive.
The final cluster will have at least 3 storage nodes, 3 management nodes and maybe as many as 16 compute nodes. I say at least 3 storage nodes — in buying the test hardware, I accidentally ordered 7 cases, and so we might decide to build an extra storage node.
Each of those gives us 6TB of storage, and the production plan is to load balance with a replica on at least 3 nodes… so we can survive any two going belly up. The disks also push close to 800Mbps throughput, so with 3 nodes serving up data, that should be enough to saturate the dual-gigabit link on the compute node. 4 nodes would give us 8TB of effective storage.
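That arithmetic is easy to sanity-check; a quick sketch using the figures above:

```shell
#!/bin/sh
# Capacity check: 2 x 3TB disks per storage node, 3 replicas of all data.
NODES=4; DISKS_PER_NODE=2; DISK_TB=3; REPLICAS=3
RAW=$((NODES * DISKS_PER_NODE * DISK_TB))
EFFECTIVE=$((RAW / REPLICAS))
echo "raw=${RAW}TB effective=${EFFECTIVE}TB"   # prints: raw=24TB effective=8TB
```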
With so many nodes, though, one problem remains: deploying the configuration and managing it all. We’re using Ubuntu as our base platform, so it makes sense to tap into their technologies for deployment.
We’ll be looking to use Ubuntu Cloud and Juju to manage the deployment.
Ubuntu Cloud itself is a packaged version of OpenStack. The components of OpenStack are deployed with Juju. Juju itself can deploy services either to “public clouds” like Amazon AWS, or to one’s own private cluster using Ubuntu MAAS (Metal As A Service).
Metal as a Service is essentially a deployment system: clients network-boot from the MAAS server, which then installs and configures Ubuntu on them automatically.
The underlying technology is based on a few components: dnsmasq DHCP/DNS server, tftp-hpa TFTP server, and the configuration gets served up to the installer via a web service API. There’s a web interface for managing it all. Once installed, you then deploy services using Juju (the word juju apparently translates to “magic”).
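As I currently understand the docs, the intended workflow looks roughly like the following. The charm names here are my assumption based on the 12.04-era OpenStack charms, and may well be part of what I have wrong:

```
juju bootstrap                        # MAAS allocates a node for the Juju state server
juju deploy mysql                     # core database
juju deploy rabbitmq-server           # message queue
juju deploy keystone                  # identity service
juju deploy nova-cloud-controller
juju add-relation keystone mysql
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju status                           # watch the nodes come up
```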
So, having worked out what hardware will likely be needed, there remain a few things to research.
Firstly, the storage mechanism: we can either go with the pure OpenStack approach, with cinder managing LVM-based storage and exporting it over iSCSI, or we can have cinder manage a Ceph back-end storage cluster. This decision has not yet been made. My two biggest concerns with cinder are:
- Does cinder manage multiple replicas of block storage?
- Does cinder try to load-balance between replicas?
With image storage, if we use Ceph, we have two choices. We can either:
- Install Swift on the storage nodes, partition the drives and use some of the storage for Swift, and the rest for Ceph… with Swift-proxy on the management nodes.
- Install Rados Gateway on the management nodes in place of Swift
But which is the better approach? My understanding is that Ceph doesn’t fully integrate into the OpenStack identity service (called keystone). I need to find out if this matters much, or whether splitting storage between Swift and Ceph might be better.
Metal As a Service seems great in concept. I’ve been researching OpenStack and Ceph for a few months now (with numerous interruptions), and I’m starting to get a picture as to how it all fits together. Now the next step is to understand MAAS and Juju. I don’t mind magic in entertainment, but I do not like it in my systems. So my first step will be to get to understand MAAS and Juju on a low level.
Crucially, I want to figure out how one customises the image provided by MAAS… in particular, making sure it deploys to the 60GB SSD on each node, and not just the first block device it sees.
The storage nodes have their two 6Gbps SATA ports connected to the 3TB HDDs for performance, making those visible as /dev/sda and /dev/sdb; MAAS needs to understand that the disk it should deploy to is called /dev/sdc in this case. I’d also prefer it to use XFS rather than EXT4, and a user called something other than “ubuntu”. These are things I’d like to work out how to configure.
As for Juju, I need to work out exactly what it does when it “bootstraps” itself. When I tried it last, it randomly picked a compute node. I’d be happier if it deployed itself to the management node I ran it from. I also need to figure out how it picks out nodes and deploys the application. My quick testing with it had me asking it to deploy all the OpenStack components, only to have it sit there doing nothing… so clearly I missed something in the docs. How is it supposed to work? I’ll need to find out. It certainly isn’t this simple.