Monday 31 August 2009

Home network rebuild, part two: Virtual Hosts

Having previously got my new fileserver/NAS box up and running, I've now moved on to splitting the roles of my old monolithic server out into several virtual servers.

Virtual Servers
The first server (not actually virtual) hosts the basic network infrastructure required to get everything else running: NFS for various filesystems (not least of all home dirs), AoE for network-attached disk space, DNS, DHCP and tftp for booting clients, and NTP for network time. On top of that I'll probably set up networked syslog there too.
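
The DHCP/tftp side of booting clients boils down to handing out leases and pointing PXE clients at the tftp server. A rough sketch, assuming ISC dhcpd and with made-up addresses and filenames rather than my actual config:

    # /etc/dhcp3/dhcpd.conf (fragment) -- example addresses, not my real ones
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.199;
        option routers 192.168.1.1;
        option domain-name-servers 192.168.1.2;
        next-server 192.168.1.2;       # tftp server for network boot
        filename "pxelinux.0";         # boot loader served over tftp
    }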

The second server is the phone system -- Asterisk. It's going onto a virtual machine of its own because having functioning phones is important; I'll probably install it, configure it and then keep the updates to a bare minimum.

Then we get onto the two fun boxes. The first is the main, stable server: it runs Debian lenny (stable) and hosts mail (exim and dovecot), apache, subversion, squid, network monitoring, music streaming, printing, and the like. The second runs Debian squeeze (though it'll probably become sid at some point) and is for more bleeding-edge stuff.

The idea is that if the version in lenny isn't up to my needs, instead of back-porting or installing individual packages from unstable as I used to, I can simply install it on the unstable box. It's also intended to be a bit more of a play area for ideas. If I actually find myself needing the unstable version of dovecot, say, I'll probably spin it off onto its own box.

There are a few remaining areas which are going to present a problem. One is MythTV. It requires hardware (the tuner cards) and often bleeding-edge releases of various things. At the moment virtualizing it doesn't seem like a win, so it will stay on the old server for the time being and eventually, perhaps, be shifted onto a spare Via M10000 board I have, powered up and down for recordings with wake-on-lan packets. We'll see.

Implementation
The two physical servers I have are the new Atom 330-based NAS box and my old dual-core Athlon 3800 box. Each has a couple of gigs of memory and a gigabit ethernet connection. The Atom box won't support kvm but kqemu seems to run fine on it. In fact, I've found the version of kvm I'm running (kvm85 on linux-2.6.30 amd64) slightly unstable and prone to crashing and locking up, so I'm using qemu (with kqemu) on both servers at the moment.

I've handled networking by using bridging. For storage, having experimented with approaches a bit, I've gone with creating an LVM logical volume for the basic operating system of each host and presenting that to the VM as /dev/hda; any data partitions (e.g. mail stores, home directories) are separate logical volumes accessed from within the host over AoE.
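
In outline, a guest gets started along these lines (the volume, bridge and MAC details here are invented for the example, and the exact flags vary a little between qemu versions):

    # Create a root volume for the guest and start it with bridged networking
    # (assumes a bridge br0 and an /etc/qemu-ifup script that adds tap0 to it)
    lvcreate --size 10G --name mailhost-root vg0
    qemu-system-x86_64 -m 512 \
        -hda /dev/vg0/mailhost-root \
        -net nic,macaddr=52:54:00:12:34:01 \
        -net tap,ifname=tap0,script=/etc/qemu-ifup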

Having the data partitions available to all hosts over AoE while they're up, rather than having to attach them to one machine as virtual devices, makes managing them easier. I did play with booting the whole virtual machine over AoE using pxegrub, but the complexity of managing the vmlinuz and initrd images externally to the host running them (they have to be external so they can be served up by TFTP) outweighed any benefits.
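
The AoE part is pleasantly simple. Roughly, with illustrative volume names and shelf/slot numbers:

    # On the storage box: export a logical volume as AoE shelf 1, slot 1 on eth0
    vbladed 1 1 eth0 /dev/array/mailstore

    # On the virtual host using it: load the aoe driver and mount the device
    modprobe aoe
    aoe-stat                           # should list e1.1
    mount /dev/etherd/e1.1 /var/mail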

Since switching from kvm to kqemu, the virtual machines seem to be up and running and performing well enough for the tasks they have. The original server is being steadily stripped of its responsibilities and services, and once I've built an Asterisk virtual machine it will be taken down, stripped of its disks and rebuilt as a minimal physical host for running kvm/qemu images. Plus that inconvenient MythTV server, of course.


Future Plans
The main thing I'm planning to get around to at some point is sorting out live migration of virtual machines. I'd like the hosts to all sit on the low-powered Atom server for much of the time but, as other machines come up, migrate them over. So, for example, when I fire up my quad-core Athlon box it'd be nice to have the two main servers migrate transparently onto it for increased speed. With wake-on-lan configured and an appropriate shutdown script to migrate them back, this would mean I could have CPU power on demand without actually having to deal with host downtime.
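
I haven't set any of this up yet, but the basic mechanics with qemu/kvm go something like the following (hostnames and ports are invented, and it relies on both physical machines seeing identical storage for the guest, which is where the AoE-hosted volumes ought to help):

    # On the machine the guest is moving to: start an identical instance
    # that sits waiting for the incoming machine state
    qemu-system-x86_64 -m 512 -hda /dev/etherd/e1.2 -incoming tcp:0:4444

    # On the machine it's moving from, in the qemu monitor:
    migrate -d tcp:fast-box:4444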

A variation on this would be to have the various desktop machines around the house boot into a virtual-hosting shell and grab their (or any other) virtual machine from the network. I could then power off my desktops for much of the time without losing session state (including open network connections) and remote machine access.

Why?
None of this seems like an immediately obvious Big Win, and in truth a large part of the reason for doing it is for its own sake, so that I have up-to-date, hands-on experience of these things.

All that said, there are real advantages I'm already seeing at this early stage. The virtual machines are all hosted on LVM volumes. When I perform an upgrade I can first snapshot the volume and, if it all goes wrong, roll back to the previously working state. There have been times when I've really wished I could have done that with a buggy upgrade.
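
In practice that's just a snapshot before the upgrade, along these lines (names and sizes are illustrative; the copy-back assumes the guest is shut down first):

    # Before the upgrade: snapshot the guest's root volume
    lvcreate --snapshot --size 2G --name mailhost-pre-upgrade /dev/vg0/mailhost-root

    # If all is well afterwards, drop the snapshot
    lvremove /dev/vg0/mailhost-pre-upgrade

    # If it all went wrong, copy the old state back over the origin
    dd if=/dev/vg0/mailhost-pre-upgrade of=/dev/vg0/mailhost-root bs=4M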

Also, having the main parts of my network infrastructure on virtual hosts means that I can upgrade them, repartition them and generally carry out, from the comfort of a laptop in the garden, all the sorts of low-level maintenance which previously meant crawling about in the attic swearing at scratched CD-ROMs and buggy BIOSes.

For a home network, virtualization is perhaps not as exciting a prospect as in a large data centre and the advantages of scale are lost, but there are still some more mundane advantages that I'm looking forward to.

Home network rebuild, part one: Storage

Over the past couple of months (a period dictated more by lack of time than anything technical) I have been undertaking some fairly major rebuilding of the house server infrastructure.

Background
Up until July we had a server in the attic with a terabyte of RAID 5 storage (Linux software RAID on SATA disks) which ran virtually everything except the routing tasks, which were moved onto an OpenWRT-based Netgear WGT634u some time ago.

In July I needed to upgrade the disk space and took the opportunity to build a new server and explore the possibility of using ATA over Ethernet (AoE) and moving some of my services onto virtual hosts. The server I bought was a TranquilPC BBS2 server, which is a low-power, Atom 330-based thing with three hot-swappable SATA drive bays and Gigabit Ethernet. Being an Atom it won't support things that require AMD-V or VT-x extensions (e.g. kvm) but it should be good for lightweight kqemu-based virtual machines. To this machine I added three 1TB Western Digital WD10EADS drives.

The BBS2 is a nice piece of kit originally intended for use as a Windows Home Server. It runs Linux very nicely though, and I went about setting it up as a Debian server. The idea was that it'd have a 1GB RAID 1 root partition mirrored across two drives and the rest as a chunk of RAID 5 LVM space. If I had been bothered about performance I could have sliced it into RAID 0, RAID 1 and RAID 5 chunks for different purposes, but it's not something I really care about at home.
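
Roughly speaking the layout amounts to this (partition names are illustrative, and the Debian installer will happily build it all for you anyway):

    # 1GB RAID 1 root mirrored across the first two drives...
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # ...and the rest of all three drives as RAID 5, given over to LVM
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
    pvcreate /dev/md1
    vgcreate array /dev/md1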

BBS2 Installation
Debian pretty much installed out of the box, complete with my LVM and RAID requirements. I booted from a flash stick and everything worked fine until I rebooted, when it couldn't find a bootable device. I booted into rescue mode and accidentally installed a boot block to the flash stick, which at least got me up and running.

I burned several hours trying to get it to boot from the internal disks because grub is known to be slightly finicky about the BIOS order of devices versus the operating system order, and I wondered if I'd mucked it up. But no, it seems that the problem is simply that the SII3124-hosted hot-swap disks aren't bootable. Damn.

For a while I booted it off the USB stick with a grub install, but for the longer term I got a compact-flash-to-SATA adapter and a spare 128MB CF card and stuck that in as a bootable disk. It seems to work, and having yanked out each of the RAID disks in turn, the box still boots. Success!

I set up 5GB LVM volumes for /usr and /var and a 100GB home partition which I mounted as /srv/nfs/home. Most of the machines around the house already use autofs to mount /home from either /var/export/home on the local machine or over NFS. It seems /srv has become an official part of the FHS since I last looked, so I'm switching from /var/export to /srv/nfs. Finally, I set up a /srv/backup for my rsync/hardlink-based online backup system.
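
The NFS and autofs side of that amounts to something like the following (the server name and network here are placeholders, not necessarily what I'm actually using):

    # /etc/exports on the server
    /srv/nfs/home   192.168.1.0/24(rw,async,no_subtree_check)

    # /etc/auto.master on each client
    /home   /etc/auto.home

    # /etc/auto.home -- mount each user's home directory from the server
    *   -fstype=nfs,rw,hard,intr   nas:/srv/nfs/home/&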

The idea for this box is that, as a low-power server, it will be on all the time, hosting the basic network services I need to boot anything else -- disk space (NFS and AoE), DHCP, DNS, tftp and NTP. Everything else can live on virtual machines hosted either on the BBS2 or, if necessary, on a more high-powered machine.

Initial tests seem to show that the disks manage real-world performance (on top of RAID 5 and LVM) of ~30MB/s, which, though a bit disappointing, is good enough for my purposes. For those interested, the hdparm -t results for various devices are:

Raw hard disk (/dev/sda): ~75MB/s
Raw RAID 1 (/dev/md0): ~70MB/s
Raw RAID 5 (/dev/md1): ~35MB/s
LVM/RAID 5 (/dev/array/test): ~28MB/s

I'm actually quite surprised by the loss in performance from raw disk to RAID 5, and that might be something I investigate at some point. I don't actually need the performance to be better but it bothers me.

Edit: The problem was that the RAID 5 array had lost a disk. It wasn't rebuilding but the performance still suffered, interestingly. Rebuilding occurred at ~30MB/s when I kicked it off and the performance this morning is ~85MB/s for both raw RAID 5 and LVM/RAID 5.
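
For anyone hitting the same thing: the degraded array is obvious from /proc/mdstat, and getting the dropped disk back in is a one-liner (device names illustrative):

    # A degraded RAID 5 shows a missing member here, e.g. [3/2] [UU_]
    cat /proc/mdstat

    # Re-add the dropped disk to kick off the rebuild, then watch progress
    mdadm /dev/md1 --add /dev/sdc2
    watch cat /proc/mdstat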

Disclaimer: I have nothing to do with TranquilPC. When I started thinking about this project I liked the idea of something prebuilt for a change, particularly given I could shop based on whole-system power consumption that way. I did a quick search for low-powered servers and TranquilPC came up as highly rated, quite cheap, small and independent and, while not officially supporting Linux, at least very open-minded about it. So far I'm quite happy with my purchase.

Sunday 5 July 2009

Joomla Weekend

I've got some project work coming up involving Joomla. I've never used it (or indeed any of these newfangled web CMSes) so I thought I'd play with it.

Many a contractor would save this for Monday and charge for the initial learning too, but in this case it seemed like a fun thing to do. More importantly, I want to be able to discuss the requirements with the client with something of an idea of what I'm talking about and some chance of being able to predict how long the work will take.

Joomla seems quite flexible and produces pleasing and consistent results very easily, but it also seems quite cumbersome to use compared with raw HTML. Making a page (menu item) seems like a lot of effort compared with what I'm used to. Then again, if you really are after the bloggy, latest-articles-first sort of auto-layout, it's largely a one-off operation. If you want more fine-grained control over which articles are linked where, you seem to be fighting it a little more.

The extension API seems fairly straightforward, though the terminology and structure are a little counter-intuitive to me and the tutorials and howtos I found seemed heavy on the "Do this and you'll see this result" approach. I prefer: "Do this and Joomla will do this, then this, and this to give you this result." If you want to code something very similar to the example that's one thing, but if you need to do something a little bit different from what's out there you need to know the hows and whys, not just the what-do-I-copies.

All in all though, fun stuff. I should probably look at Drupal at some point too for a different perspective, but not until this project is done and dusted.

Sunday 21 June 2009

New Toys

Over the last couple of days I've started to find my interest in technical geekery coming back. For a long while computers were my day job and as such they ceased to be an interesting hobby. Well, after nine months away from working with them I'm starting to find green shoots of technical interest showing through again.

Over the last few days I've been playing with various new things, and for my own amusement I'm going to try to keep track of them with the newtoys tag on blog posts.

First up, Awesome: A modern, fast, configurable window manager for X Windows, based primarily around the tiling paradigm but with a Lua-based configuration engine which allows you to do pretty much whatever you want with it. It's certainly slick and small, and I'm pleased with my initial setup of it and am looking forward to playing with it more.

Lua: A lightweight, extensible, embeddable scripting language designed for customising and extending the behaviour of applications and tools. You wouldn't (please) write an application in it from scratch, but if you expose the relevant parts of your internal API to it, users can customise things in ways that you didn't think of.

Asterisk Caller ID support: Finally got around to tracking down the problem that has left me without working Caller ID on my Asterisk PBX. It turns out it was a (known) bug in the Sipura-3102 firmware. Unfortunately Sipura got bought by Linksys, who then got bought by Cisco. Cisco are the company I least like dealing with when it comes to bugs and firmware. Their procedure for downloading anything (including the release notes) makes signing up for a bank account look easy. I mean, really. And on top of that, their Web site is buggy. It took me over an hour to persuade it to give me the firmware, which finally arrived as a Windows executable that refused to work in Wine. Luckily I eventually found some help on the 'net, extracted the firmware and uploaded it to the Sipura by hand. It was really easy once I had the information -- the firmware needs to be on an HTTP or TFTP server and you can type the address into the Sipura's http://device/upgrade?... URL.

Anyway. The upshot of that is that the nuisance calls we frequently got are now officially a thing of the past. The PBX answers the phone and plays a recording of the Monty On The Run theme music to our unwanted caller.

On the back of the Asterisk success I decided to put some code into the Asterisk dialplan to notify my Jabber account when people were calling. That was easy, but still, it's not quite what I wanted. So I've coded up a multicast messaging service in Python with a CLI tool to send a message and a libnotify-based client for displaying the result. Now we're getting somewhere.

Well... with all this libnotify fun, maybe I could get my Facebook notifications popping up there too, instead of requiring me to log in and clear them manually. So I've been playing with the Facebook API. Unfortunately that's led to some grumbling, because their Terms of Service aren't Open Source friendly (you are required to keep your application key secret, which you can't do with a freely distributable application, Open Source or not). It makes sense for a hosted app, but not for a "desktop" app. They also, I notice, don't open up access to your inbox, and the much-vaunted Jabber-compatible Chat interface has gone silent too. It looks to me like they're trying to keep their crown jewels to themselves here for fear of losing visitors.

But all in all it does feel like my geek streak is starting to come back again.

Divorcing The GNOME

GNOME and I have got on quite well since it came into existence in the late '90s. I had always thought that an interface combining the best of the command line and GUI approaches ought to be possible and for a time it seemed to be going well.

Unfortunately, GNOME and I have separated over irreconcilable differences, summed up by this page:
"Once upon a time, Gnome provided a way to enable Emacs-style keyboard
shortcuts ... Unfortunately, in trying to simplify the Gnome interface (for better or worse), this option has been removed."

This is a story that comes up over and over again: GNOME removing configurability in the name of ease-of-use.

GNOME used to be a powerful GUI built around Unix-friendly principles such as configurability, but more and more it's tending towards the One Size Fits All approach. Though you can still get at a lot of the configuration by hacking around with gconf, knowing what you're doing is getting harder. That makes fewer people change the defaults, which means fewer developers find the problems caused by deviating from the defaults, which in turn makes it even less appealing to customise things.

For many years I was a happy Sawfish user. I had a nicely customised set of window management rules, shortcuts and utility functions that helped make me very productive. A few months back an upgrade over-ruled my choice of window manager, and attempts to change it back just unearthed a bunch of strange behaviours caused partly by assumptions that other code was making. Compiz was pretty configurable so I stuck with it for a while, but I finally snapped when I ended up typing my GPG password into a dialogue that popped up and stole my keyboard focus. The problem of new windows stealing focus is one that I never solved in Compiz.

Yesterday I replaced the vast majority of my GNOME-driven setup with one based on the Awesome window manager. In a few hours I learned the Lua language that Awesome uses for configuration, disabled the tiling features that I didn't want (even though tiling is the number one feature on Awesome's list, you can just configure it out if you try), added some functions I missed from my Sawfish setup, and I now have a slick and extensible setup which, as an added benefit, makes my computer feel significantly faster.

I understand the appeal of uniformity. I understand that if everyone does things the same way it makes many things easier. But diversity and flexibility need to be treasured too. I should be able to mould my desktop environment around how my mind works at its most efficient, not vice versa. We need tools that make dealing with diversity easy rather than tools that try to make it go away.

I still respect the work the GNOME team have done on many levels, but I feel that in their rush for mass acceptance they are being too eager to sacrifice some of the things that made their alternatives to Windows so appealing.