The advantages of separate machines for separate things

June 23, 2010

Sometimes it seems that system administration goes in cycles. Right now the cycle is moving back towards consolidation of services on fewer machines, so I want to talk about the advantages of using separate machines (whether virtual or physical) for different services, instead of putting them all on the same machine with various degrees of clever tricks.

(The genesis of this entry was a comment on this entry, talking about how one could use one machine instead of two to do the job I was tackling.)

First off, it is usually simpler to configure the machines. This is especially so if you need two instances of the same system, such as a mailer or a web server; many system setups are simply not designed with this in mind and require a bunch of changes to work right (and missing a change can be problematic). Running only one system instance of a service is the common case that everything is designed for, so you might as well go with the flow; it's easier.

Second, it gives you isolation and independence; when you do things to the underlying system environment, it affects as few services as possible. The obvious case is taking a machine down or rebooting it, where if you have a bunch of services on the machine you need a time that is acceptable to have all of them down (at once). Similarly, if you're planning an OS upgrade or change you need to have all of the services ready to go on the new OS instead of just one of them (and you can't do a split upgrade, keeping some services on the old OS version and moving others to the new one).

This also implies that your one machine needs to be configured for the union of what all of its services require. This sounds abstract, so I'll give a concrete example: do you need to mount all of your NFS filesystems? Our main mail machine has to have user home directories mounted from our fileservers and use our full /etc/passwd file, but the spam forwarding machine does not. As a result, the spam forwarding machine is almost entirely decoupled from our fileserver infrastructure.
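To make the union-of-requirements point concrete, here is a sketch of what the difference could look like in /etc/fstab (the hostnames and paths are hypothetical, not our actual setup):

```
# mail machine: must mount user home directories from the fileservers
fileserver1:/export/homes   /homes   nfs   rw,hard,intr   0 0

# spam forwarding machine: no NFS mounts at all, so a fileserver
# outage or fileserver maintenance doesn't touch it
```

On a consolidated machine you would have to carry the mail machine's mounts (and its full /etc/passwd) even for services that never need them.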

(Our fileserver infrastructure does much more than just plain NFS service, but that's another entry.)

And of course you get fault isolation; if something goes wrong on one machine, it only takes down one service instead of a whole bunch of them. Here, things going wrong can be anything from system crashes to CPU fan failures to someone accidentally nudging the network cable out when they were in the machine room doing something else.

Sometimes the services really are so tightly coupled that you would never use the freedom that one-service-per-machine isolation gives you. But in my experience this is the rare case; far more common are situations where some services are easier to interrupt than others.

Virtualization is not a cure-all for these issues; if anything it can make some of them worse, because it can concentrate a lot of machines and services onto one physical piece of hardware and one host OS. You can do better, but it gets expensive.

(From personal experience, doing anything to the host machine for a bunch of virtual machines is a pain in the rear if you don't have easy failover.)


Comments on this page:

From 83.64.115.202 at 2010-06-23 02:39:21:

I think this cycle is also greatly influenced by the non-existent licensing fees for your OS. If you were to use a per-machine/per-installation licensed OS, you'd probably mix some services on machines to save some cost. (This holds true for any proprietary OS as well as Enterprise Linux.)

 -c
From 92.236.77.29 at 2010-06-25 15:23:47:

Of course, putting services on separate physical machines makes the failures more likely, despite the failure "only" affecting one service.

If you take it to an extreme and have one (pair of) servers instead of 100 servers, say, then you've only got one set of disks and power supplies, motherboards etc to fail rather than any out of the 100. (Please do correct me if my very poor statistics knowledge is failing me here)
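The commenter's intuition can be sanity-checked with a quick calculation (a sketch only; the 2%-per-year per-machine failure probability is an assumed illustrative figure, and it treats failures as independent):

```python
# Probability that at least one of n machines fails, given an
# (assumed, illustrative) per-machine failure probability p.
def p_any_failure(p, n):
    return 1 - (1 - p) ** n

p = 0.02  # hypothetical 2%/year chance a given machine fails

print(p_any_failure(p, 1))    # one consolidated server: 0.02
print(p_any_failure(p, 100))  # 100 separate servers: ~0.87
```

So with 100 machines, *some* failure in a year is close to certain, but as the commenter notes each one takes down only a single service instead of everything.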

VM server software need not be particularly expensive, and if you're using some kind of configuration management then you'd be able to move services around and duplicate them or whatever pretty easily. I was looking at using KVM and Ganeti as a free and open source VM for a replicated pair of servers, but VMware ESX is so widely-used and the Starter packs so inexpensive that it makes sense to use the paid-for ESX versions (at least until we get beyond the starter packs and into serious licensing costs...)

