Disk enclosures versus 'all in one case' designs

August 21, 2013

There are two basic choices if you want a decent number of disks attached to a server (such as for iSCSI backends): you can use a basic generic server with an external disk enclosure, or you can use a fancy server that has all of those disk bays integrated. When we set up our iSCSI backends we used the first option, partly for historical reasons. I've recently been thinking about this issue again for our next generation of hardware, so I want to write down my thinking on the tradeoffs.

The general advantage of the external disk enclosure approach is that you get independence and isolation. Your choice of server is not tied to what you can get with enough drive bays, and similarly your choice of disk enclosure is not constrained by what options the server vendors choose to make available. This especially matters for servers: if a server model goes out of production you don't really care; get another generic server and you're likely good to go. Disk enclosures may be a bit more of a problem, but even they are getting pretty generic. Separate enclosures can also simplify the spares situation a lot, especially if you buy servers in bulk. They also simplify swapping dead hardware around.

(This is especially so if the disk enclosures are the parts most likely to die. A modern server has annoying dependencies on its specific physical hardware, but a disk enclosure is usually generic: pull the disks out of one, stick them all in another, and probably nothing will notice. We have magically recovered backends this way.)

The advantages of an all-in-one-case design are less obvious, but essentially they come down to having one case instead of two. This means fewer points of failure (for example, you have only one power supply that has to keep working instead of two), fewer external cables to cause heartburn, and less physical space and fewer power connectors required (it may also mean less power needed). It can also mean that you pay less. The potential savings are especially visible if you are assembling basically the same parts and just deciding whether to put them in one case or two.
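(To put rough, purely illustrative numbers on the points-of-failure argument: assume that a single non-redundant power supply fails in a given year with probability p and that failures are independent. A two-case setup needs both supplies working, which makes it a series system:

P(\text{outage}) = 1 - (1 - p)^2 = 2p - p^2 \approx 2p \quad \text{for small } p

With an assumed p of 2% a year, the two-case setup suffers an outage about 3.96% of the time versus 2% for the all-in-one case; splitting into two cases roughly doubles your exposure, and the same argument applies to fans, backplanes, and external cables.)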

(In theory you should always pay less because you're buying one less case. In practice there are a lot of confounding effects, including that vendors mark up what they consider to be bigger servers and often have relatively cheap basic 1U servers. You can try to build your own custom bigger server using something like the guts of such a 1U server, but you probably can't get its components anywhere near as cheaply as a big vendor can. I wouldn't be surprised if the economics sometimes work out such that you're getting the case and power supply for almost free.)

I don't think one option is clearly superior to the other until you start to get into extreme situations. With very few disks or very many, I think that all-in-one-case designs start winning big. At the low end, buying a second chassis for a few disk slots is absurd and expensive. At the high end you have the problem of connecting to all of those disks externally with decent bandwidth (and you start hating the two-case price penalty if you're trying to be very cheap).
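(To make the high-end bandwidth problem concrete with illustrative numbers: a 6 Gbit/s SAS-2 lane carries roughly 600 MB/s after encoding overhead, so a typical x4 wide external port tops out around 2400 MB/s. An assumed enclosure of 24 disks streaming at an assumed 150 MB/s each wants more than that:

24 \times 150 \text{ MB/s} = 3600 \text{ MB/s} > 4 \times 600 \text{ MB/s} = 2400 \text{ MB/s}

So you either need multiple external links, and an HBA and enclosure that can drive them, or you accept that you can't get full streaming bandwidth from all disks at once. Inside a single case, the same disks can hang off several internal HBAs without this bottleneck.)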
