External disk enclosures versus disk servers
We're finally looking at renewing our fileserver infrastructure because the hardware is five years old and we're running a version of Solaris that is now laughable. As part of that we're revisiting decisions we made in the original version to see if we still like them. One of those decisions is the issue of how we attach a fair number of data disks to a server.
Generally you have two decent options. You can either get external disk enclosures and connect them to a generic server somehow (these days with your choice of eSATA or SAS), or you can get server cases with room for ever-increasing numbers of disks (cabled up to the motherboard somehow). Our current iSCSI backends are built with external disk enclosures, but other groups here use disk servers and have had plenty of good experience with them. I believe that disk servers are generally cheaper, although I haven't looked at the numbers.
(The reasonably famous Backblaze Storage Pod design is an example of a disk server.)
Our current plan is to continue with some variety of external disk enclosures, for a few reasons:
- Flexibility in that external disk enclosures can be used with any
server you like (provided that the server can take an appropriate
interface card). If the server model you like becomes unavailable
you can switch to another and in an emergency you can connect an
enclosure to almost anything.
As part of this, external disk enclosures may well have a longer lifetime. A disk server is strongly tied to its motherboard, and motherboards become obsolete within a few years. An external disk enclosure can be used (and reused) with many generations of servers without problems.
- It's easier to fix many hardware failures. External disk enclosures
  are anonymous components, so if one fails for some reason you can
  put your spare into place, swap the disks in, and you're done.
Dealing with a failure in integrated hardware is more complicated
because there are non-anonymous parts of it (like Ethernet ports
with specific MAC addresses).
(This may be a bias due to our experience; we've mostly had disk enclosures fail, not servers. Maybe a disk server would be as reliable as our servers, not our disk enclosures.)
- Less has to change if you have some servers with a different number
of data disks. We have some servers with a few SSDs instead of a
number of HDs; the only real difference is what sort of external
disk shelf they use (and external disk shelves are anonymous).
Among other things, this makes the spares situation easier.
- What we want in total disk count doesn't seem to fit well in disk servers. We're looking at 12 to 16 data disks (ideally the latter) plus two system disks, and we want all of them to be hot-swappable. This doesn't match what I could find in disk servers; the next jump after 16 hot-swappable bays is 24, which is too many for us in a single server.
Our overall feeling is that using external disk enclosures in our current environment has been a not insignificant win and that we value the flexibility and so on that they've given us.
(My impression is that disk servers use somewhat less physical space. This is currently not an issue for us.)