Reconsidering external disk enclosures versus disk servers

September 18, 2013

Not even three months ago I confidently wrote about all of the good reasons why we were picking external disk enclosures over disk servers. Today I'm here to tell you why we've flip-flopped on that and are now planning to buy disk servers instead. What it boils down to is money.

(I could claim it was also the uncertainties over SATA disks behind SAS expanders, but not really; I only started reading seriously about those issues after we'd made the decision.)

What started the ball rolling was that we found reasonably affordable motherboards with onboard dual 10G-T Ethernet ports (and SAS). These were pretty much the only affordable way of doing 10G in our next generation of fileserver hardware. However, going with this motherboard meant no more generic inexpensive servers; instead we'd have to spec out a case and all of the other parts ourselves. This basically meant that we had a 'one chassis or two' choice: we could buy a case for the motherboard and then a second case as an external disk enclosure, or we could buy just one case and put both the motherboard and the disks into it. Using a single case will save us a significant amount per backend, and it turned out that we could find a suitable case (in fact one with a near-ideal disk configuration).

I still believe in all of the merits of external disk enclosures that I wrote about in my original entry. But until we can get inexpensive generic servers with dual 10G-T (and SAS), all of those merits are trumped by budget practicalities. We can deal with the moderate downsides.

(There are also some upsides, such as fewer exterior cables to get snagged and accidentally yanked loose. I'm always a bit nervous when I'm behind our current fileserver racks because of all of the eSATA cables.)

(Would I still buy external disk enclosures if we had the budget for it? I'm honestly not sure. Those advantages are real but I'm not convinced that they're worth the cost, especially when compared to other things you could do with the same amount of money. If I had endless money, yes definitely; we'd use SAS disks in external SAS JBODs connected to generic servers with dual 10G-T onboard.)


Comments on this page:

By Eugeniu Patrascu at 2013-09-18 02:43:13:

If money is the only thing you're considering, you can always go and order a custom server enclosure like the ones the folks at backblaze.com made and put lots of disks inside it to serve files. You will have space to throw in a few SSDs for the ZFS cache.

By James (trs80) at 2013-09-18 05:30:08:

So you'll be going with 1Gb switches now and upgrading to 10GBASE-T later on? Given your physical split of networks and switches, you can't amortise getting a bulk lot of 10Gb ports and splitting them amongst all your devices ...

Out of hardware geek curiosity, do you have a link to the "reasonably affordable motherboards with onboard dual 10G-T Ethernet ports (and SAS)" that you mentioned here?

By cks at 2013-09-18 11:50:42:

The specific motherboard we're looking at is the SuperMicro X9DRH-7TF (and there's a cheaper version without SAS, the X9DRH-iTF). I think there are a number of other people making motherboards based on the Intel C602 chipset with onboard dual 10G-T; this is just the one we found first.

(In fact, now that I look, SuperMicro has a whole series of X9* motherboards with different variants of this and that.)

Our current 10G-T plans are to use 10G-T switches for our two iSCSI networks and maybe to have a core 10G-T switch for our main network that at least the fileservers will connect to. We think this is affordable for us and provides a path to grow 10G-T outward.

(Even if no individual machine can talk to the fileservers at greater than 1G, a 10G link for the fileservers allows for aggregate >1G traffic; for example, ten clients each reading at 1G could collectively fill a fileserver's 10G link.)

As for custom server enclosures and enclosures with lots of disks: we don't want to put too many disks in any one machine for various reasons (16 is about our comfortable maximum, and in fact we're likely to split that into 12 HDs and 4 SSDs for ZIL), and we're not operating anywhere near the scale where truly custom hardware is something we can do.
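
(For illustration only: one plausible ZFS layout for that 12 HD + 4 SSD split would be six mirrored pairs of data disks plus the SSDs as two mirrored ZIL (log) devices. With purely hypothetical device names, and glossing over the fact that our pools actually sit on iSCSI-exported disks from the backends, it might look something like:

    # six mirrored pairs of data disks, plus the four SSDs
    # as two mirrored separate log (ZIL) devices
    zpool create tank \
        mirror disk0 disk1 mirror disk2 disk3 mirror disk4 disk5 \
        mirror disk6 disk7 mirror disk8 disk9 mirror disk10 disk11 \
        log mirror ssd0 ssd1 mirror ssd2 ssd3

This is just a sketch of the general shape of such a pool, not our actual configuration.)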
