Why you might not want to use SSDs as system disks just yet

November 6, 2013

I wrote recently about our planned switch to using SSDs as system disks. This move may be somewhat more daring and risky than I've made it sound so far, so today I feel like running down all of the reasons I know of that you might not want to do this (or at least not do it just yet). Note that not all of this is deeply researched; in fact a bunch of it is taken from ambient stories and gossip that float around.

The current state of hard drives is that they are a mature technology. As a mature technology, the guts are basically the same across all hard drives, regardless of make or manufacturer, and the main 'enterprise' drive upsell is often about the firmware (and somewhat about how you connect to them). As such, the consumer 7200 RPM SATA drive you can easily buy is mostly the same as an expensive 7200 RPM nearline SAS drive. Which is, of course, why so many people buy more or less consumer SATA drives for all sorts of production use.

My impression is that this is not the case for SSDs. Instead SSDs are a rapidly evolving, immature technology, with all that that implies. SSDs are not homogeneous; they vary significantly between manufacturers, product lines, and even product generations. Unlike with hard drives, you can't assume that any SSD you buy from a major player in the market will be decent or worth its price (though on the other hand it can sometimes be an underpriced gem). There are also real and significant differences between 'enterprise' SSDs and ordinary consumer SSDs; the two are not small variants of each other, and ordinary consumer SSDs may not be up to actual production usage in server environments.

You can find plenty of horror stories about specific SSDs out there. You can also find more general horror stories about SSD behavior under exceptional conditions; one example I've seen recently is Understanding the Robustness of SSDs under Power Fault [PDF] (from FAST '13), which is about what it says. Let's scare everyone with a bit from the abstract:

Our experimental results reveal that thirteen out of the fifteen tested SSD devices exhibit surprising failure behaviors under power faults, including bit corruption, shorn writes, unserializable writes, metadata corruption, and total device failure.

Most of their SSDs were from several years ago, so things may be better with current SSDs. Or maybe not. We don't necessarily know, and that's part of the problem with SSDs. SSDs are very complex devices and vendors have every reason to gloss over inconvenient details and (allegedly) to make devices that lie about things to you so that they look faster or healthier.

(It's widely reported that some SSDs simply ignore cache flush commands from the host instead of dutifully and slowly committing pending writes to the actual flash. And we're not talking about SSDs that have supercapacitors so that they can handle power failures.)
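To make the flush problem concrete, here's a minimal sketch of how software normally asks a drive for durability on Unix. This is illustrative only (the test file path is made up, and it's not a real detector of lying drives):

    /* Write some data that is supposed to be durable, then fsync() it.
     * On Linux, fsync() on a file forces the data out of the OS cache
     * and (with barriers/flushes enabled) tells the drive to commit its
     * own write cache, typically via an ATA FLUSH CACHE command.  A
     * drive that ignores that command reports success just as fast as
     * an honest one, so this program can't tell them apart; only
     * pulling the power at the wrong moment can. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/tmp/flush-test";   /* hypothetical test file */
        char buf[4096];
        memset(buf, 0xaa, sizeof(buf));

        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write"); return 1;
        }
        /* The durability request: everything written so far should be
         * on stable storage by the time fsync() returns successfully. */
        if (fsync(fd) < 0) { perror("fsync"); return 1; }
        close(fd);

        puts("write+fsync succeeded; whether it's durable is up to the drive");
        return 0;
    }

The uncomfortable part is that an honest drive and a lying drive look identical to this program; the difference only shows up when the power goes away.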

At a big-picture level none of this is particularly surprising or novel (not even the bit about ignoring cache flushes). We saw the same things in the hard drive industry before it became a mature field, including manufacturers being 'good' or 'bad' and real differences existing between the technology of different manufacturers and between 'enterprise' and consumer drives. SSDs are just in the early stages of the same process that HDs went through in their time.

Ultimately that's the big-picture reason to consider avoiding SSDs for casual use, such as for system drives. If you don't actively need them or really benefit from them, why take the risks that come from being a pioneer?

(This is the devil's advocate position and I'm not sure how much I agree with it. But I put the arguments for SSDs in the other entry.)


Comments on this page:

Have you considered PXE booting into the OS, which is what SmartOS encourages? (Except that their use of local disks is for VM storage, not iSCSI sharing.)

I've done this before for diskless clients, but haven't made the jump into "boot-drive free servers" yet...

By cks at 2013-11-07 11:16:00:

I think that a diskless environment would be more complicated and less resilient than one using local system disks. In some environments it would be more scalable, but we are too small for that. Used for fileservers or iSCSI backends (or both), it would make the PXE boot infrastructure a crucial single point of failure for our entire environment, which would make it a very hard sell.

(I'm not sure how modern PXE booting is intended to work, and specifically whether the booted image is supposed to be self-contained with a ramdisk-based root filesystem or whether it's supposed to mount a root filesystem from somewhere. The former is a complete nonstarter for us for a number of reasons.)

The recommended way I've found (on Linux at least) seems to be to boot a kernel (and possibly an initramfs), then bootstrap with a r/o root on NFS, then run unionfs or similar over the top of it for software that requires a r/w root, and possibly have scripts that pull down machine-specific files onto the union layer.
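Concretely, a rough sketch of what assembling that union root might look like at the mount(2) level, with overlayfs standing in for 'unionfs or similar' (overlayfs wasn't yet in mainline kernels in 2013) and a made-up NFS server address and paths:

    /* Sketch of the union-root assembly an initramfs might do: a shared
     * read-only NFS root, a tmpfs for this machine's local changes, and
     * an overlay combining the two.  A real initramfs would then
     * switch_root into /mnt/root.  Error handling is minimal; needs
     * root privileges and a kernel with overlayfs to actually run. */
    #include <stdio.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    int main(void)
    {
        mkdir("/mnt/ro", 0755);
        mkdir("/mnt/rw", 0755);
        mkdir("/mnt/root", 0755);

        /* The read-only root that every client shares. */
        if (mount("198.51.100.10:/exports/root", "/mnt/ro", "nfs",
                  MS_RDONLY, "nolock,vers=3,addr=198.51.100.10") < 0) {
            perror("mount nfs"); return 1;
        }

        /* RAM-backed scratch space for per-machine local changes. */
        if (mount("tmpfs", "/mnt/rw", "tmpfs", 0, "size=256m") < 0) {
            perror("mount tmpfs"); return 1;
        }
        mkdir("/mnt/rw/upper", 0755);   /* the writable layer */
        mkdir("/mnt/rw/work", 0755);    /* overlayfs scratch directory */

        /* The union: reads fall through to NFS, writes land in tmpfs. */
        if (mount("overlay", "/mnt/root", "overlay", 0,
                  "lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work") < 0) {
            perror("mount overlay"); return 1;
        }

        puts("union root assembled at /mnt/root");
        return 0;
    }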

The single point of failure argument is still there for network connectivity, but beyond that it's less of an issue. The NFS root is just a r/o set of files on disk, so replicating them to secondary/tertiary servers is trivial. And DHCP/PXE aren't that difficult to set up for HA, either by splitting DHCP ranges across servers or by chainloading scriptable PXE firmware like iPXE, which can deal with downed servers (see the snippet below).
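For example, a tiny iPXE script (hostnames are made up) that falls back to a second boot server when the first one is down:

    #!ipxe
    # Try the primary boot server; fall back to the secondary; if both
    # fail, drop to the iPXE shell for manual intervention.
    chain http://boot1.example.com/boot.ipxe || chain http://boot2.example.com/boot.ipxe || shell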

I'm not specifically advocating this (it makes more sense on non-crucial user-facing endpoints, where the biggest win is "I only have to apply updates in one place"), but it's interesting to think about.

By cks at 2013-11-13 16:55:27:

A for-the-record pointer: I wrote more about my view of this in NetbootingViews.

For NFS root specifically, note that replicating filesystems is not good enough for failover of mounted NFS filesystems. I wrote up the reasons in an old entry on why highly available NFS requires shared storage.
