Even for us, SSD write volume limits can matter

June 24, 2022

Famously, one difference between HDDs and SSDs is that SSDs have limits on how much data you can write to them and HDDs mostly don't (which means that SSDs have a definite lifetime). These limits are a matter both of actual failure and of warranty coverage, with the warranty coverage limit generally being lower. We don't normally think about this, though, because we're not a write-intensive place. Sometimes there are surprises, such as high write volume on our MTAs or more write volume than I expected on my desktops, but even then the absolute numbers tend to be low and nowhere near the write endurance ratings of our SSDs.

Recently, though, we realized that we have one place with high write volume, more than high enough to cause problems with ordinary SSDs, and that's on our Amanda backup servers. When an Amanda server takes in backups and puts them on 'tapes', it first writes each backup to a staging disk and then later copies from the staging disk to 'tape' (in our Amanda environment, these are HDDs). If you have a 10G network and fileservers with SATA SSDs, as we do, how fast an ordinary HDD can write data generally becomes your bottleneck. If your fileservers can provide data at several hundred MBytes/sec and Amanda can deliver that over the network, a single HDD staging disk or even a stripe of two of them isn't enough to keep up.
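
To make that arithmetic concrete, here's a small sketch; the throughput figures in it are illustrative assumptions (a round 500 MB/s of incoming backups and 200 MB/s per HDD), not measurements from our environment:

    # Rough, illustrative numbers: assumed values, not measurements from our setup.
    network_mbs = 500      # assumed sustained backup delivery over 10G from SATA SSD fileservers, MB/s
    hdd_write_mbs = 200    # assumed sequential write speed of one ordinary HDD, MB/s

    for hdds in (1, 2, 3):
        staging_mbs = hdds * hdd_write_mbs   # ideal write bandwidth of a stripe of N HDDs
        status = "falls behind" if staging_mbs < network_mbs else "keeps up"
        print(f"{hdds} HDD(s): ~{staging_mbs} MB/s staging vs ~{network_mbs} MB/s incoming -> {status}")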

However, the nature of the work that a staging disk does means that it sees high write volume. Every day, all of your backups sluice through the staging disk (or disks) on their way to 'tapes'. If you back up 3 TB to 4 TB a day per backup server, that's 3 TB to 4 TB of writes to the staging disk. It would be nice to use SSDs here to speed up backups, but no ordinary SSD has that sort of write endurance. Much as you'd have to aggregate a bunch of HDDs to get the write speed you'd need, you'd have to aggregate a bunch of ordinary SSDs just to get each individual drive's share of the writes down to a level it can survive.
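
As a rough sketch of why, assuming an endurance rating of 600 TBW (typical of a consumer 1 TB SATA SSD, not the spec of any drive we actually use):

    # Illustrative sketch: the 600 TBW rating is an assumed figure that is
    # typical of consumer 1 TB SATA SSDs, not a spec for any drive we use.
    daily_writes_tb = 3.5      # roughly 3 TB to 4 TB of backups a day per backup server
    consumer_ssd_tbw = 600     # assumed warranty write endurance, in TB written

    days = consumer_ssd_tbw / daily_writes_tb
    print(f"rated endurance consumed in about {days:.0f} days ({days / 365:.2f} years)")
    # -> about 171 days, well under half a year, versus a typical five year warranty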

(In a way the initial backup to the staging disks is often the most important part of how fast your backups are, because that's when your other machines may be bogged down with making backups or otherwise affected by the process.)

There are special enterprise SSDs with much higher write endurance, but they also come with much higher price tags. For once, this extra cost is not just because the e word has been attached to something. The normal write endurance limits are intrinsic to how current solid state storage cells work; to increase them, either the SSD must be over-provisioned or it needs to use more expensive but more robust cell technology, or both. Neither of these is free.


Comments on this page:

"There are special enterprise SSDs with much higher write endurance, but they also come with much higher price tags."

For anyone curious, a spec to look for is "DWPD": drive writes per day.

So if there is an 800 GB SSD, and it has a '1 DWPD' rating in the spec sheet, you can write 800 GB per day, every day, and have it work for the life of its (e.g.) five year warranty. If a drive has a '5 DWPD' spec, it can write 5*800 = 4000 GB per day, every day, for the five years of its warranty and it will (should?) not be a problem.

Toshiba (to take one example) has SSDs that have values of 25 DWPD.

Of course you can balance things: a 1600 GB SSD with 1 DWPD may have 'equivalent' endurance to an 8000 GB (1600*5) SSD with 0.2 DWPD.
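
As a small sketch of that conversion (assuming the five year warranty period used above):

    # A sketch of the DWPD arithmetic above; the conversion is just
    # capacity x DWPD x days in the warranty period.
    def endurance_tb(capacity_gb, dwpd, warranty_years=5):
        """Total data you can write over the warranty period, in TB."""
        return capacity_gb * dwpd * 365 * warranty_years / 1000

    print(endurance_tb(800, 1))      # 1460.0 TB for an 800 GB drive at 1 DWPD
    print(endurance_tb(800, 5))      # 7300.0 TB at 5 DWPD
    print(endurance_tb(800, 25))     # 36500.0 TB (36.5 PB) at 25 DWPD
    print(endurance_tb(1600, 1))     # 2920.0 TB: 1600 GB at 1 DWPD
    print(endurance_tb(8000, 0.2))   # 2920.0 TB: 8000 GB at 0.2 DWPD, the same endurance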

By Randall at 2022-07-16 11:52:40:

A RAID0 of HDDs might do well on all fronts here. You do need to have the slots for them, and the staging process needs to not be seek-bound, but one way to look at it is that you can add a couple hundred MB/s of sequential bandwidth for whatever a cheap drive costs, and you'll probably get the needed capacity as a side effect of buying enough disks to hit the bandwidth target.

I guess the flipside is that if you can get a single SSD rated for the durability you need, it's simpler.

A nice thing for either HDDs or SSDs is this temp-storage use case doesn't need great durability. Since you can restart a failed backup, you can write to an SSD until it actually quits working, instead of just up to its rated TBW. Not sure if it still holds today, but in some tests a long time ago, the average failure point was way past the rated one.

By cks at 2022-07-18 15:51:40:

We've used RAID0s of HDDs for Amanda holding disks, but we've found that they give us more heartburn in operation than we really want. Losing a holding disk (through loss of a RAID0 member) costs us a night's backups and also requires immediately scrambling to repair the system so that it can do the next set of backups, even if this happens on a weekend or during holidays. Losing a night's backups (or most of them, or some part of them) turns out to be not so great in our environment.

Written on 24 June 2022.