How writes work on ZFS raidzN pools, with implications for 4k disks

November 4, 2013

There is an important difference between how ZFS handles raidzN pools and how traditional RAID-5 and RAID-6 systems work, a difference that can have serious ramifications in some environments. While I've mentioned this before, I've never made it explicit and clear, so it's time to fix that.

In a traditional RAID-5/6/etc system, all stripes are full width, ie they span all disks (in fact they're statically laid out and you can predict which disks are data disks and which are parity disks for any particular stripe). If you write or rewrite only part of a stripe, the RAID system must do some variant of a read-modify-write cycle, updating at least one data disk and N parity disks. In ZFS, stripes are variable size and hence span a variable number of disks (up to the full number of disks for data plus parity). Layout is variable and how big a stripe is depends on how much data you're writing (up to the dataset's recordsize). To determine how many disks a given data block write needs, you basically divide the size of the data by the fundamental sector size of the vdev (ie its ashift) to get a number of sectors, which then get spread across the data disks, quite possibly wrapping around to put more than one sector on a disk once the write gets big enough. There are no in-place updates of existing stripes.
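(To make the arithmetic concrete, here's a rough Python sketch of the sizing logic as I've described it. The function name and the model are mine, and real ZFS allocation has extra details that this deliberately ignores.)

    import math

    def raidz_write_sectors(data_bytes, ashift, data_disks, nparity):
        """Rough model of the sizing described above (not real ZFS code)."""
        sector = 2 ** ashift
        # Data is cut into sectors and spread across the data disks.
        data_sectors = math.ceil(data_bytes / sector)
        # Each 'row' of the resulting stripe needs one parity sector on
        # each parity disk; rows > 1 means the write wrapped around.
        rows = math.ceil(data_sectors / data_disks)
        return data_sectors, rows * nparity

    # eg a 16kb write to a 4+1 raidz1 vdev with 512-byte sectors (ashift=9):
    # 32 data sectors over 4 disks is 8 rows, so 8 parity sectors.
    print(raidz_write_sectors(16 * 1024, 9, data_disks=4, nparity=1))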

(This leads to the usual ZFS sizing suggestion for how many disks should be in a raidzN vdev. Basically you want a full block to be evenly divided over all of the data disks, so with the usual 128kb recordsize you might want 8 disks for the data plus N disks for the parity. This creates even disk usage for full-sized writes.)
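(As an illustration of why even division matters, here's the arithmetic for a full 128kb block on a hypothetical 8+2 raidz2 vdev with 4k sectors; the specific numbers are mine, not from any particular real pool.)

    # A full 128kb record on an 8+2 raidz2 vdev with 4k sectors:
    record, sector, data_disks, nparity = 128 * 1024, 4096, 8, 2
    data_sectors = record // sector        # 32 sectors, 4 per data disk
    rows = data_sectors // data_disks      # 4 full rows, nothing left over
    parity_sectors = rows * nparity        # 8 parity sectors
    print(parity_sectors / data_sectors)   # 0.25, the nominal 2-out-of-8 overhead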

In the days of disks with 512-byte physical sectors it didn't take much data being written to use all of the vdev's disks; even a 4kb write could be sliced up into eight 512-byte chunks and thus use eight data disks (plus N more for parity). You might still have some unevenness, but probably not much. In the days of 4k sector disks, things can now be significantly different. In particular, if you make a 4kb write it takes one 4kb sector on one disk for the data and then N more 4kb sectors on other disks for the parity. If you have a raidz2 vdev and write only 4kb blocks (probably as random writes), you will write twice as many sectors for parity as for data, for a write amplification ratio for your data of 3 to 1 (you've written 4kb at the user level, the disks write 12kb). Even a raidz1 vdev has a 2x write amplification for 4kb random writes.
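(Here is the same arithmetic for 4kb random writes on 4k sector disks, worked out as a small sketch; again this is my own simplified model, not anything taken from ZFS itself.)

    # 4kb random write with ashift=12 (4k sectors): one data sector per write.
    data_sectors = 1
    for nparity in (1, 2):                  # raidz1, raidz2
        total = data_sectors + nparity      # one parity sector per parity disk
        print("raidz%d: %dkb written for 4kb of data (%dx amplification)"
              % (nparity, total * 4, total))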

(What may make this worse is that I believe that a lot of ZFS metadata is likely to be relatively small. On a raidzN vdev using 4k disks, much of it may not use all disks and thus suffer some degree of write amplification.)

The short way to put this is that in ZFS the parity overhead varies depending on your write blocksize. And on 4k sector disks it may well be higher than you expect.

There are some consequences of this for 4k sector drives. First, the larger your raidzN vdevs are (in terms of disks), the larger the writes you need in order to use them all and reduce the actual overhead of parity. Second, if you want to minimize parity overhead it's important to evenly divide data between all disks. If you roll over, using two 4k sectors for data on even one disk, ZFS needs two 4k sectors for parity on each parity disk. Since in real life your writes are probably going to be of various different sizes (and then there's metadata), 4k sector disks and ashift=12 will likely have higher parity overheads than 512b sector disks, and in general higher than what you'd expect from RAID-5/6/etc.
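(To illustrate the roll-over effect, here's a small worked example on a hypothetical 8+2 raidz2 vdev with ashift=12; going even one 4k sector past an even split doubles the parity for that write.)

    import math

    sector, data_disks, nparity = 4096, 8, 2    # hypothetical 8+2 raidz2, ashift=12
    for write in (32 * 1024, 36 * 1024):        # an even write, and one sector over
        data_sectors = math.ceil(write / sector)
        rows = math.ceil(data_sectors / data_disks)   # one disk with 2 sectors forces 2 rows
        print("%dkb write: %d data + %d parity sectors"
              % (write // 1024, data_sectors, rows * nparity))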

I don't know if this makes ZFS raidzN less viable these days. Given its read performance issues, raidzN was probably always best suited to slow(er) bulk data storage outside of special situations.


Comments on this page:

One thing to note regarding metadata - there is a new pool version 29 with "RAID-Z/mirror hybrid allocator", and it has been out for some time now. Essentially it mirrors metadata in RAID-Z pools (it's an N-way mirror to match the RAIDZ level). This greatly improves read performance in environments with lots of small files, etc. This is available in Solaris 11.

By Paul Tötterman at 2014-09-23 03:37:52:

Looks like a small write won't use up space on all drives in a raid-z vdev: http://blog.delphix.com/matt/2014/06/06/zfs-stripe-width/
