RAID-5 versus RAID-6
For those who haven't encountered it yet, RAID-6 is RAID-5 with two parity blocks per stripe instead of one; it thus has a two-disk overhead for parity, compared to RAID-5's one-disk overhead.
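To make the parity idea concrete, here is a minimal sketch of how a single parity block lets you rebuild a lost data block. This models only RAID-5's XOR parity (RAID-6's second parity block uses more elaborate math); the tiny four-byte "blocks" and their contents are illustrative assumptions, not anything a real controller uses.

```python
# Single-parity (RAID-5 style) block recovery via bytewise XOR.

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks in one stripe (made up)
parity = xor_blocks(data)            # the stripe's one parity block

# Lose data[1]; rebuild it from the surviving blocks plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Because XOR gives you only one equation per stripe, losing two disks leaves you with more unknowns than equations; that is exactly the gap RAID-6's second, independent parity block closes.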
We're interested in RAID-6 because it is more or less a better version of RAID-5 plus a hot spare. With RAID-5, if you lose a drive, the disk array is non-redundant for the time it takes to rebuild onto your hot spare; if you lose a second disk during this window, the entire array is lost. With large modern disk drives of the sort we use, a rebuild can take many hours, especially if your RAID array is busy with real work at the time.
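The "many hours" claim is easy to check with back-of-the-envelope arithmetic. The numbers here are assumptions for illustration: a 1 TB drive rebuilt at a sustained 50 MB/s, which is optimistic for an array that is also serving real work.

```python
# Rough rebuild-time arithmetic with assumed figures: a 1 TB drive
# rebuilt at a sustained 50 MB/s. A busy array can be far slower.
drive_size_mb = 1000 * 1000        # 1 TB expressed in MB
rebuild_rate_mb_s = 50             # assumed sustained rebuild rate
hours = drive_size_mb / rebuild_rate_mb_s / 3600
print(f"rebuild takes roughly {hours:.1f} hours")  # → roughly 5.6 hours
```

Halve the rebuild rate (quite plausible under load) and you are looking at most of a working day with no redundancy.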
(RAID-6 does pay a penalty on writes, which we think is not too important in our environment. This may be counterbalanced by the fact that it keeps your 'hot spare' drive active, so it can't quietly die on you.)
RAID-6 makes the most sense if you are using all of the drives in a controller in a single large disk array, which is what we generally do. If you're making multiple arrays, RAID-5 plus a single global hot spare may have significantly less overhead (depending on how many arrays you're making).
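The overhead tradeoff above can be sketched as simple disk counting; the array counts chosen are illustrative assumptions. With one array per shelf the two schemes cost the same two disks, but each extra RAID-6 array costs two more parity disks while each extra RAID-5 array costs only one (the global hot spare is shared).

```python
# Disk-counting sketch: parity/spare overhead for N arrays on one shelf.

def raid6_overhead(arrays):
    return 2 * arrays                  # two parity disks per RAID-6 array

def raid5_plus_spare_overhead(arrays):
    return arrays + 1                  # one parity disk each, one global spare

for n in (1, 2, 3):
    print(f"{n} array(s): RAID-6 costs {raid6_overhead(n)} disks, "
          f"RAID-5 + spare costs {raid5_plus_spare_overhead(n)} disks")
# 1 array:  2 vs 2 disks (a wash)
# 3 arrays: 6 vs 4 disks (RAID-5 + global spare pulls ahead)
```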
Unfortunately, few vendors of standalone SAN RAID controllers seem to include RAID-6 (at least in their baseline products; it may be available in high-end gear or as an extra-cost option). Software implementations are more common, with Solaris ZFS's 'raidz2' and Linux's software RAID being two of them.
Sidebar: RAID-5 efficiency figures
For my own interest:
|disks||RAID-5 overhead||RAID-6 overhead|
|12||1 disk (8.3%)||2 disks (16.7%)|
|15||1 disk (6.7%)||2 disks (13.3%)|
Common sizes for standalone SATA RAID controllers are 12 disks (2U) or 15 disks (3U).
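The sidebar's figures are just 1/N and 2/N of the shelf; a quick sketch for the two common controller sizes:

```python
# Parity overhead as a fraction of the shelf: one parity disk for RAID-5,
# two for RAID-6, for the common 12- and 15-disk controller sizes.
for disks in (12, 15):
    r5 = 1 / disks
    r6 = 2 / disks
    print(f"{disks} disks: RAID-5 {r5:.1%}, RAID-6 {r6:.1%}")
# 12 disks: RAID-5 8.3%, RAID-6 16.7%
# 15 disks: RAID-5 6.7%, RAID-6 13.3%
```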