An aside on RAID-5/RAID-6 and disk failures
In light of yesterday's entry, one might sensibly ask why RAID-5 (or RAID-6) systems don't try harder to limp along after enough disks fail. In my view, the short answer is that the RAID systems are lazy.
Most RAID-N systems do not bother trying to verify data on read; if the disk does not report a read error, they just hand the data back to you. Thus, even after losing enough disks to become nominally non-functional, they could still return some data on reads. They would have to fail reads that landed on the missing disks, and reads where a surviving disk reported an error, but they could satisfy all other reads with no worse guarantees than they usually provide.
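To make concrete why "limping along" is mechanically possible, here is a toy sketch of a RAID-5 stripe with XOR parity. The names and layout are my own invention, not any real implementation: with one disk gone a missing block can be rebuilt from the rest, and with two gone only the reads that land on the failed disks actually have to fail.

```python
# Toy RAID-5 stripe: data blocks plus one XOR parity block.
# A hypothetical sketch, not how any real RAID implementation works.

def xor_blocks(blocks):
    """XOR equal-sized byte blocks together (RAID-5 parity)."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def read_block(stripe, idx):
    """Read one block from a stripe; None marks a failed disk.

    With a single failure the missing block is rebuilt by XORing
    everything else together. With two or more failures, only reads
    that land on surviving disks can still succeed -- which is
    exactly the 'limp along' behaviour a lazy RAID-5 refuses to offer.
    """
    if stripe[idx] is not None:
        return stripe[idx]                      # plain read, no verification
    survivors = [b for i, b in enumerate(stripe) if i != idx]
    if any(b is None for b in survivors):
        raise IOError("two disks gone: block is unreconstructable")
    return xor_blocks(survivors)                # rebuild from parity

data = [b"AA", b"BB", b"CC"]
stripe = data + [xor_blocks(data)]   # three data disks plus parity

stripe[0] = None                     # lose two disks: nominally fatal
stripe[3] = None

print(read_block(stripe, 1))         # b'BB': still perfectly readable
try:
    read_block(stripe, 0)
except IOError as e:
    print(e)                         # the one read that must fail
```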
(ZFS is an exception, because as discussed previously it does verify checksums on reads and needs all of the blocks of a stripe in order to do so. This means that if you lose enough disks on a ZFS raidzN, everything suddenly has unverifiable checksums and ZFS must fail all reads.)
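The ZFS difference can be sketched the same way. This is hypothetical code, not ZFS's actual logic, and it deliberately elides raidz's own parity reconstruction; the point it illustrates is just that when the checksum covers the whole logical block, a verified read needs every piece of the stripe and cannot hand back partial, unverified data.

```python
import hashlib

def verified_read(pieces, expected):
    """Hypothetical ZFS-style verified read.

    pieces: the stripe's blocks, with None marking a failed disk.
    expected: the stored checksum of the whole logical block.
    """
    if any(p is None for p in pieces):
        # Unlike a plain RAID-5, we can't return the surviving pieces:
        # without all of them there is nothing to verify the checksum
        # against, so the entire read must fail.
        raise IOError("cannot verify checksum: stripe is incomplete")
    data = b"".join(pieces)
    if hashlib.sha256(data).digest() != expected:
        raise IOError("checksum mismatch")
    return data

pieces = [b"AA", b"BB", b"CC"]
checksum = hashlib.sha256(b"".join(pieces)).digest()

print(verified_read(pieces, checksum))       # b'AABBCC'
pieces[0] = None
try:
    verified_read(pieces, checksum)
except IOError as e:
    print(e)                                 # every read now fails
```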
While you could wave your hands about it, RAID-N systems in this state would pretty much have to fail all writes because they now have nowhere to put parity information. You could argue that they could merely fail writes that would wind up on now-gone disks on the basis that this is no worse than allowing writes to a degraded RAID-N (any read errors or disk losses are unrecoverable in either case), but I think that this is pushing it.
So, RAID-N systems could do better; they just don't bother, because it's simpler to give up immediately. This laziness is probably boosted by the fact that even if the RAID-N continued to return some data on reads, the filesystem on top of it is probably toast anyway, since as discussed yesterday very few filesystems cope well when a significant amount of their storage becomes corrupt or unreadable.
(This is the kind of entry that I write at least partly for myself since it forces me to work through the logic.)