ZFS spare-N spare vdevs in your pool are mirror vdevs
Here's something that comes up every so often in ZFS and is not as well publicized as perhaps it should be (I most recently saw it here). Suppose that you have a pool, there's been an issue with one of the drives, and you've had a spare activate. In some situations, you'll wind up with a pool configuration that may look like this:
    [...]
    wwn-0x5000cca251b79b98    ONLINE       0     0     0
    spare-8                   ONLINE       0     0     0
      wwn-0x5000cca251c7b9d8  ONLINE       0     0     0
      wwn-0x5000cca2568314fc  ONLINE       0     0     0
    wwn-0x5000cca251ca10b0    ONLINE       0     0     0
    [...]
What is this spare-8 thing, beyond 'a sign that a spare activated here'? This is sometimes called a 'spare vdev', and the answer is that spare vdevs are mirror vdevs.
Yes, I know, ZFS says that you can't put one vdev inside another vdev and these spare-N vdevs are inside other vdevs. ZFS is not exactly wrong, since it doesn't let you and me do this, but ZFS itself can break its own rules and it's doing so here. These really are mirror vdevs under the surface, and as you'd expect they're implemented with exactly the same code in the ZFS kernel.
(If you're being sufficiently technical these are actually a slightly different type of mirror vdev, which you can see being defined in vdev_mirror.c. But while they have different nominal types they run the same code to do various operations. Admittedly, there are some other sections in the ZFS code that check to see whether they're operating on a real mirror vdev or a spare vdev.)
What this means is that these spare-N vdevs behave like mirror vdevs. Assuming that both sides are healthy, reads can be satisfied from either side (and will be balanced back and forth as they are for mirror vdevs), writes will go to both sides, and a scrub will check both sides. As a result, if you scrub a pool with a spare-N vdev and there are no problems reported for either component device, then both the old and the new device are fine and contain a full and intact copy of the data. You can keep either (or both).
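To make that concrete, here's a sketch of what checking and then cleaning up such a spare-N vdev might look like, using a hypothetical pool name 'tank' and the device names from the example output above; which device you detach decides whether the spare goes back to being available or takes over permanently.

    zpool scrub tank          # reads and verifies both sides of spare-8
    zpool status tank         # confirm neither component device reported errors

    # Keep the original disk; the spare should go back to being available:
    zpool detach tank wwn-0x5000cca2568314fc

    # Or keep the (former) spare as the permanent replacement instead:
    # zpool detach tank wwn-0x5000cca251c7b9d8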
As a side note, it's possible to manually create your own spare-N vdevs even without a fault, because spare activation is actually a user-level thing in ZFS. Although I haven't tested this recently, you generally get a spare-N vdev if you do 'zpool replace <POOL> <ACTIVE-DISK> <NEW-DISK>' and <NEW-DISK> is configured as a spare in the pool. Abusing this to create long term mirrors inside raidZ vdevs is left as an exercise for the reader.
(One possible reason to have a relatively long term mirror inside a raidZ vdev is if you don't entirely trust one disk but don't want to pull it immediately, and also have a handy spare disk. Here you're effectively pre-deploying a spare in case the first disk explodes on you. You could also do the same if you don't entirely trust the new disk and want to run it in parallel before pulling the old one.)
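Here's a sketch of what that manual activation might look like, with a hypothetical pool 'tank' and made-up disk names (and again, I haven't tested this recently on modern ZFS versions):

    # Make sdf a configured spare for the pool:
    zpool add tank spare sdf

    # Manually 'replace' a still-active disk with the configured spare;
    # this should leave you with a spare-N vdev containing both disks:
    zpool replace tank sdc sdf
    zpool status tank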
PS: As you might expect, the replacing-N vdev that you get when you replace a disk is also a mirror vdev, with the special behavior that when the resilver finishes, the original device is normally automatically detached.
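For comparison, an ordinary replacement (again with hypothetical names) goes through the same sort of machinery:

    # sdg here is a new disk that is not configured as a spare:
    zpool replace tank sdc sdg

    # While the resilver runs, status should show a replacing-N vdev with
    # both sdc and sdg; when it finishes, sdc is normally detached for you.
    zpool status tank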