Splitting a mirrored ZFS pool in ZFS on Linux

December 20, 2019

Suppose, not hypothetically, that you're replacing a pair of old disks with a pair of new disks in a ZFS pool that uses mirrors. If you're a cautious person and you worry about issues like infant mortality in your new drives, you don't necessarily want to switch from the old disks to the new ones immediately; you want to run them in parallel for at least a little while. ZFS makes this very easy, since mirror vdevs aren't limited to two devices (a four-way mirror is no problem) and you can just attach devices to add extra mirror copies and then detach devices later (illustrated just below). Eventually it will come time to stop using the old disks, and at this point you have a choice of what to do.
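As a sketch, using the hypothetical pool and disk names I'll use later in this entry (a pool 'maindata', old disks oldA and oldB, new disks newC and newD), attaching the new disks alongside the old ones is just:

zpool attach maindata oldA newC
zpool attach maindata oldA newD

Each 'zpool attach' adds one more device to whatever mirror vdev the named existing device (here oldA) is part of, so the mirror goes from two-way to four-way.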

The straightforward thing is to drop the old disks out of the ZFS mirror vdev with 'zpool detach', which cleanly removes them (and they won't come back later, unlike with Linux software RAID). However, this is a little bit wasteful, in a sense. Those old disks have a perfectly good copy of your ZFS pool on them, but when you detach them you lose any real possibility of using that copy. Perhaps you would like to keep that data as an actual backup copy, just in case. Modern versions of ZFS can do this by splitting the pool with 'zpool split'.

To quote the manpage here:

Splits devices off pool creating newpool. All vdevs in pool must be mirrors and the pool must not be in the process of resilvering. At the time of the split, newpool will be a replica of pool. [...]

In theory the manpage's description suggests that you can split a four-way mirror vdev in half, pulling off two devices at once in a 'zpool split' operation. In practice it appears that the current 0.8.x version of ZFS on Linux can only split off a single device from each mirror vdev. This meant that I needed to split my pool in a multi-step operation.

Let's start with a pool, maindata, with four disks in a single mirrored vdev, oldA, oldB, newC, and newD. We want to split maindata so that there is a new pool with oldA and oldB. First, we split one old device out of the pool:

zpool split -R /mnt maindata maindata-hds oldA

Normally the newly split-off pool is not imported (as far as I know), and you certainly don't want it imported if your filesystems have explicit 'mountpoint' settings (because then filesystems from the original pool and the split-off pool will fight over who gets to be mounted there). However, you can't add devices to an exported pool and we need to add oldB, so we have to import the new pool under an altroot. I use /mnt here out of tradition, but you can use any convenient empty directory.
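As far as I can tell, the '-R /mnt' on the 'zpool split' above is what gets the new pool imported under that altroot in one step. If you split without it, my understanding is that you can get the same result afterward with something like:

zpool import -R /mnt maindata-hds

since 'zpool import -R' imports a pool with the given altroot.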

With the pool split off, we need to detach oldB from the regular pool and attach it to oldA in the new pool so that the new pool is actually mirrored:

zpool detach maindata oldB
zpool attach maindata-hds oldA oldB

This will then resilver the new maindata-hds pool onto oldB (even though oldB already has an almost exact copy). Once the resilver is done, you can export the pool:

zpool export maindata-hds
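If you want to be sure the resilver has actually finished before you run the export, 'zpool status' on the new pool will tell you; with the example pool name here, that's simply:

zpool status maindata-hds

It shows the resilver's progress while it's running and reports when it has completed.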

You now have your mirrored backup copy sitting around with relatively little work on your part.

All of this appears to have worked completely fine for me. I scrubbed my maindata pool before splitting it, just in case, but I don't think I bothered to scrub the new maindata-hds pool after the resilver. It's only an emergency backup pool anyway (and it gets less and less useful over time, as it diverges more and more from the live pool).
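If you do want to scrub the new pool, it has to be imported at the time; a quick sketch with the names from this entry would be:

zpool scrub maindata-hds

followed by 'zpool status maindata-hds' to watch how it's going.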

PS: I don't know if you can make snapshots, split a pool, and then do incremental ZFS sends from filesystems in one copy of the pool to the other to keep your backup copy more or less up to date. I wouldn't be surprised if it worked, but I also wouldn't be surprised if it didn't.
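If it does work, I'd expect the rough shape of it to be something like the following untested sketch, where a recursive '@base' snapshot is made on maindata before the split (so both pools wind up sharing it) and the backup pool is imported under an altroot whenever you want to bring it up to date:

zfs snapshot -r maindata@base
[split the pool and set it up as above]
zfs snapshot -r maindata@sync1
zfs send -R -i @base maindata@sync1 | zfs receive -u -F maindata-hds

The '-u' on 'zfs receive' is there to keep the received filesystems from being mounted, for the same mountpoint-fighting reasons as before.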
