The temptation of LVM mirroring

February 15, 2012

One of the somewhat obscure things that LVM can do is mirroring. If you mention this, most people will probably ask why on earth you'd want to use it; mirroring is what software RAID is for, and then you can stack LVM on top if you want to. Well, yes (and I agree with them in general). But I have an unusual situation that makes LVM mirroring very tempting right now.

The background is that I'm in the process of migrating my office workstation from a pair of old 320 GB drives to a pair of somewhat newer 750 GB drives, and the time has come to move my LVM setup to the 750s (it's currently on a RAID-1 array on the two 320s). There are at least three convenient ways of doing this:

  1. add the appropriate partitions from the 750s as two more mirrors to the existing RAID-1 array. There are at least two drawbacks to this: I can't change the RAID superblock format, and growing the LVM physical volume afterwards so that I can actually use the new space will be somewhat of a pain (there's a sketch of the whole dance after this list).

    (I suppose that a single pvresize is not that much of a pain, provided that it works as advertised.)

  2. create a new RAID-1 on the 750s, add it as a new physical volume, and pvmove from the old RAID-1 physical volume to the new RAID-1 PV (sketched after this list).

    (I did pilot trials of pvmove in a virtual machine and it worked fine even with a significant IO load on the LVM group being moved, which gives me the confidence to think about this even after my bad experience many years ago.)

  3. as above, but set up LVM mirroring between the old and the new disks instead of immediately pvmove'ing to the new disks and using them alone (also sketched after this list).

    (Done with the right magic, this might leave an intact, usable copy of all of the logical volumes behind on the 320 GB drives when I finally break the mirror.)
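
To make the first option concrete, here's a rough sketch of the whole dance (the device names are assumptions for illustration, not my actual setup; /dev/md0 stands in for the current RAID-1, sda1/sdb1 for the 320 GB halves, and sdc1/sdd1 for the new 750 GB partitions):

    # attach the 750 GB partitions and promote them to active mirrors
    mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1
    mdadm --grow /dev/md0 --raid-devices=4
    # once the resync finishes, retire the 320 GB halves
    mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    mdadm --grow /dev/md0 --raid-devices=2
    # only now can the array, and then the physical volume, grow
    mdadm --grow /dev/md0 --size=max
    pvresize /dev/md0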
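
The second option is a shorter sequence (same caveat about assumed names; 'main' stands in for my volume group):

    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    pvcreate /dev/md1
    vgextend main /dev/md1
    # move every extent off the old PV; this runs with the system live
    pvmove /dev/md0
    # then drop the old RAID-1 from the volume group
    vgreduce main /dev/md0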
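
The third option starts the same way, up through the vgextend, but then mirrors each logical volume instead of moving it. One wrinkle is that LVM mirrors want a mirror log somewhere; with only two PVs, the simple way out is an in-memory log. Again, the names here are illustrative:

    # mirror a logical volume onto the new PV
    lvconvert -m1 --mirrorlog core main/somelv /dev/md1
    # later, once the 750s have proven themselves, drop the old half
    lvconvert -m0 main/somelv /dev/md0

(The 'right magic' to leave a usable copy behind on the 320s is presumably something like lvconvert's --splitmirrors option, assuming my version of LVM supports it here; that's one of the things a trial run would have to check.)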

The drawback of the second approach is that if the 750 GB drives turn out to be flaky or have errors, I don't have a quick and easy way to go back to the current situation; I would have to pvmove everything back in the face of disk problems. And, to make me nervous, one 750 has already become flaky after just sitting in my machine for a while.

(I've already changed to having the root filesystem on the new drives, but I have an acceptable fallback for that and anyways it's less important than my actual data.)

The drawback of the third approach is that I would have to trust LVM mirroring, which is undoubtedly far less widely used than software RAID-1. But it's temptingly easier (and better) than just adding two more mirrors to the current RAID-1 array. If it worked without problems, it would clearly be the best answer; it has the best virtues of both of the other two solutions.

(This would be a terrible choice for a production server unless we really needed to change the RAID superblock format and couldn't afford any downtime. But this is my office workstation, so the stakes are lower.)

I suppose the right answer is to do a trial run of LVM mirroring in a virtual machine, just as I did a pilot run of pvmove. The drawback of that is having to wait longer to migrate to the 750s; ironically, a significant reason for the migration is so that I can have more space for virtual machine images.


Comments on this page:

From 192.171.3.126 at 2012-02-16 04:06:28:

I would be inclined to pull out one of the old discs first to keep as a backup in case things go wrong. Then do (2), pvmove'ing off the now-degraded mirror on the remaining old disc to the new discs.

From 138.246.23.243 at 2012-02-16 09:22:46:

I'd have done this 'old-school', by setting up the new RAID/LVM as you want, creating the LV sized as you wish, and then dd'ing the LV from the old RAID to the new RAID and resize2fs/xfs_growfs'ing to the new sizes.

If anything goes wrong during that operation, you still have your old RAID unmodified and all you lost is time.

By cks at 2012-02-16 12:23:47:

Any sort of filesystem copying is not really convenient. I really want to do whatever I do while the system is live, and I want to be sure that no data or changes are lost or corrupted in the process. Any explicit copy (with dd or dump or whatever) requires other activity to be shut down in order to be reliable.

From 24.126.141.58 at 2012-02-16 13:34:29:

I think option 2 is really the way LVM is intended to be used. If you are worried about disk issues, run a full badblocks scan on them before starting the migration. That should be treated as a separate issue.
