How to do a very cautious LVM storage migration

May 28, 2012

A while back I wrote about how I was tempted by LVM mirroring when I wanted to migrate my LVM setup from a RAID mirror on some old disks to a new RAID mirror on some new(er) disks. Because I am some peculiar combination of cautious and daring, I gave in to this temptation recently. Now that the migration has more or less finished, it's time I reported on how it went and how to do this.

The short summary is that using LVM mirroring to migrate my LVM volume group from disk to disk worked without problems, but the next time I need to do this I will probably just use pvmove, because establishing the actual mirrors was achingly slow and the whole process was kind of a tedious pain in the rear. I don't know if pvmove would be faster, but I can hope.

(The mirrors seemed to perform decently once they were synchronized. But initial synchronization of about 250 GB of data took literally days, and it was not disk speed limited; LVM never drove the disks at full bandwidth or full IOPS rates.)

There are two advantages of using LVM mirroring instead of pvmove and I used both of them. First, you can run for a while on both the new storage and the old storage at the same time, to build up confidence in the new storage. Second, you can preserve a complete and usable copy of all of your data on the old storage, a copy that you can inspect, mount, and so on if you wind up having to. With pvmove, your data just moves; you wind up only on the new storage and there's nothing left on the old storage.
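For contrast, the pvmove version of this migration is much shorter. Here is a sketch of what I believe it looks like, with /dev/OLD as the old storage and /dev/NEW as the new storage (untested by me, so treat it as an outline rather than a recipe):

```shell
# Bring the new storage into the volume group.
pvcreate /dev/NEW
vgextend vg0 /dev/NEW
# Move every allocated extent off the old storage. pvmove runs
# while filesystems are mounted and can be restarted if interrupted.
pvmove /dev/OLD /dev/NEW
# Afterwards the old PV is empty and can be removed outright.
vgreduce vg0 /dev/OLD
pvremove /dev/OLD
```

The flip side of this brevity is exactly what the paragraph above says: at the end there is nothing left on the old storage to fall back on.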

I read a number of writeups of how to do LVM mirroring on the web, but I found all of them to be a little bit unclear (partly because the logic of when you specified which disk device wasn't always clear). So here are the annotated steps that I used. First, let's say that the old disk space you're migrating away from is /dev/OLD and the new disk space is /dev/NEW, and you're migrating the LVM volume group vg0 with the single volume vg0/data, mounted on /data. Then:

  1. Initialize /dev/NEW as an LVM physical volume:
    pvcreate /dev/NEW
  2. Add it to the volume group:
    vgextend vg0 /dev/NEW
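    Before mirroring anything, it doesn't hurt to sanity-check that both physical volumes are now in the volume group and that the new one has enough free space. A quick sketch:

```shell
# Show each PV, which VG it belongs to, and how much of it is free.
pvs -o pv_name,vg_name,pv_size,pv_free
# Summarize the volume group itself.
vgs vg0
```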

  3. Mirror each volume/filesystem to the new storage:
    lvconvert -m1 --mirrorlog mirrored --alloc anywhere vg0/data /dev/NEW

    This is the step that takes forever, and you have to repeat it for each filesystem (I did not try to lvconvert multiple volumes at once, I did them one at a time).

    It's possible that you will not need '--alloc anywhere'; leave it out the first time to see (if you do need it, LVM will report that it can't find space to put stuff). The important arguments are -m1, which tells lvconvert to create a mirror (on /dev/NEW, because that's the physical volume we specified) and --mirrorlog mirrored which tells it to create a (mirrored) persistent on-disk log of what bits of the mirror are in sync.

    If I was doing this again I might just use --mirrorlog disk, because as it happens LVM put both of my mirror log mirrors on /dev/NEW for its own inscrutable reasons (it's possible that --alloc anywhere influenced this). I didn't let this worry me because the whole situation was temporary and /dev/NEW was itself a mirrored RAID array, so it was already pretty reliable.

    (It's possible that a non-mirrored mirrorlog would speed things up.)
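    While the initial synchronization is grinding along, you can watch how far it has gotten; lvs has a copy_percent field (displayed as Copy% or Cpy%Sync depending on your LVM version). Something like:

```shell
# Re-display the mirror's sync progress every minute.
watch -n 60 'lvs -a -o lv_name,copy_percent,devices vg0'
```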

  4. Verify that everything looks good:
    lvs -a -o+devices

    What this should show is that vg0/data now has four internal subvolumes. The _mimage_N subvolumes are the actual mirrors (the original volume you started with and the mirror on the new storage), one on each of /dev/OLD and /dev/NEW, and you'll also have two additional subvolumes for the mirror log (ideally one on each disk, but see above).

    At this point you can run with full mirroring for as long as you want in order to build up confidence in the new disk(s). Once you're fully happy with them, it's time to complete the migration by splitting off the old disks.
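    If the full 'lvs -a -o+devices' output is too noisy, you can ask lvs for just the fields that matter here. A sketch:

```shell
# For each volume and internal subvolume, show its attributes and
# which PVs its extents actually live on.
lvs -a -o lv_name,lv_attr,devices vg0
```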

  5. Split apart each volume, leaving the live version on the new disk and creating a new volume that is the data on the old disk. I read that this apparently goes better if the filesystem is unmounted at the time, so that's how I did it:
    umount /data
    lvconvert --splitmirrors 1 -n data-o vg0/data /dev/OLD
    mount /data

    The -n data-o gives the volume name of the 'new' volume (ie, the name you want for the original volume on the original disk). We specify /dev/OLD here to tell lvconvert that it should act on the mirror side that is on /dev/OLD.

    If you run 'lvs -a -o+devices' afterwards, you should see that all of those internal subvolumes have disappeared and you now have two volumes; vg0/data should be entirely on /dev/NEW and vg0/data-o should be entirely on /dev/OLD.
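    If you want extra reassurance, this is the point where you can actually inspect the preserved copy of your data on the old storage, as mentioned earlier. A cautious sketch (assuming the split-off volume came up active, and using /mnt as a scratch mount point):

```shell
# Read-only filesystem check of the old copy; -n makes no changes.
fsck -n /dev/vg0/data-o
# Mount it read-only somewhere harmless and look around.
mount -o ro /dev/vg0/data-o /mnt
ls /mnt
umount /mnt
```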

  6. After doing this for each filesystem you have one volume group using both /dev/OLD and /dev/NEW but all of your live volumes are on /dev/NEW; all of the volumes on /dev/OLD are unused. The final step is to split apart the volume group itself into two, the live one on /dev/NEW and a second volume group that is just all of the old volumes on /dev/OLD.

    First, we need to make all of the volumes on /dev/OLD inactive:

    lvchange -an vg0/data-o

    This should complete without complaints because none of these volumes should be in use; they should all be quiescent, unmounted, and so on.

    Then we can split the volume group itself:

    vgsplit vg0 vg0-o /dev/OLD

    Here vg0-o is the name of the 'new' volume group, ie the old copy of the data on the old storage. We specify /dev/OLD to tell vgsplit to act on the volumes (and physical volume and so on) on /dev/OLD.

    Running 'lvs -a -o+devices' should now show two volume groups, with vg0 using only /dev/NEW and vg0-o using only /dev/OLD.

After this is done you can decommission vg0-o at your leisure. I haven't gotten around to doing that since I haven't quite reached the point where I want to physically remove the old disks (I still have my boot partition on them, partly because I need to figure out which physical SATA plug on the motherboard actually is sda, sdb, and so on).

(I don't know if you can just disconnect the disks without doing anything special in LVM. That would be the ideal way to do it since it would preserve vg0-o and its volumes completely intact for any future need, but LVM might get upset when you reboot your machine because a volume group it expects isn't there.)
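My understanding is that the LVM-approved way to handle this is to export the volume group before pulling the disks, which tells LVM not to expect it to be present. A sketch, untested by me:

```shell
# Deactivate all volumes in the old VG, then mark it as exported.
vgchange -an vg0-o
vgexport vg0-o
# If the disks ever come back and the data is needed again:
#   vgimport vg0-o
#   vgchange -ay vg0-o
```

This should preserve vg0-o and its volumes intact on the old disks while keeping LVM from complaining about them at boot.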


Comments on this page:

From 70.26.85.161 at 2012-05-28 18:47:39:

I was unaware that LVM actually did mirroring: I thought it only tied together block devices at a higher layer up the stack.

This post caused me to do some digging, and there is one supposed disadvantage to LVM mirroring over MD mirroring:

LVM is not safe in a power failure, it does not respect write barriers and pass those down to the lower drives.

hence, it is often faster than MD by default, but to be safe you would have to turn off your drive’s write caches, which ends up making it slower than if you used write barriers.

http://deranfangvomende.wordpress.com/2011/04/04/linux-lvm-mirroring-comes-at-a-price/

That was written about a year ago, so I'm not sure if it's still valid now (May 2012). There were some write barrier fixes in Linux kernel 2.6.33, but I'm having some problem understanding the various bug reports and weblog commentary on exactly what got fixed (MD soft raid, dm devices, LVM2?).

By cks at 2012-05-28 23:47:11:

I don't think that LVM does good mirroring; I trust MD far, far more (even LVM over MD, although your link now makes me wonder). My LVM mirroring was strictly as a temporary migration measure so that I could run the old and the new disks in parallel at the same time, and I consider it more of a hack than anything else. I would never rely on LVM mirroring for actual data redundancy.

(As I mentioned in the original entry about the temptation, the underlying storage for both the old PV and the new PV is a MD RAID-1 mirror. Although this doesn't really help if LVM is screwing up barriers and I have a power failure.)

By Alex DiMarco at 2013-09-02 15:20:42:

Good Article....

One point: after Linux kernel 2.6.31, write barriers are supported.

From Wikipedia: http://en.wikipedia.org/wiki/Logical_Volume_Manager

Until Linux kernel 2.6.31,[1] write barriers were not supported (fully supported in 2.6.33). This means that the guarantee against filesystem corruption offered by journaled file systems like ext3 and XFS was negated under some circumstances.[2]

By plgs at 2015-01-03 20:46:34:

Note that since about September 2013, lvm2 mirroring incorporates dm_raid mirroring using the 'raid1' mirror segment type (selected with 'lvconvert --type raid1', though it's now the default mirror type). This means you no longer need the '--corelog' and/or '--mirrorlog disk/core/mirrored' options, since the raid1 segment type always stores its logs (in fact, metadata subvolumes) on disk on the same PVs as the LV being mirrored.

By Court at 2016-10-19 16:39:01:

I've done both pvmove and lvm mirroring to migrate to new disks. I don't remember pvmove being much faster. But you can really screw yourself if migrating on filesystems that are used pretty heavily. Since you are basically copying extents and stating that the moved extent is on the new device, you have a period where the logical volume is across 2 disks. If you are doing a ton of I/O, it can really cause delays. Doing the mirror is painfully slow, but I have not had a case where it affected performance.

