My views on using LVM for your system disk and root filesystem

May 4, 2017

In a comment on my entry about perhaps standardizing the size of our server root filesystems, Goozbach asked a good question:

Any reason not to put LVM on top of raid for OS partitions? (it's saved my bacon more than once both resizing and moving disks)

First, let's be clear what we're talking about here. This is the choice between putting your root filesystem directly into a software RAID array (such as /dev/md0) or creating an LVM volume group on top of the software RAID array and then having your root filesystem be a logical volume in it. In a root-on-LVM-on-MD setup, I'm assuming that the root filesystem would still use up all of the disk space in the LVM volume group (for most of the same reasons outlined for the non-LVM case in the original entry).
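
For concreteness, here's a hedged sketch of what creating each layout might look like. The device and volume names are all hypothetical examples (sda2 and sdb2 as the RAID members, 'sysvg' and 'root' as the volume group and logical volume):

    # Create a two-disk software RAID mirror (hypothetical partitions).
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # Plain case: the root filesystem goes directly on the array.
    mkfs.ext4 /dev/md0

    # LVM case: the array becomes a physical volume, a volume group
    # goes on top, and a single root logical volume takes all the space.
    pvcreate /dev/md0
    vgcreate sysvg /dev/md0
    lvcreate -l 100%FREE -n root sysvg
    mkfs.ext4 /dev/sysvg/root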

For us, the answer is that there is basically no payoff for routinely doing this, because in order to actually need LVM here, a number of unusual things would all have to be true at once:

  • we can't just use space in the root filesystem; for some reason, it has to be an actual separate filesystem.
  • but this separate filesystem has to use space from the system disks, not from any additional disks that we might add to the server.
  • and there needs to be some reason why we can't just reinstall the server from scratch with the correct partitioning, and must instead go through the work of shrinking the root filesystem and the root logical volume to free up enough space for the new filesystem (a sketch of what this involves follows this list).
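
As a sketch of what that third point involves (hedged, and reusing the hypothetical sysvg/root names from above): an ext4 root filesystem can't be shrunk while it's mounted, so the whole dance has to happen from rescue media or the like, roughly:

    # With the root filesystem unmounted (eg booted from rescue media):
    e2fsck -f /dev/sysvg/root
    resize2fs /dev/sysvg/root 20G    # shrink the filesystem first
    lvreduce -L 20G sysvg/root       # then shrink the LV to match;
                                     # getting these sizes wrong (LV
                                     # smaller than the filesystem)
                                     # destroys the filesystem.
    # The volume group now has free space for the new filesystem:
    lvcreate -L 5G -n scratch sysvg
    mkfs.ext4 /dev/sysvg/scratch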

Probably an important part of this is that our practice is to reinstall servers from scratch when we repurpose them, using our install system that makes this relatively easy. When we do this we get the option to redo the partitioning (although it's generally easier to keep things the same, since that means we don't even have to repartition, just tell the installer to use the existing software RAIDs). If we had such a special need for a separate filesystem, the server is probably sufficiently unique and important that we would want to start it over from scratch rather than awkwardly retrofit an existing server into shape.

(One problem with a retrofitted server is that you can't be entirely sure you can reinstall it from scratch if you need to, for example because the hardware fails. Installing a server from scratch in the first place goes a long way toward assuring that you can reinstall it later.)

We do have servers with unusual local storage needs. But those servers mostly use additional disks or unusual disks to start with, especially now that we've started moving to small SSDs for our system disks. With small SSDs there just isn't much space left over for a second filesystem, especially if you want to leave a reasonable amount of space free on both it and the root filesystem in case of various contingencies (including just 'more logs got generated than we expected').

I also can't think of many things that would need a separate filesystem instead of just being part of the root filesystem and using up space there. If we're worried about this whatever-it-is running the root filesystem out of space, we almost certainly want to put in big, non-standard system disks in the first place rather than try to wedge it into whatever small disks the system already has. Leaving all the free space in a single (root) filesystem that everything uses gives us much the same shared-space flexibility that ZFS pools do, and we're lazy enough to like that. It's possible that I'm missing some reasonably common special case here because we just don't do whatever it is that really needs a separate local filesystem.

(We used to have some servers that needed additional system filesystems because they needed or at least appeared to want special mount options. Those needs quietly went away over the years for various reasons.)

Sidebar: LVM plus a fixed-size root filesystem

One possible option to advance here is a hybrid approach between a fixed-size root partition and an LVM setup: you make the underlying software RAID array and LVM volume group as big as possible, but then assign only a fixed and limited amount of that space to the root filesystem. The remaining space is left as uncommitted free space in the volume group, and is either allocated to the root filesystem later if it needs to grow or used for additional filesystems if you need them.
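
A minimal sketch of this hybrid layout, again with the hypothetical sysvg name; the only real difference from the full-LVM setup earlier is that lvcreate is given a fixed size instead of all the free space:

    # Maximum-sized array and volume group, as before.
    pvcreate /dev/md0
    vgcreate sysvg /dev/md0

    # The root filesystem gets only a fixed 30 GiB (an arbitrary example
    # size); the rest of the volume group stays as uncommitted free space.
    lvcreate -L 30G -n root sysvg
    mkfs.ext4 /dev/sysvg/root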

I don't see much advantage to this setup, though. Since the software RAID array is maximum-sized, you still have the disk replacement problems that motivated my initial question. You add the chance of the root filesystem running out of space if you don't keep an eye on it and make the time to grow it as needed, and for this setup to pay off you still have to need the space in a separate filesystem for some reason, instead of as part of the root filesystem. All you save is the hassle of shrinking the root filesystem if you ever need to create that additional filesystem with its own space.
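
(For what it's worth, the 'grow it as needed' step is the painless part, since ext4 filesystems can be grown while mounted. A hedged sketch, with the same hypothetical names:

    # See how much uncommitted space the volume group has left.
    vgs sysvg

    # Grow the root LV by 10 GiB and resize the mounted ext4
    # filesystem in the same step.
    lvextend --resizefs -L +10G sysvg/root

It's the shrinking direction that requires downtime and care.)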
