My new Linux office workstation disk partitioning for the end of 2019

December 20, 2019

I've just had the rare opportunity to replace all of my office machine's disks at once, without having to carry over any of the previous generation the way I've usually had to. As part of replacing everything I got the chance to redo the partitioning and setup of all of my disks, again all at once without the need to integrate a mix of the future and the past. For various reasons, I want to write down the partitioning and filesystem setup I decided on.

My office machine's new set of disks is a pair of 500 GB NVMe drives and a pair of 2 TB SATA SSDs. I'm using GPT partitioning on all four drives for various reasons. All four drives start with my standard two little partitions, a 256 MB EFI System Partition (ESP, gdisk code EF00) and a 1 MB BIOS boot partition (gdisk code EF02). I don't currently use either of them (my past attempt to switch from MBR booting to UEFI was a failure), but they're cheap insurance for the future. Similarly, putting these partitions on all four drives instead of just my 'system' drives is more cheap insurance.
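As an illustrative sketch, this standard pair of little leading partitions can be created with sgdisk roughly as follows (the device name here is a stand-in, not necessarily what I actually used, and the partition labels are my invention):

```shell
# Illustrative only: this wipes the disk's partition table.
sgdisk --zap-all /dev/nvme0n1
# 256 MB EFI System Partition (gdisk type code EF00)
sgdisk -n 1:0:+256M -t 1:EF00 -c 1:"EFI system partition" /dev/nvme0n1
# 1 MB BIOS boot partition (gdisk type code EF02), which GRUB uses
# for its second-stage code when BIOS booting from GPT disks
sgdisk -n 2:0:+1M -t 2:EF02 -c 2:"BIOS boot partition" /dev/nvme0n1
```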

(Writing this down has made me realize that I didn't format the ESPs. Although I don't use UEFI for booting, I have in the past put updated BIOS firmware images there in order to update the BIOS.)
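Formatting an ESP is just putting a FAT32 filesystem on it, something like the following (device name and label are stand-ins):

```shell
# ESPs are plain FAT32; firmware expects nothing fancier.
mkfs.fat -F 32 -n EFISYS /dev/nvme0n1p1
```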

The two NVMe drives are my 'system' drives. They have three additional partitions: a 70 GB partition used for a Linux software RAID mirror of the root filesystem (including /usr and /var, since I put all of the system into one filesystem), a 1 GB partition used for a Linux software RAID mirrored swap partition, and the remaining 394.5 GB as a mirrored ZFS pool that holds filesystems that I want to be as fast as possible and that I can be confident won't grow to be too large. Right now that's my home directory filesystem and the filesystem that holds source code (where I build Firefox, Go, and ZFS on Linux, for example).
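Set up from scratch, these three pieces would look something like the following sketch (the md array names, partition numbers, and pool name are all assumptions for illustration, not necessarily what my machine uses):

```shell
# Mirrored root filesystem, with a version 1.0 superblock
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 \
      /dev/nvme0n1p3 /dev/nvme1n1p3
mkfs.ext4 /dev/md0

# Mirrored swap, with the default version 1.2 superblock
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.2 \
      /dev/nvme0n1p4 /dev/nvme1n1p4
mkswap /dev/md1

# Mirrored ZFS pool on the remaining space ('fastpool' is a made-up name)
zpool create fastpool mirror /dev/nvme0n1p5 /dev/nvme1n1p5
```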

The two SATA SSDs are my 'data' drives, holding various larger but less important things. They have two 70 GB partitions that are Linux software RAID mirrors, and the remaining space is in a single partition for another mirrored ZFS pool. One of the two 70 GB partitions is so that I can make backup copies of my root filesystem before upgrading Fedora (if I bother to do so); the other is essentially an 'overflow' filesystem for some data that I want on an ext4 filesystem instead of in a ZFS pool (including a backup copy of all recent versions of ZFS on Linux that I've installed on my machine, so that if I update and the very latest version has a problem, I can immediately reinstall a previous one). The ZFS pool on the SSDs contains larger and generally less important things like my VMWare virtual machine images and the ISOs I use to install them, and archived data.

Both ZFS pools are set up following my historical ZFS on Linux practice, where they use the /dev/disk/by-id names for my disks instead of the sdX and nvme... names. Both pools are actually relatively old; I didn't create new pools for this and migrate my data, but instead just attached new mirrors to the old pools and then detached the old drives (more or less). The root filesystem was similarly migrated from my old SSDs by attaching and removing software RAID mirrors; the other Linux software RAID filesystems are newly made and copied through ext4 dump and restore (and the new software RAID arrays were added to /etc/mdadm.conf more or less by hand).
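The pool migration is the standard ZFS attach-and-detach dance. A hedged sketch, with a made-up pool name and placeholder by-id disk names standing in for the real ones:

```shell
# Attach the new disk's partition as an extra side of the mirror,
# using /dev/disk/by-id names ('tank' and the ids are placeholders).
zpool attach tank ata-OLDDISK-SERIAL-part5 nvme-NEWDISK-SERIAL-part5

# Wait for the resilver to finish before dropping the old side.
zpool status tank
zpool detach tank ata-OLDDISK-SERIAL-part5

# Copying an ext4 filesystem to a newly made array with dump and restore:
dump -0 -f - /dev/md2 | (cd /mnt/newfs && restore -rf -)

# This prints ARRAY lines that can be edited into /etc/mdadm.conf by hand:
mdadm --detail --scan
```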

(Since I just looked it up, the ZFS pool on the SATA SSDs was created in August of 2014, originally on HDs, and the pool on the NVMe drives was created in January of 2016, originally on my first pair of (smaller) SSDs.)

Following my old guide to RAID superblock formats, I continued to use the version 1.0 format for everything except the new swap partition, where I used the version 1.2 format. By this point using 1.0 is probably superstition; if I have serious problems (for example), I'm likely to just boot from a Fedora USB live image instead of trying anything more complicated.
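The superblock format is chosen at array creation time with mdadm's --metadata argument (as in the earlier sketch) and can be checked afterwards on any component device; the device name below is again a stand-in:

```shell
# Version 1.0 puts the superblock at the end of the component device,
# which is what historically let boot loaders and other tools read the
# component as if it were a bare filesystem.
mdadm --examine /dev/nvme0n1p3 | grep -i version
```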

All of this feels very straightforward and predictable by now. I've moved away from complex partitioning schemes over time and almost all of the complexity left is simply that I have two different sets of disks with different characteristics, and I want some filesystems to be fast more than others. I would like all of my filesystems to be on NVMe drives, but I'm not likely to have NVMe drives that big for years to come.

(The most tangled bit is the 70 GB software RAID array reserved for a backup copy of my root filesystem during major upgrades, but in practice it's been quite a while since I bothered to use it. Still, having it available is cheap insurance in case I decide I want to do that someday during an especially risky Fedora upgrade.)

Comments on this page:

By Anonymous Coward at 2019-12-21 21:49:53:

I'd be very interested to know about how your experience with this setup goes. Because I have a similar one, and I've deliberately steered clear of EFI booting because my impression was that the EFI system partition got in the way of having (software) mirrored boot drives. See for instance this page, which talks about mirroring that partition behind the back of the EFI firmware using old MD superblock formats, even in the face of that firmware modifying the partition. Compared to "install GRUB to both drives" (as I do with BIOS/MBR booting) that sounds to me like the kind of alarming hack that will one day come back and bite me.

I was also under the impression that UEFI booting was the only way to boot from NVMe devices, but I can't remember where I read that, and I suppose it was superstition, if you're successfully doing so. I bought SATA M.2 drives for that reason, but it'd be good to know that I could get NVMe replacements if one of them died, since supposedly the market is moving quickly away from SATA for M.2.

I would just stay on BIOS/MBR booting indefinitely, but I've heard that this requires special firmware support, and that this functionality (the "compatibility support module") may be going away soon. Or rather, Microsoft or Intel or whoever will soon no longer require systems to have it to get their blessing, which makes me suspect it'll vanish or break. Again, I can't remember a source for that, so I'd be happy to hear if I'm mistaken.

By cks at 2019-12-22 01:52:25:

My motherboard is definitely booting from the NVMe drives using BIOS booting (and I can be confident of that because the old SATA SSDs that I was booting from got physically removed). The issue of not being able to mirror the EFI System Partition was one reason I wound up abandoning my attempts to shift over to UEFI booting; as long as BIOS booting still works, it seems mostly superior to UEFI for Linux.

(There are some things you lose by not using UEFI boot, but they're currently not very important to me.)


