My new Linux office workstation disk partitioning for the end of 2019
I've just had the rare opportunity to replace all of my office machine's disks at once, without having to carry over any of the previous generation the way I've usually had to. As part of replacing everything I got the chance to redo the partitioning and setup of all of my disks, again all at once without the need to integrate a mix of the future and the past. For various reasons, I want to write down the partitioning and filesystem setup I decided on.
My office machine's new set of disks is a pair of 500 GB NVMe drives and a pair of 2 TB SATA SSDs. I'm using GPT partitioning on all four drives for various reasons. All four drives start with my standard two little partitions, a 256 MB EFI System Partition (ESP, gdisk code EF00) and a 1 MB BIOS boot partition (gdisk code EF02). I don't currently use either of them (my past attempt to switch from MBR booting to UEFI was a failure), but they're cheap insurance for the future. Similarly, putting these partitions on all four drives instead of just my 'system' drives is more cheap insurance.
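For illustration, creating those two little partitions with sgdisk looks roughly like this (the device name here is a placeholder, not one of my actual commands):

    # 256 MB EFI System Partition, gdisk type code EF00
    sgdisk -n 1:0:+256M -t 1:EF00 -c 1:"EFI system" /dev/nvme0n1
    # 1 MB BIOS boot partition, gdisk type code EF02
    sgdisk -n 2:0:+1M -t 2:EF02 -c 2:"BIOS boot" /dev/nvme0n1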
(Writing this down has made me realize that I didn't format the ESPs. Although I don't use UEFI for booting, I have in the past put updated BIOS firmware images there in order to update the BIOS.)
The two NVMe drives are my 'system' drives. They have three additional partitions: a 70 GB partition used for a Linux software RAID mirror of the root filesystem (including /usr and /var, since I put all of the system into one filesystem), a 1 GB partition that is a Linux software RAID mirrored swap partition, and the remaining 394.5 GB as a mirrored ZFS pool that holds filesystems that I want to be as fast as possible and that I can be confident won't grow to be too large. Right now that's my home directory filesystem and the filesystem that holds source code (where I build Firefox, Go, and ZFS on Linux, for example).
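As a rough sketch of how this layout gets set up from scratch (with made up device and array names; as I note below, the real arrays and pools were mostly migrated rather than created new):

    # 70 GB RAID-1 mirror for the root filesystem (version 1.0 superblocks)
    mdadm --create /dev/md20 --level=1 --raid-devices=2 --metadata=1.0 \
          /dev/nvme0n1p3 /dev/nvme1n1p3
    mkfs.ext4 /dev/md20
    # 1 GB RAID-1 mirror used as swap (version 1.2 superblocks)
    mdadm --create /dev/md21 --level=1 --raid-devices=2 --metadata=1.2 \
          /dev/nvme0n1p4 /dev/nvme1n1p4
    mkswap /dev/md21
    # mirrored ZFS pool on the remaining space
    zpool create fastpool mirror \
          /dev/disk/by-id/nvme-DISK1-part5 /dev/disk/by-id/nvme-DISK2-part5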
The two SATA SSDs are my 'data' drives, holding various larger but less important things. They have two 70 GB partitions that are Linux software RAID mirrors, and the remaining space is in a single partition for another mirrored ZFS pool. One of the two 70 GB partitions is so that I can make backup copies of my root filesystem before upgrading Fedora (if I bother to do so); the other is essentially an 'overflow' filesystem for some data that I want on an ext4 filesystem instead of in a ZFS pool (including a backup copy of all recent versions of ZFS on Linux that I've installed on my machine, so that if I update and the very latest version has a problem, I can immediately reinstall a previous one). The ZFS pool on the SSDs contains larger and generally less important things, like my VMWare virtual machine images and the ISOs I use to install them, and archived data.
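Sketched the same way (again with hypothetical device and array names), the SATA SSDs get two software RAID mirrors plus the second pool:

    # two 70 GB RAID-1 mirrors: root backup copies and the 'overflow' ext4 filesystem
    mdadm --create /dev/md22 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda3 /dev/sdb3
    mdadm --create /dev/md23 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda4 /dev/sdb4
    mkfs.ext4 /dev/md23
    # the rest of each SSD goes to the second mirrored ZFS pool
    zpool create datapool mirror \
          /dev/disk/by-id/ata-SSD1-part5 /dev/disk/by-id/ata-SSD2-part5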
Both ZFS pools are set up following my historical ZFS on Linux practice, where they use the /dev/disk/by-id names for my disks instead of the sdX and nvme... names. Both pools are actually relatively old; I didn't create new pools for this and migrate my data, but instead just attached new mirrors to the old pools and then detached the old drives (more or less). The root filesystem was similarly migrated from my old SSDs by attaching and removing software RAID mirrors; the other Linux software RAID filesystems are newly made and copied through ext4 dump and restore (and the new software RAID arrays were added to /etc/mdadm.conf more or less by hand).
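In sketch form (and with hypothetical device names, not my literal commands), the ZFS side of that migration is just attach, wait for the resilver, and detach; the software RAID side is the add/grow/fail/remove dance; and the freshly created arrays got their contents and their /etc/mdadm.conf lines more or less like so:

    # ZFS: mirror onto the new partition, then drop the old one
    zpool attach fastpool /dev/disk/by-id/OLDDISK-part5 /dev/disk/by-id/NEWDISK-part5
    zpool status fastpool        # wait until the resilver finishes
    zpool detach fastpool /dev/disk/by-id/OLDDISK-part5

    # software RAID: add the new partition, widen the mirror, then
    # retire the old partition once the resync is done
    mdadm /dev/md0 --add /dev/nvme0n1p3
    mdadm --grow /dev/md0 --raid-devices=3
    mdadm /dev/md0 --fail /dev/sdc3 --remove /dev/sdc3
    mdadm --grow /dev/md0 --raid-devices=2

    # new arrays: copy the data over with dump and restore, then record them
    dump -0 -f - /oldfs | (cd /newfs && restore -rf -)
    mdadm --detail --scan        # edit its output into /etc/mdadm.conf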
(Since I just looked it up, the ZFS pool on the SATA SSDs was created in August of 2014, originally on HDs, and the pool on the NVMe drives was created in January of 2016, originally on my first pair of (smaller) SSDs.)
Following my old guide to RAID superblock formats, I continued to use the version 1.0 format for everything except the new swap partition, where I used the version 1.2 format. By this point using 1.0 is probably superstition; if I have serious problems (for example), I'm likely to just boot from a Fedora USB live image instead of trying anything more complicated.
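(If you're curious what format an existing array is using, mdadm will tell you; the array name here is hypothetical. The format itself is chosen at creation time with --metadata, as in the earlier sketches.)

    mdadm --detail /dev/md0 | grep -i 'version'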
All of this feels very straightforward and predictable by now. I've moved away from complex partitioning schemes over time and almost all of the complexity left is simply that I have two different sets of disks with different characteristics, and I want some filesystems to be fast more than others. I would like all of my filesystems to be on NVMe drives, but I'm not likely to have NVMe drives that big for years to come.
(The most tangled bit is the 70 GB software RAID array reserved for a backup copy of my root filesystem during major upgrades, but in practice it's been quite a while since I bothered to use it. Still, having it available is cheap insurance in case I decide I want to do that someday during an especially risky Fedora upgrade.)