Ubuntu 22.04 with multiple disks and (U)EFI booting

August 9, 2022

One of the traditional and old problems with UEFI booting on servers is that it had a bad story if you wanted to be able to boot off multiple disks. Each disk needed its own EFI System Partition (ESP) and you either manually kept them in synchronization (perhaps via rsync in a cron job) or put them in a Linux software RAID mirror with the RAID superblock at the end and hoped hard that nothing ever went wrong. To my surprise, the state of things seems to be rather better in Ubuntu 22.04, although there are still traps.

Modern Linuxes don't put much in the ESP, and in particular even Fedora no longer puts frequently changing things there. In Ubuntu 22.04, what's there in the EFI/ubuntu subdirectory is a few GRUB binaries and a stub grub.cfg that tells GRUB where to find your real /boot/grub/grub.cfg, which normally lives in your root filesystem. All of these are installed into /boot/efi by running 'grub-install', or into some other location by running 'grub-install --efi-directory=/some/where'.

(On a 64-bit Ubuntu 22.04 EFI booted system, 'grub-install --help' will usefully tell you that the default target type is 'x86_64-efi', although the manual page will claim otherwise.)
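For concreteness, here's roughly what this looks like in practice (the UUID and partition in the stub grub.cfg shown below are invented; yours will point at your own root filesystem):

    # install GRUB's EFI bits into the ESP mounted at /boot/efi
    grub-install --target=x86_64-efi --efi-directory=/boot/efi

    # the stub /boot/efi/EFI/ubuntu/grub.cfg it writes is just a pointer
    # to the real configuration, roughly:
    #   search.fs_uuid 0123abcd-feed-beef-cafe-0123456789ab root hd0,gpt2
    #   set prefix=($root)'/boot/grub'
    #   configfile $prefix/grub.cfg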

This lets you manually maintain two or more ESPs; just mount the second one somewhere (perhaps temporarily) and run grub-install against it. Ubuntu has added a script to do more or less this (cf), /usr/lib/grub/grub-multi-install, which is normally run by EFI grub package postinstalls as 'grub-multi-install --target=x86_64-efi'. This script will run through a list of configured ESPs, mount them temporarily (if they're not already mounted), and update them with grub-install. In the 22.04 server installer, if you mark additional disks as extra boot drives, it will create an ESP partition on them and add them to this list of configured ESPs.

(I believe that you can run this script by hand if you want to.)
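As a sketch (the '/dev/sdb1' device name is an assumption), doing this by hand and then with Ubuntu's script looks something like:

    # by hand: mount the second ESP somewhere and point grub-install at it
    mount /dev/sdb1 /mnt
    grub-install --target=x86_64-efi --efi-directory=/mnt
    umount /mnt

    # or let Ubuntu's script update every configured ESP in one go
    /usr/lib/grub/grub-multi-install --target=x86_64-efi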

The list of configured ESPs is stored in a debconf selection, 'grub-efi/install_devices'; there are also a number of other related grub-efi/ debconf selections. An important thing to know is that configured ESPs are listed using their disk's ID, as /dev/disk/by-id/<something> (which is perfectly sensible and perhaps the only sane way to do it). This means that if one of your boot disks fails and is replaced, the list of configured ESPs won't include the new disk (even if you made an ESP on it) and will (still) include the old one. Apparently one fix is to reconfigure a relevant GRUB package, such as (I think) 'dpkg-reconfigure grub-efi-amd64', from this AskUbuntu answer.

(In the usual Debian and Ubuntu way, one part of this setup is that a package upgrade of GRUB may someday abruptly stop to quiz you about this situation, if you've replaced a disk but not reconfigured things since. Also, I don't know if there's a better way to see this list of configured ESPs other than 'debconf-get-selections | grep ...' or maybe 'debconf-show grub-efi-amd64'.)
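To make that concrete, inspecting and then fixing the list looks something like this (again, I haven't had to do the reconfigure for real yet):

    # see what ESPs are currently configured
    debconf-show grub-efi-amd64 | grep install_devices
    # or:
    debconf-get-selections | grep grub-efi/install_devices

    # after replacing a disk (and giving it an ESP), re-pick the ESPs
    dpkg-reconfigure grub-efi-amd64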

Life would be nicer if you could set Ubuntu 22.04 to just install or update GRUB on all valid ESPs that it found, but the current situation isn't bad (assuming that the reconfigure works; I haven't tested it, since we just started looking into this today). The reconfiguration trick is an extra thing to remember, but at least we're already used to running grub-install on BIOS boot systems. I'm also not sure I like having /boot/efi listed in /etc/fstab and mounted, since it's non-mirrored; if that particular disk fails, you could have various issues.

(In looking at this I discovered that some of our systems were mounting /boot/efi from their second disk instead of their first one. I blame the Ubuntu 22.04 server installer for reasons beyond the scope of this aside.)

PS: On a BIOS boot system, the 'grub-pc/install_devices' setting can be a software RAID array name, which presumably means 'install boot blocks on whatever devices are currently part of this RAID array'. I assume that UEFI boot can't be supported this way because there would be more magic in going from a software RAID array to the ESP partitions (if any) on the same devices.
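As a purely illustrative sketch (the values shown are invented), you can look at this setting with:

    debconf-show grub-pc | grep install_devices
    # on a software RAID system this might show something like
    #   * grub-pc/install_devices: /dev/disk/by-id/md-uuid-0123...
    # instead of a list of individual disks' by-id names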

PPS: Someday Ubuntu may let you configure both BIOS and UEFI boot on the same system, which would have uses if you want to start off with one but preserve your options to switch to the other for various reasons. We'd probably use it on our servers.


Comments on this page:

I assume that UEFI boot can't be supported this way because there would be more magic in going from a software RAID array to the ESP partitions (if any) on the same devices.

I don't particularly understand this part to be honest, as you already noted that there's a possibility of:

Linux software RAID mirror with the RAID superblock at the end

In theory, nothing should go wrong that way, as the only time you write to it is while the system is booted and installing updates for grub, so you can never really break the MD array. And because the superblock is at the end of the partition, it will not really break the EFI boot process either, since that FAT32 partition on top of the MD array looks like a regular FAT32 partition early in the boot process.

And if one of the drives fails, the md array still assembles, but in a degraded state, which can obviously be monitored and fixed.
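(A minimal way to keep an eye on that, assuming the mirror is /dev/md0:)

    cat /proc/mdstat
    mdadm --detail /dev/md0 | grep -E 'State|Failed'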

Seems to me that's the overall best solution we have at the moment.

By cks at 2022-08-10 13:03:00:

The BIOS boot process of going from the software RAID array name to the disks doesn't put the boot blocks in the partitions of the RAID array, it goes from the partition names of the RAID array (eg sda1 and sdb1) to the whole disks (sda and sdb). Or at least it had better do that, since most BIOS bootblocks are installed on the whole disk. You could do a similar thing to go from the RAID partitions to the whole disk to some ESP partition on the disk, but the idea of finding and recognizing the ESP partition on a disk raises a bunch of questions; what if there isn't one, or it's not clear if something is an ESP, or if there are more than one ESP candidate on a single disk?
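As a sketch with assumed disk names, the BIOS version of 'put the boot blocks everywhere' is just:

    # boot blocks go on the whole disks, not on the RAID member partitions
    grub-install /dev/sda
    grub-install /dev/sdb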

The traditional two problems with making the ESP partitions themselves into a software RAID array is that historically some UEFI firmware would refuse to boot such a system, and if the firmware or something else ever updated an ESP, you would have inconsistent mirror data and doom might well ensue (including corrupted vfat filesystems, which would likely make UEFI boot unhappy). The third problem is that Linux installers don't support it; as far as I know, the Ubuntu 22.04 installer has no option for this if you're setting up a UEFI system (and in fact will shoot you in the foot if you take the ESPs you've told it about and then make them into a software RAID array).

The BIOS boot process of going from the software RAID array name to the disks doesn't put the boot blocks in the partitions of the RAID array, it goes from the partition names of the RAID array (eg sda1 and sdb1) to the whole disks (sda and sdb).

Indeed, the BIOS boot process reads the first 512 bytes of the drive and executes them directly; that sector usually holds the Stage 1 bootloader, which then has information about the partitions and where to find them (the 4 primary partitions).
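If you want to look at that first sector yourself (assuming the disk is /dev/sda), something like this works:

    dd if=/dev/sda of=/tmp/mbr.bin bs=512 count=1
    file /tmp/mbr.bin    # 'file' identifies it as a DOS/MBR boot sector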

You could do a similar thing to go from the RAID partitions to the whole disk to some ESP partition on the disk, but the idea of finding and recognising the ESP partition on a disk raises a bunch of questions; what if there isn't one, or it's not clear if something is an ESP, or if there are more than one ESP candidate on a single disk?

That is an interesting problem, and I have no clear answer to that one.

The traditional two problems with making the ESP partitions themselves into a software RAID array is that historically some UEFI firmware would refuse to boot such a system, and if the firmware or something else ever updated an ESP, you would have inconsistent mirror data and doom might well ensue (including corrupted vfat filesystems, which would likely make UEFI boot unhappy).

This is a layered issue, but let's hypothetically say that you have two partitions, each 512MB and each on a separate disk drive.

If you assemble the MD array with the 0.9 metadata version, the superblock is located at the end of the device. Then, when you format that assembled device with FAT32 and install the necessary things on it, you basically get a pure FAT32 partition from the EFI/BIOS/OS perspective. If you examine each MD member partition individually, it appears as a regular FAT32 partition.
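A minimal sketch of that setup, with assumed device names (1.0 metadata, like 0.9, also keeps the superblock at the end):

    # mirror the two ESP-sized partitions; the md superblock sits at the end
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
    # format the mirror as FAT32; each member now also looks like a plain
    # FAT32 filesystem to the firmware
    mkfs.vfat -F 32 /dev/md0
    mount /dev/md0 /boot/efi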

Then, from the perspective of the boot process, you pick and choose either one. And from the perspective of upgrades, well, all changes are saved to both drives/partitions since the OS mounts /boot as an MD device.

The issue that is still pending is an eventual stage 1 bootloader update, or updates to the EFI variables if those are stored in the motherboard firmware.

Usually, in my scenarios, that worked fine, but I may be wrong and not seeing the whole picture, or not understanding it properly for that matter. So if you have the patience to expand on the issue, I would highly appreciate it.

Ivan wrote:

If you assemble MD Array with 0.9 sub-version, superblock is located at the end of the device.

For the record, the 1.0 mdadm superblock format is also only written at the end of the device:

There may be advantages to using a 1.x-based format or the 0.9 in various situations.

If unspecified, mdadm uses 1.2 as of this writing, which is 4K from the beginning of the device.
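To check which metadata format an existing member uses (assuming it's /dev/sda1):

    mdadm --examine /dev/sda1 | grep -E 'Version|Super Offset'

The 'Super Offset' line, shown for 1.x metadata, tells you where the superblock actually lives on the device.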

By Sam Nelson at 2022-12-14 10:19:06:

Just wanted to say a quick thanks for your great writeup. There is a lot of confusing information online on how to properly sync up multiple EFI partitions. Your post cleared up all the questions I had!

Cheers,

Sam
