Modern versions of systemd can cause an unmount storm during shutdowns
One of my discoveries about Ubuntu 20.04 is that my test machine
can trigger the kernel's out-of-memory (OOM) killer during shutdown. My test
virtual machine has 4 GB of RAM and 1 GB of swap, but it also has
347 NFS mounts, and after some investigation, what appears to be
happening is that in the 20.04 version of systemd (systemd 245 plus
whatever changes Ubuntu has made), systemd now seems to try to run
umount for all of those filesystems at once (which also starts
a umount.nfs process for each one). On 20.04, this is apparently
enough to OOM my test machine.
(My test machine has the same amount of RAM and swap as some of our production machines, although we're not running 20.04 on any of them.)
On the one hand, this is exactly what systemd said it was going to
do in general. Systemd will do as much in parallel as possible and
these NFS mounts are not nested inside each other, so they can all
be unmounted at once. On the other hand, this doesn't scale; there's
a certain point where running too many processes at once just
thrashes the machine to death even if it doesn't drive it out of
memory. And on the third hand, this doesn't happen to us on earlier
versions of Ubuntu LTS; either their version of systemd doesn't
start as many unmounts at once, or their version of
umount.nfs requires enough fewer resources that we can get away with it.
Unfortunately, so far I haven't found a way to control this in
systemd. There appears to be no way to set limits on how many
unmounts systemd will try to do at once (or in general how many
units it will try to stop at once, even if that requires running
programs). Nor can we readily modify the mount units, because all
of our NFS mounts are done through shell scripts that directly call
mount; they don't exist in /etc/fstab or as actual systemd mount units.
(One workaround would be to set up a new systemd unit that acts
before filesystems are unmounted and runs 'umount -a -t nfs',
because that does all of the unmounts in a single process instead
of one process per filesystem. Getting the ordering right may be
a little bit tricky.)
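A minimal sketch of what such a unit might look like. The unit name, the targets in the ordering directives, and the exact umount invocation are all my guesses and untested; the trick it relies on is that systemd runs ExecStop= actions at shutdown in roughly the reverse of start ordering.

```ini
# /etc/systemd/system/cslab-nfs-umount.service -- hypothetical name.
[Unit]
Description=Unmount all NFS filesystems in one pass at shutdown
DefaultDependencies=no
# Started after remote filesystems come up, and therefore stopped
# before systemd starts tearing them down.
After=network-online.target remote-fs.target
Conflicts=shutdown.target
Before=shutdown.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# One umount process that walks all NFS mounts itself, instead of
# one umount (and umount.nfs) per filesystem.
ExecStop=/bin/umount -a -t nfs,nfs4

[Install]
WantedBy=multi-user.target
```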
How to set up an Ubuntu 20.04 ISO image to auto-install a server
In Ubuntu 20.04 LTS, Canonical has switched to an all-new and not yet fully finished system for automated server installs. Yesterday I wrote some notes about the autoinstall configuration file format, but creating a generally functional configuration file is only the first step; now you need to set up something to install it with. Around here we use DVDs, or at least ISO images, in our install setup, so that's what I've focused on.
The first thing you need (besides your autoinstall configuration file) is a suitable ISO image. At the moment, the only x86 server image that's available for Ubuntu 20.04 is the 'live server' image, so that's what I used (see here for the 18.04 differences between the plain server image and the 'live server' one, but then Ubuntu 20.04 is all in on the 'live' version). To make this ISO into a self-contained ISO that will boot with your autoinstall configuration, we need to add some data files to the ISO and then modify the isolinux boot configuration.
The obvious data file we have to add to the ISO is our autoinstall
configuration file. However, it has to be set up in a directory of its
own along with a companion file, and each has to have a specific name. Let's say
that the directory within the ISO that we're going to use for this
is /cslab/inst. Then our autoinstall configuration file
must be called
/cslab/inst/user-data, and we need an empty
/cslab/inst/meta-data file beside it. At install time, the path
to this directory is
/cdrom/cslab/inst, because the ISO is mounted on /cdrom during the install.
(I put our configuration in a subdirectory here because we put
additional bootstrap files under
/cslab that are copied onto the
system as part of the autoinstall.)
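Sketched as shell commands, with 'scratch' standing in as my placeholder name for wherever you've unpacked the ISO's contents:

```shell
# 'scratch' is a hypothetical unpacked copy of the ISO's file tree.
mkdir -p scratch/cslab/inst
# Your autoinstall configuration goes in under the fixed name 'user-data',
# e.g.: cp my-autoinstall.yaml scratch/cslab/inst/user-data
# The installer also insists on a 'meta-data' file beside it; empty is fine.
touch scratch/cslab/inst/meta-data
```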
The isolinux configuration file we need to modify in the ISO is
/isolinux/txt.cfg. We want to modify the kernel command line
to add a new argument, 'ds=nocloud;s=/cdrom/cslab/inst/', so that
the boot entry looks like this:

default live
label live
  menu label ^Install Ubuntu Server
  kernel /casper/vmlinuz
  append   initrd=/casper/initrd quiet ds=nocloud;s=/cdrom/cslab/inst/  ---
[...]
(You can modify the 'safe graphics' version of the boot entry as well if you think you may need it. I probably should do that to our isolinux txt.cfg.)
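If you're scripting the ISO modification, one way to make this edit is with sed. Here the stock append line is written to a sample txt.cfg first so the transformation is self-contained; in practice you'd run the sed over isolinux/txt.cfg in your unpacked ISO tree.

```shell
# Sample of the stock boot entry's append line (normally already
# present in isolinux/txt.cfg inside the unpacked ISO).
printf '%s\n' '  append   initrd=/casper/initrd quiet  ---' > txt.cfg
# Splice the ds= argument in just before the '---' separator.
sed -i 's|quiet *---|quiet ds=nocloud;s=/cdrom/cslab/inst/ ---|' txt.cfg
```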
The purpose and parameters of the 'ds=' argument are described
in cloud-init's documentation for its NoCloud data source.
This particular set of parameters tells the autoinstaller to find
our configuration file in
/cslab/inst/ on the ISO, where it will
automatically look for both '
user-data' and 'meta-data'.
Some sources will tell you to also add an 'autoinstall' argument
to the kernel command line. You probably don't want to do this, and
it's only necessary if you want a completely noninteractive install
that doesn't even stop to ask you if you're sure you want to erase
your disks. If you have some '
interactive-sections' specified in
your autoinstall configuration file, this is not applicable; you're
already having the autoinstall stop to ask you some questions.
For actually modifying the ISO image, what I do is prepare a scratch
directory, unpack the pristine ISO image into it with 7z (we
have 7z installed and it will unpack ISOs, among many other
things), modify the scratch directory, and then build a new ISO
with mkisofs and run isohybrid on the result:

mkisofs -o cslab_ubuntu_20.04.iso \
    -ldots -allow-multidot -d -r -l -J \
    -no-emul-boot -boot-load-size 4 -boot-info-table \
    -b isolinux/isolinux.bin -c isolinux/boot.cat \
    SCRATCH-DIRECTORY
isohybrid cslab_ubuntu_20.04.iso
(The final isohybrid step makes this ISO bootable as a USB stick.
Well, theoretically bootable; I haven't actually tried this for 20.04.)
You can automate all of this with some shell scripts that take an ISO image and a directory tree of things to merge into it (overwriting existing files) and generate a new image.
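A sketch of such a script, assuming 7z, mkisofs, and isohybrid are installed; the script layout and function names here are mine, not anything standard:

```shell
#!/bin/sh
# remaster-iso.sh: rebuild an Ubuntu ISO with a local overlay merged in.
set -e

# Merge an overlay directory tree into an unpacked ISO tree,
# overwriting any files that already exist.
merge_overlay() {
    overlay="$1" tree="$2"
    cp -a "$overlay/." "$tree/"
}

remaster_iso() {
    iso="$1" overlay="$2" out="$3"
    scratch=$(mktemp -d)
    # 7z will unpack ISO images, among many other formats.
    7z x -o"$scratch" "$iso" >/dev/null
    merge_overlay "$overlay" "$scratch"
    mkisofs -o "$out" \
        -ldots -allow-multidot -d -r -l -J \
        -no-emul-boot -boot-load-size 4 -boot-info-table \
        -b isolinux/isolinux.bin -c isolinux/boot.cat \
        "$scratch"
    isohybrid "$out"
    rm -rf "$scratch"
}

# Usage: remaster-iso.sh SOURCE.iso OVERLAY-DIR OUTPUT.iso
if [ $# -eq 3 ]; then
    remaster_iso "$1" "$2" "$3"
fi
```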