Wandering Thoughts

2022-05-19

Moving a libvirt-based virtualization setup from one machine to another

Following my successful experience with libvirt on my desktop, I decided to build out a general virtualization server for tests, scratch machines, and so on. Unfortunately, the old server I reused for this turned out not to be up to the task, at least not if I wanted it to be attractive. So today I reinstalled things on a newer, better server and wanted to move over the entire libvirt setup from the initial machine so that I didn't have to redo everything. What I needed to copy (and why) for this wasn't obvious to me, so here are my notes.

(My initial VM host was one of our old OmniOS ZFS NFS fileservers. I feel oddly sad that we're not using those machines for anything any more; they were decently nice machines in their time. Although since their KVM over IP console is Java-only and doesn't work with Fedora's normal Java, they've gotten less attractive.)

First off, you need to copy over the actual disk images (and ISO images). Libvirt would like to put these in /var/lib/libvirt/images, but I had relocated these to a new filesystem (which I called /virt) on different disks. So I rsync'd the entire thing over. Right now this isn't much data, since I haven't done much with virtual machines on this VM host (partly because, well, it's slow).

I think that the fast way to do the rest would have been to copy over /etc/libvirt and /var/lib/libvirt in their entirety. This would have left me needing to edit the libvirt configuration for my bridged network, because the network device names changed between the old hardware and the new hardware. Otherwise, I optimistically think I might not have had to touch anything else. However, I opted to do things piece by piece.

What I did and copied was more or less:

  1. I used virt-manager to stop the default storage pool and (NAT) networking, and rename both of them.

  2. I dumped the XML for the storage pools on the old server with 'virsh pool-dumpxml' and then created the pools with this XML and 'virsh pool-define' (a rough sketch of the commands for this and the next two steps is below the list).

  3. I'd kept the original XML definitions of my bridged networking on the old machine, so I edited it to have the right network device names and loaded the XML to define things with 'virsh net-define'.

  4. The basic XML definitions for virtual machines are in /etc/libvirt/qemu with obvious names, so I copied them over. After rebooting (to restart the libvirt daemons from the ground up), this got me the VMs but not their snapshots.

  5. To get the snapshots as well, I had to copy over /var/lib/libvirt/qemu/snapshot, which contains the XML defining the snapshots (I make all of my snapshots with the VM powered down).
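
For concreteness, here's a rough sketch of the commands involved in steps 2 through 4. The pool name 'default', the network name 'hostbridge', and the use of rsync over SSH to 'newhost' are all just illustrative placeholders:

# On the old VM host: dump the pool and network definitions and copy
# the VM and snapshot XML over.
virsh pool-dumpxml default >default-pool.xml
virsh net-dumpxml hostbridge >hostbridge.xml
scp default-pool.xml hostbridge.xml newhost:
rsync -a /etc/libvirt/qemu/*.xml newhost:/etc/libvirt/qemu/
rsync -a /var/lib/libvirt/qemu/snapshot/ newhost:/var/lib/libvirt/qemu/snapshot/

# On the new VM host: edit hostbridge.xml for the new device names,
# then define and start everything.
virsh pool-define default-pool.xml
virsh pool-start default; virsh pool-autostart default
virsh net-define hostbridge.xml
virsh net-start hostbridge; virsh net-autostart hostbridge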

The actual disk snapshots are already present as 'internal snapshots' in the VM QEMU disk images, so the XML just tells libvirt about them and lets you manipulate snapshots through libvirt tools like virsh instead of having to use raw qemu-img commands (the way you have to do with UEFI based VMs).

The libvirt storage is defined in XML files in /etc/libvirt/storage. Networking (for QEMU based VMs) is in /etc/libvirt/qemu/networks. You may have libvirt hook scripts in /etc/libvirt/hooks (as I do on my desktop to set up NAT). There's some dnsmasq stuff in /var/lib/libvirt/dnsmasq but I think it's probably all automatically written, or at least stuff that you can probably not care about copying over.

I don't regret going through the piece by piece effort because I feel that I now understand libvirt a bit more and I'm better equipped to poke around it for other things. But if I was doing it again I'd probably just copy all of /etc/libvirt and /var/lib/libvirt, then edit any network device names and so on as necessary.
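
As a sketch, the wholesale approach would be something like the following, run with the libvirt daemons stopped on the new machine ('newhost' and the /virt filesystem are specific to my setup):

rsync -a /etc/libvirt/ root@newhost:/etc/libvirt/
rsync -a /var/lib/libvirt/ root@newhost:/var/lib/libvirt/
rsync -a /virt/ root@newhost:/virt/
# then, on the new host, fix the network device names in
# /etc/libvirt/qemu/networks/*.xml and restart the libvirt daemons (or reboot).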

LibvirtMovingSetup written at 22:57:09

2022-05-13

The cause of an odd DNF/RPM error about conflicting files

Recently I updated the kernel on my Fedora desktops and as I usually do, updated to the latest development version of ZFS on Linux. When I did this, I got an error message from DNF (which I think is really an RPM error message) that struck me as impossible:

file /usr/src/zfs-2.1.99/cmd/arc_summary from install of zfs-dkms-2.1.99-1199_g0c11a2738.fc35.noarch conflicts with file from package zfs-dkms-2.1.99-1136_ge77d59ebb.fc35.noarch

Here, I'm updating the zfs-dkms RPM package (among others), or trying to, and DNF is telling me that a file from the new version of the package is conflicting with the same file from an old version of the package.

It's perfectly natural for DNF/RPM to tell you about cross-package file conflicts (although it's not really supposed to happen). What that means is that both RPM packages include a file by that name but with different contents. However, this isn't supposed to happen when you update a package; the old and the new versions of the package may have the same file with different contents, but after the update there's no conflict because the old file (and package) has been replaced with the new file (and package).

(Modern versions of RPM and DNF even handle the case of upgrading multi-arch packages in sync with each other that used to cause problems.)

Unfortunately, this error message is misleading, and specifically it is misleading in its use of 'file'. You and I might think that by 'file', DNF means a Unix file. Actually what it means is a Unix file name, because the actual problem is that in the old package, /usr/src/zfs-2.1.99/cmd/arc_summary was a directory (due to a package building error) and in the new package it is now a file (due to the package building error being fixed). RPM can turn one version of a Unix file into another version of a Unix file during a package upgrade, but it can't 'upgrade' a directory into a file (or at least it won't).

Fortunately there is a simple fix, because what RPM cares about for the old package is what's actually on the filesystem. All you have to do is remove the arc_summary directory in advance (or rename it to something else), and DNF/RPM will be happy. Now that the old package's directory is gone, it doesn't have to try to turn it into a file (which it can't do); it can just put the new file down.
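
In concrete terms, the workaround looks something like this; the path comes straight from the error message, and 'dnf update zfs-dkms' stands in for whatever update you were trying to do:

# Confirm that the conflicting name really is a directory from the old package:
ls -ld /usr/src/zfs-2.1.99/cmd/arc_summary
# Move it aside (or remove it outright if you're feeling confident):
mv /usr/src/zfs-2.1.99/cmd/arc_summary /usr/src/zfs-2.1.99/cmd/arc_summary.bad
# Now the update can put the new file down without complaint:
dnf update zfs-dkms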

(RPM not being willing to deal with this change of a directory into a file in a new version of the package is probably sensible. But it really should have a better error message, one that gives people an answer about what's going on.)

DNFRPMOddFileConflictError written at 21:42:23

2022-05-12

Why I'm considering some use of NetworkManager (and I probably have to)

I'm not a fan of NetworkManager on my desktops (although I think there are machines where it's good), but recently I tweeted:

I wonder how well I could get NetworkManager to co-exist with systemd-networkd, so that NM handles the few things networkd is bad at (eg OpenVPN, PPPoE) and networkd handles everything normal.

My home desktop has roughly three types of networks. First, there are networks (the local ethernet, my Wireguard tunnel, and now my libvirt virtual networks) that are configured through well supported mechanisms like networkd. Second, there is my PPPoE DSL connection, which is still using the deprecated and someday to be removed ifup. Finally, there are networks that I don't even have because they're too difficult to set up by hand, such as an OpenVPN connection to our VPN server (which I might use as a backup to my Wireguard tunnel if my office desktop is down).

At some point, I'm going to need a replacement for ifup to drive my PPPoE DSL connection, and I would rather not build that myself. Networkd doesn't handle PPPoE connections and may never do so, so the only other real choice seems to be NetworkManager. However, I don't want to hand all of my networking over to NetworkManager; instead, I would like my existing 'good' networking to keep coexisting with NetworkManager. My good networking would keep on as it is, while NM would handle PPPoE and allow me to finally set up things like OpenVPN connections. I'd have to start using nmcli commands to manage some things, but in practice my PPPoE DSL link is supposed to be up all of the time and I'd only use other NM-managed things in an emergency.

(I know that NetworkManager can set up working PPPoE DSL for me, because I did it once long ago and as far as I know that configuration still works. Although I admit I haven't used it for years. The actual PPPoE DSL configuration file on my laptop in /etc/NetworkManager/system-connections also looks pleasantly simple and straightforward, although since it has a UUID I suspect I can't just copy it over to my desktop.)

It's possible to make NetworkManager ignore devices entirely (also), and I've set this up on both my home and my work desktops for all of the connections I definitely don't want NM touching, as something between preparation and a precaution. I've also told NetworkManager not to touch resolv.conf, because I'll manage all of that myself by hand.
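
As far as I understand it, both of these are done with drop-in files under /etc/NetworkManager/conf.d; a minimal sketch (with made-up interface names standing in for the ones networkd handles) looks something like:

# /etc/NetworkManager/conf.d/99-local.conf  (the file name is arbitrary)
[keyfile]
unmanaged-devices=interface-name:eno1;interface-name:wg0

[main]
# leave resolv.conf entirely alone; I manage it by hand
dns=none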

(In theory I could try to make systemd-resolved work by manually or semi-automatically configuring DNS servers, domains, and so on in it. In practice it has some mandatory behaviors I don't want, and I have a setup that works fine as long as I have some VPN connection to work. If I'm completely VPN-less, it's easy to fix. I could even script this, since unbound-control can add and remove forward zones on the fly.)
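
As an illustration of that scripting, the Unbound side of it could be as small as something like this, with a made-up zone name and resolver IPs:

# When a VPN connection to work comes up, forward the internal zone to
# the internal resolvers over the tunnel:
unbound-control forward_add sandbox.example.org. 192.0.2.10 192.0.2.11
# When I'm completely VPN-less, drop the forward zone again:
unbound-control forward_remove sandbox.example.org.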

NetworkManagerWhyConsidering written at 22:42:01

2022-05-10

Seeing the speed of your USB devices under Linux the easy way

As we all sadly know, USB comes in a bewildering variety of connectors, standards, and especially speeds that range from very nice to sadly pathetic. USB 1.0 and 1.1 are 12 Mbps, USB 2.0 is 480 Mbps, USB 3.0 (aka 'USB 3.1 gen 1') is 5 Gbps, USB 3.1 (aka 'USB 3.1 gen 2') is 10 Gbps, USB 3.2 is 20 Gbps, and 'USB4' is 40 Gbps. Or at least those are the specification rates; my cynical side suspects that, say, not all devices labeled as 'USB 3.0' actually do 5 Gbps. In addition, in order to get that speed you need to plug your high speed device into a USB port and chain (if you're using a hub) that supports the speed end to end.

All of this makes it rather interesting to know what actual data rates you're getting (or at least that have been negotiated) with your USB devices. It turns out that there is an easy way to do this under Linux, in the form of 'lsusb -tv'. Normally, lsusb is either not informative enough or too informative, but -tv (tree view with one level of verbosity) tells you just enough to decode things:

; lsusb -tv
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
    |__ Port 2: Dev 2, If 0, Class=Vendor Specific Class, Driver=ax88179_178a, 5000M
        ID 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit Ethernet
[...]

(usb-devices may in some cases give you additional useful information, since it seems to give slightly different identification for devices.)

That output is from my work laptop with a USB Gigabit Ethernet adapter plugged in. As we would hope, this is at USB 3.0 data rates, although that might be because of the hub or the device itself.

Looking at actual reported USB speeds on my home desktop and my work desktop leaves me somewhat puzzled. Both of them theoretically have a number of USB 3.0 ports and certainly they have blue USB-A ports, which normally indicates USB 3.0, and I have most things plugged into them. However, 'lsusb -tv' says that everything I have connected is on hubs at 480M. This is where my lack of knowledge of how USB behaves in practice is showing, because I'm not sure I have anything connected that would ask for more than USB 2.0 speeds.

Some experimentation with my laptop's Ethernet adapter and some reading has given me the answer, which is that USB 3.0 hosts (including hubs, I think) have both a USB 3.0 controller and a separate USB 2.0 controller (see the end of the System Design section of the Wikipedia USB article, via). Since I don't have any USB 3.0 devices connected, all of my USB 3.0 ports (and the USB 3.0 hub) are only using their USB 2.0 side.

(I don't know if there's any way in Linux to figure out which USB devices are the paired 3.0 and 2.0 sides of one actual thing.)

PS: As part of looking into this I discovered (and verified) that my webcams are USB 2.0. I guess it's fast enough for 1080p.

PPS: Another source of speed information is in sysfs, as /sys/bus/usb/devices/usb*/speed, but I don't know how you match up what device is what. I'm flailing around in the dark, which is why I was happy to stumble over 'lsusb -tv'.
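
The closest I've come to matching things up is a little shell loop that prints each sysfs device's bus and device numbers (which line up with 'lsusb' output), its negotiated speed, and its product string; this is just a sketch:

for d in /sys/bus/usb/devices/*/; do
    [ -e "$d/speed" ] || continue    # skip interface entries, which have no speed
    printf 'bus %03d dev %03d: %6s Mb/s  %s\n' \
        "$(cat "$d/busnum")" "$(cat "$d/devnum")" "$(cat "$d/speed")" \
        "$(cat "$d/product" 2>/dev/null)"
done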

Sidebar: An example of a 480M hub and a lower speed connection

Taken from my work machine, this is the keyboard (directly connected to a back panel USB port).

; lsusb -tv
[...]
/:  Bus 05.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 480M
    ID 1d6b:0002 Linux Foundation 2.0 root hub
    |__ Port 3: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
        ID 04d9:0169 Holtek Semiconductor, Inc. 
    |__ Port 3: Dev 2, If 1, Class=Human Interface Device, Driver=usbhid, 1.5M
        ID 04d9:0169 Holtek Semiconductor, Inc. 
[...]

I don't know why the keyboard has two interfaces, but presumably there is a USB reason. I have a Bluetooth dongle with four, although only three of them list 'Driver=btusb' and the fourth has a blank driver.

SeeingUSBDeviceSpeeds written at 23:35:05

2022-05-09

Snaps don't seem compatible with NFS home directories in Ubuntu 22.04

Over on Twitter, I said something about Firefox in 22.04:

So Ubuntu 22.04 (beta) makes Firefox into a Snap. I guess our users won't be using it on our machines any more, since Snaps don't work at all in our environment. (Or they didn't in 20.04; they require non-NFS home directories in /home.)

After I made that Tweet I did some experimentation on 22.04 and so I can now say positively that Canonical Snaps don't work in our environment, where we use NFS v3 mounted home directories from our ZFS fileservers. Well, probably and mostly. Snaps definitely don't work out of the box on Ubuntu 22.04, and I'm not convinced it's possible to fix all of their problems with manual work.

One major problem is that the AppArmor profiles used to "restrict" snaps make specific assumptions about the system environment in at least two areas. The first area and the easiest to modify is where people's home directories are. The standard Ubuntu AppArmor configuration more or less assumes everyone's home directory is directly in /home as /home/<user>. However, this can (theoretically) be tuned through modifying /etc/apparmor.d/tunables/home.d/site.local; for us, I think we would do something like:

@{HOMEDIRS}+=/h/*/

(Our NFS mounted home directories all have names like /h/281, and then users have home directories like /h/281/cks.)

Unfortunately this is only the start of the problems. As covered in places like Bug #1662552 "snaps don't work with NFS home", the AppArmor profiles generally don't allow 'networking', and this affects kernel networking for NFS. This has apparently been somewhat worked around in modern versions of Snaps, according to sources like Cannot open path of the current working directory: Permission denied bis.

(The workaround seems to be to automatically give all Snaps network access. This obviously reduces the security protections for some Snaps, although not Firefox since Firefox already needs to talk to the network.)

However, this leaves another major issue, also discussed in the Cannot open path report; various Snap internal operations run as root and require access to user home directories, which requires your NFS server to allow root on NFS clients unrestricted access (for both reads and writes). This is a non-starter in our environment, where we're quite guarded about allowing root casual, unrestricted access to the entire NFS filesystems.

(We are of course perfectly aware that someone with root access on a NFS client can read and possibly write any file on an NFS mount, because they can assume the appropriate UID. However, this is vastly different from a situation where any accident as root allows an errant script, program, or command to wipe out vast amounts of data on our fileservers. Unfortunately, per exports(5) you can't give a client un-squashed root only for reads; if you allow read/write access to the filesystem and don't squash root, root gets write as well as read.)
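
To make the exports(5) limitation concrete, these are roughly the choices for a single filesystem and client (the path is one of ours, the client name is made up), and there's no way to combine read-write for users with read-only for un-squashed root:

/h/281  nfsclient(rw,root_squash)      # users read/write, root squashed (what we do)
/h/281  nfsclient(rw,no_root_squash)   # root gets unrestricted read and write
/h/281  nfsclient(ro,no_root_squash)   # root un-squashed, but now no one can write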

PS: It's possible that there are also issues because our NFS clients normally have an /etc/passwd that lists people's home directories as '/u/<user>', where /u is a directory full of symbolic links to the real locations. We're not interested in changing this aspect of our systems either.

SnapsVersusNFSHomedirs written at 23:49:14

2022-05-04

The temptation of smartctl's JSON output format given NVMe SSDs

Over on the Fediverse, I said something:

I have a real temptation to combine smartctl's (new) JSON output with jq to generate Prometheus metrics from SMART data (instead of my current pile of awk of non-JSON smartctl output). On the other hand, using jq for this feels like a Turing tarpit; it feels like the right answer is having a Python/etc program ingest the JSON and do all the reformatting and gathering in a real programming language that I'll be able to read and follow in a few months.

We believe in putting data from SMART into our metrics system so that we have it captured and can do various things with it, now and in the future. Today, this is done by processing the normal output of 'smartctl -i' and 'smartctl -A' for our SATA and SAS drives using a mix of awk and other Unix programs in a shell script. The fly in the ointment on a few machines today (and more machines in the future) is NVMe SSDs, because NVMe SSDs have health information but not SMART attributes, so while 'smartctl -A' works on them it produces output in a completely different format that my script has no idea how to deal with.

There are three attractions of using smartctl's new-ish JSON output format with some post-processing step. The first is that I can run smartctl only once for each drive, because the JSON output format makes it straightforward to handle the output of 'smartctl -iA' all at once. The second is that I could probably condense a lot of the extraction of various fields and the chopping up of various bits into a single program that runs once, instead of a bunch of Unix programs that run repeatedly. The third and biggest is that I could unify processing of SMART attributes and NVMe health information and handle it all in the same processing of the JSON output. The processing would simply look for SMART attributes and NVMe health information in the JSON and output whatever it found, rather than having to tell the two apart from how the input was formatted.

(In other words, the JSON output comes conveniently pre-labeled.)

Using smartctl's JSON output format doesn't solve all of the problems presented by NVMe SSDs, because the health information presented by NVMe SSDs doesn't map exactly on to SMART attributes. If I wanted to be honest, I would generate different Prometheus metrics for them that didn't pretend to have, for example, a SMART attribute ID number. But if I did that, I would make it harder to do metrics queries like 'show us the most heavily written to drives' across all of our drives regardless of their type.

(Or, more likely, 'show us all of the drive temperatures', since how things like power-on hours and write volume are represented in SMART varies a lot between different drives.)

The usual tool for processing JSON in shell scripts is jq. In theory jq might be able to do all of the selection and processing of smartctl's JSON output that's needed for this. In practice, I suspect I will be much happier doing this in Python, because the logic of what is extracted and reported (and how it's mangled) will be much clearer in a programming language than in jq's terse filtering and formatting mini-language.
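
To give a feel for the jq version, the extraction would look roughly like this; the field names are from my reading of smartctl's JSON output and may need adjusting for your smartctl version and devices:

# SATA/SAS drives: one line per SMART attribute (ID, name, raw value).
smartctl -j -iA /dev/sda |
  jq -r '.ata_smart_attributes.table[]? | "\(.id) \(.name) \(.raw.value)"'

# NVMe drives: the health log is a flat object of already-labeled name/value pairs.
smartctl -j -iA /dev/nvme0 |
  jq -r '.nvme_smart_health_information_log | to_entries[] | "\(.key) \(.value)"'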

SmartctlJSONTemptation written at 21:50:27

2022-05-01

Using Linux's libvirt for my virtualization needs has been okay

About two weeks ago I reached a tipping point of unhappiness with VMWare Workstation and wound up deciding to try switching over to the current obvious alternative, libvirt, mostly through virt-manager, virt-viewer, and virsh. The summary from two weeks of usage is that libvirt has worked out okay for me; it's more or less a perfectly workable virtualization environment for Linux guest VMs, with an adequate GUI experience alongside a command line experience that I'm coming to appreciate more for basic operations like starting and stopping virtual machines and reverting to snapshots. I'm happy that I've made the switch over to libvirt and I sort of wish I'd done it earlier.

(Libvirt may work great for non-Linux VMs, but I haven't tried it with any. I haven't needed to recreate my OpenBSD VM install yet (although that may happen when OpenBSD 7.1 is released) and I don't currently have any need for a Microsoft Windows VM.)

The libvirt experience is definitely not as polished and effortless as the VMWare Workstation one. I had to do a number of setup steps, including adding an extra network port to my work machine. But once I had gone through the setup effort everything has worked fine, although there are a number of paper cuts that I may write up some other day. On the positive side, starting up virtual machines no longer affects my desktop sessions in any particularly unusual or visible way and virtual machines themselves do seem to perform well.

On the whole libvirt has given me the feeling of the typical old school Linux experience when compared to other operating systems, which is to say that you have to do more work to get everything working but once you do, you have a system that's more understandable. VMWare Workstation's networking was convenient but a magical black box; libvirt's networking is more annoying and more work, but I understand and can manipulate pretty much all of the bits.

(One endorsement is that I liked the whole experience enough to bring up libvirt on my home desktop so that I had a place there for VMs for various purposes. I don't strictly need this right now, but libvirt makes this easy enough that I could just go do it. So now I have a stock Fedora 35 VM at home, too.)

Although I'm not sure I fully understand what I'm doing, I do seem to have worked out how to deal with my HiDPI irritation that guest consoles are shown un-scaled, which makes them tiny and hard to read (especially in text mode, such as Ubuntu server installers). I've come to rather appreciate this improvement over VMWare Workstation; apparently the tiny guest consoles in it were more of a paper-cut than I appreciated at the time.

Another thing I've become fond of is how well the libvirt tools and programs work over SSH from my home desktop to my work desktop. This has been mostly transparent and has often performed clearly better than my previous approach of using remote X via SSH's X forwarding. The previous approach wasn't a bad experience, but the new way is clearly better and more responsive. One libvirt thing that helps this is that it's very easy and obvious to start virtual machines without their console being attached to anything; I just 'virsh start <whatever>' and then 'ssh <whatever>' and I'm off to the races. This was always theoretically possible with VMWare, but I almost never did it in practice for various reasons.

(This is an extended version of some things I said on Twitter and on the Fediverse.)

LibvirtHasBeenOkay written at 22:09:23

2022-04-28

The practical problem with /etc/pam.d on long-lived Linux systems

Yesterday I wrote about how a 20-second program startup delay turned out to be because of a stray line in an /etc/pam.d file. Unfortunately, things in /etc/pam.d are especially prone to this problem on systems that have been around for a long time and have been upgraded from version to version of their Linux. The short version of why is that there are too many things modifying /etc/pam.d files.

The ideal situation for /etc/pam.d files would be if the only thing that changed them was the packages that supplied them. Then they would be like program binaries; they would quietly and transparently get updated to new versions as part of package updates, including distribution version upgrades. A less ideal situation would be if the only two things changing /etc/pam.d files were the packages that supplied them and the system administrator. Pretty much every Linux package manager has features that are designed to deal with this, things like RPM's .rpmnew files and Debian's process for asking you what you want to do about the situation.

Unfortunately, /etc/pam.d files are historically modified by other programs as well under various circumstances; as they install themselves, when the system authentication configuration changes, and so on. Automatic modifications by programs of package-supplied files is generally a kiss of death for keeping them up to date. Even if the system administrator doesn't also make their own changes, package managers usually provide essentially no support for sorting out the situation, not even so much as a three way diff between the 'base' version (the old package version), the current version, and the package's new version.

(One reason for this omission is that it would require keeping around the original packaged version of all such files so that you can create the diff at all.)
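
If you want to approximate the missing three way diff by hand, it can be done with rpm2cpio, cpio, and diff3, provided you can still get the old version of the RPM from somewhere (the DNF cache, an old release's repository, a mirror). A sketch, with placeholder package names and versions:

rpm -qf /etc/pam.d/login        # find out which package owns the file
mkdir old new
rpm2cpio somepkg-1.0-1.fc35.x86_64.rpm | (cd old && cpio -idm ./etc/pam.d/login)
rpm2cpio somepkg-1.1-1.fc36.x86_64.rpm | (cd new && cpio -idm ./etc/pam.d/login)
diff3 /etc/pam.d/login old/etc/pam.d/login new/etc/pam.d/login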

Adding to the fun is that /etc/pam.d files are critical to your system working. If you break one, you may not be able to log in or use sudo (or su), or core infrastructure may stop working. This is by design (PAM fails closed, denying access if there's things wrong), but it makes any changes to a working environment an unusually high stakes activity. Add to this that PAM is tangled in general and it's no surprise that busy system administrators mostly don't touch their PAM stacks unless they have to. If the package update process didn't automatically handle things and it still works, don't touch anything.

It's possible that other programs modifying /etc/pam.d files has now gone out of style. I certainly hope it has, but I haven't looked.

(This problem of multiple programs automatically changing configuration files is one reason a great many configuration systems have moved to having a directory of configuration snippets. It's far easier to deal with that and to keep everything straight. /etc/pam.d files have yet to make that shift and to be fair, they present some unusual problems for it since you generally want very fine grained control over PAM ordering.)

PAMFilesLongtermProblem written at 23:10:47

2022-04-27

The root cause of my xdg-desktop-portal problems on a Fedora machine

For some time I've had an odd problem on my work Fedora desktop: after I logged in, the first time I ran one of a number of GUI programs it would take more than 20 seconds to start, with no visible reason why (the affected programs included Firefox and Liferea). After that 20 seconds, everything was fine and everything started or re-started as fast as it should. This didn't happen on my home Fedora desktop, which has an essentially identical configuration. After I realized that something was wrong and noticed the pattern, I watched journalctl logs during a first program startup and soon found some tell-tale log entries:

10:08:12 dbus-daemon[1220146]: [session uid=19 pid=1220144] Activating service name='org.freedesktop.portal.Desktop' requested by ':1.10' (uid=19 pid=1221915 comm="/u/cks/lib/i386-linux/liferea-git/bin/liferea " label="kernel")
10:08:12 dbus-daemon[1220146]: [session uid=19 pid=1220144] Activating service name='org.freedesktop.portal.Documents' requested by ':1.11' (uid=19 pid=1221921 comm="/usr/libexec/xdg-desktop-portal " label="kernel")
[...]
10:08:12 dbus-daemon[1220146]: [session uid=19 pid=1220144] Activating service name='org.freedesktop.secrets' requested by ':1.11' (uid=19 pid=1221921 comm="/usr/libexec/xdg-desktop-portal " label="kernel")
10:08:12 gnome-keyring-daemon[1220065]: The Secret Service was already initialized
[ ... 20+ second delay ...]
10:08:38 xdg-desktop-por[1221921]: Failed to create secret proxy: Error calling StartServiceByName for org.freedesktop.secrets: Timeout was reached
10:08:38 xdg-desktop-por[1221921]: No skeleton to export
10:08:38 dbus-daemon[1220146]: [session uid=19 pid=1220144] Successfully activated service 'org.freedesktop.portal.Desktop'

I wasn't happy. My first idea was to remove xdg-desktop-portal entirely, but you can't do that these days. At the time (a couple of weeks ago) I dealt with the problem by chmod'ing all of the xdg-desktop-portal programs to 000, which made the initial activation attempt fail. A few days ago I had to deal with this again (because of a Fedora upgrade to x-d-p that reversed my chmods) and this time I managed to track it down.

(I don't use Flatpaks and other things that xdg-desktop-portal is relevant for, and in any case it doesn't appear to actually work for my odd desktop. This is a known issue, cf, and unfortunately the workarounds don't prevent xdg-desktop-portal from being started.)

In the end, the problem was being caused by a stray gnome-keyring-daemon process that for some reason wasn't cooperating with x-d-p (the g-k-d message in the logs above comes from such a process). This stray process came about because on my work desktop, my /etc/pam.d/login contained a line to start g-k-d on login, through the pam_gnome_keyring PAM module:

session  optional  pam_gnome_keyring.so auto_start

This line wasn't present in my home machine's /etc/pam.d/login, which probably means it dates from before Fedora 15 (timestamps unfortunately provide no clue, as VMWare Workstation appears to touch it when it's installed or upgraded). Commenting out this line (which isn't present on a modern Fedora) fixed my stalling problem, and probably made gnome-keyring-daemon work.

(I don't actually use gnome-keyring-daemon for anything as far as I know. Certainly I don't use it for SSH keys, although I believe it now does actually support ED25519 keys.)

You might at this point reasonably ask why /etc/pam.d/login is relevant at all, instead of /etc/pam.d/xdm or /etc/pam.d/gdm or the like (which do still have an invocation of this module), since it's the PAM configuration for text logins on the console (or on serial lines). The answer is that I don't use graphical login programs; instead I log in on the text console and then start X from the command line. This also means that my X desktop's D-Bus daemon is not established immediately; instead it's started as part of starting X. It's possible that this caused extra communication difficulties between the running gnome-keyring-daemon and xdg-desktop-portal.

(Clearly x-d-p got g-k-d's attention somehow, but it may not have been through D-Bus. G-k-d does have some sort of control socket, normally found in /run/user/<uid>/keyring/control.)

XdgDesktopPortalSlownessWhy written at 23:43:09

2022-04-26

Why your physical servers running Ubuntu 22.04 LTS can boot very slowly

If you install Ubuntu 22.04's server edition onto a server that has one or more network ports that you aren't using, it's quite likely that you'll get to see an unexpected two minute pause during system boot. In some configurations this is a total stall, with neither local nor remote logins possible. This behavior didn't happen in 20.04, although some of the underlying issues were there, and unfortunately it's rather hard to automatically work around.

The direct source of the stall is our old friend systemd-networkd-wait-online, which in 22.04 waits 120 seconds (two minutes) until all of your network links are "configured". More specifically, it waits until all links that systemd-networkd knows about are configured. Unfortunately, interfaces that are listed as having DHCP enabled on them only satisfy s-n-w-o if the system actually gets a DHCP address from the network, which is where the rest of the problem starts coming in.

(In 20.04, I don't believe this happened if you had some stray interfaces still set to DHCP. You could pick up these interfaces relatively easily.)

The Ubuntu 22.04 server installer, subiquity, automatically performs DHCP on all of the interfaces it finds on your server. Regardless of whether or not it gets any DHCP answers, or even if the interface is disconnected, it carries over this 'try DHCP' state to the installed system unless you manually change it, interface by interface. In theory subiquity will let you turn this attempted DHCP off. In practice, this doesn't work in 22.04 (although it did in 20.04). With every interface set to do DHCP in the installed system, any unused and disconnected interfaces will cause the systemd-networkd-wait-online two minute timeout, as it waits for DHCP answers on them all.

This is a significant issue for people with physical servers because it's fairly routine for physical servers to have extra interfaces. Modern Dell 1U servers come with at least two, for example, and most of our servers are only using one. Do you have a server with 1G onboard but you need 10G so you put in an add-on card? Now you have two unused 1G ports that are open to this issue.

(Of course in theory you can avoid this issue by carefully going through all unused interfaces on every server install and doing the several steps to explicitly disable them. Since this requires fallible humans to not ever fail, you can guess what I think of it in practice.)

The somewhat obvious apparent workaround is to run a sed over your system's /etc/netplan/00-installer-config.yaml to turn 'dhcp4: true' into 'dhcp4: false'. Unfortunately this does not actually work. At boot time, any interface mentioned in your netplan configuration will become an interface known to systemd-networkd, and then systemd-networkd-wait-online will wind up waiting for it, even if there is no way it can get a configuration because it's not doing DHCP and has no IP address set.

Instead, you must either delete all inactive interfaces from your netplan configuration or, equivalently, write a completely new version of your netplan configuration that only mentions the active interfaces. Since as far as I know there are no command line tools to manipulate netplan files to delete interfaces and so on, the second approach may be easier to automate in a script. Remember that you're going to have to embed this script into the install image and arrange to run it at install time, unless you enjoy waiting two extra minutes for the system to boot the first time.
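
For example, a rewritten /etc/netplan/00-installer-config.yaml for a server with a single statically configured interface might look like this (the device name and addresses are placeholders); anything not mentioned here is simply never handed to systemd-networkd and so never waited for:

network:
  version: 2
  ethernets:
    eno1:
      addresses: [192.0.2.10/24]
      routes:
        - to: default
          via: 192.0.2.1
      nameservers:
        addresses: [192.0.2.53]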

This issue is probably much less acute for virtual servers, because my impression is that virtual servers are usually only configured with the network interfaces that they're actually going to use. Physical servers are not so convenient.

(Even if the network interfaces can be disabled in the BIOS, that requires a trip through the BIOS. And makes life harder on people who are reusing the physical hardware later.)

As far as I can tell from a number of attempts, there is also no way to fix this by modifying systemd-networkd-wait-online command line parameters. If I so much as touch these, things seem to explode, generally with s-n-w-o finishing much too fast, before the network is actually configured. Sometimes fiddling seems to trigger mysterious failures and timeouts starting other programs. Unfortunately s-n-w-o has no verbosity or debugging options; it's a silent black box, with no way of extracting what it's decided to look at, what it thinks the state is at various points, and so on.

(This elaborates on some tweets of mine.)

PS: Even in a 22.04 install without this issue, it can take over ten seconds for systemd-networkd-wait-online to decide that the network is actually online, for a configuration with a single, statically configured (virtual) network. I really don't know what it's doing there.

Ubuntu2204SlowServerBoot written at 22:59:50
