Wandering Thoughts archives

2020-12-12

My views on the suitability of CentOS Stream

In a comment on my most recent entry on CentOS Stream, Ben Cotton said:

I honestly believe that CentOS Stream will be suitable for the majority of CentOS Linux users, and a huge improvement for some. [...]

At one level, I agree with Ben Cotton on this. There's every indication that CentOS Stream won't be worse than plain CentOS 8 as far as bugs and security issues go; while it will now be getting (some) package versions before RHEL does instead of afterward, Red Hat has also apparently drastically increased its pre-release testing of packages. The move from CentOS 8 to CentOS Stream does cost you an extra five years of package updates, but I also feel that you shouldn't run ancient Linux distribution versions, so you probably shouldn't be running most CentOS installs for longer than five years anyway.

(I measure these five years from the release of RHEL 8, since what matters is increasingly ancient software versions. And since RHEL freezes package versions well in advance of the actual release, that means that by the end of five years after release the packages are often six or more years out of date. A lot changes in six years.)

So at that level, if you're already running CentOS 8 as a general OS, I believe that CentOS Stream will be a perfectly fine replacement for it and I don't see a strong reason to, say, migrate your existing systems to Ubuntu LTS. There are good indications that CentOS Stream will not create more bugs and instability, while migrating to Ubuntu LTS is both a bunch of work and won't get you a much longer support period (20.04 LTS support will run out in early 2025, while I believe that CentOS Stream for 8 support will end in late 2024).

Unfortunately, that's only at one level, the level that ignores the risks now visible for the future. The blunt fact of the matter is that the IBM-ized Red Hat has now shown us that they are willing to drastically change the support period for an existing CentOS product with basically no notice. We have only Red Hat's word that CentOS Stream for 8 support will continue through the end of full maintenance for RHEL 8 in late 2024, or actually we don't even have that; Red Hat has made no promises not to change things around again, for example when RHEL 9 is released. Red Hat has made it clear that they decide how this goes and that what the CentOS board feels doesn't really matter; the board can at best mitigate the damage (as they apparently did this time around, including getting Red Hat to allow CentOS Stream for 8 to continue longer than Red Hat wanted).

(Red Hat has also made it relatively clear that their only interest in CentOS today is as a way to give people a free preview of what will be in the current RHEL in the future. This neither requires nor rewards supporting and funding CentOS Stream for RHEL 8 after RHEL 9 comes out. It also implicitly encourages things that get in the way of using CentOS Stream as a substitute for RHEL.)

Any commercial company can change direction at the drop of a hat, so Canonical (or SUSE) could also decide to make similar abrupt changes with their Linux distributions (yes, Ubuntu is Canonical's thing, not a community thing, but that's another entry). However, Canonical has not done this so far (instead they've delivered a very consistent experience for over a decade), while Red Hat just has. There's a bigger difference in practice between 'never' and 'once' than there is between 'once' and 'several'.

If I had a CentOS based environment that I had to plan the next iteration of (for example CentOS 7, where I was considering what comes next), I'm not sure I would build the next iteration on CentOS Stream. It might well be time to start considering alternatives, ones with a longer record of stability in what has been promised and delivered to people. Certainly at this point Ubuntu LTS has a more than decade-long record of basically running like clockwork; there are LTS releases every other April, and they get supported for five years from release. There are real limits on the 'support' you get (see also), but at least you know what you're getting and it seems very likely that there won't be abrupt changes in the future.

(Debian doesn't have Canonical's clockwork precision but may give you more or less the same support period and release frequency (but see also). I don't know enough about SUSE to say anything there, but it does use RPMs instead of .debs, and I like RPMs better. The Debian community is probably the most stable and predictable one; Debian is extremely unlikely to change its fundamental nature after all this time.)

CentOSStreamSuitability written at 23:50:23

2020-12-11

CentOS's switch to CentOS Stream has created a lot of confusion

After the news broke of CentOS's major change in what it is, a number of sysadmins here at the university have been discussing the whole issue. One of the things that has become completely clear to me during these discussions is that the limited ways that this shift has been communicated have created a great deal of confusion, leaving sysadmins with a bunch of reasonable questions around the switch and no clear answers (cf).

(It doesn't help that the current CentOS Stream FAQ is clearly out of date in light of this announcement and contains some contradictory information.)

This confusion matters, because it's affecting people's decisions and is hampering any efforts to leave people feeling good (or at least 'not too unhappy') about this change. If Red Hat and CentOS care about this, they need to fix the confusion, and soon. Their current information is not up to the job and is leaving people lost, unhappy, and increasingly likely to move to something else, even if they might be fine with CentOS Stream if they fully understood it. The longer the confusion goes unaddressed, the more bridges are being burned.

(The limited communication and information also creates a certain sort of impression about how much Red Hat, at least, cares about CentOS users and all of this.)

The points of confusion that I've seen (and had) include what the relationship between updates to CentOS Stream and updates to RHEL will be, how well tested updates in Stream will be, how security issues will be handled (with more clarity and detail than the current FAQ), what happens when a new RHEL release comes out, and whether old versions of packages will be available in Stream so you can revert updates or synchronize systems to old packages. It's possible that some of these are obvious to people in the CentOS project who work with Stream, but they're not obvious to all of the sysadmins who are suddenly being exposed to this. There are probably others; you could probably build up quite a collection by quietly listening to various discussions of this and absorbing the points of confusion and incorrect ideas that people have been left with.

CentOSStreamConfusion written at 00:57:17

2020-12-08

CentOS's switch to Stream is a major change in what CentOS is

The news of the time interval is that CentOS is making a major change in what it is, one that makes it significantly less useful to many people, although their blog entry CentOS Project shifts focus to CentOS Stream will not tell you that. I had some reactions on Twitter and this is an expanded explanation of my views.

What CentOS has been up until this change is people taking the source code for Red Hat Enterprise Linux that Red Hat was releasing (that was happening due to both obligation and cultural factors), rebuilding it to essentially identical binaries (except for trademarks they were obliged to remove and, these days, digital signatures they could not duplicate), and distributing the binaries, installer, and so on for free. When RHEL released updated packages for a RHEL version, CentOS rebuilt them and distributed them, so you could ride along with RHEL updates for as long as RHEL was doing them at all. If you did not have the money to pay for RHEL, this appealed to an overlapping set of two sorts of people, those who wanted to run machines with an extremely long package update period (even if they became zombies) and those who needed to run (commercial) software that worked best or only on RHEL.

(We are both sorts of people, as covered in an older entry about why we have CentOS machines.)

The switch to CentOS Stream makes two major changes to what CentOS is from CentOS 8 onward (CentOS 7 is currently unaffected). First, it shortens the package update period to no more than five years, because package updates for the CentOS Stream version of RHEL <x> stop at the end of RHEL's five-year full support period. In practice CentOS Stream for <x> is not likely to be immediately available when RHEL <x> is launched, and you won't install it immediately even if it were, so you will get less than five years of package updates before you must switch or operate machines without someone providing security updates for you.

(It's unclear if there will be a way to upgrade from one version of CentOS Stream to another, or if the answer will be the traditional RHEL one of 'reinstall your machines from scratch'.)

Second, CentOS is no longer what RHEL is except for those required trademark changes. Instead it “tracks just ahead of a current RHEL release”, to quote the blog entry (emphasis theirs), which currently appears to mean that it will contain versions of packages that are not released to RHEL yet. The CentOS distro FAQ is explicit that this will sometimes mean that CentOS Stream has a different ABI and even API than RHEL does, and it's unclear how stable and bug-free those packages will be. If CentOS Stream is intended to be an in-testing preview of RHEL updates, they will probably be less stable and bug-free than RHEL is, and there will be more risk in using CentOS Stream than in using RHEL. But perhaps this is too pessimistic a view. Right now we don't know and the CentOS project is pretty vague and is not making any promises. On the one hand they explicitly say that CentOS Stream will be “serving as the upstream (development) branch of Red Hat Enterprise Linux” (in the blog post); on the other hand they also say that “we expect CentOS Stream to have fewer bugs and more runtime features than RHEL” (in the FAQ in Q5).

(Also, it seems very unlikely that commercial software vendors will keep conflating the two the way they currently do when they say 'supported on RHEL/CentOS <x>', although I would expect the software to work on CentOS.)

All of this significantly reduces the advantages of using CentOS over something like Ubuntu LTS and increases the general risks of using CentOS. For a start, CentOS no longer gives us a longer support period than Ubuntu LTS; both are at most five years. Since using additional Linux distributions has a cost all by itself, and since CentOS no longer appears to have significant advantages over Ubuntu LTS for us, I expect that we will be migrating our CentOS 7 machines to Ubuntu 22.04 LTS in a couple of years, thereby reducing the number of different Linux distributions we have to operate.

There is a bit of me that regrets this. CentOS is at the end of a long line of my and our history with Red Hat, and it's sad to see it go. But I guess I now have an answer to my two year old uncertainty over CentOS's future (and I no longer need to write the entry about what things CentOS was our best option for).

PS: It's possible that another distribution will arise that does what CentOS did until now. But I don't know if there is enough interest there any more, or if all of the organizations (and people) who might care enough have moved on.

PPS: Oracle is trying to attract CentOS users, but like plenty of other people, I have no trust in Oracle. We are extremely unlikely to use Oracle Linux instead of Ubuntu LTS, even if we would get (much) longer package updates if all went well; the risks just aren't worth it.

CentOSStreamBigChanges written at 18:54:43

2020-12-06

Linux's hostname -s switch is now safe for many people, but the situation is messy

Slightly over a decade ago I wrote an entry about our discovery that 'hostname -s' sometimes did DNS lookups, depending on the exact version involved. We discovered this the hard way, when our DNS lookups failed at one point and suddenly 'hostname -s' itself started failing unexpectedly. We recently had a reason to use 'hostname -s' again, which caused me to remember this old issue and check the current situation. The good news is that common versions of hostname now don't do DNS lookups.

Well, probably, because it turns out that the Linux situation with hostname is much more complicated and tangled than I had any idea of before I started looking. It appears that there are no less than four sources for hostname, and which version you wind up using can depend on your Linux distribution. On top of that, the source you're probably using is distributed in an unusual way that makes it hard for me to say exactly when its 'hostname -s' became safe. So let's start with the basics.

If you check with 'rpm -qf /usr/bin/hostname' or 'dpkg -S /usr/bin/hostname' on appropriate systems (Fedora, CentOS, Debian, and Ubuntu), you will probably find that the version of hostname you're using comes from a 'hostname' package. This package has no upstream as such, and no source repository; the canonical source seems to be the Debian package. Old versions of its source can be found in its part of debsources. This version has handled 'hostname -s' correctly since somewhere between 2.95 (which doesn't) and 3.04 (which does).

(Based on the information shown in its part of debsources, hostname 2.95 was part of Debian 5.0 (Lenny), released in 2009, and hostname 3.04 was part of Debian 6.0 (Squeeze), released in 2011.)

Arch Linux seems to use a hostname that comes from the GNU inetutils project. The relevant code currently appears to do a DNS lookup if you use '-s', but it will proceed if the DNS lookup fails instead of erroring out (the way the hostname of a decade ago behaved). This does mean that under some conditions your 'hostname -s' command may stall for some time while its DNS lookup times out, instead of running essentially instantly.

The Linux manpages project has two manpages online for hostname (1, 2). The default one is from net-tools, and the other one is from GNU coreutils. The GNU Coreutils version has no '-s' option (or other commonly supported ones), and as a result I would be surprised if many Linuxes used it. The net-tools version is apparently the original upstream of the plain hostname package version. Based on the Fedora 11 bug report about this, back a decade ago Fedora was using the net-tools version of hostname (I don't know about Debian). The current net-tools version of hostname.c now bypasses DNS lookups when used with '-s', a change that was made in 2015.

(While Fedora still packages net-tools, their package only has a few of its binaries. And apparently net-tools as a whole may be basically unmaintained; the last commits in the repository seem to be from 2018, and the last time it saw particularly active development was 2016.)
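
If you want to check the behavior of whatever hostname you actually have, one quick way is to watch for network system calls while it runs. This is just my own quick check and assumes you have strace installed:

strace -f -e trace=network hostname -s

A version that does DNS lookups will show socket(), connect(), and sendto()/recvfrom() activity aimed at your resolver; a safe version should show essentially no network traffic at all.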

HostnameSwitchFine written at 00:44:36

2020-11-26

The better way to make an Ubuntu 20.04 ISO that will boot on UEFI systems

Yesterday I wrote about how I made a 20.04 ISO that booted on UEFI systems. It was a messy process with some peculiar things that I didn't understand and places where I had to deviate from Debian's excellent documentation on Repacking a Debian ISO. In response to my entry, Thomas Schmitt (the author of xorriso) got in touch with me and very generously helped me figure out what was really going on. The short version is that I was confused and my problems were due to some underlying issues. So now I have had some learning experiences and I have a better way to do this.

First, I've learned that you don't want to extract ISO images with 7z, however tempting and easy it seems. 7z has at least two issues with ISO images; it will quietly add the El Torito boot images to the extracted tree, in a new subdirectory called '[BOOT]', and it doesn't extract symlinks (and probably doesn't preserve other Rock Ridge attributes either). The Ubuntu 20.04.1 amd64 live server image has some symlinks, although their presence isn't essential.

The two reliable ways I know of to extract the 20.04.1 ISO image are with bsdtar (part of the libarchive-tools package in Ubuntu) and with xorriso itself. Bsdtar is easier to use but you probably don't have it installed, while you need xorriso anyway and might as well use it for this once you know how. So to unpack the ISO into our scratch tree, you want:

xorriso -osirrox on -indev example.iso -extract / SCRATCH-TREE

(See the Debian wiki for something you're going to want to do afterward so that you can modify and later delete the tree. Substitute whatever is the correct ISO name here in place of example.iso.)
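
For completeness, here's my sketch of the bsdtar route (assuming bsdtar from libarchive-tools and a target directory that already exists), along with the permission fix-up that I believe is what the Debian wiki is referring to; the files come out of the ISO read-only, so you need to make the tree writable before you can change or later remove it:

mkdir SCRATCH-TREE
bsdtar -xf example.iso -C SCRATCH-TREE
chmod -R u+w SCRATCH-TREE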

As I discovered due to my conversation with Thomas Schmitt, it can be important to re-extract the tree any time you think something funny is going on. My second issue was that my tree's boot/grub/efi.img had been quietly altered by something in a way that removed its FAT signature and made UEFI systems refuse to recognize it (I suspect some of my experimentation with mkisofs did it, but I don't know for sure).
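
A quick way to check for this particular sort of quiet damage is to ask file what it thinks of the image (this is just my own sanity check, not anything official):

file SCRATCH-TREE/boot/grub/efi.img

On a pristine tree, file should describe it as a DOS/MBR boot sector with a FAT filesystem; if the FAT part is missing from the description, your copy has probably been altered and you should re-extract.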

In a re-extracted tree with a pristine boot/grub/efi.img, the tree's efi.img was valid as an El Torito EFI boot image (and the isolinux.bin is exactly what was used for the original 20.04.1 ISO's El Torito BIOS boot image). So the command to rebuild an ISO that is bootable both as UEFI and BIOS, both as a DVD image and on a USB stick, is:

xorriso -as mkisofs -r \
  -V 'Our Ubuntu 20.04 UEFI enabled' \
  -o cslab_ubuntu_20.04.iso \
  -isohybrid-mbr isohdpfx.bin \
  -J -joliet-long \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -boot-load-size 4 -boot-info-table -no-emul-boot \
  -eltorito-alt-boot -e boot/grub/efi.img -no-emul-boot \
  -isohybrid-gpt-basdat \
  SCRATCH-TREE

(The isohdpfx.bin file is generated following the instructions in the Debian wiki page. This entire command line is pretty much what the Debian wiki says to do.)

If xorriso doesn't complain that some symlinks can't be represented in a Joliet file name tree, you haven't extracted the 20.04.1 ISO image exactly; something has dropped the symlinks that should be there.
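
You can also check the extracted tree for the symlinks directly, before you get as far as running xorriso; a trivial check is:

find SCRATCH-TREE -type l

If this prints nothing, something has already dropped the symlinks.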

If you're modifying the ISO image to provide auto-installer data, you need to change both isolinux/txt.cfg and boot/grub/grub.cfg. The necessary modifications are covered in setting up a 20.04 ISO image to auto-install a server (for isolinux) and then yesterday's entry (for GRUB). You may also want to add various additional files and pieces of data to the ISO, which can be done by dropping them into the unpacked tree.

(It's also apparently possible to update the version of the installer that's in the ISO image, per here, but the make-edge-iso.sh and inject-subiquity-snap.sh scripts it points to in the subiquity repo are what I would call not trivial, and so are more than I want to turn loose on our ISO trees. I've already done enough damage without realizing it in my first attempts. I'll just wait for 20.04.2.)

On the whole this has been a learning experience about not questioning my assumptions and re-checking my work. I have the entire process of preparing the extracted ISO's scratch tree more or less automated, so at any time I could have deleted the existing scratch tree, re-extracted the ISO (even with 7z), and managed to build a working UEFI booting ISO with boot/grub/efi.img. But I just assumed that the tree was fine and hadn't been changed by anything, and I never questioned various oddities until later (including the '[BOOT]' subdirectory, which wasn't named like anything else on the ISO image).

Ubuntu2004ISOWithUEFI-2 written at 23:39:15

Making an Ubuntu 20.04 ISO that will boot on UEFI systems

As part of our overall install process, for years we've used customized Ubuntu server install images (ie, ISOs, often burned on to actual DVDs) that were set up with preseed files for the Debian installer and a few other things we wanted on our servers from the start. These ISOs have been built in the traditional way with mkisofs and so booted with isolinux. This was fine for a long time because pretty much all of our servers used traditional MBR BIOS booting, which is what ISOs use isolinux for. However, for reasons outside the scope of this entry, today we wanted to make our 20.04 ISO image also boot on systems using UEFI boot. This turned out to be more complicated than I expected.

(For basic background on this, see my earlier entry on setting up a 20.04 ISO image to auto-install a server.)

First, as my co-workers had already discovered long ago, Linux ISOs do UEFI booting using GRUB2, not isolinux, which means that you need to customize the grub.cfg file in order to add the special command line parameters to tell the installer about your 20.04 installer data. We provide the installer data in the ISO image, which means that our kernel command line arguments contain a ';'. In GRUB2, I discovered that this must be quoted:

menuentry "..." {
  [...]
  linux /casper/vmlinuz quiet "ds=nocloud;s=/cdrom/cslab/inst/" ---
  [...]
}

(I advise you to modify the title of the menu entries in the ISO's grub.cfg so that you know it's using your modified version. It's a useful reassurance.)

If you don't do this quoting, all the kernel (and the installer) see is a 'ds=nocloud' argument. Your installer data will be ignored (despite being on the ISO image) and you may get confused about what's wrong.
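
One way to see what the kernel actually received is to get a shell in the booted installer environment and look at its command line (this is a general Linux check, not anything specific to the Ubuntu installer):

cat /proc/cmdline

If the quoting worked you should see the full 'ds=nocloud;s=/cdrom/cslab/inst/' argument there; if it didn't, only the 'ds=nocloud' part will be present.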

The way ISOs are made bootable is that they have at least one El Torito boot section (see also the OsDev Wiki). A conventional BIOS bootable ISO has one section; one that can also be booted through UEFI has a second one that is more intricate. You can examine various information about El Torito boot sections with dumpet, which is in the standard Ubuntu repositories.
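
Going from memory, the basic dumpet invocation is something like the following; check its manpage, since I may well have the options slightly wrong:

dumpet -i whatever.iso

This prints a summary of the ISO's El Torito boot catalog, and I believe there's also a '-d' option to dump the boot images out to files.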

In theory I believe mkisofs can be used to add a suitable extra ET boot section. In practice, everyone has switched to building ISO images with xorriso, for good reason. The easiest to follow guide on using xorriso for this is the Debian Wiki page on Repacking a Debian ISO, which not only has plenty of examples but goes the extra distance to explain what the many xorriso arguments mean and do (and why they matter). This is extremely useful since xorriso has a large and complicated manpage and other documentation.

Important update: The details of much of the rest of this entry turn out not to be right, because I had a corrupted ISO tree with altered files. For a better procedure and more details, see The better way to make an Ubuntu 20.04 ISO that will boot on UEFI systems. The broad overview of UEFI requiring a GRUB2 EFI image is accurate, though.

However, Ubuntu has a surprise for us (of course). UEFI bootable Linux ISOs need a GRUB2 EFI image that is embedded into the ISO. Many examples, including the Debian wiki page, get this image from a file in the ISO image called boot/grub/efi.img. The Ubuntu 20.04.1 ISO image has such a file, but it is not actually the correct file to use. If you build an ISO using this efi.img as the El Torito EFI boot image, it will fail on at least some UEFI systems. The file you actually want to use turns out to be '[BOOT]/2-Boot-NoEmul.img' in the ISO image.

(Although the 20.04.1 ISO image's isolinux/isolinux.bin works fine as the El Torito BIOS boot image, it also appears to not be what the original 20.04.1 ISO was built with. The authentic thing seems to be '[BOOT]/1-Boot-NoEmul.img'. I'm just thankful that Ubuntu put both in the ISO image, even if it sort of hid them.)

Update: These '[BOOT]' files aren't in the normal ISO image itself, but are added by 7z (likely from the El Torito boot sections) when it extracts the ISO image into a directory tree for me. The isolinux.bin difference is from a boot info table that contains the block offsets of isolinux.bin in the ISO. The efi.img differences are currently more mysterious.

The resulting xorriso command line I'm using right now is more or less:

xorriso -as mkisofs -r \
  -V 'Our Ubuntu 20.04 UEFI enabled' \
  -o cslab_ubuntu_20.04.iso \
  -isohybrid-mbr isohdpfx.bin \
  -J -joliet-long \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -boot-load-size 4 -boot-info-table -no-emul-boot \
  -eltorito-alt-boot -e '[BOOT]/2-Boot-NoEmul.img' -no-emul-boot \
  -isohybrid-gpt-basdat \
  SCRATCH-DIRECTORY

(assuming that SCRATCH-DIRECTORY is your unpacked and modified version of the 20.04.1 ISO image, and isohdpfx.bin is generated following the instructions in the Debian wiki page.)

The ISO created through this definitely boots in VMware in both UEFI and BIOS mode (and installs afterward). I haven't tried it in UEFI mode on real hardware yet and probably won't for a while.

PS: If you use the Debian wiki's suggested xorriso command line to analyze the 20.04.1 ISO image, it will claim that the El Torito EFI boot image is 'boot/grub/efi.img'. This is definitely not the case, which you can verify by using dumpet to extract both of the actual boot images from the ISO and then cmp to see what they match up with.

Ubuntu2004ISOWithUEFI written at 00:56:13

2020-11-14

Linux servers can still wind up using SATA in legacy PATA mode

Over the course of yesterday and today, I've been turning over a series of rocks that led to a discovery:

One of the things that the BIOS on this machine (and others [of our servers]) is apparently doing is setting the SATA ports to legacy IDE/ata_piix mode instead of AHCI mode. I wonder how many driver & hardware features we're missing because of that.

(The 'ata_piix' kernel module is the driver for legacy mode, while the 'ahci' module is the driver for AHCI SATA. If you see boot time messages from ata_piix, you should be at least nervous.)
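
One rough way to see which driver is actually in use on a machine, assuming the drivers are built as modules in your kernel (they usually are), is:

lsmod | egrep 'ahci|ata_piix'

You can also look for either name in the kernel's boot messages with dmesg, but the lsblk check below is more definitive.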

Modern SATA host controllers have two different modes: AHCI, which supports all of the features of SATA, and legacy Parallel ATA emulation (aka IDE mode), where your SATA controller pretends to be an old IDE controller. In the way of modern hardware, how your host controller presents itself is chosen by the BIOS, not your operating system (or at least not Linux). Most modern BIOSes probably default to AHCI mode, which is what you want, but apparently some of our machines either default to legacy PATA or got set that way at some point.

The simplest way to see if you've wound up in this situation is to use lsblk to see what it reports as the 'TRAN' field (the transport type); it will be 'sata' for drives behind controllers in AHCI mode, and 'ata' for legacy PATA support. On one affected machine, we see:

; lsblk -o NAME,HCTL,TRAN,MODEL --nodeps /dev/sd?
NAME HCTL       TRAN MODEL
sda  0:0:0:0    ata  WDC WD5000AAKX-0

Meanwhile, on a machine that's not affected by this, we see:

; lsblk -o NAME,HCTL,TRAN,MODEL --nodeps /dev/sd?
NAME HCTL       TRAN   MODEL
sda  0:0:0:0    sata   WDC WD5003ABYX-1
sdb  1:0:0:0    sata   ST500NM0011

It's otherwise very easy to not notice that your system is running in PATA mode instead of AHCI (at least until you attempt to hot-swap a failed drive; only AHCI supports that). I'm not sure what features and performance you miss out on in legacy PATA mode, but one of them is apparently Native Command Queueing. I suspect that there also are differences in error recovery if a drive has bad sectors or other problems, at least if you have three or four drives so that the system has to present two drives as being on the same ATA channel.
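
As a rough check on the NCQ side specifically, I believe you can look at the queue depth the kernel has set for a drive, for example:

cat /sys/block/sda/device/queue_depth

My understanding is that you'd expect to see something like 31 or 32 with NCQ active and 1 without it, although I wouldn't treat this as definitive.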

Based on our recent experience, my strong belief is now that your system BIOS is much more likely to play around with the order of hard drives if your SATA controller is in legacy mode. A SATA controller in AHCI mode is hopefully presenting an honest view of what drive is cabled to what port; as we've found out, this is not necessarily the case in legacy mode, perhaps because the BIOS always has to establish some sort of mapping between SATA ports and alleged IDE channels.

(SATA ports can be wired up oddly and not as you expect for all sorts of reasons, but at least physical wiring stays put and is thus consistent over time. BIOSes can change their minds if they feel like it.)

(For more on AHCI, see also the Arch Wiki and the OSDev wiki.)

ServerSATAInATAMode written at 00:16:12

2020-11-13

If you use Exim on Ubuntu, you probably want to skip Ubuntu 20.04

The Exim MTA (Mail Transfer Agent, aka mailer) recently added a mandatory new security feature to 'taint' data taken directly from the outside world, with the goal of reducing the potential for future issues like CVE-2019-13917. Things that are tainted include not just obvious things like the contents of message headers, but also slightly less obvious things like the source and especially destination addresses of messages, both their domains and their local parts. There are many common uses of now-tainted data in many parts of delivering messages; for example, writing mail to '/var/mail/$local_part' involves use of tainted data (even if you've verified that the local address exists as a user). In order to still be usable, Exim supports a variety of methods to generate untainted versions of this tainted data.
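
As a concrete illustration of the sort of change involved, here is a minimal appendfile transport sketch. It assumes Exim 4.94 or later and a router that sets $local_part_data (for example via check_local_user or a lookup in local_parts), and it's my sketch rather than a recommended configuration:

local_delivery:
  driver = appendfile
  # $local_part_data is the de-tainted local part provided by the
  # router; Exim now refuses the tainted $local_part in a file path
  file = /var/mail/$local_part_data
  delivery_date_add
  envelope_to_add
  return_path_add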

Exim introduced tainting in Exim 4.93, released in December of 2019. Unfortunately this version's support for tainting is flawed, and among the flaws is that a significant number of methods of de-tainting data don't work. It's probably possible to craft an Exim 4.93 configuration that works properly with tainted data, but it is going to be a very ugly and artificial configuration. Exim 4.94 improves the situation significantly, but even then apparently you should use it with additional fixes.

Ubuntu 20.04 ships a somewhat patched version of Exim 4.93, but it has significant de-tainting flaws and limitations which mean that you don't want to use it in its current state. As is normal and traditional, there's essentially no prospect that Ubuntu will update to Exim 4.94+ over the lifetime of Ubuntu 20.04; what we have today in 20.04 is what we get. As a result, if you use Exim on Ubuntu, I think that you should skip 20.04. Run your Exim machines on 18.04 LTS until 22.04 LTS comes out with a hopefully much better version of Exim.

If you absolutely must run Ubuntu 20.04 with some version of Exim, I don't recommend building your own from upstream sources because that has inherent problems. The Debian source packages for 4.94 (from testing and unstable) appear to rebuild and work fine on Ubuntu 20.04, so I'd suggest starting from them. Possibly you could even use the Debian binary packages, although I haven't tried that and would be somewhat wary.

(It's possible that someone will put together a PPA for the Debian packages rebuilt on Ubuntu 20.04. It won't be me, as we're skipping 20.04 for our Exim machines. It's also possible that someone will get the Exim 4.94 package from Ubuntu 20.10 included in the 20.04 Ubuntu Backports. Anyone can make the request, after all (but it won't be us).)

Ubuntu2004EximSkip written at 00:16:59

2020-11-07

Turning on console blanking on a Linux machine when logged in remotely

When my office workstation is running my X session, I lock the screen if it's idle, which blanks the display. However, if the machine winds up idle in text mode, the current Linux kernel defaults keep the display unblanked. Normally this never happens, because I log in and start X immediately after I reboot the machine and then leave it sitting there. However, ongoing world and local events have me working from home and remotely rebooting my office workstation for kernel upgrades and even Fedora version upgrades. When my office workstation reboots, it winds up in the text console and then just sits there, unblanked and displaying boot messages and a text mode login prompt. Even when I come into the office and use it, I now log out afterward (because I know I'm going to remotely reboot it later).

(Before I started writing this entry I thought I had set my machine to deliberately never blank the text mode display, for reasons covered here, but this turned out not to be the case; it's just the current kernel default.)

Leaving modern LCD panels active with more or less static text being displayed is probably not harmful (although maybe not). Still, I feel happier if the machine's LCD panels are actually blanked out in text mode. Fortunately you can do this while logged in remotely, although it is slightly tricky.

As I mentioned yesterday, the kernel's console blanking timeout is reported in /sys/module/kernel/parameters/consoleblank. Unfortunately this sysfs parameter is read-only, and you can't just change the blanking time by writing to it (which would be the most convenient way). Instead you have to use the setterm program, but there are two tricks because of how the setterm program works.
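
For the record, reading the current value is simple enough; it's in seconds, with 0 meaning that blanking is disabled:

cat /sys/module/kernel/parameters/consoleblank

It's only changing the timeout that requires going through setterm.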

If you just log in remotely and run, say, 'setterm -blank 5', you will get an error message:

# setterm -blank 5
setterm: terminal xterm does not support --blank

The problem is that setterm works not by making kernel calls, but by writing out a character string that will make the kernel's console driver change things appropriately. This means that it needs to be writing to the console and also it needs to be told the correct terminal type so that it can generate the correct escape sequences. To do this we need to run:

TERM=linux setterm -blank 5 >/dev/tty1

The terminal type 'linux' is the type of text consoles. The other type for this is apparently 'con', according to a check that is hard-coded in setterm.c's init_terminal().

(And I lied a bit up there. Setterm actually hard codes the escape sequence for setting the blanking time, so the only thing it uses $TERM for is to decide if it's willing to generate the escape sequence or if it will print an error. See set_blanking() for the escape code generation.)
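
Based on console_codes(4), I believe the escape sequence in question is 'ESC [9;n]', with n being the timeout in minutes, so you could in principle skip setterm entirely and emit it yourself:

printf '\033[9;5]' >/dev/tty1

That should be equivalent to the 'setterm -blank 5' above, although I'd stick with setterm for clarity.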

The process for seeing if blanking is on (or forcing blanking and unblanking) is a bit different, because here setterm actually makes Linux specific ioctl() calls but it does them on its standard input, not its standard output. So we have to do:

TERM=linux setterm -blank </dev/tty1

This will print 0 or 1 depending on if the console isn't currently blanked or is currently blanked. I believe you can substitute any console tty for /dev/tty1 here.

ConsoleBlankingRemotely written at 21:58:50

2020-11-06

Console blanking now defaults to off on Linux (and has for a while)

For a long time, if you left a Linux machine sitting idle at a text console, for example on a server, the kernel would blank the display after a while. Years ago I wrote an entry about how you wanted to turn this off on your Linux servers, where at the time the best way to do this was a kernel parameter. For reasons beyond the scope of this entry, I recently noticed that we were not setting this kernel parameter on our Ubuntu 18.04 servers, yet I knew that they weren't blanking their consoles.

(Until I looked at their /proc/cmdline, I thought we had just set 'consoleblank=0' as part of their standard kernel command line parameters.)

It turns out that the kernel's default behavior here changed back in 2017, ultimately due to this Ubuntu bug report. That bug led to this kernel change (which has a nice commit message explaining everything), which took it from an explicit ten minutes to implicitly being disabled (a C global variable without an explicit initializer is zero). Based on some poking at the git logs, it appears that this was introduced in 4.12, which means that it's in Ubuntu 18.04's kernel but not 16.04's.

(You can tell what the current state of this timeout is on any given machine by looking at /sys/module/kernel/parameters/consoleblank. It's 0 if this is disabled, and otherwise the number of seconds before the text console blanks.)

We have remaining Ubuntu 16.04 machines but they're all going away within a few months (one way or another), so it's not worth fixing their console blanking situation now that I've actually noticed it. Working from home due to ongoing events makes that a simpler choice, since if a machine locks up we're not going to go down to the machine room to plug in a monitor and look at its console; we're just going to remotely power cycle it as the first step.

(Our default kernel parameters tend to have an extremely long lifetime. We're still automatically setting a kernel parameter to deal with a problem we ran into in Ubuntu 12.04. At this point I have no idea if that old problem still happens on current kernels, but we might as well leave it there just in case.)

ConsoleBlankingDefaultsOff written at 22:50:49

