Wandering Thoughts archives

2010-09-30

Stopping kernel updates on Ubuntu

Suppose that you run production machines, where you don't want to have to reboot things without a bunch of advance planning (or a serious emergency). One of the things you want to do on such a system is block kernel updates. On dpkg-based systems, this is called holding a package.

(One way to do it, the one I use, is 'echo pkgname hold | dpkg --set-selections'. 'dpkg --get-selections | fgrep hold' can then be used to list held packages.)
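For completeness, here's a sketch of undoing this later; 'install' is dpkg's name for the normal, non-held selection state, and pkgname is a placeholder:

    # release a hold by putting the package back in the normal state
    echo pkgname install | dpkg --set-selections
    # confirm that nothing you care about is still held
    dpkg --get-selections | fgrep hold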

In order to block all Ubuntu kernel updates, you have to remember that Ubuntu does two sorts of kernel updates:

  • entirely new kernel packages (with the new kernel version in their names).

    As new packages these aren't seen as upgrades to anything already installed on your system, so Ubuntu updates the kernel meta-packages to require the new kernel packages. Holding the meta-packages blocks any chance that these new kernel packages will get pulled in by a routine update.

    In theory 'apt-get -u upgrade' won't install new packages, even dependencies of upgrades of existing packages (you have to use dist-upgrade instead). In practice I'm not sure that I trust that to happen all of the time; holding the meta-packages is harmless and makes sure.

    (Ubuntu appears to release updates that touch only the meta-packages from time to time, but since a meta-package contains basically nothing, not updating it seems harmless.)

  • 'minor' point releases of existing kernel packages.

    As point releases of an already installed package, these are update candidates on their own (without a meta-package update to go with them), so you have to hold all of the existing kernel packages to block them. This means that you have to remember to apply a hold to any new kernel package that gets installed as a result of updating the meta-packages.

    (If you don't care about older kernel packages, you can either leave them un-held or just remove them.)

The way we explicitly upgrade held packages is to use 'apt-get install ...'. There is probably a better command line way, but this one works for us.
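As a sketch of the whole dance for kernels, with illustrative package and version names (your real ones will differ):

    # 'apt-get install' upgrades a package even though it is held
    apt-get install linux-image-server linux-image-2.6.32-25-server
    # remember to put a hold on the newly installed kernel package
    echo linux-image-2.6.32-25-server hold | dpkg --set-selections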

(Please do not suggest aptitude. Aptitude's command line interface makes me want to strangle people; it is about five times too clever.)

UbuntuHoldingKernels written at 18:45:05

2010-09-27

Why Grub2 doesn't solve my Ubuntu kernel problem

A commentator on this entry asked what sounds like a good question: why don't I have Grub2 boot the older kernel that I want to use, instead of removing the newer one?

The problem is that it's not a robust solution. The first and largest issue is that what I really want to do is enforce a negative ('never boot this specific kernel'), but the only tool Grub gives me is a positive one ('boot this specific kernel'). There are a number of plausible situations that change which kernel I want to boot; for example, Ubuntu releasing a new kernel update that is anything but a point release. In any of these situations, I lose.

The other issue is specific to my somewhat unusual situation and how Ubuntu packages kernel updates, but cannot be fixed even with a true negative option in Grub. Suppose, not entirely hypothetically, that Ubuntu gets around to releasing some version of their proposed kernel update as an official update. This official update will have the 32-on-64 security fix, and we will want to run it. However, it's quite possible that Ubuntu will make it a point release of the general proposed kernel update version. As already established, such a point release will use exactly the same kernel names and file names as the current proposed kernel update, and so Grub will see it as exactly the same kernel. Even with a true negative option, there is no way for Grub to get this one right (short of reaching into the packaging system to pull out detailed metadata on just what exactly the kernel is).

The only way to fix both issues is to make the undesirable kernel disappear and then to have Grub do its normal thing of 'default to the highest version kernel'. Only then will any normal Ubuntu kernel update do the right thing.

(If this was a bad kernel update from the regular Ubuntu repositories, instead of a proposed testing one from elsewhere, I would then have to arrange to block reinstallation of that specific version of the kernel package.)
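One way to arrange that is an apt pin with a negative priority, which apt treats as 'never install this version'. A sketch, with an invented version number:

    # in /etc/apt/preferences; the version number is invented
    Package: linux-image-2.6.32-24-server
    Pin: version 2.6.32-24.42
    Pin-Priority: -1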

UbuntuSpecificKernelIssue written at 02:35:22

2010-09-22

The mysteries of video cards for Linux

One of the frustrating things about putting together a Linux machine is just how much specialized knowledge you need in order to do a decent job of it. Today I'm going to pick on video cards, partly because it's an area that I sort of follow so I actually know a bit about it.

Suppose that you want to spec out a new Linux machine, and you insist on using open source video drivers instead of being willing to live with proprietary binary drivers. The leading graphics card contenders are ATI (now AMD), nVidia (still independent so far), and Intel's integrated graphics (if you get a motherboard with a suitable onboard chipset).

(At one point ATI was considered the best open source choice, then Intel surged ahead until they stumbled with the 'Poulsbo' mess, and now who knows. nVidia seems to be continuing its stance as basically open source hostile.)

You will search in vain for an easily located page on any widely used Linux distribution's website that says what the current best or well supported video cards are, or gives you any sort of feature tradeoff for different sorts of cards. Instead you are left to assemble this information on your own, and it's not simple.

There are at least three areas of driver support that have historically mattered: basic (2D) graphics capabilities, '3D' hardware acceleration, and various flavours of Xvideo (aka Xv). Some degree of '3D' hardware acceleration has mattered not just for 3D applications using OpenGL but also for a number of increasingly widely used graphics effects like alpha-blending (and Compiz and related desktop effects); however, this may now be covered by the X server's EXA or XAA hardware acceleration framework, so a card with full EXA support might be good enough. At one point Xvideo support was essential for good video playback, but I don't know if that's still true in this era of fast machines and content that is already as big as many people's screens.

(The manual pages and websites for common video players are no help; many are hysterically out of date as far as X Windows issues go.)
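Still, you can at least probe what your current card and driver combination actually provides. These checks are a sketch and assume the standard tools and the usual Xorg log location:

    # is OpenGL direct (hardware) rendering active, and with what renderer?
    glxinfo | grep -i 'direct rendering\|opengl renderer'
    # what Xvideo (Xv) adaptors does the driver offer, if any?
    xvinfo | grep -i adaptor
    # what does the X server say about acceleration?
    grep -i accel /var/log/Xorg.0.log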

There is a confusing matrix of what drivers support what features on what hardware, and what answers you get depend on where you check; for example, the radeon manual page on my Fedora 13 machine lists a different (and more limited) set of features than the Radeon feature matrix. The nVidia driver manpages don't even list Xvideo support, but I happen to know that it's available on some nVidia hardware.

In order to get a good answer, you need to know more here than I do, because what I've written down here is more questions than answers. You need to know not just what Xvideo is, but how important it is on modern machines and for various video players and sorts of content. You need to know something about how GTK and Qt implement their graphics effects, how much hardware support is needed to make them fast, and how that support is implemented in the X server. And, for that matter, how much OpenGL is quietly used by any programs that you care about (given that web browsers are already talking about GPU-accelerated rendering, this may soon be more important than you think).

(Also, you will probably become quite good at decoding between chipset names and marketing names for various graphics chipsets. Wikipedia may help.)

All of this is frustrating; assembling this specialized knowledge is a bunch of work: you use it once, and then it rapidly goes obsolete (like all PC hardware knowledge, it has a half-life measured in months). But you have to do this in order to pick a decently supported graphics card.

(Well, you probably have to.)

Sidebar: resources for doing this

VideoCardMysteries written at 02:56:01

2010-09-18

Another reason why I don't like Ubuntu kernel packaging

I've written before about the over-arching problems with how Debian and Ubuntu package kernels, but today I ran into another annoying issue with how Ubuntu handles their kernel packaging mess.

The problem Ubuntu has is that their kernel packages need to have the kernel version as part of the package name, but they want to create a simple way of upgrading from kernel to kernel without too much special magic in the tools (as an entirely new package, a new kernel is not an upgrade for any existing package so package managers will just ignore it). So Ubuntu has kernel meta-packages, things like linux-image-server, that exist only to depend on the current specific kernel packages. When Ubuntu releases a new kernel they release a new version of the meta-packages that depend on the new kernel's new package, and so upgrading your meta-package pulls in the new kernel as a dependency.
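You can see this dependency chain directly with apt-cache; the output here is illustrative, from a hypothetical 10.04 server install:

    $ apt-cache depends linux-image-server
    linux-image-server
      Depends: linux-image-2.6.32-24-server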

Now suppose that you want to remove the most recent kernel version for some reason; perhaps it's no good and you're taking the easiest way to avoid having it be the default kernel that your system boots.

In an RPM-based environment you can just 'rpm -e' this kernel, as you would any other kernel that you wanted to get rid of. On Ubuntu this fails, because the most recent kernel is a dependency of the kernel meta-package (and you don't want to remove the meta-package). In order to remove the most recent kernel, you need to find some way to revert to an older version of the meta-package (one that depends on an older kernel version).

(At this point it may be useful to point at /var/cache/apt/archives.)
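If the older version of the meta-package is still sitting in that cache, the whole dance looks something like this (version numbers invented):

    # downgrade the meta-package to one that wants the older kernel
    dpkg -i /var/cache/apt/archives/linux-image-server_2.6.32.23.24_amd64.deb
    # the newest kernel is no longer a dependency and can now be removed
    apt-get remove linux-image-2.6.32-24-server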

The mess with the meta-packages wouldn't be necessary if Ubuntu didn't have to give each new kernel version a completely different package version, and that wouldn't be necessary if the Debian package system allowed more than one version of a single package to be installed at once. Sadly, this single version assumption seems to be very deeply embedded in how the Debian package system stores various bits of data about packages.

Sidebar: what happened to us

Ubuntu 10.04 has an issue where (among other things) unmounting NFS filesystems takes seconds to tens of seconds; this leads to very, very slow system reboots when you have more than 200 NFS mounts, as we do. Ubuntu had a proposed kernel update that should fix this and that needed testing to verify the fix, so I installed it on one of our Ubuntu 10.04 test machines. In the process I also wound up installing the kernel meta-package that went with it.

Today, Ubuntu came out with a security update for the recently disclosed local root exploit in 32-bit emulation on 64-bit kernels. Presumably because this was an urgent security issue, they did not release some version of the proposed kernel update but instead patched the older official 10.04 kernel. Which left me wanting to get rid of the proposed kernel update that I had installed, which is where I ran into the meta-package issue.

KernelMetaPackageGotcha written at 01:26:10

2010-09-15

An overview of the Debian and RPM source package formats

This is a brief and jaundiced overview of the format of Debian and RPM source packages, what the Debian and RPM package systems theoretically use to generate the compiled binary packages that people actually install. As usual, this applies to all distributions that use the Debian .deb package format or the Red Hat .rpm package format, although specific details vary. Also, I'm going to simplify to the common case.

A source RPM contains a specfile, a source tarball, and some number of patches. The specfile describes the package, names the source tarball and the patches, and contains a script that configures and compiles the binaries (I simplify). It can also contain scripts that will be run when the binary package is installed, removed, upgraded, or a number of other events. Specfiles support a complicated system of text macros, macro substitution, conditional 'execution' of portions of the specfile (which may wind up omitting or including some patches), and even more peculiar things; these are used to automate a lot of standard parts of the package build process, such as configuring a program that uses standard GNU autoconf.
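A minimal, illustrative specfile skeleton (real specfiles need considerably more than this; note %configure and the other % macros doing the standard work):

    Name:     example
    Version:  1.0
    Release:  1
    Summary:  An illustrative example package
    License:  GPL
    Source0:  example-1.0.tar.gz
    Patch0:   example-fix.patch

    %description
    An illustrative example package.

    %prep
    %setup -q
    %patch0 -p1

    %build
    %configure
    make

    %install
    make install DESTDIR=$RPM_BUILD_ROOT

    %files
    /usr/bin/example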

There is no fixed layout of where all of these pieces go when a source RPM is unpacked and built; it depends on your local configuration, although some arrangements are more sensible than others.

(Note that those RPM settings have probably gotten slightly broken since 2006, since they seem to now be doing slightly odd things for me. RPM macros have a lot of magic in them.)

A Debian source package contains a description file, a source tarball, and a patch. After unpacking the source tarball and applying the patch, there must be a top level subdirectory called debian. Files in this subdirectory are used to control the rest of the build and packaging process; although a number are required, the most important one is debian/rules, which is the Makefile used to build the package.

(Note that this subdirectory can contain lots of things besides the Debian package building control files. For instance, if the Debian package wants to run scripts when it's installed, removed, or so on, it will usually store the scripts in debian/.)

Much like RPM specfiles and their macros, Debian rules files support a complicated system of helper programs to do most of the actual work. A typical Debian rules file cannot be fully understood without knowing what these programs do (some of this can be deduced from their names). Debian being Debian, I believe that there are several generations and versions of these helper programs (and no doubt epic flamewars have been fought over which ones to use when).

(Debian helper programs are better documented than RPM macros, for various reasons. Or at least more conveniently documented, since they have manpages.)
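To make this concrete, here is an illustrative old-style debian/rules skeleton; the dh_* commands are debhelper programs, and with recent debhelper versions the entire file can shrink to a catch-all rule that just runs 'dh':

    #!/usr/bin/make -f
    # illustrative skeleton only; real rules files do much more

    build:
            ./configure --prefix=/usr
            $(MAKE)

    binary: build
            dh_testroot
            $(MAKE) install DESTDIR=$(CURDIR)/debian/example
            dh_gencontrol
            dh_builddeb

    clean:
            dh_clean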

A Debian rules file may or may not further patch the source in the process of building it. One style of Debian package rolls both the necessary modifications to the package source code and the creation of the debian directory into the initial patch; another uses the initial patch only to create the debian directory and then, RPM-like, applies a series of source patches from the debian directory during the build process. Determining which approach any particular Debian package uses may require close attention to the rules file, although if there is a debian/patches directory, the odds are good that the package uses some version of RPM-like two-stage patching.

(In the Debian way, there appear to be at least three different systems for doing such patching, each somewhat different.)
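For example, the quilt-based system (one of those several) drives its patching from a debian/patches/series file that lists patches in application order; these patch names are invented:

    # debian/patches/series
    fix-64bit-build.patch
    disable-network-tests.patch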

DebianAndRPMSourcePackages written at 01:09:45

2010-09-09

Go, IPv6, and single binding machines

The current libraries for the Go language and their built-in tests strongly believe that you can talk to IPv4 addresses through IPv6 sockets, which is not necessarily the case. This is a known issue (see also), and is more than somewhat inconvenient on a machine with dual binding turned off, such as my workstation, as Go will not install from source unless all its tests pass.

(Since Debian has apparently changed their minds about dual binding, this may not affect very many people. I maintain that it should, although it's now a quixotic battle that I am probably not going to win any time soon.)
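On Linux, dual binding is controlled system-wide by a sysctl, so you can check where any particular machine stands:

    # 0: IPv6 sockets also accept IPv4 connections (dual binding on)
    # 1: IPv6 sockets are v6-only; this is what breaks Go's tests
    sysctl net.ipv6.bindv6only
    # to turn dual binding back on for the whole system:
    sysctl -w net.ipv6.bindv6only=0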

If this affects you, the simple fix is probably to just apply the patch from Joerg Sonnenberger that's (currently) at the end of Go issue 679. I opted for a slightly different fix, because I wanted to force Go to use IPv4 sockets where possible. Thus, I forced preferIPv4 in src/pkg/net/ipsock.go to true and applied only the src/pkg/net/sock.go portion of his patch, which always turns off IPV6_V6ONLY on IPv6 sockets.

(A more thorough fix for preferIPv4 would be to test whether the kernel lets you bind IPv4 addresses to an IPv6 socket. But I didn't feel like going to that much effort for what is ultimately a quick hack that the Go maintainers are unlikely to support.)

While this is an incomplete hack with some limits, I think it is generally going to do what I want from Go even with servers, provided that I am careful (basically, I can't mix an explicit IPv4 server with a Go-based IPv6 one). A better fix would be to change the code to turn off IPV6_V6ONLY only when an IPv6 socket is being used with IPv4 addresses, and I may try that fix at some point when I feel more ambitious about hacking up the innards of Go packages.

(One of the attractions of Go is that it looks familiar enough to me that I can fumble my way through this sort of chainsaw modification and usually get it to work.)

As a side note: since OpenBSD doesn't allow dual binding under any circumstances, this is going to be a real issue if anyone ever attempts to port Go to OpenBSD. I suspect that the solution will be to turn off a bunch of tests.

GoIpv6DualBinding written at 00:00:17

