2008-06-29
Why user exposure matters for Linux distributions, especially on desktops
Red Hat Enterprise Linux (or equivalently CentOS) has any number of things going for it from the perspective of sysadmins, but one of the things it does not have is user visibility. Ubuntu is the hot Linux distribution these days, despite its issues, with special acclaim for its desktop experience.
This user visibility matters, contrary to what some people believe (or would like to be the case).
A good part of why we run Ubuntu on our core servers is that research groups were already running Ubuntu on their machines, both desktops and compute nodes, and they wanted us to have the same environment, partly because it was what they were already familiar with, and partly because it meant that they could easily move programs back and forth between their machines and ours. Those research groups did not select Ubuntu because they had gone through a careful technical evaluation of which Linux distribution would be better; they used Ubuntu because it had the mindshare and because it worked well enough to justify its PR.
In a nutshell, that is why user visibility matters: these decisions do get driven from the bottom up, with users advocating for what they are already using and are familiar with.
(Also, it is easier to sell something to users if it already has the visibility with them. I am sure that there would have been people asking why we weren't using Ubuntu if we'd made a different choice, and yes, their opinions matter.)
I say that user visibility especially matters on desktops because desktops are the easiest and the best place for users to get hooked on something. They're the easiest because in practice they're the machines that users have the most control over, and they're the best because they're what users use all the time.
Why Ubuntu's LTS releases are inferior to Red Hat Enterprise Linux
It's time to update my view of Ubuntu with my most recent set of feelings. Well, with why I feel my most recent set of feelings, which is that Ubuntu LTS is significantly inferior to Red Hat Enterprise Linux.
Ubuntu's LTS releases (Ubuntu 6.06 and Ubuntu 8.04) promise five years of support (hence the 'Long Term Support' label). This support is why we're able to consider them, since we need more than the 18 months of support that you get with regular Ubuntu releases; we're simply not in a position to update our servers that frequently.
(There are two reasons. First, moving operating systems in a production environment requires a fairly large amount of careful testing (and a certain amount of dealing with changes). Second, we run login servers and our users do not want to have to do migration work that frequently either; they have better things to do with their time, like complete their PhDs or do research.)
The problem is that in practice Ubuntu's 'long term support' is actually only 'long term security fixes'. I have almost never seen Ubuntu fix a problem that was not a security problem, even when the problems have been reported in Ubuntu's bug report system (and in one case, even when the problem let an ordinary user crash the kernel). The inevitable result is that we have an ever-growing catalog of bugs in 6.06 that will never be fixed.
(I think that Ubuntu does fix bugs under some limited circumstances; what they really don't seem to do is fix bugs when the fix would require backporting things into the old 6.06 version of packages.)
By contrast, something like Red Hat Enterprise Linux does provide real long term support, where even non-security bugs will be fixed (at least for a while). This is not just theoretical, in that I have seen actual RHEL packages released to backport fixes for mere bugs.
(I am also relatively certain that Red Hat would consider 'user can crash the kernel' to be a security bug.)
Ubuntu, LTS releases included, still has an unmatched selection of packages (and is what users have heard of, which matters more than you might think). But there is less and less enthusiasm here for running it on 'backend' machines, machines that users don't log in to or run programs on, and I can't say that we're very enthused about it even on the login servers.
2008-06-09
Mirrored system disks should be trivial to set up
I have a simple request for people putting together installers for modern systems, especially systems generally aimed at servers: it should be dirt simple to do an install with mirrored system disks.
With most modern servers having two drive bays (often hotswap ones) and disk space being so cheap, going to mirrored system disks makes a lot of sense. But most people won't move to this configuration until it is somewhere between easy and trivial to set up, much like many people did not move to LVM-based system setups, despite their advantages, until installers made it trivially easy to install the system that way.
What I'd like to see for mirrored system disks is something similar to the LVM approach. If the system detects two identically sized disks, it offers 'standard mirrored system disks' as a partitioning option, and then does all of the magic necessary to make everything work nicely. (These days, probably using LVM on top of a single software RAID partition.)
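To make it concrete, here is a rough sketch of the manual equivalent of that magic; the device names, volume group name, and sizes are just assumptions for a typical two-disk server, and a real setup also needs a small /boot area that the bootloader can read directly.

    # mirror the two disks' main partitions with software RAID-1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    # put LVM on top of the single mirrored device
    pvcreate /dev/md0
    vgcreate sysvg /dev/md0
    # carve out root and swap from the volume group
    lvcreate -L 10G -n root sysvg
    lvcreate -L 2G -n swap sysvg
    mkfs.ext3 /dev/sysvg/root
    mkswap /dev/sysvg/swap

An installer that did all of this for you (plus the bootloader and /boot details) is exactly the sort of thing I mean by first class support.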
But really, the specifics don't matter: what matters is that it is long since time for mirrored system disks to get first class support as an installation option, because they are (or should be) so common these days.
(Why yes, I was installing a Red Hat Enterprise 5 system today and cursing yet again the backwards way that the Red Hat installer approaches this. But my co-workers are wrestling with this for Solaris 10, and we have a pile of Ubuntu 6.06 machines that would have mirrored system disks if it was easily done in the installer.)
2008-06-08
Recovering my Eee PC from a post-update problem
I recently applied a bunch of pending Eee updates from Asus, and suddenly the full desktop mode that I prefer stopped working. The Eee always booted into the basic interface, and when I switched back to advanced mode everything in my Desktop directory had disappeared (which is bad, because I have some customized launchers there).
(Fortunately I had an off-machine backup of my Desktop directory so I could restore my customizations.)
I had enabled advanced desktop mode the easy way, namely by installing the advanced-desktop-eeepc package. After poking around and trying several things, what fixed the problem was to apt-get remove and then apt-get install the package again.
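In concrete terms that was just the obvious pair of commands (run as root or through sudo):

    apt-get remove advanced-desktop-eeepc
    apt-get install advanced-desktop-eeepc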
(I assume that something in one of the Asus updates overwrote a customization that the advanced desktop stuff required, although I couldn't spot anything obvious. It is a little disturbing to me that something was apparently deleting all additions to my Desktop folder, although I don't know whether this is something in the basic desktop or in the advanced desktop setup stuff.)
Sidebar: the launchers I've found useful
By 'launchers' I mean icons on the desktop that start various programs when clicked, which are created by .desktop files in your $HOME/Desktop directory. The ones I use are: blank the screen, start konsole, start xterm, and start a black-on-white xterm (instead of the default white on black colour scheme).
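As an illustration of the format, a minimal launcher for the black-on-white xterm might look something like this (the filename and Name are arbitrary, and some desktop environments want the file marked executable):

    # create a simple .desktop launcher for a black-on-white xterm
    cat > "$HOME/Desktop/xterm-bw.desktop" <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=xterm (black on white)
    Exec=xterm -bg white -fg black
    EOF
    chmod +x "$HOME/Desktop/xterm-bw.desktop"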
2008-06-02
Improving RPM as a packaging system
As I mentioned yesterday, scripts in packaging systems are an opportunity to make mistakes. Thus, one thing that RPM could do to improve is to automate as many as possible of the things that are done repeatedly in install and removal scripts (of which RPM has several variants).
There are two improvements that jump out at me. First, many packages only do certain things when they are actually being installed or removed, as opposed to being upgraded (and sometimes do other things during an upgrade). Right now people detect these cases with boilerplate shell script code, but RPM should make this available directly, by having scripts that are executed only in the appropriate context.
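For those who haven't seen it, the boilerplate in question keys off the argument that RPM passes to the scriptlets, which is the number of instances of the package that will be installed once the operation is done; the helper commands here are just hypothetical placeholders:

    %post
    if [ "$1" -eq 1 ]; then
        # 1 means a fresh install; 2 or more means an upgrade
        do-first-install-setup
    fi

    %preun
    if [ "$1" -eq 0 ]; then
        # 0 means the package is really being removed, not upgraded
        do-final-removal-cleanup
    fi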
The second is that there are all sorts of generic types of files that want specific actions to be performed when they are installed or removed; for example, every time you install or remove a Gnome application it rebuilds the gconf schemas stuff. Right now (you saw this coming) everyone does this with standard boilerplate in scripts, which is bad.
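As a sketch of what the gconf case looks like (the schemas file name is made up, and the exact invocation varies a bit between packages), every such package carries something like:

    %post
    export GCONF_CONFIG_SOURCE=$(gconftool-2 --get-default-source)
    gconftool-2 --makefile-install-rule \
        %{_sysconfdir}/gconf/schemas/someapp.schemas >/dev/null || :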
Clearly we don't want the core RPM people to have to add specific actions and file types for every sort of semi-standard file that people invent. Instead RPM needs a mechanism where you can tag a file as having some arbitrary type, and a standard extension mechanism (such as shell scripts in a known place) to tell RPM what to do for each type. Then people building RPMs for Gnome applications would just tag their gconf schema files as the appropriate type and rely on the actions that the core Gnome people had defined to handle all the details.
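To sketch the idea (nothing here exists in RPM; the %filetype tag and the handler directory are entirely hypothetical), a package might tag its schema file in %files and the core Gnome packages would supply a drop-in handler that RPM runs over every file of that type:

    # in the application's spec file:
    %files
    %filetype(gconf-schema) %{_sysconfdir}/gconf/schemas/someapp.schemas

    # supplied by the core Gnome packages, dropped into a well-known
    # place such as a hypothetical /usr/lib/rpm/filetypes/gconf-schema.install:
    #!/bin/sh
    export GCONF_CONFIG_SOURCE=$(gconftool-2 --get-default-source)
    for schema in "$@"; do
        gconftool-2 --makefile-install-rule "$schema" >/dev/null || :
    done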
(Then people could extend this with extra features, for example to automatically run a syntax checker over all files tagged as gconf schemas at RPM build time and complain if they were malformed. That would just be a build-time action for that type of files.)