How not to improve your package updater application
Earlier today I tweeted:
Every time I use the Fedora 16 gpk-update-viewer I'm filled with anger for what someone did to a nice, usable program.
Now I feel like explaining that (in the spirit of an earlier entry). First,
gpk-update-viewer is the program behind
the 'Software Update' information display on Fedora systems; it's part
of the gnome-packagekit suite of programs.
In Fedora 15, the (Gnome) interface for it has two relatively minor issues (which you can actually see in the current screenshots on the PackageKit website): it doesn't group the updates into separate sections for security updates, bugfix updates, and regular updates, and it always shows the 'Details' area even if you're not interested in it. In the Fedora 16 version, someone decided to fix these problems. To properly appreciate the result, I must show you a picture (which I am not going to inline here because it's too big): a screenshot of the Fedora 16 gpk-update-viewer.
The Fedora 16 version has certainly fixed those two little problems; the Details area can now be folded away, and updates are now grouped. However, this 'improvement' has created all of the following issues, many of them disastrous:
- the 'Details' area for the details of the updates is not resizable; it's fixed at four and a bit lines, which is completely and utterly inadequate for usefully reading those details without endless scrolling. I am now very glad that I recently contrived a scroll wheel.
- if you scroll the 'Details' area (and you will) and then move to another package, the area's scroll position is not reset to the top. It doesn't always stay static either; at least sometimes it shifts oddly.
Since the entire reason I'm running
gpk-update-viewer at all is to
read the update details, this change alone is a huge usability loss. (I
check for and actually apply updates through
yum, but as far as I know
g-u-v is the only tool for seeing the update text (which is not the same
as the RPM changelogs).)
- there is no longer any indication of whether an update will require
a system reboot or a logout and login. Fedora 15 annotated such
updates with little icons.
- Fedora 15 mostly grouped all related binary packages together as a
single entry (which you could expand to see the list of individual
packages that would be updated). Fedora 16 lists every binary
package separately, even in cases where an update to a single
source package produces a shower of fifteen or twenty updated binary packages.
(You can see this in the screenshot; in Fedora 15, there would not be separate entries for 'cheese' and 'cheese-libs'. Fedora 15's version of this was incomplete, but it was still much better than not having this at all.)
- Those headings for the type of updates you see in the screenshot ('Security updates' and 'Bug fix updates') are actually selectable entries. You can click on them and, more importantly, if you are scrolling through the list of updates you will wind up on those headings too (and have to explicitly scroll past them). This is especially irritating when you are using the arrow keys to move to the next entry because your mouse cursor is parked in the Details area so that you can use your scroll wheel to read the update details.
And as a more minor gripe, the Fedora 16 version always starts up with a completely crazy window size when I run it under fvwm in my normal configuration. No other Gnome program seems to have this problem and the Fedora 15 version was (and is) fine.
(Fedora 15 has gnome-packagekit 3.0.0; Fedora 16 has 3.2.1. Sadly the Fedora 15 binary doesn't work on Fedora 16, but maybe I can recompile the old version from source.)
A quiet advantage of the systemd approach to service management
One of the ways that things like systemd and
upstart are different
from the traditional
/etc/init.d approach to starting and restarting
services is in who starts and restarts services. In the traditional
approach, you do this directly by running
/etc/init.d scripts (either
explicitly or through some cover scripts). In the systemd world, things
are indirect; '
systemctl start foo.service' just asks the master
systemd to run the appropriate commands to start the service.
There is a somewhat subtle advantage to having systemd do this, one that I have been quietly appreciating lately: your shell environment never leaks into restarted services.
In the traditional init.d world, a manually started or restarted
service inherits whatever environment your shell had when you ran the
script(s), especially your environment variables. This is different
from the clean environment that services get when started at boot,
and can quite possibly contain settings that will change how
the service operates.
(Even logging in directly as root or using '
su -' doesn't
necessarily make this concern go away. As we've seen before, locale settings can drastically change the
behavior of programs and ssh will helpfully propagate your current ones
to the remote end.)
The systemd approach makes all of those concerns go away. The
environment you run
systemctl in doesn't matter, because
all these commands do is ask the master process to do things. The
master process doesn't inherit anything from you and so the commands
it runs are uncontaminated by whatever weird environment you run
stuff in. Instead they're guaranteed to always be running in the same
environment regardless of whether they were started at boot time or
stopped and restarted later.
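The inheritance problem is easy to see with any child process. This is a minimal sketch; the locale value is just an example of the sort of setting you might have customized in your shell:

```shell
# A setting customized in your interactive shell:
export LC_ALL=fr_FR.UTF-8

# Anything you run from this shell -- including an /etc/init.d
# script and the daemon it starts -- inherits that variable:
sh -c 'echo "child sees LC_ALL=$LC_ALL"'

# By contrast, 'systemctl restart whatever.service' merely sends a
# request to the systemd master process, which was started at boot
# with a clean environment; the restarted daemon gets systemd's
# environment, not this shell's.
```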
This issue is somewhat near and dear to my heart because I retain a
fairly customized environment even when I
su to root. On init.d
systems, I need to go out of my way to make sure that things like
sshd will inherit as few peculiar things as possible (and
I'm sure I'm not always successful). On systemd systems, well, I can
forget about the whole concern. I enjoy that.
(There are sometimes drawbacks to this approach; for example, you can't quickly increase the number of open file descriptors a daemon can have by changing your root shell's ulimit and then restarting the daemon, or set special debugging flags in the environment. I haven't found these drawbacks to be an issue in practice so far.)
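In the systemd world, the rough equivalent of the ulimit-and-restart trick is to put the settings in the service's unit file instead. A sketch (the service name and paths are made up, but LimitNOFILE= and Environment= are real systemd directives):

```ini
# In the unit file, e.g. /etc/systemd/system/mydaemon.service
# ('mydaemon' is a hypothetical service):
[Service]
ExecStart=/usr/sbin/mydaemon
LimitNOFILE=65536
Environment=MYDAEMON_DEBUG=1
```

After editing the unit file you run 'systemctl daemon-reload' and then restart the service, which is more work than changing your shell's ulimit but has the virtue of surviving reboots.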
(This entry was brought to you by me spending a bunch of time today
running 'systemctl restart ...' to test some things.)
The current state of GPT, EFI, and Linux
I'm in the process of migrating my office machine's Linux installation from an old, small pair of disks to a newer and somewhat larger pair of disks. As part of this I have been doing a bunch of reading about the current state of PC technology as far as partitioning and booting goes, because getting new machines recently has made me clearly aware that BIOSes have come a long way from the old days.
There are two parts to the new world, GPT and (U)EFI. GPT is a new scheme for disk partitioning, replacing the MBR partition table and primary and extended partitions; UEFI is a new scheme for booting machines, replacing the old BIOS MBR booting. Booting with UEFI requires a GPT partition table as well as a UEFI-aware BIOS, but the reverse is not the case; using GPT does not require booting with UEFI (or even a GPT-aware BIOS). There are two reasons to use GPT: either you have a disk over 2TB or so (dealing with such disks is what GPT was introduced for), or you may someday want to boot from the disk using UEFI.
As far as I can tell, at the moment your best option for booting Linux (if you have a choice) is still old fashioned BIOS MBR booting, not UEFI boot. Many Linux distributions are in the process of switching to Grub 2, but Grub 2's UEFI support is still apparently unreliable which leaves you using various other options which are likely to be less well-supported, and in general EFI booting in Linux is still relatively young. If you're a future-looking person you can start experimenting with UEFI booting on a UEFI-capable machine, but MBR booting is going to be the easier and better way to go for now because it's still the majority choice.
My new disks are nowhere near large enough to require GPT, but I'm going to use GPT partitioning anyways for future-proofing. My new office machine supports but doesn't require UEFI booting (and I'm happy with BIOS MBR booting). However, I seem to keep disks for quite some time so it's quite possible that I'll still be using these disks when I want (or need) to switch to UEFI booting (either due to a new machine or new versions of Linux). And if I want to boot these disks with UEFI someday, I need to partition them with GPT now and set up appropriate UEFI boot partitions.
(My current disks are now slightly more than five years old and have been transplanted between three machines. Barring disk failure I suspect that the new disks will have a similar lifespan, especially since I hate reinstalling machines from scratch.)
From what I can tell (eg from the gdisk documentation), a future-proof GPT disk partitioning scheme needs two special partitions: a BIOS boot partition (gdisk code EF02) and an EFI System Partition (gdisk code EF00), probably with the ESP as the first partition on the disk. The BIOS boot partition is used to give the MBR bootloader somewhere that's guaranteed safe to store its second stage code; the EFI System Partition is where EFI bootloaders go (and where the BIOS will find them). I'm currently planning to make these 4 Mbytes and 256 MBytes respectively. This is probably overkill for both, especially since I don't plan to multiboot several OSes on my machine.
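This layout can be scripted with sgdisk, the non-interactive companion to gdisk. A sketch, assuming /dev/sdX stands in for the new disk (these commands are destructive to it; the partition names are my own choices):

```shell
sgdisk -n 1:0:+256M -t 1:EF00 -c 1:"EFI System" /dev/sdX
sgdisk -n 2:0:+4M   -t 2:EF02 -c 2:"BIOS boot"  /dev/sdX
sgdisk -n 3:0:0     -t 3:8300 -c 3:"Linux"      /dev/sdX
sgdisk -p /dev/sdX    # print the resulting partition table
```

Here -n creates partition number:start:end (0 meaning 'next free' or 'end of disk'), -t sets the gdisk type code, and -c sets the GPT partition name.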
(The gdisk documentation seems to be an excellent reference for all of this; it's been my primary source.)