Wandering Thoughts archives

2016-01-31

One thing I don't like about Fedora is slow security updates

I generally like Fedora, but there are things that they don't seem to do well. Unfortunately one of them is prompt security updates, especially for nominally supported but not current versions (such as Fedora 22 right now).

At the best of times I can generally expect a multi-day delay for security updates. Consider OpenSSL CVE-2016-0701. This was warned about in advance and announced on Thursday. Most distributions had immediate updates out that day (Ubuntu, for example). Fedora got an update out for Fedora 23 only on the weekend (I'm not clear if it became available Saturday or Sunday). It's not just OpenSSL, either; I've seen similar delays for OpenSSH, the kernel, and other things that I've heard security announcements about.

Worse is the current situation with Fedora 22, as far as I can see. I noticed this recently when I realized that my Fedora 22 machine had not been rebooted in over 30 days. I assure you that there have been Linux kernel security issues in the past 30 days that apply to the Fedora 22 kernel, because Fedora 23 uses basically the same kernel and has had a series of kernel updates over that time. Yes, Fedora 22 is not the current version, but in theory it is still supported.

(Fedora 22 may also be missing other security updates. I normally don't really keep track of security issues to the level of checking whether I have a vulnerable package and whether it's been updated; that's something I delegate to my distribution. I notice kernels because I notice reboots being needed due to them.)

Of course Fedora never promised us Fedora users anything in particular. It's open source so I get to keep all of the pieces. I'm just glad that I only run Fedora on my personal machines, which are relatively locked down, and I don't have any Fedora servers.

(At this point I would strongly advise against running Fedora on servers for the obvious reasons. Use something like Debian, or Ubuntu if you have to and don't care about Canonical's increasingly questionable behavior.)

FedoraSlowSecurityUpdates written at 20:55:22

2016-01-26

Why my home backup situation is currently a bit awkward

In this recent entry I mentioned that my home backup strategy is an awkward subject. Today I want to talk about why that is so, which has two or perhaps three sides: the annoyances of hardware, the slowness of disks, and software that doesn't just do what I want, partly because I want contradictory things.

In theory, the way to good backups is straightforward. You buy an external disk drive enclosure and a disk for it, connect it to your machine periodically, and 'do a backup' (whatever that is). Ideally you will be disciplined about how frequently you do this. And indeed, relatively early on I set myself up to do this, except that back then I made a mistake; rather than get an external enclosure with both USB and eSATA, I got one with just USB because I had (on my machine at the time) no eSATA ports. To be more precise I got an enclosure with USB 2.0, because that's what was available at the time.

If you know USB 2.0 disk performance, you are now wincing. USB 2.0 disks are dog slow, at least on Linux (I believe I once got a benchmark result on the order of 15 MBytes/sec), and they also usually hammer the responsiveness of your machine into the ground. On top of that I didn't really trust the heat dissipation of the external drive case, which meant that I was nervous about leaving the drive powered on and running overnight or the like. So I didn't do too many backups to that external enclosure and drive. It was just too much of a pain for too long.

With my second external drive case and drive, I learned better (at least in theory); I bought a case with both USB and eSATA. Unfortunately it was only USB 2.0, and then something in the combination of the eSATA port on my new machine and the case didn't work entirely reliably. I've been able to sort of work around that, but the workaround doesn't make me entirely happy about having the drive connected, there's still a performance impact from backups, and the heat concerns haven't gone away.

(My replacement for the eSATA port is to patch a regular SATA port through the case. This works but makes me nervous and I think I've seen it have some side effects on the machine when the drive connects or disconnects. In general, eSATA is probably not the right technology here.)

This brings me to slow disks. I can't remember how fast my last backup run went, but between the overheads of actually making backups (in walking the filesystem and reading files and so on) and the overheads of writing them out, I'd be surprised if they ran faster than 50 MBytes/sec (and I suspect they went somewhat slower). At that rate, it takes an hour to back up only 175 GB. With current disks and hardware, backups of lots of data are just going to be multi-hour things, which does not encourage me to do them regularly at the best of times.

(Life would be different if I could happily leave the backups to run when I wasn't present, but I don't trust the heat dissipation of the external drive case that much, or for that matter the 'eSATA' connection. Right now I feel I have to actively watch the whole process.)
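The hour-per-175-GB arithmetic above is easy to redo for your own numbers. A quick sketch, using binary units (1 GByte = 1024 MBytes):

```shell
# How much data moves in an hour at a sustained 50 MBytes/sec.
rate_mb=50
secs=3600
echo "$(( rate_mb * secs / 1024 )) GBytes per hour"
```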

As I wrote up in more detail here, my ideal backup software would basically let me incrementally make full backups. Lacking something to do that, the low effort system I've wound up with for most things uses dump. Dump captures exact full backups of extN filesystems and can be compressed (and I can keep multiple copies), but it's not something you can do incrementally. Running dump against a filesystem is an all or nothing affair; either you let it run for as many hours as it winds up taking, or you abort it and get nothing. Using dump also requires manually managing the process, including keeping track of old filesystem backups and removing some of them to make space for new ones.

(Life would be somewhat different if my external backup disk was much larger than my system disk, but as it happens it isn't.)
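The 'removing some of them to make space' part of managing dump backups is at least scriptable. Here's a minimal sketch; the directory layout and naming scheme are my assumptions, and the dump invocation itself is shown commented out because it needs root and a real device:

```shell
# Keep only the three most recent compressed level-0 dumps of the root
# filesystem on the external backup disk.
backupdir=/backup/root
# dump -0 -f - /dev/vg0/root | gzip -1 > "$backupdir/root-$(date +%Y-%m-%d).dump.gz"
ls -1t "$backupdir"/*.dump.gz 2>/dev/null | tail -n +4 | xargs -r rm --
```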

This is far from an ideal situation. In theory I could have regular, good backups; in practice there is enough friction in all of the various pieces that I have de facto bad ones, generally only made when something makes me alarmed. Since I'm a sysadmin and I preach the gospel of backups in general, this feels especially embarrassing (and awkward).

(I think I see what I want my situation to look like moving forwards, but this entry is long enough without trying to get into that.)

HomeBackupHeadaches written at 23:18:37

2016-01-21

One example of why I like ZFS on Linux

Yesterday evening, my office workstation blared notifications at me of SMART errors on one of my HDs. The disk is old enough by now for this not to be too surprising (and it's a 1 TB Seagate, which we've had some problems with), but still, a disk issue is never exactly welcome, even if all of the data on it is mirrored. Since we've seen SMART errors be not really a problem before, I did the obvious and easy thing to test the situation: I started a ZFS pool scrub on the ZFS pool that takes up most of the disk. This scrub turned up actual read errors (as reported by the disk to the kernel), but it also caused ZFS to say that it was repairing the issue. After the scrub finished without any reported errors (but with 256 KB reported repaired), I did a second scrub; this one did not report any problems and didn't cause the disk to report any read errors.

(Specifically the HD reported '3 currently unreadable (pending) sectors' and '3 offline uncorrectable sectors'. The SMART daemon reported that this condition had cleared somewhat after the first pool scrub and repair finished.)

So, what seems to have happened here is that ZFS scanned most of the disk, found some bad sectors, and quietly rewrote them in place. When it did the rewrite, the normal operation of the HD caused the bad sectors to be spared out and replaced by good ones. My HD is, for the moment, back to being healthy and doesn't need to be replaced. And if it does need to be replaced, the ZFS scrub gives me pretty good confidence that the data on the other mirror is fully intact and that there are no latent read errors that are going to cause me heartburn.
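The mechanics of all this are pleasantly small. As a sketch, with a hypothetical pool name:

```shell
# Scrub the pool, check what it found and repaired, then scrub a second
# time to confirm a clean pass.
zpool scrub maindata
zpool status maindata      # shows scan progress, errors, and repaired bytes
# ...once the first scrub finishes:
zpool scrub maindata
```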

This is not the kind of big save by ZFS that I've had on other systems. But I consider it a midsized save; the ZFS scrub turned an alarming and uncertain situation into a much more certain one that may even be fully fixed.

None of this is exceptional for ZFS and parts of it are normal for anyone with mirrored storage (which has saved me from abrupt disk failure at home). But the whole reassuring, simple, and pain free experience is unusual for Linux. And that in a nutshell is one of the big benefits of ZFS on Linux and a good part of why I like it so much. Dealing with potentially failing drives and uncertain read error locations and so on would be much more hassle with basically any other setup, and hassle is exactly the thing I don't want when I'm already jumpy enough because smartd is alarming me.

(There are other reasons to like ZFS, of course. And you can get this sort of scan, checksum verify, and repair experience with btrfs too as far as I know, assuming that you're willing to use btrfs in its current state.)

ZFSOnLinuxScrubSave written at 00:10:32

2016-01-11

The benefits of flexible space usage in filesystems

My home and work Linux machines are very similar. They have about the same collection of filesystems, and they both have significant amounts of free disk space (my home machine more than my work machine, because I've got much bigger disks there). But despite this, the filesystems on my work machine have lots of free space while the filesystems on my home machine tend to run perpetually relatively close to being out of space.

At one level, the difference is in how the disk space is managed. At work, I've migrated to ZFS on Linux; at home, everything is ext3 on top of LVM (on top of a software RAID mirror). But the real answer is that shrinking an extN filesystem and an LVM logical volume is kind of a pain, and also kind of dangerous (at least as far as I know). If I grew filesystems wildly at home, it'd be a pain to shrink them later if I needed the space elsewhere, so for the most part I only expand filesystems when I really need the space.

In theory this shouldn't make any difference; if I need the space, I'll grow the filesystem. In practice it makes me irrationally reluctant to do things that need substantial chunks of space temporarily. I would probably be better off if I adopted a policy that all of the filesystems I used actively should have, say, 40 GB of free space more or less at all times, but I'm not that sensible.

(There's some irrational bit of me that still thinks that disk space is in short supply. It's not; I have more than a TB free, and that's after extravagantly using space to store more or less every photograph I've ever taken. In RAW format, no less.)

This doesn't happen at work because ZFS dynamically shares the free pool space between all of the filesystems. Unless you go out of your way to set it up otherwise, there is no filesystem specific free space, just general pool free space that is claimed (and then released) as you use space in filesystems. Filesystems are just an organizational thing, not something that forces up-front space allocation. So I can use however much space I want, wherever I want to, and the only time I'll run out of space in a filesystem is if I'm genuinely out of all disk space.
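You can see this sharing directly in 'zfs list' output, where every filesystem in a pool reports the same pool-wide AVAIL figure. A hypothetical example (the pool name and all of the numbers are made up):

```shell
zfs list -r -o name,used,avail maindata
# NAME            USED  AVAIL
# maindata        210G  1.15T
# maindata/data   150G  1.15T
# maindata/home    60G  1.15T
```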

This is a really nice feature of ZFS, and I wish I had it at home. It would clearly make my life easier by entirely removing one current concern; I just wouldn't have to manage space on a per-filesystem basis any more. Space would just be space.

(Someday I will have this at home, by migrating my home system to ZFS. Not having ZFS at home is still tolerable, so I suspect that I won't migrate until I'm migrating hardware anyway, and that probably won't be for a while for various reasons.)

PS: btrfs is not what I consider a viable option. At this point I'd probably only consider btrfs once a major Linux distribution has made it their default filesystem for new installs and has survived at least a year of that choice without problems. And I'm not holding my breath for that.

Sidebar: Why I believe shrinking an LVM logical volume is dangerous

To grow a filesystem inside an LVM volume, you first grow the volume and then grow the filesystem to use the new space. To shrink a volume, you do this in reverse; you first shrink the filesystem and then shrink the volume. However, as far as I know there is nothing in LVM that prevents you from accidentally shrinking the volume so that it is smaller than the filesystem. Doing this by accident will truncate the end of your filesystem, almost certainly lose some of your data, and quite probably destroy the filesystem outright. Hence the danger of doing this.
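For concreteness, here is the safe shrink order as a sketch. It's wrapped in a shell function because it needs root and a real volume group; the volume name and sizes are made up, and the margin on the first resize2fs is there so the filesystem is guaranteed to end up smaller than the reduced LV:

```shell
# Shrink /dev/vg0/home (and the ext filesystem in it) down to 100 GB.
shrink_home_to_100g() {
    umount /home
    e2fsck -f /dev/vg0/home          # resize2fs insists on a clean check first
    resize2fs /dev/vg0/home 90G      # shrink the FS first, with a safety margin
    lvreduce -L 100G /dev/vg0/home   # then shrink the LV
    resize2fs /dev/vg0/home          # finally grow the FS out to fill the LV
    mount /home
}
```

For what it's worth, reasonably recent LVM versions also have lvreduce -r (aka --resizefs), which drives the filesystem resize through fsadm for filesystems it understands, and so avoids getting the manual ordering wrong.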

It would be really great if LVM knew about common filesystem types, could read their superblocks to determine the FS-level size, and by default would refuse to shrink a logical volume below that size. But as far as I know it doesn't.

(As a practical matter I probably never want to shrink a filesystem without a current backup of it, which brings up the awkward subject of my home backup strategy.)

FlexibleFilesystemSpaceBenefit written at 00:48:16

2016-01-09

The convenience of having keyboard controls for sound volume

For a long time now, many keyboards have come with various additional keys over and above the traditional set. First it was Windows keys, and then people started adding various 'multimedia' keys and buttons for things. For an equally long time I used a very minimal keyboard without such keys, so I rolled my eyes a little bit at the indulgence (and the wasted space) and otherwise ignored the issue of keyboard 'multimedia' control. Even when I recently got a keyboard that sort of had these keys, I initially kept ignoring them and the whole issue; it was easy enough to reach out to the appropriate speaker to fiddle the volume, or maybe call up the applet I have sitting there for general volume control.

It turns out that I was kind of being a fool about this (as usual). Having keys that I wasn't actually using nagged at me a bit, and recently I got just irritated enough with reaching for the volume control to figure out how to wire up these keys to do something. The first thing I learned is that it's relatively easy: there are command line tools that will let you control the volume, the keys generate distinct keycodes, and those keycodes are easily bound to actions in fvwm. My current setup is:

Key XF86AudioRaiseVolume   A N   Exec amixer -q set Master '2%+'
Key XF86AudioLowerVolume   A N   Exec amixer -q set Master '2%-'
Key XF86AudioMute          A N   Exec amixer -q set Master toggle

(Some of my sources when I researched this include here, here, and here. Some of these also give the pactl equivalents of my amixer commands. See also this little GUI tool (via), although I actually use the old Gnome volume control applet from Fedora 14, which miraculously hasn't broken yet.)

When I set this up, I expected this to be basically a curio. As you may have gathered, I was wrong; the keyboard keys have rapidly become my primary method of doing volume control. It's not because they're any better than turning the volume knob or moving the volume slider up or down (in some ways they're worse). It's because they're significantly more convenient, because my hands are usually right there on the keyboard and volume shifts are thus just a quick tap away. As small as it seems, not having to reach for the volume knob or the mouse really does make a palpable difference for how things feel. Even if it's a little bit of work to shift my hand, it's less work, less interruption, and less annoyance, and I like all of that.

(The convenience of having a mute/unmute was especially a surprise. It's now trivial to mute sound if I'm watching some YouTube video just to see it, and as a result I do it fairly often.)

As a side note, I arrived at a 2% volume change per keypress partly by experimentation about what felt good and partly by observing that it's easy to rapidly repeat a keypress, so small changes were probably better than big ones. Also, I keep my sound volumes rather low so small changes in nominal volume can have clearly audible effects.

LinuxVolumeKeys written at 00:09:54

2015-12-26

Adjusting mouse sensitivity on Linux, and why you might want to

Suppose, not entirely hypothetically, that you are moving from an old and relatively low resolution mouse to a new high resolution mouse, say a 1200 DPI mouse. If you do nothing, what you'll experience is that your new mouse is very twitchy and it's hard to point precisely to small things even when you're trying to move the mouse pointer slowly and carefully. This can easily wind up giving you unhappy feelings about your new mouse and of course it's generally frustrating. So what you want to do is turn down the mouse sensitivity so that it feels more like your old mouse.

(Some of this will depend on how your new mouse feels and moves relative to your old mouse. If you had to make sweeping moves with your old mouse and your new mouse is one that you can do tiny shifts with, you may not need to turn down the sensitivity very much at all. And the worst case is moving the other way, from a low resolution mouse that you just nudged around lightly to a high resolution mouse that wants you to move it around with sweeping gestures.)

As usual, the ArchLinux wiki has a pretty good page on mouse acceleration that steered me straight to the xinput command's ability to set detailed properties, even on a per-mouse basis (this is potentially important if you have more than one mouse that you might plug in). For me the most important property to set was 'Device Accel Constant Deceleration':

xinput --set-prop '<MOUSE NAME>' 'Device Accel Constant Deceleration' 2.2

The other setting I change is the 'Device Accel Velocity Scaling', because the default value of '10' is apparently based on a mouse sample rate of 100 Hz instead of my actual one of 125 Hz (see here for details on this). So I set:

xinput --set-prop '<MOUSE NAME>' 'Device Accel Velocity Scaling' 8

Note that the total effect I get depends on both of these settings together, which means that there's no point in tuning everything carefully for one setting and then adjusting the other. Adjust them both first and tune from there.

(I never tried to 'tune' the velocity scaling, since it theoretically has a well defined proper value.)
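Both xinput invocations need the device's exact name as X sees it. A small helper for looking that up and checking the current values (the example device name is made up):

```shell
# Find candidate device names with 'xinput --list --name-only', then use
# this to show just the acceleration-related properties of one of them.
show_accel_props() {
    xinput --list-props "$1" | grep 'Device Accel'
}
# e.g.: show_accel_props 'Logitech USB Optical Mouse'
```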

I determined the deceleration figure by starting with 1200 DPI divided by the resolution of another mouse that I use and am happy with the feel of, but then I adjusted things to taste several times. The most important thing is that the mouse feel right to you; the math is just a starting point. This also means that you may not want to bother changing the velocity scaling; I did it because I'm the kind of person who usually sets that kind of stuff.
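As a worked example of that starting point, assuming the mouse you like the feel of is an 800 DPI one (the real figure depends on your hardware; mine evidently worked out closer to 2.2):

```shell
new_dpi=1200
old_dpi=800
# first-guess 'Device Accel Constant Deceleration', to be tuned by feel
awk -v n="$new_dpi" -v o="$old_dpi" 'BEGIN { printf "%.1f\n", n / o }'
```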

(Of course, one fly in the ointment of working out a careful DPI scaling if you also set the velocity scaling is that your old mouse will have been running not just at its lower DPI but also at the default velocity scaling. I have no idea what exact effect this has, but I expect that it has some.)

If or when I move to a high-DPI display, I expect that I'll want to reduce or reverse this mouse resolution reduction, since smaller pixels make the same mouse movement (in pixels) cover less physical area on screen. Of course by the time I get a high-DPI display, X or something else in the environment may be helpfully compensating for this effect (in much the same way that browsers on high-DPI displays redefine what a CSS 'pixel' means).

Also, I started out with higher values for deceleration and have been slowly adjusting it downwards (ie, less deceleration) over time. You may find that you have a similar adjustment process to a high-DPI mouse.

(Another reference to doing this is here, and also this StackExchange question and answers.)

PS: long distance movement of a high-DPI mouse also interacts with X's basic xset-based mouse acceleration. I found that I was okay with the normal default settings here once I'd set the deceleration in xinput, but you may have to play around with that.

AdjustingMouseSensitivity written at 01:41:19

2015-12-18

Some things about the XSettings system

Yesterday I mentioned the XSettings standard for exposing (some) toolkit related configuration options to theoretically interested parties in a theoretically toolkit-independent way. There are some slightly non-obvious or not entirely documented things about this and daemon support for it.

First, as hinted by the 'X' in the name, this is not a DBus-based system. Instead it uses the old-fashioned approach of publishing the settings in an X property (on the window of whichever client owns the special _XSETTINGS_S0 selection) and having programs read this property. Because this is an X property, all clients can see it, whether they are on the local machine or on a remote machine. In turn this means that remote clients may change their behavior if you start running xsettingsd or the like, because now they can see your (local) configuration settings. How your local configuration settings interact with what's available on a remote machine can be potentially chancy; for example, it's perfectly possible to specify a Gtk/FontName that doesn't exist on other machines.

Some but not all settings daemons have side effects when run. For example gnome-settings-daemon appears to also add some X resources for things like Xft settings. This itself can cause (some) programs to change their behavior, even if they don't use a toolkit with support for XSettings. As far as I can tell, xsettingsd does not do this.

xsettingsd, at least, allows you to set essentially arbitrary settings properties, including in existing namespaces; for instance, it sure looks like you can set all sorts of Xft properties in XSettings. However, this is an illusion. In practice, there is a small set of known shared settings for general cross-toolkit things, and if something's not in there, setting it will do nothing. Where this really starts to matter (at least to me) is that the available Xft settings are pretty minimal. In particular, they don't include the fontconfig lcdfilter setting, which turns out to be one of the settings necessary to get fonts to look how I want them to.

(It's not clear to me if lcdfilter can be set in the Xft.* X resources either. I suspect not, but it probably can't hurt to try.)

At the same time, modern GTK has way more settings exposed through XSettings than are documented in the registry. To find out what all of them are, you basically need to fire up gnome-settings-daemon temporarily and run dump_xsettings to extract them all. I don't know what settings KDE exposes (if any); I haven't tried to find and run the KDE equivalent of gnome-settings-daemon.
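That procedure is short enough to sketch directly. This assumes xsettingsd's dump_xsettings tool is installed, and the sleep is an arbitrary grace period:

```shell
# Temporarily run gnome-settings-daemon, capture everything it publishes
# through XSettings, then stop it again.
gnome-settings-daemon &
gsd_pid=$!
sleep 2                      # let it take over the XSettings selection
dump_xsettings > gtk-settings.txt
kill "$gsd_pid"
```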

For Xft settings specifically, I'm not sure what reads XSettings, what reads the X resource database, and what ignores all of this. I expect that GTK applications read XSettings, but I've seen some basic X programs like xterm appear to read either XSettings or X resources or perhaps both.

(And gnome-settings-daemon itself seems to do at least some DBus stuff, although I don't know if that's used for querying settings. All of this is annoyingly complicated. See this blog entry from 2010 for a picture of how complicated it was back then, and it's probably worse now.)

On the whole, if you have a mostly or entirely working environment now without a settings daemon involved, it seems safest to have the daemon publish only an extremely minimal set of XSettings settings. I started out feeling quite enthused about setting all of the XFT options but I'm now shifting more and more towards publishing only Gtk/FontName as the minimal fix for my issues. Of course, the mere existence of an active XSettings daemon may change program behavior (most especially including on remote machines), but you take what you can get in the world of modern X.

XSettingsNotes written at 01:40:23

2015-12-17

Fixing the GTK UI font in my Fedora 23 setup

When I upgraded my office machine from Fedora 22 to Fedora 23, one thing I noticed immediately is that some of the fonts in a number of my applications had changed. After I looked at things for a while, it was clear that the font used for UI elements in GTK based applications had shrunk between Fedora 22 and Fedora 23. This font is used, for example, for Firefox's URL bar and in much of Liferea's interface, although in both cases the actual content (web pages and feed entries) was not affected. Trying to fix this sent me down a whole bunch of rabbit holes, because I don't use an existing desktop environment that has all of this solved and integrated; instead I have my own minimal desktop, which leaves me on my own to solve this sort of thing.

The first thing I discovered is that changing font settings in gnome-tweak-tool (for GTK 3) or gconf-editor (for GTK 2) didn't seem to do anything. The changes clearly got saved, but they didn't change how Firefox, Liferea, and so on looked (even when set to absurd values that should have forced clear changes). It turns out that GTK applications don't seem to look this information up directly (or at least not things like global font settings); instead they have an entire protocol to communicate with a settings daemon. If you do not have a settings daemon running, at least in Fedora 23 your applications use default values and ignore your theoretical changes. So it turned out that the first thing I needed was a settings daemon.

Gnome has one of these, gnome-settings-daemon, but it turns out that there are better options, because of course this is actually a freedesktop standard called XSettings. I wound up with xsettingsd, which is a simple daemon with a simple configuration system, and apparently XFCE also has a relatively lightweight daemon that can be configured via a GUI. Part of what I like about xsettingsd is that it can be told to only make a very few settings available, which is what I want here; I only really want to fix my font issues, not start having to maintain lots of GTK configuration options.

(I stumbled over this via the ArchWiki page on font configuration; see also their page on GTK+. One issue with just running gnome-settings-daemon is that it has a whole bunch of side effects, since it expects to be run as part of an integrated Gnome environment.)

Fiddling with the Gtk/FontName XSetting got me close to the Fedora 22 appearance but not quite on it; my best result was setting the font to 'Sans 11' (which made things not obnoxiously small or constantly bold). To solve the mystery of what actual font and font size my applications were using on Fedora 22, I resorted to brute force using my home machine (which is still running Fedora 22) via fontconfig debugging options:

FC_DEBUG=1025 liferea

Per the fontconfig documentation, this dumps out enough information that you can determine what fonts the application is using at what font sizes. You get reports like:

Match Pattern has 25 elts (size 32)
        family: [...]
        [...]
        size: 10.4443(f)(s)
        [...]

Best score [...]
Pattern has 23 elts (size 23)
        family: "DejaVu Sans"(w)
        familylang: "en"(w)
        style: "Book"(w)
        [...]

Although someone who understands fontconfig can probably get a lot more out of these messages, for me this says that Liferea wound up getting DejaVu Sans at size '10.4443' [sic].

That weird fractional size turned out to be the missing piece of the puzzle. Although the Fedora default GTK UI font is apparently 'Sans 10' in both Fedora versions, in my Fedora 22 setup this was being scaled up just a bit and so it became Sans at 10.4443. In Fedora 23, it was no longer getting scaled up; 'Sans 10' was 10 points and so shrunk compared to Fedora 22. 'Sans 11' was of course just a bit bigger still.

(I suspect that Fedora 22 GTK was doing some DPI related scaling, although I can't make the numbers come out exactly right for scaling from 96 DPI. Fedora 23 may have dropped this scaling or it may have changed some DPI related thing in the environment so that no scaling gets done.)

Somewhat to my happy surprise, you can actually set Gtk/FontName to "Sans 10.4443" and have it work. On my Fedora 22 machine, the resulting font sizes are exactly the same with and without xsettingsd running, so I expect that when I get back to work tomorrow this will make Fedora 23 be completely happy.
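For reference, the xsettingsd configuration that this winds up with is tiny. My understanding is that a minimal ~/.xsettingsd needs only the one line:

```
Gtk/FontName "Sans 10.4443"
```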

On the whole this has been a very educational experience, even if it did basically eat up much of my day and frustrated me during chunks of it. I've learned a bunch more about how the Gnome and GTK environment operate, got a potentially useful surprise about Xft fonts, and in the process I wound up stumbling over several other issues that are going to improve my environment a bit.

(I have some stuff to write about on XSettings and related issues, but this entry is already long enough so that's going in another entry.)

Fedora23FixingGTKUIFont written at 00:07:59

2015-12-16

I just had another smooth Fedora version upgrade with ZFS on Linux

When I gave in to temptation and started using ZFS on Linux, one of my big concerns was whether it would cause problems when I upgraded from one Fedora version to another. In my initial report on my experiences I wasn't able to say anything here because I hadn't done a version upgrade yet. Well, now I have; in fact, I've gone through three of them now (to Fedora 21, 22, and now 23). So I can say that for me, this was problem free. As I expected a year ago, it was basically like installing another kernel; DKMS rebuilt everything for me and it all just worked.

With that said, I think there are two important things that help me a lot here. First, Fedora keeps kernels basically in sync between their major versions. This means that a Fedora upgrade is very unlikely to turn up an incompatibility between ZFS on Linux and a new kernel (an extreme case would be some change that means ZoL's kernel modules can't be built). I also stay pretty up to date with ZoL's development version, which means that I have the latest kernel compatibility fixes; as a result, I've never had problems with applying Fedora kernel updates in general.

Second, I do my Fedora upgrades via a live yum (now dnf) upgrade. I suspect that DKMS kernel module rebuilds work fine in other upgrade mechanisms, but there are at least more things that might go wrong there simply because things are happening in an environment that's at least somewhat different from the normal one. While a lot changes during a live upgrade, it's still reasonably close to a normal environment for rebuilding DKMS modules.

(Possibly this is just superstitious reassurance.)

As I mentioned back then, I do take the precaution of doing a test upgrade of a Fedora virtual machine (with ZoL installed and a pool running and so on) before I attempt the real upgrade. This can also be a reasonably good way of finding (and investigating) other upgrade surprises, although some things only become visible afterwards. Doing such a test VM upgrade doesn't take too long each time and I figure it's a good precaution to take in general (along with upgrading my work laptop first, because that's more dispensable than my office workstation).

ZFSOnLinuxSmoothFedoraUpgrade written at 00:59:19

2015-12-12

My views on the choice of Linux distribution

I have tangled and complicated feelings about the choice of a Linux distribution for myself and about the idea of changing distributions. So today I feel like trying to run down some of the complexities at play.

In more or less point form, because it's easier that way:

  • Given that I use a basically completely hand-built custom environment (I compile my own copy of fvwm, for example), in theory I'm basically indifferent to the actual Linux distribution or even Unix OS that I'm using. The out of box look of a distribution is basically irrelevant. This sounds like it should make switching easy (even all the way to e.g. FreeBSD).

    (I don't even use a graphical login manager, so I wouldn't notice gdm or lightdm or xdm or whatever configuration differences.)

  • However, setting up a highly custom environment that actually works takes a lot of effort because many of the desirable pieces of a modern desktop environment are neither standardized nor documented. Working audio, working automatically mounted USB memory sticks, fully working Gnome and KDE programs, all of that takes work, is at least somewhat different between different distributions, and is fragile. This creates a serious disincentive to switch distributions or Unixes once I have one of them working.

    (Having a fully hand-built custom environment makes this worse, since I expect to see basically no difference in my actual environment.)

  • My actual environment is not just my desktop. Instead it encompasses a bunch of things like heavily VLAN'd networking, a GRE tunnel with IKE, policy based routing, a webserver, ZFS on Linux with bind mounts and so on. Most of these parts of my machine are distribution dependent, because no one has standardized this sort of system level setup. Switching Linux distributions thus involves re-engineering at least some of it, if only to account for things like drastically different ways of configuring daemons.

  • Some of the elements of my custom environment have to be (re)built as system packages, which means that I wind up caring about how easy it is to patch and (re)build packages. I have wound up with strong opinions on this for RPMs versus .debs (cf) and I'm likely to wind up with equally strong opinions for any other packaging system. On top of that, I'm already quite familiar with building and working with RPMs; anything else would have to be learned, adding to the cost of switching.

    I'm not going to say that you can't have a better package format than RPM, because RPM sure has problems. But I do think it's very hard to beat right now and most package formats are going to fall short.

  • As a sysadmin, I have fairly strong opinions on right and wrong ways for the overall system to be structured and managed. Normal people would just ignore all of this, as it's not directly relevant to my custom desktop or the overall custom environment, but I care and so different distributions (or Unixes) have under the surface differences for me. Switching is guaranteed to give me exciting new things to be irritated about (as opposed to the old irritating things that I'm already familiar with).

    (There is also the simple mechanical issue of getting familiar with sysadmining a new distribution or OS. This always takes time and work.)
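To make the packaging point concrete, the basic cycle of patching and rebuilding an RPM goes roughly like the following. This is a sketch under current Fedora conventions; 'fvwm' is just an example package, and `dnf download` needs the dnf-plugins-core package:

```shell
# Fetch and unpack the source RPM (unpacks into ~/rpmbuild by default).
dnf download --source fvwm
rpm -i fvwm-*.src.rpm

cd ~/rpmbuild/SPECS
# ... drop your patch into ../SOURCES, add a Patch: line and a %patch
# invocation to the .spec file, and bump the Release: field ...
rpmbuild -ba fvwm.spec    # build new source and binary RPMs
```

Every packaging system has some equivalent of this cycle, but how pleasant it is varies a great deal, and familiarity with one of them is itself a sunk cost that raises the price of switching.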

The overall effect here is to make switching much harder and far less attractive than it looks on the surface. At this point I am far more likely to someday be pushed into switching away from Fedora than to be drawn toward switching to something else, and I have a lot of reasons to hope that that day is a long way away (because it would be a pain for probably very little practical gain).

As a corollary, at this point I'm not sure what a Linux distribution (or Unix) could do to get me to switch to it. Everyone is packaging more or less the same upstream open source software, and it seems really unlikely that a distribution is going to magically achieve stunning and compelling packaging and system management.

(There certainly was a time when there was a real difference between Linux distributions, but major ones strike me as pretty close now. For example, Debian, Ubuntu, and Fedora are all pretty up to date, all have pretty good package selections, release pretty frequently, and so on.)

DistributionChoiceViews written at 02:07:10; Add Comment


