Wandering Thoughts archives

2016-01-31

One thing I don't like about Fedora is slow security updates

I generally like Fedora, but there are things that they don't seem to do well. Unfortunately one of them is prompt security updates, especially for nominally supported but not current versions (such as Fedora 22 right now).

At the best of times I can generally expect a multi-day delay for security updates. Consider OpenSSL CVE-2016-0701. This was warned about in advance and announced on Thursday. Most distributions had immediate updates out that day (Ubuntu, for example). Fedora got an update out for Fedora 23 only on the weekend (I'm not clear if it became available Saturday or Sunday). It's not just OpenSSL, either; I've seen similar delays for OpenSSH, the kernel, and other things that I've heard security announcements about.

Worse is the current situation with Fedora 22, as far as I can see. I stumbled over this recently when I realized that my Fedora 22 machine had not been rebooted in over 30 days. I assure you that there have been Linux kernel security issues in the past 30 days that apply to the Fedora 22 kernel, because Fedora 23 uses basically the same kernel and has had a series of kernel updates over that time. Yes, Fedora 22 is not the current version, but in theory it is still supported.

(Fedora 22 may also be missing other security updates. I normally don't really keep track of security issues to the level of checking whether I have a vulnerable package and whether it's been updated; that's something I delegate to my distribution. I notice kernels because I notice reboots being needed due to them.)
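For what it's worth, about the most checking I'm willing to do is to ask dnf what pending updates Fedora has flagged as security updates. As a sketch, assuming a dnf new enough to have the updateinfo command:

# list pending updates that Fedora has flagged as security updates
dnf updateinfo list security

# show the details (CVEs and so on) for those updates
dnf updateinfo info security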

Of course Fedora never promised us Fedora users anything in particular. It's open source so I get to keep all of the pieces. I'm just glad that I only run Fedora on my personal machines, which are relatively locked down, and I don't have any Fedora servers.

(At this point I would strongly advise against running Fedora on servers for the obvious reasons. Use something like Debian, or Ubuntu if you have to and don't care about Canonical's increasingly questionable behavior.)

FedoraSlowSecurityUpdates written at 20:55:22

2016-01-26

Why my home backup situation is currently a bit awkward

In this recent entry I mentioned that my home backup strategy is an awkward subject. Today I want to talk about why that is so, which has two or perhaps three sides: the annoyances of hardware, the slowness of disks, and software that doesn't just do what I want, partly because I want contradictory things.

In theory, the way to good backups is straightforward. You buy an external disk drive enclosure and a disk for it, connect it to your machine periodically, and 'do a backup' (whatever that is). Ideally you will be disciplined about how frequently you do this. And indeed, relatively early on I set myself up to do this, except that back then I made a mistake: rather than get an external enclosure with both USB and eSATA, I got one with just USB, because my machine at the time had no eSATA ports. To be more precise, I got an enclosure with USB 2.0, because that's what was available at the time.

If you know USB 2.0 disk performance, you are now wincing. USB 2.0 disks are dog slow, at least on Linux (I believe I once got a benchmark result on the order of 15 MBytes/sec), and they also usually hammer the responsiveness of your machine into the ground. On top of that I didn't really trust the heat dissipation of the external drive case, which meant that I was nervous about leaving the drive powered on and running overnight or the like. So I didn't do too many backups to that external enclosure and drive. It was just too much of a pain for too long.
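If you want to measure this sort of dismal number yourself, a crude benchmark is something like the following, with a hypothetical mount point; conv=fdatasync makes dd flush the data to the disk before it reports a transfer rate:

# write 4 GB and report the sustained write speed
dd if=/dev/zero of=/mnt/usbdisk/testfile bs=1M count=4096 conv=fdatasync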

With my second external drive case and drive, I learned better (at least in theory); I bought a case with both USB and eSATA. Unfortunately the USB was still only 2.0, and then something in the combination of the eSATA port on my new machine and the case didn't work entirely reliably. I've been able to sort of work around that, but the workaround doesn't make me really happy to have the drive connected, there's still a performance impact from backups, and the heat concerns haven't gone away.

(My replacement for the eSATA port is to patch a regular SATA port through the case. This works but makes me nervous and I think I've seen it have some side effects on the machine when the drive connects or disconnects. In general, eSATA is probably not the right technology here.)

This brings me to slow disks. I can't remember how fast my last backup run went, but between the overheads of actually making backups (in walking the filesystem and reading files and so on) and the overheads of writing them out, I'd be surprised if they ran faster than 50 MBytes/sec (and I suspect they went somewhat slower). At that rate, it takes an hour to back up only 175 GB. With current disks and hardware, backups of lots of data are just going to be multi-hour things, which does not encourage me to do them regularly at the best of times.

(Life would be different if I could happily leave the backups to run when I wasn't present, but I don't trust the heat dissipation of the external drive case that much, or for that matter the 'eSATA' connection. Right now I feel I have to actively watch the whole process.)

As I wrote up in more detail here, my ideal backup software would basically let me incrementally make full backups. Lacking something that does that, the low effort system I've wound up with for most things uses dump. Dump captures exact full backups of extN filesystems, its output can be compressed, and I can keep multiple copies, but a dump run is not something you can do incrementally; it's an all or nothing affair, where either you let it run for as many hours as it winds up taking or you abort it and get nothing. Using dump also requires manually managing the process, including keeping track of old filesystem backups and removing some of them to make space for new ones.

(Life would be somewhat different if my external backup disk was much larger than my system disk, but as it happens it isn't.)
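To give this a concrete shape, here's a sketch of the sort of dump run I mean; the paths, dates, and filesystem are all hypothetical, not literally my setup:

# make room by hand first; dump won't prune old backups for you
rm /backup/home-20150601.dump

# a level 0 (full) dump of /home, zlib compressed (-z9) and recorded
# in /etc/dumpdates (-u)
dump -0 -u -z9 -f /backup/home-$(date +%Y%m%d).dump /home

# sanity check: list the start of the new dump's contents
restore -t -f /backup/home-20160126.dump | head

Once started, that dump command runs until it's done; interrupt it partway and you have nothing usable.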

This is far from an ideal situation. In theory I could have regular, good backups; in practice there is enough friction in all of the various pieces that I have de facto bad ones, generally only made when something makes me alarmed. Since I'm a sysadmin and I preach the gospel of backups in general, this feels especially embarrassing (and awkward).

(I think I see what I want my situation to look like moving forwards, but this entry is long enough without trying to get into that.)

HomeBackupHeadaches written at 23:18:37

2016-01-21

One example of why I like ZFS on Linux

Yesterday evening, my office workstation blared notifications at me of SMART errors on one of my HDs. The disk is old enough by now for this not to be too surprising (and it's a 1 TB Seagate, which we've had some problems with), but still, a disk issue is never exactly welcome, even if all of the data on it is mirrored. Since we've seen SMART errors be not really a problem before, I did the obvious and easy thing to test the situation: I started a ZFS pool scrub on the ZFS pool that takes up most of the disk. This scrub turned up actual read errors (as reported by the disk to the kernel), but it also caused ZFS to say that it was repairing the issue. After the scrub finished without any reported errors (but with 256 KB reported repaired), I did a second scrub; this one did not report any problems and didn't cause the disk to report any read errors.

(Specifically the HD reported '3 currently unreadable (pending) sectors' and '3 offline uncorrectable sectors'. The SMART daemon reported that this condition had cleared somewhat after the first pool scrub and repair finished.)
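For the record, the mechanics of all this are pleasantly simple. A sketch, with a hypothetical pool name and disk device:

# see what SMART thinks of the suspect disk
smartctl -A /dev/sdb | egrep -i 'pending|offline_uncorrect'

# scan (and repair) everything in the pool, then check the verdict
zpool scrub maindisk
zpool status -v maindisk

The 'zpool status' output is where the read errors and the repaired byte count show up.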

So, what seems to have happened here is that ZFS scanned most of the disk, found some bad sectors, and quietly rewrote them in place. When it did the rewrite, the normal operation of the HD caused the bad sectors to be spared out and replaced by good ones. My HD is, for the moment, back to being healthy and doesn't need to be replaced. And if it does need to be replaced, the ZFS scrub gives me pretty good confidence that the data on the other mirror is fully intact and there are no latent read errors that are going to cause me heartburn.

This is not a big save by ZFS, the way I've had on other systems. But I consider it a midsized save; the ZFS scrub turned an alarming and uncertain situation into a much more certain one that may even be fully fixed.

None of this is exceptional for ZFS and parts of it are normal for anyone with mirrored storage (which has saved me from abrupt disk failure at home). But the whole reassuring, simple, and pain-free experience is unusual for Linux, and that in a nutshell is one of the big benefits of ZFS on Linux and a good part of why I like it so much. Dealing with potentially failing drives and uncertain read error locations and so on would be much more hassle with basically any other setup, and hassle is exactly the thing I don't want when I'm already jumpy enough because smartd is alarming me.

(There are other reasons to like ZFS, of course. And you can get this sort of scan, checksum verify, and repair experience with btrfs too as far as I know, assuming that you're willing to use btrfs in its current state.)

ZFSOnLinuxScrubSave written at 00:10:32

2016-01-11

The benefits of flexible space usage in filesystems

My home and work Linux machines are very similar. They have about the same collection of filesystems, and they both have significant amounts of free disk space (my home machine more than my work machine, because I've got much bigger disks there). Despite this, the filesystems on my work machine have lots of free space while the filesystems on my home machine run perpetually close to being out of space.

At one level, the difference is in how the disk space is managed. At work, I've migrated to ZFS on Linux; at home, everything is ext3 on top of LVM (on top of a software RAID mirror). But the real answer is that shrinking an extN filesystem and a LVM logical volume is kind of a pain, and also kind of dangerous (at least as far as I know). If I grew filesystems wildly at home, it'd be a pain to shrink them later if I needed the space elsewhere, so for the most part I only expand filesystems when I really need the space.

In theory this shouldn't make any difference; if I need the space, I'll grow the filesystem. In practice it makes me irrationally reluctant to do things that need substantial chunks of space temporarily. I would probably be better off if I adopted a policy that all of the filesystems I used actively should have, say, 40 GB of free space more or less at all times, but I'm not that sensible.

(There's some irrational bit of me that still thinks that disk space is in short supply. It's not; I have more than a TB free, and that's after extravagantly using space to store more or less every photograph I've ever taken. In RAW format, no less.)

This doesn't happen at work because ZFS dynamically shares the free pool space between all of the filesystems. Unless you go out of your way to set it up otherwise, there is no filesystem specific free space, just general pool free space that is claimed (and then released) as you use space in filesystems. Filesystems are just an organizational thing, not something that forces up-front space allocation. So I can use however much space I want, wherever I want to, and the only time I'll run out of space in a filesystem is if I'm genuinely out of all disk space.
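As a quick sketch of what this looks like in practice (the pool and filesystem names are hypothetical):

# ZFS filesystems are created without any size at all
zfs create tank/photos
zfs create tank/writing

# every filesystem reports the same AVAIL: the pool's free space
zfs list -o name,used,avail

Compare this to extN on LVM, where all the space a filesystem might ever use has to be handed to it up front.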

This is a really nice feature of ZFS, and I wish I had it at home. It would clearly make my life easier by entirely removing one current concern; I just wouldn't have to manage space on a per-filesystem basis any more. Space would just be space.

(Someday I will have this at home, by migrating my home system to ZFS. Not having ZFS at home is still tolerable, though, so I suspect that I won't migrate until I'm migrating hardware anyway, and that probably won't be for a while for various reasons.)

PS: btrfs is not what I consider a viable option. At this point I'd probably only consider btrfs once a major Linux distribution has made it their default filesystem for new installs and has survived at least a year of that choice without problems. And I'm not holding my breath for that.

Sidebar: Why I believe shrinking a LVM logical volume is dangerous

To grow a filesystem inside a LVM volume, you first grow the volume and then grow the filesystem to use the new space. To shrink a volume, you do this in reverse: first you shrink the filesystem, then you shrink the volume. However, as far as I know there is nothing in LVM that prevents you from accidentally shrinking the volume so that it is smaller than the filesystem. Doing this by accident will truncate the end of your filesystem, almost certainly lose some of your data, and quite probably destroy the filesystem outright.
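To make the ordering concrete, here's a sketch using a hypothetical vg0/home logical volume holding an ext4 filesystem:

# growing, the safe direction: extend the LV first, then the filesystem
lvextend -L +20G /dev/vg0/home
resize2fs /dev/vg0/home

# shrinking, the reverse: shrink the filesystem first, then the LV
umount /home
e2fsck -f /dev/vg0/home
resize2fs /dev/vg0/home 40G
lvreduce -L 40G /dev/vg0/home

Nothing checks that the size you give lvreduce matches (or at least safely exceeds) the size you just shrank the filesystem to; mistype '40G' as '4G' and you've destroyed the filesystem.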

It would be really great if LVM knew about common filesystem types, could read their superblocks to determine the FS-level size, and by default would refuse to shrink a logical volume below that size. But as far as I know it doesn't.

(As a practical matter I probably never want to shrink a filesystem without a current backup of it, which brings up the awkward subject of my home backup strategy.)

FlexibleFilesystemSpaceBenefit written at 00:48:16

2016-01-09

The convenience of having keyboard controls for sound volume

For a long time now, many keyboards have come with various additional keys over and above the traditional set. First it was Windows keys, and then people started adding various 'multimedia' keys and buttons for things. For an equally long time I used a very minimal keyboard without such keys, so I rolled my eyes a little bit at the indulgence (and the wasted space) and otherwise ignored the issue of keyboard 'multimedia' control. Even when I recently got a keyboard that sort of had these keys, I initially kept ignoring them and the whole issue; it was easy enough to reach out to the appropriate speaker to fiddle the volume, or maybe call up the applet I have sitting there for general volume control.

It turns out that I was kind of being a fool about this (as usual). Having keys that I wasn't actually using nagged at me a bit, and recently I got just irritated enough with reaching for the volume control to figure out how to wire these keys up to do something. The first thing I learned is that it was relatively easy: there are command line tools that will let you control the volume, the keys generate distinct keycodes, and those keycodes are easily bound to actions in fvwm. My current setup is:

# fvwm Key syntax: key name, context ('A' = any), modifiers ('N' = none),
# and then the action to run.
Key XF86AudioRaiseVolume   A N   Exec amixer -q set Master '2%+'
Key XF86AudioLowerVolume   A N   Exec amixer -q set Master '2%-'
Key XF86AudioMute          A N   Exec amixer -q set Master toggle

(Some of my sources when I researched this include here, here, and here. Some of these also give the pactl equivalents of my amixer commands. See also this little GUI tool (via), although I actually use the old Gnome volume control applet from Fedora 14, which miraculously hasn't broken yet.)
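For reference, on a PulseAudio system the pactl equivalents would look something like this (a sketch, assuming a pactl new enough to understand @DEFAULT_SINK@):

pactl set-sink-volume @DEFAULT_SINK@ +2%
pactl set-sink-volume @DEFAULT_SINK@ -2%
pactl set-sink-mute @DEFAULT_SINK@ toggle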

When I set this up, I expected it to be basically a curio. As you may have gathered, I was wrong; the keyboard keys have rapidly become my primary method of doing volume control. It's not because they're any better than turning the volume knob or moving the volume slider up or down (in some ways they're worse). It's because they're significantly more convenient: my hands are usually right there on the keyboard, so volume shifts are just a quick tap away. As small as it seems, not having to reach for the volume knob or the mouse really does make a palpable difference in how things feel. Even though it's still a little bit of work to shift my hand, it's less work, less interruption, and less annoyance, and I like all of that.

(The convenience of having a mute/unmute key was especially a surprise. It's now trivial to mute the sound when I'm watching some YouTube video just for the visuals, and as a result I do it fairly often.)

As a side note, I arrived at a 2% volume change per keypress partly by experimentation about what felt good and partly by observing that it's easy to rapidly repeat a keypress, so small changes were probably better than big ones. Also, I keep my sound volumes rather low so small changes in nominal volume can have clearly audible effects.

LinuxVolumeKeys written at 00:09:54

