Wandering Thoughts archives

2008-12-26

The consequences of the Debian OpenSSL compromise

Although this is rather behind the times, I don't think I've seen the practical consequences of the Debian OpenSSL vulnerability written down clearly and in one place. So here is my list, concentrating on SSH and SSL certificates:

  • SSH host keys and personal SSH keys generated on a vulnerable system are entirely compromised.
  • SSL certificates generated with OpenSSL on a vulnerable system are compromised. This especially includes signed certificates used on public websites; if this applies to you, get ready to explore the marvelous world of certificate compromises.
  • any SSH DSA key used from a vulnerable machine could have been compromised, because DSA signatures made with a predictable nonce leak the private key.

  • pretty much any SSH session involving a vulnerable machine (on either end) can be decrypted by an attacker, because of how SSH does encryption: the key exchange that sets up the session keys needs good randomness from both ends. It is important to understand that this has nothing to do with whether or not you are using vulnerable keys; either end can destroy the effectiveness of the session encryption.

  • even with uncompromised SSL certificates, some SSL sessions involving a vulnerable machine (on either end) can be decrypted. Affected sessions are those using SSL forward secrecy.
  • I believe that most sessions not using SSL forward secrecy can be decrypted if they involve a compromised SSL certificate, regardless of whether the session involves any vulnerable machines.

Or in short: even if you are not using bad keys or certificates, a vulnerable system is still bad news.
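Because the weak keys are recognizable purely by their fingerprints, Debian shipped blacklist packages (openssl-blacklist, openssh-blacklist) and an ssh-vulnkey tool that simply look a key up in a list. Here is a minimal sketch of that lookup; the fingerprints are made up for illustration, and in real use they would come from the shipped blacklist files and from `ssh-keygen -l`:

```shell
# Sketch of the idea behind ssh-vulnkey: checking a key is just a
# lookup of its fingerprint in a blacklist of known-weak fingerprints.
# These fingerprints are invented for demonstration purposes.

check_key() {
    # $1: key fingerprint, $2: blacklist file
    if grep -q "^$1\$" "$2"; then
        echo "COMPROMISED"
    else
        echo "ok"
    fi
}

blacklist=$(mktemp)
printf '%s\n' 0bc18ecd20c32f5e3b84 c766a9fc2d2e2f4b8a91 >"$blacklist"

check_key 0bc18ecd20c32f5e3b84 "$blacklist"   # a blacklisted key
check_key ffeeddccbbaa99887766 "$blacklist"   # a clean key
rm -f "$blacklist"
```

The real blacklist files are larger and store truncated fingerprints, but the principle is the same: a weak key is weak forever and can be spotted offline.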

Complicating the SSL situation is the question of which SSL library an application uses. Some number of Debian systems have both OpenSSL and GNUTLS installed, and GNUTLS is not vulnerable. So an application using GNUTLS does not lose any perfect forward secrecy it had; if it did not have PFS, though, its sessions are still vulnerable if it was using a compromised certificate generated by OpenSSL. (The reverse also holds: a certificate generated by GNUTLS on a vulnerable system is not compromised.)

(OpenSSH always uses OpenSSL, and people usually, though not always, generate certificates with OpenSSL. Web servers, IMAP servers, and so on can vary widely, but in practice most use OpenSSL.)
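One quick way to see which library a given daemon actually links against is to look at its shared-library dependencies with ldd. A small sketch; the helper function and the canned ldd-style output here are illustrations, and in real use you would pipe the output of `ldd /path/to/your/daemon` into it:

```shell
# Report which SSL/TLS libraries show up in ldd output read from stdin.
uses_which_ssl() {
    grep -oE 'lib(ssl|gnutls)[^ ]*' | sort -u
}

# Real use would look like:
#   ldd /usr/sbin/apache2 | uses_which_ssl
# Demonstration on canned ldd-style output:
printf '%s\n' \
    'libssl.so.0.9.8 => /usr/lib/libssl.so.0.9.8 (0xdeadbeef)' \
    'libc.so.6 => /lib/libc.so.6 (0xdeadbeef)' | uses_which_ssl
```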

Note: 'Debian' here includes all Debian derived distributions, which includes at least Ubuntu (and its variants), Knoppix, and Xandros.

DebianSSLConsequences written at 02:19:12

2008-12-20

The source of spurious .rpmnew files

I wrote before about how updating RPMs would occasionally leave .rpmnew files behind that were, in fact, identical to the normal version of the file. I believe that I now understand what causes this and what's going on: I think it's another manifestation of the RPM multi-architecture file ownership problem.

The .rpmnew files are created when you update a package where you've changed one of its configuration files from the default version and the package also includes a new version of the configuration file. RPM checks for changes by comparing the MD5 checksum of the installed file against the MD5 checksum in the database for the old package; if they differ, you've edited the configuration file.
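That check can be sketched in a few lines of shell. Here the recorded checksum is just a variable, where RPM would read it from its database (compare the output of `rpm -V`):

```shell
# A runnable sketch of RPM's edited-config test: compare the installed
# file's current MD5 against the checksum recorded at install time.
edited_config() {
    # $1: installed file, $2: MD5 recorded when the old package was installed
    current=$(md5sum "$1" | awk '{print $1}')
    if [ "$current" = "$2" ]; then
        echo "unmodified"
    else
        echo "edited"
    fi
}

conf=$(mktemp)
echo "option = default" >"$conf"
recorded=$(md5sum "$conf" | awk '{print $1}')

edited_config "$conf" "$recorded"     # prints: unmodified
echo "option = changed" >"$conf"
edited_config "$conf" "$recorded"     # prints: edited
rm -f "$conf"
```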

The problem comes up when you upgrade a package that you have for both architectures with a (shared) configuration file. The configuration file is 'owned' by both packages, and RPM applies updates one by one and immediately puts new files (including new versions of shared files) into place. So I think that what happens goes like this (as an example):

  • RPM updates libfoo.x86 from 1.1 to 1.2. The (shared) configuration file /etc/foo.conf is the stock 1.1 version, so it gets replaced with the version from libfoo-1.2.x86.
  • RPM updates libfoo.x86_64 from 1.1 to 1.2. Since /etc/foo.conf is no longer the stock 1.1 version, RPM creates the libfoo-1.2.x86_64 foo.conf as foo.conf.rpmnew.

Thus, to see the problem I believe that you need to be on a multi-arch machine, update a package that you have for both architectures, and have that update change a configuration file. This seems to be fairly uncommon, at least for the packages that I have installed on my machines.
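Whatever the cause, the spurious files themselves are easy to find mechanically, since they are byte-identical to the file they shadow. A sketch, demonstrated on a scratch directory rather than on /etc:

```shell
# Find .rpmnew files that are identical to the file they shadow --
# the spurious kind described above.
find_spurious_rpmnew() {
    # $1: directory to scan
    find "$1" -name '*.rpmnew' | while read -r new; do
        base=${new%.rpmnew}
        if [ -f "$base" ] && cmp -s "$base" "$new"; then
            echo "spurious: $new"
        fi
    done
}

scratch=$(mktemp -d)
echo "same" >"$scratch/foo.conf"
echo "same" >"$scratch/foo.conf.rpmnew"       # identical: spurious
echo "default" >"$scratch/bar.conf"
echo "changed" >"$scratch/bar.conf.rpmnew"    # genuinely different
find_spurious_rpmnew "$scratch"               # reports only foo.conf.rpmnew
rm -rf "$scratch"
```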

(I think that this is an RPM bug, though, not just a natural consequence of how things work in a multiarch world. RPM could be smart enough to realize what is actually going on and really should be, since the current behavior is not really correct.)

SpuriousRmpnewFiles written at 01:11:01

2008-12-18

Why LVM snapshots should really have hooks into filesystems

Like a lot of block-level logical storage managers, LVM has a basic read-only snapshot facility. A remapping storage manager makes it easy to implement snapshots, since basically all you do is remap blocks on write (although the devil is in the details, as always). And like most storage managers with snapshot facilities, LVM (I believe) lacks hooks into filesystems that would tell them about an impending snapshot.

You might reasonably ask why the filesystem should care about this. The simple answer is that you really want the filesystems in your snapshots to be marked as clean and consistent, so that when you go to mount them, nothing freaks out about dealing with a dirty filesystem. These days, a lot of filesystems want to do various sorts of recovery actions when you try to mount a dirty filesystem; for example, when ext3 mounts a dirty filesystem, it tries to replay the log of pending transactions to bring the filesystem up to consistency. At least some of these recovery actions (such as ext3's) don't go together very well with a genuinely read-only filesystem.

(I believe that ext3 has sensible reasons for wanting to replay the log even when you mount a filesystem read-only, since if it did not replay the log to the disk it would have to build an in-memory version of the log's changes. Not having two recovery paths has to be more reliable, especially when the in-memory one is only going to be exercised infrequently.)
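In the absence of such hooks, the usual workaround is to take the snapshot by hand and then cope at mount time. A hedged command sketch, not runnable as-is since it needs root and a real volume group; the names vg0, data, and the mount point are assumptions:

```shell
# Take a 1GB copy-on-write snapshot of the 'data' logical volume:
lvcreate -s -L 1G -n data-snap /dev/vg0/data

# A plain read-only mount of the snapshot will still try to replay the
# ext3 journal; 'noload' skips the replay, at the price of seeing the
# filesystem in its unrecovered, possibly inconsistent state:
mount -o ro,noload /dev/vg0/data-snap /mnt/snap
```

This works, but it is exactly the kind of thing that a snapshot hook into the filesystem would make unnecessary: the filesystem could quiesce itself and mark the snapshot clean before the block layer captures it.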

Making sure that snapshots are clean and consistent also makes them more useful for block-level backup tools like dump. An unchanging filesystem makes it more likely that dump will work, but I have seen frozen filesystems that were still in a peculiar enough state that dump failed. And speaking as a sysadmin, I am really in favour of reliable dump.

LVMHooksNeeded written at 02:10:46

2008-12-08

How I split up my workstation's disk space

There are a lot of different ways to partition disks and split out filesystems. Mine isn't necessarily the best one; it's just what I use, partly because I am cautious and conservative.

My workstation has two disks, partitioned identically and generally mirrored. I split the filesystems up like so:

  • /boot is a separate and non-mirrored filesystem. I have a /boot2 on the second disk which I synchronize by hand every so often (usually not, to be honest, which is a bad habit).

  • there are two swap partitions, one on each disk. I don't bother mirroring swap; it's too much work for what I get out of it.

  • /, /usr, and /var are each in separate mirrored partitions. (I still make them separate filesystems, which may be pointless these days.)

  • all the rest of the disk space is in a single mirrored partition, which is used for a single LVM logical volume.
  • all other filesystems are in that logical volume (and are thus sitting on LVM over RAID-1, which seems to perform well enough).

Keeping the system partitions outside of LVM means that I can boot even if something goes wrong with LVM (which it sometimes does). I put my other data into LVM because LVM is a lot more convenient.
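Put together, the layout looks something like this (device names and partition numbers are illustrative; the real ones depend on partition order):

```
sda1 + sdb1  ->  md0   /       (RAID-1 mirror)
sda2 + sdb2  ->  md1   /usr    (RAID-1 mirror)
sda3 + sdb3  ->  md2   /var    (RAID-1 mirror)
sda5         ->        /boot   (unmirrored; /boot2 on sdb5, synced by hand)
sda6, sdb6   ->        swap    (one per disk, unmirrored)
sda7 + sdb7  ->  md3   LVM physical volume -> everything else
```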

If I were doing this today, I would have two mirrored partitions each for /, /usr, and /var, with the goal of using the second set to make operating system upgrades less alarming by always having a fully bootable and functional version of the old system sitting around. I figure that on modern disks, 40 to 50 GB is a cheap price to pay for such an insurance policy and well worth it.

(Of course, I don't know if this works in practice or if it would horribly confuse Fedora's install/upgrade stuff, although since there is a disk upgrade and a Fedora upgrade in my near future, I'm probably going to get to find out. And in the credit-where-credit-is-due department, this was inspired by what little I know of Sun's 'live upgrade' stuff.)

My only current observation on filesystem sizes is that /var needs to be much bigger than I thought it would be. My next /var will be at least 10 GB and may go all the way to 20 GB. (Disk space is cheap and running out is painful, which is one drawback to not having system filesystems in LVM.)

WorkstationPartitioning written at 23:44:53

