Wandering Thoughts archives

2014-05-19

Why desktop Linuxes want you to reboot after updates

Anyone who uses a mainline Linux desktop may have noticed that the system increasingly wants you to either reboot or log out and log back in again after you apply distribution updates (the two are roughly equivalent in terms of disrupting you, so I'm going to treat them the same). You might wonder why Linux has been shifting towards this increasingly, well, Windows-like experience. While I don't have direct knowledge of the internal decisions of Linux distributions, as a system administrator I can certainly see the factors that are driving people towards it.

There are two basic problems. The first one is simply getting your new updates fully activated when they may be updating either long-running programs or shared libraries used by long-running programs (because the copy of the shared library that a program is using is usually fixed at the point it starts). Some of these programs may be things like browsers and email clients; others may be daemons that are deeply tangled into the desktop environment (or even the system environment) to the point where other things assume that they never exit or restart.

(Making a desktop environment that can survive random parts of itself restarting is actually quite a challenge. For instance, you're going to need lots of programs to be able to safely serialize their state and then re-execute themselves, including security-sensitive programs like ssh-agent. Many of them don't do this today, so you've got a lot of work ahead.)
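The shared-library half of the first problem is easy to observe for yourself. As a sketch (assuming a Linux /proc layout, and root if you want to see every process), a library that has been replaced on disk but is still mapped by a running process shows up in /proc/PID/maps with a '(deleted)' marker:

```shell
#!/bin/sh
# List processes that still have a now-replaced (deleted) shared
# library mapped; these are the ones a restart or reboot would fix.
for map in /proc/[0-9]*/maps; do
    pid=${map#/proc/}
    pid=${pid%/maps}
    # A replaced-but-still-mapped .so carries a '(deleted)' suffix.
    if grep -E '\.so.* \(deleted\)' "$map" >/dev/null 2>&1; then
        printf '%s: %s\n' "$pid" "$(cat /proc/$pid/comm 2>/dev/null)"
    fi
done
```

On a freshly booted or freshly restarted system this prints nothing; right after a glibc or desktop-library update it tends to print a lot.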

The second problem is that of making a partially updated environment work. You get such a partially updated environment when some running programs (or loaded shared libraries, or whatever) are the old, pre-update versions while others are the new post-update versions. Unlucky programs can also see a partially updated environment if they start during the update process and see some files from after the update and some files from before it. Pragmatically it's quite hard for a distribution to even test that stuff works in this sort of situation; there are a huge number of different combinations and things that can go wrong and most of this is upstream software that a distribution has little power over.

The easy way out for both problems is to tell you to either log out or reboot after updates have been applied, depending on what's been updated. A reboot guarantees that everything is the current version and it's all coherent with each other (barring bugs in the actual updates). It may be overkill but it's simple and reliable overkill and this has a certain attraction to distributions that want to just make things work.

(This isn't the same issue as offline updates, but it's closely related. Offline updates are an even more extreme version of this that try to avoid potential problems even while applying updates.)

WhyRebootOnUpdates written at 01:22:19

2014-05-16

Some notes from migrating towards encrypted SSH keys

I'll start with the admission: up until now I've used unencrypted SSH keys on my home and office workstations, ultimately because I didn't think that doing so made the risks particularly worse than using encrypted keys and it's undeniably more convenient. For hand-waving reasons I've recently decided to experiment with encrypted keys on at least my home machine so this is a collection of early notes on the process.

The most important part of making encrypted SSH keys convenient is to be running ssh-agent. Normal people will have this done automatically as part of their X session because GDM or xdm or the KDE equivalent sets all of this up for you. I'm the kind of crazy person who starts their X session by hand so I had to add a magic incantation to start it in the right way. On Fedora 20 and in my case this is:

xinit /usr/bin/ssh-agent /bin/env TMPDIR=$TMPDIR \
        /usr/bin/dbus-launch --exit-with-session \
        $HOME/.xinitrc -- <server args>

(This comes from /etc/X11/xdm/xinit/xinitrc-common. I assume the TMPDIR bit is necessary because ssh-agent normally changes $TMPDIR in the environment it passes to anything it starts.)
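Once the session is running under ssh-agent, it's worth a quick sanity check that programs can actually reach the agent. A small sketch, using the exit statuses documented in ssh-add(1):

```shell
# ssh-add -l exits 0 (keys listed) or 1 (agent reachable but empty)
# when it can talk to an agent, and 2 when it cannot reach one.
ssh-add -l >/dev/null 2>&1
status=$?
if [ "$status" -eq 2 ]; then
    echo "no agent reachable"
else
    echo "agent reachable"
fi
```

If this reports no agent, the likely culprit is that $SSH_AUTH_SOCK didn't make it into your session environment.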

If you have an existing unencrypted SSH key (as I did) you encrypt it with 'ssh-keygen -p'. This prompts for everything and is smart enough to recognize that your key is unencrypted. Note that you don't encrypt the public key, just the private key.
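As a concrete sketch (done in a scratch directory so nothing real is touched; interactively you'd just run 'ssh-keygen -p -f <keyfile>' and answer the prompts):

```shell
# Make an unencrypted key, then encrypt it in place. -P and -N
# supply the old (empty) and new passphrases so this runs
# unattended; interactively ssh-keygen prompts for both.
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/key"
ssh-keygen -q -p -f "$dir/key" -P '' -N 'example passphrase'
```

Afterwards the private key can only be loaded with the new passphrase; the public key file is untouched.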

Loading keys into your running ssh-agent is done with ssh-add. On Fedora 20, if you have the openssh-askpass package installed ssh-add will automatically use the graphical frontend from it when needed without you having to set $SSH_ASKPASS, which is somewhat contrary to the manpage. This behavior may also happen for other graphical ssh-add password agents; I haven't tested. I invoke ssh-add early on in my .xinitrc (after I've started my window manager but before almost anything else) so that I have automatic SSH logins available for anything else I want to start.

(I've found that I want a cover script for ssh-add because I don't put my SSH keys in the default place. The cover script is just 'ssh-add /path/to/identity-rsa', more or less.)

I find it somewhat annoying that I have yet to find an ssh-add password agent that will accept an X -geometry argument or any equivalent of it. I don't want to have to place the password window; I want it to just appear in a fixed place so I can park my mouse there and type the password. If I decide I really care about this the solution is to run ssh-add in a disposable xterm because I can definitely place those.

(Ie run 'xterm -geometry ... -e ssh-addkeys', where ssh-addkeys is my cover script. When ssh-add is run this way it just prompts on the terminal instead of popping up a graphical window for it.)

I lock my screen with the xlock from xlockmore. This offers a pretty convenient way to integrate with ssh-agent; you can run a command before the screen locks (ie 'ssh-add -D' to drop all keys) and then run a second command afterwards which gets fed the password you used to unlock the screen. If you use your regular password as your SSH key password, this can thus wind up re-adding your SSH keys to ssh-agent without any further input from you.

(Of course it would probably be more secure to use a separate password for your SSH keys, but then it would be less convenient and you might wind up locking your screen less (or not purging the keys from ssh-agent when you lock the screen). I've chosen to go with convenience here.)

This is getting long enough that I think I'm going to stop here for now. I have some remaining unsolved issues with encrypted keys but they'll go in a separate entry.

PS: users of more sophisticated desktop environments may have all of this integrated into their desktop's existing key management infrastructure so that everything unlocks on login without you having to do anything and screen locking is automatically handled and so on. This is certainly the way it should be and modern desktops do have general password stores.

Sidebar: a post-unlock xlock script

According to the xlock manpage, a sample script for the -pipepassCmd argument comes with xlock. This script is not packaged in the Fedora 20 version and since I had to dig it out of some web searches, here it is for anyone else (without the original comments):

#!/usr/bin/perl -w
# Feed the screen unlock password (handed to us on standard input by
# xlock's -pipepassCmd) to an interactive ssh-add run via Expect.
use strict;
use Expect;

# xlock writes the unlock password to our standard input.
my $pass = <STDIN>;
my $exp = Expect->spawn('/u/cks/bin/X11/ssh-addkeys');
# Wait up to 10 seconds for the passphrase prompt, answer it, and
# then wait for any final prompt before closing things down.
$exp->expect(10, ':');
$exp->send("$pass\r\n");
$exp->expect(10, ':');
$exp->hard_close;

On Fedora 20, you'll need the perl-Expect package. In general you'll need to change the path it spawns from my ssh-addkeys script to something that runs ssh-add with whatever keys are appropriate for you.

(It's kind of a pity that ssh-add can't do this by itself. All it would take is an argument to specify 'just read the key from standard input and be done with it'.)

EncryptedSSHKeyMigration written at 01:48:51

2014-05-07

How I use Unbound on Fedora 20 to deal with the VPN DNS issue

A while back Pete Zaitcev asked about the issue of VPNs versus DNS (and wound up with a good answer). This is the problem where you have a VPN connection off to some internal networks somewhere and you want to look up VPN hosts using one DNS server (the VPN's DNS server) but look up everything else with another DNS server.

I have a similar situation with my home machine, as I have an IPSec tunnel to work that basically puts one side of my machine on the work network, complete with a bunch of internal DNS and internal networks. Since I leave the IPSec tunnel up all the time, in theory I could rely on work's internal DNS server for everything. In practice the tunnel can get interrupted and anyway I wanted to avoid the latency hit for sending DNS over it when possible. My solution is to run Unbound as a local caching DNS server, configured to mostly go to the Internet but to divert lookups for some zones off to our internal DNS servers.

(Well, I do tricks with some other lookups but they're not as interesting.)

Fedora 20 has split the Unbound configuration up into multiple files so you don't need to modify the package-supplied files very much (or at all), although where to put things is a bit arcane. To simplify: the configuration for which zones get their DNS lookups sent where goes in /etc/unbound/conf.d, while access control and result filtering go in /etc/unbound/local.d. You're going to need both.

My conf.d file has things like:

forward-zone:
     name: "sandbox"
     forward-addr: 128.100.3.250
forward-zone:
     name: "10.in-addr.arpa"
     forward-addr: 128.100.3.250

(.sandbox is our internal top-level domain for purely internal hosts.)

Here the IP address I'm pointing the forward-zone stanzas at is the internal caching DNS server that ordinary client machines use. Unbound will make normal recursive DNS queries to it just as if it was a regular client instead of a real DNS server.

(If you're pointing at authoritative, non-recursive DNS servers you want to use stub-zone, but this is usually going to be rare.)
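For completeness, a stub-zone version of the .sandbox forwarding above would look almost the same; this sketch reuses the example zone and server IP from earlier, which you'd only actually do if that server were authoritative for the zone:

```
stub-zone:
     name: "sandbox"
     stub-addr: 128.100.3.250
```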

My local.d file has a bunch of stuff about listening interfaces and access control, but the important bit is a collection of statements about result filtering. These cover things like:

# Remove these IPs from public DNS results
private-address: 10.0.0.0/8
# Allow these domains to use private IPs
private-domain: sandbox
# Don't try to do DNSSEC for these
domain-insecure: sandbox

# MUST INCLUDE THIS
local-zone: "10.in-addr.arpa" nodefault

(On a non-Fedora 20 machine, the local.d bits go in the server: section of unbound.conf while the conf.d bits go at the top level.)

The local-zone statement here is really important. If you leave it out, Unbound defaults to returning NXDOMAIN for PTR queries in 10/8 (along with all the other RFC1918 private IP ranges) despite the fact that I configured forwarding for these DNS queries.

(Yes, this caused a certain amount of heartburn when I set Unbound up for the first time on my work machine. It's documented in the unbound.conf manpage if you read the whole thing carefully.)

My setup implicitly assumes that the IPSec tunnel is up and the work internal DNS server listed in eg the forward-zone declarations is reachable. If I brought the tunnel down regularly I'd need to do something more clever (or more brute force, eg restarting Unbound with a 'no tunnel' configuration that didn't have all the forwarding and so on). It also doesn't have any interaction with NetworkManager because I don't use NetworkManager on this machine (it's a bad fit for NM).
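A sketch of what the brute-force option might look like; the 'no tunnel' configuration file and this helper are entirely hypothetical (my invention, not something from my actual setup), though unbound-checkconf and the unbound systemd unit are standard pieces on Fedora 20:

```shell
# Hypothetical helper: validate an alternate Unbound configuration
# and restart Unbound with it in place of the normal one.
switch_unbound_conf() {
    conf="$1"    # eg /etc/unbound/unbound-notunnel.conf (made up)
    # Check the file first so a typo doesn't take out DNS entirely.
    unbound-checkconf "$conf" || return 1
    cp "$conf" /etc/unbound/unbound.conf
    systemctl restart unbound
}
```

Something watching the tunnel state would then call this with the appropriate configuration as the tunnel comes and goes.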

(I understand that NetworkManager either has or is in the process of gaining its own mechanisms to handle this problem, assuming that you manage all your networking and VPNs and so on through NetworkManager. See Pete Zaitcev's entry.)

UnboundDNSforVPN written at 02:15:18
