Wandering Thoughts archives

2014-05-23

What ssh-agent does with multiple keys loaded

Ssh-agent is probably most often used to handle a single identity key, but it can hold more than one if you want. One of the things that the ssh-agent and ssh manpages are a bit silent on is what happens if you have multiple SSH keys loaded into a single ssh-agent. Since I've been dealing with this as the result of transitioning from one set of SSH keys to another, I'm going to write down what I've learned so far.

(The simple version of why I decided to roll over my SSH keys is that I decided to transition from keys that had once been used without encryption to keys that were created encrypted.)

Before I started loading multiple keys into ssh-agent, I expected that the choice of which ssh-agent key to use would be controlled by the key that ssh itself would normally use. If you had, for example, 'IdentityFile .../key1-rsa', I expected ssh to ask ssh-agent to do operations only with that key. This is not what happens. Instead, what happens by default is that ssh tries all of the keys loaded into ssh-agent, one after another, in the order that they were loaded into ssh-agent.

You can partly override this behavior with the IdentitiesOnly configuration directive, which restricts the keys that ssh tries to only the identities listed either as IdentityFile directives or supplied on the command line with -i. However, this is an incomplete override because it doesn't prioritize the -i identity the way a normal (agentless) ssh does; ssh will first ask ssh-agent for any IdentityFile keys it has and only then fall back to a non-agent key given with -i. This implies that if you have a script and you want it to always use a particular restricted identity even if more general ones are available (as I do in one case), you need to clear $SSH_AUTH_SOCK in the script.

(This can apply any time you have a remote system that accepts multiple identities from you but applies different access permissions or access restrictions to them. Remember that IdentityFile directives add together and -i stacks with them too, so even if you have a specific identity configured for something, a general 'Host *' identity or the like will also be tried.)
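
To make the script case concrete, here is a minimal sketch; the key path, host name, and command are all hypothetical:

#!/bin/sh
# Force one specific identity: an empty SSH_AUTH_SOCK keeps ssh-agent
# out of the picture entirely, and IdentitiesOnly keeps any other
# configured IdentityFile keys from being tried as well.
SSH_AUTH_SOCK= ssh -i "$HOME/.ssh/restricted-key" -o IdentitiesOnly=yes \
    somehost mail-status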

There are a couple of interesting uses I can see for this multiple key behavior. One of them is making a transition between old and new SSH keys easier. First off, you can load both your new and your old key into ssh-agent; you'll then use your new key on systems that have been updated to accept only it, while transparently falling back to your old key on systems that only know the old one. More cleverly, you can use this to uncover systems that haven't been updated to your new key by loading only your new key into ssh-agent while leaving your old encrypted key configured as your IdentityFile. If you try to ssh to somewhere but get prompted to unlock your old key, you've found a host that either prefers your old key to your new key or doesn't have your new key at all.
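
A minimal sketch of the second trick; the key file names are assumptions:

# Load only the new key into the agent:
ssh-add ~/.ssh/id_new
# .ssh/config still lists the old (encrypted) key as well:
#   Host *
#       IdentityFile ~/.ssh/id_new
#       IdentityFile ~/.ssh/id_old
# Now a passphrase prompt for id_old during any ssh flags a host that
# hasn't been fully updated to the new key.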

Another use is encrypting secondary keys (for example your Github key) but still loading them into ssh-agent for passwordless use. Since ssh with ssh-agent will try multiple keys, pushing to Github and other such uses will eventually try the right key. You can force this to happen earlier by setting IdentitiesOnly in .ssh/config for the particular hosts involved; this will definitely be necessary if you have a lot of SSH keys, because SSH servers only accept so many key attempts (cf).
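
For instance, a .ssh/config fragment along these lines (the key file name is an assumption):

Host github.com
    IdentityFile ~/.ssh/github-key
    IdentitiesOnly yes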

(Some of this information comes from this stackoverflow answer.)

(Talking of the interaction of ssh and ssh-agent, it's a pity that as far as I know ssh can't be told 'load keys into ssh-agent when you unlock them'. This would make it very convenient to incrementally load keys into ssh-agent as you turn out to need them, while not leaving them sitting around unlocked in a session when you don't.)

SshAgentAndMultipleKeys written at 23:18:31

2014-05-19

A building block of my environment: sps, a better tree-based process list

Once upon a time that is now very long ago, back in 1987 or so it appears, Robert Ward wrote a better version of BSD ps that he called sps (cf). The highly useful signature feature of sps was that it displayed processes in a sort of tree form, with UID transitions marked. This was in the days before pstree and its equivalents were even a gleam in anyone's eye, and anyway I maintain that sps's display is better than pstree, ptree, or the Linux ps option that will do process trees. I used sps happily for a number of years on BSD-based machines but then wound up dealing with the System V based SGI Irix and really missed it. Rather than take on the epic work of rewriting code that grubbed around in kernel data structures, I redid the important features I cared about as a script that used a pile of awk code (well, nawk code) to post-process ps output (using the System V ps feature of printing out only specific columns in a parseable way).

(In the process I learned a great deal about how what are now ancient versions of awk and nawk handled attempts at things like recursion and how to fake local variables.)
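
To give the flavor of the approach, here is a from-scratch sketch (emphatically not the real sps): have ps emit just parseable columns and let awk rebuild the process tree, complete with the awk trick of faking local variables with extra function parameters:

#!/bin/sh
# Minimal sketch of the technique: ps prints parseable columns with no
# header, and awk reassembles the parent/child tree. The output format
# is nothing like real sps; this just shows the post-processing idea.
ps -e -o ppid= -o pid= -o user= -o args= | awk '
{
    ppid = $1; pid = $2
    owner[pid] = $3
    cmd = $4
    for (f = 5; f <= NF; f++) cmd = cmd " " $f
    args[pid] = cmd
    kids[ppid] = kids[ppid] " " pid
}
# Walk down from init (pid 1); kernel-thread subtrees with ppid 0 are
# ignored in this sketch. The variables after the extra spaces in the
# parameter list are the classic awk fake of local variables.
END { walk(1, 0) }
function walk(pid, depth,    n, i, ind, list) {
    ind = ""
    for (i = 0; i < depth; i++) ind = ind " "
    printf "%s%-8s %6d %s\n", ind, owner[pid], pid, args[pid]
    n = split(kids[pid], list, " ")
    for (i = 1; i <= n; i++)
        walk(list[i], depth + 1)
}'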

Ever since then I have carried my sps script forward across OS after OS (SGI Irix 6, then Solaris, then Linux), adapting it slightly for each one. It remains my favorite way of getting process listings on Linux (partly because I fixed the Linux ps problem with long login names); on modern versions of Solaris, ptree is almost as good, especially since our Solaris machines don't have users (and thus UID transitions).

(Jim Frost wrote a Linux version of sps back in 1998 and I used it for a while but it has to be compiled, I don't think it's been updated for a long time, and I don't know if it still works on modern Linuxes. For that matter I don't know where you'd still get the source code today.)

SPS output looks like this:

Ty     User            PID CMD
[...]
       root           1192 /usr/lib/postfix/master
        |postfix      1197 qmgr -l -t fifo -u
        |postfix     22675 cleanup -z -t unix -u -c
        |postfix     22676 trivial-rewrite -n rewrite -t unix -u -c
        |postfix     22677 smtp -t unix -u -c
        |postfix      6205 pickup -l -t fifo -u -c
[...]
       root           6899 /usr/sbin/sshd -D
        |            22741 sshd
         |cks        22760 sshd
pts/0     *          22761 -rc
pts/0      |         22856 /bin/sh /u/cks/bin/bin.i386-linux/sps -A
pts/0       |        22858 ps -A -o user

This is a very small excerpt from 'sps -A' that shows the essential features (it's a small excerpt because modern Linux systems have a lot of processes even if they're not doing much).

If this sounds interesting I've put my current version of sps for Linux on the web here and there's also a lightly tested OmniOS version. Adaptation for other Unixes is left as an exercise for the interested.

One of the reasons I quite like my script version of sps, apart from its sheer usefulness, is that it shows how Unix evolves useful capabilities over time (and how more CPU power makes them more feasible). In the BSD era, ps was sufficiently hard-coded and awk was sufficiently limited that you'd probably have had a hard time duplicating the C version of sps with a script, and if you had, the result would have been pretty slow and resource intensive. Move forward a decade or two and there's no serious problem with either. Today I doubt you could measure the impact of using a script for this, and committing to modern gawk features would probably make this even easier.

(A truly modern version of sps would probably use Perl instead of trying to mangle everything with awk and other shell tools; Perl is now ubiquitous enough to make that perfectly viable. Since I'm not really fond of Perl, I'm not the right person to write that version of sps but feel free to go wild. I'd expect a Perl version to be smaller, better, and possibly faster.)

ToolsSps written at 23:59:59

2014-05-17

The problem of encrypted SSH keys and screen

One of the things that I do from home is log in to my office workstation and then from my office workstation log in to other machines. This works conveniently when I have unencrypted SSH keys on the office workstation, but problems develop with a straightforward implementation of encrypted keys: now I can no longer ssh from my office workstation to other machines without being challenged for either the passphrase to the keys or the remote password.

Part of this can be solved with SSH agent forwarding. If I enable it for the connection from my home machine to my office machine, ssh'ing from my office machine to other machines can use the SSH agent on my home machine (and thus the unlocked keys it holds). This does rely on a somewhat relaxed approach to authorized key source access restrictions, in that other machines have to accept my home SSH key identity from my office workstation.

(SSH agent forwarding is somewhat dangerous in that anything running on my office workstation can potentially make use of my home SSH keys. Of course the same is true if I unlocked my office workstation's SSH keys via a local ssh-agent or the like.)
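
Mechanically, enabling the forwarding is a one-line matter of ssh configuration; a sketch, with a hypothetical host name:

# .ssh/config on the home machine
Host office-workstation
    ForwardAgent yes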

What this doesn't deal well with is screen, or rather a screen session once I've detached from it and then reattached to it. If I just start screen it will inherit the current connection's SSH agent forwarding, but that inheritance breaks once I disconnect; when I log back in I'll have a different forwarded agent socket, and the old one that's preserved inside screen will not work any more.

There turn out to be two half-solutions to this, depending on what behavior you want while you're detached from screen. If you don't need ssh to work inside screen while it's detached (eg if you don't have a script running or something), the simple solution is to rewire where the $SSH_AUTH_SOCK environment variable points so that it goes to a constant place; see this article by Alan Pinstein for details (or here for another version).

(One simple way to set up the SSH agent socket symlink is just to have a cover script for screen that recreates the symlink, as sketched below. This ensures that it only gets done when necessary, so that simply logging in another time doesn't perturb an existing screen session.)
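
A minimal sketch of such a cover script; the symlink location is an arbitrary choice:

#!/bin/sh
# Point a stable symlink at this login's forwarded agent socket, then
# run screen with SSH_AUTH_SOCK aimed at the symlink. Reattached
# sessions keep working because each run of this script refreshes the
# symlink to the current connection's socket.
SOCKLINK="$HOME/.ssh/agent-sock"
if [ -n "$SSH_AUTH_SOCK" ] && [ "$SSH_AUTH_SOCK" != "$SOCKLINK" ]; then
    ln -sf "$SSH_AUTH_SOCK" "$SOCKLINK"
fi
SSH_AUTH_SOCK="$SOCKLINK" exec screen "$@"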

If you want processes inside screen to keep being able to make ssh connections even when your screen session is detached, then screen needs to run its own ssh-agent (and you have to unlock keys for that ssh-agent). Screen makes this awkward to do since it doesn't directly provide, say, a 'run this command on startup' option; however there are hacky workarounds, such as the one covered in this article by Charles Fry.
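
One simple variant of this (not the particular workaround from that article) is to start screen under its own dedicated agent:

# The agent lives exactly as long as the screen session, detached or
# not, and is separate from any login-session agent.
ssh-agent screen
# then, inside a screen window, unlock keys into that agent:
ssh-add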

PS: I don't want to have my encrypted SSH keys unlocked on my office workstation at all times once I've logged in to it; I actively want them to be locked when I step away from the machine and do things like lock the screen. It would be a compromise to have SSH keys unlocked permanently in a running (but detached) screen session.

(These are the 'unresolved issues with encrypted keys' I mentioned in yesterday's notes on migrating to encrypted SSH keys.)

Sidebar: my likely solution

I have an automated script that runs inside my screen sessions and makes ssh connections, but all it does with them is run a mail status reporting command. What I'll probably do is give that script its own SSH identity with an unlocked key, restrict that key heavily on the destination systems (with eg a 'command=...' setting), and then use the $SSH_AUTH_SOCK approach to enable me to make other SSH connections from inside screen. That's going to be both easier and more secure than running a full-blown ssh-agent setup from inside screen.
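
On the destination systems, the heavy restriction might look something like this in authorized_keys (the command path and key material here are placeholders):

command="/usr/local/bin/mail-status",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... script-key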

EncryptedSshKeysAndScreen written at 00:12:23

2014-05-15

My personal and biased view of sudo versus su

I am an old-fashioned Unix sysadmin, so I have not really become very enthused about using sudo as a replacement for su. I could make lots of excuses for this, but what it boils down to is that I'm very used to su and I haven't felt like trying to readjust my entire set of reflexes and my environment to using sudo instead. This doesn't mean that I don't like sudo or that I don't use it at all. Instead I merely use sudo as a replacement for what would otherwise be setuid programs (what I listed as the first face of sudo in my entry on the three faces of sudo).

Part of this is what I see as a weaker sudo security model. Part of this is because some amount of my work involves directly logging in to systems as root already, either because of the System V init environment leakage issue for restarting services (also) or because we simply haven't set up user accounts on all of our systems (eg OpenBSD firewalls). But a lot of this is just habit, my cultivated usage patterns, and what I see as additional friction on the part of using sudo.

One large part of my usage patterns is that I mostly don't intersperse operations that can be performed as an ordinary user with operations that need root powers. If I need to do a run of operations as root, adding sudo to the front of all of them would be additional friction. If I only need to do one or two operations as root, I'll usually discard the root shell immediately afterwards, because I am a disposable usage pattern person. I almost always have root shells in their own windows and I mark those windows to make them very distinctive (I mentioned it here); I think it would actually make me nervous to have sudo powers in an otherwise unmarked and undistinguished shell session.

(I do keep some root shells lingering around but these are for specific periodically repeated operations when I want to hold on to context and where repeatedly re-typing the root password would irritate me too much. They're also on my personal machines instead of any of our servers.)

So on the whole I could switch to sudo but it would be a pain in at least the short term, it would require changing how I do things, it might make the practical security issues somewhat worse, and I'm not convinced I would get much benefit from it.

All of this neglects two and a half separate elephants in the room. The first elephant is that sudo is less universally available than su is. Every Unix machine we'll ever use at work has su; not all of them supply sudo natively. The second elephant is the opinions of my co-workers. Partly because of the first elephant, my co-workers are highly likely to be no more receptive to switching to sudo than I am. Switching by myself is somewhere between pointless and quixotic (even if I switch purely on my home and office workstations), and unless I persuade my co-workers not just to switch but to change work patterns like logging in directly as root, it's not likely to give us any particular benefits (which of course makes it that much harder a sell to my co-workers).

I don't necessarily think this is the ideal thing and I don't particularly advocate my approach here to anyone else. But my environment is what it is and today I feel like being honest about it.

(One little pragmatic downside of switching to sudo would be a drastic increase in sudo warning emails as we'd probably routinely fumble-finger the applicable password.)

PS: Please note that if you're using sudo audit logs to assign blame for particular bad things that happen on your machines, you're doing it wrong (also). This is one reason I don't find audit logs to be a particularly compelling advantage for sudo, especially because a crisis is both the time when you might most need audit logs (due to people's fallible memory under pressure) and also the time when people are most likely to wind up logging in directly as root because nothing else works.

SudoVsSuForMe written at 00:08:45

2014-05-09

Operating systems cannot be hermetically sealed environments

There's an idea that you can find rattling around the operating systems world; the simplest way to describe it is that operating systems and their OS-supplied components should be seen as essentially a black box that's there to provide you certain basic services. In a Unix environment, this would be very little beyond a standard library, standard shell script pieces, and a few similar things. The operating system may have other components but they are for its internal use, not for the use of your programs and systems. In OmniOS this idea is known as 'keep your stuff to yourself' but it's by no means exclusive to OmniOS, partly because it's attractive to many people who want to build a minimal OS.

The problem with this is that like it or not, operating systems are not hermetically sealed environments with minimal and standardized interfaces (libc, basic shell utilities, etc). I don't mean this in the sense that people using an OS will inevitably find it convenient to use the OS's versions of things even though they're not supposed to (which they totally will, by the way). I mean this in the sense that such a minimal interface is too small to be practical.

We saw one point of friction with the mailer dependency issue. MTAs are generally one to a system so the interface to the MTA implicitly becomes an API that the OS both exposes and uses itself. Another example is how you hook yourself into whatever fault monitoring and management system the OS has. How the OS reports faults (and what faults it reports) forms at least an implicit API because you need this information to sanely manage your systems.

('We syslog kernel messages' or 'we write messages to a file' is still an implicit API.)

This is what I mean by the OS not being a hermetically sealed environment in practice. You cannot give people a simple black box OS and have it be useful. All of those implementation details of logging and fault management and mail and so on will inevitably leak outside of the box whether you officially document them or not, because this is what's needed to run real systems.

(I think that we often don't notice this because we take them as 'part of Unix, more or less', even though they aren't standardized across Unixes.)

Sidebar: one diagnostic test of 'is something purely internal'

My test is 'could the basic OS remove this entirely without people exploding'. For things like Perl and Python (when you've been told to not use the OS's versions of them) the answer is theoretically yes. Now imagine a Unix OS that did not log anything at all via syslog (or just at all). Would you accept that or would you immediately rule it out?

(Yes, there are some environments where this wouldn't be a disqualification. I don't think there are very many.)

OSesAreNotClosed written at 01:59:07

2014-05-08

The modern world of spliced together multi-layer DNS resolution

In the beginning most people's DNS resolution was relatively simple. You might be directly on the Internet and only dealing with other things there, or your organization might have directly exposed internal IPs in general DNS, or at the least your machine was fully inside your organization on a straightforward network. As yesterday's entry on using Unbound to do split DNS resolution implicitly points out, today's world is often not anywhere near that simple any more. Today's DNS resolution is increasingly assembled by splicing together multiple DNS namespaces, sometimes ones that dynamically appear and disappear.

On clients you can have the basic external DNS resolution obtained from whatever base network you're connected to at the moment (which can change), plus your own names for things like local virtual machines, plus some number of VPNs (you may have more than one depending on what you're doing). The VPNs may both introduce new names and override DNS resolution for existing 'public' (general Internet) names, due to things like split horizon DNS. The local names for eg virtual machines might be on your own client machine or they might be on a server on the local network but not part of the regular 'normal' DNS infrastructure for various reasons.

(We have groups here that want to do a bunch of stuff with virtual machines without constantly bugging us to add and remove DNS entries in our own DNS servers. Sometimes they want to do this VM work on a separate unrouted subnet. In theory these groups could run local caching DNS servers and splice in their own virtual machines but in practice this runs into all sorts of problems, especially if we provide DHCP for their regular client machines. We're perhaps at the height of baroque peculiarity here.)

You have quite similar situations with (organizational) caching DNS servers. Your DNS view can be a composite of zones you actually transfer from internal primaries, data queried from the outside world, and zones (internal or external) that must be explicitly delegated or forwarded to other DNS servers (perhaps because they have to be dynamically updated, perhaps because groups want to run their own DNS servers). You may even want to cascade some or all of your remaining 'general Internet' queries through other servers instead of going straight to the root zone and walking downwards.
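
As a rough illustration of this kind of splicing, an Unbound configuration might contain fragments along these lines (all of the zone names and addresses here are made up):

server:
    # answer for a purely local zone ourselves
    local-zone: "vms.internal." static
    local-data: "build1.vms.internal. A 10.20.30.40"

# delegate an internal zone to the group that runs its DNS server
stub-zone:
    name: "lab.example.edu."
    stub-addr: 10.0.0.53

# send everything else through upstream resolvers instead of walking
# down from the root
forward-zone:
    name: "."
    forward-addr: 192.168.0.53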

As Unbound illustrates, modern DNS servers can do this, but they don't necessarily do it gracefully. In particular, they probably don't deal gracefully with bits of the namespace surgery changing on the fly, with DNS resolution for some names either mutating or just disappearing. Speaking as someone who deals with this both on clients and on caching DNS servers, it would be nice if DNS servers were better at this.

(My optimistic hope is that the increased emphasis on having a local resolver in order to do end-to-end DNSSEC validation will start pushing people on at least the client side of this.)

MultilayerDNSQuerying written at 00:05:47

2014-05-06

Another problem with building your own packages: dependency issues

I've written before about how I feel that being forced to build software yourself is a waste, but this is not the only problem with the 'make our users build things themselves' approach to system packaging. Another one is interlocking dependencies, where OS packages depend on things that you (the end sysadmin) may want to replace with your own stuff.

Suppose that the base operating system supplies both a minimal mailer package and a 'smtp-notifier' package that sends mail when alert-worthy system level things happen, like faults being reported through the OS's fault management system. However, you (the sysadmin) want to use an entirely different mailer, which is clearly not going to come from the OS's package set (the OS packaged only the minimal mailer it 'needs') and might even come from a completely different packaging system than the OS's one.

At a minimum there's a dependency issue here. A naively done smtp-notifier package will insist that you install the OS's own, unwanted mailer. Even a relatively sophisticated one is likely to insist that you install your choice of mailer using the OS's native packaging system so that it knows that smtp-notifier's dependency is satisfied. Of course life gets worse if you're not really supposed to install your things in the OS's directory hierarchies; how does smtp-notifier even find the right mailer binary to use? Normally it has every right to expect that said binary is living in the OS-owned area of the filesystem hierarchy.
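
To make the naive case concrete in one real manifest syntax (IPS-style, purely as an illustration; the package names are made up):

# hypothetical manifest fragment for smtp-notifier: it hard-requires
# the OS's own minimal mailer package...
depend type=require fmri=pkg:/service/network/smtp/minimal-mailer
# ...while the notifier's code assumes the mailer lives at the
# traditional OS-owned path, eg /usr/lib/sendmail.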

All of this is solvable with effort, but it does require effort, and it requires this sort of dependency interlock issue to be anticipated by the people who build OS packages and write the OS package manager. If they did not, you probably get to choose between smtp-notifier and your mailer.

Broadly speaking this sort of dependency issue can happen to anything that you can only really have one of on the system and that may be used by other OS packages (either directly by invoking its programs or indirectly by, eg, TCP services). Mailers are an obvious case because a lot of things want to send email (unless you deliberately make them less useful by breaking that).

PS: to be fair I'm not sure if this issue is really likely to come up for anything other than mailers, especially in OSes with minimal native package sets. They seem unlikely to have, eg, web-based management systems that require the OS-packaged webserver.

BuildingPackagesDependencyIssue written at 00:10:37
