Wandering Thoughts archives

2016-03-03

My views on clients for Let's Encrypt

To use Let's Encrypt, you need a client, as LE certificates are available only through their automated protocol. I can't say I've checked out all the available clients (there are a lot of them), but here are the three that I've actively looked at and explored. I stopped exploring clients after three because these meet my needs and pretty much work the way I want.

The official Let's Encrypt client is, well, the official client. It's big and has many features and can be kind of a pain (or at least slightly scary) to get going, depending on whether it's already available as a system package. Its advantage is that it is the official client; it'll probably have the most up-to-date and complete protocol support, it's pretty much guaranteed to always work (LE isn't going to let their official client be broken for very long), and there are tons of people who are using it and can help you if you run into problems. If you don't really care and just want a single certificate right now, it's probably your easiest option. And it has magic integration with an ever-growing collection of web servers and so on, so that it can do things like automatically reload newly-renewed certificates.

The drawback of the official client is that it's big and complicated. One of my major use cases for LE is certificates for test machines and services, where I just want a cert issued with minimal fuss, bother, flailing around, and software installation. My client of choice for 'just give me a TLS certificate, okay?' cases like this is lego. You turn off the real web server on your test machine (if any), you run lego with a few obvious command line options, and you have a certificate. Done, and done fast. Repeat as needed. As a Go program, the whole thing is a single executable that I can copy around to test machines as required. Lego doesn't really automate certificate renewal, but this is not an issue for test usage.
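
To give a sense of how little is involved, a lego run on a test machine looks something like the following. Treat this as a sketch rather than a recipe; the email address and hostname are placeholders, and the exact flags and storage location vary somewhat between lego versions.

# with port 80 (and/or 443) free and reachable from the Internet:
./lego --email you@example.com --domains test.example.org run
# the certificate and key normally wind up under ./.lego/certificates/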

(You do have to remember to (temporarily) allow HTTP or HTTPS in to your test machine, which is something that I've forgotten a few times.)

My other usage case for LE is as the regular TLS certificates for my personal site. Here I very much wanted a client that had a clear story for how to do automated certificate renewals in an unusual environment with a custom setup, and that I could understand, control, and trust. Perhaps the official client could have done that if I studied it enough, but I felt that it was more focused towards doing 'just trust us' magic in standard setups. The client I settled on for this is Hugo Landau's acmetool, which is refreshingly free of magic. The simple story of how you get automatic certificate renewals and service reloads is that acmetool supports running arbitrary scripts after a certificate has been renewed. Just set up a script that does whatever you need, put an appropriate acmetool command in a once-a-day crontab entry, and you're basically done. One of the reasons that I like acmetool so much is that I think its way of handling the whole process is the correct approach. As its README says, it's intended to work like 'make' (which is a solidly proven model), and I think the whole approach of running arbitrary scripts on certificate renewal is the right non-magic way to handle getting services to notice the renewed certificates.
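
As a sketch of what I mean (the crontab timing, the hook directory, the 'live-updated' event name, and the reload command are my assumptions about a typical acmetool setup, not details of my own configuration):

# a once-a-day root crontab entry:
23 4 * * *    acmetool --batch reconcile

# a hook script, eg /usr/lib/acme/hooks/reload-web:
#!/bin/sh
# acmetool invokes hooks with an event name as their first argument
[ "$1" = "live-updated" ] || exit 0
exec systemctl reload apache2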

(Acmetool also has some clever and handy tricks, but that's something for another entry.)

Unsurprisingly, acmetool requires a certain amount of work to set up and configure (unlike lego, which is 'run and done'). But after that, so far it has been something I can completely ignore. I rather look forward to being able to not think about TLS certificate renewal on my personal site at all, instead of having to remember it once a year or so.

(The necessary disclaimer is that it hasn't yet been 60 days since I started using LE and acmetool, so I haven't had it go through the certificate renewal process. If I was more patient, I'd have waited longer to write this entry. But as it is, I think acmetool's fundamental model is sound so I'm fairly confident that everything's going to be fine.)

LetsEncryptMyClients written at 00:35:52

2016-03-02

Some notes on OpenSSH's optional hostname canonicalization

As I mentioned in my entry on how your SSH keys are a potential information leak, I want to stop offering my ssh public keys to all hosts and instead only offer them to our hosts. The fundamental reason that I wasn't doing this already is that I make heavy use of short hostnames, either entirely without a domain or with only our local subdomain (ie, hostnames like apps0 or comps0.cs). When you use short hostnames, OpenSSH's relatively limited 'Host ...' matching power means that it's easiest to just say:

Host *
  IdentityFile ....

This has the effect that you offer your public keys to everything.

There are two ways to deal with this. First, you can use relatively complex Host matching. Second, you can punt by telling ssh to canonicalize the hostnames you typed on the command line to their full form and then matching on the full domain name. This has a number of side effects, of course; for instance, you'll always record the full hostnames in your known_hosts file.

Hostname canonicalization is enabled with 'CanonicalizeHostname yes'. This can be in a selective stanza in your .ssh/config, so you can disallow it for certain hostname patterns; for instance, you might want to do this for a few crucial hosts so that you aren't dependent on ssh's canonicalization process working right in order to talk to them. CanonicalDomains and CanonicalizeMaxDots are well documented in the ssh_config manpage; the only tricky bit is that the former is space-separated, eg:

CanonicalDomains sub.your.domain your.domain

The CanonicalizePermittedCNAMEs setting made me scratch my head initially, but it has to do with (internal) hostname aliases set up via DNS CNAMEs. We have some purely internal 'sandbox' networks in a .sandbox DNS namespace, and we have a number of CNAMEs for hosts in them in the internal DNS view of our normal subdomain, for both convenience and uniformity with their external names. In this situation, if I did 'ssh acname', OpenSSH would normally fail to canonicalize acname as a safety measure. By setting CanonicalizePermittedCNAMEs, I can tell OpenSSH that it's legitimate and expected for hosts in our subdomain to point to .sandbox names. So I set up:

CanonicalizePermittedCNAMEs *.sub.our.dom:*.sandbox,*.sub.our.dom

I don't know if explicitly specifying our normal subdomain as a valid CNAME target is required. I threw it in as a precaution and haven't tested it (partly because I didn't feel like fiddling with our DNS data just to find out).
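
Putting these directives together, the core of such a setup looks roughly like this (with placeholder domains standing in for our real ones):

CanonicalizeHostname yes
# canonicalize names with at most one dot; 1 is the default
CanonicalizeMaxDots 1
CanonicalDomains sub.our.dom our.dom
CanonicalizePermittedCNAMEs *.sub.our.dom:*.sandbox,*.sub.our.dom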

Although it's not documented, OpenSSH appears to do its hostname canonicalization by doing direct DNS queries itself. This will presumably bypass any special nsswitch.conf settings you have for hostname lookups. Note that although OpenSSH is using DNS here, it only cares about the forward lookup (of name to IP), not what the reverse lookup of the eventual host's IP is.

I've been experimenting with having OpenSSH do this hostname canonicalization for a few weeks now. So far everything seems to have worked fine, and I haven't noticed any delays or hiccups in making new SSH connections (which was one of the things I was worried about). Of course we haven't had any DNS glitches or failures over that time, either (at least none we know about).

Sidebar: Why OpenSSH cares about CNAMEs during canonicalization (I think)

I assume that this is because if OpenSSH was willing to follow CNAMEs wherever they went, an attacker with a certain amount of access to your DNS zone could more or less silently redirect existing or new names in your domain off to outside hosts. You would see the reassuring message of, say:

Warning: Permanently added 'somehost.sub.your.domain' (RSA) to the list of known hosts.

but your connection would actually be going to otherhost.attacker.com because that's where the CNAME points.

You still get sort of the same issue if you don't have hostname canonicalization turned on (because then the system resolver will presumably be following that CNAME too), but then at least the message about adding keys doesn't explicitly claim that the hostname is in your domain.

SSHCanonHostnames written at 01:25:24

2016-02-27

Sometimes brute force is the answer, Samba edition

Like many places, we have a Samba server so that users with various sorts of laptop and desktop machines can get at their files. For good reason the actual storage does not try to live on the Samba server but instead lives on our NFS fileservers. For similarly good reasons, people don't have separate Samba credentials; they use their regular Unix login and password. However, behind the scenes Samba has a separate login and password system, so we are actually creating and maintaining two accounts for people: a Unix one, used for most things, and a Samba one, used for Samba. This means that when we create a Unix account, we must also create a corresponding Samba account, which is done by using 'smbpasswd -a -n' (the password will be set later).

For a long time we've had an erratic problem with this, in that occasionally the smbpasswd -a would fail. Not very often, but often enough to be irritating (since fixing it took noticing and then manual intervention). Our initial theory was that our /etc/passwd propagation system was not managing to update the Samba server's /etc/passwd with the new login by the time we ran smbpasswd. To deal with this we wrote a wrapper around smbpasswd that explicitly waited until the new login was visible in /etc/passwd and dumped out copious information if something (still) went wrong. Surely we had solved the problem, we figured.

You can guess what happened next: no, we hadn't. Oh, it was clear that some of the problem was /etc/passwd propagation delays, because every so often we could see the wrapper script report that it had needed to wait. But sometimes smbpasswd still failed, reporting simply:

Unable to modify TDB passwd: NT_STATUS_UNSUCCESSFUL!
Failed to add entry for user <newuser>. 

We could have spent a lot of time trying to figure out what was going wrong in the depths of Samba and then how to avoid it, staring at logs, perhaps looking at strace output, maybe reading source, and so on and so forth. But we decided not to do that. Instead we decided to take a much simpler approach. We'd already noticed that every time this happened we could later run the 'smbpasswd -a -n <newuser>' command by hand, so we just updated our wrapper script so that if smbpasswd failed it would wait a second or two and try again.
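
A stripped-down sketch of the retry part of the wrapper (the real script does more, and the retry count and delay here are arbitrary choices for illustration):

#!/bin/sh
# add a Samba account, retrying if smbpasswd fails transiently
user="$1"
for try in 1 2 3; do
    if smbpasswd -a -n "$user"; then
        exit 0
    fi
    sleep 2
done
echo "smbpasswd -a -n $user kept failing" 1>&2
exit 1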

This is a brute force solution, or really more like a brute force workaround. We haven't at all identified the cause or what we really need to do to fix it; we've simply identified a workaround that we can execute automatically without actually understanding the real problem. But it works (so far) and it did not involve us staring at Samba for a long time; instead we could immediately move on to productive work.

Sometimes brute force and pragmatics are the right answer, under the circumstances.

(It helps that account creation is a rare event for us.)

BruteForceSambaAccountCreation written at 00:27:37

2016-02-20

My two usage cases for Let's Encrypt certificates

As I mentioned yesterday, we unfortunately can't use Let's Encrypt certificates in production here. That doesn't mean I have no use for LE certificates, though. Instead I have two different uses for them.

My first usage case for LE certificates is as the first stop for temporary certificates for test machines at work. I not infrequently need to set up test versions of TLS-based services for various reasons, including testing configuration changes, operating system upgrades, and even whether or not I can make some random idea actually work. All of these cases need real, valid certificates because an ever increasing amount of software refuses to deal with self-signed certificates (at least in any reasonable way). Since it's very unlikely that I'll run a test server for anywhere close to 90 days, various sorts of LE certificate renewal issues are of little or no importance.

LE's rate limits mean that I may not be able to get a certificate from them when I want one (or renew an existing one if I'm about to recycle one of my generic virtual machines to test something else), but this is more than made up for by the fact that I can try to get a LE certificate in minutes with absolutely no bureaucracy. If it works, great, I can go on with my real work; if not, either I put this particular project on the back burner for a few days or I get us to buy a commercial certificate and forget about the issue for a year.

(And when I can get a LE certificate for a general host name, I'm good for the next 90 days no matter what I'm doing with the host. Even though it's a little bit ugly, there's usually nothing I'm testing that requires a specific host name, or at least nothing that can't be fixed by hand editing a few configuration files for testing purposes.)

My second usage case is as the regular TLS certificates for my personal site, which is basically the canonical Let's Encrypt situation. Here I'm unlikely to run into rate limits and since I'm the only person getting certificates, I can coordinate with myself if it ever comes up. I do care about certificate renewal working smoothly, but on the other hand there are few enough certificates involved that if something doesn't work I can do things by hand and in an extreme case, even go back to my previous source for free TLS certificates. I'm also willing to run odd software in a custom configuration if it works for me, since I don't have to maintain things across a fleet of machines with co-workers; 'it works here for me' is good enough.

(And, while I care about my personal site, it is not 'production' in the way that work machines are. I can take risks with it that I wouldn't even dream of for work, or simply do things as experiments to see how they pan out. This is partly what Let's Encrypt is for me right now.)

These two usage cases wind up leaving me interested in different Let's Encrypt clients for each of them, but that's once again a subject for another entry.

LetsEncryptMyUsage written at 03:13:04

2016-02-19

We can't use Let's Encrypt on our production systems right now

I really like Let's Encrypt, the new free and automated non-profit TLS Certificate Authority. Free is hard to beat, especially around here, and automatically issued certificates that don't require tedious interaction with websites are handy. And in general I love people who're striking a blow against the traditional CA racket. Unfortunately, despite all of that, there's basically no prospect of us using LE certificates in production around here.

The problem is not any of the traditional ones you might think of. Browsers trust the LE certificates, and that LE only does basic 'Domain Validation' (DV) certificates is not an issue because those are what we use anyways. And I have no qualms about using a free CA; CAs are in a commodity business and LE is easier to deal with than the alternatives due to their automation. It's not even the short 90-day duration on their certificates (although that's a potential issue).

The problem for us is that Let's Encrypt (currently) has relatively low rate limits, and especially it has a limit of five certificates per domain per week. Even if LE interpreted this very liberally (applying it to just our department's subdomain instead of the entire university), this is probably nowhere near enough for our usage. We have more than five different servers doing TLS ourselves, never mind all of the web servers run by research groups or even individual graduate students. This isn't just an issue of having to carefully schedule asking for certificates (and the resulting certificate renewals); it's also a massive coordination problem among all of the disparate people who could request certificates. As far as I can tell, using LE certificates in production here would mean giving a very large number of people the power to stop us from being able to renew (production) certificates. That's just not a risk we can take, especially since you have to renew LE certificates fairly often.

(Sure, we'd renew well ahead of time and if there were problems we could buy a commercial TLS certificate to replace the LE one. But if we're going to have problems very often we can save ourselves the heartburn and the fire drill by just buying commercial certificates in the first place. The university may not value staff time very highly in general but our time is still worth some actual money, and commercial certificates are cheap.)

I do feel sad about this, as I'd certainly like to be able to use LE certificates in production here (and I'd prefer to use them, especially with automatically handled renewal). But I suspect that a big university is always going to be a corner case that LE's rate limits simply won't deal with. If the university got seriously into 'TLS for all web sites', we're probably talking about at least thousands of separate servers.

(This doesn't mean that I have no use for LE certificates here. But that's another entry.)

Sidebar: my views on multiple names on the same certificate

TLS certificates can be issued with multiple names by using SANs, which means that you can theoretically cut down the number of distinct certificates you need by cramming a bunch of names on to one certificate. LE is especially generous with how many SANs you can attach to one certificate.

My personal dividing line is that I'm only willing to put multiple names into a TLS certificate when all of the names will be used on the same server. If I'm putting fifteen virtual host names into a certificate that will be used on a single web server, that's fine. If I'm jamming fifteen different web servers into one TLS certificate and so I'm going to have fifteen copies of it (and its key) on fifteen hosts, that's not fine. I should get separate certificates, so that the damage is more limited if one of those hosts gets compromised.

LetsEncryptNoProduction written at 01:49:30

2016-02-11

My current views on using OpenSSH with CA-based host and user authentication

Recent versions of OpenSSH have support for doing host and user authentication via a local CA. Instead of directly listing trusted public keys, you configure a CA and then trust anything signed by the CA. This is explained tersely primarily in the ssh-keygen manpage and at somewhat more length in articles like How to Harden SSH with Identities and Certificates (via, via a comment by Patrick here). As you might guess, I have some opinions on this.

I'm fine with using CA certs to authenticate hosts to users (especially if OpenSSH still saves the host key to your known_hosts, which I haven't tested), because the practical alternative is no initial authentication of hosts at all. Almost no one verifies the SSH keys of new hosts that they're connecting to, so signing host keys and then trusting the CA gives you extra security even in the face of the fundamental problem with the basic CA model.

I very much disagree with using CA certs to sign user keypairs and authenticate users system-wide because it has the weakness of the basic CA model, namely you lose the ability to know what you're trusting. What keys have access? Well, any signed by this CA cert with the right attributes. What are those? Well, you don't know for sure that you know all of them. This is completely different from explicit lists of keys, where you know exactly what you're trusting (although you may not know who has access to those keys).

Using CA certs to sign user keypairs is generally put forward as a solution to the problem of distributing and updating explicit lists of them. However this problem already has any number of solutions, for example using sshd's AuthorizedKeysCommand to query a LDAP directory (see eg this serverfault question). If you're worried about the LDAP server going down, there are workarounds for that. It's difficult for me to come up with an environment where some solution like this isn't feasible, and such solutions retain the advantage that you always have full control over what identities are trusted and you can reliably audit this.

(I would not use CA-signed host keys as part of host-based authentication with /etc/shosts.equiv. It suffers from exactly the same problem as CA-signed user keys; you can never be completely sure what you're trusting.)

Although it is not mentioned much or well documented, you can apparently set up a personal CA for authentication via a cert-authority line in your authorized_keys. I think that this is worse than simply having normal keys listed, but it is at least less damaging than doing it system-wide and you can make an argument that this enables useful security things like frequent key rollover, limited-scope keys, and safer use of keys on potentially exposed devices. If you're doing these, maybe the security improvements are worth being exposed to the CA key-issue risk.

(The idea is that you would keep your personal CA key more or less offline; periodically you would sign a new moderate-duration encrypted keypair and transport it to your online devices via eg a USB memory stick. Restricted-scope keys would be done with special -n arguments to ssh-keygen and then appropriate principals= requirements in your authorized_keys on the restricted systems. There are a bunch of tricks you could play here.)
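
For concreteness, the mechanics look something like this; the key names, principal, and validity period are made up for illustration.

# on the (mostly offline) machine holding the personal CA key:
ssh-keygen -s ~/ca/personal-ca -I laptop-2016-02 -n cks-laptop \
  -V +4w ~/keys/laptop.pub

# in authorized_keys on a restricted server (all on one line):
cert-authority,principals="cks-laptop" ssh-ed25519 AAAA... personal CA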

Sidebar: A CA restriction feature I wish OpenSSH had

It would make me happier with CA signing if you could set limits on the duration of (signed) keys that you'd accept. As it stands right now, the only limit on a signed key's lifetime is the validity period the CA chose when it signed the key; if you can persuade the CA to sign with a key-validity period of ten years, well, you've got a key that's good for ten years unless it gets detected and revoked. It would be better if the consumer of the signed key could say 'I will only accept signatures with a maximum validity period of X weeks', 'I will only accept signatures with a start time after Y', and so on. All of these would act to somewhat limit the damage from a one-time CA key issue, whether or not you detected it.

SSHWithCAAuthenticationViews written at 01:07:55

2016-02-06

You can have many matching stanzas in your ssh_config

When I started writing my ssh_config, years and years ago, I basically assumed that how you used it was that you had a 'Host *' stanza that set defaults and then for each host you might have a specific 'Host <somehost>' stanza (perhaps with some wildcards to group several hosts together). This is the world that looks like:

Host *
   StrictHostKeyChecking no
   ForwardX11 no
   Compression yes

Host github.com
   IdentityFile /u/cks/.ssh/ids/github

And so on (maybe with a generic identity in the default stanza).

What I have only belatedly and slowly come to understand is that stanzas in ssh_config do not have to be used in just this limited way. Any number of stanzas can match and apply settings, not just two of them, and you can exploit this to do interesting things in your ssh_config, including making up for a limitation in the pattern matching that Host supports.

As the ssh_config manpage says explicitly, the first version of an option encountered is the one that's used. Right away this means that you may want to have two 'Host *' stanzas, one at the start to set options that you never, ever want overridden, and one at the end with genuine defaults that other entries might want to override. Of course you can have more 'Host *' stanzas than this; for example, you could have a separate stanza for experimental settings (partly to keep them clearly separate, and partly to make them easy to disable by just changing the '*' to something that won't match).
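
In other words, the overall structure can look something like this (the specific options here are just placeholders):

# first: things that should never be overridden
Host *
  HashKnownHosts yes

# specific hosts in the middle
Host github.com
  IdentityFile /u/cks/.ssh/ids/github

# last: genuine defaults that earlier stanzas may override
Host *
  ForwardX11 no
  Compression yes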

Another use of multiple stanzas is to make up for an annoying limitation of the ssh_config pattern matching. Here's where I present the setup first and explain it later:

Host *.cs *.cs.toronto.edu
  [some options]

Host * !*.*
  [the same options]

Here what I really want is a single Host stanza that applies to 'a hostname with no dots or one in the following (sub)domains'. Unfortunately the current pattern language has no way of expressing this directly, so instead I've split it into two stanzas. I have to repeat the options I'm setting, but this is tolerable if I care enough.

(At this point one might suggest that CanonicalizeHostname could be the solution instead. For reasons beyond the scope of this entry I prefer for ssh to leave this to my system's resolver.)

There are undoubtedly other things one can do with multiple Host entries (or multiple Match entries) once you free yourself from the shackles of thinking of them only as default settings plus host specific settings. I know I have to go through my .ssh/config and the ssh_config manpage with an eye to what I can do here.

SSHConfigMultipleStanzas written at 01:19:39

2016-01-31

The tradeoffs of having ssh-agent hold all of your SSH keys

Once you are following the initial good practices for handling SSH keys, you have a big decision to make: will you access all of your encrypted keys via ssh-agent, or will at least some of them be handled only by ssh? I don't think that this is a slam dunk decision, so I want to write down both sides of this (and then give my views at the end).

The first and biggest thing that might keep you from using ssh-agent for everything is if you need to use SSH agent forwarding. The problem with agent forwarding is twofold. First and obviously, it gives the remote host (where you are forwarding the agent to) the ability to authenticate with all of the keys in ssh-agent, protected only by whatever confirmation requirements you've put on them. This is a lot of access to give to a potentially compromised machine. The second is that it gives the remote host a direct path to your ssh-agent process itself, a path that an attacker may use to attack ssh-agent in order to exploit, say, a buffer overflow or some other memory error.

(Ssh-agent is not supposed to have such vulnerabilities, but then ssh itself wasn't supposed to have CVE-2016-0777.)

In general, there are two advantages of using ssh-agent for everything. The first is that ssh itself never holds unencrypted private keys (and you can arrange for ssh to have no access to even the encrypted form, cf). As we saw in CVE-2016-0777, ssh itself is directly exposed to potentially hostile input from the network, giving an attacker an opportunity to exploit any bugs it has. Ssh-agent is one step removed from this, giving you better security through more isolation.

The second is that ssh-agent makes it more convenient to use encrypted keys and therefore makes it more likely that you'll use them. Without ssh-agent, you must decrypt an encrypted key every time you use it, ie for every ssh and scp and rsync and so on. With ssh-agent, you decrypt it once and ssh-agent holds it until it expires (if ever). Some people are fine with constant password challenges, but others very much aren't (me included). Encrypted keys plus ssh-agent is clearly more secure than unencrypted keys.

The general drawback of putting all your keys into ssh-agent is that ssh-agent holds all of your keys. First, this makes it a basket with a lot of eggs; a compromise of ssh-agent might compromise all keys that are currently loaded, and they would be compromised in unencrypted form. You have to bet and hope that the ssh-agent basket is strong enough, and you might not want to bet all of your keys on that. The only mitigation here is to remove keys from ssh-agent on a regular basis and then reload them when you next need them, but this decreases the convenience of ssh-agent.

The second drawback is that while ssh-agent holds your keys, anyone who can obtain access to it can authenticate things via any key it has (subject only to any confirmation requirements placed on a given key). Even if you don't forward an agent connection off your machine, you are probably running a surprisingly large amount of code on your machine. This is not just, eg, browser JavaScript but also anything that might be lurking in things like Makefiles and program build scripts and so on.

(I suppose any attacker code can also attempt to dump the memory of the ssh-agent process, since that's going to contain keys in decrypted form. But it might be easier for an attacker to get access to the ssh-agent socket and then talk to it.)

Similar to this, unless you have confirmations on key usage, you yourself can easily do potentially dangerous operations without any challenges. For example, if you have your Github key loaded, publishing something on Github is only a 'git push' away; there is no password challenge to give you a moment of pause. Put more broadly, you've given yourself all the capabilities of all of the keys you load into ssh-agent; they are there all the time, ready to go.

(You can mitigate this in various ways (cf), but you have to go out of your way to do so and it makes ssh-agent less convenient.)

My personal view is that you should hold heavily used keys in ssh-agent for convenience but that there are potentially good reasons for having less used or more dangerous keys kept outside of ssh-agent. For example, if you need your Github key only rarely, there is probably no reason to have it loaded into ssh-agent all the time and it may be easier to never load it, just use it directly via ssh. There is a slight increase in security exposure here, but it's mostly theoretical.

SSHAgentTradeoffs written at 02:37:28

2016-01-30

Some good practices for handling OpenSSH keypairs

It all started with Ryan Zezeski's question on Twitter:

Twitter friends: I want to better manage my SSH keys. E.g. different pairs for different things. Looking for good resources. Links please.

I have opinions on this (of course) but it turns out that I've never actually written them all down for various reasons, including that some of them feel obvious to me by now. So this is my shot at writing up what I see as good practices for OpenSSH keypairs. This is definitely not the absolutely best and most secure practice for various reasons, but I consider it a good starting point (but see my general cautions about this).

There are some basic and essentially universal things to start with. Use multiple SSH keypairs, with at least different keypairs for different services; there is absolutely no reason that your Github keypair should be the keypair that you use for logging in places, and often you should have different keypairs for logging in to different places. The fundamental mechanic for doing this is a .ssh/config with IdentityFile directives inside Host stanzas; here is a simple example.
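
A minimal sketch of that mechanic (the hostnames and key paths are illustrative):

Host github.com
  IdentityFile /u/cks/.ssh/ids/github

Host *.cs.toronto.edu
  IdentityFile /u/cks/.ssh/ids/work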

(My personal preference is to have different keypairs for each separate machine I'll ssh from, but this could get hard to manage in a hurry if you need a lot of keypairs to start with. Consider doing this only for keypairs that give you broad access or access to relatively dangerous things.)

Encrypt all of your keys. Exactly what password(s) you should use is a tradeoff between security and convenience, but simply encrypting all keys stops or slows down many attacks. For instance, the recent OpenSSH issue would only have exposed (some of) your encrypted keys, which are hopefully relatively hard to crack.

Whenever possible, restrict where your keys are accepted from. This is a straightforward way to limit the damage of a key compromise at the cost of some potential inconvenience if you suddenly need to access systems from an abnormal (network) location. In addition, if you have some unencrypted keys because you need some automated or unattended scripts, consider restricting what these keys can do on the server by using a 'command=' setting in their .ssh/authorized_keys line; an example where we do this is here (see also this). You probably also want to set various no-* options, especially disabling port forwarding.
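
As a sketch, an authorized_keys line for a restricted automation key might look like this, all on one line; the network, command, and key are placeholders.

from="192.0.2.0/24",command="/usr/local/bin/sync-data",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... sync key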

At this point we're out of truly universal things, as the path splits depending on whether you will access all of your keys via ssh-agent or whether at least some of them will be handled only by ssh (with passphrase challenges every time you use them). There is no single right answer (and covering the issues needs another entry), but for now I'll assume that you'll access all keys via ssh-agent. In this case you'll definitely want to read the discussion of what identities get offered to a remote server and use IdentitiesOnly to limit this.

If you need to ssh to hosts that are only reachable via intermediate hosts, do not forward ssh-agent to the intermediate hosts. Instead, use ProxyCommand to reach through the intermediates. This is sometimes called SSH jump hosts and there are plenty of guides on how to do it. Note that modern versions of OpenSSH have a -W argument for ssh that makes this easy to set up (you no longer need things like netcat on the jumphost).
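
The ssh_config version of this is short (hostnames are placeholders):

Host inner.example.org
  ProxyCommand ssh -W %h:%p jump.example.org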

(There are some cases that need ssh agent forwarding, but plain 'I have to go through A to get to B' is not one of them.)

With lots of keys loaded, your ssh-agent is an extremely large basket of eggs. There are several things you can do here to reduce the potential damage of an attacker gaining access to its full authentication power, although all of them come with convenience tradeoffs:

  • Set infrequently used or dangerous keys so that you'll have to confirm each use, by loading them with ssh-add's -c 'have ssh-agent ask for confirmation' argument.

  • Treat some keys basically like temporary sudo privileges by loading them into ssh-agent with a timeout via ssh-add's -t argument. This will force you to reload the key periodically, much as you have to sudo and then re-sudo periodically.

  • Arrange to wipe all keys from ssh-agent when you suspend, screenlock, or otherwise clearly leave your machine; my setup for this is covered here.

    (This is good practice in general, but it becomes really important when access to ssh-agent is basically the keys to all the kingdoms.)

You'll probably want to script some of these things to make them more convenient; you might have an 'add-key HOST' command or the like that runs ssh-add on the right key with the right -c or -t parameters. Such scripts will make your life a lot easier and thus make you less likely to throw up your hands and add everything to ssh-agent in permanent, unrestricted form.
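
A tiny sketch of such a helper, with made-up key names, paths, and timeouts:

#!/bin/sh
# add-key: load one key into ssh-agent with per-key restrictions
case "$1" in
  github) exec ssh-add -c "$HOME/.ssh/ids/github" ;;
  work)   exec ssh-add -t 8h "$HOME/.ssh/ids/work" ;;
  *)      echo "usage: add-key github|work" 1>&2; exit 1 ;;
esac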

(Also, check your ssh_config manpage to see if you have support for AddKeysToAgent. This can be used to create various sorts of convenient 'add to ssh-agent on first use' setups. This is not yet in any released version as far as I know but will probably be in OpenSSH 7.2.)

PS: You probably also want to set HashKnownHosts to yes. I feel conflicted about this, but it's hard to argue that it doesn't increase security and most people won't have my annoyances with it.

PPS: My personal views on SSH key types are that you should use ED25519 keys when possible and otherwise RSA keys (I use 4096 bits just because). Avoid DSA and ECDSA keys; the only time you should generate one is if you have to connect to a device that only supports DSA (and then the key should be specific to the device).
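
For the record, generating keys of these types looks like this (the file names are up to you):

ssh-keygen -t ed25519 -f ~/.ssh/ids/somehost
ssh-keygen -t rsa -b 4096 -f ~/.ssh/ids/otherhost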

SSHKeyGoodPractices written at 01:54:16

2016-01-29

What SSH identities will be offered to a remote server and when

I've already written an entry on what SSH keys in your .ssh/config will be offered to servers, but it wasn't quite complete and I still managed to confuse myself about this recently. So today I'm going to try to write down in one place more or less everything I know about this.

Assuming that you're using ssh-agent and you don't have IdentitiesOnly set anywhere, the following is what keys will be offered to the remote server:

  1. All keys from ssh-agent, in the order they were loaded into ssh-agent.
  2. The key from a -i argument, if any.
  3. Any key(s) from matching Host or Match stanzas in .ssh/config, in the order they are listed (and matched) in the file. Yes, all keys from all matching stanzas; IdentityFile directives are cumulative, which can be a bit surprising.

    (If there are no IdentityFile matches in .ssh/config, OpenSSH will fall back to the default .ssh/id_* keys if they exist.)

(If you aren't using ssh-agent, only #2 and #3 are applicable and you can pretty much ignore the rest of this entry.)

If there is a 'IdentitiesOnly yes' directive in any matching .ssh/config stanza (whether it is in a 'Host *' or a specific 'Host <whatever>'), the only keys from ssh-agent that will be offered in #1 are the keys that would otherwise be offered in both #2 and #3. Unfortunately IdentitiesOnly doesn't change the order that keys are offered in; keys in ssh-agent are still offered first (in #1) and in the order they were loaded into ssh-agent, not the order that they would be offered in if ssh-agent wasn't running.

Where the 'IdentitiesOnly yes' directive comes from makes no difference, as you'd pretty much expect. The only difference between having it in eg 'Host *' versus only (some) specific 'Host <whatever>' entries is how many connections it applies to. This leads to an important observation:

The main effect of a universal IdentitiesOnly directive is to make it harmless to load a ton of keys into your ssh-agent.

OpenSSH servers have a relatively low limit on how many public keys they will let you offer to them; usually it's six or less (technically it's a limit on total authentication 'attempts', which can wind up including eg a regular password). Since OpenSSH normally offers all keys from your ssh-agent, loading too many keys into it can cause authentication problems (how many problems you have depends on how many places you can authenticate to with the first five or six keys loaded). Setting a universal 'IdentitiesOnly yes' means that you can safely load even host-specific keys into ssh-agent and still have everything usable.

(This is the sshd configuration directive MaxAuthTries.)
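
Concretely, a universal 'IdentitiesOnly yes' setup can be as simple as this (the specific host and key path are illustrative):

Host *
  IdentitiesOnly yes

Host somehost.example.org
  IdentityFile /u/cks/.ssh/ids/somehost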

Note that specifying -i does not help if you're offering too many keys through ssh-agent, because the ssh-agent keys are offered first. You must enable IdentitiesOnly as well, either in .ssh/config or as a command line option. Even this may not be a complete cure if your .ssh/config enables too many IdentityFile directives and those keys are loaded into ssh-agent so that they get offered first.

If the key for -i is loaded into your ssh-agent, OpenSSH will use the ssh-agent version for authentication. This will cause a confirmation check if the key was loaded with 'ssh-add -c' (and yes, this still happens even if the -i key is unencrypted).

(ssh-agent confirmation checks only happen when the key is about to be used to authenticate you, not when it is initially offered to the server.)

PS: you can see what keys you're going to be offering in what order with 'ssh -v -v ...'. Look for the 'debug2: key: ...' lines, and also 'debug1: Offering ...' lines. Note that keys with paths and marked 'explicit' may still come from ssh-agent; that explicit path just means that they're known through an IdentityFile directive.

Sidebar: the drawback of a universal IdentitiesOnly

The short version is 'agent forwarding from elsewhere'. Suppose that you are on machine A, with a ssh-agent collection of keys, and you log into machine B with agent forwarding (for whatever reason). If machine B is set up with universal IdentitiesOnly, you will be totally unable to use any ssh-agent keys that machine B doesn't know about. This can sort of defeat the purpose of agent forwarding.

There is a potential half way around this, which is that IdentityFile can be used without the private key file. Given a stanza:

Host *
  IdentitiesOnly yes
  IdentityFile /u/cks/.ssh/ids/key-ed2

If you have a key-ed2.pub file but no key-ed2 private key file, this key will still be offered to servers. If you have key-ed2 loaded into your ssh-agent through some alternate path, SSH can authenticate you to the remote server; otherwise ssh will offer the key, have it accepted by the server, and then discover that it can't authenticate with it because there is no private key. SSH will continue to try any remaining authentication methods, including more identities.

(This is the inverse of how SSH only needs you to decrypt private keys when it's about to use them.)

However, this causes SSH to offer the key all the time, using up some of your MaxAuthTries even in situations where the key is not usable. Unfortunately, as far as I can tell there is no way to tell SSH 'offer this key only if ssh-agent supports it', which is what we really want here.

SSHIdentitiesOffered written at 02:24:59

