2009-12-23
Another demonstration of SSL Certification Authority (in)competence
Every so often, another SSL CA provides a demonstration that they are run by baboons, and not very smart ones at that. Past demonstrations have involved security, or the lack of it; the current one involves terrible business practices, which is arguably worse (at least in terms of how large a failure it is).
Put simply, the current root CA certificate for ipsCA expires on December 29th. This will orphan and invalidate all SSL certificates signed (directly or indirectly) with it, which is almost all of the certificates that ipsCA has sold.
(Full proper SSL certificate validation requires not just that the direct certificate be within its valid time range but that all certificates in the certificate chain are valid. We may be about to find out which programs do full proper cert validation, and which ones take shortcuts.)
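As an illustration of just the date side of this, here's a quick sketch in Python with pyOpenSSL that checks every certificate in a chain for expiry; the filenames are made up, and real validation involves much more than dates:

    # Sketch: check that every certificate in a chain is inside its
    # validity period (the filenames here are made up).
    from OpenSSL import crypto

    def load_cert(fname):
        with open(fname, "rb") as f:
            return crypto.load_certificate(crypto.FILETYPE_PEM, f.read())

    # The server certificate first, then intermediates, then the root.
    chain = [load_cert(f) for f in ("server.pem", "intermediate.pem", "root.pem")]

    for cert in chain:
        name = cert.get_subject().commonName
        if cert.has_expired():
            print("EXPIRED:", name, "not valid after", cert.get_notAfter())
        else:
            print("ok:", name, "valid until", cert.get_notAfter())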
While ipsCA has a new root CA certificate (and is re-signing their SSL certificates with it), it doesn't do them much good; it's currently included in exactly one program, that being IE 8. Their current official statement is remarkably non-informative about other browsers, IMAP mail clients, and so on. This lack of broad inclusion effectively renders their new root certificate, and any SSL certificate signed by it, useless, since the only real reason to pay a CA for a SSL cert is to avoid your users getting a scary warning, and avoiding that warning requires the CA's root cert to be included in whatever program your users are using.
Let me repeat that in different words: having your root CA certificate included in as many programs as possible is a CA's real job. Thus, ipsCA failed to do the only thing that is essential for them to make money (and are about to experience the very harsh downside of the SSL business model), despite basically sitting on a license to print money, which is what a root certificate is.
One reason that ipsCA's new root cert is in so few programs may be because it appears to only have been generated this September (judging from its 'Not Before' date). From what I understand, getting your root certificate included in programs is a very slow process, and even once this happens there is the small issue of getting users to actually update to the new versions of all of these things. Leaving it until four months before your old certificate expires is simply not workable.
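Pulling the 'Not Before' date out of a certificate is trivial, if you're curious; for example, with pyOpenSSL (the filename here is made up):

    # Sketch: print the validity window of a certificate (the filename
    # is made up; it would be a local copy of the new root certificate).
    from OpenSSL import crypto

    with open("ipsca-new-root.pem", "rb") as f:
        cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())

    # The dates come back as ASN.1 time strings, for example b'20090910000000Z'.
    print("Not Before:", cert.get_notBefore())
    print("Not After: ", cert.get_notAfter())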
The larger lesson I draw from this is a reinforcement of my extremely cynical view of CA competence, since ipsCA fell down on this despite having practically the best motivation possible. If I can't count on CAs to merely preserve their ability to make money, what can I count on them to do?
(Obligatory attribution: I learned of this issue today from Bob Plankers, via Planet Sysadmin.)
2009-12-22
Using OpenID for local web application authentication
We have a problem, and that problem is authentication. In a not uncommon pattern, we have a central set of core services, run by a core group: email, fileservers, the login servers, and so on. Then we have a bunch of other people who want to build various web applications, ranging from departmental things all the way down to graduate students putting together projects.
Many of these web applications need accounts and authentication. The natural and best logins and passwords to use are people's existing departmental accounts, because who wants to force people to remember another password? However, for obvious reasons we're in no position to give our Unix password file out to people in general; we use shadow files for a reason, after all. Ideally we would like to not even give them out for departmental web applications.
At a conceptual level, what we need is some sort of authentication service. It's easy to build something that takes a plaintext password and login and gives you a yes or no answer (in fact, given IMAP people can build one themselves), but this has two drawbacks. First, we'd like the service not to be a mass password guessing service too, and second, we'd ideally like web applications to never even deal with those departmental passwords, so that we don't have to worry about people's applications mis-handling them.
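To show just how simple the basic yes-or-no version is, here's roughly what it looks like as a check against an IMAP server (the server name is made up, and a real version would need rate limiting and logging at the least):

    # Sketch: a trivial yes/no password check done by trying an IMAP
    # login (the server name is made up).
    import imaplib

    def check_password(user, password, server="imap.cs.example.com"):
        """Return True if the IMAP server accepts this login and password."""
        try:
            conn = imaplib.IMAP4_SSL(server)
            conn.login(user, password)
            conn.logout()
            return True
        except imaplib.IMAP4.error:
            return False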
For a while I have been thinking that OpenID could be the solution to this problem. It should be simple to create an OpenID provider that authenticates users against our Unix password file, and expose it as, say, 'openid.cs/~<user>/'. Authors of local web apps would then have a simple way of authenticating people; essentially they would get access to our departmental logins for free, in a way that means we don't have to worry about their application and system security, or try to get approval for sharing selected encrypted shadow passwords with them.
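I haven't actually written this provider, but the core password check against our shadow file would only be a few lines; here's a sketch (it needs enough privileges to read /etc/shadow, and the crypt and spwd modules are Unix-only):

    # Sketch: verify a login against the local Unix shadow password file.
    # This needs enough privileges to read /etc/shadow.
    import crypt
    import hmac
    import spwd

    def check_unix_password(user, password):
        try:
            entry = spwd.getspnam(user)
        except KeyError:
            return False
        hashed = entry.sp_pwdp
        # Locked or passwordless accounts have things like '*' or '!'
        # instead of a real hash.
        if not hashed or hashed[0] in ("*", "!"):
            return False
        computed = crypt.crypt(password, hashed)
        return computed is not None and hmac.compare_digest(computed, hashed)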
(And who knows, a departmental OpenID identity might turn out to be more generally useful; people might want to use it when dealing with outside websites that use OpenID, if there are very many.)
I suspect that it's simpler to integrate (restricted) OpenID into modern web applications than to try to hook them into a Unix or Unix-ish password authentication system. And even if it's just as complicated, the upstream developers are more likely to accept patches to add OpenID support than to add support for authenticating against a Unix password file; it's simply more general, these days.
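For illustration only: with the python-openid library, the relying-party side of a web application starts an authentication with just a few calls (the URLs here are made up, and a real application would keep the session and the OpenID store somewhere persistent):

    # Sketch: starting an OpenID authentication from a web application
    # with the python-openid library (the URLs are illustrative only).
    from openid.consumer import consumer
    from openid.store.memstore import MemoryStore

    store = MemoryStore()    # a real application would use a file or SQL store
    session = {}             # normally the web framework's session dictionary

    oc = consumer.Consumer(session, store)
    auth_req = oc.begin("http://openid.cs.example.com/~cks/")
    # Send the user's browser to this URL; the provider sends them back
    # to return_to with the result, which the app checks with oc.complete().
    print(auth_req.redirectURL(realm="http://webapp.cs.example.com/",
                               return_to="http://webapp.cs.example.com/openid-return"))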
2009-12-20
Some thoughts on intercepting https traffic
It's been pointed out to me that there are legitimate reasons to intercept and inspect https traffic, and this can even be a primary purpose of having a local certificate authority. For example, breaking open https traffic can be vital for being able to see and possibly analyze malware downloads.
(Note that you should really not do this covertly, without admitting that you're inspecting https traffic. Sooner or later someone will notice that the SSL certificate authority for some outside site is your own internal CA, and things go rapidly downhill from there.)
If you are going to do this, you should do it selectively, for both policy and technical reasons. The policy reasons should be obvious, including that the less you intercept the less that you can inadvertently leak if something goes wrong. The technical reason is that unless you build a quite complicated https interception system, you only really want to intercept things that have valid certificates.
With simple interception schemes, you set up SSL with the internal client, including giving it a valid signed certificate, before you've necessarily connected to the remote server, gotten its certificate, and validated it. If the remote server cert fails to validate, pretty much the only thing you can do is break the connection. Even with a more complicated scheme, you can't pass through the invalid server cert while still being able to intercept the traffic, and without seeing the real server cert there is no way for the user to make a sensible decision about whether or not to continue.
I can think of two ways to do such selective https interception. The best way is to use a https proxy, because this gives you access to the actual hostname the client is trying to connect to; this lets you make the most fine-grained decisions about what traffic to intercept. In this approach, the https proxy selectively diverts some connections to your special https inspection system, while proxying all of the rest as usual.
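As a sketch of why the proxy approach gives you better information, here's roughly what the decision looks like once you can see the CONNECT request (the hostname is made up, and a real proxy obviously needs much more than this):

    # Sketch: decide from a proxied https CONNECT request whether this
    # connection should be diverted to the inspection system (the
    # hostname here is made up).
    INSPECT_HOSTS = {"malware-download.example.com"}

    def should_intercept(request_bytes):
        # The first line of a proxied https connection looks like:
        #   CONNECT www.example.com:443 HTTP/1.1
        first_line = request_bytes.decode("ascii", "replace").split("\r\n", 1)[0]
        parts = first_line.split()
        if len(parts) < 2 or parts[0] != "CONNECT":
            return False
        host = parts[1].rsplit(":", 1)[0]
        return host in INSPECT_HOSTS

    print(should_intercept(b"CONNECT www.example.com:443 HTTP/1.1\r\n\r\n"))
    print(should_intercept(b"CONNECT malware-download.example.com:443 HTTP/1.1\r\n\r\n"))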
The more brute force approach is to use firewall redirection to divert https traffic for some IP addresses off to your https inspection system. This has the twin flaws that you have to get all of the IP addresses of the websites you want to intercept traffic for, and that you may intercept too much traffic by using IPs instead of hostnames (although until SNI catches on this probably won't be much of a worry, since shared-host https is basically impossible right now).
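If you go the firewall route, you also get to maintain the hostname-to-IP mapping yourself; something like this sketch would generate the IP list to feed into your firewall rules (the hostnames are made up):

    # Sketch: expand a list of hostnames into the IP addresses that the
    # firewall would have to redirect (the hostnames are made up).
    import socket

    HOSTS_TO_INTERCEPT = ["malware-download.example.com", "badsite.example.org"]

    for host in HOSTS_TO_INTERCEPT:
        try:
            infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
        except socket.gaierror:
            print("# could not resolve", host)
            continue
        for addr in sorted({info[4][0] for info in infos}):
            print(host, addr)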
2009-12-19
Local CAs and an interesting consequence of the SSL security model
Suppose that your organization creates an internal organizational Certificate Authority, so that it can issue SSL certificates for strictly internal hostnames. Of course, everyone needs to have the internal CA's certificate loaded in their browser and so on in order to get work done on your intranet; as a practical matter, you probably preload it in your standard machine setups. I suspect that this is a not uncommon setup in sufficiently large companies.
It's recently struck me that this has an interesting consequence: your company security and firewall people can now intercept and proxy any or all external https websites without certificate warnings. All they have to do is make a certificate for whatever hostname they want and sign it with the internal CA certificate. This works because CA certs do not have restricted spheres of operation (at least as far as I know), so you cannot create a CA certificate that can only be used to sign your internal hostnames.
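To show how little work is involved, here's a sketch of minting such a certificate with pyOpenSSL, assuming you have the internal CA's certificate and private key on hand (the filenames and the hostname are made up):

    # Sketch: mint a certificate for an arbitrary outside hostname and
    # sign it with the internal CA (filenames and hostname are made up).
    from OpenSSL import crypto

    with open("internal-ca.pem", "rb") as f:
        ca_cert = crypto.load_certificate(crypto.FILETYPE_PEM, f.read())
    with open("internal-ca.key", "rb") as f:
        ca_key = crypto.load_privatekey(crypto.FILETYPE_PEM, f.read())

    key = crypto.PKey()
    key.generate_key(crypto.TYPE_RSA, 2048)

    cert = crypto.X509()
    cert.get_subject().CN = "www.some-external-site.com"
    cert.set_serial_number(1000)
    cert.gmtime_adj_notBefore(0)
    cert.gmtime_adj_notAfter(365 * 24 * 60 * 60)
    cert.set_issuer(ca_cert.get_subject())
    cert.set_pubkey(key)
    cert.sign(ca_key, "sha256")

    with open("www.some-external-site.com.pem", "wb") as f:
        f.write(crypto.dump_certificate(crypto.FILETYPE_PEM, cert))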
(You can promise to only use your internal CA cert for internal hostnames, but the difference between 'only used' and 'can only be used', while small, is vital.)
Since this will be more or less transparent (although not difficult to detect), it's unfortunately now likely to be significantly more attractive to the powers that be. There are all sorts of firewall security people who probably salivate over the prospect of no longer having to either pass https traffic uninspected and unmolested or block it entirely.
(The cynical view is that even having restricted spheres of operation for CA certificates wouldn't help. The kind of people who would push for firewall https interception would also push for the company CA certificate having no sphere of operation restriction, and that's always going to be possible.)