Why SSL needs certificate authorities, or at least trust roots

January 2, 2009

Despite what I wrote yesterday and my general views on authenticating SSL, I think that in real life SSL still needs certificate authorities, or at least some sort of trust roots. The problem is that normal people are simply not interested in assessing trust issues and probably are not capable of doing so. In fact I'm not sure that I'm capable of doing it.

(As I wrote once before, in practice I make no attempt to actually verify the SSH host keys that I get prompted to approve, so the odds that I would hand-validate SSL certificates are low. And I am far more technical and aware of these issues than ordinary people are. Remember, security is a pain.)

If SSL is going to mean anything more than very casual opportunistic encryption in practice, we need to delegate trust issues to someone; in other words, we need some sort of trust roots. The traditional alternative answer to this is a 'web of trust' model, but this is just a cop-out, and a dangerous one at that. You're still leaving it to people to pick their own trust roots, with the predictable result that many people are going to do a bad job of it. Plus, in practice I believe that the whole model has flaws that weaken its security.

(Yes, yes, at this point it is popular to blame the people for not doing their 'job' correctly. Wrong. If you design a system that you know is going to be used incorrectly by people, you have failed because you haven't solved the important problem.)

(On a side note, I'd expect a web of trust system to have even more problems with revocation than SSL currently does, since it would involve more signatures that are more widely distributed.)

This doesn't mean that you need certificate authorities as they currently are (that would be truly depressing), but it does mean that you need trust roots and networks of trust that are trustworthy enough that they can be configured into browsers and shipped off to people who don't want to care about all of this. My feeling is that each trust root will have to have a pretty flat trust hierarchy in order to stay trustworthy overall; we've already seen the practical problem with doing otherwise.

I haven't thought very much about how many trust roots there should be. On the one hand, more trust roots should mean that less depends on any given trust root, which would make it easier (in practice) to revoke one when things go wrong. On the other hand, the more trust roots you have the less you can inspect and monitor each individual one's operations, so they can be sloppier before getting caught.

(The cynic would say that there is no real monitoring of CAs now; all you have to do is pass an initial inspection and then you're pretty much set.)


Comments on this page:

From 216.254.116.241 at 2009-01-05 23:01:51:

I agree with you that we will probably always need some sort of commonly-trusted authorities. Most users don't have the resources to properly identify every entity that they interact with over the 'net, and even those who do might prefer to delegate some of these tasks. And good software authors need to be able to offer reasonable defaults, especially since they won't be able to assume the existence of anything like an external validation agent (which would itself need reasonable defaults) for a long time to come.

So given that we're going to need some centralized authorities, how should software and operating system authors choose and deploy their pre-trusted roots? It seems to me that all of these principles are needed:

  • trust only root authorities with well-established verification practices.
  • trust a wide enough range of roots that most secured systems on the 'net today will cleanly validate.
  • do not trust any root with known bad practices, and be able to deprecate trust in a root relatively easily if it proves to be untrustworthy.

Ideally, these decisions would not need to be fully black-or-white either, to allow some root authorities a "probationary period" where they are not fully trusted, but can still contribute to the infrastructure.

Unfortunately, the current X.509 deployment sets these principles at odds with each other, and it has no mechanism for anything but black-or-white, fully-trusted or not-at-all root authorities.
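To make the idea concrete, here is a minimal sketch (in Python, with hypothetical root names and an invented three-level policy) of the kind of graded trust store the current deployment lacks; note that demoting a root is gradual rather than all-or-nothing:

    from enum import Enum

    class Trust(Enum):
        """Trust levels a root authority can hold (invented policy)."""
        DISTRUSTED = 0    # known bad practices: never accept
        PROBATIONARY = 1  # counts toward validity, but never alone
        FULL = 2          # a single endorsement is sufficient

    # A hypothetical shipped trust store: root name -> trust level.
    trust_store = {
        "Old Established CA": Trust.FULL,
        "New Entrant CA 1": Trust.PROBATIONARY,
        "New Entrant CA 2": Trust.PROBATIONARY,
        "New Entrant CA 3": Trust.PROBATIONARY,
        "Careless CA": Trust.DISTRUSTED,
    }

    def demote(store, root):
        """Deprecate a root one step at a time: FULL drops to
        PROBATIONARY, anything else drops to DISTRUSTED."""
        if store.get(root) is Trust.FULL:
            store[root] = Trust.PROBATIONARY
        else:
            store[root] = Trust.DISTRUSTED

Under today's all-or-nothing root lists, the only operation available is outright removal, which helps explain why it is so rarely exercised.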

In practice, what happens is that software authors simply cast as wide a net as possible to ensure that all validly certified sites are accepted by default in their software. What incentives do they have to limit their list of trusted roots? What mechanisms do they have available to put authorities on probation?

Certificate authorities themselves currently make money from issuing certificates -- apart from bad press, they have little to no incentive to decline to issue them. It's unclear how much bad press it would actually take to unseat an entrenched trusted root in the current model.

While the user has an interest in preventing forgeries, site administrators' incentives are even more perversely aligned against it than software authors' are. Since X.509 allows only one certifier per certificate, every site administrator is effectively bound to advocate for their certifier's inclusion in the list of default trusted roots. And since the subject of the cert foots the bill under today's X.509 model, the administrator also has an incentive to purchase the cheapest cert (which often means the most fly-by-night, lowest-overhead "authority") and then lobby for that authority's inclusion in default trusted root lists.

In such a racket, there is no effective way for any smaller player to start participating, and there is no effective way to deprecate any entrenched authority without causing massive disruption to anyone certified by that authority.

Allowing certificates to have more than one issuer, and enabling software authors to grant marginal trust to probationary authorities, would address both of those problems. The most straightforward way to realize this is with OpenPGP certificates for TLS. The Monkeysphere project already demonstrates the use of OpenPGP certificates to authenticate SSH connections, though it does not include any trusted roots by default. Work remains to adapt similar principles to other transport layers.
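Here is a rough sketch of how multi-issuer validation could work, reusing the graded trust store sketched earlier. It is loosely modeled on OpenPGP's validity calculation (GnuPG by default considers a key valid given one fully trusted signature or three marginally trusted ones); the quorum of three below is just that borrowed default, not a recommendation:

    def cert_is_valid(issuers, store, marginals_needed=3):
        """Accept a certificate carrying several issuer signatures:
        one fully trusted issuer suffices, or enough probationary
        ("marginal") issuers in combination."""
        levels = [store.get(name, Trust.DISTRUSTED) for name in issuers]
        if Trust.FULL in levels:
            return True
        return levels.count(Trust.PROBATIONARY) >= marginals_needed

    # A site certified by three probationary roots still validates,
    # which gives new authorities a way to start participating
    # without being fully trusted on day one:
    cert_is_valid(["New Entrant CA 1", "New Entrant CA 2",
                   "New Entrant CA 3"], trust_store)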

-- Daniel Kahn Gillmor <dkg@fifthhorseman.net>