2011-09-13
'Web of trust' is a security failure
Here is a simple yet unpleasant thing (one that I have circled around and implied before, but never stated outright):
Any time you propose a security system that uses a 'web of trust' model to validate anything, you've failed; the system is not secure in practice. You can demonstrate all of the math that you want, but in the end it comes down to this: users must pick the right people to trust in order for the system to be secure. And they will not. This is not a theoretical 'they won't'; it is an experimentally and historically proven 'they will not', because they never have before now.
(Some number of them will try their best but not know enough. Some number of them will pick randomly. Some number of them will make mistakes. And if the system is attacked, some number of them will be fooled.)
When you propose a web of trust system, what you have really done is abdicate the work of making the system secure and instead dump it on the users. This is a great way to feel morally superior (after all, the system works perfectly if used right, so it's clearly the users' fault for screwing it up), but it is very much not a useful way to design an actual security system. You have not solved the real problem; you are ducking it.
(Attestation is a handy idea, but it does not need to be handled in the system in order to be useful; in real life, people already use all sorts of out of band mechanisms to handle it.)
Another way of putting this is that a web of trust system is not actually a secure system out of the box; it is not secure by itself, and indeed it's often not even operable by itself. Instead, it's just most of the components of a secure system, and assembling the actual secure system is left as an exercise for the users. Claiming that this unassembled pile of components is 'secure' is vacuous, because it is not yet a complete and operable system.
This is a specific instance of the general issue that asking users questions never increases security (in part because (most) users don't care about security). And yes, 'who do you trust?' is clearly a question.
Web of trust also has a number of practical low level problems that I've written about in WebOfTrustFlaws (especially when used as a replacement for SSL CAs).
Sidebar: web of trust as an implementation detail
The one time that a 'web of trust' system is acceptable is if the users don't have to answer any questions in order to use it securely, ie they are not required to pick their own set of trust roots. Instead, the system is designed to work with a preconfigured, pre-vetted list and then the designer takes responsibility for keeping that list good.
(The system will need a way to update the list, for obvious reasons.)
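As a minimal sketch of the shape of this (in Python, with entirely hypothetical fingerprints and function names), the only trust decision is made against the pre-vetted list and users never get asked anything:

    # Shipped with the application and maintained by its designer; users
    # never pick trust roots themselves. The fingerprints are placeholders.
    TRUSTED_FINGERPRINTS = {
        "3f:a1:...",  # hypothetical vetted signer A
        "9c:44:...",  # hypothetical vetted signer B
    }

    def is_trusted(fingerprint):
        # The only trust decision, and it is made against the pre-vetted list.
        return fingerprint in TRUSTED_FINGERPRINTS

    def update_trust_list(new_fingerprints):
        # The update mechanism: the designer ships revised lists (for
        # example with normal software updates) as the set of good
        # signers changes.
        TRUSTED_FINGERPRINTS.clear()
        TRUSTED_FINGERPRINTS.update(new_fingerprints)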
2011-09-11
The weakness of the certificate authority model, illustrated
There are two leading models for checking identity via public key cryptography. When someone demonstrates that they know the private key for a given public key, you can either check that you know the public key itself (the SSH model) or check that the public key is itself signed by an authority you approve of (the SSL CA model). In theory SSL can be used for either model; in practice, many SSL tools and APIs seem to be strongly convinced that you should use the certificate authority model. The problem with this is that the SSL CA security model has a major flaw, one that we have seen on display in the DigiNotar breach.
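To make the contrast concrete, here is a minimal sketch of both checks in Python; the host name and the recorded fingerprint are hypothetical placeholders.

    import hashlib
    import socket
    import ssl

    HOST = "www.example.org"  # hypothetical server

    # The SSL CA model: accept any certificate with a valid signature
    # chain leading to a CA in the platform's trusted root set.
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            pass  # handshake succeeding means some trusted CA vouched for it

    # The SSH model: ignore who signed the certificate and check that it
    # is the exact certificate (really the public key) we already know.
    KNOWN_SHA256 = "..."  # fingerprint recorded out of band (placeholder)
    pem = ssl.get_server_certificate((HOST, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    if hashlib.sha256(der).hexdigest() != KNOWN_SHA256:
        raise Exception("server certificate is not the one we know")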
Any model needs to deal with the possibility of compromised or improperly issued certificates. The CA model's answer is a CRL (certificate revocation list), a list of certificates that you should not accept even though they have a valid CA signature. This is the problem: to create a CRL entry, you need to know about the bad certificate. In fact you need to know relatively specific information about a certificate (such as its serial number) in order to revoke it. In short, the SSL CA security model requires perfect CA knowledge.
In other words, the mechanism designed to cope with security breaches only works if the CA has a limited security breach: broken into badly enough that the attacker could issue certificates, but not so badly that the attacker could prevent those certificates from being recorded correctly. If the breach is not limited, or even if you don't know whether it was or was not limited, the only recourse in the SSL CA model is to revoke your approval of the CA itself, which results in all certificates signed by that CA being rejected.
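A toy illustration of the limitation (with made-up serial numbers): a CRL is in essence a CA-signed list of revoked serial numbers, so a rogue certificate whose serial the CA never recorded can never show up on it.

    # What the CA knows it must revoke; the attacker's certificates are
    # exactly the ones that may be missing from this set.
    REVOKED_SERIALS = {0x1001, 0x1002}

    def is_revoked(serial):
        return serial in REVOKED_SERIALS

    print(is_revoked(0x1001))  # True: a known bad certificate is rejected
    print(is_revoked(0xBAD))   # False: an unrecorded rogue cert sails through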
This is more or less exactly what happened with DigiNotar. No one believes any more that DigiNotar knows what bad certificates have been issued with its signature (their list of such certificates has already grown several times, and may yet grow again). In the absence of trustworthy knowledge, the only remedy possible is dropping DigiNotar's root certificate.
The SSH model does not necessarily scale when you are dealing with a large and unpredictable set of identities, but it works perfectly well when you are dealing with a smaller, enumerable set. And it does not require perfect knowledge of what identities you may have been compromised (or just fooled) into certifying. Hence my long-standing desire to see SSL tools support direct certificate checking.
(When I wrote that original entry, I did not expect that we would get such a perfect illustration of the problem. Sadly, we did.)
2011-09-05
The real reason why true asynchronous file IO is hard
Yesterday I wrote about the obstacles that faced the Linux kernel in delivering true kernel-level asynchronous file IO. In that entry I wrote:
Revising all of this [synchronous filesystem] code to work asynchronously is a lot of work and so didn't get done for a while.
Well. That's the real reason in a nutshell. To wit: writing callback-based non-blocking code in conventional languages, especially in C, is hard and a pain in the rear. It takes what is simple straight-line code and contorts it massively. You wind up with a maze of callback functions, state tracking objects, and so on, and your actual logic and control flow vanishes into the underbrush, never to be seen again. It should be no surprise that people avoid this like the plague; it's hard to write, hard to debug, and hard to understand.
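As a small Python illustration of the contortion (async_read and async_write are hypothetical callback-based IO primitives), compare the straight-line version of some logic with its callback version:

    # Straight-line synchronous version: the logic is obvious at a glance.
    def copy_header(src, dst):
        header = src.read(512)
        dst.write(header)

    # Callback-based version of the same logic: the control flow is
    # scattered across nested callbacks, and the state has to be carried
    # along by hand.
    def copy_header_async(src, dst, done):
        def on_read(header):
            def on_write():
                done()                      # finally signal completion
            async_write(dst, header, on_write)
        async_read(src, 512, on_read)       # 'returns' immediately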
In fact, the most successful abstraction for dealing with asynchronous IO is the thread, which you use to make the IO synchronous again in your actual code. This is clearly visible in things like the Linux kernel, where the kernel-level IO is almost invariably asynchronous but most code immediately waits for its IO to complete.
All of this should not be surprising. Ordinary code contains a large amount of implicit state (in the form of the call stack, local variables, and the location of the program counter). With our current technologies, writing explicitly asynchronous code generally requires capturing and restoring this state explicitly, while threads manage the state for you implicitly. Having to do things explicitly instead of having them handled implicitly is always a pain.
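A minimal Python sketch of that tradeoff (the paths and the processing step are hypothetical): each worker below is plain sequential code, and all of its state lives implicitly on the thread's stack.

    import threading

    def process(data):
        return len(data)  # hypothetical processing step

    def worker(path):
        with open(path) as f:   # an ordinary blocking read; the thread's
            data = f.read()     # stack and locals hold the state implicitly
        process(data)

    paths = ["a.txt", "b.txt"]  # hypothetical input files
    threads = [threading.Thread(target=worker, args=(p,)) for p in paths]
    for t in threads:
        t.start()
    for t in threads:
        t.join()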
(It doesn't help that this implicit state is so fundamental to how we think about code, since in the abstract, code is the sequence of steps that get run through in some environment.)
This issue makes me feel that general asynchronous programming in current languages is not going to catch on and become popular, because all of them make it too hard. One way or another, asynchronous programming systems need to preserve the implicit state for you, so that you can write what looks like ordinary sequential code and have it become asynchronous mostly behind your back. Probably it's too late for C, though; threads are likely to be it, more or less.
(There are languages that have done this, sometimes only in limited ways, such as Python's 'yield'.)
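For example, here is a minimal Python sketch of the idea (the IO 'requests' and the toy driver are made up): the function reads like sequential code, but each yield suspends it, and the generator's frame quietly preserves the locals and the program counter until it is resumed.

    def transform(data):
        return data.upper()  # hypothetical processing step

    def fetch_and_save():
        # Looks like straight-line code; each yield hands an IO request to
        # a scheduler while the generator frame keeps our state across the
        # suspension.
        data = yield ("read", "input.dat")
        result = transform(data)
        done = yield ("write", "out.dat", result)
        return done

    # A toy driver standing in for a real event loop; it satisfies every
    # request immediately, which is enough to show the state being kept.
    g = fetch_and_save()
    print(g.send(None))     # ('read', 'input.dat')
    print(g.send("hello"))  # ('write', 'out.dat', 'HELLO')
    try:
        g.send(True)
    except StopIteration as e:
        print(e.value)      # True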