2010-12-31
The only way to really be secure with SSL
There is a popular view that use of SSL creates authentication. If you seriously care about security (and worry a lot about interception), this is not the case. Assuming that all else goes well, an SSL certificate only creates authentication if you (as the user of a website) can trust that the certificate actually belongs to the website; otherwise you could be talking to an imposter or to someone conducting a man-in-the-middle attack.
(Note that there is a lot that can go wrong before you get to this point if there are people involved, or even software that makes mistakes.)
Knowing this is a lot different than merely knowing that a website's SSL certificate is signed by a general CA that you accept. It follows that the only way to be really secure with SSL in practice is to control the full signature authentication chain, so that you simply don't accept SSL certificates signed by general CAs; this is the only way to keep either CA screwups or 'corrupt' CAs from destroying your security by issuing imposter certificates through accident or malice.
(As I've alluded to before, I put 'corrupt' in scare quotes because you can't really call CAs corrupt as such when they are actually arms of a hostile government, or simply vulnerable to legal coercion. By the way, don't think that you won't get spied on just because you've got no espionage value for governments. National governments have a long history of conducting economic spying on behalf of sufficiently important local companies.)
Unfortunately this is relatively difficult with today's software. It isn't enough to simply have a private CA and preload all of your employees' machines with the CA root certificate; that just lets them accept your CA in addition to all of the other CAs. If you really care about security, you do not want your employees accepting certificates for your machines that are signed by anyone other than your CA (and you do not want this to be something they can override under any circumstances, because at this point you are well into the rainbow series 'better to fail than to disclose information' territory).
(I say employee instead of user because generally employees are the only people you can make go through this much pain and annoyance.)
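(To make concrete what 'accept only certificates signed by your CA' means at the software level, here is a minimal sketch of a pinned client using Python's ssl module; the file and host names are hypothetical. This sort of pinning is easy in tools you write yourself; the hard part is getting browsers and everything else your employees actually use to behave the same way.)

    import socket
    import ssl

    # 'internal-ca.pem' is a hypothetical file holding only your private
    # CA's root certificate; the general CA bundle is never loaded.
    ctx = ssl.create_default_context(cafile="internal-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED   # the client default, but be explicit
    ctx.check_hostname = True

    # A certificate signed by any other CA, including an imposter
    # certificate from a compromised or coerced CA, fails verification here.
    with socket.create_connection(("internal.example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="internal.example.com") as tls:
            print(tls.getpeercert()["subject"])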
The two solutions to this are client certificates and custom software. Client certificates let your SSL servers verify that they are talking directly to the employee's machine (and thus that it has accepted your certificate and your strong crypto), with no CA-enabled 'transparent' interception going on. The easiest custom software is simply to use a VPN (with appropriate certificates and non-relayable authentication); you can then make your important secure services only accept connections from the VPN's IP address range. Using a VPN has the twin advantages that you do not have to figure out how to make all of your important SSL servers require client certificates (and manage them), and that you can secure services that do not use SSL or do not use SSL soon enough.
(On a general level you're still vulnerable, but now you only have to worry about keyloggers and other problems with compromised employee machines. And client issues are theoretically out of scope for SSL anyways.)
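(And as a sketch of the client certificate half, here is roughly what requiring them looks like in a small Python HTTPS server; the certificate file names are hypothetical stand-ins, and a production web server would express the same requirement in its own configuration instead.)

    import http.server
    import ssl

    # Hypothetical files: the server's own key and certificate, plus your
    # private CA's root certificate, which is the only issuer whose client
    # certificates will be accepted.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
    ctx.load_verify_locations(cafile="internal-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED   # no acceptable client certificate, no connection

    httpd = http.server.HTTPServer(("", 8443), http.server.SimpleHTTPRequestHandler)
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()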
2010-12-28
A modest proposal for fixing your bug tracker
Everyone knows that bug trackers are where bug reports go to die, at least if your project is at all large or popular. This causes various problems, all of which come down to a single root cause: your bug tracker is lying to people about the true status of nominally active bug reports.
So here is a modest proposal (somewhat in the Swiftian sense) for how to fix your bug tracker. Simply introduce and automate the following policies:
- close or expire bugs that have received no action in N months, say 4 months; such bugs can be re-opened by new activity. The assigned owner of the bug (but not the submitter) can flag such bugs as 'do not auto-close'.
- close bugs that are still open M months after they've been submitted, say a year and a half (18 months) or after a release or two. Such bugs cannot be re-opened at all, even by the assigned owner. If the assigned owner wants to maintain a low priority 'would be nice' list, they should do so somewhere outside of your bug tracking system.
- close bugs as 'neglected' if they have no activity by anyone but the submitter after a short period of time, say 1 month. Such bugs can only be re-opened by activity from the assigned owner or a maintainer, not by further activity from the original submitter.
(You can pick the actual numbers for the various times by mining your bug database to find how many bugs are actually solved as a function of time and the various sorts of activity. Note that 'solved' is not the same as 'marked as resolved'.)
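For illustration, here is a minimal sketch of what the automated sweep might look like; the Bug record and its fields are hypothetical stand-ins for whatever your bug tracker actually stores, and the time limits are the example numbers from above.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    NO_ACTION_LIMIT = timedelta(days=4 * 30)      # roughly 4 months
    ABSOLUTE_AGE_LIMIT = timedelta(days=18 * 30)  # roughly 18 months
    NEGLECT_LIMIT = timedelta(days=30)            # roughly 1 month

    @dataclass
    class Bug:
        submitted: datetime
        last_activity: datetime
        # None if no one but the submitter has ever touched the bug.
        last_non_submitter_activity: Optional[datetime]
        no_auto_close: bool = False   # settable only by the assigned owner
        status: str = "open"

    def sweep(bug: Bug, now: datetime) -> None:
        """Apply the three expiry policies to one open bug."""
        if bug.status != "open":
            return
        # Absolute age limit: closed for good, no re-opening.
        if now - bug.submitted > ABSOLUTE_AGE_LIMIT:
            bug.status = "closed-too-old"
        # No action at all for too long (the assigned owner can exempt a bug).
        elif now - bug.last_activity > NO_ACTION_LIMIT and not bug.no_auto_close:
            bug.status = "expired"           # re-opened by any new activity
        # No one but the submitter has ever responded.
        elif (bug.last_non_submitter_activity is None
              and now - bug.submitted > NEGLECT_LIMIT):
            bug.status = "closed-neglected"  # only the owner or a maintainer re-opens

You would run something like this from cron over every open bug, with the re-opening rules enforced wherever your tracker handles state changes.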
The overall goal with this modest proposal is to have the open bugs in your bug tracker reflect issues that are likely to be worked on, instead of mostly being a graveyard of dead issues that no one will ever pay attention to regardless of what the bug's nominal state is. Thus, we expire bugs that are de facto being ignored as shown by either lack of suitable activity or simply absolute age. You can argue about absolute age, but my strong feeling is that the bug tracker that users submit to is not the right place for your wishlist.
Closing bugs that your users have submitted is going to irritate them, but it has the great virtue of being honest and you can put apologetic text in the closing message that they get. Users are not stupid and will soon work out that their bug reports are neglected and dead regardless of what the bug's official status is, and pretending otherwise is not a good idea. No one likes being lied to about what is really going on.
And after the dust settles, your maintainers will be able to actually use your bug tracker's searches to find things to work on, instead of being confronted with a wall of 'open' bugs that are anything but.
2010-12-27
Users don't care about security
Here is something that I have come to slowly believe about computer security:
Users don't care about security.
This goes beyond security being a pain. Security is simply not interesting to most people; it has nothing to do with what they actually want to do with their computers (or anything else). Instead security is simply a deadweight overhead, something they do because they have to. Well, because they have been scared into doing so.
One consequence of this is that people are not interested in educating themselves about security issues. You can give them everything that they theoretically need to make the near-mythical 'well informed decision' about something, and they won't pay attention because they don't care. You can make it really short, you can agonize over the wording to make it completely clear, and they still won't pay attention because they still don't care.
(This underlies some of my opinions about asking users questions about security stuff, among other things.)
There are exceptions. Some people are genuinely interested. Some people are obsessive. Some people know (or believe) that they operate in high-threat or high-consequence environments, with very high risks. But the exceptions are just that, and they are not the majority.
The corollary is that truly effective security is only achieved when users don't have to care about security in order to make it work. Only then will your system's security avoid being subverted because your users don't care (enough) about security to do things 'right'.
(Well, and because people make mistakes, and because they don't know enough to make sensible choices, and because you are bombarding them with false positives, and many other problems. But lack of caring is a fundamental issue even if you could magically solve all the other ones.)
As with many similar issues, this is often hard for security researchers or even well-educated computer users (such as sysadmins) to see and understand. Pretty much axiomatically, security researchers care a lot about security, and dedicated computer users have often soaked up enough information to either care or be scared. It is hard for us to take a step back, look at it from a less immersed perspective, and realize that yes, it's uninteresting, just like a lot of other things we care about.
2010-12-24
The jurisdiction problem with making SSL CAs liable for things
Suppose that you want to fix the SSL CA racket, so that SSL CAs actually have a motive to do a good job (whatever that means). One vaguely popular approach is to align their economic interests through the traditional tool of liability, so you pass laws that make SSL CAs liable if they issue bad certificates. However, as I've mentioned before, I don't think that this will actually work.
The core problem that I see is jurisdiction, in two forms. The first and most obvious one is your government's jurisdiction over the CAs, since commercial CAs are located all around the world. Even if you persuade your local legal system to allow you to assert jurisdiction over a foreign company on the grounds that it accepts money from your citizens, you're left with the issue of enforcing a court judgement against a company that may well have no assets in your country.
This is important, because successfully imposing liability on all SSL CA vendors is vital. Making an SSL CA liable for things drives up its costs, which means that it's going to have to increase its prices. Given that SSL certificates are a commodity, any non-liable SSL CAs can and will undercut these higher prices, with the net effect of driving the CAs in your jurisdiction out of business.
The second part is jurisdiction between the SSL CA vendor and the person wanting a certificate. If something goes wrong and the SSL CA issues a bad certificate, it's quite possible that they have been sold a bill of goods by the certificate purchaser. With the CA on the hook for money via liability, they will clearly want to recover it by turning around and suing the purchaser. If the purchaser is not within the same jurisdiction as the CA, well, the CA now has a problem; the further 'away' the purchaser, the larger the CA's problem.
Even apart from any practical difficulties, making SSL CAs liable for validating the purchaser's identification is likely to result in SSL CAs refusing to sell certificates to foreigners. This will cut down SSL CA competition, often drastically, plus there are large areas of the world that do not have an SSL CA in their country at all.
(The practical difficulties of verifying the identity of someone in another jurisdiction are themselves non-trivial, especially if you assume that some of the would-be certificate purchasers are criminals, willing to lie to you and forge what look like official documents.)
2010-12-15
Always remember that people make mistakes
One very important thing to remember when trying to design practical security systems is that people make mistakes. Always. Even under the best of circumstances and with the best of intentions, sooner or later someone will accidentally do something wrong.
If your security system breaks explosively when people make mistakes, your system is wrong in practice. Regardless of how mathematically pure it is, you have not designed something with real security. Real security needs to cope with things going wrong and people making mistakes, because that's what actually happens.
(There are all sorts of mitigation and coping strategies, depending on what the overall design goals are for your security system.)
You cannot fix this fact. You cannot exhort users to not make mistakes; it doesn't work. You cannot threaten users to get them to not make mistakes; it doesn't work, you can't make it work, and the side effects of trying this are extremely unpleasant. You can't even make it so strongly in people's self-interest to not make mistakes that they won't make mistakes; it still doesn't work. People just make mistakes.
Perhaps you're convinced that your system and environment is an exception. If so, please consider aviation's 'controlled flight into terrain', which is the dry technical term for 'a highly trained pilot with their life on the line spaced out and flew their plane into the ground'. Pilots kill themselves (and other people) in CFIT accidents every year. This happens in basically the best situation possible; commercial pilots are highly trained, they've got pretty much the best motivation possible to not do this, and there are huge support structures and millions of dollars invested in avoiding these accidents. Given that commercial pilots still fly planes into the ground, your system is not going to do better.
PS: obviously this applies to more than just security systems. It's just that security systems are the most common place for people to appeal to shiningly perfect math and dismiss actual reality as an annoying inconvenience. By now, most other computing subfields are willing to acknowledge actual human behavior and design accordingly.
Sidebar: how many mistakes is too many
It's sensible to say that you can't cope with too many mistakes at once, although ideally you will have some modeling to assess how likely this is. Please do not make this merely some handwaving math about low percentages multiplied together; for a start, mistakes are not necessarily independent events.
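(As a toy illustration with invented numbers: if two safeguards each fail 1% of the time and the failures really are independent, they fail together only 0.01% of the time. But if the second failure happens half the time once the first has, say because the same tired, rushed person is doing both checks, they fail together 0.5% of the time, fifty times the 'independent' estimate.)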
2010-12-03
Asking users questions never increases security
Here is something that I've more or less written about before, but I want to reinforce by saying explicitly:
Asking users questions never increases security.
Never ever. Really.
(See SecurityChoiceProblem for a discussion of why.)
What this means is simple. Every time you design a system where part of the design is 'if something questionable happens, we will ask the user if they approve it', assume that at least half of your users will make the wrong choice. Ask yourself what this does to the security of your system, and then ask yourself if the question is actually doing any good or if the real design purpose is so that you can say with a straight face 'we tried to solve this problem, but the stupid users are screwing it up; it's their fault, not ours'.
(This answer is wrong. Twice.)
Then delete the question and either make the system work anyways or give up, admit that you are not delivering perfect security, and figure out how to do the best you can despite this.
In reality, of course, it will not be just half of your users who answer the question wrong, because users do not answer mysterious questions by picking randomly. Instead they pick whichever choice it is that lets them do whatever they were in the process of doing. Users almost always want to do what they're trying to do, even when it is actually a mistake.
(Sometimes, if you are very lucky, you can catch the user's attention long enough to persuade them that they're making a mistake. But this is very difficult for good reasons.)
If you are firmly convinced that what the user is trying to do is a mistake, tell them so very strongly, and tell them why. I am almost tempted to say don't give them any way to overrule you, but that's too strong. What you shouldn't do is present them with a yes/no question or some other dialog that implies that the two options are equally likely, because you've already decided that they aren't. 'Go on anyways' should be tucked away in tiny print, not given equal billing with 'get me out of here'.
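(To make 'tucked away in tiny print' concrete for a command-line tool, here is a minimal sketch of the idea; the function and its wording are invented for illustration. The safe path is the default, and the override takes deliberate typing rather than being one half of a symmetric yes/no prompt.)

    def confirm_dangerous_action(reason: str) -> bool:
        """Stop by default; make the override take deliberate effort."""
        print("This looks like a mistake:", reason)
        print("Stopping here.")
        print("(If you are certain, type 'go on anyways' in full to override.)")
        return input("> ").strip().lower() == "go on anyways"

    if confirm_dangerous_action("the host's certificate has changed"):
        print("override accepted; proceeding")
    else:
        print("not proceeding")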
(If you can't be almost certain that the user is making a mistake, see above. Find some way to not ask the question at all.)
(I've written about this issue before in SecurityChoiceProblem, but there I was more focused on configuration and setup questions instead of questions that you want to ask the user on the fly during normal operation.)