2009-07-22
Thinking like a security paranoid: an example
There's been a bunch of commotion lately over OpenSSH and how perhaps there's a 0-day exploit in an older version of it, and so on. Given this, clearly the thing to do is upgrade to a current version just in case, right?
If you think this, you're not thinking like a security paranoid. Allow me to illustrate.
Imagine that you are an attacker. You've found a vulnerability in the very latest version of OpenSSH, and you want to exploit it. However, the problem is that sysadmins are lackadaisical about updating things, especially things like OpenSSH, so there aren't many people running the new version yet. Now, you could wait for the vulnerable version to slowly spread around, but the longer you wait the greater the chance that an OpenSSH developer will spot their mistake and fix the problem. Besides, you're impatient.
(You can see where this is going.)
So you go out and make some noise to stir up doubts about the older versions, to get sysadmins and distributions thinking 'we'd better update, just in case'. Fear of a security vulnerability makes a great driver of updates, and the more publicity the better. By updating, all of these people play into your hands: they're installing the version that's vulnerable to your exploit, and better yet, they're probably doing it without serious inspection because they feel it's semi-urgent (which lowers the chance that anyone else will spot the vulnerability, especially since many of the people who do such inspections will be busy looking over old versions just in case).
Now, this is a hypothetical example; I don't particularly believe that it's what is going on with the recent OpenSSH 0-day claims. But it makes a good illustration of how security people have to think; every time something peculiar happens, you look at it and ask yourself 'who benefits? If I were evil, how could I benefit from this and why would I be doing it?'
(And I think it probably also makes a good example of how unnatural it is to think like a security paranoid. If you found this example totally over the top, well, you're normal, and that's the gap between normality and serious security.)
2009-07-19
The importance of making an issue visible
One of the things that's started to help the program energy efficiency issue on Linux is the development and popularization of programs like iotop and powertop. For the first time, people could conveniently get a simple overview of what was going on with IO and power on their system and, not surprisingly, people reacted to what they saw.
This provides a handy illustration of the importance of making an issue or a behavior visible to people, both to users and especially to programmers. By and large, if things run well enough (or what people have become habituated to think of as 'well enough'), a developer is not going to go to special effort to instrument their program for metrics like wakeup frequency or the amount of IO done (it would be a mis-optimization). They may not even be conscious of the issue at all. And when people don't think much about an issue, they can't take steps to make things better, even simple obvious steps.
By making the issues visible, these tools both gave developers an easy way to see how their program was doing and worked to make developers aware of the issue in the first place. And by making things visible to users too, the tools increased the pressure on developers to get with the program.
(They may also help users make more specific problem reports; there's a lot of difference between 'my laptop battery doesn't last very long any more' and 'powertop says the following three programs are eating lots of battery power'.)
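As a concrete illustration of how little it can take to get this kind of visibility, here is a minimal sketch (mine, not anything taken from iotop or powertop themselves) that reads the per-process IO counters Linux exposes in /proc/<pid>/io; it assumes a Linux kernel with per-task IO accounting enabled, and reading another user's process generally requires privileges.

    # A minimal sketch: read the kernel's per-process IO accounting from
    # /proc/<pid>/io and print it. This is the same general sort of data
    # that IO visibility tools present, shown here only as an illustration.
    import os
    import sys

    def read_io_stats(pid):
        """Return the kernel's IO counters for a process as a dict
        (fields like rchar, wchar, read_bytes, write_bytes)."""
        stats = {}
        with open("/proc/%d/io" % pid) as f:
            for line in f:
                name, value = line.split(":")
                stats[name.strip()] = int(value)
        return stats

    if __name__ == "__main__":
        # Look at ourselves if no process ID is given on the command line.
        pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
        for name, value in sorted(read_io_stats(pid).items()):
            print("%-22s %d" % (name, value))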
In a sense, this is a corollary of getting the costs right; you can't get the costs right until you can see them (and until you know that they exist at all).
(I'm sure that this is a well-trodden observation. I just feel like writing it down, if only to remind myself about yet another benefit of getting statistics.)
2009-07-18
Why NFS filehandles fail as access capabilities
I think it's clear that Sun initially intended NFS filehandles to serve as capabilities, i.e. opaque tokens that could be used to grant access to useful things like your files. Unfortunately, it soon became clear that this didn't work and that NFS filehandles actually make pretty terrible access capabilities.
(I may be mis-using the term 'access capabilities' here. As I learned it, it refers to opaque (to the user) tokens or identifiers that you can present to services to get access to some resource without having to go through some sort of more elaborate access authentication. Web session identifiers are one example of access capabilities.)
NFS filehandles failed as the NFS security mechanism because they lack at least two of the minimum properties you want in access capabilities. First, NFS filehandles are in practice guessable. This is very bad, because capabilities are only even theoretically secure if they cannot be created by the user, so that possession of a capability means that it was given to you by the server.
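To illustrate the guessability problem concretely, here is a sketch with a deliberately made-up filehandle layout (not any particular server's real encoding); the general pattern was that servers packed a filesystem identifier, an inode number, and a generation number into the nominally opaque, fixed-size handle.

    # Purely hypothetical filehandle construction, for illustration only.
    import struct

    def make_filehandle(fsid, inode, generation):
        """Pack a made-up filehandle layout (filesystem id, inode number,
        inode generation), padded to the fixed 32-byte NFSv2 handle size."""
        return struct.pack("!III", fsid, inode, generation).ljust(32, b"\0")

    # Guessing the handle for a filesystem's root directory then becomes a
    # small search: filesystem ids are small integers, the root inode is
    # conventionally inode 2, and generation numbers were often a counter or
    # a timestamp rather than anything random.
    candidate_handles = [make_filehandle(fsid, 2, gen)
                         for fsid in range(16)
                         for gen in range(10000)]

The point is not the exact fields but that nothing in such a handle is secret or unpredictable, so possession of a handle proves nothing about having been given it by the server.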
Second, NFS filehandles are in practice not revocable, especially not easily, because a significant part of the filehandle is a stable identifier for the file; to revoke a filehandle, you mostly have to create a different file and remove the old one, which can be very difficult if the file is, for example, the root directory of a filesystem. Lack of revocation means that once an NFS filehandle is compromised or guessed, you can't fix the situation short of quite drastic action.
(Revocation is important in real security systems because sooner or later, something always goes wrong and an access capability leaks into the wrong hands. For good security, you need to be able to fix that without blowing up the world.)
2009-07-05
The coming Internet identity problem
Right now, there are a fair number of websites that assume that everyone can have their own IP address and thus that two connections from the same IP address should be considered to be the same person for things like load limiting, anti-cheating systems, and so on. This assumption is already false in the corporate world and has been for some time, but it has survived reasonably well for 'consumer'-oriented stuff, because it has tended to be true of general user ISPs.
(At least it has been true of general user ISPs in North America; I understand that it's not always true elsewhere, which periodically causes people problems.)
Enter the much-heralded coming scarcity of IPv4 addresses. If you can't get enough addresses to cover all of your customers, you are going to have to NAT them somehow, and this necessarily destroys the 'one person per IP address' assumption that these websites have been relying on. This leaves these websites with an Internet identity problem. Unfortunately, it's not an easy one.
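To make the assumption (and how NAT breaks it) concrete, here is a minimal sketch of the sort of naive per-IP rate limiting involved; this is my illustration, not any particular site's code. Once an ISP puts many customers behind one NAT'd address, they all share a single bucket and get throttled as though they were one person.

    # A naive per-IP rate limiter embodying 'one IP address = one person'
    # (purely illustrative).
    import time

    WINDOW = 60.0      # seconds
    MAX_REQUESTS = 30  # allowed per "person" (really: per IP address) per window

    _recent = {}       # IP address -> timestamps of its recent requests

    def allow_request(ip):
        """Return True if this IP address is still under its request limit."""
        now = time.time()
        timestamps = [t for t in _recent.get(ip, []) if now - t < WINDOW]
        if len(timestamps) >= MAX_REQUESTS:
            _recent[ip] = timestamps
            return False
        timestamps.append(now)
        _recent[ip] = timestamps
        return True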
The obvious (but not entirely sufficient) approach is to use payment details; one credit card means one person. Apart from the other problems, this doesn't work for free sites, and many of the places using this identity assumption are free (at least initially, to build interest). And there's nothing else you can use that the user can't easily have more than one of at once.
The other aspect of this problem is that it isn't going to happen all at once, and it isn't necessarily going to be obvious to you. It's not as if all ISPs are going to move towards NATs at the same time, or as if each will only use a single IP address for their NAT'ing (and even then, you will only notice the problem if you have two customers behind that address, and perhaps only if they are active at the same time). So if ISP NAT'ing does happen, you can expect a period of time when each website's service gets worse and more annoying for you until they realize what's happening and work out a solution.
(And for the obvious reasons, this problem affects small websites more than it affects big websites. To paraphrase an old aphorism about banks, if an ISP breaks a small website, it's the website's problem, but if an ISP breaks a big Internet application (consider World of Warcraft, for example), it's the ISP's problem.)