Programming blindness and security
Here is a thought and a theory that have recently struck me.
One reason that writing secure programs is hard is because programmers have a kind of blindness about our own code. Instead of seeing the code as it really is, we tend to see the code as we imagine it in our heads, and as a result we see what we wrote it to do, not what it actually does.
(This is not usually a literal blindness, although we can do that too, but a disconnect between what the text says and what we 'know' that it does.)
In ordinary code, this just makes your bugs harder to find (and leaves you wondering how you could have possibly missed such an obvious mistake once you find it). In security sensitive code it leads to holes that you can't see because of course you didn't intend to create holes. If you wrote code to introspect operations on the fly by loading a file from a directory, you don't see that it could load a file from anywhere with judicious use of ../ in the filename, because that's not what you wrote the code to do. Of course the code doesn't let you load arbitrary files, because you didn't write the code to load arbitrary files; you wrote it to load files from a directory.
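To make the ../ example concrete, here is a minimal sketch in Python of a hypothetical loader for files in a directory (the directory name and function names are invented for illustration). The naive version is exactly "load a file from a directory" as the author imagined it; the second version adds the check the author never thought to write, because escaping the directory wasn't what the code was "for":

```python
import os.path

BASE = "/srv/app/plugins"  # hypothetical directory we intend to load from


def naive_path(name):
    # What we wrote the code to do: build a path under BASE.
    # Nothing here stops 'name' from climbing out with ../ components.
    return os.path.normpath(os.path.join(BASE, name))


def safe_path(name):
    # Resolve the combined path, then refuse anything whose real
    # location falls outside BASE.
    full = os.path.normpath(os.path.join(BASE, name))
    if os.path.commonpath([BASE, full]) != BASE:
        raise ValueError("path escapes base directory: %r" % name)
    return full
```

With `naive_path("../../../etc/passwd")`, the ../ components cancel out the directory prefix and the result points at /etc/passwd, even though we "know" the code only loads files from BASE; `safe_path` rejects the same name.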
Effective debugging in general requires being able to escape this blindness, but at least you have an initial shove in that you know that there's a bug in your code. Checking for security holes is even harder because there is nothing obviously wrong, nothing to kick you out of your mindset of blindness and get you to take a second look.
This leads to obvious but interesting (to me) thoughts about the effectiveness of things like pair programming, code reviews, and security audits. From the angle of this theory, these work in part because they expose your code to people who have less of a chance of being caught up in the blindness.
(I suspect that this is not an original thought.)