The problem with security bugs is that they aren't bugs
Here is a thesis that I have been mulling over lately:
One reason that security bugs are hard is that they aren't bugs. With a regular bug, your code doesn't do something that you want it to. With a security bug, the end result is that your code does extra things, things that let an attacker in. Ergo, security bugs are actually features that you didn't intend your code to have.
(Often security bugs are created by low-level mistakes that are regular bugs. I'm taking the high level view here based on how these bugs make the code behave.)
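As an illustration of this view, consider a classic SQL injection (a hypothetical example of mine, not one from the original post). The function below does exactly what its author wanted for normal input, so ordinary tests pass; the security bug is an extra, unintended feature that only appears when an attacker supplies input no normal test would write.

```python
import sqlite3

def find_user(db, name):
    # Intended feature: look up a user's id by name.
    # Unintended extra feature: because the input is spliced into the
    # SQL string, attacker-controlled input can add its own query logic.
    return db.execute(
        "SELECT id FROM users WHERE name = '%s'" % name
    ).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# Ordinary testing exercises the intended feature, and it works:
print(find_user(db, "alice"))        # [(1,)]

# The extra feature surfaces only with hostile input: the WHERE clause
# becomes  name = '' OR '1'='1'  and every user is returned.
print(find_user(db, "' OR '1'='1"))  # [(1,), (2,)]
```

A test suite that checks `find_user(db, "alice") == [(1,)]` passes forever while the extra feature sits there, which is the point: the code does everything it should, plus something it shouldn't.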
This makes security bugs much harder to find, especially in ordinary development. It's easy to notice when your code doesn't do something that it should (or does it incorrectly), but it's much harder to notice when it does extra things, and even harder to spot that it merely could do extra things (especially when we can be blind to the possibility). As a result, the existence of extra features, security bugs included, rarely surfaces during ordinary testing.
(This is a magnified version of how tests are not very good at proving negatives. Proving that your code doesn't have extra features is even harder than proving that it doesn't have any bugs.)
The immediate, obvious corollary is that most normal techniques for winding up with bug-free code are probably ineffective at making sure that you don't have security bugs. You're likely to need an entirely different approach, which means directly addressing security during development instead of assuming that your normal development process will take care of security bugs too.
(Your normal development process might also catch security bugs, but it depends a lot on what that process is. I suspect that things like code reviews are much more likely to do so than, say, TDD.)