Are security bugs always code bugs?
I'll be generous and ask the slightly broader question: are security 'bugs' always caused by code bugs? I believe that the answer clearly has to be no; while there's a decent argument that failure to properly escape input is a code bug, I think it stretches the term beyond all non-circular meaning to say that using the wrong cryptography is a code bug.
I think that there are at least the following sources of security bugs:
- outright code bugs, the traditional off-by-one errors and the like.
- missing or unimplemented features (failure to properly escape input,
bad input validation, missing security checks, and so on).
- design mistakes, such as using the wrong cryptography.
- unforeseen interactions between existing features (as seen in quite a lot of Linux kernel security bugs, which often have the abstract form 'if you use X in a clever way, you can bypass apparently unrelated thing Y').
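To make the second category concrete, here is a minimal sketch (in Python, with hypothetical `greet_*` function names) of a failure to escape input. The vulnerable version is not mis-implemented in the off-by-one sense; the escaping step was simply never designed in.

```python
import html

def greet_unescaped(name):
    # Vulnerable: user input is interpolated straight into HTML,
    # so a name like '<script>...' becomes live markup. There is no
    # coding error here in the narrow sense; the escaping feature
    # simply doesn't exist.
    return "<p>Hello, " + name + "</p>"

def greet_escaped(name):
    # Fixed: escape the input before it reaches the HTML context.
    return "<p>Hello, " + html.escape(name) + "</p>"

payload = '<script>alert(1)</script>'
print(greet_unescaped(payload))  # markup passes through intact
print(greet_escaped(payload))    # angle brackets are neutralized
```

Both versions do exactly what their code says; the difference is a missing step, not a wrong one, which is why this category sits uneasily under 'code bug'.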
With these four in hand, I can go back to my argument about why security bugs aren't always code bugs.
From a suitable height, all four of these sources are defects in the code. If you consider 'bug' to be synonymous with 'defect in the code', then all four are bugs, but I think that stretches the term too far; by that definition, inadequate performance is a bug, along with pretty much anything else you don't like about the code. In practice, programmers have a pragmatic definition of bugs, based on the general practices we follow to get rid of them, and at least the last two of these four sources are not 'bugs' under those practices; they are qualitatively different from even very hard code bugs.
(They are mistakes, they are defects, but they are not bugs; the program is working exactly as designed, and the design is not bad or incomplete, it merely has subtle flaws.)