The problem with security bugs is that they aren't bugs

October 5, 2009

Here is a thesis that I have been mulling over lately:

One reason that security bugs are hard is that they aren't bugs. In regular bugs, your code doesn't do things that you want it to. In security bugs, the end result is that your code does extra things, things that let an attacker in. Ergo, security bugs are actually features that you didn't intend your code to have.

(Often security bugs are created by low-level mistakes that are regular bugs. I'm taking the high level view here based on how these bugs make the code behave.)

This makes security bugs much harder to find, especially during ordinary development. It's easy to notice when your code doesn't do something that it should (or does it incorrectly), but it's much harder to notice when it does extra things, and harder still to spot that it merely could do extra things (especially since we tend to be blind to that possibility). As a result, the existence of extra features, security bugs included, rarely surfaces during ordinary testing.

(This is a magnified version of how tests are not very good at proving negatives. Proving that your code doesn't have extra features is even harder than proving that it doesn't have any bugs.)

The immediate, obvious corollary is that most normal techniques for winding up with bug-free code are probably ineffective at making sure that you don't have security bugs. You're likely to need an entirely different approach, which means directly addressing security during development instead of assuming that your normal development process will take care of security bugs too.

(Your normal development process might also catch security bugs, but it depends a lot on what that process is. I suspect that things like code reviews are much more likely to do so than, say, TDD.)

Comments on this page:

From Aristotle Pagaltzis at 2009-10-05 05:26:27:

I'm going to have to disagree with this. Most security bugs can be used as unintended features only by carefully constructed triggers. SQL injection vulnerabilities are ordinary bugs: quoting characters in user input break the program. Buffer overflows are ordinary bugs: over-long user input breaks the program. Integer overflows are ordinary bugs: certain inputs or execution states break the program. In all these cases, the initially undefined behaviour of the program can be determined and then purposefully exploited by an attacker, but the basic problem in the code is a bog-standard bug.
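The SQL injection case can be sketched concretely. This is a minimal illustration (Python with its standard sqlite3 module; the `users` table and the `find_user_unsafe` helper are invented for the example): a stray quoting character is an ordinary bug that breaks the program visibly, while a carefully constructed trigger turns the same flaw into an unintended feature.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Naive string interpolation: the classic injectable query.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

find_user_unsafe("alice")        # ordinary input works fine
try:
    # A quoting character simply malforms the statement: a bog-standard
    # bug that surfaces as an error during normal use or testing.
    find_user_unsafe("O'Brien")
except sqlite3.OperationalError:
    pass
# A carefully constructed trigger exploits the same flaw: the WHERE
# clause becomes «name = '' OR '1'='1'» and matches every row.
find_user_unsafe("' OR '1'='1")
```

The fix for both the bug and the vulnerability is the same (placeholder parameters instead of string interpolation), which is part of why the two views of the flaw are so entangled.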

Now, there are cases where the program logic itself performs checks, often pertaining to authorization in some form. If that logic is broken, then you do indeed usually have security holes that are unintended features rather than ordinary bugs.

But it's not at all the case that all vulnerabilities are of this nature; rather, I would wager, only a minority are. Exploitable bugs seem to constitute the bulk of vulnerabilities, and are probably also more commonly exploited (even correcting for their relative abundance), simply because they fall neatly into broad categories with known exploit strategies. Unintended features, in contrast, tend to be one-offs with one-off exploit strategies, and are thus much less appealing as targets.

Aristotle Pagaltzis

From Bobby Tables at 2009-10-05 13:16:55:

I concur with Aristotle. My working definitions are:

A software bug: a (mis-)construction of code with unintended consequences.
A security bug: a software bug with security implications.

These are just the definitions that I endorse and use on a day-to-day basis. You can have different ones if you want.

A naive programmer might imagine that a full name field should be 25 characters wide and design his code with a fixed-length buffer. To an end user of Hellenic descent named Alexander Papanicolopolous, this is a software bug. To a hacker, this is a stack buffer overflow and application-level access to the database.
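The mismatch is easy to state concretely. Here is a minimal sketch (Python, where strings can't overflow, so the last line only shows what a safe implementation must do where C's `strcpy()` into `char buf[25]` would instead write past the end of the buffer; `FIELD_WIDTH` is the hypothetical field size from the example above):

```python
FIELD_WIDTH = 25                      # the naive fixed-size name field
name = "Alexander Papanicolopolous"   # 26 characters: one too many

print(len(name))                      # 26, which exceeds FIELD_WIDTH
# In C, strcpy(buf, name) into char buf[25] would write past the buffer,
# corrupting adjacent stack memory; carefully chosen "names" can then
# redirect control flow. A safe implementation must check the length or
# truncate explicitly:
stored = name[:FIELD_WIDTH]           # truncated to 25 characters
```

To the user this shows up as a truncated name (a regular bug); to an attacker, the unchecked C version is an entry point.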

Love the blog. Thank you for the good work.


Bobby Tables

By cks at 2009-10-11 02:07:41:

My reaction got sufficiently long that I put it into an entry, SecurityBugProblemII.

The short version is: I agree that security bugs are often ultimately caused by things that are nominally regular bugs, but I feel that security bugs (and their root causes) are drastically different because our normal methods of finding and getting rid of bugs manifestly don't really work on them. A good part of this is that security bugs almost never manifest themselves under testing as program misbehavior, so you usually can't find them by looking with tests the way you can find regular bugs.

(Once you know where a security bug is, you can write a test that your code will fail. But finding security bugs with tests is very hard, for the same reason that finding unintended features (like easter eggs in games) is hard.)

From Bobby Tables at 2009-10-13 09:11:59:

Mr. Siebenmann:

If you'll allow me, I'll add one last counterargument for your consideration. Certainly, we rid our programs of most of their bugs in the ways you mention:

To a large extent, we understand how to write bug-free programs and code: you use and test the program until it doesn't crash or break things, produces the correct results, and doesn't do anything surprising (generate extra messages, pop up stray windows, and so on).

And I agree that there are other, harder to find bugs which somehow creep into otherwise perfect programs.

I do not think that the Security-ness of these harder bugs is an essential attribute. Only the hardness.

For a counterexample: in high school I had a TI-82 graphing calculator which returned 4 for the following expression when X was set to 5:

iPart(√(X²))

That's the "integer part of the square root of X squared." The iPart function is basically a floor function: it returns the integer part of a number, so 6.25 returns 6. The correct return value is 5, of course, and for just about any integer value of X except 5 this function would return X. 25 is not such a fantastically high number that it should kick off a rounding error. Texas Instruments released a calculator with this bug because it was hard to find. (Evidently, I wasn't the only person to find it, because both TI-82s I own now calculate that function correctly.)

This is a hard bug. Whatever I was doing when I came up with that function was probably grossly inefficient, so it makes sense that TI didn't find it. My bad; I was in high school. It is a hard bug, yet it has no security implications.
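The likely mechanism is a floating-point square root that comes out a hair under 5, which iPart then truncates to 4. Here is a sketch of that failure mode (Python; `lowprec_sqrt` is a hypothetical stand-in for an imprecise calculator sqrt, not what TI's firmware actually did):

```python
import math

def lowprec_sqrt(x):
    # Hypothetical stand-in: a sqrt whose result comes out one ulp
    # low, nudged down deliberately to mimic the calculator's error.
    return math.nextafter(math.sqrt(x), 0.0)

def ipart_sqrt_sq(x):
    # iPart(sqrt(x^2)); iPart truncates, which for the positive
    # values here behaves like floor.
    return math.floor(lowprec_sqrt(x * x))

print(ipart_sqrt_sq(5))           # 4: sqrt(25) came back just under 5
print(math.floor(math.sqrt(25)))  # 5 with an exact sqrt
```

The general lesson is that any truncating function sitting right at an integer boundary amplifies an error of one ulp into an error of a whole unit.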

Bobby Tables

By cks at 2009-11-03 00:15:51:

Department of slow replies:

I do not think that the Security-ness of these harder bugs is an essential attribute. Only the hardness.

I think that the harder security bugs are qualitatively different from hard regular bugs, because hard security bugs don't come from mistakes in the code at all; they come from things like designing the wrong thing. Security bugs that are straightforward code bugs are the simple case.

The general techniques used to find even hard code bugs are applicable to finding security bugs, but using them to find code bugs won't also find security bugs unless you specifically look for them (and have the necessary knowledge and experience).

Written on 05 October 2009.
