One consequence of mathematical security thinking
This realization struck me after writing yesterday's entry:
One consequence of the security paranoia way of thinking (or if you want to use the polite term, of the mathematically correct security way of thinking) is that false negatives are considered unacceptable. Either they are flaws in the implementation or they are flaws in the theory underlying the security system, and in either case they must be eliminated; it is incorrect to allow one to remain.
(By false negatives here I mean situations where the security system fails to detect something that is not secure, within the bounds of the things that the system is supposed to detect and deal with.)
The problem with this is that in practice, an unwillingness to accept false negatives almost always drives up the false positive rate, with predictable bad consequences in worlds inhabited by ordinary people (as opposed to Alice and Bob).
(In theory you can embed mathematical security perfection inside heuristic systems that sometimes let questionable things past in order to reduce the false positive rate. In practice, should you do this the mathematical security people generally pitch a fit about the 'security holes' in your system. This reminds me of attitudes on refactoring dynamic languages.)
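To make the tradeoff concrete, here's a toy sketch (all the scores and thresholds are invented for illustration, not taken from any real system): a detector assigns each event a suspicion score and blocks everything at or above a threshold. Driving the false negative count to zero forces the threshold below the lowest-scoring attack, which sweeps in legitimate events.

```python
# Toy illustration of the false-negative / false-positive tradeoff.
# All scores are made up: 0.0 means "looks benign", 1.0 means "clearly bad".
legitimate = [0.05, 0.10, 0.20, 0.35, 0.40, 0.55, 0.60]
malicious = [0.45, 0.70, 0.85, 0.95]

def rates(threshold):
    """Return (false negatives, false positives) when blocking
    everything with a score at or above the threshold."""
    fn = sum(1 for s in malicious if s < threshold)    # bad events let past
    fp = sum(1 for s in legitimate if s >= threshold)  # good events blocked
    return fn, fp

# A lenient threshold misses one attack but never bothers users...
print(rates(0.65))  # (1, 0)

# ...while eliminating every false negative means going below the
# lowest malicious score, and now legitimate events get blocked too.
print(rates(0.45))  # (0, 2)
```

The mathematically correct position insists on the second threshold; the people whose legitimate events score 0.55 and 0.60 get to live with the consequences.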
Now, to be fair, I'll give you a devil's advocate view of this: the problem with allowing small holes in computer security is automation. Automation means that any vulnerability, no matter how small, that can be automatically exploited winds up creating a huge amount of leverage against a system; a tiny chance or a tiny amount of return multiplied by enough actions equals an explosion. This implies that computer security must be much more airtight than physical security, where attacks are bounded by relatively low physical limits, and therefore we must insist on no false negatives, no matter the costs.
(Spot the problem with this argument for a no-prize.)
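The leverage arithmetic in the devil's advocate argument can be sketched in a few lines (every number here is invented purely to show the multiplication, not to model any real attack):

```python
# Expected-value sketch of the automation argument: a per-attempt
# success chance too small to matter physically, multiplied by
# machine-scale attempt counts. All figures are hypothetical.
p_success = 1e-6         # a "one in a million" hole
value_per_success = 10.0 # payoff (in whatever units) per successful exploit

burglar_attempts = 100            # bounded by physical effort and risk
botnet_attempts = 1_000_000_000   # bounded by bandwidth, not effort

physical_payoff = burglar_attempts * p_success * value_per_success
automated_payoff = botnet_attempts * p_success * value_per_success

# The same tiny hole is negligible at physical scale and an
# explosion at automated scale.
print(f"physical:  {physical_payoff:.3f}")   # well under 1
print(f"automated: {automated_payoff:,.0f}") # thousands
```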