There is a balance between optimism and paranoia for compromised machines
There is an important proviso to the first principle of analyzing compromised machines: in practice, most attacks are not that good or that thorough. In real life, as opposed to in mathematically correct security advice, there is a tradeoff between your level of caution and the amount of work you have to do (either in analyzing a machine or in reinstalling it), and sometimes it is appropriate to take some risk in exchange for doing less work.
The truly paranoid will reinstall machines from scratch (and never mind the disruption) whenever there is any chance of system compromise, such as when you know that an account has been compromised. You do this because you make the most cautious assumptions possible: you assume that the attacker knows some way of getting root that is undetected and unpatched, you assume that they were sophisticated and successfully hid their traces, and so on. Even if you detect something, you can never assume that you have detected everything.
(At this level of paranoia, you should probably be using two-factor authentication with smartcards and randomly reinstalling machines every so often, just in case. I suspect that few people are this paranoid.)
In my world it is often appropriate to be more optimistic about our attackers' lack of competence (especially if we have detected the compromise in some obvious way, such as noticing a password cracker eating CPU on a login server). So I wind up doing things like simple system verification and then concluding that the absence of evidence is evidence of absence (to put it one way), despite the zeroth law.
(This is another of the fuzzy compromises necessary for real computer security.)
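As a concrete illustration of what "simple system verification" can mean, here is a hypothetical sketch (not any particular real tool; real-world equivalents are things like rpm -Va, debsums, or AIDE): compare files against a previously recorded manifest of checksums. Note the obvious caveat that any check run on the compromised machine itself can be defeated by a sufficiently sophisticated attacker, which is exactly the optimism being discussed here.

```python
# Hypothetical sketch of simple system verification: record SHA-256
# checksums of important files ahead of time, then later check which
# files have changed or disappeared. A rootkit that intercepts file
# reads will pass this check, so this only catches unsophisticated
# attackers; that is the tradeoff the entry is about.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(paths) -> dict:
    """Record checksums for a set of files (done before any compromise)."""
    return {str(p): sha256_of(Path(p)) for p in paths}

def verify(manifest: dict) -> list:
    """Return the files that are now missing or have changed contents."""
    suspect = []
    for name, digest in manifest.items():
        p = Path(name)
        if not p.exists() or sha256_of(p) != digest:
            suspect.append(name)
    return suspect
```

An empty result from verify() is the "absence of evidence" that you then, optimistically, treat as evidence of absence.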