
2021-11-22

Why it matters to us if exploits are available for security issues

In theory, a security issue is a security issue whether or not anyone has released a public exploit for it or exploits have been found in use in the wild. In practical system administration, it often does matter, especially for broad and potentially hard to mitigate issues like whether or not simultaneous multithreading (SMT) is secure (enough) on modern x86 CPUs. It's a truism that the more that exploits exist and are circulating, the more likely you are to be exploited, but there's a bit more to it than just that, especially in a university environment.

In general, the more that plug and play exploits exist for something, the greater the chance that low-skill people will use one against you casually. They might be running an exploit they found on the web, or it might be part of a broadly available canned toolkit they're using. Either way, the easier it is to get the exploit in some usable form, the more you're exposed to casual attackers who would never write their own exploit but who are perfectly happy to run someone else's to break into your system (or just to break it).

This is especially an issue in a university environment, where you have both people who might be tempted into running a canned exploit (even just to see what happens) and people who have had their accounts compromised so outsiders have ready access to your systems. The university's security perimeter is quite porous, and even at the best of times some of the threats are already inside.

As a practical matter, our risk of a security issue being exploited goes up significantly the moment an active exploit is public or is clearly circulating readily within the black-hat community. For a relevant example, if there were a public exploit that used simultaneous multithreading to extract SSH host keys, all of our generally accessible machines would have SMT turned off very rapidly, even if the exploit wasn't all that reliable or was time-consuming. We would have no choice.

(Many of our systems already have SMT disabled, mostly the older ones, but a number of the more modern ones don't for various reasons.)
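As a rough illustration of what turning SMT off involves on Linux, which exposes a runtime switch through sysfs on reasonably modern kernels, here is a minimal sketch; the script and its use of Python are purely illustrative rather than anything we actually run, and the change does not persist across reboots, so in practice you also want the nosmt kernel parameter or a firmware setting.

    #!/usr/bin/env python3
    # Minimal sketch: report, and optionally disable, SMT via the Linux
    # sysfs interface (present on kernels 4.19 and later). Disabling
    # needs root, and the change lasts only until the next reboot.
    import sys

    SMT_ACTIVE = "/sys/devices/system/cpu/smt/active"
    SMT_CONTROL = "/sys/devices/system/cpu/smt/control"

    def smt_active():
        with open(SMT_ACTIVE) as f:
            return f.read().strip() == "1"

    def disable_smt():
        # Writing "off" offlines all sibling hyperthreads.
        with open(SMT_CONTROL, "w") as f:
            f.write("off")

    if __name__ == "__main__":
        if not smt_active():
            print("SMT is already off (or not supported)")
        elif "--disable" in sys.argv:
            disable_smt()
            print("SMT disabled until reboot")
        else:
            print("SMT is active; rerun with --disable to turn it off")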

This is a calculated bet, of course. Waiting until we know that exploits are out there instead of immediately dealing with everything leaves us more vulnerable than we otherwise would be. We are hoping that the odds of being exploited are low enough to make this worthwhile when compared to the other costs we would incur by dealing with things right away. But security in practice is about tradeoffs and making such bets. You can never fix everything, and there are some things where the cost of fixing them is not worth it in your environment.

sysadmin/ExploitAvailabilityMatters written at 22:19:07
