Why it matters to us if exploits are available for security issues

November 22, 2021

In theory, a security issue is a security issue whether or not anyone has released a public exploit for it, or exploits for it have been found in use in the wild. In practical system administration, it often does matter, especially for broad and potentially hard-to-mitigate issues like whether simultaneous multithreading is secure (enough) on modern x86 CPUs. It's a truism that the more exploits exist and circulate, the more likely you are to be exploited, but there's a bit more to it than just that, especially in a university environment.

In general, the more plug-and-play exploits exist for something, the more likely it is that low-skill people can use the exploit against you casually. They might be running an exploit they found on the web, or it might be part of a broadly available canned toolkit they're using. Either way, the easier it is to get the exploit in some usable form, the more you're exposed to casual attackers who would never write their own exploit but who are perfectly happy to run someone else's to break into your system (or just to break it).

This is especially an issue in a university environment, where you have both people who might be tempted into running a canned exploit (even just to see what happens) and people who have had their accounts compromised so outsiders have ready access to your systems. The university's security perimeter is quite porous, and even at the best of times some of the threats are already inside.

As a practical matter, our risk of a security issue being exploited goes up significantly the moment that an active exploit is public or is clearly circulating readily within the black-hat community. For a relevant example, if there were a public exploit that used simultaneous multithreading to extract SSH host keys, all of our generally accessible machines would have SMT turned off very rapidly, even if the exploit wasn't all that reliable or was time-consuming. We would have no choice.

(Many of our systems already have SMT disabled, mostly the older ones, but a number of the more modern ones don't for various reasons.)
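(As an illustration of how little work the mitigation itself is, modern Linux kernels expose a runtime SMT control in sysfs. This is a minimal sketch, assuming a kernel recent enough to have the control file; it only checks the current state, with the privileged disable step left as a comment.)

```shell
#!/bin/sh
# Report the current SMT state via the standard Linux sysfs control
# file. On kernels without this interface the file is simply absent.
SMT_CTL=/sys/devices/system/cpu/smt/control

if [ -r "$SMT_CTL" ]; then
    # Typical values: on, off, forceoff, notsupported, notimplemented.
    echo "current SMT state: $(cat "$SMT_CTL")"
else
    echo "no runtime SMT control on this kernel"
fi

# Disabling takes effect immediately but requires root:
#   echo off > /sys/devices/system/cpu/smt/control
# To make it stick across reboots, boot with "nosmt" on the kernel
# command line instead.
```

The runtime toggle makes it practical to react quickly if an exploit does appear, then re-enable SMT later if the risk assessment changes.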

This is a calculated bet, of course. Waiting until we know that exploits are out there, instead of immediately dealing with everything, leaves us more vulnerable than we otherwise would be. We are hoping that the odds of being exploited are low enough to make this worthwhile when compared to the other costs we would incur by dealing with things right away. But security in practice is about tradeoffs and making such bets. You can never fix everything, and there are some things where the cost of fixing them is not worth it in your environment.


Comments on this page:

There are a couple more reasons I'm always looking for a PoC script for an exploit.

a) to see how easy it would be for a (low-skilled) person to find one and execute it. It also gives me insight into what the PoC script targets by default, e.g. reading the /etc/shadow file, running a reverse shell, or downloading and installing a crypto-mining client (rootkits are no longer popular these days!).

b) to run it before patching, apply the patch, and then run it again afterwards, i.e. to verify that the vulnerability has been successfully patched and the exploit no longer works.
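(The before/after loop in b) can be sketched as a small script. This is a toy sketch, not a real PoC: the `run_poc` function and the marker file stand in for the actual exploit and the vulnerable state, and the `rm` stands in for your real patching or mitigation step.)

```shell
#!/bin/sh
# Simulated vulnerable state: a marker file standing in for the flaw.
touch /tmp/vulnerable.marker

run_poc() {
    # A real PoC would attempt the exploit and exit 0 on success;
    # here "success" is just the marker file still existing.
    [ -e /tmp/vulnerable.marker ]
}

if run_poc; then
    echo "before patch: exploit succeeds (host is vulnerable)"
fi

# "Patching" here just removes the marker; in reality this would be
# your package update or mitigation step, followed by any needed
# service restarts.
rm -f /tmp/vulnerable.marker

if run_poc; then
    echo "after patch: exploit STILL succeeds - patch did not take"
else
    echo "after patch: exploit fails (vulnerability closed)"
fi
```

Running the exploit both before and after guards against the patch silently not applying (wrong package version, service not restarted, and so on), which is exactly the failure mode the comment is worried about.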

Written on 22 November 2021.

