The many problems with bad security patches

June 30, 2008

One might accuse me of getting overly worked up about bad security patches. Is it really such a big deal if a security patch has a flaw?

My answer is yes, because there are a number of bad consequences when security patches are untrustworthy:

  • it discourages people from installing them. As we've seen repeatedly, having more insecure systems around endangers everyone, whether they are on the Internet or behind your firewall.

  • a broken but 'secure' machine is not really an improvement over a functional but insecure machine. In both cases the overall system is not functional, assuming that you consider security to be part of the overall system's functionality.

    (Of course the devil is in the details, specifically what broke and what the security issue was, and also how important security is; in some environments being completely turned off is preferable to being insecure. I am assuming here that the breakage is in something relatively important.)

  • you can't use security patches to solve the security issue right now, because you have to put them through testing to see if they broke anything this time and, if so, what. At best you can use the release of a security patch as a signpost that your system really is vulnerable to some general issue and that you need to get working on some sort of fix.

    (Yes, yes, test everything. Wouldn't it be nice if you didn't have to? And in theory that is the promise of security patches; the only change they are supposed to introduce is a security fix, and thus they should be safe to apply under almost all circumstances.)

  • they increase the overhead of security in general, in both people's time and hardware needs. All else being equal, this overhead has to come from somewhere, in the form of useful work not getting done and machines not being used for useful things.

  • if we sysadmins believe vendors and rush to install what turn out to be bad patches, we lose credibility and thus our overall ability to influence people. This is bad because there are security things that people should listen to you about; you really don't want to be the sysadmin who cried wolf.

Collectively, this set of consequences is pretty bad news. Hence my strong opinions on the issue.


Comments on this page:

From 99.236.189.35 at 2008-07-01 19:13:19:

A few comments.

a) Security is always part of the machine's functionality. If it's known to be insecure (which isn't necessarily the same thing as missing patches), how can you trust any data that is either on it or goes through it?

b) Verizon's security team would appear to agree with you: http://securityblog.verizonbusiness.com/2008/07/01/123/ - in other words, the conventional wisdom of "OMG PATCHES MUST APPLY NOW OR GET PWNED" gets thrown out the window in the face of some concentrated study. (That being said, we all remember what happened September 2003. It wasn't pretty, firewalls or no.)

Your point about overhead is well taken, but doesn't that apply to all patches, good or bad? You're still going to have to test them. True, a bad patch means you later get to test one that's hopefully better, but that's incremental, no?

Once Slammer rolled through our campus, I found it much easier to persuade users to apply patches. Most of the time they work, and when they don't, I'm just trying to fix one or two users' machines (much easier if they have decent backups) instead of staying late three or four days in a row just trying to fix busted machines.

MikeP

By cks at 2008-07-02 00:35:19:

My view is that the need for testing security patches is created by vendors that release bad security patches; if security patches were sure to be good, we could confidently apply them to systems without having to test them beforehand.

By rdump at 2008-07-03 05:27:21:

If the systems are critical enough to your organization, then when it comes to deciding when to patch, it doesn't matter how good the vendor is at making reliable patches. You will apply the patches immediately, regardless. If the patches cause problems, you'll roll back to the previous configuration and sort it out.

That kind of rollback capability cuts out a lot of testing work. You only have to do your own testing when problems occur and you can't wait for the vendor to fix things in a new release of the patch.
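
As a rough illustration of that patch-then-roll-back workflow, here is a minimal sketch in Python. The ZFS dataset name, the apt upgrade command, and the "health check" are all placeholder assumptions; substitute whatever your environment actually uses, and in real life you would probably do the rollback from outside the running system.

    #!/usr/bin/env python3
    # Minimal sketch of "snapshot, patch, check, roll back on failure".
    # Assumptions (placeholders): a ZFS dataset named rpool/ROOT, apt-based
    # updates, and a trivial health check. Adapt all three to your setup.
    import subprocess
    import sys

    DATASET = "rpool/ROOT"              # hypothetical dataset name
    SNAPSHOT = DATASET + "@prepatch"

    def run(*cmd):
        # Run a command, raising CalledProcessError on failure.
        return subprocess.run(cmd, check=True)

    def healthy():
        # Stand-in for whatever "the machine still works" means locally;
        # here we just check that sshd is still running.
        return subprocess.run(["pgrep", "-x", "sshd"]).returncode == 0

    run("zfs", "snapshot", SNAPSHOT)        # checkpoint before patching
    try:
        run("apt-get", "-y", "upgrade")     # apply the pending patches
        if not healthy():
            raise RuntimeError("post-patch check failed")
    except Exception as err:
        print("patching failed (%s), rolling back" % err, file=sys.stderr)
        # Real rollbacks of a live root are usually done from outside the
        # running system (e.g. after booting into a rescue environment).
        run("zfs", "rollback", "-r", SNAPSHOT)
        sys.exit(1)

The point of the sketch is only that a cheap, reliable way back to the pre-patch state lets you apply patches immediately and defer the careful investigation to the cases where something actually breaks.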
