Why noting security fixes in Linux kernel changelogs doesn't really help

August 22, 2008

People periodically ask for the Linux kernel commit messages and changelogs to mark changes that are security fixes, generally so that they can easily know that they need to cherry-pick just those fixes into their own kernel versions. Ignoring the higher-level issues with this for the moment, there are at least three practical difficulties:

  • not all security fixes are recognized as such at the time; sometimes even the people making a change think that they are just fixing an ordinary bug, and the security implications are only recognized later.

  • just because you can find a specific change that fixes a security bug doesn't mean that you can use it as-is in another kernel. This goes beyond merely the need to backport the specific change; it means that you need a deep understanding of the code so you can tell if just the change alone is sufficient, or if the fix actually implicitly relies on an earlier change that modified some behavior that is now important for this bug and fix.

    (Even having exploit code that you can test against isn't complete assurance, because it just tells you that this exploit doesn't work any more; it doesn't tell you that you have completely patched the hole.)

  • it is fundamentally the wrong approach if you are maintaining a kernel code base, because it means that you are always scrambling to close vulnerabilities after they have been very publicly disclosed. What you should be doing is being proactive instead of reactive; you should be finding out about the problem and developing your own patch for your kernel version concurrently with the patch development for the main kernel.

There is one relatively sensible use for this sort of information: deciding if you need to upgrade the kernel on your production machines to the latest available version, and how urgently. But this doesn't require specific details and marked changes, just 'this kernel is known to fix security issues in the following areas' summary information.


Comments on this page:

From 83.245.144.38 at 2008-08-24 07:09:24:

I find myself disagreeing with the last two of your points. Some critical thoughts follow, without a single coherent thesis.

If backporting critical kernel security bug fixes is "fundamentally the wrong approach" for maintainers, then the whole software development model being followed must be fundamentally flawed. How can it be that this is not a problem for the many other kernels that are actively developed? Let alone for other big software projects? Both are without doubt good questions.

What always irritates me in this discussion is that somehow, in a wider context, the Linux kernel is understood to be so special that widespread and well-established full-disclosure practices would not apply. Things like "deep understanding of the code" when patching or being "proactive instead of reactive" are such fundamentally accepted facts for every serious software project that it is merely funny that these need to be emphasized when it comes to the Linux kernel.

As for the practical side: could the following quote from a previous entry ironically fit here too?

"The basic principle is simple: when you make things harder, you do not select for quality; you select for people who care enough."

Also: I dare you to find a 2.6-series kernel which would not deserve this summary label of "this kernel is known to fix security issues". Do we really need such a nice rubber stamp in this current situation in which each release contains at least two to three security vulnerabilities? Hopefully this is not the direction that, say, CVE would be heading.

I also dare you to remove the word "Linux" (or even better, the words "Linux kernel") from the title and rewrite the blog entry under this new title. Is it really the case that tracking and publicly mentioning security issues as a software developer is something that should be avoided? Or is it just a special rule for this special thing called the Linux kernel?

Cheers,

a system administrator who views the half-baked, vague and insufficient information regarding Linux kernel vulnerabilities as a sad and increasingly scary picture.

By cks at 2008-08-24 11:55:41:

The problem with backporting security fixes isn't the backporting itself, it is doing it after one's upstream publicly releases the security fix. After public disclosure you are in a race with attackers; can you patch first, or can they reverse engineer the patch and exploit you first?

You don't want to be in that race at all, so you want to release your security fix simultaneously with the upstream release, which means that you cannot wait for the upstream to publicly disclose the patch and then react; you must be proactively involved in the whole process, developing your version of the original patch before release.

(As you note, this applies to any piece of software, not just the Linux kernel.)

I think that we do need specific information about what security issues a release fixes, or at least what sort of security issues they are. I was unclear about this in the original entry, but that is what I meant by the 'in the following areas' bit, which should then go on to list the areas that have fixed security issues.

(By 'areas' I mean something like 'the Frobnitz ethernet driver had a locally exploitable bug that would crash the kernel; the Flicker video driver had a locally exploitable bug that would let an attacker read any piece of kernel memory; the dog() system call had a range checking bug that would let a local attacker become root'. Where CVEs and so on are available they should be mentioned.)

I see this as a lot different from attaching this information to specific patches or commit messages or the like, which is what people are usually asking the Linux kernel to do. Even if this information was attached to specific patches I don't believe that it would help those people, and why this is so is what the entry is about.

As you note, all of this logic applies to any piece of open source software. I wrote the entry about the Linux kernel because it's what I've seen people specifically comment on this way.

By cks at 2008-08-24 12:06:47:

A final note: I'm not taking a position here on whether developers should mention that a commit fixes a CVE bug or whatever in their commit messages. I'm just trying to explain why I think that even if they did, it wouldn't really do what the people asking for this generally want out of it.

(The concise summary of why is that it doesn't guarantee that you get all security fixes, it doesn't guarantee that you get a complete security fix that you can immediately use, and it gets you the security fix too late.)

From 83.245.144.38 at 2008-08-26 07:20:36:

First things first: as for the rant-like style of the original comment -- perhaps I was answering more to Linus himself and the opinions he so bluntly expressed in the recent debate on LKML. (The conclusion from it would be that Linus's Linux neither needs nor wants disclosure [let alone full public disclosure], nor the 'security circus' in any form.)

But if we limit the discussion only to commit messages, you're right: there is no need to mention security issues specifically in those logs; there are plenty of other means to announce these fixes, as you proposed. But the problem with the Linux kernel is that it has no functional channel or policy for discussing, analyzing and reporting security issues; that was the original point of the people who requested information in commit logs ("please give us at least some information if you are unable to properly disclose otherwise").

"The problem with backporting security fixes isn't the backporting itself, it is doing it after one's upstream publicly releases the security fix."

I think this is somewhat twisted; cf. in "normal", large-scale, perhaps commercial, software development settings it is this so-called upstream that (a) announces a security fix, (b) ports it to the current version of the given software and (c) backports it to older versions, all at once. This is also how "good" open source (operating system or kernel) projects operate, or should operate.

(Linus's answer was: because security bugs are just bugs, you just always run the latest and greatest. I did not quite grasp this -- wasn't Linux now supposed to power all those mission-critical systems, the same systems you really would not want to upgrade once a month? In millions and millions of firmware blobs?)

The general picture surrounding Linux is that all responsibility regarding security rests on the shoulders of "vendors". Upstream merges thousands and thousands of lines of code per release, and vendors pick up the pieces. It is a very sad and telling situation that practically a handful of people in the whole world know which 2.6.X.X-rcX version has fixed X number of important security bugs out of the total of X vulnerabilities in the X version of 2.6.X.

For small, community-based distributions (or, say, developers of various Linux security technologies) the situation is getting worse every day, while large commercial ones have full-time employees tracking the security issues. In both cases the provided "official" information is very limited, occasional, non-standardized and obscure. And in both cases you trust the man in the middle. And yes, in both cases information (even) in commit logs would help the people in the middle, the same people who in turn help system administrators in their risk assessment process.

Perhaps what the security history of the 1990s showed was that full disclosure is most effective when you think about the consumers or users of given software; when you publicly announce a security vulnerability with a full-scale working exploit, at least this so-called vendor will have to react. This, at the end of the day, has proven to be more valuable than opening a small window for a race with the bad people.

As they say, just my two cents, filled with a lot of rhetoric.


