Security Software Linux

New Linux Kernel Flaw Allows Null Pointer Exploits

Trailrunner7 writes "A new flaw in the latest release of the Linux kernel gives attackers the ability to exploit NULL pointer dereferences and bypass the protections of SELinux, AppArmor and the Linux Security Module. Brad Spengler discovered the vulnerability and found a reliable way to exploit it, giving him complete control of the remote machine. This is somewhat similar to the magic that Mark Dowd performed last year to exploit Adobe Flash. Threatpost.com reports: 'The vulnerability is in the 2.6.30 release of the Linux kernel, and in a message to the Daily Dave mailing list Spengler said that he was able to exploit the flaw, which at first glance seemed unexploitable. He said that he was able to defeat the protection against exploiting NULL pointer dereferences on systems running SELinux and those running typical Linux implementations.'"
  • Double standards (Score:1, Insightful)

    by Anonymous Coward on Saturday July 18, 2009 @08:26AM (#28739929)

    If this had been Windows, the article would have been tagged defectivebydesign.

  • by BPPG ( 1181851 ) <bppg1986@gmail.com> on Saturday July 18, 2009 @08:32AM (#28739947)

    It's important to note that there is almost never any "preferred" or "special" release of Linux to use. And obviously this flaw doesn't affect people that don't use any security modules.

    This is not good news, but it's important news. The kernel's not likely to get a "fixed" re-release for this version, although there will probably be patches for it. And when in doubt, just don't upgrade. Not very many machines can take advantage of all of the cool bleeding-edge features that come with each release anyway. Lots of older versions get "adopted" by someone who will continue to maintain that single kernel release.

  • by Anonymous Coward on Saturday July 18, 2009 @08:40AM (#28739981)

    If this had been Windows, we'd find out about it 9 months later. The guy who discovered it would inform Microsoft, who would sit on it for 3 months. Microsoft would then spend 3 months finding that the code they used to fix the problem (which was copied from a DLL they wrote in 2005) causes problems in newer versions of Excel because it uses null pointers to calculate file bloat or something. Then they'd threaten him with lawsuits for a few months if he released the information. In a rush to release a fix, MS would use a newer bit of code that breaks all versions of Excel, because they really don't need another reason for people not to upgrade.

    The end result of which is that MS releases a fix mere days after the flaw is "announced".

    The Excel bug goes unfixed for 4 years because it's a low priority and nobody has found a way to exploit it yet.

  • by Anonymous Coward on Saturday July 18, 2009 @08:40AM (#28739987)

    Your post will get modded Insightful pretty quickly even though it's just a dirty trick, similar to "oh gosh, I know I'll get modded down for this but".
    Look at the number of positive comments regarding the latest build of Windows 7 and compare them to the latest Linux kernel. They just don't stack up.
    Either Slashdotters don't run Linux anymore, or Windows has actually grown into a nice product. Either way, stop living in the past.

  • by Shinobi ( 19308 ) on Saturday July 18, 2009 @08:49AM (#28740009)

    And yet comp sci trash wonder why some of us actually learn assembler, and don't blindly trust compilers and libraries.

  • by Kjella ( 173770 ) on Saturday July 18, 2009 @08:56AM (#28740047) Homepage

    It's important to note that there is almost never any "preferred" or "special" release of Linux to use. (...) And when in doubt, just don't upgrade. Not very many machines can take advantage of all of the cool bleeding-edge features that come with each release, anyways. Lots of older versions get "adopted" by someone who will continue to maintain that single kernel release.

    As a guess pulled out of my nethers, 99% of users run their distro's default shipping kernel, which means there are maybe a dozen kernels in widespread use, with a long tail. Unless you're living on the bleeding edge, that's what you want to do; otherwise you have to keep up with and patch stuff like this yourself. I'd much rather trust that than not upgrading, or picking some random kernel version and hoping it gets adopted by someone.

  • Re:Just like Linux (Score:4, Insightful)

    by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Saturday July 18, 2009 @09:11AM (#28740133) Homepage

    Unless they're going to add a proper warning for the condition to gcc 'today', it won't, really.

    Sure, there are enough developers to go over the kernel and make sure such errors haven't been missed elsewhere, but all it takes is one to miss it and it's still there. Then there's all the other software compiled by gcc...

    I'm not entirely sure how it can lead to an exploit (short of remapping page zero, which requires root privileges, so doesn't really count), but since it has, it's going to need a proper fix.

  • Re:Wait, what? (Score:5, Insightful)

    by TheSunborn ( 68004 ) <mtilsted.gmail@com> on Saturday July 18, 2009 @09:12AM (#28740137)

    I think the compiler is correct. If tun is NULL, then tun->sk is undefined and the compiler can do whatever optimization it wants.

    So when the compiler sees tun->sk it can assume that tun is not NULL and do the optimization, because if tun IS NULL, then the program has invoked undefined behaviour, which the compiler doesn't have to preserve or handle. (How do you keep the semantics of an undefined program?)
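
    For illustration, here is a minimal, self-contained sketch of the pattern being discussed (the struct and function names are hypothetical stand-ins, not the actual kernel source): the dereference comes before the NULL check, so a compiler that assumes a dereferenced pointer cannot be NULL may delete the check as dead code.

        #include <stdio.h>

        /* Hypothetical stand-ins for the kernel structures involved. */
        struct sock { int refcnt; };
        struct tun_struct { struct sock *sk; };

        /* The broken ordering: dereference first, check second. */
        static struct sock *tun_get_sock(struct tun_struct *tun)
        {
            struct sock *sk = tun->sk;  /* undefined behaviour if tun == NULL */

            if (!tun)                   /* may be deleted by the optimizer    */
                return NULL;

            return sk;
        }

        int main(void)
        {
            struct sock s = { 1 };
            struct tun_struct t = { &s };
            printf("refcnt = %d\n", tun_get_sock(&t)->refcnt);
            return 0;
        }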

  • by Bananenrepublik ( 49759 ) on Saturday July 18, 2009 @09:13AM (#28740141)

    They were writing nonsense. GCC makes use of the fact that in the C language any pointer that has been dereferenced can't be NULL (this is made explicit in the standard). People use C as a high-level assembly where these assumptions don't hold, which is why code written that way breaks. This issue came up a few months ago on the GCC lists, where an embedded developer pointed out that he regularly maps memory to the address 0x0, thereby running into issues with this assumption in the optimizers. The GCC developers introduced a command-line flag which tells the compiler not to make that assumption, allowing it to be used even in environments where NULL pointers can be valid.

    Now, the exploit uses this feature of the compiler (or the C language, if you will) to get the kernel into an unspecified state (which is then exploited) -- the NULL pointer check will be "correctly" optimized away. But in order to do this it first has to make sure that the pointer dereference preceding the NULL pointer check doesn't trap. This needs some mucking around with SELinux, namely one has to map memory to 0x0.

    This is a beautiful exploit, which nicely demonstrates how complex interplay between components can have unforeseen consequences. Linux fixes this by using the aforementioned new compiler option so that the NULL pointer check is not optimized away.
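
    As a hedged sketch of the "map memory to 0x0" step (assuming a Linux system with the usual mmap(2) interface): on most kernels this call is refused outright because of the vm.mmap_min_addr restriction, and the original exploit relied on an SELinux/personality interaction to get around that, but it shows the idea.

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* Ask for an anonymous, executable mapping at exactly address 0. */
            void *page = mmap((void *)0, 4096,
                              PROT_READ | PROT_WRITE | PROT_EXEC,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
                              -1, 0);
            if (page == MAP_FAILED) {
                perror("mmap at 0 refused (vm.mmap_min_addr?)");
                return 1;
            }

            /* Anything the kernel later reads through a NULL pointer now
             * comes from memory the attacker controls. */
            memset(page, 0x90, 4096);
            printf("page zero mapped at %p\n", page);
            return 0;
        }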

  • Re:Wait, what? (Score:1, Insightful)

    by Anonymous Coward on Saturday July 18, 2009 @09:26AM (#28740207)

    No, dereferencing a NULL pointer results in undefined behaviour. The GCC compiler is generating code which follows the C standard in this case.

    If tun == NULL, then the behaviour becomes undefined as soon as the first line (the dereference) is executed.
    If tun != NULL, then the check is not needed.
    Since "assuming tun is a non-NULL pointer" falls under the remit of "undefined", the compiler is acting reasonably in doing so.

    "Undefined behaviour" includes anything that the program can possibly do, such as setting your printer on fire and emailing death threats to your grandmother. (Although it would be a malicious compiler which generates code that did that deliberately, it would still follow the standard!)

    Therefore the compiler is allowed to assume that the C program guarantees that tun is not NULL at that point. Expecting any specific behaviour, such as the program halting, is outside the C standard. The compiler could optimize out the read of sk if sk is never used, for example, and that would be an entirely reasonable optimization.

  • by gilgongo ( 57446 ) on Saturday July 18, 2009 @09:42AM (#28740317) Homepage Journal

    For such a piece of shit company, they sure do have a lot more marketshare than the computing godOS known as Linux.

    Microsoft's current market share has nothing to do with quality, and everything to do with monopoly. It doesn't matter whether their product is any good or not, because not only do the vast majority of computer users not even know what Windows is, they wouldn't have the first clue what an alternative to Windows or MS Office would be like.

    Time to learn about basic economic theory [wikipedia.org] I think.

  • by Anonymous Coward on Saturday July 18, 2009 @09:50AM (#28740361)

    The point is that GCC silently optimizes it away so the programmer has no idea that it's not even running the code they put in (however incorrect that code is). It's like saying "if there is an error in my code just remove that code and keep the rest without telling me".

  • by Rockoon ( 1252108 ) on Saturday July 18, 2009 @10:17AM (#28740547)
    This issue is a bit more complicated than you people are making it out to be.

    For the most part, programmers DO WANT this kind of optimization, which is why they use an optimizing compiler. Things like dead-code elimination, constant propagation, and whole-program optimization are important to programmers.

    If you don't want this stuff done, you don't reach for an optimizing compiler and then enable those optimizations. It's their purpose. An "if (something we know at compile time)" should *always* be eliminated by a decent optimizing compiler.

    Now, should GCC make assumptions in this specific case about the state of the pointer? Probably not. This isn't actually a case of "something we know at compile time", so it's a bug in the optimizer.
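
    As a small sketch of the kind of optimization programmers do want (nothing here beyond standard C), constant propagation plus dead-code elimination should fold the whole function below to "return 42;" at any reasonable optimization level.

        #include <stdio.h>

        static int answer(void)
        {
            const int debug = 0;
            if (debug)          /* provably false: the branch is eliminated  */
                return -1;
            return 6 * 7;       /* folded to the constant 42 at compile time */
        }

        int main(void)
        {
            printf("%d\n", answer());
            return 0;
        }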
  • by bschorr ( 1316501 ) on Saturday July 18, 2009 @10:23AM (#28740593) Homepage

    How stereotypically /. is this? We have a story about a security flaw in Linux and it somehow turns into a Microsoft-bashing session.

  • by Pvt_Ryan ( 1102363 ) on Saturday July 18, 2009 @10:24AM (#28740597)
    To be fair, if it was Windows we'd never know which bit of the code was exploitable or why, just that there was an exploit; after all, we can't SEE their source.

    The good news is that this will be fixed in 2.6.30.2 within the next month, instead of being left to be fixed in Windows 2012, if ever...
  • by Shinobi ( 19308 ) on Saturday July 18, 2009 @10:29AM (#28740647)

    I never said anything about writing everything in it. But many of us with proficiency in it tend to check what the compiler actually outputs, because we know that the compiler is not smarter than the human who wrote it (a behaviour further reinforced by the two smelly piles of fecal matter that are MSVC and GCC). This is also why many of us don't blindly trust the compiler's optimizations either, and always double-check. A disassembler is also useful for tearing through critical pieces of code to see if the compiler has built them the way you intended.

    I've removed quite a few obscure but potentially very nasty bugs in my software by doing that. Then again, I'm a freelancer; I live by my reputation for solid, fault-free code.

  • by Shamenaught ( 1341295 ) on Saturday July 18, 2009 @10:43AM (#28740739)

    Well, Microsoft got that market share by providing cheap software, specifically DOS. It was arguably of low quality, but who cares that much about the quality if it's cheap, right?

    I don't know if it was originally the plan, but at some point along the way Microsoft realized they had a monopoly. They leveraged their share by putting up prices and using FUD tactics to discourage people from switching. The main issue I have is that although the prices went up, the comparative quality of the software didn't. Sure, it looks a lot better than DOS, but that's because modern computers are practically supercomputers compared to what DOS ran on. So you see, having a large market share doesn't mean the company isn't a piece of shit; it just means they can be a piece of shit and get away with it.

    Sure, you can argue that a bug in the latest Linux kernel is a sign that there are bugs in lots of OSes. The difference is that with Linux it'll be fixed in a couple of days. Very few people will be using the latest kernel; AFAIK none of the big distros have shipped with it yet. And although some users may have downloaded and compiled the source themselves (I did so myself, as it offered driver support for some new hardware), the system is versatile enough that you can simply switch between different kernels, even without fully rebooting, although not completely without disruption.

  • by gbutler69 ( 910166 ) on Saturday July 18, 2009 @10:52AM (#28740793) Homepage
    To me, the "if (!tun)" check should/must be before the de-reference; otherwise, it is meaningless! However, the compiler should print a warning in this case, not just optimize it away.
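
    As a sketch of the corrected ordering (reusing the hypothetical structs from the earlier example rather than the real kernel code): test the pointer before touching it, and there is no undefined behaviour for the optimizer to exploit. Later GCC releases did add a -Wnull-dereference warning, though it is not guaranteed to flag every check-after-dereference pattern.

        #include <stdio.h>

        struct sock { int refcnt; };
        struct tun_struct { struct sock *sk; };

        /* Corrected ordering: check first, dereference second, so the
         * compiler has no licence to remove the check. */
        static struct sock *tun_get_sock_fixed(struct tun_struct *tun)
        {
            if (!tun)
                return NULL;
            return tun->sk;
        }

        int main(void)
        {
            struct sock s = { 1 };
            struct tun_struct t = { &s };
            printf("NULL in  -> %p\n", (void *)tun_get_sock_fixed(NULL));
            printf("valid in -> %p\n", (void *)tun_get_sock_fixed(&t));
            return 0;
        }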
  • by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Saturday July 18, 2009 @11:41AM (#28741139)

    Of course NULL is part of the C language, you blathering idiot, and it always has been. The level of ignorance here astounds me. Don't post about things you don't understand.

    Quoting from C89 [flash-gordon.me.uk]: (not C99, C89, the one that's older than dirt.)

    4.1.5 Common definitions: The following types and macros are defined in the standard header <stddef.h>. Some are also defined in other headers, as noted in their respective sections. ... NULL, which expands to an implementation-defined null pointer constant ... A.6.3.13 Library functions: The null pointer constant to which the macro NULL expands ($4.1.5).

    NULL wasn't even "added" in C89: NULL appears in the oldest, cruftiest UNIX code you can imagine [tuhs.org]. (That link is the original cat command from 1979.)

  • by marcansoft ( 727665 ) <hector AT marcansoft DOT com> on Saturday July 18, 2009 @11:45AM (#28741181) Homepage

    Sure it does - GCC knows at compile time that if the if() condition were true, we're already in the "undefined behavior" realm and all bets are off. So it gets rid of it. The code is broken: it's not the compiler's job to compile for the maximum defensiveness of the resulting machine code, otherwise we'd all be using bounds-checking compilers. If the compiler realizes that a certain runtime value will lead to undefined results (because the programmer chose to do so), it is free to break the execution as much as it wants in that case for code that runs afterwards. Essentially, undefined behavior is a contract signed by the programmer that says "I certify that this will never happen", which is why the compiler chose to perform this optimization.

    Even though the real bug is clearly in the code, moving on to the realm of what's desirable from a compiler, I think it's clear that this behavior can make some problems worse (to the compiler, problems are binary - if there's a problem all bets are off - but not to us). This is fine in the name of optimization, but I think in this particular instance either a) kernel developers should opt to turn this optimization off, or b) (better) make GCC warn when this kind of optimization happens, because it's quite likely a bug.

    In effect, the code is a form of broken defensive programming (you check after the fact whether you've screwed up). It's wrong, but we still wouldn't want the compiler to silently remove the check. So I think the ideal solution (besides fixing the code) is to add a warning to the compiler. NULL pointer dereferences are a bug in the vast majority of cases, and checking for a NULL pointer after dereferencing it (in such a way that the compiler recognizes it and is about to remove the check) is at best redundant and more likely a bug.

    There's still the issue of the page 0 fuckery. If someone can make page 0 accesses not crash the kernel then that's also a bug - there are good reasons why we want NULL and near-NULL pointer accesses to always crash.

  • by kestasjk ( 933987 ) * on Saturday July 18, 2009 @11:50AM (#28741203) Homepage
    So you can disassemble compiled code; way to go. Have fun disassembling a huge binary that's far too large to economically analyze in assembly.

    What's that? You don't fully disassemble and analyze large binaries but only critical paths or small binaries? How unique and sought-after your services must be. I'm sure analysis of compiled kernels is the best way to tackle this bug..
  • by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Saturday July 18, 2009 @12:08PM (#28741337)

    NO ONE has time to write all their code in assembly, not even for the kernel

    To be fair, the OP wasn't suggesting that programs actually be written in assembly, but rather that programmers learn assembly and know how to debug libraries. That's a sentiment I'll second: of course you don't write programs using machine code operations, but when something breaks, it's quite useful to be able to drop down to assembly in a debugger and see what's actually going on, especially when debugging optimized code.

    As for compiler bugs: once in a while, they really do happen. Of course, one's first, second, and even third reaction shouldn't be to blame the compiler, but when all other options are exhausted, the possibility is there.

  • by QuoteMstr ( 55051 ) <dan.colascione@gmail.com> on Saturday July 18, 2009 @12:13PM (#28741373)

    For the sake of argument, let's suppose you're right. (I think it'll be a cold day in hell when the BSDs move away from GCC.) Increasing performance demands will lead to the inclusion of more optimizations in PCC, and these optimizations will lead people like you to make the same complaints about PCC that people make about GCC today.

    Really, what you're opposed to isn't GCC, but the notion of an optimizing compiler. Sorry, but history has spoken: the gain of optimization far outweighs the minor cost of forcing people like you to actually learn what's guaranteed by the language and what is not.

  • Re:Wait, what? (Score:2, Insightful)

    by ext42fs ( 725734 ) on Saturday July 18, 2009 @12:50PM (#28741625) Homepage

    I think the compiler is correct. If tun is NULL, then tun->sk is undefined and the compiler can do whatever optimization it wants.

    So when the compiler sees tun->sk it can assume that tun is not NULL and do the optimization, because if tun IS NULL, then the program has invoked undefined behaviour, which the compiler doesn't have to preserve or handle. (How do you keep the semantics of an undefined program?)

    The compiler is a complete asshole for deliberately optimizing a too-late NULL check away instead of screaming "possibly dereferencing NULL" or something.

  • by K. S. Kyosuke ( 729550 ) on Saturday July 18, 2009 @01:10PM (#28741771)
    There *is* a sane way to develop for such environments, *without* C, with predictable results, with deterministic behaviour, in short time, interactively, iteratively, on a reasonably high level and without getting mad. It's called Forth. Why is that such a big deal suddenly, when it was no problem for decades?
  • by rtfa-troll ( 1340807 ) on Saturday July 18, 2009 @01:42PM (#28742019)

    The error was that the compiler optimized away the if statement,

    Being more specific, based on reading the code in the SANS report (after getting the pointer from a user comment on The Register), the error was that the compiler was in an optimising mode which told it to optimise away such checks where the NULL pointer had already been dereferenced: -O2 was active, and that means -fdelete-null-pointer-checks is turned on.
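
    A self-contained way to watch the flag in action (the file and struct names are made up for illustration): compile the snippet below twice and diff the generated assembly. Depending on the GCC version, the NULL-check branch disappears at plain -O2 and stays when -fno-delete-null-pointer-checks is added, which is the option the kernel build adopted as a mitigation.

        /*
         * null_check.c (hypothetical file name):
         *
         *   gcc -O2 -S null_check.c -o with_opt.s
         *   gcc -O2 -fno-delete-null-pointer-checks -S null_check.c -o without_opt.s
         *
         * The "if (!b)" test should appear only in the second output.
         */
        struct box { int value; };

        int read_value(struct box *b)
        {
            int v = b->value;   /* dereference first...                  */
            if (!b)             /* ...so this check is a candidate for   */
                return -1;      /*    deletion under the optimization    */
            return v;
        }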

    Two groups are at fault here:

    1. The optimisation is sufficiently clearly documented (it's listed in gcc under -O2, and the documentation of that option does say that some checks may be optimised away and that in some environments this may be dangerous). Thus the Linux kernel team has some blame.

    2. On the other hand, the GCC documentation for -O2 does not explicitly mention that the optimisations may have security implications. There should be, in my opinion, a clear statement that below some optimisation level GCC will not change the meaning of code under any circumstances.

    From one point of view, I guess this shows the strength of the Linux model of development where release early/release often means such bugs can be found before the public starts to actually use the software.

    From another point of view, however, I wish that the Linux kernel developers could come up with a more mature attitude towards security people, however immature those people themselves may be. Yes, it's true that "security bugs" are, in a sense, just more bugs that could cause loss of data. However, a) that's not an excuse, and b) if those bugs turned up in a real production system in an important application they could cause real problems; exploitable security bugs are much more dangerous than other data-loss bugs precisely because they only trigger when someone wants them to trigger.

    Linux kernel people who want their kernel to be broadly used have to make a clear security statement which explains why such bugs don't matter. It's not good enough to just say "don't use Linux for high-security applications", but it should be enough to say "before you use Linux in a configuration where security might be important, ensure you have done a) a source code audit; b) the functional tests according to XYZ; etc."

  • by RightSaidFred99 ( 874576 ) on Saturday July 18, 2009 @02:08PM (#28742221)

    Oh nonsense. You have no idea of the root cause of most Windows exploits. In fact, it would be interesting if you would get in your Way Back machine and point out the last time windows had an F'ING KERNEL exploit. Jesus.

    At the OS level, anyone with half a brain can guess that Linux is probably less secure than Windows. It's all the shit that runs on Windows that has the issue. Yet you apologists all come on and act like any kind of Linux flaw is just some odd anomaly and easily explained away.

    What's worse, how long will this exploit live in the wild? You'll all foam at the mouth about how quickly it will be patched. OK. That's nice and all, but how long until the fix is deployed on, say, 90% of the Linux machines exhibiting the flaw? And I'm sure, being Linux, this won't take a reboot, right? (I'm joking on that last one - of course it will.)

    Double standard indeed.

  • by RightSaidFred99 ( 874576 ) on Saturday July 18, 2009 @02:11PM (#28742249)
    C does not support threading. If this code breaks because of threading it is not the compiler's fault. This is not a compiler bug, and the correct behavior includes optimizing this away. It would be _nice_ if it warned.
  • Re:Wait, what? (Score:3, Insightful)

    by sjames ( 1099 ) on Saturday July 18, 2009 @02:26PM (#28742365) Homepage Journal

    Arguably the compiler is wrong, because it's (obviously) not actually impossible for address 0 to refer to valid memory, however much that goes against convention and best practice. The very existence of this problem proves that the compiler can NOT assume that tun is not NULL.

  • Re:Wait, what? (Score:1, Insightful)

    by Anonymous Coward on Saturday July 18, 2009 @03:46PM (#28742907)

    In this case, it is tun->sk, not &(tun->sk) which is being loaded, however the pointer arithmetic which generates the address happens first.

    Well, there is no provision for this. The C standard says (6.5.2.3/4) that the value of tun->sk is that of the sk member of the object to which tun points - but since tun does not point to any object (it is a NULL pointer, remember?), then sk is a member of no object.

    Whatever "pointer arithmetic" is going on behind the scenes, it is an implementation issue and outside of the C standard. In fact, the C standard does not even mention that term anywhere near this paragraph. Your reasoning is correct from the compiler hacker's point of view, but it is outside of the spec's scope.
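
    A small sketch of the distinction being argued, with hypothetical structs: under the standard both expressions are undefined when tun is NULL, even though on typical implementations only the first actually performs a load.

        #include <stdio.h>

        struct sock { int refcnt; };
        struct tun_struct { int flags; struct sock *sk; };

        int main(void)
        {
            struct sock s = { 1 };
            struct tun_struct t = { 0, &s };
            struct tun_struct *tun = &t;

            struct sock *loaded = tun->sk;   /* reads the member's value */
            struct sock **addr  = &tun->sk;  /* only computes an address */

            printf("%p %p\n", (void *)loaded, (void *)addr);
            return 0;
        }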

  • by inode_buddha ( 576844 ) on Saturday July 18, 2009 @04:15PM (#28743079) Journal
    Submissions and patches to the kernel are independently tested and verified at least twice before being signed off and committed, usually by more experienced upstream developers. This is the normal process. The only thing different in this case is that a vulnerability was exposed, hence it is in the news.
  • by Anonymous Coward on Saturday July 18, 2009 @06:09PM (#28743851)

    You, sir, have no clue what you're talking about. This has nothing to do with best practices, the code is just wrong. That is, unless you consider "you won't dereference a NULL pointer" a "best practice". The rest of us consider it a fundamental law of the universe.
