Security | Software | Linux

New Linux Kernel Flaw Allows Null Pointer Exploits (391 comments)

Trailrunner7 writes "A new flaw in the latest release of the Linux kernel gives attackers the ability to exploit NULL pointer dereferences and bypass the protections of SELinux, AppArmor and the Linux Security Module. Brad Spengler discovered the vulnerability and found a reliable way to exploit it, giving him complete control of the remote machine. This is somewhat similar to the magic that Mark Dowd performed last year to exploit Adobe Flash. Threatpost.com reports: 'The vulnerability is in the 2.6.30 release of the Linux kernel, and in a message to the Daily Dave mailing list Spengler said that he was able to exploit the flaw, which at first glance seemed unexploitable. He said that he was able to defeat the protection against exploiting NULL pointer dereferences on systems running SELinux and those running typical Linux implementations.'"
  • by BPPG ( 1181851 ) <bppg1986@gmail.com> on Saturday July 18, 2009 @07:32AM (#28739947)

    It's important to note that there is almost never any "preferred" or "special" release of Linux to use. And obviously this flaw doesn't affect people who don't use any security modules.

    This is not good news, but it's important news. The kernel's not likely to have a "fixed" re-release for this version, although there will probably be patches for it. And when in doubt, just don't upgrade. Not very many machines can take advantage of all of the cool bleeding-edge features that come with each release, anyways. Lots of older versions get "adopted" by someone who will continue to maintain that single kernel release.

    • by Kjella ( 173770 ) on Saturday July 18, 2009 @07:56AM (#28740047) Homepage

      It's important to note that there is almost never any "preferred" or "special" release of Linux to use. (...) And when in doubt, just don't upgrade. Not very many machines can take advantage of all of the cool bleeding-edge features that come with each release, anyways. Lots of older versions get "adopted" by someone who will continue to maintain that single kernel release.

      As a guess pulled out of my nethers, 99% use their distro's default shipping kernel, which means there are maybe a dozen kernels in widespread use, with a long tail. Unless you're living on the bleeding edge, that's what you want to do; otherwise you have to keep up with and patch stuff like this yourself. I'd much rather trust that than not upgrading, or picking some random kernel version and hoping it's adopted by someone.

    • by inode_buddha ( 576844 ) on Saturday July 18, 2009 @08:09AM (#28740113) Journal
      Actually, it's already been fixed as of 2.6.31-rc3. Interestingly enough, the code by itself was fine until gcc tries to re-assign the pointer value upon compiling. Steven J. Vaughan-Nichols had a decent write-up about it in Computerworld.
      • Re: (Score:3, Interesting)

        Actually, it's already been fixed as of 2.6.31-rc3. Interestingly enough, the code by itself was fine until gcc tries to re-assign the pointer value upon compiling

        The code by itself is not fine. The underlying bug is that the kernel is allowing memory at virtual address 0 to be valid. The compiler was designed for an environment where there can never be a valid object at 0, and has chosen the bit pattern 000...000 to be the null pointer. If you want to use C in an environment where 0 can be a valid address…

    • I'm just curious about the 2.6.30 version. I just ran a > to recheck my version and I'm using 2.6.28-13. I ran an update just to make sure, and this is the current version for my distribution. So, can anyone tell me whether 2.6.30 is a stable version in widespread use, or is it a bleeding-edge version under development and test? Finding flaws in development and test is what's supposed to happen. Unfortunately the takeaway headline for some will be "see? told you Linux is just as vulnerable as Windows!" Well maintained distribu…
      • Re: (Score:3, Funny)

        by INT_QRK ( 1043164 )
        don't know why but "uname -a" was replaced by ">" in my above post...something I did
      • Re: (Score:3, Informative)

        by PReDiToR ( 687141 )
        Your distro (let's say Fedora for this example) has to get the sources for a new kernel, apply their own patches, test that the kernel works, package it and then put it out to beta testers to see if it breaks any one of the many configurations of Fedora that are out there.

        Sound familiar?

        Then they have to upload it to their package management servers and put the fix out there for you to use.

        This might not sound like a lot of work, but who needs a new kernel when they are busy with a whole truck full of…
    • I would expect most distributions to take the fix from .31 and apply it to .30. Most distributions are pretty good at watching for CVEs and other high-importance bug reports and backporting the fixes. For example, I would expect the fix to show up in the ebuild for Gentoo Real Soon Now.

    • by SEWilco ( 27983 )
      It's simpler than not using that version. It's a version of the kernel which has only been distributed in one specialized Red Hat distribution, so it's hard to find a vulnerable machine in the wild. The other vulnerable group is bleeding-edge developers who compiled that kernel themselves, and they'll upgrade as soon as one of them fixes it.
  • by brunes69 ( 86786 ) <slashdot@keirsGI ... minus herbivore> on Saturday July 18, 2009 @07:40AM (#28739985)

    I always disable those security modules, as they always end up causing incompatibilities and other erratic behavior in software.

    Exactly what do they do anyway?

    • by 140Mandak262Jamuna ( 970587 ) on Saturday July 18, 2009 @08:07AM (#28740107) Journal
      They create vulnerabilities by allowing remote code to overload error handlers and thus pwn your system?
      • Re: (Score:3, Interesting)

        They ruin otherwise working code that was written in slightly different environments and for which the rather arcane behavior of SELinux has not been tuned. They're also often difficult to write test suites for, especially given how unpredictable SELinux changes are, since they alter behavior based on factors entirely outside the control of the particular program author: where the package installs the code and what SELinux policies are in place.

        It's gotten better: Linux operating sys…

  • Wait, what? (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Saturday July 18, 2009 @07:58AM (#28740061) Journal
    This code looks right?

    struct sock *sk = tun->sk;  // initialize sk with tun->sk
    ...
    if (!tun)
            return POLLERR;     // if tun is NULL return error

    So, he's dereferencing tun, and then checking if tun was NULL? Looks like the compiler is performing an incorrect optimisation if it's removing the test, but it's still horribly bad style. This ought to be crashing at the sk = tun->sk line, because the structure is smaller than a page, and page 0 is mapped no-access (I assume Linux does this; it's been standard practice in most operating systems for a couple of decades to protect against NULL-pointer dereferencing). Technically, however, the C standard allows tun->sk to be a valid address, so removing the test is a semantically-invalid optimisation. In practice, it's safe for any structure smaller than a page, because the code should crash before reaching the test.

    So, we have bad code in Linux and bad code in GCC, combining to make this a true GNU/Linux vulnerability.
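
    As an illustration of the fix direction (a sketch only, with made-up stand-in structs rather than the real ones from drivers/net/tun.c, and not necessarily the exact upstream patch), the safe ordering tests the pointer before anything reads through it:

    #include <stdio.h>

    struct sock { int refcnt; };                 /* stand-in, not the kernel's struct sock */
    struct tun_struct { struct sock *sk; };      /* stand-in for the real tun struct       */

    /* Hypothetical poll-style handler: the NULL test comes first, so no
       dereference can happen on the tun == NULL path. */
    static int poll_sketch(struct tun_struct *tun)
    {
        struct sock *sk;

        if (!tun)
            return -1;            /* stand-in for POLLERR */

        sk = tun->sk;             /* only reached when tun is known non-NULL */
        return sk ? sk->refcnt : -1;
    }

    int main(void)
    {
        printf("%d\n", poll_sketch(NULL));   /* prints -1 instead of crashing */
        return 0;
    }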

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      The patch [kerneltrap.org].

    • Re:Wait, what? (Score:5, Insightful)

      by TheSunborn ( 68004 ) <mtilsted@nospaM.gmail.com> on Saturday July 18, 2009 @08:12AM (#28740137)

      I think the compiler is correct. If tun is null, then tun->sk is undefined and the compiler can do whatever optimization it wants.

      So when the compiler sees tun->sk it can assume that tun is not null and do the optimization, because IF tun is null, then the program has invoked undefined behaviour, which the compiler doesn't have to preserve or handle. (How do you keep the semantics of an undefined program??)
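
      A minimal, self-contained version of the same situation (the names here are made up; whether the test really disappears depends on your GCC version and flags, so compare the assembly from "gcc -O2 -S" with and without -fno-delete-null-pointer-checks):

      struct thing { int field; };

      int read_field(struct thing *p)
      {
          int v = p->field;   /* dereference happens first...                   */
          if (!p)             /* ...so the compiler may treat this as dead code */
              return -1;
          return v;
      }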

      • Re: (Score:3, Insightful)

        by sjames ( 1099 )

        Arguably the compiler is wrong, because it's (obviously) not actually impossible for address 0 to refer to valid memory, however much against convention and best practice that may be. The very existence of this problem proves that the compiler can NOT assume that tun is not null.

    • Re:Wait, what? (Score:5, Interesting)

      by pdh11 ( 227974 ) on Saturday July 18, 2009 @08:16AM (#28740153) Homepage

      Technically, however, the C standard allows tun->sk to be a valid address, so removing the test is a semantically-invalid optimisation.

      No. Technically, if tun is null, dereferencing it in the expression tun->sk invokes undefined behaviour -- not implementation-defined behaviour. It is perfectly valid to remove the test, because no strictly conforming code could tell the difference -- the game is already over once you've dereferenced a null pointer. This is a kernel bug (and not even, as Brad Spengler appears to be claiming, a new class of kernel bug); it's not a GCC bug.

      But as other posters have said, it would indeed be a good security feature for GCC to warn when it does this.

      Peter

      • Re: (Score:3, Informative)

        by TheRaven64 ( 641858 )

        No. Technically, if tun is null, dereferencing it in the expression tun->sk invokes undefined behaviour -- not implementation-defined behaviour

        I've seen a lot of people claiming that; however, as someone who hacks on a C compiler, there are a few things I take issue with in your assertion.

        First, NULL is a preprocessor construct, not a language construct; by the time it gets to the compiler the preprocessor has replaced it with a magic constant[1]. The standard requires that it be defined as some value that may not be dereferenced, which is typically 0 (but doesn't have to be, and isn't on some mainframes). Dereferencing NULL is invalid, however…

        • But it's (tun->sk) not &(tun->sk). That is: The code is looking at the value of the member sk in the struct pointed to by tun. Looking at this value is undefined if tun is null.
          It does not take the address of tun or tun->sk.

        • by pdh11 ( 227974 )

          While dereferencing NULL is explicitly not permitted, pointer arithmetic on NULL is permitted, and dereferencing any non-NULL memory address is permitted.

          Not any non-null memory address -- only one that points into an object. As there is no object whose address is the null pointer, the dereference is still undefined. And the compiler knows that.

          Another way of looking at it is that tun->sk is equivalent to (*tun).sk, which is even more clearly undefined in C.

          Peter

        • Re: (Score:2, Informative)

          by Anonymous Coward

          You are completely wrong and you should learn some C before posting crap like this.

          The NULL pointer has the value 0 and no other value. Period. Internally, it can be represented by other bit-patterns than all-0. But the C standard demands that
          void *x = 0;
          generates the NULL pointer.

          The last paragraph is also completely wrong, because you fail to realize that the subtraction of two pointers gives an integer and not another pointer.

          So: please, please don't post again until you've learnt the abso…

        • Re:Wait, what? (Score:5, Informative)

          by johnw ( 3725 ) on Saturday July 18, 2009 @09:54AM (#28740813)

          First, NULL is a preprocessor construct, not a language construct; by the time it gets to the compiler the preprocessor has replaced it with a magic constant[1].

          Which must be either "0" or "(void *) 0".

          The standard requires that it be defined as some value that may not be dereferenced, which is typically 0 (but doesn't have to be

          Not true - the standard requires NULL to be defined as one of the two values given above.

          and isn't on some mainframes

          There are indeed some platforms where a null pointer is not an all-bits-zero value, but this is achieved by compiler magic behind the scenes. It is still created by assigning the constant value 0 to a pointer, and can be checked for by comparing a pointer with a constant 0.
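
          A small illustration of that point (nothing platform-specific assumed): however the machine represents a null pointer internally, the source-level constant 0 is what you assign and compare against, and the compiler translates it for you.

          #include <stdio.h>

          int main(void)
          {
              int *p = 0;                          /* null pointer, whatever its bit pattern */

              if (p == 0)    puts("p == 0");       /* portable null test                */
              if (!p)        puts("!p");           /* same test, shorter spelling       */
              if (p == NULL) puts("p == NULL");    /* NULL is a null pointer constant   */

              return 0;
          }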

          • Re:Wait, what? (Score:4, Informative)

            by MSG ( 12810 ) on Saturday July 18, 2009 @01:24PM (#28742347)

            Which must be either "0" or "(void *) 0". ...
            There are indeed some platforms where a null pointer is not an all-bits-zero value, but this is achieved by compiler magic behind the scenes. It is still created by assigning the constant value 0 to a pointer, and can be checked for by comparing a pointer with a constant 0.

            What you've said is technically true, but doesn't contradict or clarify the post to which you replied in any way, so I'm not sure what your point is.

            As you point out, a NULL pointer is a pointer which is represented by "(void *) 0" in the C language. However, where you may be confused is that "(void *) 0 != (int) 0". At least, not always. The compiler is responsible for determining if any "0" is used in a pointer context and casting it to the appropriate value, which may not be the same as numeric "0". So, while it's always possible to check for a NULL pointer by comparing a pointer to 0 in code, the machine may use a different value for NULL pointers. When you check "if(p)", the binary code that is produced will be comparing the value of "p" to the NULL address which is appropriate for the machine on which it is running.

            The C FAQ [c-faq.com] has more information.
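
            To make the distinction concrete (a sketch, not tied to any particular machine): assigning the source-level constant 0 always produces a null pointer, while zeroing the pointer's raw bytes only happens to work where the null representation is all-bits-zero.

            #include <string.h>

            struct node { struct node *next; };

            void init_portable(struct node *n)
            {
                n->next = 0;     /* always a null pointer, on any conforming platform */
            }

            void init_raw_bytes(struct node *n)
            {
                /* all-zero bytes: usually null in practice, but not guaranteed by
                   the standard on machines with a non-zero null representation */
                memset(&n->next, 0, sizeof n->next);
            }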

        • Re:Wait, what? (Score:4, Informative)

          by vslashg ( 209560 ) on Saturday July 18, 2009 @10:34AM (#28741095)

          &(((struct foo*)(void*)0)->bar) will also give the value of the offset of the bar field.

          You're speaking with a voice of authority, which is dangerous because of how incorrect in general your post is.

          Others have already pointed out that you are wrong about NULL. Here's precisely what the spec says about the argument to &:

          The operand of the unary & operator shall be either a function designator, the result of a [] or unary * operator, or an lvalue that designates an object that is not a bit-field and is not declared with the register storage-class specifier.

          (((struct foo*)(void*)0)->bar) in particular is none of those things, and your expression is not legal C.

          Some apparent dereferences of null pointers are allowed. For instance:

          void *a = 0;       /* a null pointer */
          void *b = &(*a);   /* the & and * cancel, so no dereference actually occurs */

          The above is legal not because dereferencing a null pointer is legal, but rather because of an explicit exception to the rule carved out in section 6.5.3.2 of the spec, which says that in this case, the & and * cancel, and "the result is as if both were omitted".

          Your expression is neither safe nor portable. If you do need to check the offset of a field in a structure, use the standard library offsetof() macro -- that's what it's for.
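
          For completeness, the portable version of the offset trick looks like this (plain standard C, nothing kernel-specific assumed):

          #include <stddef.h>
          #include <stdio.h>

          struct foo { char pad[12]; int bar; };

          int main(void)
          {
              /* offsetof() is the sanctioned way to get a field's offset */
              printf("offset of bar: %zu\n", offsetof(struct foo, bar));
              return 0;
          }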

        • Re: (Score:3, Informative)

          by gnasher719 ( 869701 )

          The value &(tun->sk) is the address of tun, plus a fixed offset. The expression &(((struct foo*)0)->bar) is valid C and will give the value of the offset of the sk field in the foo struct. A typical definition of NULL is (void*)0, and &(((struct foo*)(void*)0)->bar) will also give the value of the offset of the bar field.

          Wrong. If tun is a null pointer, then the only valid operations are the following:

          1. Assign tun to a pointer variable of a matching type or of type void*, which will set that variable to a null pointer.
          2. Cast tun to another pointer type, which will produce a null pointer.
          3. Cast tun to an integral type, which will produce the value 0 (and this is true whatever bit pattern the compiler uses for null pointers).
          4. Compare tun to a pointer of a matching type or of type void* using the == or != operators.
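
          As a compilable illustration of most of that list (the struct and variable names below are made up), everything here is well defined except the commented-out dereference:

          #include <stdio.h>

          struct tun_like { int dummy; };

          int main(void)
          {
              struct tun_like *tun = NULL;

              void *v = tun;                     /* 1. assign to a void* variable    */
              char *c = (char *)tun;             /* 2. cast to another pointer type  */
              int is_null = (tun == NULL);       /* 4. compare with == or !=         */

              /* int boom = tun->dummy; */       /* dereference: undefined behaviour */

              printf("%p %p %d\n", v, (void *)c, is_null);
              return 0;
          }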

        • Re:Wait, what? (Score:5, Informative)

          by Anonymous Coward on Saturday July 18, 2009 @01:36PM (#28742431)

          In this case, it is tun->sk, not &(tun->sk) which is being loaded, however the pointer arithmetic which generates the address happens first. If tun is NULL then this is NULL + {the offset of sk}. While dereferencing NULL is explicitly not permitted, pointer arithmetic on NULL is permitted, and dereferencing any non-NULL memory address is permitted.

          Raven, I've seen you make the same comment a few times in this story. Please stop pushing this nonsense.

          The language standard calls * and -> operations "dereferencing". The way it works is that tun->sk dereferences the whole struct, then hands you the sk field from it.

          When you implement this in your compiler, you do an address computation first and then load only the field, because you don't want to load the whole struct when you don't need to; but that's an implementation detail. The compiler is required to act as if the pointer tun were being dereferenced.

          It would be a major missed-optimization bug if the compiler didn't eliminate the later if (!tun) test. This is a case where the input code is simply wrong.
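
          Put differently (pseudo-source with made-up names, not actual GCC output), the optimizer is entitled to treat the original sequence as if it had been written without the test at all:

          struct dev { int flags; };

          /* As written in the kernel-style source: dereference, then test. */
          int poll_as_written(struct dev *p)
          {
              int v = p->flags;
              if (!p)
                  return -1;
              return v;
          }

          /* What the compiler may legally treat it as, since p->flags on the way
             in already implies p != NULL on every path that reaches the test. */
          int poll_as_optimized(struct dev *p)
          {
              return p->flags;
          }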

    • Re: (Score:3, Interesting)

      by Athanasius ( 306480 )

      This ought to be crashing at the sk = tun->sk line, because the structure is smaller than a page, and page 0 is mapped no-access (I assume Linux does this; it's been standard practice in most operating systems for a couple of decades to protect against NULL-pointer dereferencing).

      If you actually read the exploit code (see: http://grsecurity.net/~spender/cheddar_bay.tgz [grsecurity.net]), the thing that really enables this exploit is one of two ways to map page zero. One of these seems to be a flaw with SELinux (either wi…
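
      For the curious, here is a small diagnostic (a sketch, not the exploit itself) that simply asks the kernel whether it will let an unprivileged process map page zero; on a kernel enforcing vm.mmap_min_addr it should fail:

      #define _GNU_SOURCE
      #include <stdio.h>
      #include <string.h>
      #include <errno.h>
      #include <sys/mman.h>

      int main(void)
      {
          /* Ask for one anonymous page at virtual address 0. */
          void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

          if (p == MAP_FAILED)
              printf("mapping page zero refused: %s\n", strerror(errno));
          else
              printf("page zero mapped: a NULL dereference would now read attacker-controlled memory\n");

          return 0;
      }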

  • CFLAGS (Score:4, Informative)

    by Epsillon ( 608775 ) on Saturday July 18, 2009 @07:58AM (#28740063) Journal

    CFLAGS+= -fno-delete-null-pointer-checks

    Job done (should work with Gentoo, buggered if I know how to do this in other distros, DYOR), even with -O2/-O3. This is an optimisation/code conflict. The code itself is perfectly valid, so if your CFLAGS are -O -pipe you have nothing to worry about. GCC's info pages show what is enabled at various optimisation levels. -fdelete-null-pointer-checks is enabled at -O2. Of course, this only applies when you compile your own kernel. If vendors are supplying kernels compiled with -O2 without checking what it does to the code then it is obvious who is to blame.
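
    A sketch of where the flag might go (the exact mechanism varies by distro and kernel version, so treat these as assumptions to verify rather than a recipe):

    # Hand-built kernel: recent kbuild appends KCFLAGS to its own CFLAGS,
    # so the flag can be passed on the make command line:
    make KCFLAGS=-fno-delete-null-pointer-checks

    # Source-based distros (the parent's Gentoo example): append it to the
    # global CFLAGS, e.g. in /etc/portage/make.conf:
    CFLAGS="${CFLAGS} -fno-delete-null-pointer-checks"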

    • Re:CFLAGS (Score:4, Informative)

      by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Saturday July 18, 2009 @08:06AM (#28740097) Homepage

      No. That doesn't fix the problem. All it does is stop the broken optimisation (why the *hell* did someone at gcc think such a thing should be default anyway?)

      You need an -ferror-on-bogus-null-pointer-checks parameter so that the code can be fixed.

      It's an easy error to make. It's the compiler's job to warn you... in this case not only did it fail to throw a warning, it also made the problem worse by 'optimising' it.

      • Re: (Score:3, Informative)

        by QuoteMstr ( 55051 )

        (why the *hell* did someone at gcc think such a thing should be default anyway?)

        Because it makes sense on every modern platform on earth except for strange embedded ones, that's why. This kernel bug is the result of incorrect kernel code, not a GCC bug.
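
        To see why the optimization is worth having at all, here is the kind of case it is aimed at (made-up names; after inlining, the helper's defensive NULL test is provably redundant and can be dropped):

        struct buf { int len; };

        static int len_checked(const struct buf *b)
        {
            if (!b)                 /* defensive check inside a small helper */
                return 0;
            return b->len;
        }

        int total_len(const struct buf *a)
        {
            int t = a->len;         /* the caller already dereferences a...      */
            t += len_checked(a);    /* ...so once inlined, the helper's NULL
                                       test is dead code and can be deleted      */
            return t;
        }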

  • Interesting (Score:3, Funny)

    by improfane ( 855034 ) on Saturday July 18, 2009 @10:43AM (#28741157) Journal

    Guys, I'm trying to decide what to post:

    [ ] Downplay how serious the flaw is
    [ ] Compare to Windows' track record
    [x] Make a meta-reference to Slashdot psychology
    [ ] Post a work-around that doesn't fix the problem
    [ ] Say that the flaw is a feature
    [ ] Bash Windows
      [ ] Claim that not all Windows software is bad
    [ ] Claim that the more popular Linux gets, the more it will be targeted
    [ ] Pretend I understand the problem

    ...or we could RTFA

  • by Animats ( 122034 ) on Saturday July 18, 2009 @11:21AM (#28741425) Homepage

    Isn't someone running a static checker on the Linux kernel? There are commercial tools which will find code that can dereference NULL. However, there aren't free tools that will do this.
