Security Bug

Project Zero Exploits 'Unexploitable' Glibc Bug

NotInHere (3654617) writes with news that Google's Project Zero has been busy at work. A month ago they reported an off-by-one error in glibc that would overwrite a single word on the heap with a NUL byte, and were met with skepticism about whether it could be used in an attack. To convince the skeptical glibc developers, Project Zero devised an exploit of the out-of-bounds NUL write that gains root access through the setuid binary pkexec. 44 days after being reported, the bug has been fixed. They even managed to defeat address space randomization on 32-bit platforms by tweaking ulimits. 64-bit systems should remain safe if they are using address space randomization.
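To make the bug class concrete, here is a minimal sketch (illustrative function and names, not the actual glibc code) of an off-by-one that drops a NUL byte one position past a heap allocation:

    #include <stdlib.h>
    #include <string.h>

    /* Sketch of the off-by-one NUL-write bug class: the buffer has
     * room for the characters but not the terminator, so the final
     * '\0' lands one byte past the allocation, on whatever the heap
     * allocator placed next. */
    char *dup_without_room_for_nul(const char *s)
    {
        size_t len = strlen(s);
        char *buf = malloc(len);  /* bug: should be malloc(len + 1) */
        if (buf == NULL)
            return NULL;
        memcpy(buf, s, len);
        buf[len] = '\0';          /* off-by-one: writes past the block */
        return buf;
    }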
  • by Anonymous Coward on Tuesday August 26, 2014 @06:53PM (#47761417)

    Never say never.

    Unexploitable? Srsly? GAC.

    An acquaintance recently posted "Six Stages of Debugging" on his g+ page. (1. That can't happen, 2. That doesn't happen on my machine, 3. That shouldn't happen, 4. Why does that happen? 5. Oh, I see, and 6. How did that ever work). Doesn't an software dev who has been working for more than about three years go straight to No. 4?

    The things they don't teach you in a CS degree.

    • by Narcocide ( 102829 ) on Tuesday August 26, 2014 @07:05PM (#47761481) Homepage

      This is seriously shit your CS 100 or 200-level teacher SHOULD have taught you, if you got a CS degree. I think it may depend largely upon where/when you got your degree though. They're only all the same on paper.

      • by jopsen ( 885607 )

        This is seriously shit your CS 100 or 200-level teacher SHOULD have taught you, if you got a CS degree.

        A CS professor shouldn't teach you to "never say never"... just ask for a formal proof :)
        Especially if you're claiming that P != NP or the like...

    • by grahamsaa ( 1287732 ) on Tuesday August 26, 2014 @07:06PM (#47761483)
      No. While it depends on your end users (end users of some products / libraries / etc are very technical, while other products draw from a much larger, less technical user base), a non-trivial number of bug reports are due to user error, or to something that you don't actually have any control over. Skipping stage 1 probably makes sense in all cases, but the rest of the stages are all valid. Sometimes you never get past stage 2 because the answer is "oh, right, because my machine isn't infected with something" or "because I didn't mis-configure the application".
    • by katterjohn ( 726348 ) on Tuesday August 26, 2014 @08:23PM (#47761879)

      While I don't feel buffer overflows are something to ignore, from what I see the developer never actually said "unexploitable."

      From the "skeptical glibc developer" link:

      > if not maybe the one byte overflow is still exploitable.

      Hmm. How likely is that? It overflows in to malloc metadata, and the
      glibc malloc hardening should catch that these days.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      The things they don't teach you in a CS degree.
      Actually they *do* teach you that in a CS degree, and also how to fix it. FTFY. Also, they don't put the word 'an' before a word beginning with a consonant.

      • by Anonymous Coward

        Also, they don't put the word 'an' before a word beginning with a consonant.

        Not even if the word is "hour"?

    • An acquaintance recently posted "Six Stages of Debugging" on his g+ page. (1. That can't happen, 2. That doesn't happen on my machine, 3. That shouldn't happen, 4. Why does that happen? 5. Oh, I see, and 6. How did that ever work). Doesn't an software dev who has been working for more than about three years go straight to No. 4?

      Absolutely true for debugging. But there are a few steps you missed.

      Somewhere near 3-4: Ok, how bad would it be if that happened? Does it recover without user intervention (i.e. service crashes and cron restarts it)? Does it recover with user intervention ("did you turn it off and back on?")? Does it lose user data (oh poop)?

      The question here (which is altogether not trivial) is exactly this: "how bad would it be if we wrote an extra '\0' somewhere"? And what geohot did was answer that in the most productive way possible.

  • by Anonymous Coward on Tuesday August 26, 2014 @07:40PM (#47761679)

    I read through the thread and at no point was the bug considered "Unexploitable". Even skepticism is too strong a word to use. The only doubt that was raised was asking "How likely is that?"

    • by NotInHere ( 3654617 ) on Tuesday August 26, 2014 @11:04PM (#47762603)

      I chose the word skepticism, and I still think it fits. I agree that the word "unexploitable" was a bit exaggerated, but that was added by unknown lamer.

      Florian Weimer [sourceware.org] said:

      My assessment is "not exploitable" because it's a NUL byte written into malloc metadata. But Tavis disagrees. He is usually right. And that's why I'm not really sure.

      It's true, however, that he corrected himself a bit later the same day:

      >> if not maybe the one byte overflow is still exploitable.
      >
      > Hmm. How likely is that? It overflows in to malloc metadata, and the
      > glibc malloc hardening should catch that these days.

      Not necessarily on 32-bit architectures, so I agree with Tavis now, and
      we need a CVE.
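      To make the "overflows into malloc metadata" point concrete, here is a small, hedged sketch. The chunk layout is simplified and allocator-specific, and the overflowing write is deliberately left commented out; modern glibc hardening may abort rather than silently accept it:

          #include <stdint.h>
          #include <stdio.h>
          #include <stdlib.h>

          int main(void)
          {
              char *a = malloc(24);
              char *b = malloc(24);
              if (a == NULL || b == NULL)
                  return 1;
              /* Heap metadata for b's chunk typically sits in the bytes
               * just past a's usable space, so a NUL written at a[24]
               * could zero part of it. */
              printf("a = %p, b = %p, distance = %lu bytes\n",
                     (void *)a, (void *)b,
                     (unsigned long)((uintptr_t)b - (uintptr_t)a));
              /* a[24] = '\0'; */
              free(b);
              free(a);
              return 0;
          }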

      • by Cramer ( 69040 )

        And to be perfectly fair, the issue hinges on glibc's completely idiotic insistence on free()ing everything at exit() instead of just f'ing exiting. The kernel knows exactly what to return to the free pool and does not depend on, or require, the application to return the memory it requested.
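        A minimal sketch of the design point being argued here, assuming POSIX _exit(); the function names are illustrative:

            #include <stdlib.h>
            #include <unistd.h>

            void shutdown_fast(void)
            {
                /* Skip libc cleanup and atexit() handlers entirely; the
                 * kernel reclaims every page the process mapped, so no
                 * per-block free() is needed for correctness. */
                _exit(0);
            }

            void shutdown_tidy(void)
            {
                /* Runs atexit() handlers and stdio flushing, and lets
                 * any registered cleanup walk the heap free()ing block
                 * by block -- which also means touching possibly
                 * corrupted heap metadata. */
                exit(0);
            }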

  • by Anonymous Coward on Tuesday August 26, 2014 @09:09PM (#47762061)

    Reminds me of this overflow bug [seclists.org] which was fixed in sudo 1.6.3p6. It writes a single NUL byte past the end of a buffer, calls syslog(), and then restores the original overwritten byte. Seems unexploitable, right?

    Wrong. Here's the detailed writeup [phrack.org] of the exploit. It requires some jiggering with the parameters to get the exploit to work on a particular system, but you don't need a local root exploit to work every time; you just need it to work once, and then you own the system.
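    The pattern described there, reconstructed as a hedged sketch (names are illustrative, not sudo's actual code):

        #include <stddef.h>
        #include <syslog.h>

        void log_truncated(char *buf, size_t buflen)
        {
            char saved = buf[buflen];      /* byte just past the buffer */
            buf[buflen] = '\0';            /* the one-byte NUL overflow */
            syslog(LOG_ALERT, "%s", buf);  /* heap code may run while the
                                              neighboring byte is zeroed */
            buf[buflen] = saved;           /* restore -- damage already done */
        }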

  • One where a slight slip anywhere in millions of lines of code can produce random memory corruption with unpredictable consequences. Who would have believed that anybody would even dream of using a language with constructs such as ptr++? And we are surprised to find bugs...

    • What high-level language does not depend on the C standard library and so would be suitable for implementing the C standard library?
    • by Yunzil ( 181064 )

      Yeah, they should have just invented Python in 1950.

  • by Animats ( 122034 ) on Wednesday August 27, 2014 @01:50AM (#47763103) Homepage

    64-bit systems should remain safe if they are using address space randomization.

    Nah. It just takes more crashes before the exploit achieves penetration.

    (Address space randomization is a terrible idea. It's a desperation measure and an excuse for not fixing problems. In exchange for making penetration slightly harder, you give up repeatable crash bug behavior.)

    • 1) If you make exploitation less likely than an asteroid hitting the earth, then for all practical purposes you can say that it is prevented.
      2) 'Repeatable crash bug behavior' doesn't matter: it will be repeatable if it is run in valgrind/address sanitizer or via a debugger, which is really all that matters to a developer. An end user couldn't care less about repeatable crashes and would prefer it if the program occasionally/usually continued running.
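      A quick way to observe address space randomization directly (a sketch assuming Linux and a position-independent build): run it twice and compare the addresses, then run it with randomization disabled, e.g. via setarch $(uname -m) -R ./a.out:

          #include <stdio.h>
          #include <stdlib.h>

          int main(void)
          {
              int stack_var;
              static int data_var;
              void *heap_var = malloc(1);

              /* With ASLR on, these differ from run to run; with it
               * off, they repeat. */
              printf("stack %p  data %p  heap %p  code %p\n",
                     (void *)&stack_var, (void *)&data_var,
                     heap_var, (void *)main);
              free(heap_var);
              return 0;
          }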

    • Yes, ASLR somewhat works but is an afterthought. The ultimate solution would be to stop using computers which mix data and code adjacently, in other words get rid of the whole von Neumann computer architecture.
      • by tlhIngan ( 30335 )

        Yes, ASLR somewhat works but is an afterthought. The ultimate solution would be to stop using computers which mix data and code adjacently, in other words get rid of the whole von Neumann computer architecture.

        There are plenty of processors that are Harvard architecture out there (separate data/instruction memory). Though modern architectures do have a bit of Harvard in them (the separated instruction and data caches). And memory segmentation and permissions do help split code and data into separate areas.

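        For the segmentation-and-permissions point, a minimal POSIX sketch (error handling trimmed): a page mapped writable but not executable holds injected bytes as data, and jumping into it would fault:

            #include <stddef.h>
            #include <sys/mman.h>
            #include <unistd.h>

            int main(void)
            {
                size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
                unsigned char *page = mmap(NULL, pagesz,
                                           PROT_READ | PROT_WRITE, /* no PROT_EXEC */
                                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (page == MAP_FAILED)
                    return 1;
                page[0] = 0xC3; /* an x86 'ret' byte: writable as data... */
                /* ((void (*)(void))page)();  <- ...but calling it would fault */
                munmap(page, pagesz);
                return 0;
            }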

  • Sorry, old Unix guy here. My first reaction was "What the F is pkexec and why is it running setuid?"

    Yet another way to execute arbitrary privileged executables is yet another potential security hole. This dumb thing is apparently part of the "Free Desktop" but it's depended on by all kinds of stuff including the fricking RedHat power management. What's wrong with plain old sudo?
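    A small helper in the spirit of this complaint, sketched with plain stat() (the paths are examples; nothing here is pkexec's or sudo's code), to check which binaries carry the setuid bit:

        #include <stdio.h>
        #include <sys/stat.h>

        static void report(const char *path)
        {
            struct stat st;
            if (stat(path, &st) != 0) {
                printf("%-20s (not found)\n", path);
                return;
            }
            printf("%-20s setuid=%s owner uid=%d\n", path,
                   (st.st_mode & S_ISUID) ? "yes" : "no", (int)st.st_uid);
        }

        int main(void)
        {
            report("/usr/bin/pkexec");
            report("/usr/bin/sudo");
            return 0;
        }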

  • Don't make excuses for not fixing bugs. When you find bugs, fix them.

    All software is buggy by definition because the entire stack from the moving charge carriers to the behavior of the person using the computer cannot be mathematically proven to be correct.

    No matter what measures you as the hardware or software creator take, there will be bugs.

    Don't make people angry at you or ridicule their bug reports because that's a major incentive for them to make you look foolish.
