Security Bug

Project Zero Exploits 'Unexploitable' Glibc Bug

NotInHere (3654617) writes with news that Google's Project Zero has been hard at work. A month ago they reported an off-by-one error in glibc that would overwrite a word on the heap with NUL, and were met with skepticism that it could be used in an attack. To convince the skeptical glibc developers, Project Zero devised an exploit of the out-of-bounds NUL write that gains root access via the setuid binary pkexec. 44 days after being reported, the bug has been fixed. They even managed to defeat address space randomization on 32-bit platforms by tweaking ulimits; 64-bit systems should remain safe if they are using address space randomization.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by grahamsaa ( 1287732 ) on Tuesday August 26, 2014 @08:06PM (#47761483)
    No. While it depends on your end users (end users of some products / libraries / etc are very technical, while other products draw from a much larger, less technical user base), a non-trivial number of bug reports are due to user error, or to something that you don't actually have any control over. Skipping stage 1 probably makes sense in all cases, but the rest of the stages are all valid. Sometimes you never get past stage 2 because the answer is "oh, right, because my machine isn't infected with something" or "because I didn't mis-configure the application".
  • by katterjohn ( 726348 ) on Tuesday August 26, 2014 @09:23PM (#47761879)

    While I don't feel buffer overflows are something to ignore, from what I see, the developer never actually said "unexploitable."

    From the "skeptical glibc developer" link:

    > if not maybe the one byte overflow is still exploitable.

    Hmm. How likely is that? It overflows in to malloc metadata, and the
    glibc malloc hardening should catch that these days.
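    The pattern under discussion is the classic off-by-one NUL write. A minimal illustration (hypothetical code, not the actual glibc bug): the terminating NUL of a string copy lands one byte past the allocation, typically into adjacent heap metadata.

    ```c
    #include <stdlib.h>
    #include <string.h>

    /* BUG: allocates strlen(s) bytes, but strcpy writes strlen(s)+1,
     * so the trailing NUL overwrites one byte past the buffer. */
    char *dup_buggy(const char *s) {
        char *p = malloc(strlen(s));
        if (p) strcpy(p, s);
        return p;
    }

    /* Correct: one extra byte reserved for the trailing NUL. */
    char *dup_fixed(const char *s) {
        char *p = malloc(strlen(s) + 1);
        if (p) strcpy(p, s);
        return p;
    }
    ```

    Whether glibc's malloc hardening catches the stray byte depends on which chunk field it lands in, which is exactly the question the developer raises above.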

  • by Anonymous Coward on Tuesday August 26, 2014 @10:09PM (#47762061)

    Reminds me of this overflow bug [seclists.org], which was fixed in sudo 1.6.3p6. It writes a single NUL byte past the end of a buffer, calls syslog(), and then restores the original overwritten byte. Seems unexploitable, right?

    Wrong. Here's the detailed writeup [phrack.org] of the exploit. It requires some jiggering with the parameters to get the exploit to work on a particular system, but a local root exploit doesn't need to work every time; it only needs to work once, and then you own the system.
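    A sketch of the pattern described above (names hypothetical, not sudo's actual code): NUL-terminate a word in place so syslog() can print it, then restore the saved byte. When word_len equals the buffer size, the save, the NUL write, and the restore all touch memory one byte past the buffer, and nothing the allocator did with that corrupted byte in between is undone by the restore.

    ```c
    #include <string.h>
    #include <syslog.h>

    static void log_word(char *buf, size_t word_len) {
        char saved = buf[word_len];   /* off-by-one read if word_len == buffer size */
        buf[word_len] = '\0';         /* off-by-one NUL write */
        syslog(LOG_INFO, "%s", buf);
        buf[word_len] = saved;        /* restore comes too late to help */
    }
    ```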

  • by dutchwhizzman ( 817898 ) on Wednesday August 27, 2014 @01:20AM (#47762853)

    While you have a point, you shouldn't forget the Raspberry Pi. It is probably the most popular internet-facing non-mobile ARM platform today. Literally millions of these run glibc, and at least hundreds of thousands are in some form directly connected to the internet. While I don't believe this bug can be exploited without first gaining RCE on the Raspberry Pi, once an attacker gets access to it, this bug should let them escalate to root privileges.

    There are quite a few people who put a full Debian (or other) distribution on their NAS. I own a ZyXEL NSA325, and it is possible to install a full Debian release on it and some other NAS boxes. These may be a limited number of systems overall, but they're significant enough to deserve mention because they, too, are often internet facing.

  • by Animats ( 122034 ) on Wednesday August 27, 2014 @02:50AM (#47763103) Homepage

    64-bit systems should remain safe if they are using address space randomization.

    Nah. It just takes more crashes before the exploit achieves penetration.

    (Address space randomization is a terrible idea. It's a desperation measure and an excuse for not fixing problems. In exchange for making penetration slightly harder, you give up repeatable crash bug behavior.)
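    On the brute-force point: if the randomized base carries n bits of entropy, each guess succeeds with probability 2^-n, so on average it takes about 2^n crashing attempts before one lands. A back-of-the-envelope sketch (the ulimit trick from the article works by shrinking the usable entropy on 32-bit systems):

    ```c
    #include <stdint.h>

    /* Expected brute-force attempts against ASLR with `bits` bits of
     * entropy: each try hits with probability 2^-bits, so the mean of
     * the geometric distribution is 2^bits tries. Illustrative
     * arithmetic only. */
    uint64_t expected_tries(unsigned bits) {
        return (uint64_t)1 << bits;
    }
    ```

    Eight bits of residual entropy means a few hundred tries; 28 bits means hundreds of millions, which is why 64-bit layouts raise the cost so much, even if they don't eliminate the attack.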

  • by Sanians ( 2738917 ) on Wednesday August 27, 2014 @03:14AM (#47763155)

    Your idea only works if bound sizes are defined at compile time which is hardly going to be even a majority of cases.

    Use your imagination...

    I was imagining a special type of pointer, but one compatible with ordinary pointers, kind of like how C99 added the "complex" data type for complex numbers while still letting you assign to them from ordinary non-complex numbers. A future version of C could add a type of pointer that includes a limit, and a future version of malloc() could return this new type of pointer. For compatibility, the compiler could simply downgrade it to an ordinary pointer whenever it is assigned to one, so that old code continues to work with the new malloc() return value and new code can still call old code that only accepts ordinary pointers. Of course, we won't call them "new" and "ordinary"; we'll call them "safe" and "dangerous" when, after several years, we grow tired of hearing of yet another buffer overflow exploit discovered in some old code that hasn't yet been updated to use the new type of pointer.

    ...or I'm sure there's many other possibilities. This isn't an impossible thing to do.
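The bounded-pointer idea from the comment above can be sketched today as a plain struct (all names hypothetical): a pointer that carries its own limit, so the off-by-one store is rejected instead of corrupting heap metadata.

```c
#include <stdlib.h>
#include <stdbool.h>

/* A "fat pointer": the allocation's base address plus its length. */
typedef struct {
    char  *ptr;
    size_t len;
} bounded_ptr;

/* Allocate n bytes and record the bound alongside the pointer. */
bounded_ptr bmalloc(size_t n) {
    bounded_ptr b = { malloc(n), n };
    return b;
}

/* Checked store: returns false for any out-of-bounds index. */
bool bstore(bounded_ptr b, size_t i, char c) {
    if (b.ptr == NULL || i >= b.len) return false;
    b.ptr[i] = c;
    return true;
}
```

With this shape, the glibc-style off-by-one becomes a recoverable error (`bstore(b, b.len, '\0')` returns false) rather than a silent heap corruption, at the cost of passing two words instead of one.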

They are relatively good but absolutely terrible. -- Alan Kay, commenting on Apollos
