Project Zero Exploits 'Unexploitable' Glibc Bug

NotInHere (3654617) writes with news that Google's Project Zero has been busy at work. A month ago they reported an off-by-one error in glibc that would overwrite a word on the heap with a NUL byte, and were met with skepticism about whether it could actually be used in an attack. To convince the skeptical glibc developers, Project Zero devised an exploit of the out-of-bounds NUL write that gains root access via the setuid binary pkexec. The bug was fixed 44 days after being reported. They even managed to defeat address space randomization on 32-bit platforms by tweaking ulimits; 64-bit systems should remain safe if they are using address space randomization.
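
For context, here is a minimal, hypothetical sketch of the bug class being described: an off-by-one NUL write, where an allocation is sized without room for the terminating zero byte. This is illustrative code only, not the actual glibc function.

    /* Hypothetical illustration of an off-by-one NUL write -- not the
     * actual glibc code.  The buffer is sized for the characters but not
     * for the terminating NUL, so the final '\0' lands one byte past the
     * end of the allocation, in memory the allocator may be using for
     * its own bookkeeping (malloc metadata). */
    #include <stdlib.h>
    #include <string.h>

    char *copy_string(const char *src)
    {
        size_t len = strlen(src);
        char *dst = malloc(len);        /* BUG: should be len + 1 */
        if (dst == NULL)
            return NULL;
        memcpy(dst, src, len);
        dst[len] = '\0';                /* off-by-one: NUL written past the buffer */
        return dst;
    }
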
  • by Anonymous Coward on Tuesday August 26, 2014 @08:07PM (#47761485)

    Embedded stuff would typically use uClibc. Android uses Bionic libc.

    Most ARM might be 32 bit but most ARM doesn't use Glibc.

  • by Anonymous Coward on Tuesday August 26, 2014 @08:40PM (#47761679)

    I read through the thread, and at no point was the bug considered "unexploitable". Even skepticism is too strong a word. The only doubt that was raised was the question "How likely is that?"

  • by Sanians ( 2738917 ) on Tuesday August 26, 2014 @10:46PM (#47762223)

    Meanwhile, sloppy programming in any language results in unintended side effects.

    Yes, but the lack of bounds checking in C is kind of crazy. The compiler now goes out of its way to delete error-checking code simply because it runs into "undefined behavior," but no matter how obvious a bounds violation is, the compiler won't even mention it. Go ahead and try it. Create an array, index it with an immediate value of negative one, and compile. It won't complain at all. ...but God forbid you accidentally write code that depends upon signed overflow to function correctly, because that's something the compiler needs to notice and do something about: namely, it removes your overflow-detection code, because obviously you've memorized the C standard in its entirety, you're infallible, and there's no chance whatsoever that anyone ever thought "undefined behavior" might mean "it'll just do whatever the platform the code was compiled for happens to do" rather than "it can do anything at all, no matter how little sense it makes."
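
    For what it's worth, here is a minimal sketch of the kind of overflow check in question (made-up function names; whether the test survives depends on the compiler and flags). Because signed overflow is undefined behavior, an optimizer is allowed to assume "x + 100 < x" can never be true and silently delete the test.

        /* Sketch of an overflow check that relies on undefined behavior.
         * Since signed overflow is undefined, an optimizing compiler may
         * assume "x + 100 < x" is always false and remove the branch,
         * deleting the error handling the programmer intended. */
        #include <limits.h>
        #include <stdio.h>

        int add_checked(int x)
        {
            if (x + 100 < x) {          /* UB-based check; may be optimized away */
                fprintf(stderr, "overflow detected\n");
                return INT_MAX;
            }
            return x + 100;
        }

        /* A check written without relying on overflow cannot be removed: */
        int add_checked_safe(int x)
        {
            if (x > INT_MAX - 100) {
                fprintf(stderr, "overflow detected\n");
                return INT_MAX;
            }
            return x + 100;
        }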

    Due to just how well GCC optimizes code, bounds checking wouldn't be a huge detriment to program execution speed. In some cases the compiler could verify at compile time that bounds violations will not occur. At other times, it could find more logical ways to check, like if there's a "for (int i = 0; i < some_variable; i++)" used to index an array, the compiler would know that simply checking "some_variable" against the bounds of the array before executing the loop is sufficient. I've looked at the code GCC generates, and optimizations like these are well within its abilities. The end result is that bounds checking wouldn't hinder execution speeds as much as everyone thinks. A compare and a conditional jump isn't a whole lot of code to begin with, and with the compiler determining that a lot of those tests aren't even necessary, it simply wouldn't be a big deal.
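
    As an illustration of that hoisting (hypothetical names, not output from any particular compiler), a single check on "some_variable" before the loop makes a per-iteration bounds test redundant:

        /* Sketch of hoisting a bounds check out of a loop.  Since i only
         * takes values in [0, n), one comparison of n against the array
         * length before the loop covers every index the loop will use. */
        #include <stddef.h>

        #define ARRAY_LEN 256

        int sum_prefix(const int arr[ARRAY_LEN], size_t n)
        {
            if (n > ARRAY_LEN)              /* one check up front...           */
                return -1;

            int sum = 0;
            for (size_t i = 0; i < n; i++)  /* ...so no check needed per index */
                sum += arr[i];
            return sum;
        }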

    ...but let's assume it were. Assume bounds checking would reduce program execution speeds by 10%. How often do you worry about network services you run being exploitable, vs. worrying that they won't execute quickly enough? Personally, I never worry about code not executing quickly enough. I might wish it were faster, but worry? Hell no. On the other hand, I don't even keep an SSH server running, despite how convenient it would be to access my computer when I'm away from home, because I fear it might be exploitable. I'd prefer more secure software, and if I'm then not happy with the speed at which that software executes, I'll just get a faster computer. After all, our software is clearly slower today than it was 20 years ago. I can put DOS on my PC and run the software from that era at incredible speeds, but I don't, because I like the features I get from a modern OS, even if those features mean my software isn't as fast as it could be. Bounds checking to prevent a frequent and often exploitable programming mistake is just another feature, and it's about time we had it.

    ...and like everything else the compiler does, bounds checking could always be a compile-time option. Those obsessed with speed could turn it off, but I'm pretty certain that if the option existed, anyone who even thought about turning it off would quickly decide that doing so would be stupid. Maybe for some non-networked applications that have already been well tested with the option enabled, and where execution speed is a serious factor, it might make sense to turn it off, but when it comes to network services and web browsers and the like, no sane person would ever disable the bounds checking when compiling those applications, because everyone believes security is more important than speed.

  • by NotInHere ( 3654617 ) on Wednesday August 27, 2014 @12:04AM (#47762603)

    I chose the word scepticism, and I still think it fits. I agree that the word "unexploitable" was a bit exaggerated, but that was added by unknown lamer.

    Florian Weimer [sourceware.org] said:

    My assessment is "not exploitable" because it's a NUL byte written into malloc metadata. But Tavis disagrees. He is usually right. And that's why I'm not really sure.

    It's however true that he corrected himself a bit later the same day:

    >> if not maybe the one byte overflow is still exploitable.
    >
    > Hmm. How likely is that? It overflows in to malloc metadata, and the
    > glibc malloc hardening should catch that these days.

    Not necessarily on 32-bit architectures, so I agree with Tavis now, and
    we need a CVE.

  • by NotInHere ( 3654617 ) on Wednesday August 27, 2014 @12:11AM (#47762627)

    I've read a bit through the threads, and I think the reason it took so long is that they decided to remove a feature [openwall.com] to fix the problem:

    I believe the current plan is to completely remove the transliteration
    module support, as it hasn't worked for 10+ years.

    The git commit message says the same. That function really did have some problems: https://sourceware.org/ml/libc... [sourceware.org]

  • by Dutch Gun ( 899105 ) on Wednesday August 27, 2014 @02:36AM (#47763071)

    Personally, I never worry about code not executing quickly enough.

    You know, people say stuff like that all the time, but all it proves is that you're not a programmer who develops speed-critical applications. Guess what? There are lots of people who are. Game programmers (me). Simulation programmers. OS / kernel developers. There are some situations where fast is never fast enough. You're thinking like a desktop developer who writes business applications that aren't that demanding of the CPU. Get a faster processor? I wish! That's not possible for console developers, or when you're running software in data centers with thousands of machines. Those are real problems, and they require highly optimized code, not more hardware. Most programmers have no idea how much the constant push for efficiency colors everything we do.

    Just the other day I was looking at a test case where a complicated pathfinding scenario pegs my 8-core CPU when a lot of units are on-screen at once. That's not some theoretical problem, and telling users they need some uber-machine to play the game is a non-starter. I either need to ensure my game design avoids those scenarios, or I'll need to further optimize the pathfinding system to allow for more units in the game.

    That being said, I agree with your complaint about C's fundamental insecurity, but it's not as simple as adding a compiler switch. For the most common and checkable types of bounds problems, and for library functions that can cause problems, Microsoft's C/C++ compiler already does what you've suggested to a degree (I'm not as certain about GCC). The big problem with bounds checking in C is that arrays are simply pointers to memory. The compiler doesn't always know how big that region is, because there's no type or size information associated with the pointer. It's possible to do bounds checking in some cases, but not in many others. It's a fundamental difficulty with the language, and it's impossible for the compiler to check all those bounds without help from the language or the programmer.
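
    A small sketch of that difficulty (hypothetical code): once the array is passed to another function it decays to a bare pointer, so the callee, where the out-of-bounds write actually happens, has no size information to check against unless the caller passes it separately.

        /* Sketch of why general bounds checking is hard in C.  Inside
         * fill(), "buf" is just a pointer; its declared size is gone, so
         * the compiler has nothing to check the index against. */
        #include <stddef.h>

        static void fill(char *buf, size_t n)
        {
            for (size_t i = 0; i <= n; i++)   /* off-by-one: writes n + 1 bytes */
                buf[i] = 'A';
        }

        int main(void)
        {
            char local[16];
            fill(local, sizeof local);  /* overflow happens inside fill(), where
                                           the buffer's real size is unknown */
            return 0;
        }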
