Project Zero Exploits 'Unexploitable' Glibc Bug
NotInHere (3654617) writes with news that Google's Project Zero has been busy at work. A month ago they reported an off-by-one error in glibc that would overwrite a word on the heap with NUL, and were met with skepticism about whether it could be used in an attack. To convince the skeptical glibc developers, Project Zero devised an exploit of the out-of-bounds NUL write that gains root access via the setuid binary pkexec. 44 days after being reported, the bug has been fixed.
They even managed to defeat address space randomization on 32-bit platforms by tweaking ulimits. 64-bit systems should remain safe if they are using address space randomization.
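The class of bug at issue can be sketched in a few lines. This is a generic illustration, not the actual glibc code: the function name and the forgot-the-terminator allocation mistake are invented here to show how a '\0' can land one byte past an allocation, often in heap metadata or an adjacent object.

```c
#include <stdlib.h>
#include <string.h>

/* The fixed pattern: allocate len + 1 so the terminating '\0'
 * stays inside the buffer. The buggy variant -- malloc(len)
 * followed by buf[len] = '\0' -- writes the NUL one byte past
 * the allocation, into whatever the allocator placed next. */
char *copy_with_nul(const char *s) {
    size_t len = strlen(s);
    char *buf = malloc(len + 1);   /* not malloc(len): off by one */
    if (!buf) return NULL;
    memcpy(buf, s, len);
    buf[len] = '\0';               /* in bounds only because of +1 */
    return buf;
}
```

A single out-of-bounds zero byte looks harmless, which is exactly why such bugs get dismissed; the thread below is about what an attacker can build on top of one.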
Re: (Score:2, Funny)
CAN YOU HEAR ME NOW?? HELLO?
Re: microsofties here is your chance to party (Score:5, Insightful)
Actually, I find the arrogance of calling an obvious bug "unexploitable" disturbing.
Most ARM is 32 bit...
Re: (Score:1, Insightful)
The word you're looking for is 'skeptical', and then they went and fixed it when they were proven wrong. This is actually the opposite of arrogant.
Re: (Score:2, Insightful)
The first part is arrogance. The second part is pragmatic humility.
Re: (Score:2)
The first part is also pragmatic. Releasing a security fix is a lot of work, not just for the developers but also for everybody else. So you only do that if you have reasonable suspicion that the bug is a security risk. There were good reasons to believe that was not the case here, although in the end they did not apply in every situation.
If you treat every bug as a security issue, you end up with the Google situation where only one version, the latest, is ever supported. And for libc that is not an acceptable option.
not the same thing (Score:3)
The first part is also pragmatic. Releasing a security fix is a lot of work, not just for the developers but also for everybody else. So you only do that if you have reasonable suspicion that the bug is a security risk. There were good reasons to believe that was not the case here, although in the end they did not apply in every situation.
If you treat every bug as a security issue, you end up with the Google situation where only one version, the latest, is ever supported. And for libc that is not an acceptable option.
It is one thing to say we will not fix it right now because of the cost and the unlikelihood of seeing this in the wild. It is quite another to call it unexploitable. The former is pragmatism. The latter is hubris.
Re: (Score:1)
You're right, I unfairly used summary words as quotes.
Re: microsofties here is your chance to party (Score:4, Insightful)
The word you're looking for is 'skeptical', and then they went and fixed it when they were proven wrong. This is actually the opposite of arrogant.
They should have fixed the bug as soon as they realized it was there, and not waited until someone proved it was an especially bad bug.
Re: (Score:2)
It's an oldschool attitude to not touch things, from back in the day where software was so flaky that chances were someone had already 'exploited' the bug to do something non-malicious.
It drives me fucking crazy, having been born pretty much into the internet age where the corrected answer can be available in *seconds*. It's pretty obvious from the description what the bug is, so saying you aren't going to fix it is, as you say, pure laziness.
Re: (Score:3)
It's an oldschool attitude to not touch things, from back in the day where software was so flaky that chances were someone had already 'exploited' the bug to do something non-malicious.
Ah, that actually makes sense, good analysis.
It's pretty obvious from the description what the bug is, so saying you aren't going to fix it is, as you say, pure laziness.
This sort of thing worries me about glibc, and the attitude that 'bugs are no big deal' is a dangerous one that is infecting software developers all over.
Re: (Score:2)
You may be correct but there's a couple of differences - firstly the processor designs are so incredibly complex now (Intel recently issued a 'microcode patch' that actually disabled some instructions on a certain batch of CPUs) that they're all optimised by computer, so it's unlikely that there's much leftover unused functionality. That brings me to the second point, in that whatever 'undocumented' behaviour is available is unlikely to be as useful as e.g. a deprecated opcode on a ZX80. Moreover, you don't
Re: (Score:2)
It's an oldschool attitude to not touch things
It's called engineering.
, from back in the day where software was so flaky that chances were someone had already 'exploited' the bug to do something non-malicious.
It drives me fucking crazy, having been born pretty much into the internet age where the corrected answer can be available in *seconds*.
Just because we are in the era of the Interweebz, that does not mean everything is a web app whose solutions can be put together in seconds. Especially something like a compiler, a shared library or an embedded system. You have to think of regression testing and crap like that, the backlog of issues that are begging to be fixed, etc, etc, etc. As a result, you do not touch things unless you truly need to, in a controlled manner.
If it is a web-based system with limited visibility, yeah, s
Re: (Score:2)
If you're talking about 'hold your head this way, right click on your keyboard then unplug your RAM == crash' then yes a change might be something you weigh up.
When you're looking at the code and you see *'this is logically incorrect'* then you fix it immediately. If you're smart you also create some unit tests _proving_ that it was incorrect before and is now correct.
Fuck everyone else who wants to reformat the headers of this part of the project and has it all checked out, fuck people who bitch about 'sta
effort, priority and severity. (Score:2)
The word you're looking for is 'skeptical', and then they went and fixed it when they were proven wrong. This is actually the opposite of arrogant.
They should have fixed the bug as soon as they realized it was there, and not waited until someone proved it was an especially bad bug.
Hmmmm, not really. You fix bugs according to the cost of fixing them, which includes regression testing to ensure you do not break something else with your fix (effort), the likelihood of the bug manifesting itself in the wild (priority), and the ramifications when the bug manifests itself (severity).
More systems have been broken by people "fixing" things without doing the proper analysis than by actually looking at the backlog and deciding what shall be fixed (fixed in this release), what will be fixed (fixed in
Re: (Score:2)
Furthermore, if regression tests are important (and they are), they need a suite of automated tests so those things aren't all being done manually.
Finally, it's not like the glibc team traditionally avoids breaking things.
Re: microsofties here is your chance to party (Score:5, Informative)
Embedded stuff would typically use uClibc. Android uses Bionic libc.
Most ARM might be 32 bit but most ARM doesn't use Glibc.
Raspberry Pi, obscure NAS boxes (Score:5, Interesting)
While you have a point, you shouldn't forget the Raspberry Pi. It is probably the most popular internet-facing non-mobile ARM platform today. Literally millions of these run glibc, and at least hundreds of thousands are in some way or form directly connected to the internet. While I don't believe that this bug can be exploited without first gaining RCE on the Raspberry Pi, once an attacker gets access, this bug should let them escalate to root privileges.
There are quite a few people who put a full Debian (or other) distribution on their NAS server. I own a Zyxel NSA 325, and it is possible to install a full Debian release on this and some other NAS boxes. These might be a limited number of systems overall, but it's significant enough to deserve mentioning because they, too, are often internet facing.
And (some, rare) phones. (Score:1)
And 3 of my phones.
N900, N9 and Jolla all use glibc.
Re: (Score:2)
Was the glibc boffin who said it looked unexploitable just expressing a casual opinion, or was he actually trying to wriggle out of fixing it? If the former, then it's not very interesting. If the latter, then yeah, it's a problem.
Re: (Score:1)
I think that's the definition of the difference between being "paranoid" and being "observant."
Re:microsofties here is your chance to party (Score:5, Insightful)
No.
Off-by-ones are much easier to fix than to prove safe. The number of bugs called "unexploitable" until an exploit was provided is staggering. No mildly security-aware person will avoid fixing a buffer overflow because it is "unexploitable."
Shachar
Honestly, when will people learn? (Score:5, Insightful)
Never say never.
Unexploitable? Srsly? GAC.
An acquaintance recently posted "Six Stages of Debugging" on his g+ page. (1. That can't happen, 2. That doesn't happen on my machine, 3. That shouldn't happen, 4. Why does that happen? 5. Oh, I see, and 6. How did that ever work). Doesn't an software dev who has been working for more than about three years go straight to No. 4?
The things they don't teach you in a CS degree.
Re:Honestly, when will people learn? (Score:4, Insightful)
This is seriously shit your CS 100 or 200-level teacher SHOULD have taught you, if you got a CS degree. I think it may depend largely upon where/when you got your degree though. They're only all the same on paper.
Re: (Score:2)
This is seriously shit your CS 100 or 200-level teacher SHOULD have taught you, if you got a CS degree.
A CS professor shouldn't teach you to "never say never"... just ask for a formal proof :)
Especially, if you're claiming that P != NP or the like...
Re:Honestly, when will people learn? (Score:4, Insightful)
Sure, which is why you have proper logging that allows you to point them in the right direction. At least a few times a year, I have to advise users to get in touch with their IT department to fix their corrupted Arial font file or some other such nonsense since it's causing problems for our app (and probably a number of others as well). Where the fault lies is a tangential discussion, however. What matters here is that Step 2 is actually valuable at times, since it can assist you in answering #4 by narrowing down the possible causes.
Re:Honestly, when will people learn? (Score:4, Interesting)
While I don't feel buffer overflows are something to ignore, from what I see the developer never actually said "unexploitable."
From the "skeptical glibc developer" link:
> if not maybe the one byte overflow is still exploitable.
Hmm. How likely is that? It overflows in to malloc metadata, and the
glibc malloc hardening should catch that these days.
Re: (Score:2, Insightful)
The things they don't teach you in a CS degree.
Actually they *do* teach you that in a CS degree, and also how to fix it. FTFY. Also, they don't put the word 'an' before a word beginning with a consonant.
Re: (Score:1)
Also, they don't put the word 'an' before a word beginning with a consonant.
Not even if the word is "hour"?
Re: (Score:2)
An acquaintance recently posted "Six Stages of Debugging" on his g+ page. (1. That can't happen, 2. That doesn't happen on my machine, 3. That shouldn't happen, 4. Why does that happen? 5. Oh, I see, and 6. How did that ever work). Doesn't an software dev who has been working for more than about three years go straight to No. 4?
Absolutely true for debugging. But there's a few steps you missed.
Somewhere near 3-4: OK, how bad would it be if that happened? Does it recover without user intervention (i.e. the service crashes and cron restarts it)? Does it recover with user intervention ("did you turn it off and back on?")? Does it lose user data (oh poop)?
The question here (which is altogether not trivial) is exactly this: "how bad would it be if we wrote an extra '\0' somewhere"? And what geohot did was answer that in the most productive w
C Needs Bounds Checking (Score:5, Informative)
Meanwhile, sloppy programming in any language results in unintended side effects.
Yes, but the lack of bounds checking in C is kind of crazy. The compiler now goes out of its way to delete error-checking code simply because it runs into "undefined behavior," but no matter how obvious a bounds violation is, the compiler won't even mention it. Go ahead and try it: create an array, index it with an immediate value of negative one, and compile. It won't complain at all. ...but god forbid you accidentally write code that depends upon signed overflow to function correctly, because that's something the compiler needs to notice and do something about, namely remove your overflow-detection code. Obviously you've memorized the C standards in their entirety and you're infallible, and there's no chance whatsoever that anyone ever thought "undefined behavior" might mean "it'll just do whatever the platform the code was compiled for happens to do" rather than "it can do anything at all, no matter how little sense it makes."
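To make the deleted-check complaint concrete: a guard written as `if (x + 1 < x)` only detects overflow by committing the very signed overflow the standard leaves undefined, so an optimizer may legally delete it. A sketch of an equivalent check that never performs the overflow (the function name is mine, not from the thread):

```c
#include <limits.h>

/* A portable overflow check the optimizer cannot legally delete:
 * compare against INT_MAX before adding, instead of testing
 * "x + 1 < x", which relies on signed overflow -- undefined
 * behavior the compiler is entitled to assume never happens
 * (and so may remove the test entirely at -O2). */
int safe_increment(int x, int *out) {
    if (x == INT_MAX)        /* no overflow is ever performed */
        return 0;            /* would overflow: refuse */
    *out = x + 1;
    return 1;
}
```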
Due to just how well GCC optimizes code, bounds checking wouldn't be a huge detriment to program execution speed. In some cases the compiler could verify at compile time that bounds violations will not occur. At other times, it could find more logical ways to check, like if there's a "for (int i = 0; i < some_variable; i++)" used to index an array, the compiler would know that simply checking "some_variable" against the bounds of the array before executing the loop is sufficient. I've looked at the code GCC generates, and optimizations like these are well within its abilities. The end result is that bounds checking wouldn't hinder execution speeds as much as everyone thinks. A compare and a conditional jump isn't a whole lot of code to begin with, and with the compiler determining that a lot of those tests aren't even necessary, it simply wouldn't be a big deal.
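The hoisting idea above can be written out by hand; a compiler inserting automatic bounds checks could perform the same transformation. The function and the fixed size `N` here are illustrative, not anything GCC actually emits:

```c
#include <stddef.h>

#define N 16

/* Single hoisted check: instead of testing every arr[i] access,
 * verify once that the loop bound fits the array, then run the
 * loop unchecked -- every index is provably in bounds after the
 * guard. Returns -1 on a would-be violation (fine for a sketch;
 * real code would need an out-of-band error signal). */
int sum_first(const int arr[N], size_t some_variable) {
    if (some_variable > N)       /* one check, outside the loop */
        return -1;
    int total = 0;
    for (size_t i = 0; i < some_variable; i++)
        total += arr[i];         /* no per-iteration check needed */
    return total;
}
```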
Re: (Score:1)
like if there's a "for (int i = 0; i < some_variable; i++)" used to index an array
How is the compiler going to know the size of a runtime-allocated array? Your idea only works if bound sizes are defined at compile time which is hardly going to be even a majority of cases.
Re:C Needs Bounds Checking (Score:4, Interesting)
Your idea only works if bound sizes are defined at compile time which is hardly going to be even a majority of cases.
Use your imagination...
I was imagining a special type of pointer, but one compatible with ordinary pointers. Kind of like how C99 added the "complex" data type for complex numbers, but you can assign to them from ordinary non-complex numbers. A future version of C could add a type of pointer that includes a limit, and a future version of malloc() could return this new type of pointer, and for compatibility, the compiler can just downgrade it to an ordinary pointer any time it is assigned to an ordinary pointer, so that old code continues to work with the new malloc() return value, and new code can continue to call old code that only accepts ordinary pointers. Of course, we won't call them "new" and "ordinary," we'll call them "safe" and "dangerous" when, after several years, we grow tired of hearing of yet another buffer overflow exploit discovered in some old code that hasn't yet been updated to use the new type of pointer.
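What such a limit-carrying pointer might look like as a plain C struct today, purely as a sketch of the idea (none of these names exist in any standard or proposal I know of; the `.ptr` member is the "downgrade" to an ordinary pointer the comment describes):

```c
#include <stdlib.h>

/* Hypothetical "checked pointer": base address plus a limit.
 * checked_store() refuses writes past the limit instead of
 * corrupting whatever lies beyond; legacy code that only takes
 * a char* can still be handed p.ptr directly. */
typedef struct {
    char  *ptr;    /* usable as an ordinary pointer by old code */
    size_t limit;  /* number of addressable bytes */
} checked_ptr;

checked_ptr checked_malloc(size_t n) {
    checked_ptr p = { malloc(n), n };
    return p;
}

int checked_store(checked_ptr p, size_t off, char byte) {
    if (p.ptr == NULL || off >= p.limit)
        return 0;          /* out of bounds: refused, not written */
    p.ptr[off] = byte;
    return 1;
}
```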
Re: (Score:2)
I'd like to hear from someone who knows their stuff better than I do. Is this sanians_imaginary_ptr feasible in C and how would it technically work? Without sacrificing optimisations C allows, low-level access and things like that.
Re: (Score:2)
Nah. C just needs competent programmers who know something about how a computer works. While your attitude towards security is admirable, your attitude of "we'll just get faster computers" is the cause of all these bloated stacks we have nowadays...stacks that STILL aren't secure, and they were written with managed code languages no less!
Some C compilers already have bounds checking (Score:2)
You can already ask some compilers to do what you are asking - it's just often not on in shipped builds.
At compilation time warnings can be generated for out of bounds accesses that can be determined statically. Clang has -fsanitize=bounds [llvm.org], GCC has -Warray-bounds [gnu.org].
As an Anonymous Coward pointed out, it can be hard to detect runtime allocation overruns at compilation time. For these, something like Clang's AddressSanitizer [llvm.org] (GCC has added it too [google.com]) will help, but at a cost of both time (slowdown factor of 2) and
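For the statically visible case, a minimal file to try the flags on. My understanding is that GCC's -Warray-bounds is part of -Wall but only fires with optimization enabled, and exact diagnostics vary by compiler version, so treat the comments as a sketch rather than guaranteed output:

```c
/* Try:  gcc -O2 -Wall demo.c        (static -Warray-bounds check)
 *       clang -fsanitize=bounds demo.c   (runtime check instead)
 * An arr[4] in place of arr[3] below is statically out of bounds,
 * so the compile-time warning can catch it; an index computed
 * from input at run time is what the sanitizer is for. */
int last_of_four(void) {
    int arr[4] = {1, 2, 3, 4};
    return arr[3];   /* in bounds; arr[4] here would be flagged */
}
```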
Re:C Needs Bounds Checking (Score:5, Informative)
Personally, I never worry about code not executing [quickly] enough.
You know, people say stuff like that all the time, but all it proves is that you're not a programmer who develops speed-critical applications. Guess what? There are lots of people who are. Game programmers (me). Simulations programmers. OS / kernel developers. There are some situations where fast is never fast enough. You're thinking like a desktop developer who writes business applications that are probably not that demanding of the CPU. Get a faster processor? I wish! Not possible for console developers, or when you're running software in data centers with thousands of machines. Those are real problems, and they require highly optimized code, not more hardware. Most programmers have no idea how much the constant push for efficiency colors everything we do.
Just the other day I was looking at a test case where a complicated pathfinding scenario pegs my 8-core CPU when a lot of units are on-screen at once. That's not some theoretical problem, and telling users they need some uber-machine to play the game is a non-starter. I either need to ensure my game design avoids those scenarios or I'll need to further optimize the pathfinding systems to allow for more units in the game.
That being said, I agree with your complaint about C's fundamental insecurity, but it's not so simple as adding a compiler switch. For the most common and checkable types of bounds problems, or library functions that can cause problems, Microsoft's C/C++ compiler already does what you've suggested to a degree (not as certain about GCC). The big problem with bounds checking in C is that arrays are simple pointers to memory. The compiler doesn't always know how big that free space is, because there's no type or size associated with it. It's possible in some cases to do bounds-checking, but not in many others. It's a fundamental difficulty with the language, and it's impossible for the compiler to check all those bounds without help from the language or the programmer.
Re: (Score:2)
That's not quite true: the compiler could arrange to pass around more than just the raw pointer (or in extremis could maintain a duplicate of the malloc table and
Re: (Score:3)
I'm not sure how well you know C, but... you can't turn a pointer into something more than a raw memory pointer. This would flat-out destroy all sorts of code that relies on that behavior, both in C and C++, and not necessarily badly-written code. The behavior of memory pointers is part of the language contract, and you can't change it without breaking the language. For systems programmers with large, legacy codebases, they'd never risk turning on such an intrusive feature because of the simple fact t
Re: (Score:2)
Well, right tool for the job. I think that for servers and clients that connect to untrusted servers, probably C is not the right tool. For example, I'd rather sshd was written in a language that checks out of bounds conditions and I'd rather have it be slow than insecure.
Re: (Score:2)
Go ahead and try it. Create an array, index it with an immediate value of negative one, and compile. It won't complain at all
It complains with -Wall -O2.
Summary is completely exaggerated (Score:5, Informative)
I read through the thread and at no point was the bug considered "Unexploitable". Even skepticism is too strong of a word to use. The only doubt that was raised was asking "How likely is that?"
Re:Summary is completely exaggerated (Score:5, Informative)
I chose the word scepticism, and I still think it fits. I agree that the word "unexploitable" was a bit exaggerated, but that was added by unknown lamer.
Florian Weimer [sourceware.org] said:
My assessment is "not exploitable" because it's a NUL byte written into malloc metadata. But Tavis disagrees. He is usually right. And that's why I'm not really sure.
It's however true that he corrected himself a bit later the same day:
>> if not maybe the one byte overflow is still exploitable.
>
> Hmm. How likely is that? It overflows in to malloc metadata, and the
> glibc malloc hardening should catch that these days.
Not necessarily on 32-bit architectures, so I agree with Tavis now, and
we need a CVE.
Re: (Score:1)
And to be perfectly fair, the issue hinges on glibc's completely idiotic insistence on free()ing everything at exit() instead of just f'ing exiting. The kernel knows exactly what to return to the free pool and does not depend on, or require, the application to return the memory it requested.
"Unexploitable" sudo bug pre-1.6.3p6 (Score:5, Interesting)
Reminds me of this overflow bug [seclists.org] which was fixed in sudo 1.6.3p6. It writes a single NUL byte past the end of a buffer, calls syslog(), and then restores the original overwritten byte. Seems unexploitable, right?
Wrong. Here's the detailed writeup [phrack.org] of the exploit. It requires some jiggering with the parameters to get the exploit to work on a particular system, but you don't need a local root exploit to work every time, you just need it to work once and you own the system.
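The save/write/restore dance described above, sketched in generic C. The names and shapes are invented for illustration and the real sudo code differs (see the linked writeup); the point is the window between the out-of-bounds write and the restore, during which the callout runs against a corrupted heap:

```c
#include <string.h>

/* Illustrative sketch of the pre-1.6.3p6 sudo pattern:
 * NUL-terminate one byte past the intended prefix, call out
 * (e.g. to syslog()), then put the saved byte back. If len
 * equals the allocation size, that NUL write is one byte out
 * of bounds, and the restore is too late if the allocator has
 * already acted on the clobbered byte during the callout. */
void log_prefix(char *buf, size_t len, void (*emit)(const char *)) {
    char saved = buf[len];   /* byte just past the prefix */
    buf[len] = '\0';         /* the potentially OOB write */
    emit(buf);               /* corruption is live during this call */
    buf[len] = saved;        /* "undo" -- the unexploitable-looking part */
}

/* Tiny recording sink so the sketch can be exercised safely
 * (with len well inside the buffer, nothing goes out of bounds). */
static char last_logged[32];
static void record(const char *s) {
    strncpy(last_logged, s, sizeof last_logged - 1);
    last_logged[sizeof last_logged - 1] = '\0';
}
```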
Re:"Unexploitable" sudo bug pre-1.6.3p6 (Score:4, Informative)
I've read a bit through the threads and think that the reason it took so long was because they decided to remove a feature [openwall.com] to fix the problem:
I believe the current plan is to completely remove the transliteration
module support, as it hasn't worked for 10+ years.
The git commit message states the same. There were really some problems in that function: https://sourceware.org/ml/libc... [sourceware.org]
Amazing to use such a crude programming language (Score:2)
One where a slight slip anywhere in millions of lines of code can produce random memory corruption with unpredictable consequences. Who would have believed that anybody would even dream of using a language with constructs such as ptr++. And we are surprised to find bugs...
Re: (Score:2)
At some point, you'll have to break that high level language down to opcodes the cpu can understand, that means breaking high level logic down to many simple steps, which is what procedural languages are for. You can force the programmer to write these steps one at a time in assembly, 'script' the generation of assembly in C, or have a runtime and/or VM do it at a cost of speed and footprint, but there is no magical way to skip generating that list of procedures.
Re: (Score:2)
Protip: Your fancy "modern" language is written in this "crude" language.
Even if a compiler for a "fancy" safe language were written in a "crude" unsafe language, it would still be just one program to verify for ptr++ kinds of bugs. Additionally, a compiler is a classical input -> output kind of non-interactive program, which lends itself very well to running under verification tools like valgrind, which increases confidence that at least for any given input, it will not do nasty things.
Re: (Score:2)
Nothing prevents writing runtime libraries in safer languages than C; even C++11 would be a lot better (unless abused, but that applies to C too). And assembler is used very little these days, because there are many relevant CPUs on the market (ARM variants, x86, x64).
Re: (Score:2)
Indeed. And yet, in Java, it's impossible for me to accidentally shoot myself in the face with pointer arithmetic.
I use C++, and like it in its way, but you don't have much of a point.
Re: (Score:2)
Of course, the C standard library itself is hardly [wikipedia.org] a shining example of secure library design.
Re: (Score:2)
Yeah, they should have just invented Python in 1950.
Address space randomization does not help. (Score:5, Interesting)
64-bit systems should remain safe if they are using address space randomization.
Nah. It just takes more crashes before the exploit achieves penetration.
(Address space randomization is a terrible idea. It's a desperation measure and an excuse for not fixing problems. In exchange for making penetration slightly harder, you give up repeatable crash bug behavior.)
Re: (Score:2)
1) If you make exploitation less likely than an asteroid hitting the earth, then for all practical purposes you can say that it is prevented.
2) 'Repeatable crash bug behavior' doesn't matter; a crash will be repeatable if it is run in valgrind/address sanitizer or via a debugger, which is really all that matters to a developer. An end user couldn't care less about repeatable crashes and would prefer it if the program occasionally/usually continued running.
Re: (Score:2)
There are plenty of processors that are Harvard architecture out there (separate data/instruction memory). Though modern architectures do have a bit of Harvard in them (the separated instruction and data caches). And memory segmentation and permissions do help split code and data into separate areas.
pkexec?? (Score:2)
Sorry, old Unix guy here. My first reaction was "What the F is pkexec and why is it running setuid?"
Yet another way to execute arbitrary privileged executables is yet another potential security hole. This dumb thing is apparently part of the "Free Desktop" but it's depended on by all kinds of stuff including the fricking RedHat power management. What's wrong with plain old sudo?
Brilliant work (Score:2)
Don't make excuses for not fixing bugs. When you find bugs, fix them.
All software is buggy by definition because the entire stack from the moving charge carriers to the behavior of the person using the computer cannot be mathematically proven to be correct.
No matter what measures you as the hardware or software creator take, there will be bugs.
Don't make people angry at you or ridicule their bug reports because that's a major incentive for them to make you look foolish.