New Hack Exploits Common Programming Error
buzzardsbay writes "TechTarget's security editor, Dennis Fisher, is reporting that researchers at Watchfire Inc. have discovered a reliable method for exploiting a common programming error, which until now had been considered simply a quality problem and not a security vulnerability. According to the article, the researchers stumbled upon the method for remotely exploiting dangling pointers by chance while running the company's AppScan software against a Web server. The good folks at Watchfire will detail the technique in a presentation at the Black Hat Briefings in Las Vegas in August, Fisher writes."
Well duhhhh. (Score:5, Funny)
Quick, someone alert Bill Gates!
Re:Well duhhhh. (Score:4, Insightful)
All the trouble in this world.. (Score:5, Funny)
Re: (Score:3, Funny)
That is why I use button fly, much harder for dangling bits to expose themselves.
Re: (Score:3, Funny)
I'm telling my mother! (Score:5, Funny)
Re:I'm telling my mother! (Score:4, Funny)
Re: (Score:3, Funny)
The cure... (Score:5, Funny)
Pleasantly surprised! (Score:2)
However, I thought that the compiler took care of destroying pointers when variables went out of scope? Aren't destructors implicit clean up calls? I'm fairly sure that Borland 16-bit Turbo C++ did this, back in the day.
Re:Pleasantly surprised! (Score:5, Interesting)
/* sketch: a linked list of items, each carrying a callback pointer */
#include <stdlib.h>
typedef struct item_s {
    void (*fcn)(void);
    struct item_s *next;
} *item;
item firstItem;
void (*myFuncPtr)(void) = NULL;
void cleanUp() {
    item listItem = firstItem;
    item tempItem;
    while ( listItem != NULL ) {
        myFuncPtr = listItem->fcn;
        myFuncPtr();
        tempItem = listItem->next;
        free(listItem);
        listItem = tempItem;
    }
    /* firstItem still points at freed memory: a dangling pointer */
}
A little contrived, sure, but it is an example of how a pointer might get left dangling.
Re: (Score:2)
Now all you need to do is be a real fool and try to call that dangling pointer.
Re:Pleasantly surprised! (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
More push toward VM's (Score:2)
I can see this as fodder for the argument that it's safer to run software in a sandbox like Java's VM.
Re:More push toward VM's (Score:4, Insightful)
Um, it is fodder for that argument -- environments where memory management is handled automatically mean the programmer has one less thing to screw up. Even if you consider that the VM implementations may have errors, there are far fewer VM implementations than there are pieces of software that can run on them, so it's easier to debug a good memory manager/garbage collector for each VM than to debug the manual memory allocation and freeing in each application.
Re:More push toward VM's (Score:5, Insightful)
Garbage collected languages are no solution to poor programming. If you can't remember not to call a function pointer that you just freed, you'll probably forget to close /etc/passwd before dropping privs, or something equally stupid.
Re: (Score:2)
And you could encode that kind of stuff in accessors which follow security policies, rather than directly hard coding the access and the policy every time. For example, this is why we separate code into separate processes owned by different users rather than throwing everythi
Re: (Score:2)
Well, if you're the type that can think of putting access to /etc/passwd into, say, an object that's destroyed just before dropping root, then you can probably think of a way to prevent yourself from calling dangling pointers too. And if you unit test properly, and avoid side-effects, you'll save a whole lot of trouble when you go to integrate, too.
It's just people being lazy, hurried, or in some unfortunate cases, stupid.
Re: (Score:3, Insightful)
Yes, which is why your architects and best developers create a platform on which
Re: (Score:2)
Man. We can dream, anyway.
Re: (Score:3, Insightful)
Can you do both at the same time, while dealing with dozens of other headaches? Do you really want to? There's something to be said for reducing the programmer's mental workload so he can more efficiently think about the problem he's supposed to solve. Of course, making it more d
Re: (Score:2)
Eliminating the possibility of an entire class of bugs reduces the number of things that need to be remembered in putting the code together (and checked once it is coded), and therefore reduces the number of bugs (and exploitable vulnerabilities) that get through to end user code.
Re: (Score:2)
Eliminating a class of bugs is a great thing, but GC doesn't do it. Instead of memory leaks, you have object leaks. People start to trust the GC too much, and it takes away more control than it's worth. That's just a personal preference, of course, and I'll write GC code, but since I don't like it I'll take a 10% lower salary if I don't have to deal with it.
GC does eliminate bugs (Score:4, Interesting)
GC does eliminate a few classes of bugs:
Re: (Score:2)
Hehe (Score:4, Funny)
Finally (Score:4, Funny)
Re: (Score:3, Insightful)
(Yes, it is a part flame in reply to another flamebait)
Re:Finally (Score:4, Insightful)
More like living in a gated community rather than living in the projects (the ghetto). The gated community may cost more, but it's safer than the projects, where you have to be careful not to get shot—though rather insular and not as safe as the people who live there generally think.
Re: (Score:2)
a new pickup line... (Score:3, Funny)
Known since 2005 (Score:5, Interesting)
"When Watchfire first alerted Microsoft's security response team to what Afek and Sharabani had found, they were met with skepticism, and understandably so, Allan said. The company had known since 2005 about the IIS bug that caused the crash, but it was considered a simple denial-of-service problem and not remotely exploitable."
Worded a little ambiguously, but I presume it's Microsoft they're talking about... How can a bug like this get through the QA process since 2005 and multiple product versions without getting fixed?
Re:Known since 2005 (Score:5, Insightful)
Because people keep buying their buggy shit. If people buy your products regardless of the quality, what incentive do you have to fix anything?
Re: (Score:3, Interesting)
Very easily. Every bug gets a priority assigned to it, and from the sound of things this one was ranked pretty low.
The sheer number of bugs in any large product means it's not really practical to fix every one before a release, so the higher-priority ones get concentrated on. Time passes, products are released bec
Why are we still dealing with this? (Score:5, Insightful)
And this isn't a "use Python" or "use Java" rant, either. I will say, however, UNIT TEST YOUR SHIT! EVERY LINE! Even the little inline function, you need to test it all! Repeat after me: Resource Acquisition Is Initialization. Resource Release Is Destruction. -Wall -Werror, no, warnings aren't OK. No, not even signed vs unsigned comparison warnings, you need to either get your data types straight or wrap that in a partial-specialization template functor that correctly checks that you won't be killed by sign-promotion when you compare int and unsigned long long. strncpy(), not strcpy()! -fprofile-arcs -ftest-coverage! Valgrind!
I dunno. I manage to write C++ and never overflow a buffer, always release all resources when I'm done with them, and never throw away an error. Why can't the other 95% of the programmers out there do the same thing?
Re: (Score:2)
Whoa there, cowboy. Them's fightin' words.
Re: (Score:3, Funny)
Yeah, I just read that again. Guess I got a little carried away...
Re: (Score:2)
1) It does NOT guarantee that it creates a NULL-terminated string.
2) It always writes every byte of the destination. Copy 10 characters into a 1000-byte buffer and it writes 1000 bytes, mostly zeros.
Re:Why are we still dealing with this? (Score:5, Insightful)
I dunno. I manage to write C++ and never overflow a buffer, always release all resources when I'm done with them, and never throw away an error. Why can't the other 95% of the programmers out there do the same thing?
They are busy being yelled at by their boss to "just make it work" and to "not worry about getting it perfect", and they are dealing with the idiot "build master" over in change management who doesn't know what "make clean" is or how to read a makefile, but thinks he's some master csh hacker... Everyone wants that; it's just that not everyone works in a perfect world.
Shit, most of us are just happy when we are able to beat clear requirements out of people and get reasonable bug reports.
Re:Why are we still dealing with this? (Score:5, Insightful)
Because they don't care or they're too busy with other stuff, and even if that's not the case, sometimes people make mistakes. That's why you write tools to check that programs are actually being written correctly (wherever possible) and to make it as easy as possible to create full coverage tests, rather than relying on other programmers to do the right thing. Automation, it's a great thing.
Re:Why are we still dealing with this? (Score:5, Funny)
Something tells me that even if your programming is 100% spot-on, your grammar skills are slightly lacking...
Re: (Score:2)
I'm sorry, you're right, there's way more than 20 of us out there. What I should have said is, why can't the other 1,999,999%?
No, wait...
Re:Why are we still dealing with this? (Score:4, Informative)
Note that I'm using dependency grammar here (to which class algebraic grammar belongs). Followers of looser grammatical theories may find the statement technically correct since his meaning was clear. However, this is predominantly a tech site, so it follows that dependency grammars should rule the roost.
Re: (Score:3)
No, it's a syntax error. "The other 95% of programmers" refers to the complete set of programmers, less excluded subset. He defined that subset as himself, instead of himself plus others who can code properly; improper usage of "The other" is what caused his dependent clause to be false.
You can argue all you like, but it's still not a syntax error. It is, I grant you, not strictly speaking the correct choice of words—but since one could make the statement 100% correct by changing the situation (i.e. by systematically killing off programmers in their thousands until only 20 remain in the world[1]), but without changing the phrase, it is most certainly a grammatically correct sentence—even if the semantics are wrong in the given context.
[1] This would, of course, be taking the
Re: (Score:2)
Re: (Score:2)
Hospitals.
Re: (Score:2)
Re:Why are we still dealing with this? (Score:5, Informative)
Because the other 95% saw that you take too long to write code and your code executes too slowly and you are going to be fired because of it.
Re: (Score:3, Insightful)
Don't be silly. It is perfectly possible to write robust, efficient C++ code at a decent speed. And it is certainly much more efficient than generating buggy crap and then spending weeks, months and years trying to find the elusive bugs. Not to mention the complete rewriting of large blocks of unintelligible, bad-quality code as 'bug fix'.
I fully agree with the ori
Re:Why are we still dealing with this? (Score:5, Funny)
Because we're employed.
Re: (Score:2)
And this isn't a "use Python" or "use Java" rant, either. I will say, however, UNIT TEST YOUR SHIT! EVERY LINE!
You don't think that writing in a language that just doesn't allow these bugs to be coded would be easier? Not to mention that that would actually guarantee the absence of such bugs, which unit testing doesn't?
Re: (Score:2)
Re: (Score:3, Insightful)
You do realize that strncpy() is fundamentally broken, right? If the source string is longer than n characters, then the destination string will not get a null terminator. The first operation after that on the destination string will go flying off the end with unpredictable results. strncpy() is also inefficient in that it will fill the dest buffer with unnecessary null characters if the source is short. I use my own strcpy() function that takes the size of the destination-string
Re: (Score:3, Informative)
As someone who's written 2 pieces of OSS with 100% code coverage in unit tests, and probably the most secure C HTTP server (which comes with over 75% coverage), I have to say: "It's not quite that simple". Testing does not negate design, and designing for security is non-trivial and takes a certain mind-set ... and while a lot of people say they want security, almost none are actually prepared to buy it (with either money, lack of features, whatever).
Hell, one of the biggest advancements in security in rece
Re: (Score:2)
Yeah, there was a little hyperbole there, but I was trying to make a point. Although, to be fair, I still run all of my code through Valgrind and I've only caught an error like that twice in about 8 years. Once was C++ type demangling to make an error message human-readable, which as I recall was really poorly documented as to whether you or the library freed the memory, and the other was interfacing with getopt_long(), I tried to free argv. Whoops. I do have bugs, but it's extremely rare that it's of that
what a headache (Score:3, Informative)
welcome back to the days of sql slammer and code red folks. buffer overflows have been analyzed to death, but this is just the beginning
Re: (Score:2)
No, this is very little new. It's the same basic programming error that's been made since day one, "whoops, I forgot." Not to mention, there's absolutely no explanation of how they've managed to put code at what's basically an arbitrary memory location.
But man, this type of thing should be a 0.01%-of-software type of bug. What the hell? Isn't this supposed to be software engineering?
FUD! We're important! (Score:3, Insightful)
Let's just wait until the actual story next time? (since it doesn't seem likely there will be a real one, here, anyway)
NULL those pointers, folks (Score:4, Interesting)
/* C code */
void
FreeFoo(PFOO *ppfoo)
{
    PFOO pfoo;

    assert(NULL != ppfoo);
    if (NULL == ppfoo)
        return;

    pfoo = *ppfoo;
    assert(NULL != pfoo);
    if (NULL == pfoo)
        return;

    free(pfoo);
    *ppfoo = NULL;
}

/* typical use:
    PFOO pfoo = PfooNew(args);
    ...do something with FOO object...
    FreeFoo(&pfoo);
*/
Note that if you accidentally try to double-free the FOO, the above code will not crash; the first free sets the FOO pointer to NULL, and the second one notices that the pointer is already NULL and exits early. It does assert() when you try to free a NULL pointer, so you can catch the error and see what else you might have messed up.
For C++ you should be able to write a template that takes a reference to any pointer type and applies the above logic.
I once had to maintain a legacy code base, a whole bunch of C implementing a fairly complicated application. The app had a whole bunch of crashing bugs. I went through and applied the above logic everywhere the app was calling free() and suddenly the app stopped crashing. I wonder if the previous developers were using a different compiler or something, and the dangling pointers just happened to work for them?
steveha
Re: (Score:2)
Exactly! I'm no fan of C, there's too much I've gotten used to being able to do in C++, but this just goes to show that while you can write crap code in any language, you can write good code too.
Good on ya for putting your lvals on the right of your comparisons, too.
Re:NULL those pointers, folks (Score:4, Insightful)
Re:NULL those pointers, folks (Score:4, Insightful)
It prevents the common mistake of using the assignment operator "=" when you meant the equality operator "==". I like it better your way too, since it illustrates the object of the comparison better, but if I'm rushing out code that I don't have time to write good unit tests for, I switch over.
Re:NULL those pointers, folks (Score:4, Informative)
Re: (Score:2)
Unfortunately, a hammer does you no good when you're omg standing on the surface of the sun! so we should throw away all the hammers.
I think he was just trying to illustrate a simple point, not post the One True Memory Management pattern :P
Re:NULL those pointers, folks (Score:4, Informative)
Okay sure, the FreeFoo() logic will not, by itself, take care of the case where you have multiple pointers. Only the pointer you actually use to free the object would be automatically nulled.
As you note, it is possible to pass around "handles" and make the handles safe, and as you note, there can be a performance hit.
But if you have a clean code design, you will have an expected lifetime for those extra pointers, and when you are done using a pointer, you can NULL that pointer. When you are done, you should have only one pointer left pointing to the object, and when you call FreePfoo(), you will then have zero pointers left pointing to the now-freed object.
Another simple trick you can use: at the beginning of your structure, place a member variable called "signature" or something like that, and set it to some unique value. Then, in FreePfoo(), zero out the signature before you call free(). Then start each function that uses a PFOO with an assert() that checks that the signature is sane. Even if you have a dangling pointer, and even if that pointer can still be used to reference your structure after you free() the structure, the assert() will fail. If you like you can put the "signature" member under #ifdef so that you don't even compile it in unless asserts are enabled.
The last major application I developed, I used the above tricks. The most important data structures each had their own unique signature. Functions that took a PFOO started with a call to AssertValidPfoo(pfoo), which would check the signature and also perform every other sanity check I could think of upon the FOO and the pointer (and which would not be compiled for a release build). Once the compile succeeded with no errors or warnings, I would run a test and immediately get an assert() if I had a code bug. Once I fixed the code to no longer assert(), in general my code Just Worked.
Asserts are like unit tests that run every time you run your code, and don't cost anything in the final release build. I love asserts.
Does this sound like more work than a garbage-collected language like Java or Python? Well, it is. C is just plain a lower-level language and you need to do more stuff by hand.
steveha
Re: (Score:3, Insightful)
It's something people often suggest, and I don't like it at all. It doesn't catch the serious case, where you have more than one pointer to the memory. Who is going to null
Out of curiosity - who uses C/C++ for web (Score:2)
Let's ignore plugins or modules, because those are frequently C/C++; however, it's been my experience that writing modules is far less common than just implementing code in PHP or Perl. I'm talking about basic application logic, generally form processing and serving HTML content.
I have implemented barebones C and C++ HTTP servers because high performance was a critical requirement, but the percent gain wasn't really worth the effort. PHP with caching (apc, memcached, db pooling) is alm
Re: (Score:2)
What is this? (Score:2)
You free the object the pointer is pointing to.
The bad guy figures out where the pointer is pointing and writes his code at that location. The next time the pointer is called the bad code is executed.
This is like saying you lock a door and put the key away somewhere. The bad guy finds the key, unlocks the door and takes what he wants.
Why is this suddenly a major security issue and what am I missing?
Re: (Score:2)
The bad guy figures out where the pointer is pointing and writes his code at that location. The next time the pointer is called the bad code is executed.
(emphasis mine)
Why is this suddenly a major security issue and what am I missing?
The bad guy typically has no control over where the pointer is pointing (that's determined by the memory management library). The bad guy also cannot write whatever he wants into arbitrary memory locations. If he could, he would already have complete control over the process and wouldn't need to exploit anything.
That's why this type of problem has typically been considered nearly impossible to exploit. There's too many variables that the attacker can't control (and memory management can
Re: (Score:3, Informative)
"Security experts" that aren't (Score:5, Insightful)
From the article:
Any security expert with at least half a brain is going to assume that a remotely-triggered crash might be exploitable, unless he can actually prove otherwise.
That said, I've known plenty "security experts" who weren't.
AAARGH! (Score:2)
"Common programming error"?
The foo! Just tell us WHICH error, will ya?!?
Ok, I know a few lines further on they actually did, but the beginning just ticked me off. This is a bloody tech site, I'd think that some people here _would_ know what you're talking about when you say "dangling pointer".
how... (Score:2)
Could be any one of a hundred possible bugs.
Wrong terminology (Score:2)
Exploit (Score:3, Interesting)
1. Application allocates a C++ object, deletes it, but continues to point to it. The exploit code is likely to force this condition. The attacker has to know the type (and size) of the C++ object.
2. Exploit code sends a packet to the server causing it to allocate memory of exactly the same size as the C++ object that was deleted and store the exploit payload in that memory. For example, in case of a web server it might be an HTTP request of size X or an HTTP request with multiple HTTP headers of size X, depending how the implementation stores whatever it receives on port 80.
3. The heap allocation algorithm would presumably re-use the space that was deallocated when the C++ object was deleted. The exploit payload is copied over into the allocated buffer, examined and discarded (e.g. the payload is not valid HTTP). This is OK since the dangling pointer is still pointing to the memory area with the exploit payload.
4. Now the exploit causes the dangling pointer to be used to reference a virtual function and it's game over.
The exploit payload needs to act like a virtual table. It can reference exploit code in itself or jump somewhere in the running process which would make it exploitable (e.g. "DeinitializeSecurity" function).
Hopefully the paper is more interesting than this.
From TFA... (Score:5, Funny)
Did anyone else think:
"If we hit that bullseye, the rest of the dominoes will fall like a house of cards! Checkmate." - Zapp Brannigan
Not news for Mozilla (Score:5, Informative)
Re:That's nice and everything but.... (Score:5, Insightful)
Re:That's nice and everything but.... (Score:5, Insightful)
Re:That's nice and everything but.... (Score:5, Informative)
Re: (Score:3, Informative)
Re:That's nice and everything but.... (Score:4, Informative)
There is a software substitute which takes effect in case your CPU doesn't support the NX bit, but that only prevents some attacks and *won't* stop execution of code in data pages.
Re: (Score:3, Informative)
And the answer is yes, it is possible. Windows runs DEP in one of four modes: Always-on, Always-off, Opt-in, or Opt-out. Opt-out enables DEP by default and lets you override it for specific programs which break, and Opt-in disables it by default and lets you enable it for specific programs that'll benefit, such as a web server.
- It doesn't have to be a function pointer. (Score:5, Interesting)
That is to say - if the object is referenced through a bad pointer, *and executes* any methods of that object's type - then it could be used to run someone else's code. They'll need to have filled some memory with something that can be interpreted as a virtual function table that points at something that can be interpreted as code. Which is doable.
If the processor/OS has set an app to be able to write to its executable memory, then it is vulnerable to this class of vulnerability.
Many OSes, and many C++, Objective-C and *Java* implementations, default to this.
Pascal and Perl used (maybe they still do) stubby things that required that the *stack* be executable, never mind just data... *buffer overruns* are much easier when the stack is executable.
Java is interesting. Modern VMs do a lot of dynamic optimization - this means that they write on code that is actually running. They need OS permission to do so (in decent OSes?) so now you *have* to give the VM's process that permission in order to run Java. Now any dangling pointers in the VM implementation are potentially exploitable. Or if the memory manager has a bug and improperly deallocates an object... Or if the application has to call a library and that library accidentally accesses a reference to an object that was already released by Java. Or maybe the app calls the OS - and the OS has a dangling pointer (say to a data structure that the Java VM needed to allocate). If you can fill the Java heap with executable exploit data, then if someone, anyone, jumps into it - they are toast.
I hope this helps. There is likely an actual paper that they will present. It will document one or several of the myriad ways to exploit dangling pointers - hopefully more efficiently than previously.
Re: (Score:3, Informative)
The place where this sort of trickery is useful in Pascal is when a routine passes a routine at an "inner" lexical scope to another routine. In order for the passed routine to access variables in
Re: (Score:3, Informative)
It usually involves stack execution, but that can apparently be avoided with a little extra cost. It's not language specific, but compiler-specific.
Re: (Score:3, Informative)
While, in theory, all this stuff can be done by only writing on code when it is
SOS? (Score:2)
People have been 'sploiting this kind of thing for years as far as I know.
Not that there's any thing wrong with that! (Except that maybe there is a kit now for diddling the danglers.)
Re: (Score:3, Informative)
Perhaps you're confusing dangling pointers with buffer overflows. Buffer overflows [wikipedia.org] occur when you put too much data into a pre-allocated buffer, overwriting the return address of the current function with a return address pointing to your malicious code. Dangling pointers [wikipedia.org] are simply pointers to memory that has been freed or is otherwise no longer valid. Before, it was thought that dangling pointers were not exploitable, because you had to know the actual type of the
Re: (Score:2)
I doubt it. How the heck do you get a dangling function pointer in C or C++? You never malloc() or operator new() functions, unless you fancy self-modifying code.
Re: (Score:3, Insightful)
Oh fine, be pedantic about it. Put the pointer in a struct and maintain a dangling pointer to the struct. All pointers within the struct are now dangling as well, including function pointers. An attacker can then (theoretically) change the value of the function pointer. This is the C equivalent of the C++ attack you describe. If someone attempts to call the fu
Re:That's nice and everything but.... (Score:5, Informative)
Now, however, they can change the value of the dangling pointer and when IIS does the jump this time, it executes their exploit code instead.
They are not saying they "change the value of the dangling pointer".
From the FA: "The problem before was, you had to override the exact location that the pointer was pointing to. It was considered impossible. But we discovered a way... The long and short of it is, if you can determine the value of the pointer, it's game over."
There are theoretically two ways to exploit a dangling pointer - change the address that it points to (which they don't do), or discover the address it is pointing to, and put some code there (considered impossible). Most likely, it is pointing to memory space within the program that once held valid executable code. They say this "was considered impossible, but we discovered a way". So I suspect they just stuck a jump instruction at the location the pointer was pointing to instead of trying to cram executable code into an unknown sized space. The jump would of course be to some space they allocated, with a known size, big enough to hold their exploit. Determining the value of the dangling pointer would be easy enough - you would get a message when it crashed that the app tried to access invalid memory at addr: 0x????????. Just stick a jump at that location - then get a big warm hug from Microsoft when you show them how you did it.
Re:That's nice and everything but.... (Score:4, Informative)
Well, no - a dangling pointer implies two different pointers referencing the same memory area. Since many objects have pointers to other objects, if you can change one object by modifying fields through the other pointer (either the dangling pointer or the memory area doubly referenced), you can change one of those object pointers to point to a location on the stack; then using common buffer overflow techniques, you can put code on the stack and modify a return address to point to that code.
I wouldn't say such an approach can ALWAYS be used to compromise a machine, but it is much more generic than the (also quite possible) C++ (and similar language) specific method using pointers to vtables and such.
I've always assumed that if you can get a program to crash, you can probably get it to execute arbitrary code. One way of avoiding such techniques (other than writing correct code) is to use hardware support. The No-Execute page flag is one good start; having separate control-flow and data stacks would be another (where the control-flow stack would only be accessible through special instructions). Randomizing the location of the stack and the heap, and possibly make the memory allocation routine be less optimal and more random would also help a lot. Having a tagged memory architecture would be helpful as well (pointers to code could ONLY be manipulated through special instructions, and trying to load the wrong type of memory would cause a hardware exception).
Re:That's nice and everything but.... (Score:4, Interesting)
I'm not sure why that would work better with dangling pointer than pointers to live code, however - how do you change the memory that the pointer points to without already having access? Presumably that's what these guys discovered: a way to get a buffer allocated and filled with user data at the spot the dangling pointer points.
Re: (Score:2)
Not an Apache issue. (Score:4, Informative)
IIS 5.1 and 6.0 are a smaller target space of possibilities.
Re: (Score:3, Interesting)
In short-running programs with no persistent side effects (sysv shared memory, semas, or msg queues, for instance) there's really nothing wrong with letting the OS clean up after you.