Heap Protection Mechanism 365
An anonymous reader writes "There's an article by Jason Miller on innovation in Unix that talks about OpenBSD's new heap protection mechanism as a major boon for security. Sounds like OpenBSD is going to be the first to support this new security method."
Slowdown? (Score:4, Informative)
OpenBSD Goals (Score:3, Interesting)
Re:Slowdown? (Score:5, Informative)
http://www-128.ibm.com/developerworks/java/librar
Malloc is slow. According to studies, 20-30% of CPU time is wasted on memory management.
I haven't seen that level of retardation in JVMs since... oh... 1996?
But yeah, keep thinking you can do it better. Whatever. In the meanwhile, the rest of the world moves on.
Re:Slowdown? (Score:5, Insightful)
Interesting paper - thanks for the link.
However, I find the conclusions a bit dubious. For instance, the claim that allocations are "free" is misleading: garbage-collected languages put all of the work on the back end, giving the illusion of a free front end (whereas non-GC languages put the hard work on the front end). Yet every object you create on the heap is more work the heap walker has to do on each GC to detect orphaned objects -- a non-trivial task. It then has to free all of those objects, and because of the "free" allocation it has to move all of the live objects and rebase every object pointer in the application. It doesn't take a genius to realize that is a significant task in an application of real-world size.
I think the proof is empirical: how many high-performance, memory-intensive Java applications are there?
Re:Slowdown? (Score:4, Interesting)
And anyone who's run a JVM knows about the price of this task -- yes GC takes time.
However, as I understood the article, the author was making the point that the way most C programmers manage memory tends to make the task more time-consuming than necessary. Therefore, relying on a known, optimized implementation rather than reinventing the wheel every time may be preferable. After all, only the VM implementor needs to understand how to optimize the memory management, not the application developers. So yes, where the time is spent is shifted, but the total execution time spent on memory management can also be reduced, because the task is managed differently.
As for the specific details of this paper, they're basically discussing how to determine which objects can be safely allocated on the stack instead of the heap, and can therefore be discarded without the usual bookkeeping required by a heap GC.
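For readers who want it concrete, here is a minimal C sketch of the transformation such escape analysis performs automatically (the function names are made up for illustration): an object that provably never outlives its scope can live on the stack, with no allocator or collector bookkeeping at all.

```c
#include <stdio.h>
#include <stdlib.h>

/* Heap version: the allocator (and, in a GC'd language, the
 * collector) both have to track this object. */
static void format_heap(const char *name) {
    char *buf = malloc(64);           /* allocation: front-end work   */
    if (buf == NULL) return;
    snprintf(buf, 64, "hello, %s", name);
    puts(buf);
    free(buf);                        /* reclamation: back-end work   */
}

/* Stack version: what escape analysis effectively rewrites the above
 * into when it can prove 'buf' never outlives the call. */
static void format_stack(const char *name) {
    char buf[64];                     /* no allocator, no GC tracking */
    snprintf(buf, sizeof buf, "hello, %s", name);
    puts(buf);
}                                     /* freed implicitly on return   */

int main(void) {
    format_heap("heap");
    format_stack("stack");
    return 0;
}
```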
how many high performance memory intensive Java applications are there
Java is so widely used on the server side and in middleware that it isn't difficult to come up with examples -- Tomcat, J2EE app servers, etc. eBay, for instance, advertises very clearly on their front page that they are powered by Sun's Java technology. There are individual Java systems that manage millions of transactions daily, and there must be thousands of systems out there doing this every day with Java.
Re:Slowdown? (Score:3, Insightful)
Re:Slowdown? (Score:4, Interesting)
Essentially, I'd create a large ring buffer of malloced temporary buffers of some standard length. Any time a temporary buffer was needed, I'd grab the next one in the ring.
Before the buffer was provided to the function asking for it, the length would be checked. If the requested length was longer than the current length, the buffer would be freed and one of at least the proper length would be allocated. (I normally allocated my buffers in byte multiples of some fixed constant, usually 32.)
The idea was that by the time it was reused, what was already in the buffer was no longer needed. To achieve that, I'd estimate how many buffers might be needed in the worst case and then multiply that number by 10 for safety's sake.
My primary use of this was when doing enormous numbers of allocations of memory for formatting purposes. The function doing the formatting would request a buffer large enough to hold whatever it would need, write the formatted data into the buffer, and then return a pointer to the buffer. The calling function would simply use the buffer and never have to worry about freeing it.
The performance results were superb except in the very simplest cases where you allocated the buffers without ever using them.
I've never known anyone else who used this kind of approach, although I've shown it to a large number of people.
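For the curious, here is a minimal sketch in C of the scheme as described above; the slot count and rounding constant are illustrative, not the poster's actual values.

```c
#include <stdlib.h>

#define RING_SLOTS 256   /* worst-case estimate times 10, per the text */
#define ROUND_TO   32    /* allocate in multiples of a fixed constant  */

static struct slot { char *buf; size_t len; } ring[RING_SLOTS];
static size_t ring_pos;

/* Hand out the next buffer in the ring, growing it if the request is
 * larger than what that slot currently holds.  The caller never frees. */
char *ring_get(size_t want)
{
    struct slot *s = &ring[ring_pos];
    ring_pos = (ring_pos + 1) % RING_SLOTS;

    if (s->len < want) {
        size_t rounded = (want + ROUND_TO - 1) / ROUND_TO * ROUND_TO;
        free(s->buf);              /* too small: free it...            */
        s->buf = malloc(rounded);  /* ...and allocate a big-enough one */
        s->len = s->buf ? rounded : 0;
    }
    return s->buf;
}
```

A formatting routine then writes into ring_get(n) and returns the pointer; the caller uses it and never frees, on the assumption that a slot's contents are stale by the time it cycles around again.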
Re:Slowdown? (Score:3, Informative)
Its use dates back to the first GC'd language, LISP, and was a common way to reduce garbage generation.
Re:Slowdown? (Score:3, Interesting)
Re:Slowdown? (Score:3, Informative)
I've heard of pooling but was under the impression that you were mallocing several memory allocations at once, but that the pieces of allocated memory were assigned to particular uses.
That is basically what you just described - malloc std_length * count. Allocate out of this pool preferentially.
The primary benefit of this was to perform one malloc instead of N mallocs for N different memory assignments.
This performs 1 + N*M*(1 - hit_rate) allocations instead of N*M allocations, which is generally reall
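A minimal sketch of that kind of pool, assuming fixed-size pieces and a plain-malloc fallback on a miss (all names and sizes are illustrative):

```c
#include <stdlib.h>

#define STD_LENGTH 64     /* illustrative piece size                  */
#define POOL_COUNT 1024   /* pieces carved from the single malloc     */

static char *pool;        /* the one big allocation                   */
static void *free_list;   /* linked list threaded through free pieces */

/* One malloc instead of N: grab the block, chain its pieces. */
void pool_init(void)
{
    pool = malloc((size_t)STD_LENGTH * POOL_COUNT);
    for (int i = 0; pool && i < POOL_COUNT; i++) {
        void *p = pool + (size_t)i * STD_LENGTH;
        *(void **)p = free_list;
        free_list = p;
    }
}

/* Pool hit: pop a piece.  Pool miss: fall back to plain malloc. */
void *pool_alloc(size_t n)
{
    if (n <= STD_LENGTH && free_list) {
        void *p = free_list;
        free_list = *(void **)p;
        return p;
    }
    return malloc(n);
}

void pool_free(void *p)
{
    char *c = p;
    if (c >= pool && c < pool + (size_t)STD_LENGTH * POOL_COUNT) {
        *(void **)p = free_list;   /* back onto the free list */
        free_list = p;
    } else {
        free(p);                   /* came from the fallback   */
    }
}
```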
Re:Slowdown? (Score:3, Interesting)
I'm quite willing to concede the argument that programmer time is dearer than processor time; but why has the Real Time Specification for Java [java.net], for instance, only been able to achieve 100 microsecond [rtcmagazine.com] interrupt response times, when 2.0 microsecond [cotsjournalonline.com] response times aren't unheard of in other domains?
Re:Slowdown? (Score:4, Interesting)
I work with real-time systems, and 0.0001 seconds (100 microseconds) is plenty fast for most human-to-real-time-system applications. Granted, Java is not what you want for fine-tuning your engine's performance, but it's plenty fast for most applications. What makes Java so useful is that you get to avoid most of the really time-consuming bugs. Compare a fully functional Java-based multithreaded HTTP server with the C/C++ equivalent and it's going to be a third as much code, and it will operate at very close to the same speed. In other words, it's designed around applications where programmer time is worth more than machine time. We already have C, so Java was built around the 95% of applications that don't need inline ASM.
I have killed BSD UNIX with buggy C networking code, which is the only thing I have been unable to duplicate with good Java code. You can do bit twiddling in Java, but it's faster in C. You can have hundreds of threads doing their own thing in either, but it's much easier to do that in Java than in C/C++. The secret is to know enough about how Java works that you avoid things like creating new threads, which eats up a lot of time. Once you understand how things work, you can use things like thread pooling, which is extremely efficient. Instead of complaining that concatenating Strings takes so long, try learning about what other tools are out there, like StringBuffer.
PS: A quick look at some fast Java code. (It is a bit dated but gives you some idea what I am talking about.) [protomatter.com]
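The StringBuffer point translates directly to C terms (a toy illustration, not from the linked page): appending by rescanning the whole string each time is quadratic, while tracking the end the way StringBuffer tracks its own length is linear.

```c
#include <stdio.h>
#include <string.h>

#define PIECES 1000

/* Quadratic: strcat() rescans the whole string to find the end on
 * every call, like naive String concatenation in a loop. */
static void concat_slow(char *out)
{
    out[0] = '\0';
    for (int i = 0; i < PIECES; i++)
        strcat(out, "x");            /* O(n) scan per append */
}

/* Linear: keep a pointer to the end, the way StringBuffer keeps
 * its own length. */
static void concat_fast(char *out)
{
    char *end = out;
    for (int i = 0; i < PIECES; i++)
        *end++ = 'x';                /* O(1) append */
    *end = '\0';
}

int main(void)
{
    static char buf[PIECES + 1];
    concat_slow(buf);
    printf("slow: %zu chars\n", strlen(buf));
    concat_fast(buf);
    printf("fast: %zu chars\n", strlen(buf));
    return 0;
}
```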
Re:Slowdown? (Score:5, Insightful)
PS: A quick look at some fast Java code. (It is a bit dated but gives you some idea what I am talking about.)
Hmm ... that's a little archaic. The page complains about Java performance in JDK 1.1, and appears to be based on a site (http://www.cs.cmu.edu/~jch/java/optimization.html [cmu.edu]) that hasn't been updated since 1998.
new method? (Score:4, Informative)
Re:new method? (Score:4, Informative)
Intron == heap protection (Score:3, Interesting)
OSes can put introns between exons (useful DNA -- useful stuff on the heap) to detect badly behaving apps!
In other words, so-called "Junk DNA" may actually have a use...
HEAP PROTECTION
Re:Intron == heap protection (Score:3, Insightful)
You're still correct. It is heap protection in an evolutionary way. Heap protection on computers seeks to safeguard the data as it is. Junk DNA accepts that things are going to get corrupted and seeks to make it statistically less likely for the important parts to get corrupted.
Thanks for that train of thought.
Re:Intron == heap protection (Score:3, Insightful)
Just being a "dummy target" seems like an inefficient use for extra DNA. I would have expected something along the lines of ECC like reed-solomon coding to have evolved. Kinda shoots down the "intelligent design" theory too, unless
Re:new method? (Score:3, Insightful)
Perhaps, but look at it from the user's perspective :
First, their system is working properly. Their OS is working, and their application is working. And then SP2 comes out, and they install it. And their application stops working.
Who are they likely to blame? Microsoft or the creator of their application? They'll look at what changed, and decide that Microsoft is to blame. And if this application is important, they'll even back out SP2 to
Re:new method? (Score:3, Interesting)
Re:new method? (Score:2)
Unfortunately, this is a perspective that MS can't really afford to have, although they probably want to.
Re:new method? (Score:2)
I must have forgotten that only experts were allowed to use computers, and making them accessible to people was a mistake. Sorry.
Re:new method? (Score:2)
Re:new method? (Score:3, Insightful)
Ok, but most Windows users don't have anything to do with consultants, and think that vendors are the guys who sell hotdogs (or the things you close to stop the wind from coming in, depending on where they live.)
Ultimately, think of Grandma. She fires up AOL, then her computer tells her that she needs SP2. She goes `ok' and many hours later, SP2 is don
Re:new method? (Score:3, Interesting)
Re:new method? (Score:5, Informative)
This new feature from OpenBSD is the use of guard pages and the immediate freeing of memory. In essence this means that both bad programming and exploit attempts are much more likely to result in a core dump than in some unidentifiable, non-reproducible corruption or a working exploit. Many people consider that a good thing, because it will result in bugs being found in userland applications that would otherwise have stayed unnoticed. So even if you don't use OpenBSD yourself, this helps make your system more secure and better. And if you are running OpenBSD, there is no need to worry too much about the stability of this feature; it was actually enabled shortly after the 3.7 release and has been in every snapshot on the way to 3.8.
And I have to agree with the author that the best thing is that we get all the goods without ever having to switch them on!
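A self-contained illustration of the guard-page idea (a sketch of the concept, not OpenBSD's actual allocator, which does this transparently): put an allocation flush against a PROT_NONE page, and an overrun of even one byte dies immediately with SIGSEGV instead of silently corrupting a neighbour.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* Map two pages: one for data, one as an inaccessible guard. */
    char *base = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANON, -1, 0);
    if (base == MAP_FAILED) return 1;
    mprotect(base + page, page, PROT_NONE);   /* the guard page */

    char *buf = base + page - 64;  /* 64-byte buffer ending at the guard */
    memset(buf, 'A', 64);          /* fine: stays inside the buffer      */
    puts("in-bounds write OK");

    buf[64] = 'X';                 /* one byte too far: instant SIGSEGV  */
    puts("never reached");
    return 0;
}
```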
Re:new method? (Score:5, Informative)
They also didn't need the per-page execute bit to do it. You need a fairly new machine to get the protection, but my 486 firewall has it. They also have stack protection, which is helpful because even if the heap and stack aren't executable you can overwrite return addresses or pointers to functions, and have them point to existing code that can be tricked into doing something malicious.
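For reference, here is the per-page execute bit discipline (W^X) in miniature for the JIT-style case, assuming POSIX mmap/mprotect and x86 machine code: a page is writable while code is generated into it, then flipped to read+execute, never both at once.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* Writable, NOT executable, while we generate code into it. */
    unsigned char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANON, -1, 0);
    if (p == MAP_FAILED) return 1;

    /* x86: mov eax, 42; ret */
    static const unsigned char code[] = { 0xb8, 42, 0, 0, 0, 0xc3 };
    memcpy(p, code, sizeof code);

    /* Flip to executable, NOT writable: never W and X at once. */
    if (mprotect(p, page, PROT_READ | PROT_EXEC) != 0) return 1;

    int (*fn)(void) = (int (*)(void))p;
    printf("generated code returned %d\n", fn());
    return 0;
}
```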
Hope (Score:4, Interesting)
But why did it take so long to implement?
Re:Hope (Score:2)
Re:Hope (Score:2)
In fairness, I don't know that we can really blame Microsoft for that one...
XP's memory protection works just fine (assuming the CPU supports NX pages) - The concept takes almost no thought to implement. The problem, however, arises from 99.999% of existing software not caring in the least about the separation of code and data. Usually that doesn't cause any problems, but when a "clever" (I put that in quotes as the bad-idea-of-the-day, but
Re:Hope (Score:2)
Re:Hope (Score:4, Informative)
Now that they're enforcing the specification, people complain that it broke.
Hey, PearPC was written before they enforced the specification, and Sebastian Biallas had the brilliant notion to actually follow the spec, and mark things as executable. Thus, when SP2 came out, PearPC worked fine.
Usually things break when moving to a newer version because some area of the spec wasn't very heavily stressed, and people writing code that just works (not as in it works well, but as in it barely works) never really bothered aiming for the spec. Then when the spec is enforced, they get all uppity, claiming that it breaks their app. Their app was broken to begin with; the previous implementations they were relying on just didn't care.
For instance, when libc 5 came out (I think; don't hawk me about versions -- if you know the correct ones, please correct me, but I'm working off a poor memory of the version numbers), it enforced against passing a NULL file pointer. Beforehand, some people had hacked their code so that if an open failed and returned a NULL file pointer, they didn't care or print an error message. They just kept going, since it would just waste CPU cycles, as nothing would get output to or read from the file. It was silently, gracefully failing for them, and they relied on that.
Then libc 5 comes out and breaks this silent graceful failure, reporting errors or crashing when you pass it a NULL file pointer. People yelled and bitched because it broke their apps. But remember, THEIR APPS WERE BROKEN IN THE FIRST PLACE.
That's why I don't like people griping about "blah blah upgrade broke my app". Unless you can state that your app was built to spec from the beginning, that upgrade didn't break your app. It was broken to begin with. The upgrade just showed you how.
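The pattern in question looks something like this (the path is made up; the point is the missing check):

```c
#include <stdio.h>

int main(void)
{
    FILE *log = fopen("/no/such/dir/app.log", "w");

    /* The broken-but-"working" pattern relied on this being a
     * silent no-op when log == NULL:
     *
     *     fprintf(log, "started\n");
     *
     * The spec never promised that; newer libcs crash here.
     * The to-spec version checks the result: */
    if (log == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(log, "started\n");
    fclose(log);
    return 0;
}
```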
OpenBSD at the cutting edge on security (Score:4, Interesting)
Re:OpenBSD at the cutting edge on security (Score:3, Funny)
I know it seems strange...but OpenBSD isn't a Linux distribution at all.
I know it's hard to wrap your head around. It's one of those things you just have to accept. In addition:
-deep down, cows are not people too. So you can eat 'em, I guess.
-neither are cats or dogs. So don't force them to wear clothing.
-neither is information. So it doesn't care about being free or anything else.
-"Windows" is somehow both an operating system and a Window manager. You're not supposed
Re:OpenBSD at the cutting edge on security (Score:2)
As their focus is security, its understandable that they lead more incentives in these areas than more mainstream Linux distributions.
In this, the word "more" describes "Linux distributions." As it is a comparison, it also refers back to the subject of the phrase. The subject being "they," a pronoun which refers to "OpenBSD." This comparison also draws them into the same category because its comparin
Re:OpenBSD at the cutting edge on security (Score:4, Insightful)
Doesn't seem so. The new malloc and mmap behavior will tend to cause buggy memory allocation code to segfault rather than allowing various sorts of stupidity or nastiness.
Re:OpenBSD at the cutting edge on security (Score:3, Insightful)
Well, it'll catch it ... and the application will immediately crash.
I don't see how that encourages writing `sloppy memory management'. Nobody wants an application that crashes. (Granted, it's correct for an application to immediately exit when it's in danger of doing damage to something, but end users still don't want their application to crash.)
Perhaps if somebody is explic
Re:OpenBSD at the cutting edge on security (Score:3, Insightful)
"hey, it might do something strange every once in a while, but at least it keeps running and doesn't crash, so the code is 'good-enough'".
I think Theo is doing entirely the right thing by killing badly written apps rather than letting them do bad things to the system. It's much more likely to make people fix bad code than the current system.
My solution is slower, but 100% effective (Score:5, Funny)
When the application is finished with the memory, it sends a FAX to the local electronics recycling facility who sends out a tech to remove the DIMMs and melt them down into whatever.
Using this method of heap memory allocation (I call it "ACAlloc" for "Anonymous Coward Alloc") has been 100% effective and I have NEVER had a heap overflow exploit in any of my code.
Yes, it's slow, but I am secure.
What's next? (Score:4, Funny)
-Charlie
Re:What's next? (Score:3, Funny)
You *need* a cold shower? Hell, to me, that image *was* the equivalent of a cold shower!
Sacrifice usability? Nice idea, won't work. (Score:2)
Nice dream; meanwhile, in the real world, both users and most coders (if they dare to admit it) will NOT sacrifice usability or ease of coding for security measures that (in their minds) have nothing to do with their application. Unless, that is, they're forced to, either by company policy or by restrictions in the OS. If there are restrictions in the OS, then IT tech leads might start to ask "well, I can do this in OS A, why won't OS B let
Totally wrong (Score:2, Insightful)
Re:Sacrifice usability? Nice idea, won't work. (Score:5, Insightful)
The reason is very simple: there will always be some applications where the security of the system is paramount, where it does have to do with the application. OpenBSD caters directly to those people, and those applications.
Now, you are right, this means OpenBSD is likely to never get as large a following as Linux or even FreeBSD, but they honestly don't care. They are making a system that fits their goals, and security is among the top goals.
This actually allows security to spread: Once these changes are on a 'major' system, applications start to be ported to work with them, which means the changes can be ported to other systems.
OpenBSD is a security testing ground. If its features get in your way, you use a different system. This won't be the first time that advice will have been applicable.
Apologies to the Black-Eyed Peas (Score:2, Funny)
Lookin' at my heap, heap
You can look but you can't touch it.
If you touch it, I'ma start some drama.
You don't want no drama.
[...]
My heap, my heap, my heap, my heap.
Hm... old technique? (Score:5, Interesting)
Hm... gotta reply to myself (Score:5, Informative)
Re:Hm... old technique? (Score:4, Funny)
Shhh!! I was waiting until everyone started using them before hitting them with my patent ;)
Microsoft Windows? (Score:2, Funny)
Re:Microsoft Windows? (Score:2, Insightful)
It certainly could be implemented, but it would have the slight drawback that a huge proportion of apps would stop working.
It's going to break a lot of stuff on OpenBSD as well, but because of their audience they can get away with, e.g., telling you not to run Apache because it's insecure. Also, the OSS apps that break because they assume a specific memory-management model will get fixed. No-one is going to be a
Could this help Gnome? (Score:3, Interesting)
Is it really true that the standard GNU/Linux heap implementation holds onto pages like this when it becomes fragmented? That sounds really primitive to me.
Re: (Score:2, Interesting)
Re:Could this help Gnome? (Score:3, Informative)
I don't know if GNU malloc uses mmap() or brk() for its allocation, but in both cases the small memory chunks that the user allocates are taken from bigger, contiguous blocks of memory.
It uses mmap() for big allocs (IIRC the threshold is 4 MB) and brk() for smaller ones.
There would be one solution, and that's using different arenas, or memory regions for allocation. For instance every window might have its own allocation region, so when you close the window/document, the memory BLOCK is freed.
Something like memo
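A minimal sketch of that per-window arena idea, assuming a simple bump allocator (names are illustrative): every allocation for a document comes from its own region, and closing the document frees the whole block at once.

```c
#include <stdlib.h>

/* One arena per window/document: a bump allocator over one big block. */
struct arena {
    char  *base;
    size_t size, used;
};

struct arena *arena_new(size_t size)
{
    struct arena *a = malloc(sizeof *a);
    if (!a) return NULL;
    a->base = malloc(size);
    a->size = a->base ? size : 0;
    a->used = 0;
    return a;
}

/* All the window's small allocations come from its arena... */
void *arena_alloc(struct arena *a, size_t n)
{
    n = (n + 15) & ~(size_t)15;             /* keep alignment */
    if (a->used + n > a->size) return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

/* ...and closing the window releases the whole BLOCK at once. */
void arena_free(struct arena *a)
{
    free(a->base);
    free(a);
}
```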
In related news, GCC 4.1 stack protector (Score:4, Interesting)
Hopefully mainstream distros that have been wary of propolice will start using this new feature. And perhaps glibc malloc will borrow a few tricks from this new openbsd malloc too.
Re:In related news, GCC 4.1 stack protector (Score:4, Informative)
Already in Microsoft DEP (Score:2, Interesting)
My CPU doesn't support DEP in hardware, so I imagine the software-based method of doing this will create quite a speed hit. Anybody have any experience with turning on DEP for all programs?
Re:Already in Microsoft DEP (Score:3, Interesting)
Also, I have not been hacked anytime in the last ~5 minutes, whatever that's worth (but would I know?).
As an aside, I read the paper on the Microsoft DEP flaw a few months ago and wasn't that impressed. It looks very hard to exploit. And since DEP is an added protection mechanism, the existence of a small, hard-to-exploit flaw
Re:Not the same thing (Score:2)
Wrong solution for solving heap problems. (Score:4, Interesting)
From the kerneltrap.org post:
He explains that for over a decade efforts have been made to find and fix buffer overflows, and more recently bugs have been found in which software is reading before the start of a buffer, or beyond the end of the buffer.
The solutions that the kerneltrap.org post refers to against buffer overflows are: 1) placing unmapped guard pages around heap allocations and freeing memory back to the kernel immediately, and 2) randomizing the placement of heap allocations.
My opinion is that #1 will slow software down, although it will indeed make it more secure. #2 will make it more difficult to exploit buffer overflows, since the space between two allocated heap blocks will be random (and thus the attacker may not know where to overwrite data).
Unless I have misunderstood, these solutions will not offer any real solution to the buffer overflow problem. For example, stack-based exploits can still be used for attacks. The solution shown does not mention usage of the NX bit (which is i86-specific). It is a purely software solution that can be applied to all BSD-supported architectures.
Since all the problems relating to buffers (overflow and underflow) that have cost billions of dollars to the IT industry are the result of using C, doesn't anyone think that it is time to stop using C? There are C-compatible languages that allow bit manipulation but don't allow buffer overflows; e.g. Cyclone [wikipedia.org].
Re:Wrong solution for solving heap problems. (Score:4, Interesting)
In the event of a screw-up on the part of the JIT or runtime programmer for any language, every program is instantly vulnerable, and all of this generic proactive security stuff is disabled, because this "secure language" doesn't work in an "inherently secure" environment, only a much weakened one. C's runtime is rather basic (and it's still huge), as is its language; people still screw that up once in a while, but rarely.
While these "shiny new secure languages" may boast "immunity to buffer overflows," their runtimes are still designed around other concepts that may leave holes. Look at this memory allocator and think about a bug in the allocator that smashes up its own memory before it gets everything set up; because the new protections aren't yet in place, it'd be totally vulnerable at that point (no practical exploit, of course). A bug that forgets to add guard pages (generates 0 guard pages every time) might occur in an update, too. Now add to that something like Java or Mono -- interpreted or not, you're running on a whole -platform- instead of just a runtime. C++ instruments loads of back-end object orientation.
So in short, C is a very basic language that has easily quantifiable attack vectors, and thus the system can be targeted around these for security. Several such enhancements exist, see PaX, GrSecurity, W^X, security in heap allocators, SELinux, Exec Shield, ProPolice. Higher level languages like C++ implement back-end instrumentation that ramps up complexity and may open new, unprotected attack vectors that are harder to quantify. Very high end languages on their own platform, like Java and Mono, not only implement massive complexity, but rely on a back-end that may lose its security due to bugs. Platform languages may also be interpreted or runtime generated, in which case they may require certain protections like PaX' strict NX policy to vanish; in some cases these models (as an implementation flaw) also don't work well with strict mandatory access control policies under systems like SELinux.
Face it. C is the best language all around for speed, security, portability, and maintainability. Assembly only brings excessive speed at the cost of all else; and higher level languages sacrifice both speed and real security (despite their handwaving claims of built-in security) at varying degrees for portability, speed of coding, and maintainability. Even script languages working inside a real tightly secured system would more easily fall victim to cross-site scripting, the injection of script into the interpretation line; under such a system, any similar attack is impossible in a C program.
On a side note, I'd love to see a RAD for C. Think Visual Basic 6.0, but open source, using C/GTK+. Glade comes close. . . .
Re:Wrong solution for solving heap problems. (Score:3, Insightful)
Permanently dark goggles -- Java, etc.
Goggles that tint when bright light happens -- C
The argument "C is insecure, we should use [insert odd language here]" is like saying, "Goggles that auto-tint are bad because they're not dark all the time, and you could see a bright flash."
The argument I made would be more like, "The auto-tint goggles are better because they take the basic need -- you have to see -- and allow that, but still protect you from the blinding flash of we
Re:Wrong solution for solving heap problems. (Score:3, Informative)
Actually, x86 is one of the last into town with "NX bit" functionality. POWER (and PPC I guess), PA-RISC, SPARC, Alpha, etc. on the big iron have had this feature as a standard part of their architecture (along with the OSes that run on them) for bloody ages now... on those CPUs, even Linux has had hardware support since before x86 got NX support.
http://en.wikipedia.org/wiki/NX_bit [wikipedia.org]
Although this can stop execution of arb
Re:Wrong solution for solving heap problems. (Score:3, Informative)
1. OpenBSD already makes use of NX-bit protection (they call it W^X).
2. OpenBSD previously implemented stack protection (propolice + other stuff).
3. OpenBSD even implements W^X on non-NX-bit i386 through other techniques.
The last point is important since most i386 machines don't have the NX-bit, and that will be the case for many years to come.
Unnecessary when using languages that solve this p (Score:3, Insightful)
It is therefore, in my opinion, less optimal (from a security perspective) to use something like "C" for a complicated app like sendmail, a web server, or a secure shell daemon (sshd) than it is to use a language like "LISP".
Re:Unnecessary when using languages that solve thi (Score:2)
You forgot to add Java to your list of programming languages that go out of their way to assist in preventing and avoiding both buffer and heap overflows/errors (i.e. bounds checking and similar technology).
Re:Unnecessary when using languages that solve thi (Score:2)
You are a taxi driver. Your job is to deliver people from point A to point B, safely and quickly.
You can choose from two cars.
Car A is such that if you make an error when driving, it will do something bad -- perhaps even blow up, killing the passengers. Furthermore, a driver can accidentally drive the car into an obstacle, ruining everything.
Car B is the same as the first, except there's nothing the driver
Re:Unnecessary when using languages that solve thi (Score:2)
- Car A is such that if the taxi driver makes an error, the car explodes, killing everyone in it instantly.
- Car "C" is such that the taxi driver is a serial killer who picks up passengers, hunts down, tortures, and kills their immediate families, and then slowly, painfully kills the passengers.
Neither is particularly desirable, but I'd still take car A.
Not really unnecessary (Score:2)
I agree, and I'd go one step further to say that we're starting to get into a situation where programmers are learning and using languages (such as Java) that don't allow this particular kind of sloppy coding. The problem is that many of these programmers aren't even aware of the concept of a buffer overflow, let alone how to actively detect or prevent it.
I happen to be a fan of Java, b
Re:Unnecessary when using languages that solve thi (Score:2)
Re:Unnecessary when using languages that solve thi (Score:2)
It is therefore, in my opinion, less optimal (from a security perspective) to use something like "C" for a complicated app like sendmail, a web server, or a secure shell daemon (sshd) than it is to use a language like "LISP".
The problem is that, for one reason (or randomness) or another, no mainstream daemons or operating systems or client applications are written in Lisp, Haskell, Sc
Re:Unnecessary when using languages that solve thi (Score:4, Funny)
Let me know when you release your Haskell version of Sendmail, and I'll switch over immediately.
Re:Unnecessary when using languages that solve thi (Score:2)
The problem has to do with returning closures with bound variables -- the nice thing about having a GC is that they will get freed, eventually.
If you do manual memory management, will you manage this yourself (for closures)? Or will you ban closures? If you are banning closures, you've lost one of the key benefits of using Lisp or ML --- you just got rid of lambda (in its most general form), and you'll just have
Finally Locking the Door (Score:5, Insightful)
After decades of these problems, and with these techniques long known in computer science, it's surprising that we're only now seeing them deployed in popular OSes like OpenBSD. Hopefully the open nature of OpenBSD and other OSS OSes will see them tested for winning strategies quickly, and widely adopted.
Re:Finally Locking the Door (Score:2)
Also Worth Mentioning (Score:5, Informative)
Heap protection? (Score:3, Funny)
Linux Had A Spec For This Ages Ago (Score:2, Funny)
Heap Protection vs. Managed Code (Score:2)
Re:Heap Protection vs. Managed Code (Score:2)
Re:Heap Protection vs. Managed Code (Score:2)
Re:Heap Protection vs. Managed Code (Score:3, Interesting)
You could probably also use the MMU to reduce pauses in the gc... you determine what objects are unreachable in the background and using the dirty bit you can tell which pages may have referen
This is how Electric Fence works. (Score:5, Interesting)
It may be a legitimate invention -- it is cited as prior art in an AT&T patent. This is also the first known example of a prior open source publication causing a patent filer to cite it. AT&T also removed a claim from the patent that my work invalidated. Just search for "Perens" in the U.S. patent database to find the patent.
We don't run it on production programs because of its overhead. To do this sort of protection exhaustively, it requires minimum two pages of the address space per allocation: one dead page between allocations and one page allocated as actual memory. This is a high overhead of page table entries, translation lookaside buffers, and completely destroys locality-of-reference in your application. Thus, expect slower programs and more disk-rattling as applications page heavily. If you are to allocate and release memory through mmap, you get a high system call overhead too, and probably a TLB flush with every allocation and free.
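Boiled down, the mechanism Bruce describes looks roughly like this (a sketch, not Electric Fence's actual code; alignment handling is omitted for brevity):

```c
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Electric-Fence-style allocation: at least two pages of address
 * space per allocation -- live page(s) plus one dead page -- with
 * the object pushed up against the guard so overruns fault. */
void *efence_malloc(size_t n)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t live = (n + page - 1) / page * page;  /* round up to pages */
    char  *base = mmap(NULL, live + page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON, -1, 0);
    if (base == MAP_FAILED) return NULL;
    mprotect(base + live, page, PROT_NONE);      /* the dead page     */
    return base + live - n;      /* object ends exactly at the guard  */
}

/* Freeing unmaps everything, so use-after-free faults too -- and
 * each free is another system call and likely TLB flush, as noted. */
void efence_free(void *p, size_t n)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t live = (n + page - 1) / page * page;
    char  *base = (char *)p + n - live;
    munmap(base, live + page);
}
```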
Yes, it makes it more difficult to inject a virus. Removing page-execute permission does most of that at a much lower cost - it will prevent injection of executable code but not interpreter scripts.
I don't think the BSD allocator will reveal more software bugs unless the programmers have not tested with Electric Fence.
Bruce
Re:This is how Electric Fence works. (Score:3, Insightful)
I think that's exactly the problem though. There are users out there installing applications they found somewhere, by somebody who may or may not have bothered to use a good debugger. This will prevent those unknown bad apps from fouling up the system.
I'm still on OpenBSD 3.7. I haven't tried the 3.8 builds, but I'm hoping the overhead from this won't be too bad.
Re:This is how Electric Fence works. (Score:4, Insightful)
The OpenBSD allocator has already revealed a number of software bugs (in X11, in ports, they lurk everywhere). Some of the bugs found were years old. That's the point of the testing process in the OpenBSD release cycle.
I think you're missing the point behind the integration of these technologies into OpenBSD. The idea is that they are always on, with an acceptable performance hit, so that your day-to-day programs can be protected and, most crucially, used under them. Not sitting in a debug environment getting the limited regressions and unit tests that the particular programmer felt like writing (and if I want that, I run it under Valgrind, which has a near-miraculous tendency to find lurking bugs).
And considering them in isolation is also dangerous. When you combine the address randomization, W^X, heap protection, propolice canaries and local variable re-ordering, you're left with a system that has accepted a reasonable performance hit in return for a large amount of protection against badly written code. Sprinkle in regular audits, timely releases every 6 months to keep our users up-to-date on stable systems, and a 'grep culture' to hunt down related bugs in the source tree when a bug does strike.
As others have pointed out, other "hero projects" have stuck bits and bobs into their respective distributions. But how many have had the discipline to follow through, maintain and integrate their patches, test the fallout and release a complete OS with thousands of third-party packages year after year... probably only Microsoft, but the first thing they do at the sign of incompatibility is to turn the protection off. Oh well
Re:This is how Electric Fence works. (Score:3, Interesting)
/me can't resist (Score:2)
Performance? (Score:3, Interesting)
Application heap allocation has "traditionally" been fairly inexpensive unless the heap has to be grown (update a couple of free block pointers/sizes) and the cost of growing the heap (which requires extending the virtual address space and therefore fiddling with page tables which would on a typical CPU require a mode change) is mitigated by allocating more virtual address space than is immediately needed.
If free space is always unmapped, then each block allocation will require an alteration to the page tables, as will each deallocation. Not to mention that this could cause the page-translation hardware to operate sub-optimally, since the range of addresses comprising the working set will constantly change.
If most allocations are significantly less than a page size, then the performance impact may be minimal since whole pages will rarely become free, but if allocations typically exceed a page size, that would no longer be true. If the result is that some applications simply implement their own heap management to avoid the overhead, then you've simply increased the number of places that bugs can occur.
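A crude way to see the cost being described: time small malloc/free pairs against a map/unmap per allocation, which approximates what always-unmapped free space implies for page-sized requests (an illustration, not a rigorous benchmark; numbers vary by system):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/mman.h>

#define N 100000

static double secs(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;

    /* Heap path: free-list bookkeeping, rarely touches page tables. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        void *p = malloc(4096);
        free(p);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("malloc/free:  %f s\n", secs(t0, t1));

    /* Unmapped-free path: every cycle is two system calls plus
     * page-table updates (and, on free, likely a TLB flush). */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON, -1, 0);
        munmap(p, 4096);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("mmap/munmap:  %f s\n", secs(t0, t1));
    return 0;
}
```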
ElectricFence (Score:2)
http://perens.com/FreeSoftware/ElectricFence/ [perens.com]
If this is the case, it's great that such a feature should go into an OS by default. I personally love anything that gives me confidence in the implementation of any applications I write, especially as this type of technique makes debugging much easier.
Re:cool (Score:4, Insightful)
Just like...
Windows 95 was more secure than Windows 3.1
Windows 98 was more secure than Windows 95
Windows NT was much more secure than Windows 98
Windows 2000 was the mother of all security
Windows XP, this time we got it right
Windows XP SP2 this time we really really got it right, promise, cross my heart and hope to die
Windows 2003, most secure server OS ever built!
Windows VISTA, even better than the world's best system ever built; this time I'll put my mother up here on this 100-foot pole, and if I'm wrong may she fall down into that pit of crocodiles!
Until I have a real-life experience of good Windows security, I will tend to think back and remember all the former promises that have gone down the drain. Today you expect Microsoft to promise things that aren't really true.
Re:cool (Score:3, Interesting)
Yes, plenty, and maybe even most of their promises about being a generally secure system are complete and utter rubbish. However, I'm willing to bet that each of their OSes are more secure than the last one. The problem is that they still leave plenty of holes open when they do things like (to point out the landmark example) weld the web browser to the kernel. I know that most people crack windows because it's easy, but while
Re:cool (Score:2, Informative)
For about the 35,348th time: no, no they don't. IE has nothing to do with the kernel. Go and learn what a kernel is. Hope that helps, have a nice day. :-)
Re:cool (Score:3, Funny)
Re:My Windows XP has heap protection! (Score:5, Informative)
This is more. It looks like they are adding extra 'tripwire' pages to the heap, so if an attacker manages to write to part of the heap they shouldn't, there's a good chance they'll hit a tripwire and be detected.
Re:My Windows XP has heap protection! (Score:2)
I think they did add heap protection at the same time, although I hear it's been "broken" (although in practical terms I've no idea).
VBLinux (Score:5, Funny)
For real security, don't use C.
I am rewriting Linux in Visual Basic 6.0.
I am going to call the distro VBLinux.
Re:For Real Security (Score:2)
Re:For Real Security (Score:3, Interesting)
Do you also advocate four times the memory usage and half the speed? Do you blame the language or the speaker when they can't formulate a proper sentence? How about we just teach the programmers better instead of bitching about the tools they use.
If half the C coders out there knew the differences between stack, heap, and namespaces, we would not even be debating this issue. Don't blame the coders, blame the universities.
Enjoy,
Comment removed (Score:4, Insightful)
Re:This is why I couldn't use OpenBSD exclusively. (Score:5, Informative)
You gotta remember, the project doesn't do it for outsiders; what they do is for themselves. They want security and are willing to pay in performance and ease of use to get it. It's like a mantra for them: never take the path of least resistance.
If this loses like 5 or 10 percent of its performance on my machines I won't mind; it's another layer of protection, and I like having it and am fine with the cost -- faster hardware isn't that expensive. If something I run crashes, I will report it to the people that wrote it, telling them that I found a problem via OpenBSD's malloc; maybe they'll even devote an old test box to checking for bugs on it.
If OpenBSD was trying to be a Linux distribution then we'd not have most of the good stuff that makes OpenBSD unique.
Re:Whatever happened to segmentation? (Score:5, Funny)
2005 self would counter with, "Yeah, the pointers will be bigger than they used to be, but you program in high-level languages now, so you don't ever worry about that. It's the compiler's problem."
1987 Sloppy would say, "But I'm going to write a compiler!"
2005 Sloppy would say, "You fuckwit, you never got anywhere on that project. You barely even started it. Too much time fucking around with graphics and genetics."
1987 Sloppy would say, "But, but, it's not fair! Segmentation is an x86 thing. Everyone knows that in the future, we'll all be using 68k. 68k doesn't do segmentation."
2005 Sloppy would sigh.
1987 Sloppy would say, "Oh come on. There's no way people are still using x86 in the 21st century, or even in the 1990s. No fucking way."
2005 Sloppy would just shrug. There's nothing to do in a situation like this. There's nothing you can say. They'll never believe you.