
Heap Protection Mechanism 365

An anonymous reader writes "There's an article by Jason Miller on innovation in Unix that talks about OpenBSD's new heap protection mechanism as a major boon for security. Sounds like OpenBSD is going to be the first to support this new security method."
  • Slowdown? (Score:4, Informative)

    by (1+-sqrt(5))*(2**-1) ( 868173 ) <1.61803phi@gmail.com> on Monday October 03, 2005 @10:38AM (#13703959) Homepage
    Theo continues:
    A number of other similar changes which are too dangerous for normal software or cause too much of a slowdown are available as malloc options as described in the manual page.
    Id est, they stopped before reaching a Java-like retardation.
    • OpenBSD Goals (Score:3, Interesting)

      by RAMMS+EIN ( 578166 )
      Yep. After all, the goal of the OpenBSD project is not simply to be the most secure operating system ever. The goal is to provide the best security along with a number of other goals, such as running Unix software, achieving good performance, and providing a good (according to the stated goals, even "the best") development platform.
    • Re:Slowdown? (Score:5, Informative)

      by Anonymous Coward on Monday October 03, 2005 @12:53PM (#13705320)
      Ho hum.

      http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html [ibm.com]

      Malloc is slow. Per studies, 20-30% of CPU time is wasted on memory management.

      I haven't seen that level of retardation in JVMs since... oh... 1996?

      But yeah, keep thinking you can do it better. Whatever. In the meanwhile, the rest of the world moves on.

      • Re:Slowdown? (Score:5, Insightful)

        by ergo98 ( 9391 ) on Monday October 03, 2005 @01:20PM (#13705604) Homepage Journal
        Malloc is slow. Per studies, 20-30% of CPU time is wasted on memory management.

        Interesting paper - thanks for the link.

        However, I find the conclusions a bit dubious. For instance, the claim that allocations are "free" only holds up front: garbage-collected languages put all of the work on the back-end, giving the illusion of a free front-end (whereas non-GC languages put the hard work on the front-end). Yet every object you create on the heap is more work the heap walker has to do on each GC pass to detect orphaned objects - a non-trivial task. It then has to free all of those objects, and because of the "free" allocation it has to move the surviving objects and rebase every object pointer in the application. It doesn't take a genius to realize that is a significant task in an application of real-world size.

        I think the proof is empirical - how many high-performance, memory-intensive Java applications are there?
        • Re:Slowdown? (Score:4, Interesting)

          by liloldme ( 593606 ) on Monday October 03, 2005 @02:24PM (#13706185)
          It doesn't take a genius to realize that is a significant task in an application of real-world size.

          And anyone who's run a JVM knows about the price of this task -- yes GC takes time.

          However, as I understood the article, the author was making the point that the way most C programmers manage memory tends to make the task more time-consuming than necessary. Therefore, relying on a known, optimized implementation rather than reinventing the wheel every time may be preferable. After all, it is just the VM implementor who needs to understand how to optimize the memory management, not the application developers. So yes, where the time is spent is shifted, but the total execution time spent on memory management can also be reduced, because the task is managed differently.

          As for the specific details of this paper, they're basically discussing how to determine which objects can be safely allocated on the stack instead of the heap, and can therefore be discarded without the usual bookkeeping required of a heap GC.

          how many high-performance, memory-intensive Java applications are there

          Java is so widely used on the server side and in middleware that it cannot be difficult to come up with examples -- Tomcat, J2EE app servers, etc. eBay, for instance, advertises clearly on its front page that it is powered by Sun's Java technology. There are individual Java systems that manage millions of transactions daily, and there must be thousands of systems out there doing this every day with Java.

        • Re:Slowdown? (Score:3, Insightful)

          by Krach42 ( 227798 )
          Well, the same issue exists with copy-on-write. COW implementations give an impression of faster copying because they defer all the copying until the first write to that information. It turns out that in some cases you never need to make a fresh copy: sometimes you just copy something to use it, don't damage it, and return. In those cases COW gains you time, since you only copy when you must. You just have to deal with blocking during the first write to the copy. [...]
      • Re:Slowdown? (Score:4, Interesting)

        by eric76 ( 679787 ) on Monday October 03, 2005 @02:28PM (#13706220)
        About 15 years ago I started using a technique to improve performance when doing lots and lots of very short term mallocs.

        Essentially, I'd create a large ring buffer of malloced temporary buffers of some standard length. Any time a temporary buffer was needed, I'd grab the next one in the ring.

        Before the buffer was provided to the function asking for it, the length would be checked. If the requested length was longer than the current length, the buffer would be freed and one of at least the proper length would be allocated. (I normally allocated my buffers in byte multiples of some fixed constant, usually 32.)

        The idea was that by the time it was reused, what was already in the buffer was no longer needed. To achieve that, I'd estimate how many buffers might be needed in the worst case and then multiply that number by 10 for safety's sake.

        My primary use of this was when doing enormous numbers of allocations of memory for formatting purposes. The function doing the formatting would request a buffer large enough to hold whatever it would need, write the formatted data into the buffer, and then return a pointer to the buffer. The calling function would simply use the buffer and never have to worry about freeing it.

        The performance results were superb except in the very simplest cases where you allocated the buffers without ever using them.

        I've never known anyone else who used this kind of approach, although I've shown it to a large number of people.
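
        (A minimal sketch of the ring-of-buffers idea described above, in C; the names, sizes, and the temp_buf() helper are illustrative, not from the original post:)

        #include <stdlib.h>

        #define RING_SLOTS 1024  /* worst-case estimate x10, per the post */
        #define GRANULE      32  /* round sizes up to a multiple of this  */

        static struct { char *buf; size_t len; } ring[RING_SLOTS];
        static size_t ring_pos;

        /* Hand out the next temporary buffer in the ring, growing it if the
         * request is larger than what the slot currently holds. The caller
         * never frees: the slot is simply reused once the ring wraps around. */
        char *temp_buf(size_t want)
        {
            size_t need = (want + GRANULE - 1) / GRANULE * GRANULE;
            size_t i = ring_pos++ % RING_SLOTS;

            if (ring[i].len < need) {
                free(ring[i].buf);              /* free(NULL) is harmless */
                ring[i].buf = malloc(need);
                ring[i].len = ring[i].buf ? need : 0;
            }
            return ring[i].buf;
        }

        A formatting helper can then write into temp_buf(n) and return the pointer; the caller uses the result and never worries about freeing it, exactly as described.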
        • Re:Slowdown? (Score:3, Informative)

          by Dan Farina ( 711066 )
          Many people have used this sort of approach. It is called "pooling."

          Its use dates back to the first GC'd language, LISP, and was a common way to reduce garbage generation.
          • Re:Slowdown? (Score:3, Interesting)

            by TheRaven64 ( 641858 )
            And it can seriously increase the speed of a bit of code. I recently profiled some of my code and found that it was spending around 40% of its time in malloc and free. I created a very simple memory pool (no ring buffers, just a simple linked list with insertions and removals at the same end, nice and fast) for each frequently-allocated object type, and saw a huge speed increase.
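
            (A minimal sketch of such a per-type pool -- a LIFO free list in C; the pool_* names are illustrative, not from the original post:)

            #include <stdlib.h>

            /* One pool per frequently-allocated object type. Freed objects
             * are pushed onto the free list; allocation pops from it,
             * falling back to malloc only when the list is empty. Requires
             * objsize >= sizeof(struct pnode). */
            struct pnode { struct pnode *next; };
            struct pool  { struct pnode *free; size_t objsize; };

            void *pool_alloc(struct pool *p)
            {
                if (p->free) {
                    void *obj = p->free;
                    p->free = p->free->next;
                    return obj;
                }
                return malloc(p->objsize);
            }

            void pool_free(struct pool *p, void *obj)
            {
                struct pnode *n = obj;
                n->next = p->free;
                p->free = n;
            }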
  • new method? (Score:4, Informative)

    by iggymanz ( 596061 ) on Monday October 03, 2005 @10:40AM (#13703973)
    Other OSes have had heap protection mechanisms, even one from Microsoft.
    • Re:new method? (Score:4, Informative)

      by Intron ( 870560 ) on Monday October 03, 2005 @10:51AM (#13704049)
      ISTR that MS tried doing immediate free and it broke some programs that depend on the memory still being around after being freed, so they made it optional. Sounds like OpenBSD is playing hardball here.
      • by hey ( 83763 )
        I don't know if it was on purpose, but your sig -- which mentions Intron [wikipedia.org] (aka junk DNA) -- is very apt. OSes can put introns between exons (useful DNA -- the useful stuff on the heap) to detect badly behaving apps!

        In other words, so-called "junk DNA" may actually have a use...
        HEAP PROTECTION ;-)

        • In some cases introns serve a known purpose in indexing for the enzymes which work with DNA. Junk DNA is the stuff with no known purpose at all.

          You're still correct. It is heap protection in an evolutionary way. Heap protection on computers seeks to safeguard the data as it is. Junk DNA accepts that things are going to get corrupted and seeks to make it statistically less likely for the important parts to get corrupted.

          Thanks for that train of thought. :)
          • You're still correct. It is heap protection in an evolutionary way. Heap protection on computers seeks to safeguard the data as it is. Junk DNA accepts that things are going to get corrupted and seeks to make it statistically less likely for the important parts to get corrupted.

            Just being a "dummy target" seems like an inefficient use for extra DNA. I would have expected something along the lines of ECC like reed-solomon coding to have evolved. Kinda shoots down the "intelligent design" theory too, unless
    • Re:new method? (Score:5, Informative)

      by JohanV ( 536228 ) on Monday October 03, 2005 @11:20AM (#13704300) Homepage
      You mean Data Execution Prevention [microsoft.com] from Microsoft? OpenBSD has had that for a long time already, only they named it W^X [neohapsis.com].

      This new feature from OpenBSD is the use of guard pages and the immediate freeing of memory. In essence this means that both bad programming and exploit attempts are much more likely to result in a core dump than in some unidentifiable, non-reproducible corruption or a working exploit. Many people consider that a good thing, because it will result in bugs being found in userland applications that would otherwise have stayed unnoticed. So even if you don't use OpenBSD yourself, this helps make your system more secure and better. And if you are running OpenBSD, there is no need to worry too much about the stability of this feature: it was actually enabled shortly after the 3.7 release and has been in every snapshot on the way to 3.8.

      And I have to agree with the author that the best thing is that we get all the goods without ever having to switch them on!
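
      (A minimal illustration, in C, of what immediate unmapping changes for a buggy program; under a traditional malloc the second access often silently reads stale data, while an allocator that returns freed pages to the kernel turns it into a core dump:)

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      int main(void)
      {
          char *p = malloc(8192);   /* large enough to occupy its own pages */
          if (!p)
              return 1;
          strcpy(p, "hello");
          free(p);

          /* Use-after-free bug: with freed pages unmapped this dies with
           * SIGSEGV, so the bug is caught instead of hidden. */
          printf("%s\n", p);
          return 0;
      }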
      • Re:new method? (Score:5, Informative)

        by ArbitraryConstant ( 763964 ) on Monday October 03, 2005 @12:59PM (#13705387) Homepage
        "You mean the Data Execute Protection from Microsoft? OpenBSD has had that for a long time already, only they named it w^x."

        They also didn't need the per-page execute bit to do it: you need a fairly new machine to get Microsoft's protection, but my 486 firewall has it. They also have stack protection, which is helpful because even if the heap and stack aren't executable, you can overwrite return addresses or function pointers and have them point to existing code that can be tricked into doing something malicious.
  • Hope (Score:4, Interesting)

    by Anonymous Coward on Monday October 03, 2005 @10:40AM (#13703974)
    Let's hope it's not as broken as Microsoft's attempt [maxpatrol.com] in SP2.

    But why did it take so long to implement?
    • Maybe the reason is precisely that they tried to make sure it's not as broken as Microsoft's attempt in SP2.
    • by pla ( 258480 )
      Let's hope it's not as broken as Microsoft's attempt in SP2.

      In fairness, I don't know that we can really blame Microsoft for that one...

      XP's memory protection works just fine (assuming the CPU supports NX pages) - the concept takes almost no thought to implement. The problem, however, arises from 99.999% of existing software not caring in the least about the separation of code and data. Usually that doesn't cause any problems, but when a "clever" (I put that in quotes as the bad-idea-of-the-day, but [...]
      • The NX flag is basically opt-in: you can request pages that aren't marked with it. In an OS that flags everything as NX by default, you'd need to rebuild your application (although XP allows you to disable NX on a process-by-process basis). The point of NX isn't to prevent stuff like Scheme from working, it's to prevent exploits or accidents.
  • by Sv-Manowar ( 772313 ) on Monday October 03, 2005 @10:43AM (#13703997) Homepage Journal
    Kudos to the OpenBSD folks for being at the cutting edge in terms of implementing these security features. Where they lead, surely others will follow, and we'll see these features become commonplace. As their focus is security, it's understandable that they have more incentive to lead in these areas than more mainstream Linux distributions.
    • than more mainstream Linux distributions

      I know it seems strange...but OpenBSD isn't a Linux distribution at all.

      I know it's hard to wrap your head around. It's one of those things you just have to accept. In addition:
      -deep down, cows are not people too. So you can eat 'em, I guess.
      -neither are cats or dogs. So don't force them to wear clothing.
      -neither is information. So it doesn't care about being free or anything else.
      -"Windows" is somehow both an operating system and a window manager. You're not supposed [...]
  • by Anonymous Coward on Monday October 03, 2005 @10:47AM (#13704021)
    When my application needs a chunk of memory, it sends a specially crafted HTTPS request to my bank, debits the account, sends a fax to the local computer shop who then sends a tech over to install the DIMMs.

    When the application is finished with the memory, it sends a FAX to the local electronics recycling facility who sends out a tech to remove the DIMMs and melt them down into whatever.

    Using this method of heap memory allocation (I call it "ACAlloc" for "Anonymous Coward Alloc") has been 100% effective and I have NEVER had a heap overflow exploit in any of my code.

    Yes, it's slow, but I am secure.

    ...And I'm running the most up-to-date 80386 Linux 0.97 kernel. TDz.

  • by Groo Wanderer ( 180806 ) <{charlie} {at} {semiaccurate.com}> on Monday October 03, 2005 @10:51AM (#13704047) Homepage
    Ok, we start out with 'protection', then we move to 'a heap' of protection, most assuredly to be followed by 'a whole heap' of protection. I can only see this spiral continuing until Bill Gates himself gets up on stage at CES in an Elvis suit promising 'a hunka- hunka- burnin protection'. *SHUDDER* Time to take a cold shower.

                  -Charlie
    • Bill Gates himself gets up on stage at CES in an Elvis suit promising 'a hunka- hunka- burnin protection'. *SHUDDER* Time to take a cold shower.

      You *need* a cold shower? Hell, to me, that image *was* the equivalent of a cold shower!
  • "And if we have to sacrifice a little usability on the way there, then so be it."

    Nice dream, meanwhile in the real world both users and most coders (if they dare to admit it) will NOT sacrifice usability or ease of coding for security measures that (in their minds) have nothing to do with their application. Unless, that is, they're forced to, either by company policy or by restrictions in the OS. If there are restrictions in the OS, then IT tech leads might start to ask "well, I can do this in OS A, why won't OS B let [...]
    • Totally wrong (Score:2, Insightful)

      by Anonymous Coward
      What they restrict here is extremely shitty programming. What's the point of allowing out-of-bounds memory accesses? Fewer crashes, because the system does not notice them? This is a very bad idea. If you don't know how to handle arrays and you don't know any other means to restrict yourself from doing something stupid, OpenBSD shows the best way to do it (as a last resort).
    • by Daniel_Staal ( 609844 ) <DStaal@usa.net> on Monday October 03, 2005 @11:08AM (#13704196)
      Actually, for OpenBSD, it will work. And has.

      The reason is very simple: there will always be some applications where the security of the system is paramount -- where it does have to do with the application. OpenBSD caters directly to those people and those applications.

      Now, you are right, this means OpenBSD is likely to never get as large a following as Linux or even FreeBSD, but they honestly don't care. They are making a system that fits their goals, and security is among the top goals.

      This actually allows security to spread: Once these changes are on a 'major' system, applications start to be ported to work with them, which means the changes can be ported to other systems.

      OpenBSD is a security testing ground. If its features get in your way, you use a different system. This won't be the first time that advice has been applicable.
  • by Anonymous Coward
    OpenBSD's "My Heap":

    Lookin' at my heap, heap
    You can look but you can't touch it.
    If you touch it, I'ma start some drama.
    You don't want no drama.
    [...]
    My heap, my heap, my heap, my heap.
  • Hm... old technique? (Score:5, Interesting)

    by archeopterix ( 594938 ) * on Monday October 03, 2005 @10:53AM (#13704064) Journal
    OK, the article is light on technical details, but it seems that they are using guard pages. Guard pages aren't exactly shiny new; Efence [die.net] has been using them for a long, long time.
  • Could this technology be implemented in the Microsoft Windows systems to be more secure than Linux?

    • Could this technology be implemented in the Microsoft Windows systems to be more secure than Linux?

      It certainly could be implemented, but it would have the slight drawback that a huge proportion of apps would stop working.

      It's going to break a lot of stuff on OpenBSD as well, but because of their audience they can get away with, e.g., telling you not to run Apache because it's insecure. Also, the OSS apps that break because they assume a specific memory management model will get fixed. No-one is going to be a [...]
  • by MuMart ( 537836 ) on Monday October 03, 2005 @10:59AM (#13704112) Homepage
    Sounds like a heap that returns unused pages to the system like this would help the problem described by John Moser in the Gnome Memory Reduction Project here [gnome.org].

    Is it really true that the standard GNU/Linux heap implementation holds onto pages like this when it becomes fragmented? That sounds really primitive to me.

    • Re: (Score:2, Interesting)

      Comment removed based on user account deletion
      • by joib ( 70841 )

        I don't know if GNU malloc uses mmap() or brk() for its allocation, but in both cases the small memory chunks that the user allocates are taken from bigger, contiguous blocks of memory.


        It uses mmap() for big allocs (IIRC the threshold is 4 MB) and brk() for smaller ones.


        One solution would be to use different arenas, or memory regions, for allocation. For instance, every window might have its own allocation region, so when you close the window/document, the whole memory BLOCK is freed.


        Something like memo[...]
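
        (A minimal sketch of the arena idea, in C; the arena_* API is invented for illustration. Objects are never freed individually; closing the window frees the whole arena at once:)

        #include <stdlib.h>

        struct chunk { struct chunk *next; size_t used, cap; char mem[]; };
        struct arena { struct chunk *head; };

        void *arena_alloc(struct arena *a, size_t n)
        {
            struct chunk *c = a->head;
            n = (n + 15) & ~(size_t)15;          /* keep results aligned */
            if (!c || c->cap - c->used < n) {    /* need a fresh chunk */
                size_t cap = n > 65536 ? n : 65536;
                c = malloc(sizeof *c + cap);
                if (!c)
                    return NULL;
                c->next = a->head; c->used = 0; c->cap = cap;
                a->head = c;
            }
            void *p = c->mem + c->used;
            c->used += n;
            return p;
        }

        void arena_free_all(struct arena *a)     /* e.g. on window close */
        {
            struct chunk *c = a->head, *next;
            for (; c; c = next) { next = c->next; free(c); }
            a->head = NULL;
        }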
  • by joib ( 70841 ) on Monday October 03, 2005 @10:59AM (#13704114)
    The upcoming GCC 4.1 release will include a stack protector [gnu.org]. Basically it's a reimplementation of the old propolice patch.

    Hopefully mainstream distros that have been wary of propolice will start using this new feature. And perhaps glibc malloc will borrow a few tricks from this new OpenBSD malloc too.
  • DEP from Microsoft is only enabled by default on some system binaries. Here is how to enable it for everything: http://www.microsoft.com/technet/security/prodtech/windowsxp/depcnfxp.mspx [microsoft.com]

    My CPU doesn't support DEP in hardware, so I imagine the software-based method of doing this will create quite a speed hit. Anybody have any experience with turning on DEP for all programs?
    • Well, I've turned it on now. No noticeable performance hit, and no mysterious application failures. Just compiled a Visual C++ program and it ran fine. Same for G++ on CYGWIN.

      Also, I have not been hacked anytime in the last ~5 minutes, whatever that's worth (but would I know?).

      As an aside, I read the paper on the Microsoft DEP flaw a few months ago and wasn't that impressed. It looks very hard to exploit. And since DEP is an added protection mechanism, the existence of a small, hard-to-exploit flaw [...]
  • by master_p ( 608214 ) on Monday October 03, 2005 @11:06AM (#13704189)

    From the kerneltrap.org post:

    He explains that for over a decade efforts have been made to find and fix buffer overflows, and more recently bugs have been found in which software is reading before the start of a buffer, or beyond the end of the buffer.

    The solution that the kerneltrap.org post describes against buffer overflows is to:

    1. unmap memory as soon as it is freed, so as to cause a SIGSEGV when it is illegally accessed.
    2. have some free 'guard' space between allocated blocks.

    My opinion is that #1 will slow software down, although it will make it indeed more secure. #2 will make it more difficult to exploit buffer overflows, since the space between two allocated heap blocks will be random (and thus the attacker may not know where to overwrite data).

    Unless I have misunderstood, these solutions will not offer any real solution to the buffer overflow problem. For example, stack-based exploits can still be used for attacks. The solution shown does not mention usage of the NX bit (which is x86 specific). It is a purely software solution that can be applied to all BSD-supported architectures.

    Since all the problems relating to buffers (overflow and underflow) that have cost the IT industry billions of dollars are the result of using C, doesn't anyone think it is time to stop using C? There are C-compatible languages that allow bit manipulation but don't allow buffer overflows, e.g. Cyclone [wikipedia.org].

    • by bluefoxlucid ( 723572 ) on Monday October 03, 2005 @01:32PM (#13705699) Homepage Journal
      C is actually the most secure language currently, AFAICT. Languages with higher-level intrinsics (C++, Java, Basic, Mono, Objective-C, etc.) have a more complex implementation that may allow different exploit vectors; while languages with real-time interpretation or runtime code generation (Java or Mono with JIT) will wind up disabling things like strong memory protection policies (strict data/code separation -- code is data when generated at runtime) and may not provide their own backup buffer overflow protection.

      In the event of a screw-up on the part of the JIT or runtime programmer for any language, every program is instantly vulnerable, and all of this generic proactive security stuff is disabled, because this "secure language" doesn't work in an "inherently secure" environment, only a much weakened one. C's runtime is rather basic, as is its language; people still screw that up once in a while, but rarely.

      While these "shiney new secure languages" may boast "immunity to buffer overflows," their runtimes are still designed around other concepts that may leave holes. Look at this memory allocator and think about a bug in the allocator that smashes up its own memory before it gets everything set up; because the new protections aren't yet set in place, it'd be totally vulnerable at that point (no practical exploit of course). A bug that forgets to add guard pages (generates 0 guard pages every time) might occur too in one update. Now add to that something like Java or Mono-- interpreted or not, you're running on a whole -platform- instead of just a runtime. C++ instruments loads of back-end object orientation.

      So in short, C is a very basic language that has easily quantifiable attack vectors, and thus the system can be targeted around these for security. Several such enhancements exist, see PaX, GrSecurity, W^X, security in heap allocators, SELinux, Exec Shield, ProPolice. Higher level languages like C++ implement back-end instrumentation that ramps up complexity and may open new, unprotected attack vectors that are harder to quantify. Very high end languages on their own platform, like Java and Mono, not only implement massive complexity, but rely on a back-end that may lose its security due to bugs. Platform languages may also be interpreted or runtime generated, in which case they may require certain protections like PaX' strict NX policy to vanish; in some cases these models (as an implementation flaw) also don't work well with strict mandatory access control policies under systems like SELinux.

      Face it. C is the best language all around for speed, security, portability, and maintainability. Assembly only brings excessive speed at the cost of all else; and higher level languages sacrifice both speed and real security (despite their handwaving claims of built-in security) at varying degrees for portability, speed of coding, and maintainability. Even script languages working inside a real tightly secured system would more easily fall victim to cross-site scripting, the injection of script into the interpretation line; under such a system, any similar attack is impossible in a C program.

      On a side note, I'd love to see a RAD for C. Think Visual Basic 6.0, but open source, using C/GTK+. Glade comes close. . . .
    • The solution shown does not mention usage of the NX bit (which is x86 specific).

      Actually, x86 is one of the last into town with "NX bit" functionality. POWER (and PPC, I guess), PA-RISC, SPARC, Alpha, etc. on the big iron have had this feature as a standard part of their architecture (along with the OSes that run on them) for bloody ages now; on those CPUs, even Linux has had hardware support since before x86 got NX support.

      http://en.wikipedia.org/wiki/NX_bit [wikipedia.org]

      Although this can stop execution of arb[...]
  • by putko ( 753330 ) on Monday October 03, 2005 @11:11AM (#13704231) Homepage Journal
    Languages like Lisp, Haskell, Scheme and ML allow you to avoid buffer and heap overflows (assuming the language implementation is correct).

    It is therefore, in my opinion, less optimal (from a security perspective) to use something like C for a complicated app like sendmail, a web server, or a secure shell daemon (sshd) than it is to use one of the languages above.

    • ALL buffer and heap overflows in individual programs are the fault of bad programming, not bad programming languages. A programmer who is not competent enough to test their code, and who then blames the programming language when there is a problem, should not be coding. It is simply irresponsible.

      You forgot to add Java to your list of programming languages which go out of their way to assist in preventing and avoiding both buffer and heap overflows/errors (i.e. bounds checking and similar technology).
      • Although you can blame the programmer for writing a faulty application, I don't think that is relevant.

        You are a taxi driver. Your job is to deliver people from point A to point B, safely and quickly.

        You can choose from two cars.

        Car A is such that if you make an error when driving, it will do something bad -- perhaps even blow up, killing the passengers. Furthermore, a driver can accidentally drive the car into an obstacle, ruining everything.

        Car B is the same as the first, except there's nothing the driver [...]
        • I like your analogy. I'd change it so that the two cars are:

          - Car A is such that if the taxi driver makes an error, the car explodes, killing everyone in it instantly.

          - Car "C" is such that the taxi driver is a serial killer who picks up passengers, hunts down, tortures, and kills their immediate families, and then slowly, painfully kills the passengers.

          Neither is particularly desirable, but I'd still take car A. :)
      • ALL buffer and heap overflows in individual programs are the fault of bad programming, not bad programming languages.

        I agree, and I'd go one step further to say that we're starting to get into a situation where programmers are learning and using languages (such as Java) that don't allow this particular kind of sloppy coding. The problem is that many of these programmers aren't even aware of the concept of a buffer overflow, let alone how to actively detect or prevent it.

        I happen to be a fan of Java, b[...]

    • All languages allow you to avoid buffer and heap overflows if the language implementation is correct. Buffer and heap overflows are programming errors, not problems with a language. Some languages just have better facilities to force well-behaved code. You can add Java and C# to your list there by the way.
    • Languages like Lisp, Haskell, Scheme and ML allow you to avoid buffer and heap overflows (assuming the language implementation is correct).

      It is therefore, in my opinion, less optimal (from a security perspective) to use something like C for a complicated app like sendmail, a web server, or a secure shell daemon (sshd) than it is to use one of the languages above.


      The problem is that, for one reason (or randomness) or another, no mainstream daemons or operating systems or client applications are written in Lisp, Haskell, Sc[...]
    • You're totally right, dude.

      Let me know when you release your Haskell version of Sendmail, and I'll switch over immediately.
  • by Doc Ruby ( 173196 ) on Monday October 03, 2005 @11:13AM (#13704241) Homepage Journal
    I'm surprised that modern heaps still put writable data segments adjacent to executable code segments. Self-modifying code is rare enough that all code should be in read-only (except by privileged processes, like the kernel which sets it up and tears it down) segments, except when a process has privilege to write to its own code segment. Then its code segment should be a data segment, with other security features applied (that are too expensive to apply to every code segment). Generally then, segments are either writeable or executable. Data segments could still get overwritten, which could put unsafe values in unexpected variables (like "write to that file" instead of this file). But at least those insecure operations are limited to the operations programmed into the code, not arbitrary new code inserted into executables by buffer overflows.

    After decades of these problems, and with known techniques already established in computer science, it's surprising that we're only now seeing these techniques deployed in popular OSes like OpenBSD. Hopefully the open nature of OpenBSD and other OSS OSes will see them tested for winning strategies quickly, and widely adopted.
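
    (A minimal sketch, in C, of the writable-xor-executable discipline the parent describes, as a JIT would use it: a page is writable while code is generated, then flipped to executable with write permission dropped, so it is never both at once. Assumes a POSIX-ish system with MAP_ANON; error checks are omitted:)

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* x86 machine code for: mov eax, 42; ret */
        static const unsigned char code[] = { 0xb8, 42, 0, 0, 0, 0xc3 };

        /* Step 1: the page is data -- writable, not executable. */
        unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANON, -1, 0);
        memcpy(page, code, sizeof code);

        /* Step 2: flip to code -- executable, no longer writable. */
        mprotect(page, 4096, PROT_READ | PROT_EXEC);

        int (*fn)(void) = (int (*)(void))page;
        printf("%d\n", fn());   /* prints 42 on x86 */
        return 0;
    }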
  • by RAMMS+EIN ( 578166 ) on Monday October 03, 2005 @11:18AM (#13704278) Homepage Journal
    This presentation [openbsd.org] (by Theo de Raadt) gives a good overview of the security features in OpenBSD (beyond what's already outlined on the OpenBSD security page [openbsd.org]). It covers W^X, random stack displacements, random canaries to detect stack smashing, random library base addresses, random addresses for mmap and malloc operations, guard pages, privilege revocation, and privilege separation. One thing it doesn't cover is systrace [umich.edu].
  • by justforaday ( 560408 ) on Monday October 03, 2005 @11:18AM (#13704280)
    I call my heap protection mechanism "bumpers" : p
  • by Anonymous Coward
    But Linus doesn't like specs, so it got dropped!
  • I wonder if all these creative heap protection mechanisms (GCC 4.1, NX, etc.) will largely remove the need for managed code (i.e. Java and C#). If you can write in C++ and not worry about double-free()s, that's pretty neat. Maybe even a "disruptive technology".
    • If you are writing in C++, you should
      1. Not be using free in the first place. It's an error to code C++ as if it were C—use new and delete
      2. Not be doing explicit memory management. Use RAII (Resource Acquisition Is Initialization) practices. Not using RAII in C++ is an error. (Beyond resource handling nightmares, you aren't being exception safe without RAII.)
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Monday October 03, 2005 @11:45AM (#13704618) Homepage Journal
    Electric Fence explicitly allocates dead pages at the end (or configurably, the beginning) of buffers. It can also protect the memory immediately as it is freed. I think it was first published in 1987.

    It may be a legitimate invention - it is cited as prior art in an AT&T patent. This is also the first known example of a prior open source publication causing a patent filer to cite it. AT&T also removed a claim from the patent that my work invalidated. Just search for "Perens" in the U.S. patent database to find the patent.

    We don't run it on production programs because of its overhead. Doing this sort of protection exhaustively requires a minimum of two pages of address space per allocation: one dead page between allocations and one page of actual memory. This is a high overhead in page table entries and translation lookaside buffer slots, and it completely destroys locality-of-reference in your application. Thus, expect slower programs and more disk-rattling as applications page heavily. If you allocate and release memory through mmap, you get a high system call overhead too, and probably a TLB flush with every allocation and free.
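
    (A minimal illustration, in C, of the page-per-allocation scheme described above, using POSIX mmap/mprotect; this is not Electric Fence's actual code, and alignment of the returned pointer is ignored in this sketch:)

    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Place the block so it ends exactly at a page boundary, with an
     * inaccessible (PROT_NONE) dead page right after it: any overrun
     * past the end of the buffer faults immediately. */
    void *guard_alloc(size_t n)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t data = (n + page - 1) / page * page;

        char *base = mmap(NULL, data + page, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANON, -1, 0);
        if (base == MAP_FAILED)
            return NULL;
        mprotect(base + data, page, PROT_NONE);   /* the dead page */
        return base + data - n;                   /* push block to the end */
    }

    As described, that is at minimum two pages and two TLB entries per allocation, which is exactly where the overhead comes from.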

    Yes, it makes it more difficult to inject a virus. Removing page-execute permission does most of that at a much lower cost - it will prevent injection of executable code but not interpreter scripts.

    I don't think the BSD allocator will reveal more software bugs unless the programmers have not tested with Electric Fence.

    Bruce

    • I don't think the BSD allocator will reveal more software bugs unless the programmers have not tested with Electric Fence.

      I think that's exactly the problem though. There are users out there installing applications they found somewhere, by somebody who may or may not have bothered to use a good debugger. This will prevent those unknown bad apps from fouling up the system.

      I'm still on OpenBSD 3.7. I haven't tried the 3.8 builds, but I'm hoping the overhead from this won't be too bad.

    • by stab ( 26928 ) on Monday October 03, 2005 @03:00PM (#13706467) Homepage
      I don't think the BSD allocator will reveal more software bugs unless the programmers have not tested with Electric Fence.

      The OpenBSD allocator already has revealed a number of software bugs (in X11, in ports; they lurk everywhere). Some of the bugs found were years old. That's the point of the testing process in the OpenBSD release cycle.

      I think you're missing the point behind the integration of these technologies into OpenBSD. The idea is that they are always on, with an acceptable performance hit, so that your day-to-day programs can be protected and, most crucially, used under them -- not sitting in a debug environment getting the limited regressions and unit tests that a particular programmer felt like writing (and if I want that, I run it under Valgrind, which has a near-miraculous tendency to find lurking bugs).

      And considering them in isolation is also dangerous. When you combine the address randomization, W^X, heap protection, propolice canaries and local variable re-ordering, you're left with a system that has accepted a reasonable performance hit in return for a large amount of protection against badly written code. Sprinkle in regular audits, timely releases every 6 months to keep our users up-to-date on stable systems, and a 'grep culture' to hunt down related bugs in the source tree when a bug does strike.

      As others have pointed out, other "hero projects" have stuck bits and bobs into their respective distributions. But how many have had the discipline to follow through, maintain and integrate their patches, test the fallout and release a complete OS with thousands of third-party packages year after year... probably only Microsoft, but the first thing they do at the sign of incompatibility is to turn the protection off. Oh well :)
  • From the article:
    " The more hurdles that one has to jump through for good security, the less likely people will go through the trouble. OpenBSD allows even the most inexperienced users to take advantage of these technologies without any effort. "
    Can they fix the government, too? It sounds like the same problem.
  • Performance? (Score:3, Interesting)

    by cardpuncher ( 713057 ) on Monday October 03, 2005 @11:52AM (#13704686)
    I'd be interested to know what the performance impact of this is - exactly what counts as "too much of a slowdown".

    Application heap allocation has "traditionally" been fairly inexpensive unless the heap has to be grown (update a couple of free block pointers/sizes) and the cost of growing the heap (which requires extending the virtual address space and therefore fiddling with page tables which would on a typical CPU require a mode change) is mitigated by allocating more virtual address space than is immediately needed.

    If free space is always unmapped then each block allocation will require an alteration to the page tables, as will each unallocation. Not to mention that could cause the page-translation hardware to operate sub-optimally since the range of addresses comprising the working set will constantly change.

    If most allocations are significantly less than a page size, then the performance impact may be minimal since whole pages will rarely become free, but if allocations typically exceed a page size, that would no longer be true. If the result is that some applications simply implement their own heap management to avoid the overhead, then you've simply increased the number of places that bugs can occur.
  • Didn't Bruce Perens use a similar method of page protection to implement Electric Fence?

    http://perens.com/FreeSoftware/ElectricFence/ [perens.com]

    If this is the case, it's great that such a feature should go into an OS by default. I personally love anything that gives me confidence in the implementation of any applications I write, especially as this type of technique makes debugging much easier.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...