DieHard, the Software

Roland Piquepaille writes "No, it's not another movie sequel. DieHard is a piece of software that helps programs run correctly and protects them from a range of security vulnerabilities. It has been developed by computer scientists from the University of Massachusetts Amherst — and Microsoft. DieHard prevents crashes and hacker attacks by focusing on memory. Our computers have thousands of times more memory than they did 20 years ago. Still, programmers are privileging speed and efficiency over security, which leads to the famous "buffer overflows" exploited by hackers."
This discussion has been archived. No new comments can be posted.

  • Correction (Score:5, Insightful)

    by realmolo ( 574068 ) on Monday January 01, 2007 @09:29PM (#17427476)
    "Still, programmers are privileging speed and efficiency over security..."

    Speed and efficiency of *development*, maybe.

    Which is the problem. Modern software is so dependent on toolkits, compiler optimizations, and various other "pre-made" pieces that any program of even moderate complexity is doing things the programmer isn't really aware of.

  • Re:Correction (Score:5, Insightful)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Monday January 01, 2007 @09:46PM (#17427604) Homepage
    This is one of the arguments for a language running on a VM, like Java, C#, or Python. They can do runtime checking of array bounds and such, and throw an exception or crash instead of silently overwriting some other variable, which may or may not cause a crash or some other noticeable side effect until much later.
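
    A minimal C sketch of the silent-overwrite failure described above (the variable names are hypothetical, and whether the neighbouring variable is actually hit depends on the compiler and stack layout):

        #include <stdio.h>

        int main(void)
        {
            int balance = 100;           /* an unrelated variable          */
            int scores[4] = {0};         /* valid indices are 0..3         */

            for (int i = 0; i <= 4; i++) /* off-by-one: writes scores[4]   */
                scores[i] = -1;

            /* Nothing faults here.  Depending on layout, 'balance' may now
             * be -1, and the damage only shows up much later.  A
             * bounds-checked runtime would throw at the bad index instead. */
            printf("balance = %d\n", balance);
            return 0;
        }
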
  • Re:Correction (Score:5, Insightful)

    by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Monday January 01, 2007 @09:48PM (#17427618) Homepage Journal
    "Still, programmers are privileging speed and efficiency over security..."

    Speed and efficiency of *development*, maybe.

    No, it was right the first time. Java is several orders of magnitude more secure by default than any random C or C++ program. Yet mention Java on a forum like, say, Slashdot, and you'll hear no end of complaints about how much Java sucks because "it's slow" (usually ignoring the massive speedups that have happened since they last tried it in 1996). It doesn't matter that the tradeoff for that speed is flexibility, security, and portability. They want things to be fast, for some undefined value of "fast."

    In fact, I predict that someone will be along to argue just how slow Java is in 3... 2... 1...
  • wtf? (Score:0, Insightful)

    by Anonymous Coward on Monday January 01, 2007 @09:52PM (#17427658)
    Today's computers have more than 2,000 times as much memory as the machines of yesteryear, yet programmers are still writing code as if memory is in short supply.

    I stopped reading after that first line.

    Programming is not a matter of simply writing until things get full.

  • by jd ( 1658 ) <imipak@ y a hoo.com> on Monday January 01, 2007 @10:28PM (#17427948) Homepage Journal
    ...the number of programmers like ourselves who learned how to code correctly is vanishingly small in comparison to the number of coders who assume that if it doesn't crash, it's good enough. Whether you validate the inputs against the constraints, engineer the program so that the constraints must always be met, or force a module to crash when something is invalid so that you can trap and handle it by controlled means - the method is irrelevant. What matters is less which method you use than that you remember to use one.

    Even assuming nobody wants to go to all that trouble, there are solutions. ElectricFence and dmalloc are hardly new and far from obscure. If a developer can't be bothered to link against a debugging malloc before testing, then you can't expect their software to be immune to such absurd defects. A few runs whilst using memprof isn't a bad idea, either. (A rough sketch of what this looks like in practice is at the end of this comment.)

    This assumes you're using a language like C, which is not a trivial language to write correct software in. For many programs, you are better off with a language like Occam (provided for Unix/Linux/Windows via KROC) where the combination of language and compiler heavily limits the errors you can introduce. Yes, languages this strict are a pain to write in, but the increase in the initial pain is vastly outweighed by the incredible reduction in agony when debugging - if there's any debugging at all.

    I do not expect anyone to re-write glibc in Occam or any other nearly bug-proof language. It would be helpful, but it's not going to happen.
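
    As a rough illustration of the above (the file name and string are made up; the Electric Fence invocation shown is the usual one, though exact library names vary by distribution), a debugging malloc turns a silent heap overrun into an immediate, reproducible crash at the faulting store:

        /* overrun.c -- hypothetical heap overrun that a debugging malloc
         * will catch.  With Electric Fence the out-of-bounds write hits a
         * guard page and the program dies right here, instead of quietly
         * corrupting whatever happens to follow the allocation.
         *
         * Typical usage:
         *   cc -g overrun.c -lefence && ./a.out
         * or, without relinking:
         *   LD_PRELOAD=libefence.so ./a.out
         */
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            char *buf = malloc(16);
            strcpy(buf, "this string is much longer than sixteen bytes");
            free(buf);
            return 0;
        }
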

  • Re:Correction (Score:4, Insightful)

    by Anonymous Coward on Monday January 01, 2007 @11:24PM (#17428332)
    No, we want things to be as fast as they can be.

    Maybe, but most programs are not written in a way which will achieve this goal.

    Programmer time is a limited resource. This is true even on a hobby project with no deadlines and everybody working for free; you want to ship sometime. Making programs run fast takes a lot of programmer time, even when you use a language which is supposedly fast by default such as C or C++.

    C and C++ make you spend a lot of time working around weaknesses in the language and fixing bugs that other languages can never have. A great deal of programmer time is put into developing the broken and slow implementation of half of Common Lisp that every sufficiently complex program must contain.

    All of this time spent is time that does not go into making the program fast.

    By using a language that makes programmers more productive, you get a lot more time to make the program fast. You can do this by optimizing in the "slow" language you started with, by rewriting inner loops in C, by changing the whole algorithm to run on the GPU, etc.

    The 90/10 rule says that your program spends 90% of its time in only 10% of its code, and that optimizing the other 90% of the code is basically a waste. And yet people who want their programs to "go fast" are writing that 90% in a low-level language, effectively wasting a large amount of effort.

    You may also end up getting your program working, realize that it actually is fast enough despite being written in a really slow interpreted language, and spend the time you saved making more cool software. Or you can go back and make the original product fast. It's up to you.

    There are many good reasons to use C, and many good reasons to write entire programs in C, but "it's fast" is not a particularly good reason. An app written in pure C is probably not as fast as it can be unless its scope is very limited.
  • Re:Correction (Score:4, Insightful)

    by Jeremi ( 14640 ) on Monday January 01, 2007 @11:55PM (#17428548) Homepage
    He must be criticizing open source programmers only. Because in business, programmers aren't focussed on speed and efficiency


    Business software isn't the problem. The software that is the problem is the software that runs on every naive home user's PC ... Windows, Outlook, IE, Mozilla, AIM, etc etc. This is the software whose security problems allow spam, credit card fraud, virus outbreaks, etc. And last time I checked, all of that stuff is still written in C or C++, not in any VM.


    Berger sounds like a VM-language bigot (or paid ($30K from MS) .Net Runtime shill)
    who doesn't understand how most software is really made, and prefers to believe in caricatures of programmers.


    Great, you've called the guy a bigot, a shill, and an idiot, without even having understood what he was talking about.

  • by istartedi ( 132515 ) on Tuesday January 02, 2007 @01:14AM (#17429014) Journal

    The worst bugs are the ones that are hard to reproduce. In fact, when faced with a bug that's difficult to reproduce, I've been known to quip "yet another unintentional random number generator". The suggestion that they're going to apply a pseudo-fix that involves random allocations raises all kinds of red flags. I'd much rather have fine-grained control over which sections of code are allowed to access which sections of memory, and be able to track which sections of code are accessing a chunk of memory. I'd much rather have strict enforcement of a non-execute bit on memory that's only supposed to contain data (there is some support for this already). Introducing randomness into memory allocation? Worst. Idea. Ever. It's like throwing in the towel, and if they put that in at low levels in system libs and things like that, we're screwed in terms of ever being able to *really* fix the problem. If their compiler is going to link against an allocator that has this capability, I hope they provide the ability to disable it.
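
    For readers who haven't seen the technique being criticized, here is a deliberately oversimplified sketch (not DieHard's actual design, which over-provisions the heap and argues probabilistically about overwrites) of what "random allocations" means. Objects land in randomly chosen slots, so an overflow rarely hits the same neighbour twice, which is exactly the reproducibility concern raised above:

        #include <stdlib.h>

        /* Toy randomized allocator: fixed-size slots in an over-provisioned
         * heap, placement chosen at random.  NOT real DieHard code; it only
         * shows why an object's neighbour changes from run to run. */
        #define SLOT_SIZE 64
        #define NUM_SLOTS 1024            /* more slots than live objects */

        static char heap[NUM_SLOTS][SLOT_SIZE];
        static int  used[NUM_SLOTS];

        void *rand_alloc(size_t n)
        {
            if (n > SLOT_SIZE)
                return NULL;              /* toy version: small objects only */
            for (;;) {                    /* spins forever if the heap fills up */
                int slot = rand() % NUM_SLOTS;
                if (!used[slot]) {
                    used[slot] = 1;
                    return heap[slot];
                }
            }
        }

        void rand_free(void *p)
        {
            int slot = (int)(((char *)p - (char *)heap) / SLOT_SIZE);
            used[slot] = 0;
        }
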

  • Re:Correction (Score:5, Insightful)

    by Anonymous Coward on Tuesday January 02, 2007 @01:41AM (#17429154)
    "Java is slow" is the stated reason. As you noted, it is not the actual reason. To tell the actual reason is difficult, but in short Java reminds us too much of what it should have been.

    The basic complaints I have heard are these:

    Complaint 1: Java is slow.
      As you stated, this is not a meaningful complaint.

    Complaint 2: Garbage Collection stinks
      GC is an obvious requirement of a "safe" language. As implemented in Java, it is downright stupid. When doing something CPU intensive, the GC never runs, leading to gobbling up memory until there is no more and thrashing to death. I'm sure that somebody is going to dig up that paging-free GC paper, but pay attention: that is a kernel-level GC.

    Complaint 3: Swing is ugly/leaks memory
      The first is a matter of opinion. The second is well-known. Swing keeps references to long-dead components hidden in internal collections, leading to massive memory leaks. These memory leaks can be propagated to the parent application if it is also written in Java.

    Complaint 4: Bad build system
      Java cannot do incremental builds if class files have circular references. In a small project of about ten classes I was working on, the only way to build it was "rm *.class ; javac *.java"

    Complaint 5: Tied class hierarchy to filesystem hierarchy
      This was just stupid and interacts badly with Windows (and anything else with a case insensitive filesystem). It is even worse for someone who is first learning the language. It also makes renaming classes have a very bad effect on source control.

    Complaint 6: Lack of C++ templates
      C++ has some of its own faults. Fortunately its template system can be leveraged to fix quite a few of them. Java's generics have insufficient power to do the same thing.

    Complaint 7: Lack of unsigned integer
      These are oh-so-necessary when doing all kinds of things with binary formats. Too bad Java and all its descendants don't have them.

    Complaint 8: Verbosity without a point
      It has gotten so bad in places that I am strongly tempted to pass Java through the C preprocessor first, but I can't do that very well because of Complaint 4.
  • Re:Correction (Score:2, Insightful)

    by tulrich ( 737161 ) <slashdot@tulrich.com> on Tuesday January 02, 2007 @02:53AM (#17429456) Homepage
    Java is several orders of magnitude more secure by default than any random C or C++ program.

    Do you know what "several orders of magnitude" means? For variety, next time you should write "... exponentially more secure ..." or "... takes security to the next level!"

    BTW, it's funny you should mention Java performance in this thread -- one of the DieHard authors published this fascinating paper on Java GC performance: http://citeseer.ist.psu.edu/hertz05quantifying.html [psu.edu] -- executive summary: GC can theoretically be as fast as explicit malloc/free, if you're willing to spend 5x memory size overhead (gulp).

  • by Tim C ( 15259 ) on Tuesday January 02, 2007 @03:34AM (#17429610)
    Vista has been in development for around 5 years; unless you were expecting this to be released as a service pack for XP or Server 2003, what's your point? It's in MS's latest release, what more do you want? (Yeah, a shorter release cycle would be nice - except that then people would bitch about the upgrade treadmill...)
  • by pacinpm ( 631330 ) <pacinpm@gmail. c o m> on Tuesday January 02, 2007 @04:36AM (#17429822)
    Problem is it is not so random.
  • Re:Correction (Score:3, Insightful)

    by smallfries ( 601545 ) on Tuesday January 02, 2007 @09:44AM (#17430952) Homepage

    This implies that because memory is larger, less attention can be paid to efficiency, but the hapless programmers don't know better. I used to use quicksort when I had 640 KiB of RAM, but now that I have 8 GiB, I'll just use bubble sort. Brilliant.
    You are really misrepresenting his point here. We both know that bubble sort would run much slower on an 8 GB dataset than quicksort. The real comparison is "should we write some really tricky and nasty code for this particular function, or should it be a giant lookup table?" When memory is (relatively) cheaper than processor time, the set of tradeoffs changes. Some of these tradeoffs then mean that code can be written more correctly (securely) at the expense of higher memory usage. These tradeoffs are intuitively bad to someone who cut their teeth on a 16-bit processor with hardly any memory, but as I'm sitting writing this there is only 130 MB out of 1 GB in use (not counting the cache) - who is to say that doubling memory usage is bad if it removes a whole class of bugs and holes?
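
    A small sketch of the sort of tradeoff being described, with memory spent to make the code simpler and less error-prone (the example is generic, not from TFA): counting set bits in a byte with a 256-byte table instead of hand-tuned bit-twiddling:

        #include <stdint.h>

        /* Spend 256 bytes so the per-byte bit count is a plain lookup
         * rather than cleverer, easier-to-get-wrong bit tricks. */
        static uint8_t popcount_table[256];

        static void init_popcount_table(void)
        {
            for (int i = 1; i < 256; i++)   /* table[0] is already 0 */
                popcount_table[i] = (uint8_t)((i & 1) + popcount_table[i >> 1]);
        }

        static int popcount_byte(uint8_t b)
        {
            return popcount_table[b];
        }
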
  • by grandpa-geek ( 981017 ) on Tuesday January 02, 2007 @10:50AM (#17431460)
    Why not just license under the GPL, LGPL or some other open source license? This business of being "free for non-commercial use" restricts users who use open source software for commercial purposes. This software is really "non-free" according to any definition of the FSF or Open Source Initiative, which explicitly forbid discrimination against fields of endeavor. Perhaps you should say "non-free, but gratis for non-commercial use."

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...