DieHard, the Software

Roland Piquepaille writes "No, it's not another movie sequel. DieHard is a piece of software that helps programs run correctly and protects them from a range of security vulnerabilities. It has been developed by computer scientists from the University of Massachusetts Amherst — and Microsoft. DieHard prevents crashes and hacker attacks by focusing on memory. Our computers have thousands of times more memory than they did 20 years ago. Still, programmers privilege speed and efficiency over security, which leads to the famous "buffer overflows" exploited by hackers."
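As a rough illustration (mine, not the article's or the DieHard paper's), the kind of memory error at issue looks like this in C; the buffer sizes and strings are invented for the example. On a conventional allocator the stray byte can corrupt a neighboring object or allocator metadata, while an allocator that spreads allocations across a larger, randomized heap makes it likely the byte lands in unused space instead.

    /* Hypothetical sketch: a one-byte heap overflow, the classic memory
     * error DieHard-style padded, randomized heaps aim to tolerate. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *name = malloc(8);   /* room for 7 characters plus the NUL */
        char *role = malloc(8);   /* may sit directly after 'name' on the heap */
        if (!name || !role)
            return 1;

        strcpy(role, "admin");
        strcpy(name, "slashdot"); /* 8 characters + NUL = 9 bytes: writes past the end */

        printf("name=%s role=%s\n", name, role);
        free(name);
        free(role);
        return 0;
    }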
  • by Anonymous Coward on Monday January 01, 2007 @09:39PM (#17427552)
    This came out in OpenBSD 3.3 [openbsd.org] over three years ago. Nice to see Microsoft keeping up with the times.
     
  • Buggy (Score:3, Interesting)

    by The MAZZTer ( 911996 ) <.moc.liamg. .ta. .tzzagem.> on Monday January 01, 2007 @10:20PM (#17427894) Homepage
    Firefox 2 crashed for the first time ever (I've used it since beta 1 came out) for me today... suspiciously, less than five minutes after I turned DieHard on. Hrm.
  • by Salvance ( 1014001 ) * on Monday January 01, 2007 @10:35PM (#17428020) Homepage Journal
    Sure, but wouldn't it be better if everything ran in its own virtual session (or within a virtual secure space)? This was Microsoft's original plan with its Palladium component of Longhorn [com.com], but my understanding is that this was almost entirely scrapped to get Vista out the door.

    Part of the other problem is that most home users expect secure data, but they aren't willing to do anything about it (e.g. set up non-admin users, install virus checkers/firewalls/etc).
  • by NorbrookC ( 674063 ) on Monday January 01, 2007 @11:03PM (#17428196) Journal

    In reading this article, I started to wonder a lot about this. Writing to conserve memory is a bad thing? I will say that I haven't noticed that in most software, regardless of whether it's OSS or closed-source. If anything, there seems to be a variation of Parkinson's Law in effect. Yes, computers these days have a lot more memory available; however, the number of applications and the size demands of each application have grown almost in lock-step with that. 15 or so years ago, yes, you had one OS and one application running - maybe, if you were lucky or were running TSR apps, two or three. These days, the OS takes up a hefty chunk, and it's not uncommon to see 8 or 9 (if not more) applications running at once. What they all seem to have in common is that they assume they have access to all the RAM, or as much of it as they can grab.

    I have to wonder if he's actually looked at things these days. I don't see where programming (properly done) to conserve memory is a bad thing. If anything, it seems that few are actually doing it.

  • by Anonymous Coward on Monday January 01, 2007 @11:28PM (#17428364)
    I did a quick read of the whitepaper and sort of see it as heap randomization+. I have very little faith in the claims of low overhead. But leaving that aside, there are 2 major problems here:

    1) If there is a program crash, it may be possible to reproduce the bug on the same computer, but probably not on 2 different ones, such as the user's and the developer's.

    2) It discourages programmers from good design and thorough testing by leading them to believe that bugs won't occur.

    The claim for DieHard (from the whitepaper) is that it "tolerates memory errors and provides probabilistic memory safety". But bugs will still happen! I once added about 10 lines of code to log a bug our team was having a hard time tracking down. It turned out to have its own bug that would be hit if:

    - Two threads were accessing the same buffer
    AND
    - One of them was swapped out during the execution of 3 machine instructions (out of about a million)

    It took my moderately sized customer base 2 weeks to hit it. The only way to avoid memory errors is to make the code bulletproof, which means fixing it when bugs are reported.
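    To make that scenario concrete, here is a minimal hypothetical sketch (not the poster's actual code) of that kind of race: two threads append to one shared log buffer, and the failure only occurs if a thread is preempted in the few instructions between the length check and the copy.

        /* Hypothetical sketch of the race described above: the check of
         * log_len and the memcpy that uses it are not atomic, so a thread
         * preempted between them can write past the end of the buffer. */
        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>

        #define LOG_SIZE 64

        static char   log_buf[LOG_SIZE];
        static size_t log_len;                 /* shared and unsynchronized: the bug */

        static void log_msg(const char *msg)
        {
            size_t n = strlen(msg);
            if (log_len + n < LOG_SIZE) {      /* check ...                           */
                /* ... a context switch here lets the other thread grow log_len ...   */
                memcpy(log_buf + log_len, msg, n);  /* ... so this copy can overflow  */
                log_len += n;
            }
        }

        static void *worker(void *arg)
        {
            for (int i = 0; i < 1000; i++)
                log_msg((const char *)arg);
            return NULL;
        }

        int main(void)
        {
            pthread_t a, b;
            pthread_create(&a, NULL, worker, (void *)"ping ");
            pthread_create(&b, NULL, worker, (void *)"pong ");
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("%.*s\n", (int)log_len, log_buf);
            return 0;
        }

    Under contention both threads can pass the length check before either updates log_len, so the second copy starts past the intended limit -- exactly the kind of narrow window that takes weeks of real-world use to hit.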
  • Old News (Score:1, Interesting)

    by Anonymous Coward on Tuesday January 02, 2007 @12:12AM (#17428648)
    Methodologies for writing secure code have been known for decades. A far more interesting article would be one that attempts to discover or explain why certain engineers and organizations refuse to use them, or, for that matter, why they refuse to learn them in the first place.

    Note: there are new breeds of software developers who do NOT buy into the old-school lines of bullshit that have brought us where we are. Hopefully, these smarter heads will prevail as we move forward.
  • Re:Correction (Score:4, Interesting)

    by evilviper ( 135110 ) on Tuesday January 02, 2007 @01:32AM (#17429114) Journal
    It doesn't matter that the tradeoff for that speed is flexibility, security, and portability. They want things to be fast for some undefined quantity of fast.

    I've got to call you on the "portability" crap.

    Java is about as portable as Flash... Sure, the major platforms are supported, but that's it. 3rd parties spent a lot of time trying to implement Java, but never did get everything 100%. Licensing issues, above all else, made it a real hassle to get Java on platforms like FreeBSD.

    Meanwhile, C and C++ compilers are installed in the base system by default.

    The only "portability" advantage Java has is perhaps in GUI apps, and that's at the expense of a program that doesn't look or work remotely similar to any other app on the system...

    There are a great many reasons people don't use Java. Performance is only a minor one.
  • by BagOCrap ( 980854 ) on Tuesday January 02, 2007 @08:15AM (#17430558) Homepage

    Still, programmers are privileging speed and efficiency over security, which leads to the famous "buffer overflows"...
    Am I the only one who finds the above sentence just strange? In my book and experience, speed optimisations most certainly don't result in buffer overflows. Recklessness does, however.
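    For what it's worth, a hypothetical side-by-side (the function names and sizes are invented for the example): the overflow below comes from skipping the bounds check, not from any speed optimisation, and the bounded version costs essentially nothing.

        /* Invented example: the reckless copy has no length check and can
         * overflow the 16-byte buffer; the careful one bounds the copy. */
        #include <stdio.h>
        #include <string.h>

        void greet_reckless(const char *user_input)
        {
            char name[16];
            strcpy(name, user_input);                        /* no bounds check: overflow */
            printf("hello, %s\n", name);
        }

        void greet_careful(const char *user_input)
        {
            char name[16];
            snprintf(name, sizeof(name), "%s", user_input);  /* bounded copy */
            printf("hello, %s\n", name);
        }

        int main(void)
        {
            const char *input = "a string much longer than sixteen characters";
            greet_careful(input);    /* truncates safely */
            greet_reckless(input);   /* undefined behaviour: smashes the stack */
            return 0;
        }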
  • Re:Correction (Score:4, Interesting)

    by RAMMS+EIN ( 578166 ) on Tuesday January 02, 2007 @12:36PM (#17432338) Homepage Journal
    In support of your point, I want to share the following PDF: Lisp as an Alternative to Java [flownet.com]. The study compares programs written in Java, C/C++ (they lump these together), and Common Lisp / Scheme (again, lumped together). The findings are basically:

    1. The fastest programs are written in C/C++
    2. On average, Lisp/Scheme programs are fastest, followed by C/C++ programs, and Java programs are way behind.
    3. Development time is shortest for Lisp/Scheme, with Java and C++ being more or less equal.
    4. C/C++ programs used the least memory, with Lisp/Scheme and Java being about equal.
    5. There was very little variation in the run time and development time of Lisp/Scheme programs, and a lot of variation everywhere else.

    The PDF contains some nice graphs illustrating all this.
