DieHard, the Software
Roland Piquepaille writes "No, it's not another movie sequel. DieHard is a piece of software that helps programs run correctly and protects them from a range of security vulnerabilities. It was developed by computer scientists from the University of Massachusetts Amherst and Microsoft. DieHard prevents crashes and hacker attacks by focusing on memory. Our computers have thousands of times more memory than they did 20 years ago. Still, programmers privilege speed and efficiency over security, which leads to the famous "buffer overflows" exploited by hackers."
Correction (Score:5, Insightful)
Speed and efficiency of *development*, maybe.
Which is the problem. Modern software is so dependent on toolkits, compiler optimizations, and various other "pre-made" pieces that any program of even moderate complexity is doing things the programmer isn't really aware of.
Re:Correction (Score:5, Insightful)
Re:Correction (Score:5, Insightful)
No, it was right the first time. Java is several orders of magnitude more secure by default than any random C or C++ program. Yet mention Java on a forum like, say, Slashdot, and you'll hear no end of how much Java sucks because "it's slow". (Usually ignoring the massive speedups that have happened since they last tried it in 1996.) It doesn't matter that the tradeoff for that speed is flexibility, security, and portability. They want things to be fast by some undefined measure of fast.
In fact, I predict that someone will be along to argue just how slow Java is in 3... 2... 1...
wtf? (Score:0, Insightful)
I stopped reading after that first line.
Programming is not a matter of simply writing until things get full.
You must remember... (Score:5, Insightful)
Even assuming nobody wants to go to all that trouble, there are solutions. ElectricFence and dmalloc are hardly new and far from obscure. If a developer can't be bothered to link against a debugging malloc before testing then you can't expect their software to be immune to such absurd defects. A few runs whilst using memprof isn't a bad idea, either.
This assumes you're using a language like C, which is not a trivial language to write correct software in. For many programs, you are better off with a language like Occam (provided for Unix/Linux/Windows via KROC) where the combination of language and compiler heavily limits the errors you can introduce. Yes, languages this strict are a pain to write in, but the increase in the initial pain is vastly outweighed by the incredible reduction in agony when debugging - if there's any debugging at all.
I do not expect anyone to re-write glibc in Occam or any other nearly bug-proof language. It would be helpful, but it's not going to happen.
Re:Correction (Score:4, Insightful)
Maybe, but most programs are not written in a way which will achieve this goal.
Programmer time is a limited resource. This is true even on a hobby project with no deadlines and everybody working for free; you want to ship sometime. Making programs run fast takes a lot of programmer time, even when you use a language which is supposedly fast by default such as C or C++.
C and C++ make you spend a lot of time working around weaknesses in the language and fixing bugs that other languages can never have. A great deal of programmer time is put into developing the broken and slow implementation of half of Common Lisp that every sufficiently complex program must contain.
All of this time spent is time that does not go into making the program fast.
By using a language that makes programmers more productive, you get a lot more time to make the program fast. You can do this by optimizing in the "slow" language you started with, by rewriting inner loops in C, by changing the whole algorithm to run on the GPU, etc.
The 90/10 rule says that your program spends 90% of its time in only 10% of its code, and that optimizing the other 90% of the code is basically a waste. And yet people who want their programs to "go fast" are writing that 90% in a low-level language, effectively wasting a large amount of effort.
You may also end up getting your program working, realize that it actually is fast enough despite being written in a really slow interpreted language, and spend the time you saved making more cool software. Or you can go back and make the original product fast. It's up to you.
There are many good reasons to use C, and many good reasons to write entire programs in C, but "it's fast" is not a particularly good reason. An app written in pure C is probably not as fast as it can be unless its scope is very limited.
Re:Correction (Score:4, Insightful)
Business software isn't the problem. The software that is the problem is the software that runs on every naive home user's PC.
Berger sounds like a VM-language bigot (or a paid shill: $30K from MS) who doesn't understand how most software is really made, and prefers to believe in caricatures of programmers.
Great, you've called the guy a bigot, a shill, and an idiot, without even having understood what he was talking about.
Randomness. Nooooo! (Score:3, Insightful)
The worst bugs are the ones that are hard to reproduce. In fact, when faced with a bug that's difficult to reproduce, I've been known to quip "yet another unintentional random number generator". The suggestion that they're going to apply a pseudo-fix that involves random allocations raises all kinds of red flags.

I'd much rather have fine-grained control over which sections of code are allowed to access which sections of memory, and be able to track which sections of code are accessing a chunk of memory. I'd much rather have strict enforcement of a non-execute bit on memory that's only supposed to contain data (there is some support for this already). Introducing randomness into memory allocation? Worst. Idea. Ever. It's like throwing in the towel, and if they put that in at low levels in system libs and the like, we're screwed in terms of ever being able to *really* fix the problem. If their compiler is going to link against an allocator that has this capability, I hope they provide the ability to disable it.
Re:Correction (Score:5, Insightful)
The basic complaints I have heard are these:
Complaint 1: Java is slow.
As you stated, this is not a meaningful complaint.
Complaint 2: Garbage Collection stinks
GC is an obvious requirement of a "safe" language. As implemented in Java, it is downright stupid. When doing something CPU intensive, the GC never runs, so memory gets gobbled up until there is none left and the application thrashes to death. I'm sure that somebody is going to dig up that paging-free GC paper, but pay attention: that is a kernel-level GC.
Complaint 3: Swing is ugly/leaks memory
The first is a matter of opinion. The second is well-known. Swing keeps references to long-dead components hidden in internal collections leading to massive memory leaks. These memory leaks can be propagated to the parent application if it is also written in Java.
Complaint 4: Bad build system
Java cannot do incremental builds if class files have circular references. In a small project of about ten classes I was working on, the only way to build it was "rm *.class ; javac *.java"
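A minimal sketch of the kind of circular reference meant here, with made-up class names: neither class can be compiled in isolation, since each references the other's type, which is exactly what trips up naive incremental builds.

```java
// Two mutually dependent classes -- the circular reference case
// described above. The cycle is fine at runtime; it only hurts
// incremental compilation, since neither .class file can be
// rebuilt without the other.
class Chicken {
    Egg lays() { return new Egg(); }
}

class Egg {
    Chicken hatches() { return new Chicken(); }
}

public class CircularDemo {
    public static void main(String[] args) {
        System.out.println(new Chicken().lays().hatches() != null); // true
    }
}
```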
Complaint 5: Tied class hierarchy to filesystem hierarchy
This is just stupid and interacts badly with Windows (and anything else with a case-insensitive filesystem). It is even worse for someone who is first learning the language. It also means renaming classes has a very bad effect on source control.
Complaint 6: Lack of C++ templates
C++ has some of its own faults. Fortunately its template system can be leveraged to fix quite a few of them. Java's generics have insufficient power to do the same thing.
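The erasure point can be illustrated with a minimal sketch (the class name ErasureDemo is made up for the example): Java generics exist only at compile time, so two differently parameterized lists share a single runtime class, and there is no per-instantiation specialization to leverage the way C++ templates allow.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // After type erasure, List<String> and List<Integer> are the
    // same class at runtime, so nothing like template
    // specialization is possible.
    static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();
        return strings.getClass() == ints.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass()); // prints true
    }
}
```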
Complaint 7: Lack of unsigned integer
These are oh-so-necessary when doing all kinds of things with binary formats. Too bad Java and all its descendants don't have them.
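The standard workaround, for what it's worth, is masking (UnsignedDemo is a made-up name for this sketch): Java's byte is signed, so a raw 0xFF read from a binary format comes back as -1 unless you mask it up to an int.

```java
public class UnsignedDemo {
    // Java has no unsigned byte, so the bit pattern 0xFF reads as -1.
    // Masking with 0xFF promotes to int and recovers the unsigned
    // value in the range 0..255.
    static int toUnsigned(byte b) {
        return b & 0xFF;
    }

    public static void main(String[] args) {
        byte raw = (byte) 0xFF;  // e.g. a length field in a file header
        System.out.println(raw);              // -1: signed interpretation
        System.out.println(toUnsigned(raw));  // 255: what the format means
    }
}
```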
Complaint 8: Verbosity without a point
It has gotten so bad in places that I am strongly tempted to pass Java through the C preprocessor first, but I can't do that very well because of complaint 4.
Re:Correction (Score:2, Insightful)
Do you know what "several orders of magnitude" means? For variety, next time you should write "... exponentially more secure ..." or "... takes security to the next level!"
BTW, it's funny you should mention Java performance in this thread -- one of the DieHard authors published this fascinating paper on Java GC performance: http://citeseer.ist.psu.edu/hertz05quantifying.html [psu.edu] -- executive summary: GC can theoretically be as fast as explicit malloc/free, if you're willing to spend 5x memory overhead (gulp).
Re:Vista already doing some of this (Score:4, Insightful)
Re:Different program? (Score:2, Insightful)
Re:Correction (Score:3, Insightful)
Why the restriction to "non-commercial" use? (Score:2, Insightful)