Security Review Summary of NIST SHA-3 Round 1
FormOfActionBanana writes "The security firm Fortify Software has undertaken an automated code review of the reference implementations of the NIST SHA-3 round 1 contestants (previously Slashdotted). After a followup audit, the team is now reporting summary results. According to the blog entry, 'This just emphasizes what we already knew about C: even the most careful, security-conscious developer messes up memory management.' Of particular interest, Professor Ron Rivest's (the "R" in RSA) MD6 team has already corrected a buffer overflow pointed out by the Fortify review. Bruce Schneier's Skein, also previously Slashdotted, came through defect-free."
Re:ANSI C (Score:1, Informative)
1. Because we, rank and file developers, have to use it afterward (and some of us write in C or C-derived languages, like oh, I don't know, pretty much all applications on your desktop?)
2. Because it is impossible to compare the performance of cryptographic algorithms if they are not written in the same language (preferably one directly convertible to machine code)
Reference implementation (Score:4, Informative)
In a word, no. A reference implementation is supposed to be a working version of the code, not just a mathematical description. With a working version, it's possible to do things like test its real world performance or cut and paste directly into a program that needs to use the function. That's obviously only possible if you have a version that works on real-world processors.
Consider Skein as an example. One of the things that Bruce Schneier described as a major goal of its design is that it uses functions that are highly optimized in real-world processors. That means that it's possible to make a version that's both very fast and straightforward to program, an important criterion for low-powered embedded applications. You won't discover that kind of detail until you implement it.
Re:ANSI C (Score:3, Informative)
Mathematically anything is feasible; however, adding a real-world constraint such as requiring a working implementation greatly narrows the field.
Furthermore, one of the judging factors is the speed and portability of the algorithm on a wide variety of commonly used platforms - it doesn't make sense to come up with a super-cool hash function that only works well on, say, x86.
The short of it is that people make mistakes from time to time. It is true that correctness is critical in crypto code, so the submitters should have been more thorough. From the article it seems the overwhelming majority of them were - which is a positive.
Re:C isn't the problem, it is really... (Score:3, Informative)
C got const much earlier; it was there in 1989. And, at least in the past, a static const int FOO was less useful than #define FOO: it wasn't "constant enough" to define the size of an array. But yes, you see macros too often.
uhh, lint... (Score:4, Informative)
$ cat bo.c
int a[3];
void f()
{
a[3] = 1;
}
$ lint bo.c
bo.c:4: warning: array subscript cannot be > 2: 3
Lint is so basic, I can't imagine not using it....
Re:In defense of C (Score:3, Informative)
Which is why tools like Valgrind or NuMega BoundsChecker exist: they provide much more granular information about how memory is being used and abused. The problem you just described would be flagged instantly as a write to previously freed data, along with the source locations where it was allocated and freed.