
Security Review Summary of NIST SHA-3 Round 1 146

Posted by timothy
from the works-in-progress dept.
FormOfActionBanana writes "The security firm Fortify Software has undertaken an automated code review of the NIST SHA-3 round 1 contestants (previously Slashdotted) reference implementations. After a followup audit, the team is now reporting summary results. According to the blog entry, 'This just emphasizes what we already knew about C, even the most careful, security conscious developer messes up memory management.' Of particular interest, Professor Ron Rivest's (the "R" in RSA) MD6 team has already corrected a buffer overflow pointed out by the Fortify review. Bruce Schneier's Skein, also previously Slashdotted, came through defect-free."

Comments Filter:
  • ANSI C (Score:5, Insightful)

    by chill (34294) on Sunday February 22, 2009 @02:23PM (#26950515) Journal

    That is what they get for mandating the code be in ANSI C. How about allowing reference implementations in SPARK, Ada, or something else using design-by-contract? After all, isn't something as critical as an international standard for a hash function exactly the type of software d-b-c was meant for?

    • Re: (Score:3, Interesting)

      Why do they even have a reference implementation for a hash function in a programming language? Wouldn't just defining the function mathematically be less error prone and just as effective?

      • by pathological liar (659969) on Sunday February 22, 2009 @02:36PM (#26950607)
        ... because implementation is where people screw up.
      • Re: (Score:1, Informative)

        by Anonymous Coward

        1. Because we, rank and file developers, have to use it afterward (and some of us write in C or C-derived languages, like oh, I don't know, pretty much all applications on your desktop?)

        2. Because it is impossible to compare the performance of cryptographic algorithms if they are not written in the same language (preferably one directly translatable to machine code)

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Ummmm... because it wouldn't be a reference implementation if it wasn't actually implemented?

      • Re:ANSI C (Score:5, Insightful)

        by John Hasler (414242) on Sunday February 22, 2009 @02:42PM (#26950665) Homepage

        Presumably one of the things they want to evaluate is performance.

        • Indeed. This is usually what separates the winner from the losers. A secure (we hope) hash that is faster/cheaper in CPU cycles will get used more.
      • Re:ANSI C (Score:5, Insightful)

        by IversenX (713302) on Sunday February 22, 2009 @02:44PM (#26950681) Homepage

        Because you can't compile a mathematical definition.

        If we imagine that the hash function came only as a mathematical definition, how would you test that your new implementation in LangOfTheWeek is correct?

        Well, you have 2 options. One, you can prove that your program behaves, in every important way, the same as the definition. This is long, tedious work, and most programmers don't even have the necessary skills for it. Two, you can make a reference implementation in some other language, and compare the outputs.

        Now, given, say, 100 programmers each working on their own functions, we should ideally have 1 resulting behaviour. That would mean that everybody implemented the algorithm 100% correctly. However, the actual number will be between 1 and 100, depending on the skills of the programmers, and the care they've taken in implementing the functions.

        Now, what's the result here? (no pun intended). It's likely to be chaos.

        That's why it's very convenient to have a single reference source.

      • by rgmoore (133276) <glandauer@charter.net> on Sunday February 22, 2009 @02:49PM (#26950723) Homepage

        In a word, no. A reference implementation is supposed to be a working version of the code, not just a mathematical description. With a working version, it's possible to do things like test its real world performance or cut and paste directly into a program that needs to use the function. That's obviously only possible if you have a version that works on real-world processors.

        Consider Skein as an example. One of the things that Bruce Schneier described as a major goal of its design is that it uses functions that are highly optimized in real-world processors. That means that it's possible to make a version that's both very fast and straightforward to program, an important criterion for low-powered embedded applications. You won't discover that kind of detail until you implement it.

        • We all know why Bruce's code came out clean... he is actually the Messiah!!! Or at least in the circles I run in he is! :o) Cue the Chuck Norris jokes with Bruce instead of Chuck.
      • Re: (Score:3, Informative)

        by xquark (649804)

        Mathematically anything is feasible, however if you place a real-world constraint such as it requiring an implementation then that greatly narrows the field down.

        Furthermore one of the judging factors is the speed and portability of the algorithm upon a wide variety of commonly used platforms - it doesn't make sense to come up with a super-cool hash function that only works well on say an x86.

        The short of it is that people make mistakes from time to time, and it is true that perfection is an important facto

      • Re: (Score:3, Insightful)

        by thue (121682)

        An implementation in a programming language is a way to define a function mathematically.

        That is what a programming language is - a way to precisely define an algorithm.

        But C is a low-level language, and therefore maybe a bad choice for function definitions. Instead, in my experience, implementing an algorithm in a high-level functional language like Haskell will often result in a beautiful and readable mathematical function definition.

        • Re: (Score:1, Troll)

          by iwein (561027)
          Yes, it would be nice to have a way to compile beautiful mathematical functions to machine code. Sadly you're dependent on those people that are writing the grammar and reference implementation of the compiler. What if they need one of those pesky algorithms for that?
          • What if they need one of those pesky algorithms for that?

            Look up bootstrapping. Most modern compilers can compile themselves using a stage or two of simpler compilers, built with either an existing compiler, an assembler, or maybe some hand written machine code. All the important parts of the compiler (parser, optimizer, etc.) can be written in the same language that the compiler can compile.

            • Look up bootstrapping. Most modern compilers can compile themselves using a stage or two of simpler compilers

              Look up trusting trust [bell-labs.com]. Defects in the compiler, whether intentional or unintentional, can propagate themselves to the compiled work, even if the compiled work is the compiler itself.

              • by iwein (561027)

                Look up trusting trust [bell-labs.com]. Defects in the compiler, whether intentional or unintentional, can propagate themselves to the compiled work, even if the compiled work is the compiler itself.

                This always gets me though: why doesn't this apply to C?

                • This always gets me though: why doesn't [the trusting trust attack] apply to C?

                  Bruce Schneier pointed out [schneier.com] that one can bootstrap a compiler using a different implementation of the language as a (probabilistic) measure against defects introduced by trusting trust. Build it on systems with different compilers, bit-compare the binaries generated on each system, and if they match, you can be reasonably sure that there is no such defect. But unlike C, which has implementations from GNU, Borland, Watcom, M$, Green Hills, and numerous other vendors, a lot of the managed languages lack multip

              • Unless you manually prove all the basic results of set theory for yourself (as well as philosophically agreeing that zf is a good choice of a set theory) and then build mathematics from it and derive formal languages from that, you probably shouldn't trust any code on any computer. Even then you can't trust the hardware, or even the wetware in your head.

                I was just explaining how to bootstrap a compiler, not the finer points of epistemology.

        • Re: (Score:3, Insightful)

          by setagllib (753300)

          C is a bad choice for mathematical function definitions, but it's a fantastic choice for integrating into virtually any stage of a software project. It can be used in an OS kernel, a standard portable crypto library (e.g. OpenSSL), embedded firmware, what have you. All of this with NO more library dependencies than the bare minimum memory management, and most crypto/hash functions don't need those because their state fits in a fixed-size structure. So you can have the mythical 100% standalone C code that fi

      • cryptol (Score:4, Interesting)

        by jefu (53450) on Sunday February 22, 2009 @04:52PM (#26951633) Homepage Journal

        But that just raises the question of how to define a hash function mathematically. The lambda calculus? Gödel numbers? Things like cryptographic hash functions don't tend to be nice algebraic thingies like f(x)=x*x+7, especially since they're usually iterative and deliberately messy - the pretty functions are likely to be less secure.

        On the other hand, there are things like cryptol [galois.com] in which you may be able to specify hash functions more mathematically. For example, here is a cryptol implementation of skein [galois.com].

    • Re:ANSI C (Score:5, Insightful)

      by Anonymous Coward on Sunday February 22, 2009 @02:44PM (#26950683)

      What did they get? You realize this is just an ad for Fortify, right? Out of 42 projects, they found 5 with memory management issues using their tool. Maybe instead of switching to SPARK, the 5 teams that fucked up could ask the 37 that didn't for some tips on how to write correct C.

      • PS: (Score:1, Funny)

        by Anonymous Coward

        The alternative was supposed to be throwing money at Fortify by the way. If your conclusion is to switch to SPARK then Fortify needs to work on their PR, *cough*, I mean blogging.

      • dandelion orchard (Score:3, Insightful)

        by epine (68316)

        This just emphasizes what we already knew about C, even the most careful, security conscious developer messes up memory management.

        I know nothing of the sort. How about asking some developers who have a history of getting both the security and the memory management correct which intellectual challenge they lose the most sleep over?

        The OpenBSD team has a history of strength in both areas. I suspect most of these developers would laugh out loud at the intellectual challenge of the memory management required

        • Re: (Score:3, Insightful)

          What I get out of this:

          "We're going to give you more shit to think about by making you use C. If you can't deal with all the stupid shit C throws at you, you suck."

          Which is a shit argument. Just use a better language that gives people less to worry about, and develop from there. Having to debug the shit out of a program for obscure memory management issues shouldn't be a test of your competence. You should be able to focus on the task at hand, nothing else.

    • Re:ANSI C (Score:5, Insightful)

      by OrangeTide (124937) on Sunday February 22, 2009 @03:04PM (#26950835) Homepage Journal

      That is what they get for mandating the code be in ANSI C.

      Because most of the systems out there use C for the performance sensitive bits. (and when asm optimization is done, people generally use a C implementation as a reference since C and asm are similar in many ways).

      When they start doing Linux and Windows and other popular systems primarily in Ada you can start going WTF over people posting ANSI C code. Until Java, Ruby and Python aren't dependent on C/C++ implementations for their functionality we'll just have to suffer with C.

      • by master_p (608214)

        Couldn't it be done in Haskell? Haskell generates code on par with C, especially for math problems (or so they say).

    • by nedlohs (1335013)

      Because you want vendors to just use the reference code, without screwing it up by implementing their own C version from the ADA reference code.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      That is what they get for mandating the code be in ANSI C. How about allowing reference implementation in SPARK, ADA or something else using design-by-contract. After all, isn't something as critical as a international standard for a hash function the type of software d-b-c was meant for?

      C is the lowest common denominator. Once you have a library in C that's compiled to native code, you can link just about any other language to that library: Perl, Python, Ruby, Java, Lisp, etc., all have foreign function interfaces to C.

      C also can compile to just about any processor out there: x86, SPARC, POWER, ARM, etc. Over the decades it's become very portable and optimized.

      C is also known by a large subset of the programmers out there, so if people want to re-write the algorithm in another language, the

    • Re:ANSI C (Score:4, Insightful)

      by swillden (191260) <shawn-ds@willden.org> on Sunday February 22, 2009 @06:50PM (#26952583) Homepage Journal

      After all, isn't something as critical as a international standard for a hash function the type of software d-b-c was meant for?

      No.

      The key desirable characteristics of a good secure hash function are (in this order):

      1. Security (collision resistance, uniformity)
      2. Performance
      3. Ease of implementation

      If the NIST AES contest is any example, what we'll find is that nearly all of the functions will be secure. That's not surprising since all of the creators are serious cryptographers well-versed in the known methods of attacking hash functions, so unless a new attack comes out during the contest, all of the functions will be resistant to all known attack methods.

      With security out of the way, the most important criterion that will be used to choose the winner is performance. This means that every team putting forth a candidate will want to make it as fast as possible, but it still has to run on a few different kinds of machines. The closer you get to the machine, the faster you can make your code -- and this kind of code consists of lots of tight loops where every cycle counts. But assembler isn't an option because it needs to run on different machines.

      That leaves C as the best choice. It's the portable assembler.

      • Re: (Score:3, Interesting)

        by swillden (191260)

        With security out of the way, the most important criterion that will be used to choose the winner is performance.

        Probably no one will ever see this self-reply, but I just noticed that the Skein site makes a good argument that the paring down of the candidate list should (will?) happen in the other order.

        The idea is that since it's a huge amount of work to cryptanalyze a hash function, and since it's easy to measure performance (in time and space), the thing to do is to first toss out all of the slow and/or memory-hungry candidates. Obviously, if all of the fast and tight candidates were found to have security flaws

  • Disclaimer (Score:2, Interesting)

    I should add that I work for Fortify and that I initiated the SHA-3 review in my spare time as a private project. The Slashdot article on December 21 caught my interest.

    • by Compholio (770966)
      I think that, since only 5 of the 42 projects garnered your attention, a better quote to include in the summary would have been:

      We were impressed with the overall quality of the code, but we did find significant issues in a few projects, including buffer overflows in two of the projects.

      If the other 37 projects didn't have any significant flaws in the first round of this contest, then that doesn't say to me "well, obviously no-one can do memory management properly" - it says that people make mistakes.

      • Re: (Score:3, Interesting)

        Yeah, both very good points.

        People do make mistakes. Even geniuses, when they're trying really hard to be careful. Personally, I see recognizing that as a validation for code review (including the automated code review that I do).

        I want the winning entry for this competition to be flawless to the extent that's feasible. Right now, my job includes finding SHA-1 used for cryptographic key generation, and telling people to replace it with something better. I don't want to be pulling out SHA-3 in a couple year

    • by Rayban (13436) *

      So which of the vulnerabilities did you hold back on so that you can exploit it in a couple of years? ;)

    • by jd (1658)

      I think that was the article I submitted, expressly with the intent that it would catch the attention of people like yourself who could contribute to the auditing. This improves the quality of the submissions, perhaps identifies flaws in algorithms, and in general leads to a better contest between better competing implementations.

      Personally, I'm gloating a little because the functions I considered to have such cool names (eg: Blue Midnight Wish) all came through clean and are also listed on the SHA3 Zoo as

      • Yes, it was yours. Thank you very much for the inspiration. I'm glad to help in what little way I can.

        I'm really in awe of the people (whoever they are) who are actually evaluating the algorithms themselves.

  • by Anonymous Coward on Sunday February 22, 2009 @02:50PM (#26950737)

    "... because implementation is where people screw up."
    "Bruce Schneier's Skein, ... came through defect-free."

    So by deductive logic, Bruce is a robot. Also previously slashdotted.

  • by SoapBox17 (1020345) on Sunday February 22, 2009 @02:57PM (#26950787) Homepage
    From TFA (and TFS):

    This just emphasizes what we already knew about C, even the most careful, security conscious developer messes up memory management.

    This doesn't follow from TFA. The blog points out two instances of buffer overflows. The first one you could argue they messed up "memory management" because they used the wrong bounds for their array in several places... but they don't sound very "careful" or "security conscious" since checking to make sure you understand the bounds of the array you're using is pretty basic.

    But that's not what bothered me. The second example is a typo where TFA says someone entered a "3" instead of a "2". In what dimension is mis-typing something "messing up memory management"? That just doesn't follow.

    • by drinkypoo (153816)

      In what dimension is mis-typing something "messing up memory management"? That just doesn't follow.

      I haven't evaluated the code in question, because math scares me, but if someone makes a fencepost error (or just a typo - but fenceposting is a common cause of off-by-one errors) it's entirely possible to be mucking with memory that is a byte or a page off from the memory you think you're working on. So that's an example of how mis-typing something could cause an error in memory management (if in one function you have it right, and in another you have it wrong... you can't even get it right by getting luck

      • drinkypoo the math is as easy as 3 >= 3. See here:

        // deal with the length update first
        bcount = ss.sourceDataLength; // previous length
        ss.sourceDataLength = bcount + databitlen; // new length
        if (ss.sourceDataLength < (bcount | databitlen)) // overflow

    • One reply deep in comment 26951319 [slashdot.org] I demonstrate that typing the "3" instead of "2" improperly accesses memory that may or may not be allocated. This type of out-of-bounds access is mismanaging memory.

  • In defense of C (Score:5, Insightful)

    by phantomfive (622387) on Sunday February 22, 2009 @03:18PM (#26950937) Journal
    The summary is kind of a troll, since most of the submissions actually managed to get through without ANY buffer overflows.

    Buffer overflows are not hard to avoid, they are just something that must be tested. If you don't test, you are going to make a mistake, but they are easy to find with a careful test plan or an automated tool. Apparently those authors who had buffer overflows in their code didn't really check for them.

    C is just a tool, like any other, and it has tradeoffs. The fact that you are going to have to check for buffer overflows is just something you have to add to the final estimate of how long your project will take. But C gives you other advantages that make up for it. Best tool for the job, etc.
    • Re:In defense of C (Score:4, Insightful)

      by Kjella (173770) on Sunday February 22, 2009 @06:38PM (#26952503) Homepage

      Buffer overflows are not hard to avoid, they are just something that must be tested.

      No, they're a huge pain in the ass 99% of the time; what's worst is that pointers work even when they absolutely shouldn't. I recently worked with some code that, instead of making a copy of the data, just kept a void pointer to it. Since that code was passed just a temporary variable that was gone right after being passed in, it should error out on any sane system, but the problem is it worked - when it was called later, using the pointer to a temp that's already gone, it would just read the value out of unallocated memory anyway, and since the temp hadn't been overwritten it worked! The only reason it came up was that when called twice it'd show the last data twice, since both calls would be pointing to the same unallocated memory location. It's so WTF that you can't believe it. Personally I prefer to work with some sort of system (Java, C#, C++/Qt, anything) that'll give you some heads-up to tell you you're out of bounds or otherwise just pointing at nothing.

      • Re: (Score:3, Informative)

        by drspliff (652992)

        Which is why tools like Valgrind or NuMega BoundsChecker exist; they provide much more granular information about how memory is being used and abused. The problem you just described would be flagged instantly as a use of previously freed data, along with a few source code locations relevant to where it was allocated/freed.

    • by owlstead (636356)

      I don't think the performance estimate will change much. You only have to check the input, and the complexity is within the rounds of the underlying block cipher/sponge function or whatever is used to get the bits distributed as they should be.

      I blame the bugs on tight time schedules and inexperienced programmers/insufficient review. Basically, cryptographers are mathematicians at heart. There is a rather large likelihood that C implementations are not their common playing field.

  • by MobyDisk (75490) on Sunday February 22, 2009 @03:46PM (#26951135) Homepage

    I suspect the problem is related to the poor coding practices used in academia. I see college professors who write code that barely compiles in GCC without a bunch of warnings about anachronistic syntax. Some of the C code uses constructs that are unrecognizable to someone who learned the language within the past 10 years, and is completely type-unsafe.

    I can't tell much from the code on the link, but I do see #define used for constants which is no longer appropriate (yet is EXTREMELY common to see). C99 had the const keyword in it, probably even before that.

    • Re: (Score:2, Insightful)

      by FearForWings (1189605)

      I suspect the problem is related to the poor coding practices used in academia. I see college professors who write code that barely compiles in GCC without a bunch of warnings about anachronistic syntax.

      You know you'll learn a lot in a class when, after being told that at the very least his C++ code is using deprecated includes, the professor tells you to just use '-Wno-deprecated'. I've basically come to the conclusion that I am just paying the school for a piece of paper, and I will learn little outside my personal study.

      • by Nigel Stepp (446)

        That depends, what is the class for? If it's a class teaching how to use C++, then you have a point.

        If it's just about any other CS class, however, probably the language you are using doesn't matter so much, but rather what you are using the language to do.

        I'm guessing that in this instance, the fact that the professor is using some wacky set of C constructs is not nearly as important as what is actually being taught, e.g. an algorithm. That is, ignore the deprecated stuff because that's not what is importa

    • by stinerman (812158)

      Oh those were the days. At least my prof explained the difference.

      With a #define, the preprocessor substitutes the constant, while with const the job is left to the compiler. Of course, he said they were equally good, just a matter of style.

      We never did learn why we had to put

      using namespace std;

      near the top of the program other than because we were supposed to use the standard namespace. I never bothered to ask or find out because by that time I had realized I'm an atrocious programmer and wouldn't be doing that

    • Re: (Score:3, Informative)

      by jgrahn (181062)

      I can't tell much from the code on the link, but I do see #define used for constants which is no longer appropriate (yet is EXTREMELY common to see). C99 had the const keyword in it, probably even before that.

      C got const much earlier; it was there in 1989. And at least in the past, a static const int FOO was less useful than #define FOO: it wasn't "constant enough" to define the size of an array. But yes, you see macros too often.

    • I've had to work on an app where the main developer didn't know / didn't care about void *, and used char * everywhere instead. In fact he used char* even when the type was unique, and type cast at every call, and at the beginning of the called function.

      When I called him on it, he said that I was doing philosophy and that he had real work to do.

    • I can't tell much from the code on the link, but I do see #define used for constants which is no longer appropriate (yet is EXTREMELY common to see). C99 had the const keyword in it, probably even before that.

      I don't know where to start here.

      const has been in C since c89.


      const int SIZE=4;
      int data[SIZE];

      is fine in C++ and does the same as using a #define for SIZE.

      It doesn't work in C prior to C99.

      In C99 it defines a VLA - i.e. it's identical to using int SIZE=4; and not using const for the array parameter.

      Ti

      • by MobyDisk (75490)

        n C99 it defines a VLA

        Really? Is that required by the standard, or is that be up to the compiler? I would expect the compiler to treat that example as a fixed-length array.

        (I'll have to go try this in GCC when I get home...)

        • #include <stdio.h>

          const int size = 5;

          int main(void)
          {
          int data[size] = {0};

          printf("%d\n", (int)sizeof data);

          return 0;
          }

          $ gcc -std=c99 -o test test.c
          test.c: In function `main':
          test.c:7: error: variable-sized object may not be initialized
          test.c:7: warning: excess elements in array initializer
          test.c:7: warning: (near initialization for `data')
          $ g++ -o test test.c
          $

          • by MobyDisk (75490)

            Thanks, wow... so, doesn't that mean they broke functionality that worked in previous revisions of C?

            • Thanks, wow... so, doesn't that mean they broke functionality that worked in previous revisions of C?

              ?

              No functionality broken. You can't use a const int to define an array size in C prior to C99 at all. (That's where this entire subthread started - someone was commenting about the use of #define rather than const int which is a reasonable criticism of C++ code but not C code)

              It is an(other) area where C and C++ will forever behave differently (I assume).


              $ cat compiler.c
              #include

              struct c { char c[2]; };

              int t

              • Gaaah. the <stdio.h> is obviously missing. I even previewed that post :-(

              • by MobyDisk (75490)

                Heh, the "somebody" was me :) And you assume correctly about C++. Since I code in C++ not C, I took it for granted that const int worked in C since it works in C++. ugh.

                Thanks for the detailed replies.

  • uhh, lint... (Score:4, Informative)

    by mevets (322601) on Sunday February 22, 2009 @06:29PM (#26952437)

    $ cat bo.c
    int a[3];
    void f()
    {
                    a[3] = 1;
    }

    $ lint bo.c
    bo.c:4: warning: array subscript cannot be > 2: 3

    Lint is so basic, I can't imagine not using it....

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      static int a[3], b = 3;
      void f(void)
      {
        a[b] = 1;
      }

      Lint isn't magic.

      • by mevets (322601)

        No, lint is basic - so basic there is no good reason not to always use it. (bo.c) is equivalent to the only flaw described in the published summary. Dynamic analyzers (required to catch your example) are just stuck in the shadow of the halting problem.

    • Lint is so basic, I can't imagine not using it....

      Lint is not found in Ubuntu. Did you mean Splint? And can you recommend anything analogous for C++ programs, for which Splint has no front-end?

    • It's splint these days (at least on Linux). And using splint on any nontrivial large code base will bury you in tons of mostly irrelevant warnings. If you dare to attempt cleaning up the mess, you'll find that you have to annotate your code. And then you have the problem that splint will only spit out correct warnings if your annotations are correct, so you have just doubled the potential sources of error (now it's annotations+code instead of just code).

      Tools like splint just don't understand the flow of

    • Most lints I've seen have atrocious false positive rates.

      I tried to write a minimal program that would give no warnings, I couldn't do it. If my main didn't have a return statement, it complained about that. If it had one that was not reachable, it complained about unreachable code. If it had one that COULD be reached, it complained that return could be called from main.

  • In other news, the first SHA-3 conference will be held in Belgium this week. NIST hopes to reduce the number of contestants for the SHA-3 contest to a more manageable level by the end of it; for more info read on here [securityandthe.net].
  • Are they ready for Round 2??
  • by tyler_larson (558763) on Monday February 23, 2009 @03:31AM (#26955333) Homepage

    MD6 by Rivest and Skein by Schneier et al. seem to be getting a lot of attention, but another celebrity cryptographer, Dan J. Bernstein, also has a hash in this race, called "CubeHash."

    DJB continued his tradition of offering cash rewards for people to find security problems with his code, giving out (so far) monthly prizes of 100 Euros to the most interesting cryptanalysis of CubeHash.

    So far, the primary criticism of CubeHash is that it's slow, running some 10 to 20 times slower than many of the others in the competition. Dan brushes off this criticism by stating on his site [cr.yp.to]: "for most applications of hash functions, speed simply doesn't matter."

    To be honest, when compared to efforts like MD6 and Skein, with their mathematical proofs of security, VHDL and other in-hardware reference implementations, and their amazing optimizations in both speed and efficiency (Skein can process half a GByte of data per second on modern hardware, and consumes only 100 bytes) -- entries like CubeHash seem to have that longshot underdog appeal, like a New Zealand soccer World Cup team.

    • by Kjella (173770)

      So far, the primary criticism of CubeHash is that it's slow, running some 10 to 20 times slower than many of the others in the competition. Dan brushes off this criticism by stating on his site: "for most applications of hash functions, speed simply doesn't matter."

      That dude must have missed out on the small thing called "P2P", since most of them rely on making tons of hash checks per block, per file and whatever. And yes, they do need the properties of a good hash, not just a checksum, so it can't be poisoned with bad data. Any kind of backend processing lots of signed messages? Usually you hash the message to check it against the digest, then check the signature of the digest. What about mobile devices with low battery and CPU capacity? I'm not buying it.
