Encryption

SHA-3 Finalist Candidates Known

Skuto writes "NIST just announced the final selection of algorithms in the SHA-3 hash competition. The algorithms that are candidates to replace SHA-2 are BLAKE, Grøstl, JH, Keccak and Skein. The selection criteria included performance in software and hardware, hardware implementation size, best known attacks and being different enough from the other candidates. Curiously, some of the faster algorithms were eliminated as they were felt to be 'too fast to be true.' A full report with the (non-)selection rationale for each candidate is forthcoming."
  • by MrEricSir ( 398214 ) on Friday December 10, 2010 @09:05PM (#34520328) Homepage

    Well that's mathematically sound reasoning!

    • by icebike ( 68054 ) on Friday December 10, 2010 @09:13PM (#34520380)

      Exactly my reaction.

      Is this a beauty contest or what?

      There may be some tendency to think that something that hashes too quickly must be trivially weak, but without even a glance at the methodology or a modicum of trials, this is just like assuming the cute girl is an airhead without so much as a conversation.

      Who are these guys anyway? You expect better from NIST.

      • Beauty contest? Nah, this is fantasy football. Maybe Dancing with the Stars.

        Anybody know what the point spread is on this SHA-3 hash competition? I'd like to get a bet down before the line moves.

      • You didn't think that when SHA gave up the goods that fast, you were the only one SHA was giving it up to, did you?
      • by nedlohs ( 1335013 ) on Friday December 10, 2010 @11:47PM (#34521166)

        You believe what you read in a Slashdot summary???

        • by Skuto ( 171945 ) on Saturday December 11, 2010 @04:26AM (#34521944) Homepage

          "We preferred to be conservative about security, and in some cases did not select algorithms with exceptional performance, largely because something about them made us “nervous,” even though we knew of no clear attack against the full algorithm."

          William Burr, NIST

          • by amorsen ( 7485 )

            The "something" which made them nervous wasn't the speed, and it wasn't necessarily the same for each algorithm...

          • by mbkennel ( 97636 )

            Read the above very very carefully. This is superb government misdirection.

            The reader is encouraged to infer that the exceptional performance made them nervous. That is not what he claimed.

            I suspect (without insider knowledge) that the forthcoming statement would be:

            "We preferred to be conservative about security, and in some cases did not select algorithms with exceptional performance, largely because an attack and theory known to government scientists but not to the public can crack a variant or limited case

      • Re: (Score:3, Insightful)

        by Surt ( 22457 )

        Technically, if your hash algorithm is too fast, it gets easier to brute force. So it isn't completely unscientific.

        • Technically, if your hash algorithm is too fast, it gets easier to brute force. So it isn't completely unscientific.

          Only if the input is small, which translates to "only if the protocol designer is clueless". Also, you can always make a fast algorithm slower by iterating it, so your point is irrelevant.

          • by Surt ( 22457 )

            Actually, to defeat a hash, you need only defeat the last repetition, so, no, iteration doesn't help.

            • Actually, to defeat a hash, you need only defeat the last repetition, so, no, iteration doesn't help.

              Cite?

              The sort of attack you're talking about, where speed is a factor, is a dictionary attack. The attacker has reason to suspect that the input is from a relatively small set (e.g. it's a human-selected password) and it's therefore feasible to hash every element in the set and compare each output with the known hash value. If the hash is fast enough and the set is small enough, this may be feasible.

              One way to defeat that attack is to increase the set size, but in many cases (like passwords) that's no
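
              To make the economics concrete, here's a toy sketch in Python (standard library only; the wordlist, target, and round count are made up for illustration). It shows both the dictionary attack described above and why iterating the hash multiplies the attacker's cost per guess, not just the defender's:

                  import hashlib

                  def iterated_sha256(data: bytes, rounds: int) -> bytes:
                      # Feed each digest back in; a candidate can only be
                      # checked by recomputing the whole chain, so there is
                      # no shortcut through "the last repetition" alone.
                      digest = data
                      for _ in range(rounds):
                          digest = hashlib.sha256(digest).digest()
                      return digest

                  ROUNDS = 100_000                                  # arbitrary work factor
                  wordlist = [b"password", b"letmein", b"hunter2"]  # hypothetical input set
                  target = iterated_sha256(b"hunter2", ROUNDS)

                  # Dictionary attack: hash every suspected input and compare.
                  # Total cost is len(wordlist) * ROUNDS underlying hashes.
                  for word in wordlist:
                      if iterated_sha256(word, ROUNDS) == target:
                          print("recovered input:", word.decode())
                          break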

        • by kasperd ( 592156 )

          Technically, if your hash algorithm is too fast, it gets easier to brute force.

          Let's assume somebody came up with a hash function that is 10 times faster than what we would otherwise use, and let's assume it is just as secure except from the minor detail that by being 10 times faster it also becomes 10 times faster to perform a brute force attack. If those assumptions are true, then instead of discarding it altogether, we should find a way to make the brute force attacks slower again. Making the algorithm s

        • By brute forcing, I assume you mean finding another input that hashes to a predetermined value, not finding two inputs which happen to hash to the same value. Without some mathematical attack, this is very unlikely, no matter how fast the hash algorithm is.

          A slower hash will make dictionary attacks on salted password tables more difficult - but generally you would simply run the hash many times to increase the computational load for an attacker. Again, the raw speed of the hash algorithm is unlikely to make mu

      • It's not unreasonable to leave out an algorithm that's as secure mathematically as the others as far as we can tell but that has a concerning characteristic. Previously, they've eliminated competitors for having simple mathematical representations and things like that. Since those algorithms were no more secure than the ones without the worrisome attribute, they could be eliminated without much problem. Remember, these are security guys, so they're paranoid about stuff like that.

        I'm a little curious abo
    • Probably so that brute-forcing the plaintext never sounds like a good idea.

      Yeah I know that the increase is exponential (based on the string length) - but if you only do dictionary words... you'll still hit a few passwords.
      • Short strings are supposed to be salted anyway.
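
        A minimal sketch of what that looks like (Python standard library; the 16-byte salt size is illustrative, not a recommendation):

            import hashlib, os

            def salted_hash(password: bytes):
                salt = os.urandom(16)  # fresh random salt per password
                digest = hashlib.sha256(salt + password).digest()
                return salt, digest    # store both; the salt is not secret

            # The same password now hashes differently for every user, so a
            # precomputed table of unsalted hashes is useless against it.
            salt, digest = salted_hash(b"hunter2")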

    • by Anonymous Coward on Friday December 10, 2010 @09:26PM (#34520460)

      What if they were optimized? Would sleep(10) make them finalists?

    • When you want to slow down a fast hash you just do it a lot of times. See PBKDF2, for example.
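
      For instance, a minimal sketch with Python's standard library (the iteration count here is arbitrary; in practice you tune it to your hardware):

          import hashlib, os

          salt = os.urandom(16)
          # PBKDF2-HMAC-SHA256 iterates a fast primitive many times, turning a
          # cheap hash into a deliberately expensive password-derivation step.
          key = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 200_000)
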
    • by Fry-kun ( 619632 )

      FTFA: "Cryptologist Ron 'The R in RSA' Rivest withdrew his MD6 process - it was highly-rated but conspicuously sluggish"

      Someone simply misread or mistook "sluggish" for "too fast".

    • Well that's mathematically sound reasoning!

      In cryptographic lingo, it means that although the algorithms aren't broken, they have a small security margin; for example, 14 of 16 rounds are broken. Since attacks always get better, it's wiser to pick an algorithm twice as slow with, say, 32 rounds than to ride the bleeding edge. Sure, the faster one gives you twice the speed, but you are only one good research paper away from hell.
      In regard to AES, it's largely agreed in the crypto community that NIST went for the performance, and we now trust an algorithm with

  • Bruce Schneier helped design Skein: http://www.schneier.com/skein.html [schneier.com]
    • But only he is capable of using it for lossless compression...
  • good! (Score:3, Funny)

    by larry bagina ( 561269 ) on Friday December 10, 2010 @09:12PM (#34520376) Journal
    Our lawyers won't let us convert our svn repositories to git since git uses SHA-1, which is known to be vulnerable to collisions. Hopefully, they pick a SHA-3 soon!
    • I find the suggestion that lawyers make purely technical decisions in a company incredibly confusing.

      Isn't this the sort of decision that the IT staff take?
      • Eh, is letting lawyers make purely technical decisions really that much worse than letting the accountants or the non-IT managers do it?
    • "Our lawyers won't let us convert our svn repositories to git since git uses SHA-1, which is known to be vulnerable to collisions."

      That makes perfect sense. Better to use an SCM that gives no assurance that what you get back is the same as what was committed [youtube.com] than use one that was designed in large part to fix that known problem with Subversion, and has been used to make hundreds of thousands of changes to one of the biggest software products on the planet without any such problem.

    • That issue has been debated and refuted before git even existed: http://www.monotone.ca/docs/Hash-Integrity.html [monotone.ca]
      There's also venti of plan9 that uses SHA-1 for content addressing: http://en.wikipedia.org/wiki/Venti [wikipedia.org]

      But of course, they must all be wrong.
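
      For reference, git's SHA-1 content addressing is easy to reproduce; this sketch mirrors what `git hash-object` computes for a blob (a minimal illustration, not git's actual implementation):

          import hashlib

          def git_blob_sha1(content: bytes) -> str:
              # Git hashes "blob <length>\0" + content, not the raw bytes.
              header = b"blob %d\x00" % len(content)
              return hashlib.sha1(header + content).hexdigest()

          # Compare against: printf 'hello' | git hash-object --stdin
          print(git_blob_sha1(b"hello"))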

  • Skein is broken, last I heard...
    • I've been following the progress on the SHA-3 Zoo [tugraz.at] and I haven't seen anything indicating Skein is broken. I've been following Skein with particular interest because I like how it can be tweaked in various ways to serve particular needs.

    • by Sancho ( 17056 ) *

      It was broken, but it has been fixed.

      Actually, Threefish (which Skein relies upon) was broken.

      • Is this the attack by djb that even he hasn't posted clear details of? Or is this a previous attack that Schneier and company solved with their 2nd round tweaks that improved diffusion?

  • Bah! (Score:5, Interesting)

    by jd ( 1658 ) <imipak@ y a hoo.com> on Friday December 10, 2010 @09:29PM (#34520482) Homepage Journal

    None of the good names survived!

    Still, there was a lot of debate on the SHA3 mailing list concerning the criteria, as it was felt that some of the criteria were being abused and others ignored. I, and a few others, advocated an approach where the best compromise solution was the "winner" for SHA3, but the runner-up that was best for some specific specialist problem (and still OK at everything else, since it's a runner-up, and also free of known issues) would then be considered the winner as "SHA3b". That way, you'd also get a strong specialist hash. The idea for this compromise arose because SHA2 is not widely adopted: it IS OK for everything but not good at anything. Some people wanted SHA3 to be wholly specialised, others wanted it to be as true to the original specs as possible; the compromise was suggested as a means of providing both without making the bake-off unnecessarily complex or having to run a whole parallel SHA3 contest for the specialist system.

    The main problem with the finalists is the inclusion of Skein. The use of narrow-pipe algorithms has been widely criticised by people far more knowledgeable than myself because it violates some of the security guarantees that are supposed to be present. The argument for Skein is that the objection is theoretical.

    • I'm really curious as to why Blue Midnight Wish wasn't selected. I've read a bunch of the papers, and nobody seemed able to come up with any concrete reason it was weak, and it's very fast.

    • Which specialist problem would SHA3b have been for? Any random specialist problem, or is there a particularly important one? The former doesn't sound very useful, but I don't know what the latter would be.

    • I, and a few others, advocated an approach where the best compromise solution was the "winner" for SHA3 but the runner-up that was best for some specific specialist problem (and still ok at everything else, since it's a runner-up, and also free of known issues) would then be considered the winner as "SHA3b".

      That would entirely defeat the point of the competition. The purpose is to select one good cryptographic hash that will be called SHA-3 and can therefore be used in places where US government regulations require a strong cryptographic hash.

      The other algorithms aren't going away at the end of the competition. If you have a specialist purpose where one of them is better suited than SHA-3, then you can still pick your own algorithm. It just won't be called SHA-anything.

  • by msauve ( 701917 ) on Friday December 10, 2010 @09:55PM (#34520630)

    Curiously, some of the faster algorithms were eliminated as they were felt to be "too fast to be true."

    Not only is the claimed quote ("too fast to be true") nowhere to be found in the linked article, but there isn't even a basis for that claim.

    • by udittmer ( 89588 ) on Saturday December 11, 2010 @04:19AM (#34521928) Homepage

      Not only is the claimed quote ("too fast to be true") nowhere to be found in the linked article, but there isn't even a basis for that claim.

      There is in fact a basis for that claim, even if it isn't mentioned in that particular article. See http://crypto.junod.info/2010/12/10/sha-3-finalists-announced-by-nist/ [junod.info] for more about that.

    • by Skuto ( 171945 ) on Saturday December 11, 2010 @05:10AM (#34522040) Homepage

      >Not only is the claimed quote ("too fast to be true") nowhere to be found in the linked article, but there isn't even a basis for that claim.

      People read the articles? That's new.

      My original post had no links, because the original announcement was on a password-protected mailing list. If you read it (it's been posted elsewhere since), you will see the statement it refers to.

      Some fast algorithms were eliminated based on partial attacks or on observations that are not real attacks. This means there's a chance we miss out on a faster but still good algorithm, because most partial attacks never develop into full attacks. Using them to eliminate candidates makes the selection a bit of a black art (which shouldn't surprise insiders too much).

      Some people advocated the opposite approach: just pick the fastest/smallest ciphers and see which one isn't broken at the end of the process. Clearly, NIST has taken a very different approach. And given hash function history, an understandable one.
