Encryption Security IT

A Competition To Replace SHA-1

SHA who? writes "In light of recent attacks on SHA-1, NIST is preparing for a competition to augment and revise the current Secure Hash Standard. The public competition will be run much like the development process for the Advanced Encryption Standard, and is expected to take 3 years. As a first step, NIST is publishing draft minimum acceptability requirements, submission requirements, and evaluation criteria for candidate algorithms, and requests public comment by April 27, 2007. NIST has ordered Federal agencies to stop using SHA-1 and instead to use the SHA-2 family of hash functions."
  • Draft location (Score:5, Informative)

    by ErGalvao ( 843384 ) on Wednesday January 24, 2007 @08:20AM (#17736580) Homepage Journal
    The draft can be found (in PDF) here [nist.gov].
  • by Anonymous Coward on Wednesday January 24, 2007 @08:20AM (#17736588)
    ...the magical SHA-24M?
  • by G4from128k ( 686170 ) on Wednesday January 24, 2007 @08:27AM (#17736622)
The security of a given hash/encryption algorithm would seem to be a function of how much effort has gone into breaking it. Lots of algorithms can look good on paper, but until people really tear into the math and code, its true level of unbreakability is unknown. A 3-year competition is not likely to bring enough IQ, theorems, malevolence, or brute CPU cycles to bear against any candidate.

The point is that any attempt to quickly create a new algorithm is likely to create an insecure one. Shouldn't we be trying to create candidate algorithms for the year 2050, to give the algorithms time to withstand attack? Or do we plan to keep creating new algorithms as a serial security-by-obscurity strategy?
    • Re: (Score:3, Interesting)

      by bhima ( 46039 )
The general consensus among experts in cryptology is that a competition is far more effective than other methods of designing algorithms. Presumably the 3 years is a balance between how long the world can wait and how long the experts need to try to crack the candidates. The thing that makes me wonder is why they waited so long to begin it.

Characterizing this process as a "serial security-by-obscurity strategy" is completely wrong, because by the very nature of the competition the algorithm is known from the start.
    • by suv4x4 ( 956391 ) on Wednesday January 24, 2007 @08:43AM (#17736766)
Shouldn't we be trying to create candidate algorithms for the year 2050, to give the algorithms time to withstand attack? Or do we plan to keep creating new algorithms as a serial security-by-obscurity strategy?

This is what a hash is by design: obscurity. For mathematical reasons alone, you can't have a unique hash for your megabyte message crammed into (say) 256 bytes. Or 512, or 1024 bytes.

      And with a public algorithm spec, it's all about whether there's a determined group to turn it inside-out and make it easy to crack.

That said, the ability to break SHA/MD5, given the time and tools, doesn't make hashes useless. A hash by itself can be useless, but coupled with a good procedure that incorporates it, it can raise the security level just enough that it's not reachable by 99.99999...% of the potential hackers out there that will try to break you.

      Security is just an endless race on both sides, and will always be.
      • This is what a hash is by design: obscurity.

        Not unless "obscurity" has been entirely redefined, recently.

        And with a public algorithm spec, it's all about whether there's a determined group to turn it inside-out and make it easy to crack.

A (mathematically) good algorithm can stand up to such scrutiny. A "determined group" wouldn't make it any weaker; they can only (potentially) expose weaknesses in the algorithm that allow it to be circumvented faster than brute force alone.

        Security is just an endless race

        • by suv4x4 ( 956391 )
You know, it's not considered cool anymore to just jump into random posts, split them into sentences and just say "no, it isn't" to every piece without adding anything of substance.

          But.. you'll catch up. Eventually.
You know, it's not considered cool anymore to just jump into random posts, split them into sentences and just say "no, it isn't" to every piece without adding anything of substance.

            It is when the original has no substance to begin with...

            Assertions are a perfectly valid response to other assertions.
      • Re: (Score:3, Informative)

        >This is what a hash is by design: obscurity.

        "Security through obscurity" means trying to depend on indefensible secrets. The classic example from 19th century crypto theory is that it's stupid to try to keep your crypto algorithm secret, so you should keep keys secret instead.

        Security through obscurity leads to worldwide breaks when it fails.

        The existing secure hashes have nothing obscure about them. The algorithms are published and open for review. The fact that they're vulnerable to brute force is not
      • This is what a hash is by design: obscurity. For mathematical reasons alone, you can't have a unique hash for your megabyte message crammed in (say) 256 bytes.

Your point about the impossibility of producing unique M-byte hashes for every N-byte message (where N>M) is of course mathematically correct. But instead of thinking of hashes as working via obscurity, think of the function of an ideal hash as the impossibility of finding data with a matching hash without so radically changing the input

        • For example, if someone can produce a page of text that has the same hash value as garbage, or as a video of a monkey, the value of the hash function is not diminished.
Due to the block-based nature of many hash algorithms and the nature of many file formats, for many applications if someone can find ANY two inputs that give the same hash, you are in shit. This is what has essentially happened with MD5, and it is dangerously close (read: isn't known to have been done yet, but someone with the NSA's resources could
Due to the block-based nature of many hash algorithms and the nature of many file formats, for many applications if someone can find ANY two inputs that give the same hash, you are in shit

            Could you explain this a bit more fully?

With MD5 and SHA you essentially run the process on one block of data at a time, using the results from one block as a set of "starting values" when doing the next block.

You choose a format (PDF is a good one) where you can easily build conditionals into the document structure.

Then you prepare a header which is a whole number of blocks in size and run it through your MD5 calculator.

Then you run your collision searcher, telling it what starting values you want to find a collision for. This then produces two blo
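
              To make that block-chaining point concrete, here is a rough Python sketch of the iterated structure being described. The block size matches MD5/SHA-1, but the compression function below is a stand-in, not the real round function, and the padding rules are omitted:

                import hashlib

                BLOCK = 64  # MD5 and SHA-1 both consume the message in 64-byte blocks

                def toy_compress(state: bytes, block: bytes) -> bytes:
                    # Stand-in for the real compression function: mixes the current
                    # chaining state with one message block to produce the next state.
                    return hashlib.md5(state + block).digest()

                def iterated_hash(message: bytes) -> bytes:
                    state = b"\x00" * 16  # fixed initial value (IV); real padding rules omitted
                    for i in range(0, len(message), BLOCK):
                        state = toy_compress(state, message[i:i + BLOCK])
                    return state

              Since the only thing carried from one block to the next is that chaining state, a pair of colliding blocks found for a known starting value can be swapped in or out without disturbing anything hashed after them, which is what the PDF trick described above exploits.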
              • Ah, I see. Quite clever! Thank you for the excellent explanation.

My feeling is that this technique doesn't sufficiently qualify a hash as broken, since anyone who digitally signs a doc and either remembers the gist of it or keeps a copy for themselves can require the folks using this technique to produce their copy of the doc for discovery... and of course it would never stand up to technical scrutiny; the monkeying would be obvious.

      • by mrmeval ( 662166 )
        sha1sum sha224sum sha256sum sha384sum sha512sum

        I have those on my system.
    • The point is that any attempt to quickly create a new algorithm is likely to create an insecure one. Shouldn't we be trying to create candidate algorithms for the year 2050...

      Competitions like this and the AES competition aren't about inventing new cipher designs; they're about taking the state of the art and creating a standard. The ideas underlying Rijndael are essentially the same as those in Square, which was published back in 1997; while nearly all of the ciphers submitted to the AES competition were
    • Re: (Score:3, Informative)

      by duffbeer703 ( 177751 ) *
It's not like everyone is starting from a blank slate on the first day of the contest. It's basically a call for the math geeks who design this stuff to polish up whatever they are working on.
    • by Kjella ( 173770 ) on Wednesday January 24, 2007 @09:47AM (#17737416) Homepage
Let's start with the facts: SHA-1 is cryptographically "broken" in the sense that there's a "better than brute force" attack for finding a colliding pair of two random strings, taking about 2^63 operations instead of 2^80.

      It's not a practical attack because 2^63 is still a huge number.
      It's not a "find a collision to a known string" attack which would be stage 2.
      It's not a "find a collision to a known string by appending to a fixed string" attack which would be stage 3.
It is a scratch in the armor which creates doubt about whether there are more powerful attacks, nothing more.

There are strong alternatives like SHA-512 and Whirlpool (AES-based) which you can use today; if you're paranoid, more is better. Is it urgent? Not really; even a practical stage 1 or stage 2 attack would just be "stuff breaks, files corrupt, migrate away". The only one with really nasty consequences is stage 3, with code injection attacks in software and such.
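
      Rough arithmetic behind those numbers, assuming the 2^80 figure is just the generic birthday bound for a 160-bit hash (a quick Python check, nothing more):

        n_bits = 160
        birthday_bound = 2 ** (n_bits // 2)   # generic collision search: 2^80
        published_attack = 2 ** 63            # reported cost of the SHA-1 collision attack
        print(birthday_bound // published_attack)  # 131072: about 2^17 times less work, still enormous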
I'm sorry, but there are purposes to which hash functions are put in which even a 'stage 1' (by your nomenclature) break creates serious weaknesses. Basically, it creates a situation in which any document of any kind that Alice creates and Bob signs can have a malicious substitute, also created by Alice, that Bob will have apparently signed.

        The biggest example of this in modern usage is a Certificate Authority or PGP key signature. I would call both of those pretty important.

        The required abilities of cryptog

  • ROT-7 (Score:2, Funny)

    by Chapter80 ( 926879 )
    Rot-7. Everyone's doing ROT-13. I'm going to suggest Rot-7.

    Think about it. You walk into a video store and you see Rot-13 and right next to it you see Rot-7 --which one you gonna spring for?

    Not 13. Seven. Seven Little monkeys sitting on a fence...

    • What about a scheme based on ternary operators - ROT-6.5? 4 times and you're back to where you started, but anyone who is expecting ROT-13 will give up after the first try!
  • by RAMMS+EIN ( 578166 ) on Wednesday January 24, 2007 @08:31AM (#17736656) Homepage Journal
    Schneier proposed such a competition in March 2005: http://www.schneier.com/crypto-gram-0503.html#1 [schneier.com]
    • Re: (Score:3, Informative)

Yeah. 80% of the crypto world called for one too; they're just not as loud.

The thing is these kinds of contests take money and time to get running, and (at least initially) NIST didn't have the resources to get a competition going. So what they did was organize a hash workshop for Halloween 2005, and hold a second one last August following the Crypto conference, where initial planning for the contest took place (a workshop that Schneier didn't bother to attend -- I guess he had yet another book to sell).
  • Good News (Score:4, Interesting)

    by Ckwop ( 707653 ) * on Wednesday January 24, 2007 @08:32AM (#17736662) Homepage

The amount of research done into hash functions is nothing like the amount that goes into ciphers. I'm not really sure why this is the case, because hashes are much more important than ciphers. Hashes are used in MACs to protect the integrity and authenticity of a message.

Ask yourself this: is it worse that somebody can read your SSH connection, or that somebody can hijack the channel? The reasons for wanting a good hash function suddenly become very clear.

It's true that hashes are becoming less important as a result of AEAD modes. But they have uses far beyond MACs, and it's good to see a competition from NIST to stoke research into those primitives.

    Simon.

    • Hashes are more important than ciphers. But hashes can only be secured so far. Beyond that, the return is minimal. All hash algorithms will eventually be cracked. It's the nature of hashing that the signature is not necessarily unique. Otherwise, it'd be called compression rather than hashing. The goal is to find an algorithm that will produce unique results under the most common conditions, and be least likely to produce the same result for two messages with purely algorithmic differences.

      On the other hand
This doesn't make any sense to me at all. Studied as theoretical objects, hash functions are "keyed" just as ciphers are. Just like ciphers, they are strong only against an attacker with limited computing power, not against an infinitely powerful adversary. Just like ciphers, we have no way of proving them secure, but through cryptanalysis we gain confidence in them.

        And quantum computing has nothing to do with it; there are algorithms that are believed to be strong even in the face of quantum computing,
      • Hashes aren't "more important" than ciphers. They're used for different things - apples and oranges.

        "All hash algorithms will eventually be cracked". What do you mean by "cracked"? How broken does it need to be for it to be "cracked"? Enough that the US government could do it, that a crime ring could do it, that spotty teenages in their bedrooms can do it?

        "On the other hand, a good cipher can potentially be technologically unbreakable". The only unbreakable cipher is the one-time pad. We already know t
I have heard, but don't have a source, that elliptic curve cryptography is broken by quantum computing as well. I did a bunch of research when doing the initial design of CAKE [cakem.net] because I figure that quantum computing will be a solved engineering problem at some point.

  • by Srin Tuar ( 147269 ) <zeroday26@yahoo.com> on Wednesday January 24, 2007 @08:36AM (#17736702)

    Does anyone know whether or not common protocols and formats such as TLS, ssh, X.509 certs, etc are being updated to use newer hash functions?

It's easy to change parts of a self-contained system, such as password hashes, but common protocols require interoperability and standards compliance.

This is actually a fairly interesting situation, where NIST certification and platform interoperability may actually be at odds with each other.
       
    • by cpuh0g ( 839926 ) on Wednesday January 24, 2007 @08:41AM (#17736746)
      Most modern protocols and standards are designed to be agile. Basically, this means that they don't mandate any one particular algorithm, but rather are designed such that alternatives can be used. Otherwise, many specs would be woefully out-of-date every few years as computing power and cryptographic algorithms advance. The 3 examples you give above are all considered "agile", read the specs and note that they use algorithm identifiers and allow for a wide variety of different algorithms to be used, none of the above are strictly bound to use SHA-1 or MD5.
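
      A minimal illustration of that kind of agility, using Python's hashlib; the point is only that peers agree on an algorithm identifier rather than hard-coding one function (the identifiers below are hashlib's, not any particular protocol's):

        import hashlib

        def digest(algorithm_id: str, data: bytes) -> bytes:
            # The protocol carries an algorithm identifier; swapping in a
            # stronger hash is a one-identifier change, not a format change.
            return hashlib.new(algorithm_id, data).digest()

        print(digest("sha1", b"hello").hex())
        print(digest("sha256", b"hello").hex())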

That doesn't seem to be the case.

        Looking at the RFC for TLS:

        http://www.ietf.org/rfc/rfc2246.txt [ietf.org]
It seems SHA-1 and MD5 are the only options for hashes in TLS 1.0.

Not to mention that the vast majority of existing implementations would not be interoperable, even if it is technically possible to update the protocol to support newer hash algorithms. (There are ASN.1 IDs allocated, but the fixed-size buffers for the output of various hash functions may differ, etc., so protocol changes seem mandatory.)
While I agree that TLS and SSL and the like are flexible, the real barrier is not the specification but how long it takes for a critical mass of adoption to make a revised specification useful.
    • by Kjella ( 173770 )
Typically, they send some sort of algorithm list, choose the best algorithm they both have, and then use a hash to make sure the algorithm list was transferred successfully (so you can't downgrade security by doing a man-in-the-middle on the algorithm list). So basically, replacing SHA-1 starts the day the first "better than SHA-1" client connects to a "better than SHA-1" server, without any backward compatibility issues.
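
      A toy sketch of that negotiation pattern; the algorithm names and preference order are illustrative only, and a real TLS handshake is considerably more involved:

        import hashlib

        PREFERENCE = ["sha512", "sha256", "sha1"]  # strongest first (illustrative ordering)

        def negotiate(client_algs, server_algs):
            common = [a for a in PREFERENCE if a in client_algs and a in server_algs]
            if not common:
                raise ValueError("no mutually supported hash")
            chosen = common[0]
            # Hash the full offered lists with the chosen algorithm so a
            # man-in-the-middle can't silently strip the stronger options.
            transcript = ",".join(client_algs) + "|" + ",".join(server_algs)
            return chosen, hashlib.new(chosen, transcript.encode()).hexdigest()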
      • by Bert64 ( 520050 )
But if you consider the length of time between AES encryption becoming available for SSL/TLS use and Microsoft actually supporting it (I believe they still don't), it's going to be years before these new hashing algorithms appear in Microsoft products.
  • How about SHA-512? (Score:4, Interesting)

    by ngunton ( 460215 ) on Wednesday January 24, 2007 @08:43AM (#17736770) Homepage
    Anybody know if SHA-512 is mathematically vulnerable to the same kind of attack as SHA-1 (only presumably requiring more computing power)? Or is it really a different kind of beast?
    • by iabervon ( 1971 )
SHA-512 and SHA-256 are essentially the "SHA-2" family, with different details (leading to different output lengths). The SHA-1 flaw doesn't apply to them, which is why NIST is saying to move from SHA-1 to SHA-256 (SHA-512 is a tad excessive if you weren't previously worried about length at 160 bits and are leaving SHA-1 due to algorithm weakness).

      Also, many uses for a secure hash are still safe with SHA-1 as far as has been published; the only issue presently is that people could prepare pairs of colliding d
  • I always wonder about what would happen if we used multiple hash functions together. E.g. you provide an SHA-1 hash, an MD5 hash, and an RMD-160 hash, all for the same message. Would that be harder to fool (i.e. make the system think you got the original, but it's actually a forgery) than one hash function that generated as many bits? What about weaknesses in the individual hash functions; would you be worse off because a flaw in any one of your hash functions affects you, or better off, because you have mo
    • by rbarreira ( 836272 ) on Wednesday January 24, 2007 @09:05AM (#17736984) Homepage
      Doesn't work very well. Read this:

http://www.mail-archive.com/cryptography@metzdowd.com/msg02611.html [mail-archive.com]
      • by RAMMS+EIN ( 578166 ) on Wednesday January 24, 2007 @09:37AM (#17737286) Homepage Journal
        Thanks. The post you linked to precisely answers both my questions. I'll restate the questions and copy the answers from the post for /.ers' convenience.

        1) Would multiple hash functions be harder to fool (i.e. make the system think you got the original, but it's actually a forgery) than one hash function that generated as many bits?

        No. In fact, the multiple hash functions perform worse:

        ``Joux then extended this argument to point out that attempts to increase
        the security of hash functions by concatenating the outputs of two
        independent functions don't actually increase their theoretical security.
        For example, defining H(x) = SHA1(x) || RIPEMD160(x) still gives you only
        about 160 bits of strength, not 320 as you might have hoped. The reason
        is because you can find a 2^80 multicollision in SHA1 using only 80*2^80
        work at most, by the previous paragraph. And among all of these 2^80
        values you have a good chance that two of them will collide in RIPEMD160.
        So that is the total work to find a collision in the construction.''

        2) Does using multiple hash functions protect you against the case where one of them gets broken?

        Basically, yes. Just note that your total security is no better than the security of the best hash function (as explained in point 1).
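
        For reference, the construction under discussion is just concatenation of the two digests. A hedged Python sketch (note that "ripemd160" is only present in hashlib if the underlying OpenSSL build provides it):

          import hashlib

          def concat_hash(data: bytes) -> bytes:
              # H(x) = SHA1(x) || RIPEMD160(x): 320 bits of output, but per Joux
              # only about 160 bits of collision resistance.
              return hashlib.sha1(data).digest() + hashlib.new("ripemd160", data).digest()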
      • Re: (Score:2, Informative)

        by bfields ( 66644 )
        You could take that as a warning against feeding the output of hash functions to each other in series; the OP however was asking about calculating hashes in parallel, and concatenating the output of the different hash functions. Seems to me that that's trivially at least as strong as the strongest of the individual components, but whether it's likely to be worse or better than a single hash of comparable output size sounds like a crapshoot.
        • Read the link properly, that's not what it says or talks about.
          • by bfields ( 66644 )
            OK, (reading to the end this time!) so the point as I understand it is that existing hashes have the property that it's much easier than it should be to find n-way collisions (even when finding ordinary 2-way collisions is hard), and that makes it easier than it should be to turn a collision in one of the concatenated hashes to a collision on the concatenation, by generating an n-way collision for n large enough that it contains collisions on the other hash(es). So while the result is stronger than any of
  • How frustrating! (Score:3, Interesting)

    by Paul Crowley ( 837 ) on Wednesday January 24, 2007 @08:45AM (#17736798) Homepage Journal
The unfortunate thing is that in order to win this competition, the hash function submitted will have to be pretty conservative - very much like the SHA-512 family. There isn't time to analyze anything really new and have enough confidence in its security to bless it as the new standard from here on. But if these attacks (and especially the recent attacks on the whole Merkle-Damgard structure) show us anything, it is that the old way turns out not to be the best way to build hash functions, and that more innovative ideas (e.g. Daemen et al's "belt and mill" proposal RADIOGATUN) should be the way forward.

What we need is for NIST to pull the rug out from under everyone near the end, and say "thanks for putting huge amounts of energy and hard work into designing and attacking all these hash functions, now you can all make new proposals based on what we've all learned and we'll start over again!"
    • As other people have pointed out, I'm not necessarily sure that these competitions really result in a whole lot of new development work per se. Rather, they serve as encouragement to researchers in the field, to take whatever they've been working on for the past few years, tidy it up and make it publishable, and submit it as a candidate for standardization.

      The research into new functions progresses more or less on its own in the academic world most of the time. These competitions basically seek to tap into
I disagree. The AES competition really galvanized the research community and resulted in some exciting innovations in cryptography - e.g. the application of bitslicing to the construction of ordinary primitives a la Serpent - and cryptanalysis - e.g. Lucks's "saturation attack". And we're seeing similar effects with the eSTREAM competition, which in turn results from the way that all the stream ciphers proposed for NESSIE were broken.
  • One Word.... (Score:5, Interesting)

    by tomstdenis ( 446163 ) <tomstdenis AT gmail DOT com> on Wednesday January 24, 2007 @08:46AM (#17736814) Homepage
    WHIRLPOOL.

    It's a balanced design, an SPN to boot.

The big problem with the SHAs [and their ilk] is that they're all UFNs [unbalanced Feistel networks]; in particular, they're source-heavy, which means the branch/diffusion is minimal (e.g. it's possible to make inputs collide and cancel out differences).

    SPN [substitution permutation networks] like WHIRLPOOL are balanced in their branch/diffusion.

Best of all, WHIRLPOOL is already out there. Just sign the paper!

    Tom
    • Whirlpool is pretty slow. What do you think of Radiogatun?
      • Whirlpool is slower than SHA ... but SHA is insecure so it's a meaningless comparison. memcpy() is faster than AES-CTR!!! :-)

        Never heard of Radiogatun. To be honest i'm not really into crypto as much as I used to be. The whole debacle with the LT projects threw me off. I'd much rather play a Sonata on my piano than read a dry boring paper about cryptography. :-)

        Tom
Of course it's also slower than SHA-256 and SHA-512, which have no reported weaknesses, so... not so meaningless.
Presumably this is because people lost faith in the UFN approach. If that's the case, comparing new designs to the faster, but generally accepted as insecure, designs is neither wise nor prudent.

            I'm not saying WHIRLPOOL is perfect, but it's definitely a step in the right direction. Maybe this contest will spur a faster hash which is also as mathematically sound.

            Tom
  • Comment removed based on user account deletion
    • Comment removed based on user account deletion
    • The chance of a different message generating the same hash basically only depends on the number of bits used in the hash. Sure, a combination of hash functions would give more bits. However, I strongly suspect that the combination of two hash functions to create one final hash would always be worse in that respect than a carefully designed hash function with the same number of bits.

No hashing algorithm can distinguish more than 2^bits different documents.

    • I was wondering the same thing, and apparently so were a few other people besides. There's another discussion of it further up in the thread, and the quote which seems to be the final answer doesn't seem to be too hot on the idea. Here it is [slashdot.org] (quoting here from another source):

      "...attempts to increase the security of hash functions by concatenating the outputs of two independent functions don't actually increase their theoretical security. For example, defining H(x) = SHA1(x) || RIPEMD160(x) still gives you

  • FYI (Score:2, Offtopic)

    by trifish ( 826353 )
    This "news" is several months old.

    Oh well I know, it's Slashdot.
    • No, it's not. The draft requirements and evaluation criteria were announced just yesterday.
      Unless you live in a place where January 23, 2007 is several months ago....
      • by trifish ( 826353 )
        Wrong. If you read the first line of the summary, it says: "NIST is preparing for a competition to augment and revise the current Secure Hash Standard." THAT is several months old news.
        • Yes, I can see how a line containing some of the background of the story would change the fact that yesterday's publication of the draft requirements for the hash candidates actually occurred yesterday and not several months ago.
          • by trifish ( 826353 )
            You're a bit slow. Anyway "yesterday's publication of the draft requirements for the hash candidates" is still part of the preparations for the contest. They just elicit comments on the submission requirements. The contest has not begun. And now the point: The preparations began MONTHS ago. Got it now?
The title is wrong. It says: "A Competition To Replace SHA-1". But it's to replace the whole SHA family, which includes both SHA-1 and SHA-2.

    SHA-2 includes SHA-256 and SHA-512. Why the whole SHA family? Because its design is not very trustworthy anymore since the "Chinese" attacks in 2005.
    • Re:Wrong (Score:4, Informative)

      by Fahrenheit 450 ( 765492 ) on Wednesday January 24, 2007 @11:05AM (#17738410)
Again you are wrong (and somewhat right about the incorrect title at the same time, I suppose). The point of this workshop is to revise and amend FIPS 180-2. Now, while the SHA-2 line of hashes is laid out in FIPS 180-2, it is not the case that SHA-2 and the like will be thrown out. They meet the requirements laid out in the call, and frankly NIST would be insane not to make SHA-2 one of the workshop's submissions. It may very well fall out that SHA-2 is just fine and indeed the best candidate submission.

      As for the Chinese attacks, they haven't shown any real applicability to SHA-2 as of yet.
      • by trifish ( 826353 )
        As for the Chinese attacks, they haven't shown any real applicability to SHA-2 as of yet.

        The keyword is "yet". The only thing that "protects" SHA-2 from the attacks are a bunch of extra rotations. If you think NIST and NSA are going to rely on that as a strong form or protection, you're a bit naive.
Can we make a real competition and call it Hashing Idol, where every week another function gets voted out? Or they could compete head to head. Two functions enter the ring. One function leaves.
    ...
    Have I been watching too much TV?

  • by evilviper ( 135110 ) on Wednesday January 24, 2007 @09:50AM (#17737444) Journal
    I have a perfect solution to the hashing problem, for verifying the data integrity between two points...

    You simply have to find autistic twins. The one at the source looks through the MB file, then writes a hash, explaining that it "smells like 5 green triangles". If the twin at the destination agrees, you know you have a match.

It's nearly impossible even to brute-force this method... I mean, you need to covertly acquire a sample of their DNA and wait several years for the clone to mature.

    Of course, this method's weakness is that it doesn't scale-up effectively. There are only so many autistic twins out there, and human cloning technology is still quite expensive.
    • Re: (Score:3, Funny)

      by T5 ( 308759 )
The problem with your idea is that your post smells like 5 green triangles too! As do a lot of other posts on /. Like this one.
My expert advice: now that we've seen what happened to the SHA-1 family, they should just skip the inevitable upcoming round of exploits for the SHA-2 family and go straight to a new SHA-3 family.
  • by iamcf13 ( 736250 ) on Wednesday January 24, 2007 @05:09PM (#17744130) Homepage Journal
    ...is provably collision-resistant.

    http://senderek.de/SDLH/ [senderek.de]

    'Proof' by Ron 'RSA' Rivest...

    http://diswww.mit.edu/bloom-picayune/crypto/13190 [mit.edu]

    SDLH is simple and secure to any number of bits of security desired once set up properly.

    Factoring the modulus in SDLH is the only way to break it.

    For that you need a state of the art number factoring algorithm (currently General Number Field Sieve [wikipedia.org] or Shor's Algorithm [wikipedia.org]).

    Case closed.
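
    A minimal sketch of the SDLH idea as described, assuming g and N come from a trusted setup with a large N = p*q whose factors are then kept secret or discarded (toy code, not production-ready):

      def sdlh(message: bytes, g: int, N: int) -> int:
          # H(m) = g^m mod N, with the message bytes read as one big integer
          # exponent; finding a collision is provably as hard as factoring N.
          m = int.from_bytes(message, "big")
          return pow(g, m, N)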
    • And who would you trust to generate the shared N that everybody uses? Whoever knows p and q will trivially be able to break the hash function every way you can.

Other serious problems with this hash function are that (1) the output is much too long, (2) it's far too regular to substitute for a random oracle where one is needed, and (3) it's much, much too slow.

      It's cool - and the proof is cool - but it's not a serious contender for normal applications.
      • by iamcf13 ( 736250 )
        And who would you trust to generate the shared N that everybody uses? Whoever knows p and q will trivially be able to break the hash function every way you can.

        Write your own bignum package and run it on an un-networked computer. I did just that not long ago in 100% C code (Visual C++) using only a small __asm function to read the Pentium CPU cycle counter and the CString 'datatype'.

FWIW, it generated two 1024(+)-bit, 100% provable prime numbers that could be multiplied together into a modulus in under 10 minu
Sun will propose an encryption scheme; it will be rejected; and Sun will release an open source alpha version of it, written in slow and unusable form, then make a press release about how their rejected product will replace something that isn't an encryption scheme at all. *cough* Fortress *cough*

  • From the official requirements PDF:

    "A.3 The algorithm must support
    224, 256, 384, and 512-bit message
    digests, and must support a maximum
    message length of at least 264 bits."

Someone either forgot the ^ caret, or thinks the world can get by on 33 bytes of data at a time.
