PGP Moving To Stronger SHA Algorithms

PGP Corp. is moving to stronger SHA algorithms (SHA-256 and SHA-512) as a consequence of the research conducted by the team at Shandong University in China that broke the SHA-1 algorithm. (See this earlier story for more information on the SHA-1 vulnerability.)
  • Not a solution (Score:4, Insightful)

    by Esine ( 809139 ) <admin@tohveli.net> on Sunday February 20, 2005 @05:02PM (#11730353) Homepage
    They're just trying to avoid the problem, not solve it. Moving to SHA-512 is not a solution. :/
    • Re:Not a solution (Score:5, Insightful)

      by anothergene ( 336420 ) on Sunday February 20, 2005 @05:06PM (#11730377) Homepage Journal

      They're just trying to avoid the problem, not solve it. Moving to SHA-512 is not a solution. :/


      Could also be a stopgap solution. At least it will be harder to break in the meantime, until a real solution is devised.

    • Re:Not a solution (Score:2, Insightful)

      by Storlek ( 860226 )
      What solution is there? Moving to a stronger hashing algorithm is surely better than doing nothing at all.
    • Re:Not a solution (Score:5, Insightful)

      by Anonymous Coward on Sunday February 20, 2005 @05:15PM (#11730422)
      What, then, is?

      Moving to Tiger? Or Whirlpool? Or RIPEMD-160?

      The amount of effort it took to discover the weakness in SHA-1 was incredible, and SHA-256 and SHA-512 are even more complex. Tiger and Whirlpool are relatively untested, and RIPEMD-160 was put out as an update after the original RIPEMD was broken (much like SHA-0).

      SHA-256 and SHA-512 are the most likely successors to the throne, because they're based on an algo that is STILL, despite being "broken", known to have very strong collision resistance.
      • Re:Not a solution (Score:5, Informative)

        by Anonymous Coward on Sunday February 20, 2005 @06:30PM (#11730847)
        As it turns out, PGP (well, GPG) already has support for RIPEMD160 built into it. To use this:
        gpg --clearsign --digest-algo RIPEMD160 foo.txt

        gpg -b --armor --digest-algo RIPEMD160 foo.tar.gz

        (First one: clear signature; second one: detached signature. Either can be checked afterwards with gpg --verify.)
    • Re:Not a solution (Score:4, Interesting)

      by Mr2cents ( 323101 ) on Sunday February 20, 2005 @05:20PM (#11730456)
      Why not sign using two hashes? Then you'd need to find a chunk of data that collides under two different hashing algorithms at once. Let 'em chew on that one!
      • Re:Not a solution (Score:4, Informative)

        by Dolda2000 ( 759023 ) <fredrik.dolda2000@com> on Sunday February 20, 2005 @07:32PM (#11731195) Homepage
        I've already replied [slashdot.org] to a similar question.

        In short, having two different hashes doesn't add more security (at least not significantly more) than just doubling the hash length.
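        To make that concrete, here is a minimal sketch (Python; the combined_digest helper is purely illustrative, not anything PGP actually does) of what signing with two hashes amounts to -- effectively one longer digest:

        import hashlib

        def combined_digest(data):
            # SHA-1 (160 bits) plus MD5 (128 bits), concatenated: one 288-bit value.
            # An attacker needs a single input pair that collides under both at once.
            return hashlib.sha1(data).digest() + hashlib.md5(data).digest()

        print(combined_digest(b"my will and testament").hex())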

        • Re:Not a solution (Score:2, Interesting)

          by Anonymous Coward
          having two different hashes doesn't add more security (at least not significantly more) than just doubling the hash length

          Sure it does, because you're talking about two different algorithms. If a fatal flaw is found in one algorithm, you're still left with *something*, vs. being left with no pants.
          • Re:Not a solution (Score:3, Insightful)

            by darkonc ( 47285 )
            Sure it does, because you're talking about two different algorithms.

            Not really. SHA1+MD5 can be expressed as a single algorithm that produces the combined signature... thing is, you now end up with one algorithm broken in two different ways, which may actually allow for an easier breakage down the road (it's a bit harder to predict, given that you're now looking at a relatively ad hoc concatenation).

            It's not that it's a known breakage -- rather that you're now looking at a very ad hoc union that

      • Re:Not a solution (Score:2, Informative)

        by vagabond_gr ( 762469 )
        Such use-whatever-you-can solutions can indeed make an intruder's life harder, but they cannot offer true security. Even using two algorithms, concurrent collisions will exist (due to the infinite number of collisions for each algorithm). If someone can find collisions for each hash function, nothing guarantees that he will not find one for both. The problem is that the algorithm's security foundations are shaken, so we can no longer trust it.

        It's like using two passwords instead of one. Of course it's better
    • Re:Not a solution (Score:5, Informative)

      by CajunArson ( 465943 ) on Sunday February 20, 2005 @05:31PM (#11730514) Journal
      I do see your point, but remember that you could argue that RSA is useless because, if I did it over a 32-bit key space, it's easy to prime-factorize any number, and that therefore increasing it to a 2048-bit space is "just avoiding the problem". As CPU power increases it becomes more economical to move to more complex hash/encryption schemes over larger key spaces. And there's even good news: it's a hell of a lot cheaper for me to move my PC to a new SHA system than it will be to crack it, even with the algorithm issues.
      • Re:Not a solution (Score:3, Informative)

        by theLOUDroom ( 556455 )
        I do see your point, but remember that you could argue that RSA is useless because, if I did it over a 32-bit key space, it's easy to prime-factorize any number, and that therefore increasing it to a 2048-bit space is "just avoiding the problem".

        You are comparing apples to oranges.
        We're talking about a mathematical breakthrough, not the release of the newest processor.

        This problem isn't arising because we have faster processors, it's arising because someone has discovered a fundamental flaw in the algori
    • Re:Not a solution (Score:3, Informative)

      It seems like the way to fix the problem (making the encrypted data difficult enough to decode using brute force methods) is to move up to stronger algorithms. This happens continuously; it doesn't mean that the old algorithm was initially flawed, but rather that it has become obsolete due to increasing computing power. As computing power increases, it takes less time to defeat an algorithm using a trial-and-error brute force process.

      The user should be able to choose from several algorithms dependin
      • It seems like the way to fix the problem (making the encrypted data difficult enough to decode using brute force methods) is to move up to stronger algorithms. This happens continuously; it doesn't mean that the old algorithm was initially flawed, but rather that it has become obsolete due to increasing computing power.

        Actually, in this case the algorithm IS flawed.
        The issue here is not that we have more computing power, it's that someone has found a mathematical method to "beat" SHA1.

        The entire point of a
    • Re:Not a solution (Score:5, Informative)

      by Dolda2000 ( 759023 ) <fredrik.dolda2000@com> on Sunday February 20, 2005 @05:50PM (#11730594) Homepage
      It's the same solution that's been used with RSA for ages. When 512-bit keys were broken, 1024-bit keys were recommended. Now that those are almost broken, 2048-bit keys are recommended. I hear that some are already recommending 4096-bit keys.

      There's no fool-proof "solution" to this problem. The key (no pun intended) is to keep a high enough ratio between hash length (or key length) and the kind of processing power that potential crackers (including the NSA) can be thought to have access to.

      Thus, as the processing power of the world increases, we increase the hash/key lengths. There's nothing strange about that, if you ask me -- especially considering how the required processing power increases exponentially with the hash/key length in use.
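      To put rough numbers on that exponential growth, here's a back-of-the-envelope sketch (Python; the 10^12 hashes/second rate is an arbitrary assumption for illustration, not a benchmark of any real cracker):

      # Years to find a collision by brute force (birthday bound: ~2^(n/2) hashes)
      # at an assumed 10^12 hashes per second.
      RATE = 10**12
      for n in (128, 160, 256, 512):
          years = 2 ** (n // 2) / RATE / (3600 * 24 * 365)
          print(f"{n}-bit hash: ~{years:.1e} years")

      Every extra bit of hash length doubles the attacker's work, while hardware only gets steadily faster -- which is why bumping the length stays ahead of the curve.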

      • Re:Not a solution (Score:5, Insightful)

        by uhoreg ( 583723 ) on Sunday February 20, 2005 @07:50PM (#11731283) Homepage
        1. SHA-256 is not just SHA-1 with more bits; it's a different algorithm. So moving from SHA-1 to SHA-256 is not the same as moving from RSA-512 to RSA-1024. (However, moving from SHA-256 to SHA-512 would be.)
        2. RSA was never broken in the same way that SHA-1 is now (allegedly -- since the paper is not yet published) broken, or that MD5 is broken. SHA-1 is broken in the sense that the researchers were able to find a collision in much less than the expected 2^80 calculations. This indicates that the algorithm is weaker than previously believed, and may soon result in much quicker attacks. RSA-512 is broken because computing power has caught up with it, and it's possibly economical to build a computer that can crack 512-bit RSA keys. Weaknesses that are solely due to key/hash size may be fixed by switching to a larger size. Weaknesses that are inherent in the algorithm may not be fixable this way.
        • RSA was never broken in the same way that SHA-1 is now

          That's bullshit, 1024-bit keys were supposed to be safe for the foreseeable future ("military strength"), but increasingly better factorization algorithms have been devised over time.

    • They are pretty forward about it. After all, PGP stands for Pretty Good Privacy, not Totally Secure Privacy. If you want it to be totally secure you have to use one-time pads or quantum cryptography. But 99.99% secure is enough for most people.
    • Or... they could do a Microsoft and pretend that the vulnerability doesn't exist until after a patch has been released.

      Hmmm.... methinks that perhaps moving to SHA-512 in the meantime might be a safer alternative.
    • What do you propose? That they change this universe's mathematics, so SHA-1 is secure again?
  • by ABeowulfCluster ( 854634 ) on Sunday February 20, 2005 @05:02PM (#11730354)
    I think I'll wait for the SHA-65000 algorithm instead... it'll be harder to crack.
  • Come on... (Score:4, Informative)

    by debilo ( 612116 ) on Sunday February 20, 2005 @05:03PM (#11730362)
    ... who broke the SHA-1 algorithm.

    They did not break it. They just found a way to reduce the number of trials needed to find a collision.
    • Re:Come on... (Score:5, Insightful)

      by no parity ( 448151 ) on Sunday February 20, 2005 @05:07PM (#11730383)
      They did not break it. They just found a way to reduce the number of trials needed to find a collision.

      That is what's usually referred to as "breaking" a hash algorithm.

      • Re:Come on... (Score:4, Informative)

        by octaene ( 171858 ) <[moc.liamg] [ta] [nosliwsb]> on Sunday February 20, 2005 @05:11PM (#11730405)
        Finally, someone who has a clue! no parity is absolutely right. All they did was provide a hash that produces 1 collision as a proof that they have an algorithm that makes finding collisions easier. This doesn't mean we all need to rush out and change our public/private keys...
        • Re:Come on... (Score:5, Informative)

          by menscher ( 597856 ) <menscher+slashdot@u i u c . e du> on Sunday February 20, 2005 @05:29PM (#11730505) Homepage Journal
          All they did was provide a hash that produces 1 collision

          No, they didn't. No hash has been produced. Only a claim that they can do it in 2^69 operations. The collisions they gave were for SHA-0 and for a reduced-round version (58 rounds instead of 80) of SHA-1. Unless someone can extend the break (which is likely) then it's still quite secure.

        • Re:Come on... (Score:2, Interesting)

          by Anonymous Coward
          They didn't produce a hash, they produced a technique better than brute force for finding collisions.

          The way you describe it makes it sound like they stumbled upon a collision.

      • That is what's usually referred to as "breaking" a hash algorithm.

        ... by people who don't know better.

    • Re:Come on... (Score:2, Insightful)

      by Anonymous Coward
      They did not break it. They just found a way to reduce the number of trials needed to find a collision.

      And what exactly would you consider broken? Since when was "it don't work as we thought" not good enough?

      Let me give you an example. You sign your Last Will and Testament digitally. You can do that; the courts will uphold it. Now, these fine researchers can concoct a new Will that says something different, but still appears to be signed by you.

      Of course you already knew they could do that, but you thou
      • Re:Come on... (Score:3, Interesting)

        by sahonen ( 680948 )
        Okay, even if you can find a collision in, say, a day... Great. You can find a collision in a day. But how many collisions will you have to sort through before you find one that even resembles a will, especially one that, say, gives all your property to me?
        • Re:Come on... (Score:4, Insightful)

          by Anonymous Coward on Sunday February 20, 2005 @05:37PM (#11730539)
          Okay, even if you can find a collision in, say, a day... Great. You can find a collision in a day. But how many collisions will you have to sort through before you find one that even resembles a will, especially one that, say, gives all your property to me?

          Oh, sure, lots. But if SHA-1 is being used for, say, passwords -- where all that's stored and checked is the hash -- then ANY collision will do. So if you can find a collision in a day, you can break into any system using SHA-1 for password authentication in a day.

          That's broken.
          • Re:Come on... (Score:4, Insightful)

            by CastrTroy ( 595695 ) on Sunday February 20, 2005 @06:00PM (#11730641)
            Didn't they already prove this broken by creating a database of all possible hashes for all alphanumeric passwords up to a certain length? I think it was for a different hash, though. Anyway, if you're going to spend all that computation power breaking passwords, you might as well just make a reverse hash database; it will be much more useful to you.
            • Re:Come on... (Score:4, Informative)

              by uhoreg ( 583723 ) on Sunday February 20, 2005 @07:32PM (#11731198) Homepage
              All hash algorithms are vulnerable to this if you don't use a salt (or use too small a salt). UNIX-like OSes have been using salt for a very long time (if not forever). See, for example, the crypt(3) man page. If you use a large enough salt, precomputed hash databases are pretty much useless.
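              The idea in miniature (a Python sketch; the storage format and the use of SHA-256 are illustrative, not the actual crypt(3) scheme):

              import hashlib, os

              def hash_password(password):
                  salt = os.urandom(16)                # fresh random salt per user
                  digest = hashlib.sha256(salt + password.encode()).hexdigest()
                  return salt.hex() + "$" + digest     # store the salt beside the hash

              def check_password(password, stored):
                  salt_hex, digest = stored.split("$")
                  salt = bytes.fromhex(salt_hex)
                  return hashlib.sha256(salt + password.encode()).hexdigest() == digest

              Since every user hashes under a different salt, a single precomputed table no longer cracks every account at once.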
          • Re:Come on... (Score:3, Insightful)

            by uhoreg ( 583723 )
            What you are describing is a different type of attack from what the Chinese researchers discovered. Their attack allows them to generate two messages that have the same hash; it doesn't allow them to generate a message that hashes to a fixed value. So password hashing is still safe -- AFAIK, there are no known attacks against it other than brute force (or rubber hose).
        • Re:Come on... (Score:5, Informative)

          by kiltedtaco ( 213773 ) on Sunday February 20, 2005 @06:14PM (#11730748) Homepage
          MD5 and SHA-1 are both iterated hashes. They work by taking one block, hashing it, then using the output from that round as the IV for hashing the next. This allows a curious sort of failure:

          The attack on MD5 worked independently of the initial state of the hash, i.e., any arbitrary message could be prepended to the calculated collision, and the hashes would still collide. It doesn't matter what the text before the discovered collision block is. It could be anything (plus padding to make it a multiple of the block length).

          This makes the break a much more serious problem than simply finding two completely random messages that happen to have the same hash. It's only a guess at the moment, but I assume the SHA-1 attack will work the same way. The brief findings mentioned using the same sort of attack; hopefully the results will be similar.

          (Side note 1: The term used by every cryptographer I've ever encountered is "break". Feel free to use what you want, but don't claim that "break" is for some reason incorrect. If you want to argue about it, see my prior post on "Stealing" vs. "Copyright Infringement.")

          (Side note 2: Even if one were going to brute-force SHA-1, you would still get the same failure mode as described. When trying all the possible hashes, you would simply use the output of SHA1 of the nefarious file as the IV in the brute-force attack. Iterated hashes, in my very uneducated opinion, are on their way out. What they will be replaced with, however, I have no idea.)
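          Roughly, the iterated construction looks like this (a toy Python sketch; compress() is a stand-in for the real compression function, and message padding is omitted):

          def compress(state, block):
              # Stand-in for the real compression function (SHA-1's runs 80 rounds).
              return hash((state, block)) & 0xFFFFFFFF   # toy only, not secure

          def iterated_hash(blocks, iv=0):
              state = iv
              for block in blocks:
                  state = compress(state, block)   # each round chains into the next
              return state

          Since the only thing carried between blocks is the chaining state, two blocks that collide for a given state keep colliding under any common prefix that reaches that state -- and an attack that works for arbitrary IVs lets you prepend anything at all.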
          • Are you sure about this? As I recall you could easily append, but the collision needed to be at the start.
            • Re:Come on... (Score:3, Informative)

              by kiltedtaco ( 213773 )
              I re-read the paper, and realize there is more than one way to interpret a part of it. I'm looking, but until then don't trust what I just posted. I may be forced to mod myself -5 misread the fine paper.
    • Re:Come on... (Score:3, Insightful)

      by slavemowgli ( 585321 ) *
      From a cryptography point of view, that *is* breaking it.
      • And even if it isn't, it's much better to speak of a non-broken algorithm as broken than of a broken one as anything less than broken.
    • Re:Come on... (Score:4, Interesting)

      by abelsson ( 21706 ) on Sunday February 20, 2005 @05:16PM (#11730434) Homepage
      No, they did indeed break it. An attack is now practical for a well-funded adversary, where it wasn't before -- practical attacks being known is the very definition of when a cryptographic algorithm is considered broken.
      • Re:Come on... (Score:3, Interesting)

        by menscher ( 597856 )
        No, it's not practical for a well-funded adversary. Their attack only made it 2048 times easier. That's not particularly significant, in itself. What *is* significant is that it suggests that other attacks might be possible. But as it stands, SHA-1 is quite secure.

        Fighting the FUD....

        • Re:Come on... (Score:5, Interesting)

          by abelsson ( 21706 ) on Sunday February 20, 2005 @05:49PM (#11730590) Homepage

          Bruce Schneier estimates that a SHA-1 collision-finding machine, built along the same lines as the old DES cracker, would cost $25M-$38M and could do the needed 2^69 calculations in 56 hours [schneier.com]. distributed.net completed a 2^64-operation challenge a few years ago, which along with Moore's Law puts 2^69 ops into the realm of the possible.

          Fighting the FUD, indeed.
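          For scale, a quick sanity check on that figure (Python; the only inputs are Schneier's 2^69 operations and 56 hours):

          ops = 2 ** 69
          seconds = 56 * 3600
          print(f"{ops / seconds:.1e} hashes/second")   # ~2.9e15 across the machine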

          • distributed.net completed a 2^64-operation challenge a few years ago, which along with Moore's Law puts 2^69 ops into the realm of the possible.

            What's interesting about this is that a project like this [zdnet.co.uk] might actually have a chance of succeeding now.


            See this link, [xbox-linux.org] section 1.2 for a little more detail on the subject.
            While this doesn't help them with discovering Microsoft's private key, it could allow them to generate a modified version of a tool like GRUB whose bits hash to the same value a
          • Re: (Score:2, Funny)

            Comment removed based on user account deletion
    • And since "large number of trials to find a collision" is one of the major selling points of an algorithm like that, they effectively broke it.

      Yes, it's still mostly secure, but do you really want to trust that "mostly"? Better to go to something better before the shit hits the fan.
  • by bird603568 ( 808629 ) on Sunday February 20, 2005 @05:04PM (#11730366)
    Wouldn't the problem still exist, but the odds against cracking it would be so huge it wouldn't be worth it?
    Right? Correct me if I'm wrong.
    • There is no way to make an encryption algorithm that is utterly uncrackable; everything can be attacked, if only by trial and error.

      There's always "problems", the game is making them as insignificant as possible.
    • The problem in SHA-512, as known now, would be far more unlikely than the inherent "weakness" in the original SHA-1 hashing (i.e., you can [just about] always find a collision by trying enough combinations).

      [just about] => if one hash were unique for a very specific data set with data sizes larger than the hash size, one could argue that this would also be a problem in the hashing algorithm. If you were able to backconvert from hash to password + salt, that would be minusminusgood.

    • by pclminion ( 145572 ) on Sunday February 20, 2005 @05:22PM (#11730464)
      Well, until mankind figures out a way around the pigeonhole problem -- which is NEVER -- this "problem" will always exist.

      What we should be asking ourselves is: is there a way to construct a hashing algorithm for which the OPTIMAL method for finding collisions is a brute force search? So far it hasn't been done, and it hasn't been definitively proven to be possible or impossible, either.

      I see a lot of people on these forums complaining that we should "just" make a hash algorithm that is unbreakable. It's a logical impossibility. Can you fit an infinite number of things into a finite number of holes and guarantee that each hole has at most one object in it? I hope people are capable of grasping that, at least.

      • Slight clarification.

        It is a logical impossibility to make one that dodges the pigeonhole principle, i.e., one that is "collisionless."

        This is different from whether one can be "broken," i.e., a message can be found that collides in less than brute force time (2^80 for SHA1).
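        The pigeonhole point is easy to see at toy scale (a Python sketch; SHA-1 is truncated to 24 bits purely so a birthday-style search finishes in seconds):

        import hashlib
        from itertools import count

        def h24(data):
            return hashlib.sha1(data).digest()[:3]   # 24-bit toy hash

        seen = {}
        for i in count():
            msg = str(i).encode()
            d = h24(msg)
            if d in seen:                            # expected after ~2^12 tries
                print(f"collision: {seen[d]!r} vs {msg!r} -> {d.hex()}")
                break
            seen[d] = msg

        Brute force always wins eventually; the only question is whether something beats it, which is exactly what the 2^69 result does to full SHA-1.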
  • by }InFuZeD{ ( 52430 ) on Sunday February 20, 2005 @05:04PM (#11730367) Homepage
    Is there a reason to wait until someone breaks the existing algorithm before moving to a stronger one?

    It seems to me that you should start working on implementing the stronger ones BEFORE your existing one is broken.

    An ounce of prevention...
    • by Anonymous Coward
      Is there a reason to wait until someone breaks the existing algorithm before moving to a stronger one?
      It seems to me that you should start working on implementing the stronger ones BEFORE your existing one is broken.


      Because of the chance that someone might find a weakness in the supposedly stronger one before a weakness is found in the supposedly weaker one.

      Since you don't know which algorithm is going to be broken first, you pick one based on other advantages, like wider availability and more efficient calc
    • They started moving to SHA-256 last September. According to TFA:

      Jon Callas ... addressed the company's design philosophy in a September 2004 ... article ... At the same time, PGP engineers began implementing a shift from SHA-1 to the stronger algorithms (SHA-256 and SHA-512)
      • If this hack were for a 128-bit SHA (which it's not) and resulted in a reduction of the search space by a factor of 2048 or so, then there is a good chance that the same technique could reduce a 256-bit SHA by a factor of 4096, 4194304, or 8589934592.

        A hash's major weakness is the fact that it's block-related. As soon as you find one block you can swap out in a file, the rest doesn't matter at all. A hash that keeps state is much harder to mess with, but that gets rid of the ability to hash as a stream process.
        • Personally, I'm more worried about the possibility that further investigation will break SHA-1 wide open. SHA-256 breaking won't be practical with current techniques for quite some time.
  • But why not take a hash of a hash?
    If it's broken once, all you get is another hash, and with no way of telling whether you've cracked it or not, it's useless.

    • by Shazow ( 263582 ) <{andrey.petrov} {at} {shazow.net}> on Sunday February 20, 2005 @05:11PM (#11730402) Homepage
      Technically, that would simply double the number of operations required to perform the attack, which does not effectively raise its complexity.

      i.e., say it takes n time to crack a hash; then cracking a hash of a hash would take 2n...
      O(2n) is still O(n)

      Of course that's assuming they aren't doing it by "eye" and they have some sort of solid algorithm to do it.

      - shazow
    • by Sweetshark ( 696449 ) on Sunday February 20, 2005 @05:43PM (#11730566)
      But why not take a hash of a hash?
      Because breaking the hash means finding two documents resulting in the same hash. If the first hash is the same for both documents, all hashes of hashes will be the same too.
      What you could do is use different hash algorithms, but that increases the amount of code to be managed and reviewed thoroughly (security by obscurity rarely works). And it increases the size of the digest -- SHA-256 does that too, but it keeps the algorithm simple.
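      A short sketch of why the outer hash can't rescue a broken inner one (Python):

      import hashlib

      def hh(data):
          # Hash of a hash: the outer SHA-1 only ever sees the inner digest.
          return hashlib.sha1(hashlib.sha1(data).digest()).hexdigest()

      # If sha1(m1) == sha1(m2), then hh(m1) == hh(m2) automatically --
      # the inner collision propagates through every layer stacked on top.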
  • Bah. (Score:3, Funny)

    by koreaman ( 835838 ) <uman@umanwizard.com> on Sunday February 20, 2005 @05:07PM (#11730379)
    Who needs fancy things like PGP? I encrypt all my sensitive data in ROT-13, and it hasn't been cracked yet!
  • by ehiris ( 214677 ) on Sunday February 20, 2005 @05:09PM (#11730392) Homepage
    Would current customers have to buy PGP again? I see the problem as a bug not an "old version" weakness.
    • I think it's a bit of both. The current weakness shown is still so computationally intensive that, IIRC, it's equal to less than two decades of Moore's observation. Ok, that's quite a lot of time, but it's not like it was thought that it could "never" be cracked (by Turing machines) before the discovery.
  • Will GPG follow? Should it?
  • by papercut2a ( 759330 ) on Sunday February 20, 2005 @05:16PM (#11730431) Journal

    There's a discussion about this very subject going on on the IMC's discussion list for OpenPGP [imc.org]. From reading the posts, particularly the ones by PGP's Jon Callas, I don't think that PGP has officially decided to implement this change just yet. (On the list, the thread titled "SHA-1 broken" is the one you will want to follow.)

    But then, I could have missed something.

  • by __aaijsn7246 ( 86192 ) on Sunday February 20, 2005 @05:17PM (#11730436)
    http://lists.gnupg.org/pipermail/gnupg-users/2005-February/024862.html [gnupg.org]

    Atom Smasher atom at smasher.org
    Wed Feb 16 21:56:25 CET 2005

    Hash: SHA256

    this should help put the (alleged until proven otherwise) SHA-1 break into
    perspective. thanks to Sascha Kiefer for giving me the idea.

    let's say that unbroken SHA-1 represents a 100 meter (328 ft) wall. if a
    break allows a collision to be found in merely 2^69 operations (on
    average), that would mean the wall has crumbled to 4.9 cm (1.9 in) tall.
    that's broken!!

    OTOH, let's say that unbroken MD5 represents a 100 meter (328 ft) wall.
    comparing unbroken MD5 to broken SHA-1 means the wall would actually grow
    from 100 meters (328 ft) tall to 3.2 km (1.99 miles) tall. SHA-1, even if
    it's broken enough to find a collision in 2^69 operations (on average), is
    still stronger than MD5 was ever meant to be.

    again, using unbroken MD5 as our reference of a 100 meter (328 ft) wall,
    unbroken SHA-1 would be a wall 6553.6 km (4072 miles) tall. SHA-1 was
    intended to be incredibly stronger than MD5.

    - -- ...atom
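    The wall arithmetic checks out (a quick Python check, using only the 2^80, 2^69, and 2^64 work factors from the post):

    wall = 100.0                   # unbroken SHA-1, in meters
    print(wall / 2 ** (80 - 69))   # broken SHA-1: ~0.049 m
    print(wall * 2 ** (69 - 64))   # broken SHA-1 vs unbroken MD5: 3200 m
    print(wall * 2 ** (80 - 64))   # unbroken SHA-1 vs unbroken MD5: 6553600 m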
  • I realize that this means that two messages can be generated with the same hash. However, does this really signify such a big weakness? The person generating the hashes has no control over the content of either of the messages, nor do they have control over what the resulting hash will be. So, you can, in a reasonable amount of time, generate two arbitrary messages with the same, yet still arbitrary, hash. So what? Unless you can generate meaningful messages with identical hashes, you don't really accomplis
    • I believe that with the MD5 research by the same people, they were able to generate two "visibly" identical messages with the same hash by appending junk to the end of one of the messages.

      so basically "This is a message" produced the same hash as "This is another message "©{"
  • Phones tapped? (Score:4, Insightful)

    by theLOUDroom ( 556455 ) on Sunday February 20, 2005 @06:05PM (#11730677)
    So what do you guys wanna bet that at least a few of these researchers have their phones tapped at this point?

    I can't think of any intelligence agency that wouldn't like a few days' head start with any more findings these guys come up with.

    I'm not really headed anywhere specific with this comment, other than getting this thought out there. People have been bugged to gain access to much less exciting information than this. [spybusters.com]
  • According to the article,

    Jon Callas ... addressed the company's design philosophy in a September 2004 ... article entitled "Much ado about hash functions" . At the same time, PGP engineers began implementing a shift from SHA-1 to the stronger algorithms (SHA-256 and SHA-512)

    So they were actually ahead of things, not reacting to the break.
  • by Gemini ( 32631 ) on Sunday February 20, 2005 @09:44PM (#11731936)
    To forestall the obvious question about GnuPG compatibility, GnuPG has had SHA-256, SHA-384, and SHA-512 since version 1.2.2 (2003-05-01) so it will interoperate nicely with the new PGP.

    Incidentally, despite what the article implies, PGP has actually had SHA-256 support for a while now. It's not exposed in the GUI, but if you use GnuPG to generate a SHA-256 message, PGP can handle it.

    In terms of what the SHA-1 "break" means, it is certainly time to start migrating to something stronger, but it is not time to panic and start revoking keys. Think of this as the MD5 situation in the late 1990s: a flaw was found, people migrated away, and when the serious MD5 crack was found last year, most people had already stopped using it.

    The sky isn't falling. It's just a wake-up call to start moving to something better.
  • by cpeikert ( 9457 ) <<ude.tim.mula> <ta> <trekiepc>> on Monday February 21, 2005 @12:19AM (#11733091) Homepage
    Some comments (that I have not seen emphasized much) about all this SHA-1 stuff:
    • The fact that there is a 2^{69}-time attack (versus a 2^{80} naive attack) on SHA-1 may only be the tip of the iceberg. Once the methods of attack are published and studied widely, other more efficient attacks may be found (historically, this is more likely than not). Saying "we're still safe; the attack is too slow and doesn't find a second preimage" is very naive.
    • Hash algorithms like SHA-1 are used for more than just digital signatures. They are often used to achieve certain strong properties (like chosen-ciphertext security) in some public-key encryption algorithms (like OAEP). So saying "this only affects signatures" is wrong -- we don't yet know what effect these attacks might have on the security of the many other cryptographic schemes and protocols that use hashing primitives.
