Security

Schneier et al Report PGP Vulnerability

SpaceTaxi writes: "Researchers reported that they were able to intercept and modify a PGP-encrypted message so that, if it is sent back to the attacker, the original message can be read by the attacker." The paper comes from Kahil Jallad, Jonathan Katz, and Bruce Schneier. Here is the Yahoo! article.
  • The flaw affects software using Pretty Good Privacy, the most popular tool for scrambling e-mail.

    Only the PGP *program* seems to be affected, not the actual OpenPGP standard. Thank god.
    • by Publicus ( 415536 ) on Monday August 12, 2002 @02:52PM (#4055943) Homepage

      If you had read the article, you'd see the program isn't really flawed either. This is a case in which a human can be fooled into sending decrypted garbage back to the villain, who can then do his decryption to get the original text of the message.

      I'm not sure the software should be able to prevent this. Of course, it would be nice if it did, but this is just one of those common sense things that hopefully users of PGP will be smart enough to understand.

      I could always reply to the sender of an encrypted message and say "could you send that to me unencrypted? I can't get it to work." If they fell for it and sent it to me unencrypted, you could say the software failed, I suppose, but really it's the human's fault.

      • by SirSlud ( 67381 ) on Monday August 12, 2002 @02:59PM (#4055990) Homepage
        Yeah, this exploit falls under the 'social engineering' side more than anything. Who on earth would use PGP for their communications, but have no hesitation replying to suspicious email from unknown people? Even if one were to reply, one shouldn't include the body of the mail in question out of sheer para^H^H^H^Hcommon sense!

        • Yeah, this exploit falls under the 'social engineering' side more than anything.

          Not really, it is a significant protocol flaw. The exploit of the flaw is slightly cumbersome but there are plenty of circumstances where it can be applied, in particular any application where an encrypted PGP message is sent to an automated daemon.

          The flaws in the Enigma cipher were equally obscure. Protocol design should be secure against social engineering. Had the German operators used the devices correctly the attacks would have been much harder.

        • How can "you should not quote a message to which you are replying" possibly be common sense? An algorithm that is vulnerable to a "social engineering" attack this simple should not be advertised as a secure algorithm. Encryption and signatures must become transparent and reliable if they are to be used by a large number of people.

          I once participated in a similar discussion where I argued that headers like "to", "cc", and "date" should be included in the hash when signing a message, because people will send short messages such as "Today's meeting has been cancelled" whether you want them to or not. (I can't find that discussion now.) I was and still am shocked that the majority of participants in the discussion felt that the hole was the fault of the sender for not including enough context in the original signed message, or of the recipient for not noticing that the message lacked context and/or not suspecting that the e-mail might have been forwarded.
          • Re:Common sense??? (Score:3, Insightful)

            by SirSlud ( 67381 )
            Oh, there's always perfection to strive for - it's simply a matter of weighing the cost of making something foolproof vs. placing some amount of onus on the user to understand the scope and mechanics of the tool they are using.

            Personally, I think the smarter and more transparent you make tools, the dumber people are allowed to be. In that respect, I'm very wary of fully transparent solutions, for the simple reason that once you become sufficiently detached from the mechanics of a tool, you become *much* more susceptible and vulnerable to social engineering (because your brain isn't used to the mental safety checklist of your actions), and more vulnerable to being the victim of an attack without knowing it. I think you should only take the "The Technology Should Be Fully Transparent" route if you are 100% sure you will never introduce a bug into that technology and expose unprepared people to social/tech engineering exploits.

            I guess that makes me an elitist, although the argument has held up pretty darn well in the physical technology world ... I prefer the term realist. People are never going to be perfect, but the more foolproof you make the technology, the more people are freed from any responsibility or accountability for accidents stemming from the use of the tool, even if that accident could easily have been avoided with a little common sense.

            This also brings up a more interesting point: should this kind of technology be accessible to somebody with no investment in education about encryption tools and concepts? I believe that anybody who requires truly secure communication, from your CEO to your Anthropomorphic Fetishist who's terrified those jocks in dorm room 4B are going to sniff his porn emails, might consider that some investment in learning the tools that will offer them protection is simply a fair cost of requiring a truly secure communication pipe. That's also the conclusion that the physical technology world reached - generally, technologies with smaller user bases require more training to use, simply because the cost to foolproof-ize that technology isn't worth it given the low number of users.

            All that said, to be honest, I don't use PGP, so I'm really not aware of the installed user base, nor the various pros and cons of trying to entrench PGP to Every User and Every Desk. Is that truly the intended goal? Secure communications for every email flying about? Sure seems like a lotta wasted cycles ... =)
      • I agree with the paper's assessment, and disagree with yours; specifically:

        Typical users of e-mail encryption software are not educated in good security practices; it is therefore important to design robust software whose security is not compromised even when the software is used in a naive manner.
        • By your theory, we should eliminate passwords...let's examine your logic:

          Typical users of password-locked software are not educated in good security practices; it is therefore important to design robust software whose security is not compromised even when passwords are used in a naive manner.

          In other words, passwords aren't good enough because stupid people pick easy-to-guess passwords.

          Better start using biometrics I guess. :)

          • I see nothing wrong with that theory. It is, in fact, correct, that one of the main weaknesses of computer systems today is poor passwords.
          • That is why any good password system checks to see if the password is trivial and tells the user they are stupid and to choose again =) Now I personally believe that the ideal password system is somewhere between password = lastname and password = 9IbiH.5! (of course that is not, nor has it ever been, a real password of mine, I'm not that silly)
          • Yes, we should try to eliminate passwords, for security against bad passwords, for security against social engineering (calling a victim, posing as tech support, and asking for a password), and for convenience. How about this:

            Web sites would stop using passwords, or at least make passwords optional. The login procedure would be similar to the current "lost password" procedure, involving e-mail, instant messages, or perhaps something better integrated with browsers. Many users would log in, accept a permanent login cookie, and be happy that they didn't have to think up and write down Yet Another Password. Mobile users might choose to use a password for the few services that they use in multiple locations if they can only access e-mail from one location. Paranoid users would choose session cookies instead of permanent cookies and might disable logging in by e-mail after setting a password.
        • repeat after me (Score:2, Insightful)

          by Anonymous Coward
          Security is a process, not a technology.

          What are they teaching you kids in school these days? Let's try that again:
          Security is a process, not a technology.

          If Typical User of E-Mail Encryption Software is not educated in good security practices, then there's no technology in the world that's going to help him out. Plug up one "naivety" hole? Wow, I guarantee you he'll find 100 more. Teach him about security processes, not technologies.

        • Yes, given that human error is inevitable, it's important to design systems to limit the damage it can cause.

          The aviation industry has been doing that for generations.

          The aviation industry has also put enormous effort into educating pilots so that they make fewer errors. There are some errors that can't be contained or corrected, like flying into a mountain range or failing to check the fingerprint on a public key.
      • by Beryllium Sphere(tm) ( 193358 ) on Monday August 12, 2002 @03:18PM (#4056109) Journal
        The fact that human intervention is required also limits the damage that can be done.

        The attack would need to be repeated for every new value of the session key, or in other words for every message.

        Even the most naive person, after a few rounds, would either get suspicious or stop using PGP.

        There are times when disclosure of even one or two messages would be catastrophic, of course.

        I'd argue that there is a design flaw here: a failed decryption should only return one bit of information, namely "decryption failed", and not provide a potential adversary with algorithm output. The subtlety is that the attack doesn't involve a failed decryption. It's a valid decryption, with correct key, of unwanted ciphertext.

        Signing before encryption would be a countermeasure.

        This attack lends some support to a heretical suggestion Larry Randall made on the pgp-users mailing list. He suggested restricting distribution of the "public" key to only authorized correspondents. Sounds nonsensical at first, and doesn't apply to most threat models and usage models, but he's got a point. If you allow anybody in the world to send you encrypted email, you're allowing anyone in the world to operate your decryption system with chosen input. It's like going out in public without your tinfoil hat :-)
        • Signing before encryption would be a countermeasure.

          How? I thought an encrypted and signed message was just an encrypted message with a signature tacked on. The adversary could just discard the signature and perform the attack.

        • Signing before encryption would be a countermeasure.

          Shouldn't you sign after encryption? The point of the signature is so that you can tell if the contents of the message have been tampered with. This approach requires that the encrypted message be replaced with gibberish, which would include the encrypted signature, too. But if the signature is unencrypted, there would be no excuse for it being turned into gibberish and it would serve as a valid check that the original message had not been tampered with.

          • There are problems with signing after encryption, depending on what your security needs are.

            More detail than you want in
            R. Anderson and R. Needham, ``Robustness Principles for Public Key Protocols,'' in Lecture Notes in Computer Science 963, Don Coppersmith (Ed.), Advances in Cryptology - CRYPTO '95, pp. 236-247. Springer-Verlag, 1995

            My idea had been that if someone replaced the encrypted message with gibberish then the signature wouldn't verify. It wouldn't work anyway, for the reason jareds pointed out -- as long as you accept *any* unsigned messages then you're vulnerable to gibberish that doesn't look like a signed message.

            Anyway, the best attack on PGP is still installing a keystroke logger in a Trojan. Or putting a fake public key on the servers and ensnaring people who don't check key signatures.

        • Restricting distribution of the public key isn't necessary. Restricting automatic decryption is a better idea: if you get an encrypted message, don't let the user include it in a reply to anyone who didn't sign it.

          The attack message can't be signed (either by the attacker or by the original sender), because it's a specially-constructed ciphertext where the attacker doesn't know the plaintext (until you send it back), and it's not the original message, so nobody could have signed it.

          With this UI, you get a bunch of garbage and when you reply, your mailreader doesn't let you include the garbage, so you just say "You sent me garbage; you'll have to try again" rather than sending back the garbage.
    • Actually, it looks like it affects the standard, but implementations that deviate from the standard (i.e., GnuPG in some respects) may be less vulnerable.

      According to the Columbia University research paper (or at least one of them), the flaw does affect the OpenPGP standard; specifically, they successfully exploited the flaw in GnuPG as well when compression was not used. The authors note that the flaw even exists when compression is employed, but that GnuPG deviates enough from the OpenPGP compression standards that compressed GnuPG messages are relatively safe. Apparently GnuPG employs a message integrity check, not required by the standard, that foils the attack.
    • I'm not sure how you came to that conclusion. Check out section 5 of the paper.
    • Only the PGP *program* seems to be affected, not the actual OpenPGP standard. Thank god.

      That is completely wrong. According to the paper, it is the specification that is broken, NOT the implementation.

      In fact the implementations turn out to be more secure than the spec for purely fortuitous reasons.

      The attack is possible because of confused management of compressed and uncompressed message types. It is a flaw in the PGP envelope scheme.

      It does not appear to me at the moment that the attack can be extended to S/MIME; certainly S/MIME does not use completely different formats for compressed and uncompressed data, so the immediate cause is not there. I suspect that Bruce will have looked at S/MIME before publishing the paper.

      I think that people need to re-examine a couple of things.

      First, the parrot-like claim that people should avoid using S/MIME and that PGP is the one true security specification.

      Second, the claim that open review is a panacea for security issues. The spec and PGP code were both open, and there is a significant flaw.

      Third, the gullibility of Slashdot readers, who all appear to have taken the first post in the thread as accurate rather than reading the actual paper.

  • by krog ( 25663 ) on Monday August 12, 2002 @02:51PM (#4055928) Homepage
    leaving the door open for instances like this.

    PEBKAC conquers all, as usual.
  • ... he hasn't posted an article since Jul 15th [slashdot.org]!

    Is he still employed with OSDN??

    Inquiring minds want to know!
  • ENCRYPTED.TXT ...but it is corrupt. Could you please send me a copy? Here is my public PGP key:
  • First the SSL bug, now this? Looks like we have to go back to two paper cups and a piece of string for sending encrypted messages to each other...
  • Please stop (Score:5, Funny)

    by Anonymous Coward on Monday August 12, 2002 @02:53PM (#4055949)
    Every day it seems like there is some new vulnerability discovered in one of our beloved secure communication tools/protocols (PGP, SSL, SSH, etc). This really hurts me a lot, as I feel my trust has been shattered.

    For this reason, I ask... no beg... all hackers, researchers, programmers, etc to please stop reporting these security problems. Find something? Keep it quiet! Don't tell anyone, and then no one will know, and we'll all still be safe. Maybe in a few years, you can quietly patch it up, and we'll all go on like nothing has happened. Sound good?

    Let's all follow Microsoft's lead on this one. Thanks guys!
  • OpenPGP standard (Score:2, Informative)

    by Nex6 ( 471172 )

    Nonetheless, an update to the OpenPGP standard was to be released Monday to coincide with the announcement of the flaw. Many developers already have begun to write software fixes, Callas said.

    looks like it might be a little more than something just in PGP, if they are releasing an update to the
    OpenPGP standard.

    Although I suspect PGP is the "most" vulnerable to this, it would be interesting to see what other OpenPGP software is really affected.

    Nex6
    • Re:OpenPGP standard (Score:2, Informative)

      by Gemini ( 32631 )
      In reality, by default, no OpenPGP software is really affected by this. Both PGP and GnuPG compress the messages, which halts the attack. On top of that, GnuPG uses a message integrity check which also halts the attack.

      A given message is only vulnerable if the sender disables compression and message integrity checking. It is unfortunate the news reports don't say this (not as good a story, I suppose), but the paper makes it quite clear.
      • In reality, by default, no OpenPGP software is really affected by this. Both PGP and GnuPG compress the messages which halts the attack.

        Actually that is not accurate. PGP and GPG will both use compression if the payload is compressible, but send the content uncompressed otherwise.

        Both programs are vulnerable if you send a gzip file.

        Stop spreading Complacency and False Certainty, it is just as bad as FUD
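The fallback behavior described above is easy to check with Python's zlib, which uses the same DEFLATE algorithm: compressible text shrinks, while already-compressed (high-entropy) data actually grows slightly, which is why the programs send it uncompressed. A minimal sketch:

```python
import os
import zlib

# Compressible text shrinks noticeably under DEFLATE...
text = b"the quick brown fox jumps over the lazy dog " * 50
print(len(zlib.compress(text)) < len(text))   # True

# ...but high-entropy data (random bytes standing in for an
# already-gzipped payload) cannot be compressed further; zlib's
# framing adds a few bytes of overhead instead.
random_data = os.urandom(4096)
print(len(zlib.compress(random_data)) > len(random_data))   # True
```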

  • Compressed data (Score:5, Informative)

    by mukund ( 163654 ) on Monday August 12, 2002 @02:58PM (#4055983) Homepage
    The abstract of the paper suggests that the attacks largely fail when the data is compressed before encryption. From the GNU Privacy Guard manpage of version 1.0.7, the default is to use RFC1950 [ietf.org] compression (which is ZLIB compressed data format) and the default compression level of the zlib library (normally 6). Note that all this applies to GNU Privacy Guard 1.0.7. According to the same manpage, the NAI PGP implementation uses RFC1951 [ietf.org] compression, which is the DEFLATE compressed data format.
    • Re:Compressed data (Score:1, Interesting)

      by Anonymous Coward
      I heard (on the internet, so it must be true) that using compression with SSL may result in weaker encryption (i.e., mod_gzip over HTTPS). Anyone know if this is true or not? Is it safe to use mod_gzip and friends over HTTPS?
      • Compressed streams often have a 'signature' at the beginning of the stream. When this signature is encrypted, the attacker knows a little bit of unencrypted data and the 'corresponding' encrypted data. This makes a known-plaintext attack more feasible. It may still require the attacker to eavesdrop on many packets and spend a lot of CPU time on them, but it will be less than with completely random data. However, not compressing also aids the known-plaintext attack, especially when the data is known or patterns of it are guessable, such as English text (alphabetic statistics), protocols ("GET /cgi-bin/login.cgi", "HELO myserver"), headers ("X-Mime-Type:"), etc.

        To be most secure, you should only encrypt and transmit absolutely random data.
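The 'signature' mentioned above is concrete: zlib and gzip streams begin with fixed header bytes, handing an eavesdropper a few bytes of known plaintext at a known position. A quick check:

```python
import gzip
import zlib

# A zlib stream at the default compression level always begins with
# the bytes 0x78 0x9c, and a gzip stream always begins with the magic
# bytes 0x1f 0x8b -- guaranteed known plaintext at a fixed offset.
print(zlib.compress(b"completely secret message")[:2].hex())  # 789c
print(gzip.compress(b"another secret")[:2].hex())             # 1f8b
```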
    • The abstract of the paper suggests that the attacks largely fail when the data is compressed before encryption.

      True, but it also mentions:
      In the case of GnuPG (when compression is used), the attack fails only due to the presence of a message integrity check which is not explicitly required as part of the OpenPGP specification.

      So, yes, the attack doesn't work on GnuPG compressed data, but it looks like the GnuPG developers will have to close some other potential security flaws.

    • The attack also fails if the reply containing the "garbled" message is encrypted. Always encrypt replies to encrypted messages to avoid giving attackers a plaintext.
    • Yes, but that defense still fails on data that's already compressed. According to the paper (as opposed to the abstract), both PGP and GPG disable compression on data that's already compressed, thus allowing the attack to succeed.
  • by DTC ( 450482 )
    I had been wondering what John Katz has been up to, so it's good to hear that he's been keeping busy. Now perhaps he'll have some time to review some movies; I've been seeing entirely too many, since I don't know what NOT to watch anymore ;)
  • I use alcohol to encrypt my email messages to specific people, people like ex-gfs, college professors, old bosses, etc. Example: Ihate tyou. WHY doaNt you JSust dddieee!@#! My MMMOOOM tlds mee yYoyu wass BadDS KNwesss. True, it's not as secure as PGP, but it has its uses.
  • From CNN here. [cnn.com]
  • *Slashdot* Jon Katz? A cryptographer??

    I thought he was just a bloviated wannabe essayist, not a crypto analyst. Surely this can't be the same guy...

  • by Shamanin ( 561998 ) on Monday August 12, 2002 @03:06PM (#4056027)
    Errata from the desk of Bruce Schneier: Pay no attention to p. 584-587 of Applied Cryptography - 2nd Edition... I didn't know what I was talking about... now I do.
    • Errata from the desk of Bruce Schneier: Pay no attention to p. 584-587 of Applied Cryptography - 2nd Edition... I didn't know what I was talking about... now I do

      Huh? Those pages are part of chapter 24, "Example Implementations", and describe how PGP works, explaining key-signing and the trust model that Philip Zimmermann built into it.

      The text is in no way wrong or even outdated. Mr. Schneier wrote a good text that wouldn't be written much differently today.

    • It's called "Secrets and Lies". Also, see the article at The Atlantic. [theatlantic.com]
  • by AftanGustur ( 7715 ) on Monday August 12, 2002 @03:11PM (#4056053) Homepage


    Imagine a user who has configured his software to automatically decrypt any encrypted e-mails he receives.
    An adversary intercepts an encrypted message C sent to the user and wants to determine the contents P of this message.
    To do so, the adversary creates some new C' and sends it to the user; this message is then automatically decrypted by the user's computer and the user is presented with the corresponding message P'.
    To the user, P' appears to be garbled; the user therefore replies to the adversary with, for example, "What were you trying to send me?", but also quotes the garbled message P'.
    Thus, the user himself unwittingly acts as a decryption oracle for the adversary.

    PGP and GnuPG use both symmetric and asymmetric encryption algorithms to encrypt data. First a random session key (S) is generated and the data (P) is encrypted with it (giving you C). The session key is then encrypted using the asymmetric key (the recipient's public key), giving you S'. When the message is sent, the encrypted key S' is sent along with the encrypted data.

    What appears to be happening is that Mr. Schneier and buddies have figured out a way to create a data part C', so that when it is decrypted, the original symmetric key (S) can be obtained from it.

    This means that :

    Even if someone tricks you into decrypting a message for him, the attack will only reveal the contents of that particular message. (Your private key, and all other encrypted data, is still safe.)

    PGP has not been 'broken'; nobody can read your encrypted emails without your help.

    This is not the end of PGP/GnuPG.
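The hybrid scheme described above can be sketched as a toy (a SHA-256 counter-mode keystream stands in for the real symmetric cipher, and textbook RSA with tiny primes for the public-key step; none of this matches real PGP formats or offers any security):

```python
import hashlib
import secrets

# Toy RSA key (tiny primes -- hopelessly insecure, illustration only).
P, Q = 10007, 10009
N, E = P * Q, 65537
D = pow(E, -1, (P - 1) * (Q - 1))   # private exponent

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode as a keystream.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

def encrypt(plaintext: bytes):
    s = secrets.randbelow(N - 2) + 2                  # session key S
    c = keystream_xor(s.to_bytes(8, "big"), plaintext)  # C = Enc_S(P)
    s_enc = pow(s, E, N)                              # S' = RSA(S, public key)
    return s_enc, c                                   # send S' alongside C

def decrypt(s_enc: int, c: bytes) -> bytes:
    s = pow(s_enc, D, N)                  # recover S with the private key
    return keystream_xor(s.to_bytes(8, "big"), c)

s_enc, c = encrypt(b"attack at dawn")
print(decrypt(s_enc, c))   # b'attack at dawn'
```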

    • by russotto ( 537200 ) on Monday August 12, 2002 @03:26PM (#4056177) Journal
      As the article notes, this isn't a new attack; Schneier and Katz had a paper on the general principle two years ago; it has been up on the Counterpane Labs site for some time now.

      BTW, you don't get S, the session key. You get a new message P' which is related to the original message in a manner you chose.

      Easy example (not real life): Suppose the message C is encrypted using any algorithm in Electronic Code Book mode. To sucker the user into decrypting that, I send him a message C' which includes all the ciphertext blocks which were in the original message C (but not in the same order). He decrypts that (giving P) and quotes it back to me as a garbled message. I now build a codebook with P and C', and use that to decrypt C.

      If another mode is used, as in PGP, a more complicated method of constructing C' is required (and is given in the paper), but it still works.
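The ECB codebook trick described above can be demonstrated end to end with a toy block cipher (a 4-round Feistel network built from SHA-256; purely illustrative, not a real cipher):

```python
import hashlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def enc_block(key: bytes, block: bytes) -> bytes:
    # Toy 4-round Feistel network over 16-byte blocks.
    L, R = block[:8], block[8:]
    for i in range(4):
        F = hashlib.sha256(key + bytes([i]) + R).digest()[:8]
        L, R = R, xor(L, F)
    return L + R

def dec_block(key: bytes, block: bytes) -> bytes:
    L, R = block[:8], block[8:]
    for i in reversed(range(4)):
        F = hashlib.sha256(key + bytes([i]) + L).digest()[:8]
        L, R = xor(R, F), L
    return L + R

def ecb(key, data, fn):
    return [fn(key, data[i:i + 16]) for i in range(0, len(data), 16)]

key = b"victim secret key"
message = b"TOPSECRET PLAN A" + b"TOPSECRET PLAN B" + b"MEET AT MIDNIGHT"
ciphertext = ecb(key, message, enc_block)      # what Eve intercepts

# Eve reorders the blocks and sends them back to the victim...
shuffled = [ciphertext[2], ciphertext[0], ciphertext[1]]
# ...the victim decrypts the "garbage" and quotes it back to Eve.
quoted = ecb(key, b"".join(shuffled), dec_block)

# Eve pairs shuffled ciphertext blocks with quoted plaintext blocks
# to build a codebook, then decrypts the original message herself.
codebook = dict(zip(shuffled, quoted))
recovered = b"".join(codebook[c] for c in ciphertext)
print(recovered == message)   # True
```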
    • cribs (Score:4, Informative)

      by 56 ( 527333 ) on Monday August 12, 2002 @03:35PM (#4056237)
      This is how the allies broke the German enigma in World War 2.

      I'm surprised that Counterpane is reporting this as though it were some new idea; it's not.

      This is the problem with programs like PGP: they're so well made that they allow a user who has no idea how they work to use them. Unfortunately, that can let the simplest of attacks work. //begin not-so-obscure geek reference
      Cryptonomicon: Waterhouse breaks the cipher used by Shafthoe et al by ensuring that the word 'crocodile' was used in the ciphertext and using it as a crib. Same deal. //end not-so-obscure geek reference
      • Re:cribs (Score:3, Informative)

        by karlm ( 158591 )
        All of the ciphers (except single DES and IDEA) used in PGP are believed to be strong against known-plaintext and chosen-plaintext linear and differential cryptanalysis, related-key attacks, etc. If you're looking for known plaintext, you've got it in the compression headers. (PGP uses zip compression by default.) Since PGP uses CFB mode, you can simply rearrange the code blocks and trick the user into decrypting the message (adding an extra random ciphertext block at the end or keeping the last block in the same position) and then have the user send you the decrypted garbage and piece back together the message. This attack would also work if you could trick the user into encrypting a chosen message with the same key and IV as the message you wish to crack. (Not feasible with the PGP user interface.) This break has nothing to do with cribs.

        Also note that if you keep everything the same but the last byte and trick the user into quoting the entire decrypted message, including the garbled last byte, in his/her reply, you can break PGP that way.

        Note that PGP and GnuPG both use zip compression by default, so this attack only has a probability of 1 in 4 billion of succeeding and requires user interaction for each attempt. If you turn off compression AND are dumb enough to quote all of the garbage back to the attacker, this attack can be used against you. This attack is somewhat feasible, but requires some social engineering or some users that are dumb in just the right ways.

        Note that if OCB mode were used instead of CFB mode, this attack would not work. Unfortunately, OCB mode is patent encumbered.
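One concrete way to realize the CFB rearrangement described above: since CFB decrypts as P_i = C_i XOR E_k(C_{i-1}), interleaving a random block R between the original ciphertext blocks makes every decrypted block look like garbage to the victim, while the positions holding R leak the E_k(...) pads needed to decrypt the real message. A toy sketch (SHA-256 stands in as the block-cipher function; this is an illustration of the principle, not real PGP):

```python
import hashlib
import secrets

BS = 16  # block size in bytes

def E(key: bytes, block: bytes) -> bytes:
    # Toy PRF standing in for the block cipher E_k.
    return hashlib.sha256(key + block).digest()[:BS]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cfb_decrypt(key, iv, blocks):
    # CFB decryption: P_i = C_i XOR E_k(C_{i-1}), with C_0 = IV.
    prev, out = iv, []
    for c in blocks:
        out.append(xor(c, E(key, prev)))
        prev = c
    return out

key = secrets.token_bytes(16)
iv = secrets.token_bytes(BS)
plain = [b"ATTACK AT DAWN!!", b"BRING EVERYONE!!"]

# Victim's original message (CFB encryption: C_i = P_i XOR E_k(C_{i-1})).
cipher, prev = [], iv
for p in plain:
    c = xor(p, E(key, prev))
    cipher.append(c)
    prev = c

# Eve interleaves a random block R between the original blocks; every
# block of the resulting decryption looks like garbage to the victim.
R = secrets.token_bytes(BS)
crafted = [R, cipher[0], R, cipher[1], R]
garbage = cfb_decrypt(key, iv, crafted)   # quoted back to Eve

# The positions holding R decrypt to R XOR E_k(previous block), so Eve
# recovers the pads E_k(iv) and E_k(C_0) and decrypts the real message.
pads = [xor(garbage[0], R), xor(garbage[2], R)]
recovered = [xor(cipher[0], pads[0]), xor(cipher[1], pads[1])]
print(recovered == plain)   # True
```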

      • Re:cribs (Score:3, Interesting)

        >This is how the allies broke the German enigma in World War 2.

        I haven't read about any chosen-ciphertext attacks during the Enigma crack. One line of attack was that messages began with a repetition of a three-character sequence, so certain keys were known to be impossible for given ciphertext. Another was that some operators got sloppy and used guessable keyboard combinations (the Hut 6 people called those the "sillies"). Then there was the commander who sent the exact same status report every day with a different key.

        Unless "This is how" refers to depending on mistakes by the target. The German Navy codes took longer to crack because the operators were better disciplined. Venona is a superb example of waiting for the target to do something stupid -- the US was decrypting one-time pads. Absolutely impossible even in theory, except that the cipher clerks at the Soviet embassy were re-using keying material.

        >This is the problem with programs like PGP, they're so well made that they allow a user who has no idea how they work to use them

        You've got a point there. On the other hand, a hard-to-use program just makes it easier to screw up. For example, early versions of PGP required a manual step to self-sign your public key. The result was that even a professional cryptographer wound up putting a non-self-signed key onto the key servers.

        I worry about just your point -- security may absolutely require users to be knowledgeable. If so, it becomes in general impossible.
      • So, to make this attack viable this has to happen:
        1. Alice sends encrypted message to Bob.
        2. Eve intercepts message and rearranges blocks.
        3. Bob receives message from Eve, saves it and decrypts it by issuing gpg message.gpg. Then, Bob looks at message.txt, sees it's junk and sends an unencrypted message to Alice with message.txt attached to it. I can't think of anyone dumb enough to do that.
    • I'm confused. If party A sends an encrypted message to party B using B's public key, wouldn't party B reply to party A using party A's public key thereby making the garbled text unreadable to interceptor C?
  • For my own sake, because I may not be reading this right:

    If someone manages to get me to send (them? anyone?) a message they already know the contents of encrypted with (my?, the person I'm sending the message to?)'s private key then they can decrypt the message and (read it?, figure out the private key?).

    1.) This seems pretty unlikely to work, unless minor modifications don't bother the attack (like adding a > in front of each line of the previous email)

    2.) let's say john.doe@someplace.com sends me a message and it's encrypted and signed. If I accept it and it shows that john.doe@someplace.com's signature is valid (which it must or I will delete it) then how can the attacker know the contents of the email unless they have already managed to get john.doe@someplace.com's private key? If they already have his private key, then they can decrypt any message I send to him anyhow. I don't really see how they could get my private key and at this point, if I can't trust john.doe@someplace.com and I send him an email then my compromise is an issue of trust rather than a PGP flaw.

    Please clue me in if there is anything in this that I have not really understood.
    • If someone manages to get me to send (them? anyone?) a message they already know the contents of encrypted with (my?, the person I'm sending the message to?)'s private key then they can decrypt the message and (read it?, figure out the private key?).

      If someone intercepts a message C encrypted with your public key, and they can get you to decrypt a message C' that they create by modifying C, they can recover at least part of the original message that was encrypted to produce C.

      1.) This seems pretty unlikely to work, unless minor modifications don't bother the attack (like adding a > in front of each line of the previous email)

      You don't understand. It doesn't matter whether your reply is encrypted or signed or whatever. What matters is that your reply includes the decrypted C'. Presumably, your adversary will be able to notice and remove a couple of "> ".

      2.) let's say john.doe@someplace.com sends me a message and it's encrypted and signed. If I accept it and it shows that john.doe@someplace.com's signature is valid (which it must or I will delete it) then how can the attacker know the contents of the email unless they have already managed to get john.doe@someplace.com's private key? If they already have his private key, then they can decrypt any message I send to him anyhow. I don't really see how they could get my private key and at this point, if I can't trust john.doe@someplace.com and I send him an email then my compromise is an issue of trust rather than a PGP flaw.

      Two points:

      1) Even if the attack didn't work for encrypted and signed messages, but just for encrypted messages, that would still be a very big deal.

      2) I think that a signed and encrypted message is just an encrypted message with a signature tacked on, so the adversary could just discard the signature and perform the attack.

        "You don't understand. It doesn't matter whether your reply is encrypted or signed or whatever. What matters is that your reply includes the decrypted C'. Presumably, your adversary will be able to notice and remove a couple of "> "."

        This is exactly my point: every '>' changes the hash of the block before encryption. A '>' on every line would make a significant, if not unpredictable, difference. Nonetheless, it could eventually be figured out how many '> ' prefixes are in front of each line. The attacker would also have to predict any line wrapping.

        "2) I think that a signed and encrypted message is just an encrypted message with a signature tacked on, so the adversary could just discard the signature and perform the attack."

        PGP signatures are not simply tagged onto the end of a message. They are an MD5 hash of the original message which is then encrypted with the private key of the sender (which allows the public key to be used to verify the signature). When the message is received, the decrypted plaintext of the sent message is then hashed by the recipient's (i.e., my) PGP, and then I compare the two hash values. Someone wishing to modify the contents of the message must have the ability to sign the message with the "sender's" key. In order to do that, they must have the sender's private key.

        If signatures were simply tagged onto the end of a file then they wouldn't matter at all.

        Which is my point: you can't trust any content, even encrypted content, unless it is signed. This is a social engineering hack and has nothing to do with the PGP standard as far as I can tell.
        • This is exactly my point: every '>' changes the hash of the block before encryption. A '>' on every line would make a significant, if not unpredictable, difference. Nonetheless, it could eventually be figured out how many '> ' prefixes are in front of each line. The attacker would also have to predict any line wrapping.

          True. I didn't think that was what you meant because it didn't seem like it would be difficult at all to figure out how many "> " there were on each line. Presumably that would be either 0 or 1.

          PGP signatures are not simply tagged onto the end of a message. They are an MD5 hash of the original message which is then encrypted with the private key of the sender (which allows the public key to be used to verify the signature). When the message is received, the decrypted plaintext of the sent message is then hashed by the recipient's (i.e., my) PGP, and then I compare the two hash values. Someone wishing to modify the contents of the message must have the ability to sign the message with the "sender's" key. In order to do that, they must have the sender's private key.

          If signatures were simply tagged onto the end of a file then they wouldn't matter at all.

          I simply don't understand what you're talking about. Of course signatures would work fine if they were just tacked on the end, for exactly the reason you describe. Indeed, most implementations let you generate a signature for a file as a separate file.

          I may indeed have been wrong, PGP may produce Encrypt(Sign(Message)) for encrypted and signed messages. However, the decryption of C' would be garbled, and would probably not be in the format of a signed message at all. It would appear to have no signature, not an invalid signature.

          Which is my point: you can't trust any content, even encrypted content, unless it is signed. This is a social engineering hack and has nothing to do with the PGP standard as far as I can tell.

          Well, of course a chosen ciphertext attack may require social engineering. However, it is absurd to use a protocol subject to a chosen ciphertext attack when one is available that cannot be so attacked.

        • I was wrong about signing. It is indeed performed as Encrypt(Sign(Message)). Furthermore, my suggestion that D(C') would be garbled so as not to be recognizable as a signed message was stupid. OpenPGP requires the decrypted message to be a valid OpenPGP message. The paper clearly pointed out that the attacker must not garble the first block of the encrypted message, which includes the header indicating what type of OpenPGP message it is. Thus, a signed message would still be marked as a signed message, and since the part after the first block would be garbled, it would definitely cause a major warning/error.
  • It's nice to see someone's name in a slashdot story that you know for once. Just though I'd give Kahil some props ....
    • I'll offer a second helping of props for that gold-toothed pgp-craxxor.

      (INSERT EMBARRASSING STORY HERE)
  • of an earlier announcement of a vulnerability here [packetstormsecurity.nl] found by some folks at Bell Labs.

    So is this new (albeit social engineering) vulnerability just "asking the million questions" in one shot?

  • by Anonymous Coward
    This type of attack was mentioned in Applied Cryptography by Schneier himself, p42.....

    Yawn....
    So, if I were to create my own encryption, would it be better than someone else's that is widely spread? What would it take to make it really secure?
    Help me out here. I'm thinking of doing something like this for a portal system I'm itching to write in Perl, possibly having two different encryptions: everything goes through the first encryption so the entire database is encrypted, then all passwords, etc. go through a second encryption algorithm after the first to make them more secure. Does this seem like a good idea?
    • Garage encryption... not difficult to achieve, but dicey. That is, you can conceive of a great many encryption schemes that look unbreakable to the uninformed, thus obtaining an initial payoff, but there is often some overlooked property that can be abused. The reason we have such old standards today is that they have been picked over and appear to have no exploitable flaws (other than brute force and social engineering attacks, as in this case).

      Generally, two encryptions are no more secure than one. There is a theorem (I forget the name) stating that if some message M is encrypted by algorithm A, then encrypted by B, the result is no more secure than the stronger of A and B.
  • by TrentTheThief ( 118302 ) on Monday August 12, 2002 @03:25PM (#4056167)
    Please, read this article with an eye to word meanings and English usage.

    This is a setup and usage problem in the email client, not a flaw in PGP.

    If a person is fool enough to leave their keyring available to the mail client (that's what the floppy disk in my pocket is for), to not remove their passphrase from memory, and to automatically include the plain-text version of an encrypted message when replying, they deserve no security.

    This so-called "flaw" in PGP is on a par with calling an OUTLOOK email flaw a virus.
    • This so-called "flaw" in PGP is on a par with calling an OUTLOOK email flaw a virus

      Well, actually, it's a flaw with the plug-in for Outlook. This shakes people a bit, but it's really not quite as big as we'll make it out to be. There've been a lot bigger flaws in M$ products that haven't gotten as much attention....

  • Yahoo Links (Score:1, Offtopic)

    by cheinonen ( 318646 )
    I know it's off topic, but since everyone here seems to hate pop-ups with such a passion, why doesn't Slashdot adopt a policy of not linking to stories (like this) at Yahoo!, which has pop-ups, and instead only link to pop-up-free sites? I'm sure this story is going to be picked up by many other sites, but Yahoo! will get all that traffic, and keep serving up the pop-up ads as we all go there.
  • by Anonymous Coward
    Was the inclusion of Jon Katz in the study.

    I assume they used all his civil rights encrypted emails from his excellent Hellmouth series to demonstrate the exploit.

    I would be surprised if he actually had time to study anything between his pandering to children, and RPG'ing to understand the socio-economic realities of the real world.

    he must be really multi-talented.
  • by Eric_Cartman_South_P ( 594330 ) on Monday August 12, 2002 @03:36PM (#4056245)
    PGP Announces today that it will change its name to SGP.

    Sorta' Good Privacy.

  • by Featureless ( 599963 ) on Monday August 12, 2002 @03:40PM (#4056267) Journal
    It hinges on being able to intercept a message, add some random data to the encrypted blocks containing its payload, and then for the recipient to decrypt it, and respond "hey Ed, what's with this garbled message you just sent me?" with that decrypted message quoted below. And, naturally, for the attacker to be able to intercept that response as well.

    The basic idea of a "chosen ciphertext" attack is that if you can see a decryption of blocks you mangle, you can work backwards to get the plaintext in the unmangled blocks. You might consider this an attack on the user interface or the protocol rather than the algorithm. You should just never be quoting failed decryptions...

    The talk about compression preventing the attack is not referring to compression of the payload by you (i.e. ZIP'ing it before sending); that doesn't make a difference. It refers to the DEFLATE compression that the PGP/GPG software itself applies to the plaintext before encryption (it generally does so only if the plaintext isn't already compressed), with the corresponding decompression after decryption. As you may already realize, randomizing compressed data will cause the decompression to fail with an error; that makes it much less likely that the user will disclose the failed decryption.
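    A rough model of that failure mode, using Python's zlib (which implements DEFLATE). Corrupting the compressed stream directly here stands in for what the decryption of a mangled ciphertext would look like:

    ```python
    import zlib

    plaintext = b"The quick brown fox jumps over the lazy dog. " * 20
    compressed = zlib.compress(plaintext)

    # Simulate an attacker's randomization landing in the compressed stream:
    # flip a byte in the middle of the DEFLATE data.
    corrupted = bytearray(compressed)
    corrupted[len(corrupted) // 2] ^= 0xFF

    try:
        result = zlib.decompress(bytes(corrupted))
    except zlib.error:
        result = None  # decompression failed outright

    # Either decompression errors out, or the built-in adler32 check catches
    # the damage; the victim never sees quotable, attacker-useful output.
    assert result != plaintext
    ```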

    Fixing this is a good idea. Until it is fixed, if someone sends you garbage, don't reply, or if you do, don't quote their message in your reply. However, this is not the end of the world. The foundation is still sound, the attack is only useful on a per-message basis, and your keys are not affected by this strategy.

    I do have a question for the crowd; it seems to me that this is an attack on "encrypted" messages, as opposed to "encrypted and signed" messages. I am assuming that the use of signatures will also foil this attack, but I would welcome comments from others on that subject.
    • Somewhere else in this forest of responses there's a discussion of whether signatures have protective value.

      Signatures do make the message tamper-evident, they can't be forged the same way a CRC could, but so what? Do you break interoperability by having the software error out whenever a message doesn't have a valid signature? Or do you have humans check the signature? OK then, what do you tell them? Not to forward the decryption? You're not any better off then than you were with unsigned messages.

      Maybe if you used a MAC with the session key...
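      For what it's worth, a minimal sketch of that idea (essentially encrypt-then-MAC, assuming a hypothetical shared per-message session key; this is an illustration, not what PGP actually does):

      ```python
      import hashlib
      import hmac
      import os

      session_key = os.urandom(32)  # hypothetical shared per-message key

      def protect(message: bytes) -> bytes:
          # Append an HMAC tag so any tampering is detected mechanically,
          # with no human judgment involved.
          tag = hmac.new(session_key, message, hashlib.sha256).digest()
          return message + tag

      def verify(blob: bytes) -> bytes:
          message, tag = blob[:-32], blob[-32:]
          expected = hmac.new(session_key, message, hashlib.sha256).digest()
          if not hmac.compare_digest(tag, expected):
              raise ValueError("MAC check failed -- discard, never quote back")
          return message

      blob = protect(b"secret message body")
      assert verify(blob) == b"secret message body"

      # An attacker's modification trips the MAC before anyone sees garbage.
      tampered = bytearray(blob)
      tampered[0] ^= 0x01
      try:
          verify(bytes(tampered))
          ok = False
      except ValueError:
          ok = True
      assert ok
      ```

      Unlike a signature that a human has to check and act on, a failed MAC lets the software reject the message outright, which closes the quoted-reply channel.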

      BTW, assuming that the attacker can intercept and modify messages is just standard procedure. If you could trust the network you wouldn't need crypto. The only thing that's specialized is depending on getting a quoted reply.

      >The foundation is still sound, the attack is only useful on a per-message basis, and your keys are not affected by this strategy

      You're batting a thousand. Correct on every point.

  • Bad Headline! (Score:3, Informative)

    by tqbf ( 59350 ) on Monday August 12, 2002 @03:42PM (#4056287) Homepage
    If "Kahil Jallad, Jonathan Katz, and Bruce Schneier" write a paper, the abbreviation is "Jallad et al". If Schneier is the LAST author in the list, it probably means he did very little except motivate the paper and help brainstorm.
    • Perhaps the order is alphabetical by last name?

      Regards,
      Slak
    • Yeah, but if it had anything remotely like Jon Katz in the headline, no one would have read it at all!
    • Bzzt! Wrong. While listing in order of importance is one way, also common is listing alphabetically, especially if a paper had few contributors (unlike, say, some physics experiments where the contributors list may be longer than the paper).

      Since the order isn't Katz, Jallad, Schneier, we can't assume that Schneier is a minor contributor. Since RSA published as RSA rather than ARS, we _can_ assume something about importance there. But that's not the case here, is it?
    • Not if the names of the authors are simply sorted alphabetically (J, K, S).
  • Jonathan Katz (Score:4, Informative)

    by lblack ( 124294 ) on Monday August 12, 2002 @03:48PM (#4056322)
    In case anybody is actually confusing him with another Meestah Katz:

    This should put the confusion to rest. [columbia.edu]

  • Well known "flaw" (Score:3, Interesting)

    by srn_test ( 27835 ) on Monday August 12, 2002 @04:10PM (#4056506) Homepage
    This is a well known attack, isn't it? I can remember giving a talk on how to use PGP and telling people to never:

    a) Sign random garbage sent to them by anyone (and send it back), or
    b) Decrypt stuff and send it back.
  • The paper comes from Kahil Jallad, Jonathan Katz, and Bruce Schneier.

    Darnit, I thought I'd configured my prefs to filter out everything by Jon Katz but this article still got through! It's a conspiracy to make me crazy!

    .

  • The paper comes from Kahil Jallad, Jonathan Katz, and Bruce Schneier

    Damnit, I thought my filter on Slashdot was supposed to take his stuff out!

    CowboyNeal! Your stupid filter isn't working!

  • This is good work, because it illustrates the real-world importance (i.e., feasibility) of "chosen-ciphertext attacks." PGP and GnuPG are vulnerable to these attacks *by design*. It's not a mathematics problem, or an implementation problem, or a standards problem, but simply a requirements problem. Nobody thought to use CCA-secure encryption in PGP et al. Nor is anyone really to blame: these kinds of attacks weren't even formalized, nor reasonable solutions proposed, until a few years ago. It requires specialized cryptosystems, built from the ground up, to offer provable security against such attacks.

    Chosen-ciphertext attacks take advantage of some kind of "malleability" in the ciphertext. For example, say Eve intercepts some RSA ciphertext C of a message M from Alice to Bob. She wants to know M, but can't solve RSA. Well, she just multiplies C by, say, 3^e (mod n) [where e is the encryption exponent in the public key] and sends it off to Bob. Bob decrypts it and gets some junky-looking message M', and sends it along to Alice, saying "what gives?" Little does he know that M' = 3M, whence Eve intercepts M', divides by 3, and voilà. In reality the situation is more complicated (RSA is used to encrypt session keys, not full messages), but the principle is the same.
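    That RSA scenario is easy to sketch with textbook RSA and toy primes (wildly insecure parameters, purely to illustrate the malleability):

    ```python
    # Textbook RSA with tiny demo primes -- illustration only.
    p, q = 61, 53
    n = p * q                            # modulus, 3233
    e = 17                               # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

    M = 123                      # Alice's message to Bob
    C = pow(M, e, n)             # ciphertext Eve intercepts

    # Eve multiplies the ciphertext by 3^e mod n and sends it on to Bob.
    C_prime = (C * pow(3, e, n)) % n

    # Bob decrypts and sees "junk" M' = 3*M mod n, which he sends back.
    M_prime = pow(C_prime, d, n)
    assert M_prime == (3 * M) % n

    # Eve multiplies by 3^-1 mod n and recovers the original message.
    recovered = (M_prime * pow(3, -1, n)) % n
    assert recovered == M
    ```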

    Chosen-ciphertext-secure cryptosystems generally embed some "proof" that whoever generated the ciphertext "knows" the message to which it decrypts. For example, they use (idealized) hash functions, or encryption of the same message under two different keys. This way, Eve can't generate a valid ciphertext by modifying something Alice sent to Bob. If the proof doesn't check out, then Bob can tell that he may be under a chosen-ciphertext attack, and will throw away the ciphertext. In the previous scenario, Bob can't tell whether the ciphertext was built maliciously, or was just the encryption of some garbage that Alice legitimately sent. Here, he can.

    I like this work because it moves chosen-ciphertext attacks from the realm of "paranoid academics modelling an unrealistically powerful adversary" to the "real world." Now people may think twice before choosing weaker crypto than is currently available.

The key elements in human thinking are not numbers but labels of fuzzy sets. -- L. Zadeh

Working...