Encryption / Security / Technology

Another New AES Attack

Jeremy A. Hansen writes "Bruce Schneier gives us an update on some ongoing cryptanalysis of AES. 'Over the past couple of months, there have been two new cryptanalysis papers on AES. The attacks presented in the paper are not practical — they're far too complex, they're related-key attacks, and they're against larger-key versions and not the 128-bit version that most implementations use — but they are impressive pieces of work all the same. This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is much more devastating. It is a completely practical attack against ten-round AES-256.' While ten-round AES-256 is not actually used anywhere, Schneier goes on to explain why this shakes some of the cryptology community's assumptions about the security margins of AES."
This discussion has been archived. No new comments can be posted.

  • That's nice... (Score:4, Insightful)

    by clone53421 ( 1310749 ) on Friday July 31, 2009 @03:56PM (#28902085) Journal

    But all I really want is something that'll crack a RAR password without taking months. (AES-128)

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      RAR passwords? Everyone seems to do it via brute force or near-brute force, assuming Eugene Roshal is good at crypto. He isn't bad (good enough to use elliptic curves in the registration scheme), but apparently the people who write the password crackers are terrible.

      It's not the AES-128 you want to focus on. It's the hash protecting the AES-128 decryption key in the header.

      It's iterated (hence the slow speed of traditional brute-force) and salted, but the salt isn't very big. It's therefore possible to us
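      For illustration, here's the general shape of that header scheme: an iterated, salted hash turning the password into the AES key. This is only a sketch, not RAR's actual algorithm; the function name, the use of PBKDF2, and every parameter below are assumptions.

      import hashlib, os

      def derive_header_key(password: bytes, salt: bytes, iterations: int = 1 << 18) -> bytes:
          # Iterating the hash makes every password guess expensive, which is
          # why naive brute force against the archive header is so slow.
          return hashlib.pbkdf2_hmac("sha1", password, salt, iterations, dklen=16)

      salt = os.urandom(8)                       # small salt, as described above
      key = derive_header_key(b"hunter2", salt)  # 128-bit key for the AES layer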

  • first post? (Score:1, Redundant)

    All your AES are belong to us!

  • by Anonymous Coward on Friday July 31, 2009 @04:00PM (#28902139)

    AES-256 by definition has 14 rounds. AES-128 has ten rounds. Ten rounds were determined by the designers to give enough security to support a 128-bit keyspace, not 256 bits. For 256 bits, the designers specified 14 rounds.

    AES is based on a cipher called Rijndael, whose number of rounds, number of key bits, and maybe block size (not sure of the last) can be set arbitrarily. So there is such a cipher as 10-round Rijndael-256. For that matter, there is even 1-round Rijndael-256, which is of course insecure. And there's 1000-round Rijndael-128, which is secure but dirt slow. The AES standardization process used Rijndael parameter settings which the designers claimed to be as fast as possible while still being secure to the strength specified by the key size. That is, they used the minimum sufficiently-secure number of rounds for the key size.

    Got that? For AES-128, the designers said 10 rounds was enough. For AES-256, this new research showed that 10 rounds is not enough, which is what the designers pretty much said all along, though nobody had a specific proof of that until now.

    • by johannesg ( 664142 ) on Friday July 31, 2009 @04:18PM (#28902365)

      Do you know why AES-256 is apparently more vulnerable than AES-128? Reading the article, attacks on AES-256 have apparently reduced the search time far more (to 2^119) than they have for AES-128 (which still stands at 2^128). Shouldn't a longer key make the attack more difficult as well because it increases the search space?

      • Well, if you had asked whether more rounds make the attack more difficult, then I would have an answer: more rounds don't necessarily make the attack more difficult.

        To verify this, take a Rubik's Cube in its solved state. Hold it such that your fingers touch the top-middle and bottom-middle squares. Now begin to rotate the right side of the cube by one turn, then turn the entire cube by 90 degrees. Repeat this. After some time you will notice that the cube begins to return to the starting position, although it

      • by Joce640k ( 829181 ) on Friday July 31, 2009 @05:22PM (#28903047) Homepage

        AES-256 and AES-192 are really AES-128 in disguise. They were created only to meet NIST requirements for three different key sizes, not for any practical security reason (128 bits is definitely enough to prevent brute-force cracking).

        The AES algorithm needs 128 bits of key for each pass through the encryption loop.

        For AES-256 they select 128 bits from the 256-bit key for each round. Some of the key bits don't make it into the encryption loop until quite late in the process so in the final output they've only had a few rounds of encryption and can be brute-forced with much less than 2^256 effort. When you have some of the key you can go back and get a few more bits, and so on...

        nb. The designers weren't stupid; they designed AES-256 so that the key is completely diffused, and this attack doesn't work against all fourteen rounds of AES-256. The surprise is that somebody managed to extract the key out of a ten-round version. This was unexpected.

        nb. In AES-128 *all* of the key bits have been through *all* the rounds of encryption so inferring anything about the key by looking at the output is much more difficult (and hopefully impossible).

    • by Ex-Linux-Fanboy ( 1311235 ) on Friday July 31, 2009 @04:21PM (#28902401) Homepage Journal

      To be more precise, Rijndael has two parameters:

      • Key size, which can be 128, 160, 192, 224, or 256 bits in size
      • Block size, which can also be 128, 160, 192, 224, or 256 bits in size

      This means Rijndael is a set of 25 different ciphers; AES is a subset of three of these ciphers. The number of rounds is derived from the maximum of these two parameters; for a 256-bit key and 128-bit block, it is defined as 14 rounds. Fewer rounds means we're not analyzing Rijndael, but a reduced-round Rijndael variant.
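      To make the round-count rule concrete, here's a one-liner version of it as I understand the Rijndael spec (Nr = max(Nk, Nb) + 6, with Nk and Nb counted in 32-bit words); treat it as an illustrative sketch rather than a reference implementation.

      def rijndael_rounds(key_bits: int, block_bits: int) -> int:
          # Nk and Nb are the key and block lengths in 32-bit words.
          return max(key_bits // 32, block_bits // 32) + 6

      print(rijndael_rounds(128, 128))  # 10 -> AES-128
      print(rijndael_rounds(192, 128))  # 12 -> AES-192
      print(rijndael_rounds(256, 128))  # 14 -> AES-256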

      Related key attacks, by and large, are only an issue with "make a hash out of a block cipher" constructions. I don't know offhand if this is an issue with Whirlpool [wikipedia.org], a hash construction using an AES variant; as I recall, some changes were made to the key schedule of Whirlpool.

    • by evanbd ( 210358 ) on Friday July 31, 2009 @04:45PM (#28902663)

      Reduced-round attacks are a standard cryptanalytic technique. You start by breaking a reduced-strength version of the cipher with a completely impractical attack that's marginally better than brute force. Then someone comes along and observes that they can improve your attack to more rounds or shorter time. Then that repeats a few times. Eventually, the cipher is broken.

      No, they haven't broken AES. However, this is a step along the way. If the designers of AES had known that there was a good attack against the 10-round version, they wouldn't have recommended 14 rounds -- standard practice is to include a larger safety factor than that. This is a big deal, not because you can now break AES, but because the attacks are much closer to doing that than previously thought. Hence, the recommendation by Schneier to move to 28 rounds -- improve the safety factor. Attacks always get better, never worse. It's possible (though unlikely) that there are unpublished attacks on AES known by some organizations -- and the closer to a real break the publicly known attacks are, then the more plausible that scenario becomes. Attacks that get this close and weren't anticipated by the cipher designers are scary things.

      Also, this is a related-key attack -- meaning the attacker needs two keys that are related somehow and the same piece of plaintext encrypted with both. If the implementation of AES that you use does a good job of selecting a truly random key, then the attacker can't implement this attack because he can't get you to use the requisite pair of related keys. That doesn't mean it isn't a valid attack, just that it's an attack that can be defended against. Again, the biggest worry is that someone will take this attack and realize how to improve upon it to make an attack that's even better.
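      As a small illustration of that defence, generating each key independently from the OS CSPRNG is exactly what denies the attacker the related-key pairs the attack needs. A minimal sketch (Python's secrets module is just one reasonable choice):

      import secrets

      def fresh_aes256_key() -> bytes:
          # 32 independent random bytes; keys made this way have no exploitable
          # relationship to one another, so a related-key attack never gets started.
          return secrets.token_bytes(32)

      k1, k2 = fresh_aes256_key(), fresh_aes256_key()
      assert k1 != k2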

      • Re: (Score:1, Offtopic)

        Attacks always get better, never worse.

        That's only because my last boss hasn't worked in cryptanalysis (yet). If he ever does, we will all be safe again.

        • by afabbro ( 33948 )

          That's only because my last boss hasn't worked in cryptanalysis (yet). If he ever does, we will all be safe again.

          Are you trying to impress us by being mysterious? Or do you often write online posts that only you understand?

          • Re: (Score:3, Funny)

            by darpo ( 5213 )
            This is Slashdot, home of thousands of Asperger's sufferers. He probably has a whole world that's known only to him.
          • I forgot my sarcasm tag apparently. Attacks get better, not worse in cryptanalysis because my old boss hasn't been able to make them worse. He can make anything worse (it's his specialty). We'd all be safe again if he worked in the field because he'd mess the entire field up so bad no one would ever be able to decrypt anything again. I'd say you made me ruin the joke by explaining it, but if I had to explain it, then it couldn't have been that good. I almost included a bit more with the post, but I thought

      • Hence, the recommendation by Schneier to move to 28 rounds -- improve the safety factor. Attacks always get better, never worse. It's possible (though unlikely) that there are unpublished attacks on AES known by some organizations -- and the closer to a real break the publicly known attacks are, then the more plausible that scenario becomes. Attacks that get this close and weren't anticipated by the cipher designers are scary things.

        The problem isn't the ability to move to 28 rounds for security now, but w

        • Re: (Score:3, Insightful)

          by dch24 ( 904899 )
          Good point.

          If we move to 28 rounds now, then the hope is that by the time AES-256 with 14 rounds is broken, there will not be much valuable data left encrypted with it.

          I think it's a safe assumption that the value of data decreases with time.
          • Re: (Score:3, Insightful)

            by setagllib ( 753300 )

            If attackers against any system have the resources to store all of the system's traffic in the hopes of decrypting it with a complete break later (e.g. as WEP was broken after months/years of wireless traffic), then the fact is they'll have a lot of sensitive information. To an individual, corporation or defence organisation, there is plenty of "old" data that would be very damaging for others to have, and yet in general the old data inches closer to exposure. So sure, it drops in value, but never enough to

      • by slackergod ( 37906 ) on Friday July 31, 2009 @08:20PM (#28904703) Homepage Journal

        Another (somewhat less well-known) thing that can be done is to use OAEP+ (http://en.wikipedia.org/wiki/Optimal_Asymmetric_Encryption_Padding) to encrypt the data blocks that you're transmitting. The link is to OAEP, but OAEP+ is probably what you'd want to use with AES... I don't have a link handy, and the basic principle of the two is the same...

        The OAEP algorithm scrambles your data chunks by XORing your plaintext with randomly generated bits, but in a way that's recoverable IF and ONLY IF you have the entire ciphertext decoded (it was designed for RSA, but can apply to AES). This means that the same key+plaintext will always result in different ciphertext, and it also means that in order to get any useful bits of key/plaintext information, the attacker must get them all, or they're just guessing as to which set of random bits OAEP used (and it generally puts in 128 bits' worth).

        While the actual OAEP protocol is a block-level operation, and the safe version adds 128 bits of randomness (and thus size), the general idea can be modified to be as cheap or expensive as you want... the idea in general makes many asymmetric ciphers MUCH more secure.
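        Here is a toy version of that XOR-with-recoverable-randomness idea, using only hashlib. It is a sketch of the principle, not a compliant OAEP or OAEP+ implementation: the mask function, the lengths, and the missing integrity checks are all simplifications.

        import hashlib, os

        def _mask(seed: bytes, length: int) -> bytes:
            # Simplified mask generator (a stand-in for MGF1).
            out, counter = b"", 0
            while len(out) < length:
                out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
                counter += 1
            return out[:length]

        def _xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def toy_wrap(message: bytes) -> bytes:
            r = os.urandom(16)                          # fresh randomness every time
            masked_msg = _xor(message, _mask(r, len(message)))
            masked_r = _xor(r, _mask(masked_msg, 16))   # r recoverable only from ALL of masked_msg
            return masked_r + masked_msg                # same message, different output each call

        def toy_unwrap(blob: bytes) -> bytes:
            masked_r, masked_msg = blob[:16], blob[16:]
            r = _xor(masked_r, _mask(masked_msg, 16))
            return _xor(masked_msg, _mask(r, len(masked_msg)))

        assert toy_unwrap(toy_wrap(b"attack at dawn")) == b"attack at dawn"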

    • Re: (Score:1, Informative)

      by Anonymous Coward

      "Attacks only get better. They never get worse."

      Breaking reduced-round versions of a cipher shows that it is not as strong as originally hoped. Beating 10 out of 14 rounds shows that this algorithm has serious problems.

  • AES crack (Score:5, Funny)

    by mwvdlee ( 775178 ) on Friday July 31, 2009 @04:06PM (#28902207) Homepage

    So I guess this is an AES-hole?

    • Re: (Score:3, Funny)

      by ozbird ( 127571 )
      Sounds like the cryptographers need to do their belts up a notch; nobody likes to see AES cracks.
  • Practical? (Score:5, Insightful)

    by GigsVT ( 208848 ) on Friday July 31, 2009 @04:07PM (#28902211) Journal

    I'm not sure how practical it is for any "programmer on the streets" to pay attention to this sort of thing.

    Time and again it's the stupid stuff that gets us... broken implementations, not broken algorithms. Like the null-terminated strings in SSL certs, or the Debian ssh keys being one out of only 64k possible.

    I say this because I have to constantly hear stupid stuff from fellow programmers like "MD5 is broken!!!11". They make design choices based on these unlikely attacks, without fully understanding the real nature of this stuff.

    • Bruce Schneier said this is no big deal, and he knows. The media is hyping this, but it's not a thing to worry about.
    • Re:Practical? (Score:5, Informative)

      by UltimApe ( 991552 ) on Friday July 31, 2009 @04:15PM (#28902337)

      I've seen real-world attacks against MD5 where it was being used as a checksum/verification. Malicious individuals injected code, but the MD5 hash didn't change. http://en.wikipedia.org/wiki/MD5#Vulnerability [wikipedia.org] We researched it in a security course I took recently.

      • by DarkOx ( 621550 )

        Right. I think the message, though, should be: if you are not going to take the time to understand why it's broken and whether it's still safe to use for your application, and you're just going to use something else instead, then make that something else another well-known, well-researched standard.

        For example:
        MD5 is broken, so you decided to use SHA-1 -- good idea (see the sketch below)
        MD5 is broken, so you decided to write your own hash function -- bad idea
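        A minimal sketch of the "swap in another well-studied hash" option, using Python's standard hashlib (SHA-256 shown; passing "sha1" works the same way):

        import hashlib

        def file_digest(path: str, algorithm: str = "sha256") -> str:
            # Stream the file through a standard, well-analyzed hash instead of MD5.
            h = hashlib.new(algorithm)
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            return h.hexdigest()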

    • Re: (Score:2, Insightful)

      > I'm not sure how practical it is for any "programmer on the streets" to pay attention to this sort of thing.

      These "any programmer on the street" guys hopefully never implement anything in the vicinity of crypto code.

      You do not need to read the papers. Reading something like http://www.daemonology.net/blog/2009-06-11-cryptographic-right-answers.html [daemonology.net] -- if you happen to trust Colin Percival -- should be enough, if you do not try to be creative in what you use.

      What is so bad about considering MD5 broken a

      • His "CTR+HMAC" method relies completely on the quality of your nonce for security.

        Software-generated nonces are notoriously error-prone, especially on systems which need to generate millions of them every day. Several SSL cracks have been based on guessing parts of the nonce.

        • A CTR denialist? They still make you?

          A nonce doesn't need to be secret or unpredictable, but only unique. You can just use a counter. If you need a unique start, you can use a large random number to start the sequence, and it's easy to modularize a good cryptographic RNG. (Not that it's necessarily easy to create a good RNG.) There are a variety of ways to generate a good nonce.
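          For example, one common construction is a random per-key prefix plus a message counter; uniqueness is the only property being bought. A minimal sketch (the 8+8-byte split is just an assumed layout, not any particular library's convention):

          import os
          from itertools import count

          class NonceSource:
              """Unique 16-byte nonces: 8 random prefix bytes + an 8-byte counter."""
              def __init__(self):
                  self.prefix = os.urandom(8)   # fixed for the lifetime of one key
                  self.counter = count(0)       # never repeats within that lifetime
              def next_nonce(self) -> bytes:
                  return self.prefix + next(self.counter).to_bytes(8, "big")

          nonces = NonceSource()
          assert nonces.next_nonce() != nonces.next_nonce()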

          The only other reasonable alternative is the venerable CBC mode, but using a non-repeating IV (a nonce by another name) for that mo

          • Ummmm ... ok. I can see how that works.

            OTOH I'm very worried about this statement in his "why I recommend using the Encrypt-then-MAC composition" page:

            "Of these three, only Encrypt-then-MAC is provably secure, in the sense of guaranteeing INT-CTXT (integrity of ciphertexts -- it's unfeasible for an attacker to construct a valid ciphertext.."

            He seems to be contradicting himself there - in CTR mode you can easily flip bits of the cyphertext then make a new MAC.

            • He seems to be contradicting himself there - in CTR mode you can easily flip bits of the cyphertext then make a new MAC.

              Your "attack" works against any plain hash used a MAC. Good MACs, like HMAC [wikipedia.org], are not vulnerable to such trivial tampering: because an attacker does not have the secret key used to construct the original MAC, he can't create a valid MAC for his modified ciphertext.

              The paper the author is implicitly mentioning, by the way, is Hugo Krawczyk's "The Order of Encryption and Authentication for Pr [iacr.org]

            • in CTR mode you can easily flip bits of the cyphertext then make a new MAC.

              No. MAC != hash. You cannot compute a new MAC unless you have the MAC key.

      • I trust Colin Percival implicitly. His speaking matches his writing to an astonishing extent.
    • I say this because I have to constantly hear stupid stuff from fellow programmers like "MD5 is broken!!!11".

      While MD5 cannot be considered broken as a hash for data verification where no malicious modification is suspected, enough potentially practical "attacks" have been published to suggest that all new cryptographic code should use something else.

      We do work for banking organisations, and a number of them now specify that MD5 should not be used as a hash function in any new project work and should be phased out of existing code where practical. If our clients consider it broken, then we have to, or at least we h

    • by rtechie ( 244489 )

      Please mod the OP up.

      At least 90% of problems with cryptographic systems are based on implementation, not broken algorithms. There are countless examples of this. While a new attack against AES is important, it's really only of interest to the relatively small group of mathematicians doing algorithm design. Vulnerabilities in the SSL signing process (for example) are of much more interest to programmers.

    • by jhol13 ( 1087781 )

      MD5 is broken.

      For example, http://www.copyclaim.com/ [copyclaim.com] is now completely useless: I can trivially "sign" two texts with the same hash.

  • ... just yet. On the other hand, it is always necessary to follow what happens in cipher breaking for an understanding of what to use and what not.

    My impression in this, though, is that people just invest more time in more obscure attacks. Related-key attacks have no relevance in most applications. Still the right thing for the crypto researchers to do if more general attacks fail, but less relevant to practice, if at all.

  • by al0ha ( 1262684 ) on Friday July 31, 2009 @04:15PM (#28902339) Journal
    That's the beauty of open algorithms: the best minds in the world work on cracking them and come up with theoretical proofs of a weakness which ultimately prove to everyone, beyond the shadow of a doubt, the security of the algorithm. Too bad many corporations don't understand this and try to create closed cryptographic algorithms which, in almost every case, turn out to be very lame.
    • by natehoy ( 1608657 ) on Friday July 31, 2009 @04:23PM (#28902421) Journal
      Like one of my bosses once said, years ago, "If we implement industry standards in our processes, then we'll be doing things just like everyone else does! Where's the competitive advantage in THAT?"
    • The best minds in the world work on cracking them and come up with theoretical proofs of a weakness which ultimately prove to everyone, beyond the shadow of a doubt, the security of the algorithm.

      It only proves beyond a shadow of a doubt the maximum security of the algorithm while the actual security remains in question.

  • by mlts ( 1038732 ) * on Friday July 31, 2009 @04:45PM (#28902673)

    Even though AES is far from being truly broken, I wonder if it's time for NIST to start working on an AES2 spec. Maybe Serpent would be a good candidate, because it was argued to have a larger security margin than Rijndael/AES.

    As stated in TFA, attacks only get better and better, so every decade or so, maybe it would be time to consider another standard encryption algorithm. The reason DES lasted so long as an algorithm was that cryptography was not as vital to day-to-day operations as it is now, so a complete break would have been more of an academic exercise than one that would get the cryptographer financial gain. These days, if a blackhat finds a break, or reduces the keyspace to a level where brute forcing is possible, there are billions of dollars to be gained.

    • attacks only get better and better, so every decade or so, maybe it would be time to consider another standard encryption algorithm.

      That does nothing to protect all of the existing AES data. And you can't go back and simply re-encrypt the old data to the new standard. The whole idea of encrypting it in the first place was that it was likely to get stolen somewhere along the way and when it did it would never be of any use to the thief. There is a lot of AES-protected data that has been copied and can simpl

      • The whole idea of encrypting it in the first place was that it was likely to get stolen somewhere along the way and when it did it would never be of any use to the thief.

        I disagree with your use of the word 'never'.

        While we would like to design cryptographic tools that last forever, it's really hard.

        For one, there's (almost always) the brute-force attack. By buying more computers, you can always do it faster, since it is by its nature embarrassingly parallel.
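        Some back-of-the-envelope arithmetic shows both sides of that: parallel brute force finishes a 56-bit DES keyspace almost instantly, yet barely dents 128 bits. The throughput figure below is an arbitrary assumption, not a real-world measurement.

        SECONDS_PER_YEAR = 365.25 * 24 * 3600
        keys_per_second = 1e15                     # assumed aggregate attacker throughput

        for bits in (56, 128):
            expected_tries = 2 ** (bits - 1)       # on average, half the keyspace
            years = expected_tries / keys_per_second / SECONDS_PER_YEAR
            print(f"{bits}-bit key: about {years:.3g} years at {keys_per_second:.0e} keys/second")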

        The best we can hope for is that for all thieves, their (perceived expected) cost of breaking the crypto exceeds their (perceived expected) gain.

        As long as computers yield more cycles per dollar over time, we will have

    • Maybe Serpent would be a good candidate, because it was argued to have a larger security margin than Rijndael/AES.

      With all the talk of 28-round AES, Serpent is definitely worth another look. The authors purposely made it 32 rounds even though they thought 16 rounds would be very secure. It was rejected in the AES competition because 10-to-14-round Rijndael was faster than 32-round Serpent, but if Rijndael moved to 28 rounds, I doubt it would still be faster than Serpent.

    • The reason DES lasted so long as an algorithm was that cryptography was not as vital to day-to-day operations as it is now, so a complete break would have been more of an academic exercise than one that would get the cryptographer financial gain.

      That's utter nonsense. DES was used extensively, and for plenty of critical data. What do you think all those web browsers have been using for SSL?

      DES STILL hasn't been broken. Its short key length just ran up against Moore's Law. 3DES is still a sound algo, b

  • TwoFish (Score:2, Interesting)

    They should have picked TwoFish.
    • Re: (Score:3, Interesting)

      by QuoteMstr ( 55051 )

      Maybe. Twofish is almost as fast as AES, and possibly more secure. Schneier has a lengthy discussion in Practical Cryptography on possible weaknesses in AES that are a result of its simple algebraic structure, and to this day there are no successful attacks against Twofish or its 64-bit-blocked ancestor Blowfish. Then again, AES has received more scrutiny.

      • by afabbro ( 33948 )
        One of the comments on Schneier's site was that Serpent/Twofish was largely scored down because of speed (which was one of the design criteria). If more rounds are added to AES, its advantage over Serpent may vanish.
        • The cardinal sin of security is sacrificing safety for speed. It applies to all work, really, but in security, the effects are particularly harmful.

          • by dido ( 9125 )

            At around the time the AES contest was ongoing, I was doing work on writing a cryptographic layer for an embedded system based on small microcontrollers. I wrote assembler implementations of Serpent, Twofish, RC6, and Rijndael for the embedded microcontroller (it was an 8-bit 8051-type controller, if I recall correctly), and Rijndael was consistently the most efficient, so it came as no surprise to me that Rijndael was several months later declared the Advanced Encryption Standard. One of the criteria for

    • Re: (Score:2, Interesting)

      by whoisisis ( 1225718 )

      > They should have picked TwoFish.

      I would choose TwoFish over AES because TwoFish was very close to being picked as the standard and didn't make it. That means AES gets all the attention, and "nobody" attacks TwoFish.

      However, if they'd chosen TwoFish, would we today be reading about a new weakness in TwoFish, and would you have made a comment on how they should've picked AES?

      • Technically, we'd still be reading about a new weakness in AES, because AES was the name given to whichever algorithm won the contest. Rijndael would still be called Rijndael; TwoFish would be called AES.
  • of Enigma and Crypto AG :)
    Hi to the NSA and GCHQ :)
  • The best attack against DES breaks 15 out of 16 rounds faster than brute force. However, for the full 16 rounds, the best attack against DES is brute force. Likewise, the best attack against SKIPJACK breaks 31 out of 32 rounds. In both cases the NSA was fairly involved with the development of the algorithms, and they just happen to have no "security margin". Perhaps that means the NSA was ignorant of the methods (such as impossible differential cryptanalysis) that the academic sector developed. Perhaps it means
  • Honestly, I want to see more articles on how people are attacking RC6 and Twofish. Some of us like the 1:1 complexity algorithms, not the Rijndael compromise that was decided upon for AES, which gets slower the more secure you want it. I want more exposure for Twofish.
