Encryption Math

MIT Research: Encryption Less Secure Than We Thought 157

A group of researchers from MIT and the University of Ireland has presented a paper (PDF) showing that one of the most important assumptions behind cryptographic security is wrong. As a result, certain encryption-breaking methods will work better than previously thought. "The problem, Médard explains, is that information-theoretic analyses of secure systems have generally used the wrong notion of entropy. They relied on so-called Shannon entropy, named after the founder of information theory, Claude Shannon, who taught at MIT from 1956 to 1978. Shannon entropy is based on the average probability that a given string of bits will occur in a particular type of digital file. In a general-purpose communications system, that’s the right type of entropy to use, because the characteristics of the data traffic will quickly converge to the statistical averages. ... But in cryptography, the real concern isn't with the average case but with the worst case. A codebreaker needs only one reliable correlation between the encrypted and unencrypted versions of a file in order to begin to deduce further correlations. ... In the years since Shannon’s paper, information theorists have developed other notions of entropy, some of which give greater weight to improbable outcomes. Those, it turns out, offer a more accurate picture of the problem of codebreaking. When Médard, Duffy and their students used these alternate measures of entropy, they found that slight deviations from perfect uniformity in source files, which seemed trivial in the light of Shannon entropy, suddenly loomed much larger. The upshot is that a computer turned loose to simply guess correlations between the encrypted and unencrypted versions of a file would make headway much faster than previously expected. 'It’s still exponentially hard, but it’s exponentially easier than we thought,' Duffy says."
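A rough way to see the distinction the researchers are drawing, sketched in Python with made-up numbers (the bias and key size below are hypothetical, not from the paper): a slightly non-uniform secret can have nearly full Shannon entropy yet take noticeably fewer guesses than that figure suggests, which is what min-entropy and "guesswork" measures capture.

```python
# Toy illustration (not from the paper): Shannon entropy vs. min-entropy vs.
# the expected number of guesses for a slightly biased secret.
import math
from itertools import product

p1 = 0.55   # hypothetical bias: each secret bit is 1 with probability 0.55
n = 16      # keep the secret short so every outcome can be enumerated exactly

shannon_bits = n * -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))
min_entropy_bits = n * -math.log2(max(p1, 1 - p1))

# Expected number of guesses for an attacker who tries candidates in order of
# decreasing probability (Massey's "guessing entropy").
probs = sorted(
    (math.prod(p1 if b else 1 - p1 for b in bits) for bits in product((0, 1), repeat=n)),
    reverse=True,
)
expected_guesses = sum(i * p for i, p in enumerate(probs, start=1))

print(f"Shannon entropy : {shannon_bits:.2f} of {n} bits")
print(f"Min-entropy     : {min_entropy_bits:.2f} of {n} bits")
print(f"Expected guesses: {expected_guesses:.0f} (a uniform secret needs {(2**n + 1) / 2:.0f})")
```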
  • by For a Free Internet ( 1594621 ) on Wednesday August 14, 2013 @01:59PM (#44567373)

    I thought this was News for Nerds, but instead we are reading about Math, which is some kind of religion, and I am an Atheist.

  • good news for NSA (Score:5, Interesting)

    by minstrelmike ( 1602771 ) on Wednesday August 14, 2013 @02:00PM (#44567381)
    According to the Wired article on the huge Utah data center, its purpose is to store encrypted messages from foreign embassies and eventually, some time in the future, decrypt them and gain insight into how the 'enemy' (any foreigner) thinks. That time is now exponentially closer.
    • by DigitAl56K ( 805623 ) on Wednesday August 14, 2013 @02:03PM (#44567427)

      I severely doubt this is news to the NSA.

      • Re: (Score:3, Interesting)

        by BronsCon ( 927697 )
        Shit, I'm not even a crypto expert and it wasn't news to me. If you know what part of a stream of data is supposed to look like, and you know where in the stream that part of the data should be, you can attack that part of the stream to determine at least a portion of the decryption key. From there, you try the partial key at set intervals within the datastream and look for anything else familiar, such as file headers or plain ol' empty space, additional patches of data you can fill in from things you already
        • Re:good news for NSA (Score:5, Informative)

          by Shaiku ( 1045292 ) on Wednesday August 14, 2013 @05:07PM (#44568841)

          I read the article. The impression I got was that it will still take the same time today that it would have taken yesterday to break encryption, but it turns out that the metric used to demonstrate an algorithm's effectiveness at hiding information was inadequate for electronic communication. In a nutshell, the latest math explains that most encryption systems are vulnerable to side-channel attacks, even if you might not have realized it. But side-channel attacks have been employed for a long time, so those who do security already knew this anecdotally.

        • by doublebackslash ( 702979 ) <doublebackslash@gmail.com> on Wednesday August 14, 2013 @05:38PM (#44569075)

          I'll undo my moderation in this thread just to tell you that you are wrong. One cannot determine the key from the ciphertext; if one can, that is known as a "break" in the cipher.

          A "break" in a cipher does not mean that it is practical to find the key, merely that it is more feasible than mere brute force. For example, a "break" could reduce the effective strength of a cipher from 256 bits to 212 bits under a known plaintext attack. This is a BAD break in the cipher given current standards, but it is the cipher is still completely uncrackable in human (or even geologic) timescales.

          The "weeks or months" number, by the way, has nothing to do with cracking cryptographic keys. I would surmise that is a number more geared towards cracking passwords, which is an entirely different topic. Also, for some realistic numbers on cracking encryption keys, check out Thermodynamic limits on cryptanalysis [everything2.com]

          • by blincoln ( 592401 ) on Wednesday August 14, 2013 @08:31PM (#44570357) Homepage Journal

            Actually, you're both wrong.

            For certain types of encryption, you are right - a known-plaintext attack that easily reveals the key is a fatal problem for the encryption method. This is true of AES, for example. The converse is also true - currently, knowing the plaintext and encrypted values for an AES-encrypted block of data does not let an attacker determine the encryption key in a reasonable amount of time. It still requires testing every possible key to see if it produces the same encrypted block given the known plaintext.

            Other types of encryption are absolutely vulnerable to known-plaintext attacks. I'm less familiar with this area, but certain common stream ciphers (like RC4) are literally just an XOR operation, and so if you know the plaintext and ciphertext, you can obtain the keystream by XORing them together.
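A minimal sketch of that point, with a stand-in keystream rather than real RC4: one known plaintext/ciphertext pair hands the attacker the keystream at those positions (though, as the reply below notes, the keystream is not the underlying key).

```python
# Toy sketch: for a stream cipher that just XORs a keystream into the data,
# a known plaintext/ciphertext pair reveals the keystream at those positions.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)                    # stand-in for an RC4-style keystream
plaintext = b"GET /index.html HTTP/1.1\r\n"   # something an attacker can often guess
ciphertext = xor_bytes(plaintext, keystream)

# Attacker side: known plaintext XOR ciphertext == keystream (for those bytes).
recovered = xor_bytes(plaintext, ciphertext)
assert recovered == keystream[:len(plaintext)]

# If that keystream is ever reused, other traffic becomes readable at those positions.
other_ct = xor_bytes(b"Cookie: session=topsecret!", keystream)
print(xor_bytes(other_ct, recovered))
```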

            • Some stream ciphers are as you say, but the keystream is not the same as the underlying key. One can't guess the next character in the keystream without deriving the key. Most modern stream ciphers use internal feedback much in the same way that block ciphers use external feedback modes, like CBC, to prevent these attacks.

              In any system without feedback like this it is always considered insecure to re-use a key at all.

      • by mcrbids ( 148650 )

        We have no reason to believe that, despite its resources, the NSA is significantly ahead of the public face of encryption technologies. In fact, it has been noted numerous times that cryptographers working for the NSA aren't paid nearly as well as their private-sector counterparts;

        It's reasonable, then, to assume that the NSA doesn't have any magic secrets other than gag orders alleged by affected parties [arstechnica.com].

    • This is hardly news at Fort Meade. If we're hearing about it now, the NSA probably has had the same knowledge for years.

      • Re:good news for NSA (Score:5, Interesting)

        by Anonymous Coward on Wednesday August 14, 2013 @02:19PM (#44567587)

        Maybe, maybe not. Consensus has shifted, and many researchers no longer believe that the NSA has the best and the brightest, or that they possess much fundamental cryptographic insight not already available to civilian researchers.

        When the NSA tried to sneak a back door into an optional random number generator specified in a recent NIST specification, they were almost immediately caught by academics. http://en.wikipedia.org/wiki/Dual_EC_DRBG

        On the other hand, operationally they're clearly second to none. Security engineering and penetration involve much more than basic mathematical insight.

        • by Anonymous Coward

          On the other hand, operationally they're clearly second to none. Security engineering and penetration involve much more than basic mathematical insight.

          Edward Snowden proved the first point wrong and the second point right.

        • by minstrelmike ( 1602771 ) on Wednesday August 14, 2013 @03:18PM (#44568005)

          When the NSA tried to sneak a back door into an optional random number generator specified in a recent NIST specification, they were almost immediately caught by academics. http://en.wikipedia.org/wiki/Dual_EC_DRBG [wikipedia.org]

          They probably should have taken lessons from Xerox if they wanted to embed random numbers in documents.

        • by lgw ( 121541 )

          I'm not sure what the intent was with Dual_EC_DRBG! It's a bit silly to believe it was "sneaking in a backdoor" because (1) people figured it out using techniques the NSA knew were public, and more importantly (2) the dang thing is so slow there's no way anyone ever would have used it in the first place.

          The first you can argue was NSA arrogance, but the second? The second is just weird. I could believe the NSA trying to sneak in a backdoor, but one that obviously no one would use? I don't even?

          • Re: (Score:3, Insightful)

            by Anonymous Coward

            If the NSA was only concerned with open source cryptographic products and protocols, you would have a point. But aside from government procurement, NIST standards are in practice used to specify deliverables for corporate security products. Getting Dual_EC_DRBG into a NIST standard is the equivalent of putting a backdoor into an ISO standard for door locks.

            Once in the standard, the NSA can then lean on vendors to use the broken algorithm, and the vast majority of users of that product would be none the wiser.

            • by lgw ( 121541 )

              OK, but why on earth would the NSA need a backdoor into a US government-procured system? They have the key to the front door!

              And again there's the "far too slow to actually use" thing. It's 100 to 1000 times as slow as the other choices IIRC.

      • Times have changed (Score:3, Interesting)

        by Anonymous Coward

        I don't have insider knowledge; this is just speculation based on societal trends. Where cryptography used to be the almost exclusive realm of governments protecting their secrets, it is now quite mainstream. Encryption protects e-commerce transactions among other things that are useful for the average person and vital to our businesses. It is now a field that university researchers pay attention to (where only cryptographers in the employ of spy agencies did previously), and companies spend their own money on it.

    • Re:good news for NSA (Score:4, Informative)

      by Bob the Super Hamste ( 1152367 ) on Wednesday August 14, 2013 @02:10PM (#44567493) Homepage
      But at the same time:

      It's still exponentially hard

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Bad news for the NSA. Known insecurity can be fixed either through patch or brute force (bigger key). The NSA, I'm sure, prefers secret insecurity.

    • by freeze128 ( 544774 ) on Wednesday August 14, 2013 @03:07PM (#44567927)
      Good! If it gets exponentially closer, that means it will never arrive!
    • This works only if the content is only encrypted _once_.
      If you encrypt it twice, there will be no correlation, no recognizable content.

      • That is why I always double encrypt everything in ROT-13.
      • I'm no cryptology expert, far from it, but that was my first thought as well. If you can analyze the data by guessing what the unencrypted data looks like, then encrypting twice would make that orders of magnitude more difficult, as you'd have to analyze the output of every conceivable key.

      • This is a widely held misconception. Double encryption is not significantly stronger than single encryption due to the meet-in-the-middle attack [wikipedia.org].
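A toy sketch of the meet-in-the-middle idea, using a made-up 16-bit "cipher" (purely illustrative, nothing like a real cipher): breaking the double encryption costs roughly two searches of the single keyspace plus a lookup table, not the squared keyspace you might hope for.

```python
# Toy meet-in-the-middle demo on a made-up 16-bit block "cipher".
from collections import defaultdict

MASK = 0xFFFF

def enc(k, x):  # invertible toy mapping: add the key, then XOR a key-derived constant
    return ((x + k) & MASK) ^ ((k * 0x9E37) & MASK)

def dec(k, y):
    return ((y ^ ((k * 0x9E37) & MASK)) - k) & MASK

k1, k2 = 0x3A7C, 0xC152            # the secret pair of independent 16-bit keys
m = 0x1234
c = enc(k2, enc(k1, m))            # "double encryption" of one known plaintext

# Attacker: encrypt m under every k1 candidate and index the results, then
# decrypt c under every k2 candidate and look for a middle value that matches.
forward = defaultdict(list)
for a in range(2**16):
    forward[enc(a, m)].append(a)

candidates = [(a, b) for b in range(2**16) for a in forward.get(dec(b, c), [])]
print((k1, k2) in candidates, "surviving key pairs:", len(candidates))
# A second known plaintext/ciphertext pair whittles the survivors down to the
# real keys; total work is on the order of 2 * 2^16, not 2^32.
```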
    • That time is now exponentially closer.

      I strongly suspect that you do not understand the meaning of "exponential" in the mathematical context appropriate for this subject.

      I gather that there may be a dilution of the meaning in slang to equate "exponentially" with "a lot". That is slang's problem.

  • Just Great (Score:5, Funny)

    by Anonymous Coward on Wednesday August 14, 2013 @02:03PM (#44567415)

    Just great. Now instead of 100 Quintillion years, it's only going to take 100 Trillion years to decrypt my porn.

  • Huh? (Score:4, Insightful)

    by Black Parrot ( 19622 ) on Wednesday August 14, 2013 @02:03PM (#44567425)

    What correlation between the plaintext and cyphertext are they talking about?

    Also, I think there is a theorem about modern crypto systems that says if you can guess one bit, the rest doesn't get any easier.

    • Re:Huh? (Score:5, Interesting)

      by Arker ( 91948 ) on Wednesday August 14, 2013 @02:12PM (#44567517) Homepage

      Any correlation between plain and cipher. For instance, if you can deduce that a particular string will occur at a particular point in the plaintext, then you can isolate the cipher equivalent and use that as a lever to break the rest of the ciphertext. You don't have to deduce it with certainty for this to be important, even if you have to try and discard a number of possible correlations before you find one that holds up.

      This is a pretty basic old-school cryptographic method, kind of fun to think that fancy-pants mathematicians have been missing it all these years.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        There is no "cipher equivalent", unless you're doing something stupid like using ECB mode. [wikipedia.org]
        No modern encryption scheme works by simple one-to-one substitution; you use a nonce [wikipedia.org] or an IV [wikipedia.org] with a chaining mode so that even if the same plaintext appears several times, either in the same document or over multiple messages, it will "never" (negligible chance) encode to the same value twice.
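A toy sketch of what the IV plus chaining buys you. The "block cipher" below is just a keyed hash stand-in (not invertible, so not a real or even decryptable cipher); it only illustrates why identical plaintext blocks stop producing identical ciphertext blocks.

```python
# Toy CBC-style chaining: identical plaintext blocks produce different
# ciphertext blocks, and re-encrypting the same message gives fresh output.
import hashlib, os

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher such as AES. (A hash is not invertible,
    # so this toy cannot decrypt; it only illustrates the chaining.)
    return hashlib.sha256(key + block).digest()[:BLOCK]

def toy_cbc_encrypt(key: bytes, plaintext: bytes) -> bytes:
    prev = os.urandom(BLOCK)            # random IV, sent along with the ciphertext
    out = [prev]
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK].ljust(BLOCK, b"\x00")
        prev = toy_block_encrypt(key, bytes(p ^ q for p, q in zip(block, prev)))
        out.append(prev)
    return b"".join(out)

key = os.urandom(16)
msg = b"ATTACK AT DAWN  " * 4           # the same 16-byte block, four times

ct = toy_cbc_encrypt(key, msg)
blocks = [ct[i:i + BLOCK] for i in range(BLOCK, len(ct), BLOCK)]
print(len(set(blocks)) == len(blocks))          # True: no repeated ciphertext blocks
print(toy_cbc_encrypt(key, msg) != ct)          # True: same message, different ciphertext
```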

        • by sinij ( 911942 )
          Without getting into boring details, as poster above mentioned, it is ensuring correct implementation of known secure algorithms that is important. Not entropy or some other pseudo-scientific attempt to get a shortcut to tenure.

          Short of a breakthrough in quantum computing, modern crypto is secure. If you are using AES-256 or anything else FIPS-certified, you are still going to be OK.
          • by lgw ( 121541 )

            Shannon entropy and unicity distance [wikipedia.org] have more to do with provably unbreakable systems than practically unbreakable ones. Why is a one-time pad unbreakable (assuming a good RNG)? When can a shorter key be unbreakable? What's the minimum key length needed to make an ideal cypher unbreakable for a given plaintext? Why is compression before encryption so important, and exactly how important is it?

            Purely academic questions like this are mocked by engineers in every field, but it's that sort of pure research that
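For anyone curious, the unicity-distance question above is a one-line calculation, U = H(K) / D, with D the plaintext's per-character redundancy; the figures below use the usual textbook estimate for English, not anything from this paper.

```python
# Unicity distance: roughly how much ciphertext is needed before only one key
# is consistent with a sensible plaintext. U = H(K) / D.
import math

D_ENGLISH = math.log2(26) - 1.5   # ~4.7 bits/char possible minus ~1.5 bits/char actual

def unicity_distance(key_bits: float, redundancy: float = D_ENGLISH) -> float:
    return key_bits / redundancy

for key_bits in (56, 128, 256):
    print(f"{key_bits:3d}-bit key: ~{unicity_distance(key_bits):.0f} characters of ciphertext")

# Compressing the plaintext first drives D toward zero and pushes the unicity
# distance up, which is one reason compress-then-encrypt keeps coming up here.
```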

      • by NotQuiteReal ( 608241 ) on Wednesday August 14, 2013 @02:31PM (#44567689) Journal
        Use Word! Those zippy-looking XML-ish .docx files are all messed up!
      • by Speare ( 84249 )
        If you want a visual analogy that works, think of the "WOPR guesses launch codes" scene in War Games. In that movie, it's really just eye candy to drive tension in the plot, but it works in that general way for larger texts. If WOPR could somehow compute or infer that the third digit of the launch code is A, and can't be any other letter, then it "locks" that digit down and looks for other inferences it can make. Code breaking and sudoku overlap here too.
        • Out of my field, but IIRC modern crypto systems aren't just substitutions that leave the cyphertext for a character in the same place as the plaintext. Everything gets scrambled all around.

      • by delt0r ( 999393 )
        True, but even "valid" breaks need memory on the order of a little less than 2^128 and dito for crypto operations and often plain text/cipher text size.

        There is a world of difference between practical breaks and theoretical ones. Of course there have been plenty of practical breaks as well. But at this point this has not led to one, and I'm not really sure it would lead to better breaks.
        • by Arker ( 91948 )
          It's not supposed to lead to better cracks. It's supposed to lead to a more accurate mathematical representation of how difficult cracks are to achieve.
    • by Morpf ( 2683099 )

      Well actually: if you guessed one bit correctly and you knew it, you would have cut the search space in half. But maybe I just understood you wrong, so feel free to correct me. ;)

      • True, but narrowing 2^1000 possibilities for the plaintext down to 2^999 doesn't feel like a lot of progress.

    • by sinij ( 911942 )
      This has to do with theoretical vs. practical attacks against algorithms. Crypto algorithms are evaluated based on the concept of existential forgery, meaning that an adversary can establish some correlation between an encrypted message and a truly random message. We are talking q2^128 for most cases. This does not mean that a practical attack is available, or that it can be effectively computed.
    • Re:Huh? (Score:5, Informative)

      by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Wednesday August 14, 2013 @02:32PM (#44567701)

      As usual, the paper [arxiv.org] makes more sense than the press release, but is less grandiose in its claims.

      It's a fairly technical result that finds some appeals to the asymptotic equipartition property [wikipedia.org] lead to too-strong claims, compared to a more precise analysis.

      • As usual, the paper makes more sense

        Thank you for the legwork.

        I shall honour your work by ... well, RTFP-ing!.

        And ... it looks like "we told you to not do that; this is another way of saying `don't do that`", where "that" is "using a plaintext with predictable contents".

        And that is why, back in the early 1990s, the first Zimmermann distribution of PGP included a suggestion to use an efficient compression algorithm on a message (packet, whatever) before starting encryption; because that hammers out the redundancy

    • Re:Huh? (Score:5, Funny)

      by Hatta ( 162192 ) on Wednesday August 14, 2013 @03:01PM (#44567907) Journal

      Also, I think there is a theorem about modern crypto systems that says if you can guess one bit, the rest doesn't get any easier.

      Nah, once you guess one bit, the only bit left is zero.

  • Interesting times (Score:4, Insightful)

    by DigitAl56K ( 805623 ) on Wednesday August 14, 2013 @02:10PM (#44567489)

    There was also an article on Slashdot just over a week ago about a separate advance against RSA.
    http://it.slashdot.org/story/13/08/06/2056239/math-advance-suggest-rsa-encryption-could-fall-within-5-years [slashdot.org]

    A picture is emerging where not only are the tools available to the layman for protecting information difficult to use, but there is also a good chance that they do not offer as much protection as we have long held them to provide.

  • FUD (Score:4, Interesting)

    by sinij ( 911942 ) on Wednesday August 14, 2013 @02:10PM (#44567491)
    This is well-known FUD that is making life difficult in government-facing Information Assurance circles. We are still talking 2^n, where brute-forcing n bits would take far longer than the heat death of the universe. This is such an unlikely cause for concern that the effort currently spent on mitigating and testing would be much better spent on ensuring proper implementation and validation of modern cryptographic algorithms. Instead, all they care about is entropy assessment, and they don't care that it is for an implementation of ROT13.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      With all due respect, "citation needed". The authors of the paper aren't FUDsters spewing soundbites for the media, they are presenting it at the International Symposium on Information Theory before their peers. I can't tell from the link whether the paper has been accepted by a peer-reviewed journal or whether it's still in review, so some skepticism might be called for before uncritically accepting the conclusions, but this is still a far cry from FUD.

      I'd like to see something more than just a dismissive hand-wave.

      • Re: (Score:2, Interesting)

        by sinij ( 911942 )
        This isn't a dismissive hand wave. What they discovered is a marginal concern, especially when dealing with on-the-way-out algorithms (e.g. 3DES). The authors are FUDsters not because what they discovered is false, but because they are making a huge deal out of it, and some illiterate CIOs within government circles listened and redirected resources to mitigate this non-issue.
        • This isn't a dismissive hand wave. What they discovered is a marginal concern, especially when dealing with on-the-way-out algorithms (e.g. 3DES).

          "Dismissive hand wave" refers to your terse dismissal and accusations of FUD while providing nothing more than personal opinion as evidence. If there is a basis for your assertions, prove it with links to actual proof that this is nothing.

          The authors are FUDsters not because what they discovered is false, but because they are making a huge deal out of it, and some illiterate CIOs within government circles listened and redirected resources to mitigate this non-issue.

          You must be in the field, then, and have inside knowledge. You come across as someone who is offended by the behavior of attention-seeking scientific peers and is calling them out. Fine. But the MIT research article and the paper it describes don't support your claims.

          • by sinij ( 911942 )
            So you think describing in incomprehensible math what boils down to a type of vocabulary attack, and then somehow concluding that our RNG isn't good enough (never mind the elephant in the room that your implementation+policy is vulnerable to such attack) is not FUD?
            • So you think describing in incomprehensible math what boils down to a type of vocabulary attack, and then somehow concluding that our RNG isn't good enough (never mind the elephant in the room that your implementation+policy is vulnerable to such attack) is not FUD?

              Yes, I don't think it is FUD. I may not think it is earth-shatteringly profound and proof that the sky is falling and cryptography is now broken forever, but reading the actual paper, they don't either.

              "Incomprehensible math"? What? It's a math paper, written by mathematicians, presented at a MATH symposium. It's comprehensible to the authors and the audience at the symposium, regardless of whether non-mathematicians can comprehend it. You act as though this were an attempt by hucksters to confuse by g

              • by sinij ( 911942 )
                Fear = our secrets going to get hacked
                Uncertainty = we just don't know how to quantify risks, because Step 3: Entropy!
                Doubt = everything we know about cryptography is wrong, because Flawed Example!

                I stand by my point that this paper, as far as practical cryptography goes, is FUD. I am willing to consider that it might be viewed differently through the lens of theoretical science.
        • by lgw ( 121541 )

          It's quite unlikely the authors "are making a huge deal out of it". Never, ever confuse the journalist writing about science with the scientist.

          • by sinij ( 911942 )
            I am not. This has been made a huge deal of, by people who matter (but shouldn't), since about three years ago. This paper isn't even the first time academics have paraded this flavor of red herring, which is why I find this specific instance so annoying. Insufficient entropy for random seeding, my foot. We know how to seed; we have done it for decades without any issues. Now they want to see formal analysis of this (and nothing else). How is that going to result in better cryptography?
  • Cooty Rats Semen

    (If you don't get it, you need to see: http://www.imdb.com/title/tt0105435/ [imdb.com] )

  • Crypto = buying time
  • by Anonymous Coward on Wednesday August 14, 2013 @02:22PM (#44567611)

    It is (as given on the paper) the "National University of Ireland, Maynooth" and NOT simply "University of Ireland". "The constituent universities are for all essential purposes independent universities, except that the degrees and diplomas are those of the National University of Ireland with its seat in Dublin". I'm from Ireland and had no clue WTF "University of Ireland" was going to be, and had it not been for the MIT connection I would have assumed it was one of those places you send a few dollars to get a "fake" degree. When and if it's truncated you might see "NUI", "NUIM" or "NUI Maynooth".

    • National University of Ireland, Maynooth"

      You think it sounds confusing? Meh!

      It took me about 6 clicks to get to http://www.nuim.ie/ [www.nuim.ie]

      Mathematical skill does not require presence at a "major university" (though there is a strong correlation, distorted by (common) mathematical geniuses who really do not give a shit about conventionality). Perelman (sp?), the recent proposer of a proof of the Something Big Conjecture, being a case in point.

  • Common mistake. (Score:5, Interesting)

    by Hatta ( 162192 ) on Wednesday August 14, 2013 @02:27PM (#44567649) Journal

    I remember reading in an ecology textbook about researchers who wanted to model reforestation after Mt. St. Helens erupted. They used the average seed dispersion as input to their model, and found that actual reforestation occurred much, much faster than the model predicted.

    Turns out the farthest flung seeds take root just as well as the average seed, and they grow and disperse seeds. And the farthest flung of those seeds grow and disperse seeds, compounding the disparity between average and extreme seed dispersion.

    Just something to keep in mind when you're working with averages.

    • So they forgot to take into account that the median seed had to compete with a bunch of other seeds, while the farthest seed didn't? Sounds like shoddy prediction work to me.

  • Isn't this (one reason) why any good encryption system compresses what it is encrypting first? To maximize the data's entropy?
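A quick toy demonstration of that idea, using a letters-and-spaces stand-in for real plaintext and a naive byte-frequency entropy measure:

```python
# Compression squeezes out the plaintext's statistical structure, so the bytes
# the cipher actually sees are much closer to uniform.
import math, random, zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    counts = Counter(data)
    return -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())

random.seed(0)
# Stand-in plaintext: only lowercase letters and spaces, so at most ~4.75 bits
# per byte of entropy no matter how "random" the letter choices are.
plaintext = bytes(random.choice(b"abcdefghijklmnopqrstuvwxyz ") for _ in range(20000))
compressed = zlib.compress(plaintext, level=9)

print(f"raw:        {byte_entropy(plaintext):.2f} bits/byte, {len(plaintext)} bytes")
print(f"compressed: {byte_entropy(compressed):.2f} bits/byte, {len(compressed)} bytes")
```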

  • by Geirzinho ( 1068316 ) on Wednesday August 14, 2013 @02:45PM (#44567799)

    How is this in principle different from the known plaintext attacks (https://en.wikipedia.org/wiki/Known-plaintext_attack [wikipedia.org])?

    These assume that the attacker knows both the encrypted version of the text and the original it was based on, and tries to glean information from their correlation.

    Modern ciphers are made resistant even to chosen-plaintext attacks, where the analyst can pick arbitrary plaintexts and obtain the corresponding ciphertexts.

    • by cryptizard ( 2629853 ) on Wednesday August 14, 2013 @11:12PM (#44571037)
      Pretty sure what they are saying here is that having a lot of Shannon entropy in your key is not enough for security. The paper seems to be deliberately obtuse though, which is really annoying. I am a cryptographer and it doesn't make a whole lot of sense to me right away. They note that if you draw a word from some stochastic process then the difficulty in guessing that word may not be very high, even if the entropy is high. This is completely intuitive and known.

      Imagine you have an algorithm that generates an n-bit secret key. First, it flips a random bit b. If b = 0, then it just outputs a string of n zeroes as the key. If b = 1, then it outputs n random bits. The Shannon entropy of this process is about n/2 bits, which still seems good, but cryptographically it is terrible because half the time it just uses a fixed key of all zeroes. Instead of Shannon entropy, cryptographers use a different measure called min-entropy, which is determined by the probability of the most likely outcome. So in the above case, the min-entropy would only be about one bit, which properly reflects how bad that algorithm is.

      It's late, and I might be missing something, but it doesn't seem like anything that wasn't known before. Particularly, they talk about distributions with high entropy but which are not uniform, and in cryptography you always assume you have uniform randomness. It has been known for quite a while that many things are not even possible without uniform randomness. For instance, it is known that encryption cannot be done without uniform randomness.
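Putting numbers on that example (the coin-flip key generator described above, evaluated for a few key sizes):

```python
# The coin-flip key generator above: Shannon entropy looks respectable
# (roughly n/2 + 1 bits) while min-entropy is stuck near 1 bit.
import math

def entropies(n: int):
    p_zero = 0.5 + 2**-(n + 1)     # all-zero key: from b = 0, plus the lucky b = 1 draw
    p_other = 2**-(n + 1)          # each of the remaining 2^n - 1 keys
    shannon = -(p_zero * math.log2(p_zero) + (2**n - 1) * p_other * math.log2(p_other))
    min_entropy = -math.log2(p_zero)
    return shannon, min_entropy

for n in (16, 128, 256):
    h, h_min = entropies(n)
    print(f"n={n:3d}: Shannon ~{h:6.1f} bits, min-entropy ~{h_min:.3f} bits")
```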
  • We'd send our drones after them if they wouldn't hack them and send them back.

  • they found that slight deviations from perfect uniformity in source files, which seemed trivial in the light of Shannon entropy, suddenly loomed much larger

    Okay, but can't they simply apply an xor mask to the plaintext to make it perfectly uniform, and then encrypt the masked version?

    For example, let's say it turns out that iterating on the SHA512 function [SHA512(key), SHA512(SHA512(key)), etc.] yields an arbitrarily long xor mask that has perfect uniformity, and is statistically indistinguishable from a random sequence. You then apply that mask to the plaintext before encrypting it to destroy its non-uniformity. Wouldn't that be the fix?

    Or is the proble
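For what it's worth, a sketch of the construction being proposed (hypothetical helper names; and note the reply below argues this isn't actually a fix, since the mask is only as uniform as the hash output):

```python
# Sketch of the proposed pre-masking step: stretch a masking key into a long
# XOR mask by iterating SHA-512, apply it to the plaintext, then encrypt the
# masked result as usual. Unmasking is the same XOR with the same mask.
import hashlib

def sha512_mask(mask_key: bytes, length: int) -> bytes:
    out, state = b"", mask_key
    while len(out) < length:
        state = hashlib.sha512(state).digest()
        out += state
    return out[:length]

def apply_mask(mask_key: bytes, data: bytes) -> bytes:
    mask = sha512_mask(mask_key, len(data))
    return bytes(d ^ m for d, m in zip(data, mask))

plaintext = b"highly structured plaintext: lots of zero padding and headers..."
masked = apply_mask(b"independent masking key", plaintext)
# 'masked' is what would be handed to the real cipher; the receiver decrypts
# and then applies the same mask again to recover the plaintext.
assert apply_mask(b"independent masking key", masked) == plaintext
```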

    • by Anonymous Coward

      For example, let's say it turns out that iterating on the SHA512 function [SHA512(key), SHA512(SHA512(key)), etc.] yields an arbitrarily long xor mask that has perfect uniformity, and is statistically indistinguishable from a random sequence. You then apply that mask to the plaintext before encrypting it to destroy its non-uniformity. Wouldn't that be the fix?

      The point of this paper is that iterated SHA512, or any other cryptographic operation you care to name, doesn't have perfect uniformity, and those dev

  • by Dorianny ( 1847922 ) on Wednesday August 14, 2013 @03:52PM (#44568261) Journal
    Can a knowledgeable party weigh in on what this research means for whole-disk encryption, where an attacker knows what significant amounts of the data, specifically the operating system files, look like unencrypted? It would seem to me that such knowledge makes the sort of attack described by the article much easier.
  • 'It's still exponentially hard, but it's exponentially easier than we thought,' Duffy says.

    So, what, rather than a computer taking until the heat death of the universe to crack my 4096-bit key, it will only take until our Sun goes supernova?

    brb, generating 8192 bit keys.

  • $10,000 to whoever can break this cypher.

    sekg 1408 drnh @$?" xxth bhg9 douche bag

    hjmp llmo 3860 ++%# jjgj mmnm muggle

  • "A codebreaker needs only one reliable correlation between the encrypted and unencrypted versions of a file in order to begin to deduce further correlations."

    It is necessary to encrypt twice, using 2 different encryption methods. Then it will be impossible to find one reliable correlation.
    • This is a widely held misconception. Double encryption is not significantly stronger than single encryption due to the meet-in-the-middle [wikipedia.org] attack.
      • "This is a widely held misconception. Double encryption is not significantly stronger than single encryption due to the meet-in-the-middle attack."

        I suggested using 2 different un-related encryption methods. Because the 2nd method is entirely different from the first, MITM does not function. Using 2 different un-related encryption methods protects against other attacks, also.

        Meet-in-the-middle applies to using the same encryption method two times, using different keys.

        • Note that the free TrueCrypt offers encryption using 2 or more different encryption methods [truecrypt.org], with different keys for each method. They call it cascade encryption. Unfortunately, that term is used also for encryption using 2 or more keys with the same encryption method.
        • Umm... no. It applies to any two encryption methods. I don't know why you would think it has to be the same cipher twice.
          • It applies to any two encryption methods. I don't know why you would think it has to be the same cipher twice.

            So you're saying that, in Soviet Russia, you use ROT-26 followed with ROT-52 ?
            (OK, I'm done beating every single resident bacterial cell in a dead horse to death)

        • Sorry to reply again, but I just wanted to point out that various forms of the MiTM attack have been used to attack block ciphers because you can view them as a large network of smaller components. You can come from both ends and "meet in the middle" of the cipher to gain advantage sometimes. This is how AES was first broken.
          • "You can come from both ends and "meet in the middle" of the cipher to gain advantage sometimes."

            My understanding, which may be mistaken, is that MITM attacks, or any kind of attacks, on data that has been encrypted with two or more encryption methods are unlikely to be successful. Since the patterns of encryption are different, finding coincidences is unlikely.

            I was unable to find good information. Can you tell me where to find useful research?
  • So... it's easier to determine the encryption key if you already have the unencrypted version of a file? Yeah, that's real helpful. You don't really need the key if you already have a decrypted version. Just don't ever leave a decrypted version around, and even then, don't use the same key for each file. Problem solved.
    Except they figured this out just in time for quantum computers to ruin all encryption.
