Encryption

Math Advance Suggests RSA Encryption Could Fall Within 5 Years

Posted by Soulskill
from the which-will-be-about-the-time-encryption-becomes-useless-anyway dept.
holy_calamity writes "The two encryption systems used to secure the most important connections and digital files could become useless within years, reports MIT Technology Review, due to progress toward solving the discrete logarithm problem. Both RSA and Diffie-Hellman encryption rely on there being no efficient algorithm for that problem, but French math professor Antoine Joux has published two papers in the last six months that suggest one could soon be found. Security researchers who noticed Joux's work recommend that companies large and small begin planning a move to elliptic curve cryptography, something the NSA has called best practice for years. Unfortunately, key patents for implementing elliptic curve cryptography are controlled by BlackBerry."
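For readers wondering what "the discrete logarithm problem" actually asks, here is a minimal sketch (toy numbers of my choosing, nothing from the story): given g, h and a prime p, find x such that g^x mod p == h. Baby-step giant-step solves it in roughly sqrt(p) work, which is still exponential in the bit length of p -- closing exactly this kind of gap is what Joux's line of work is about, for certain fields.

    import math

    def bsgs(g, h, p):
        """Find x with pow(g, x, p) == h by baby-step giant-step."""
        m = math.isqrt(p - 1) + 1
        baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g**j
        step = pow(g, -m, p)                         # giant stride: g**(-m) mod p
        gamma = h % p
        for i in range(m):
            if gamma in baby:                        # h * g**(-i*m) == g**j
                return i * m + baby[gamma]
            gamma = gamma * step % p
        return None

    p, g = 1019, 2                                   # toy prime and base
    h = pow(g, 777, p)
    x = bsgs(g, h, p)
    assert pow(g, x, p) == h                         # exponent recovered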
  • by Anonymous Coward on Tuesday August 06, 2013 @05:23PM (#44491181)

    https://www.schneier.com/essay-198.html

  • OpenSSL? (Score:5, Interesting)

    by pope1 (40057) on Tuesday August 06, 2013 @05:24PM (#44491203) Homepage

    OpenSSL [openssl.org] has had a good working implementation of ECDSA/ECDH for years: http://wiki.openssl.org/index.php/Elliptic_Curve_Cryptography [openssl.org]

    What exactly does BlackBerry have chained down that we don't have an open solution for?
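    For what it's worth, here's a minimal ECDH round trip through the pyca/cryptography bindings (assumed installed; they wrap the same OpenSSL EC code linked above), showing the open implementation already works end to end:

        from cryptography.hazmat.primitives.asymmetric import ec

        # each side generates a key pair on a curve OpenSSL ships
        alice = ec.generate_private_key(ec.SECP256R1())
        bob = ec.generate_private_key(ec.SECP256R1())

        # both sides derive the same shared secret from the other's public key
        assert (alice.exchange(ec.ECDH(), bob.public_key())
                == bob.exchange(ec.ECDH(), alice.public_key()))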

  • by slew (2918) on Tuesday August 06, 2013 @06:09PM (#44491689)

    Why elliptic curves when we can go back to good old-fashioned original RSA, which uses prime-number factoring as the hard problem? No patent nonsense to worry about there.

    Sometimes the past needs to remain in the past...

    Although prime factoring is considered a hard problem, the sparse distribution of prime numbers (only about x/ln(x) of the integers up to x are prime) makes RSA increasingly inefficient: the modulus has to grow superlinearly to increase security linearly.
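    A quick sanity check of that density estimate (my own sketch, toy bound): the prime-counting function really does track x/ln(x).

        import math

        def count_primes(n):
            """pi(n) by a plain Sieve of Eratosthenes."""
            sieve = bytearray([1]) * (n + 1)
            sieve[0:2] = b"\x00\x00"                 # 0 and 1 are not prime
            for i in range(2, math.isqrt(n) + 1):
                if sieve[i]:
                    sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
            return sum(sieve)

        x = 10**6
        print(count_primes(x), round(x / math.log(x)))   # 78498 vs 72382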

    Lest nostalgia continue to be your guide: the original RSA was also found to be broken, and needed to be patched to avoid the following insecurities.

    1. Messages corresponding to small input values could be simply inverted, ignoring the modulus operation (just doing numerical root estimation to invert the exponentiation). The larger the modulus, the more "insecure" messages there are.

    2. Encryption is deterministic, so it is subject to dictionary attacks.

    When people say they are using RSA today, they usually mean RSA-OAEP (optimal asymmetric encryption padding), which patches these two specific vulnerabilities of RSA.
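    Point 1 is easy to demonstrate (a minimal sketch with made-up parameters, not anyone's real keys): with textbook RSA and e = 3, any message with m**3 < n encrypts to c = m**3 exactly, so an integer cube root recovers it without the private key.

        def icbrt(c):
            """Integer cube root by binary search."""
            lo, hi = 0, 1 << (c.bit_length() // 3 + 2)
            while lo < hi:
                mid = (lo + hi) // 2
                if mid ** 3 < c:
                    lo = mid + 1
                else:
                    hi = mid
            return lo

        e = 3
        n = 2**1024 + 643                       # stand-in modulus; only m**e < n matters
        m = int.from_bytes(b"PIN:0423", "big")  # short message, so m**3 < n
        c = pow(m, e, n)                        # textbook RSA "encryption"
        assert icbrt(c) == m                    # plaintext recovered, no key needed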

    FYI, the original RSA algorithm was patented (although RSA Labs later decided not to enforce the patent and let it expire). This patent nonsense around RSA was a big issue in its day...

  • by Panaflex (13191) <convivialdingo&yahoo,com> on Tuesday August 06, 2013 @06:39PM (#44492005)

    I'm surprised to see other people going in the direction I've been going for about 3 years now. Really. I thought I was quite alone in my path, LOL.

    I still need to read this paper, but if it's taking the same path I did, then it's not as peachy as some think.

    I'm only an amateur, so take this from the point of view of someone who kicks back with a beer and enjoys solving impossible computational problems.

    I don't think it's that close to being broken... I think it'll take a huge computing effort (think multi-terabyte databases) to generate the tables across the PQ space required, so that existing problems can be used to quickly find paths and intersections. At the beginning you're looking at only a VERY SMALL speedup over modern sieving, but once the tables get generated (years of effort) you'll eventually see faster and faster improvements. At least, that's how it goes with my algorithm, which I'm sure is far from perfect and only works on a certain set of primes right now (about 20% of them), which is far from optimal.

    So yeah, progress. But I'm unconvinced that this will work for all primes.

    I'm going to read the paper now... which I'm sure is far better than what I've been doing.

  • by gnasher719 (869701) on Tuesday August 06, 2013 @06:49PM (#44492131)

    (One way this could fail is the following: factoring, I think, is in a no-man's land between P and NP-complete -- not known to be in P, nor known to be NP-complete. If NP collapses into P then so must factoring, but it could be that factoring admits some weird-ass O(n^23) algorithm while every NP-complete problem can't be done in less than, say, O(n^6000).)

    Consider this: performing 2^256 operations is physically impossible (based on the quantum-mechanical minimum energy to do anything, and the total amount of energy in the universe). 100-digit numbers are about 330 bits in size. If factoring n-bit numbers required n^30 operations, then factoring just one 330-bit number would be physically impossible.
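    The arithmetic checks out (same numbers as the parent, one line of my own Python):

        import math
        print(30 * math.log2(330))   # ~251 bits of work, right up against the 2**256 wall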

  • by mstefanro (1965558) on Tuesday August 06, 2013 @06:50PM (#44492149)

    There does not appear to be any evidence that the DLP (discrete logarithm problem) will fall to polynomial-time algorithms. The recent work of Antoine Joux that they are referring to (one of which I assume to be http://arxiv.org/pdf/1306.4244v1.pdf [arxiv.org]) provides improved quasi-polynomial algorithms for breaking DLP. But there is no reason to believe that these improvements can lead to a polynomial-time attack, and as long as that does not happen, those attacks can still be defeated by increasing the key size.

  • by amorsen (7485) <benny+slashdot@amorsen.dk> on Tuesday August 06, 2013 @06:52PM (#44492161)

    There are plenty of sane reasons to use little endian. It means the same pointer can read a value as 8 bits, 16 bits, 32 bits and so on, and it will get the right value as long as the value does not overflow the narrower width.
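    The aliasing point is easy to see even from Python (my own illustration, with byte strings standing in for memory):

        raw = (42).to_bytes(8, "little")                # 8-byte little-endian cell
        for width in (1, 2, 4, 8):                      # read the same "address" at any width
            assert int.from_bytes(raw[:width], "little") == 42

        raw_be = (42).to_bytes(8, "big")                # big-endian: the value sits at the end,
        assert int.from_bytes(raw_be[:1], "big") == 0   # so narrow reads see zeros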

    Big-endian only exists because Latin languages write their numbers wrong -- text is written left-to-right but numbers are written right-to-left. This mess has also caused the middle-endian date and time formats currently in use. ISO tries to fix the date format, but unfortunately does it by standardizing exactly the big-endian way that feels so alien to humans.

    If computers had been invented by someone writing either left-to-right or right-to-left consistently, big-endian would not have occurred to them. They would naturally write the smallest value first, just like the Arabic inventors of the numbers. Alas...

    (Yes I am a convert. I used to think little endian was a sick joke.)

  • Re:OpenSSL? (Score:2, Interesting)

    by Anonymous Coward on Tuesday August 06, 2013 @08:22PM (#44492853)

    Except that's not true. The patent often claimed to cover point compression specifies fields of characteristic 2, not the large prime fields most implementations actually use (the Certicom patents cover a ton of characteristic-2 field stuff).

    It's also the case that point compression was described in a publication at least as far back as 1986, making it too old to be patentable in general:

    Quoting DJB:

    Miller in 1986, in the paper that introduced elliptic-curve cryptography, suggested compressing a public key (x,y) to simply x: ``Finally, it should be remarked, that even though we have phrased everything in terms of points on an elliptic curve, that, for the key exchange protocol (and other uses as one-way functions), that only the x-coordinate needs to be transmitted. The formulas for multiples of a point cited in the first section make it clear that the x-coordinate of a multiple depends only on the x-coordinate of the original point.'' This is exactly the compression method that I use.

    (http://cr.yp.to/ecdh/patents.html)
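    The compression itself is a few lines (a toy sketch on a made-up curve, not a real standard): transmit only x, and recover y up to sign from the curve equation y**2 = x**3 + a*x + b (mod p). For p % 4 == 3 the square root is a single modular exponentiation.

        p, a, b = 103, 2, 3                        # made-up toy curve parameters

        def decompress(x):
            """Recover a y with y**2 == x**3 + a*x + b (mod p), or None."""
            rhs = (x**3 + a*x + b) % p
            y = pow(rhs, (p + 1) // 4, p)          # sqrt mod p, valid since p % 4 == 3
            return y if y * y % p == rhs else None # None means x is not on the curve

        # pick a point on the curve; x alone pins y down to one of two values
        x = next(x for x in range(1, p) if decompress(x) is not None)
        y = decompress(x)
        neg_y = (p - y) % p                                # the point's negative
        assert (y * y - (x**3 + a*x + b)) % p == 0         # (x, y) is on the curve
        assert (neg_y * neg_y - (x**3 + a*x + b)) % p == 0 # so is (x, neg_y)
        assert decompress(x) in (y, neg_y)                 # one extra bit picks the sign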

  • Re:RSA = out of date (Score:5, Interesting)

    by dcl (680528) on Tuesday August 06, 2013 @10:00PM (#44493407)

    There is another promising public-key encryption method known as NTRUEncrypt (http://en.wikipedia.org/wiki/NTRUEncrypt). It's lattice-based, and apparently it will still be effective in a post-quantum computing world where RSA and elliptic-curve methods will fail.

  • by cas2000 (148703) on Tuesday August 06, 2013 @10:03PM (#44493419)

    Big-endian only exists because Latin languages write their numbers wrong -- text is written left-to-right but numbers are written right-to-left.

    Huh? Numbers are written in exactly the same order they would be expressed in words: "fifty-one thousand, three hundred and forty-eight" == 51,348.

    Being trained from a young age to read numbers like that, I have no idea whether it really would have been just as easy to learn to read the digits in reverse order ("8 4 3 1 5"), but it doesn't seem so to me. In fact it seems completely unnatural: the kind of thing you might do just to prove you can, rather than because it's any better or more efficient.

    This mess has also caused the middle-endian date and time formats currently in use.

    It's really only Americans who do this (MDY), and maybe the Japanese because of the "Operation Blacklist" post-WWII occupation led by Gen. MacArthur. Everyone else uses DMY or YMD, because the middle-endian American date format is alien: people naturally order things either from smallest to largest (least significant to most significant) or from largest to smallest (most to least).

    ISO tries to fix the date format, but unfortunately does it by standardizing exactly the big-endian way that feels so alien to humans.

    YYYY-MM-DD seems perfectly natural to me, not at all alien. I was raised on D-M-Y but figured out for myself that YYYYMMDD is the only format that sorts properly (and only later learnt there was an ISO standard for it).
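    The sorting point in a few lines (my own toy lists):

        iso = ["2013-08-06", "2013-12-01", "2014-01-02"]   # chronological order
        dmy = ["06-08-2013", "01-12-2013", "02-01-2014"]   # same dates, D-M-Y
        assert sorted(iso) == iso     # plain string sort keeps ISO dates in order
        assert sorted(dmy) != dmy     # string sort scrambles D-M-Y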

    YYYYMMDD also has the advantage of being unambiguous: you don't have to guess whether whoever wrote the date is American and, if so, whether they're using a sane or insane date format. For days > 12 it's easy to figure out: 8-13-2013 can't be anything but 13th August... but 8-7-2013 could be July 8 (sane) or August 7 (insane).

    Worse, there's absolutely no way to tell except to look for other dates on the same page (or journal/ledger/book/web site/etc.) and check whether any of them have day numbers > 12.
