Encryption Security

Factoring Breakthrough? 492

Posted by michael
from the for-as-long-as-men-are-capable-of-evil dept.
An anonymous reader sent in: "In this post to the Cryptography Mailing List, someone who knows more about math than I do claimed "effectively all PGP RSA keys shorter than 2k bits are insecure, and the 2kbit keys are not nearly as secure as we thought they were." Apparently Dan Bernstein of qmail fame figured out how to factor integers faster on the same cost hardware. Should we be revoking our keys and creating larger ones? Is this "the biggest news in crypto in the last decade," as the original poster claims, or only ginger-scale big?"
This discussion has been archived. No new comments can be posted.

Factoring Breakthrough?

Comments Filter:
  • by Hew (31074) on Tuesday February 26, 2002 @01:14PM (#3071053) Homepage
    Try viewing the postscript file using the online viewer here [samurajdata.se] instead.
    • by killmenow (184444) on Tuesday February 26, 2002 @01:26PM (#3071154)
      Or view it as this [rr.com] PDF.

      Now let's see how well RR's server can handle the /. effect. :^)
    • by Cy Guy (56083) on Tuesday February 26, 2002 @02:01PM (#3071407) Homepage Journal
      Or you can (try to) view in plain text via the Google archive here [google.com]. Here's the Preface:
      Preface
      This paper is an excerpt from a grant proposal that I submitted to NSF DMS at the beginning of October 2001.

      The same techniques can be applied to other congruence-combination algorithms for factoring, discrete logarithms, class groups, etc. See [3] for a bibliography.

Priority dates. I realized on 13 September 2000 that special-purpose hardware would change the exponent in the cost of integer factorization. I announced the exponent reduction from 3 + o(1) to 2.5 + o(1) for real (two-dimensional) circuits in a seminar at Butler University on 23 March 2001, a rump-session presentation at Eurocrypt 2001 on 7 May 2001, and a talk at the Algorithms and Number Theory conference at Dagstuhl on 14 May 2001. I realized on 9 August 2001 that the sieving exponent could easily be reduced from 2.5 + o(1) to 2 + o(1).


    • plus, it'll avoid the /.ing of Tonga ...
  • not surprising... (Score:4, Insightful)

    by lyapunov (241045) on Tuesday February 26, 2002 @01:17PM (#3071073)
    Cryptography is going to be a perpetual game of "measure, counter-measure" as computing power increases and people develop more clever ways of doing things.

    Does anybody have good sources about this? Ones based on historical encryption and decryption that lead into modern times would be ideal.
    • Re:not surprising... (Score:4, Interesting)

      by monkeydo (173558) on Tuesday February 26, 2002 @01:31PM (#3071199) Homepage
      You are right, and this is a major stumbling block to widespread acceptance of encryption in the civilian world. The military and other organizations with a strong need to keep secrets are used to playing these games, but corporate America just isn't. Current applications aren't flexible enough to plug-and-play cryptography; changing crypto systems often means a complete redeployment of the application, or worse yet a new application.

      Imagine the conversation with the CIO when you tell him he has to throw out his one-year-old messaging platform because some guy figured out how to factor very large numbers efficiently and your current platform doesn't support elliptic curve cryptography.
    • Re:not surprising... (Score:4, Informative)

      by Plutor (2994) on Tuesday February 26, 2002 @01:44PM (#3071304) Homepage
      Read the book The Code Book [amazon.com] by Simon Singh. It's a fantastic mix of technical cryptography and historical perspectives.
  • by Carmody (128723) <slashdot AT dougshaw DOT com> on Tuesday February 26, 2002 @01:17PM (#3071077) Homepage Journal
    The NSA factors numbers, and their work is top-secret. When I read stories like this, I wonder if people are just discovering things that the NSA has known about for years. If the NSA could factor 2 Kbit keys, would they tell people? Probably not.

    So when you ask "Are our keys secure" the logical follow-up question is, "From who?"

    From me? Yes. I probably couldn't factor a 1000 digit number.

    From your boss? Yes. You could use rot-13 and your boss would probably be baffeled.

    From your boss' lawyers? From the police? Here is where we get into the gray area; where the article becomes relevant

    From the government? I think you were kidding yourself when you thought it was secure in the first place. I find it easy to believe that the NSA is far ahead of the public in the encryption arms-race.

    • by wiredog (43288) on Tuesday February 26, 2002 @01:21PM (#3071110) Journal
      I read somewhere that the RSA public key algorithm was invented at GCHQ, and kept secret, years before RSA invented it.
    • by Anonymous Coward
      You could use rot-13 and your boss would probably be baffeled.

      Especially if you misspell everything!
    • by monkeydo (173558) on Tuesday February 26, 2002 @01:25PM (#3071145) Homepage
      From the referenced post:

      Note that there have been rumors of an RSA cracker built by a
      three-letter agency in custom silicon before this, but until
      analyzing Bernstein's paper I had always dismissed them as
      ridiculous paranoid fantasies. Now it looks like such a device
      is entirely feasible and, in fact, likely.


      There has always been speculation that the NSA could break RSA, but most people "in the know" dismissed it as paranoia. Most mathematicians didn't believe the agency was that far ahead of the rest of us. Now that this technique is public, it explains how the spooks may have been able to break crypto everyone else believed was "unbreakable," if they had made this discovery earlier.
    • by JordoCrouse (178999) on Tuesday February 26, 2002 @01:33PM (#3071210) Homepage Journal
      From the government? I think you were kidding yourself when you thought it was secure in the first place. I find it easy to believe that the NSA is far ahead of the public in the encryption arms-race.

      Exactly! One of the most lucid posts I have ever seen on /. The alphabet soup agencies spend millions of dollars and hire the most brilliant minds in the world (not just the US), and their whole existence is based on the premise that they need to be able to find out what every human on earth is doing at any point in time.

      I have never thought that I could put one past the government, and I have never encrypted my documents because I was worried that some spook might read them. If they want my password, credit card number or DNA badly enough, they're going to get it no matter what I do. I encrypt my data because I'm more worried about script kiddies and regular old fashioned crooks.

      • I think that government agencies are more interested in having the potential to know what any single person or group is doing, rather than literally needing to know what everyone is doing, all of the time.

        What worries me is the possibility that corporations could have effectively the same amount of power, with none of the public scrutiny, accountability, or mission to "protect" (at least in theory) those they watch. As you say, individuals can (and do) protect their privacy in dealings with each other, with or without the threat of government intervention. Massive corporations, OTOH, are effectively immune to any power less than the largest national governments.

        • by rho (6063) on Tuesday February 26, 2002 @07:24PM (#3074411) Homepage Journal
          What worries me is the possibility that corporations could have effectively the same amount of power, with none of the public scrutiny, accountability, or mission to "protect" (at least in theory) those they watch.

          What public scrutiny? Do you know what the NSA is doing? Do you think your drunk, philandering senator knows? Or even cares?

          This is a dangerous attitude--whereas a corporation could learn all about you, the worst they'll do with the information is use it to sell you more bric-a-brac, and if you discover that they're invading your privacy, you can at least sue them.

          If the government is gathering this data, it can use it to take, with force, everything you own because you smoked a joint in 1963. Plus, if you find out the government is invading your privacy, you can only... well... you can only grease up your sphincter to help with the penetration. And, depending on how you find out what the government is doing, they can shoot you.

          Corporations do bad things, but the worst things are done by governments, not corporations. Even the worst things done by corporations are done by the government at the corporations' behest (viz. the DMCA).

    • by Syberghost (10557) <syberghost AT syberghost DOT com> on Tuesday February 26, 2002 @01:34PM (#3071220) Homepage
      Remember what happened with DES. The NSA said "make these changes. We can't tell you why." IBM made the changes.

      20 years later, when differential cryptanalysis was "discovered", it turned out those changes made it more resistant to differential cryptanalysis...

      • 20 years later, when differential cryptanalysis was "discovered", it turned out those changes made it more resistant to differential cryptanalysis...

        Wait, I don't understand that. Is this good or bad? Resistant meaning that you couldn't use DES for this type of new and better cryptography or the opposite that the DES was made stronger by the NSA changes... I'm confused.

        -Russ

        • by Zathrus (232140) on Tuesday February 26, 2002 @02:22PM (#3071612) Homepage
          Ok, I'm paraphrasing stuff I previously read on /.

          Which, of course, means that this is the absolute truth, so please repeat it as such.

          DES has a large space of possible keys to use. At some point in time (I don't know that it was 20 years prior to the general knowledge about differential cryptanalysis, but it was numerous years prior at least) the NSA quietly told everyone that a certain portion of that keyspace should not be used. Ever. They didn't say why. They just said that it shouldn't be used for secure applications.

          Eventually someone discovered differential cryptanalysis. It revealed that the keyspace that the NSA said not to use for DES was very, very weak and could be cracked rather trivially. The rest of the keyspace was still secure though (within the scope of the original security on DES at least).

          What he's saying is that the NSA knew about this a long, long time before anyone else had figured out why. It is not unreasonable to believe that they've figured out other "magic" to make crypto either harder or easier to crack, despite claims otherwise.

          The NSA exists to protect US national secrets. Crypto is their business. Knowing how to crack crypto tells you how safe your own crypto is. They have a very large, very undisclosed budget. Contrary to popular belief, not everyone in the government is incompetent. You may put together your own conclusions from there. Please wait in line for your aluminum foil beanie though.
          • by Paul Crowley (837) on Tuesday February 26, 2002 @06:59PM (#3074202) Homepage Journal
            That's not quite right.

            The mysterious tweak was not restricting a portion of the keyspace; it was the choice of "S-boxes". In DES, the S-boxes are a set of 8 functions that take 6-bit inputs and return 4-bit outputs. They're not specified algorithmically; the standard just says "S-box 1: 0 -> 14, 1 -> 4..." and so on: eight tables, each of which contains 64 4-bit numbers. The S-boxes are central to DES's security; the only other operations in the cipher are bit shuffles and XOR.
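
As a concrete illustration of the lookup described above, here is a minimal Python sketch of S-box 1. The table values are the ones published in the DES standard (FIPS 46); the indexing convention is the standard's: the outer two bits of the 6-bit input select the row, the middle four bits the column.

```python
# DES S-box 1 as tabulated in FIPS 46: 4 rows x 16 columns of 4-bit
# outputs. "0 -> 14, 1 -> 4, ..." is the first row read left to right.
S1 = [
    [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7],
    [0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8],
    [4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0],
    [15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
]

def sbox1(x):
    """Map a 6-bit input to a 4-bit output: bits 5 and 0 pick the
    row, bits 4..1 pick the column."""
    row = ((x >> 4) & 0b10) | (x & 1)
    col = (x >> 1) & 0b1111
    return S1[row][col]
```

The non-linearity of these hand-picked tables is exactly where the anti-differential hardening discussed below was hidden.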

            When DES was launched, people noticed pretty quickly that these tables had not been filled randomly; they did not pass randomness tests. But IBM (who designed DES) and the NSA (who approved it) were tight-lipped; not only about their design, but about the whole design of DES. Naturally, people suspected a back door.

            When differential cryptanalysis was discovered, it was shown that the S-boxes had been specifically hardened against it, and that this was the source of the pattern seen. Don Coppersmith of IBM had independently discovered DC, calling it the T-attack (T for "tickle"), and had worked out how to defend DES against it.

            However, when Mitsuru Matsui discovered linear cryptanalysis, it was found that DES was not specifically hardened against it, and indeed the best academic attack against DES is a linear attack. Since the NSA approved DES, perhaps they did not know about linear cryptanalysis either.

            Of course the real NSA back door was always the 56-bit key, and the best practical attack is still brute-force key search.
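
That 56-bit "back door" is small enough to put numbers on. A quick back-of-the-envelope in Python, using the roughly 9×10^10 keys/second reported for EFF's 1998 Deep Crack machine (an approximate published figure, not something from this thread):

```python
# Expected brute-force search time for a 56-bit DES key: on average
# half the keyspace must be tried before the key is found.
KEYSPACE = 2 ** 56
RATE = 9e10                                # keys/sec; approx. Deep Crack

expected_seconds = (KEYSPACE / 2) / RATE
expected_days = expected_seconds / 86400   # works out to a few days
print(round(expected_days, 1))
```

A few days on late-90s special-purpose hardware is why the key length, not the S-boxes, is the practical weakness.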
        • by Ralph Wiggam (22354) on Tuesday February 26, 2002 @02:23PM (#3071624) Homepage
          For the first time I know of, the NSA is actually the good guys in a Slashdot post.

          The NSA recommended changes to DES that made it a better, less crackable, scheme. Years later, when a new type of code breaking was publicly discovered, people looked back and noticed the changes the NSA had made were directly influenced by this "new" type of code breaking. The bottom line is that the NSA is, and always has been, leaps and bounds ahead of all non-classified "state of the art" cryptography.

          Could the original poster give a link? I would love to read the story.

          -B
        • by gweihir (88907) on Tuesday February 26, 2002 @03:24PM (#3072193)
          Wait, I don't understand that. Is this good or bad?

          It supposedly improved DES. But it also implies that the NSA might have known about differential cryptanalysis 20 years before public research discovered it. The implication is that they might know a lot of other things that are not yet known in the public crypto research community. On the other hand, they might only have had a hunch, or there might have been other weaknesses in the old design (they changed the S-boxes, as far as I remember) that they could find, and the effect on differential cryptanalysis is accidental.

          But there is also another limiting factor: if they can break, e.g., AES or RSA far more easily than the public suspects, they don't want the public to know! After all, when it is known that a cipher is insecure, people will stop using it or improve its security. This is analogous to not exposing a highly placed intelligence source.

          If you plan a major terrorist attack and use email for the related communication, you might have to worry. Otherwise, as long as you use ciphers that current published research believes to be secure for the near future, you should not need to worry.
    • No data security is really secure against a government focused on you -- if they can't break the crypto, they'll Trojan the machine, plant a spy camera to capture the passphrase, or squeeze the information out of you and/or your correspondents.

      The realistic target is making it cost too much to target you. (Note that cost != money -- the usual government policy in that regard is "spend all you want; we'll tax more". Real costs to governments are man-hours of specially trained personnel, risk of exposure and embarrassment, or risk of exposure and loss of ability to use the same trick again.)

    • From the government?

      Forget encryption. Piss them off and they'll come after you directly.

    • by Anonymous Coward
      Security doesn't equal encryption.
      Your stuff may be easily available even if it was protected by 4096-bit keys - and taking advantage of that is something they're even better at (better than breaking the laws of physics, anyway).
  • by Dolph (132127) on Tuesday February 26, 2002 @01:18PM (#3071086)
    I use a 4096-bit GPG key. It may take a day to encrypt a message, but at least the encryption can't be broken (yet).
    • can't be broken (yet).

      Not until we get quantum computers, that will break it almost instantly :-)
    • > I use a 4096-bit GPG key. It may take a day to encrypt a message, but at least the encryption can't be broken (yet).

      Ah, but what of the Great Unwashed, who figured "PGP? Too complicated! I can't be bothered! They'll just decrypt it anyways."

      If this math turns out to be real, the Great Unwashed was right for anyone with less than a 4096-bit key ;-)

  • by BigBadAssMonkey (266366) on Tuesday February 26, 2002 @01:18PM (#3071088) Homepage
    http://cr.yp.to/papers.html

  • Circuit for integer factorization?

    Reminds me of a certain movie...

  • I wonder how long the NSA has known about this. I'm betting a decade...

    I haven't hit a top limit on the GPG key yet. I had an obnoxiously long 4096 bit one I was testing with for a while and PGP was able to encrypt messages to it but was unable to import the private key. Oh well, time to move to an obnoxiously obnoxious 8192 bit one.

    Suddenly the 128 bit netscape encryption isn't looking so good (Not that it was before...)

    • Re:Hmm.... (Score:5, Informative)

      by jkujawa (56195) on Tuesday February 26, 2002 @01:25PM (#3071149) Homepage
      The 128 bits Netscape uses are for a symmetric key. It takes considerably fewer bits for a symmetric key to be secure than for an asymmetric key. (I forget the equivalency, but ISTR that 128 bits symmetric is roughly equivalent to 2048 bits asymmetric.)
      And the symmetric keys Netscape uses don't depend on factoring primes to be secure ...
      Although the key exchange that Netscape uses to send the session key probably does.
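
The half-remembered equivalency can be pinned down. The figures below are the commonly cited symmetric-to-RSA strength estimates (the ones later codified in NIST SP 800-57); they are rough estimates, not facts from this thread:

```python
# Commonly cited symmetric-key vs. RSA-modulus strength equivalences.
SYM_TO_RSA = {
    80: 1024,
    112: 2048,
    128: 3072,
    192: 7680,
    256: 15360,
}

def rsa_bits_for(symmetric_bits):
    """Smallest tabulated RSA size estimated to be at least as strong
    as the given symmetric key size."""
    for sym, rsa in sorted(SYM_TO_RSA.items()):
        if sym >= symmetric_bits:
            return rsa
    raise ValueError("beyond tabulated range")
```

By these estimates, 128-bit symmetric is closer to a 3072-bit RSA key than to 2048 bits.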
      • by beej (82035) on Tuesday February 26, 2002 @01:41PM (#3071277) Homepage Journal
        Any key can be cracked--you just have to search through all of them. Phew! Now, for 128 bits, that's a lot of keys to search. For 2048 bits, it's more than you can possibly imagine.

        So the trick is to find a shortcut or a flaw in the algorithm that allows you to get the data without searching all the keys.

        In the case of RSA, the shortcut is factoring the product of two primes. It's way way easier to factor a 128-bit product than it is to search through a 128-bit keyspace. So RSA bumped the size of the product up and up and up until it was as impossibly hard to factor it as it was to search a 128-bit keyspace.

        Other algorithms have other shortcuts, too. Remember when a weakness was found in the session key choosing algorithm for Netscape? The keyspace was reduced from 128 bits to 24 bits or something like that, and then a search could be made on it.

        Anything you can do to avoid trying all the keys is the name of the game. Unless you're some kind of quantum computer freak. ;-)
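
The asymmetry described above is easy to demonstrate at toy scale: factoring a small semiprime by trial division takes on the order of sqrt(n) steps, while brute-forcing an n-bit keyspace takes up to 2^n. (The modulus below is a made-up toy value, not from any real system.)

```python
def trial_factor(n):
    """Factor n by trial division; roughly sqrt(n) steps at worst."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return n, 1  # n itself is prime

# A toy 27-bit "RSA modulus": the product of two primes.
n = 10007 * 10009
p, q = trial_factor(n)

# Brute-forcing a 27-bit symmetric key could take up to 2**27 tries;
# the trial division above needed only about 5,000 divisions.
```

Scaled up, this gap is why RSA moduli must be so much larger than symmetric keys of equivalent strength.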

      • The 128 bit DES key is then encrypted using RSA (I think). Not sure how many bits the RSA key is.

        HH
      • The symmetric key used by SSL (usually for the RC4 algorithm) is negotiated using an asymmetric public key cryptosystem (usually RSA). If that can be broken easily, the keys to the symmetric algorithm are right there.


        The real reason a symmetric algorithm is used for the bulk of an SSL session is that it is much less computationally intensive than an asymmetric algorithm with a similar degree of security.


        Note that these algorithms are independently pluggable, so a more secure or larger key size asymmetric algorithm could be used alongside the same old 128 bit RC4.

        The big problem here is that systems using browser-managed certificates for authentication would have to be upgraded and new certs issued. This is not the most common usage of SSL, but where it is in place the systems tend to be large and expensive.
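
The hybrid structure described above can be sketched in a few lines. This is a toy model only: the repeating-key XOR stands in for RC4 and is not secure, and the RSA step is elided to a comment.

```python
import os

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    # Stand-in for the fast symmetric cipher: repeating-key XOR.
    # Applying it twice with the same key recovers the plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# 1. Pick a random 128-bit session key.
session_key = os.urandom(16)

# 2. In real SSL the client sends session_key encrypted under the
#    server's (slow, large) RSA public key -- the step an RSA break
#    would expose.

# 3. Bulk data then flows under the fast symmetric cipher.
ciphertext = toy_stream_cipher(session_key, b"GET / HTTP/1.0")
plaintext = toy_stream_cipher(session_key, ciphertext)
```

The design choice is purely about speed: the asymmetric step runs once per session, the symmetric cipher on every byte.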

      • Re:Hmm.... (Score:4, Funny)

        by sludg-o (120354) on Tuesday February 26, 2002 @04:27PM (#3072724)
        And the symmetric keys Netscape uses don't depend on factoring primes to be secure ...


        Good, because here's a script I wrote that factors any prime number in constant time:

        #!/usr/local/bin/perl5 -w

        print "Please enter a prime number: ";

        chomp($prime = <STDIN>);

        print "The factors of $prime are $prime and 1\n";

        exit(0);

        Of course, you really DO need to input a prime for it to work.
  • Just wait... (Score:5, Insightful)

    by JohnBE (411964) on Tuesday February 26, 2002 @01:20PM (#3071106) Homepage Journal
    Shouldn't we all hang on until crypto experts validate this? Is it theoretical? How much does the attack cost? etc. etc.

    I wouldn't start sending those revocation certificates just yet.
    • by nomadic (141991) <[nomadicworld] [at] [gmail.com]> on Tuesday February 26, 2002 @01:44PM (#3071303) Homepage
      Crypto experts? Don't you realize the average slashdot poster is an expert on all technical and mathematical subjects, no matter how esoteric? Come on, get with the program...
    • And miss foaming at the mouth at the earliest opportunity? Hardly! :) But yeah, like Mr. Infinite Compression in FL. Prove it or shut up. In fact I wish these people would actually construct a working model before opening their mouths. Would save us all a lot of time.
      • The difference is that Bernstein supplies the math. By that logic, Einstein should never have released the theory of relativity, because he couldn't prove it at the time. Good idea!
  • To quote another: (Score:2, Redundant)

    by PureFiction (10256)
    "Holy shit. The math works. Bernstein has found ways using additional hardware to eliminate redundancies and inefficiencies which appear in any linear implementation of the Number Field Sieve. We just never noticed that they were inefficiencies and redundancies because we kept thinking in terms of linear implementations. This is probably the biggest news in crypto in the last decade."

    Yeah, this is big news. It also sheds new light on the relaxation of the export constraints. The NSA has dedicated hardware performing this same processing, and probably has for the last 5-10 years...

    "Note that there have been rumors of an RSA cracker built by a three-letter agency in custom silicon before this, but until analyzing Bernstein's paper I had always dismissed them as ridiculous paranoid fantasies. Now it looks like such a device is entirely feasible and, in fact, very likely."

    Time to make new keys...
  • Don't Panic (Score:5, Informative)

    by SiliconEntity (448450) on Tuesday February 26, 2002 @01:27PM (#3071161)
    I am a co-author of RFC 2440, the OpenPGP standard. It's important to put this result into perspective. Dan Bernstein is the first to say that it is too early to tell whether his design for a factoring machine would be practical for keys of the size in common use today. See for example this recent Usenet posting [google.com], where he says,

    Protecting against the http://cr.yp.to/papers.html#nfscircuit speedup means switching from n-bit keys to f(n)-bit keys. I'd like to emphasize that, at this point, very little is known about the function f. It's clear that f(n) is approximately (3.009...)n for _very large_ sizes n, but I don't know whether f(n) is larger than n for _useful_ sizes n.

    Bernstein's paper is excerpted from a grant proposal where he is requesting funds to answer the question of whether the design is applicable to useful key sizes. At this point it is far too early to assume that 1024 to 2048 bit keys can be attacked by his proposed machine more efficiently than with known methods.

  • 1.5 bits lost? (Score:2, Insightful)

    by nagora (177841)
    If this new method speeds the calculation by a factor of three, and each extra bit in the keys doubles the amount of time needed, then surely this "breakthrough" amounts to everyone losing less than 1.5 bits of security, doesn't it?

    The poster seems to think speeding the calculation by 3x means reducing the strength of 300bits to that of 100bits. I know this is plain wrong but I'm not sure of the correct value.

    TWW

    • No, someone has been spreading around an erroneous interpretation of the paper. From the abstract:

      "This reduction of total cost from L^(2.852...+o(1)) to L^(1.976...+o(1)) means that a ((3.009...+o(1))d)-digit factorization with the new machine has the same cost as a d-digit factorization with previous machines."

      In plain terms: factoring a number with three times as many digits on the new machine costs the same as factoring the original number did on previous machines.

      Hope this clarifies why this is a breakthrough (that may be important).
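
The 3.009 figure can be sanity-checked from the two cost exponents alone. The cost is L raised to a power, and log L grows roughly like the cube root of the number of digits; ignoring the o(1) and log-log corrections, matching L^2.852 at d digits against L^1.976 at m·d digits gives m^(1/3) = 2.852/1.976:

```python
# Sanity check of the digit-scaling factor implied by the abstract's
# two cost exponents (o(1) and log-log terms ignored).
old_exp, new_exp = 2.852, 1.976
m = (old_exp / new_exp) ** 3
print(round(m, 3))   # close to the paper's 3.009...
```

The leftover discrepancy is exactly the dropped lower-order terms.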
  • Reward (Score:5, Funny)

    by suso (153703) on Tuesday February 26, 2002 @01:36PM (#3071236) Homepage Journal
    Is he going to pay someone $5000 if they can prove him wrong? (qmail joke)
  • by Spatch3 (47581)
    You could view this [cr.yp.to] post script file online here [samurajdata.se], or you could use the Windows, OS/2 or Linux viewers available here [wisc.edu].
  • by cperciva (102828) on Tuesday February 26, 2002 @01:39PM (#3071265) Homepage
    This isn't really a big deal, nor is it surprising.

    Basically, what DJB has done is translate the GNFS from its normal implementation on serial computers (where there is a great deal of available memory, but only one operation is performed at once) into a parallel implementation, where the number of processors more closely matches the available memory.

    The "decreased cost" is misleading here, since it is calculated on the assumption that sieving would have been done by a single processor with access to a huge memory... this quite simply was never the case.

    There is nothing here to suggest that factoring can be performed using any fewer FLOPS; all that is demonstrated is that by using several processors, each with a smaller memory, you can do better than with a single processor and a giant memory. Which we already knew.

    To summarize: DON'T PANIC!
    • by The Pim (140414) on Tuesday February 26, 2002 @03:03PM (#3071975)
      The "decreased cost" is misleading here, since it is calculated on the assumption that sieving would have been done by a single processor with access to a huge memory... this quite simply was never the case.

      There is nothing here to suggest that factoring can be performed using any fewer FLOPS; all that is demonstrated is that by using several processors, each with a smaller memory, you can do better than with a single processor and a giant memory. Which we already knew.

      Hold on. A parallel implementation would normally just give an N times speedup. But the Bernstein method (reportedly) does much better than that: it reduces the base of the exponential complexity by about a third (in the asymptotic case). This is far more significant than "merely" parallelizing the algorithm.
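
The distinction can be made concrete. A fixed N-fold parallel speedup divides cost by a constant, while dropping the cost exponent from 2.852 to 1.976 divides it by a factor that itself grows with the key size. A rough comparison, using log L = (ln N)^(1/3) · (ln ln N)^(2/3) with all o(1) terms ignored (so the absolute numbers are meaningless; only the trend matters):

```python
import math

def log_L(n_bits):
    """Natural log of the L-function for an n-bit modulus (o(1) dropped)."""
    ln_n = n_bits * math.log(2)
    return ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)

# Natural log of the old-cost / new-cost ratio at each key size:
# it keeps growing, unlike a constant parallel speedup.
for bits in (512, 1024, 2048):
    log_speedup = (2.852 - 1.976) * log_L(bits)
    print(bits, round(log_speedup, 1))
```

That growth with key size is the difference between an asymptotic improvement and mere parallelism.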

  • by coyote-san (38515)
    I've seen a lot of comments about how this means that all SSL and PGP encryption is transparent, etc.

    Please, get a grip.

    Most "transient" connections still use 512 bit keys. If this effectively reduces the key size by 3, that's still 170 bits. That's far larger than the RSA key that took years to crack a few years back.

    Technology improves, algorithms improve, and the TLAs can certainly afford to build cracking engines, but it will probably still take a substantial amount of time to crack a key. (Substantial = days.) Well worth the effort if you're looking at suspected terrorists or double agents inside of the FBI, but hardly worth it for anyone reading this comment.

    What we *do* need to worry about is the effect on long-term keys. E.g., root CA keys often have 20-year lifetimes, something that now seems foolish with 1k bit keys.
  • by weird mehgny (549321) on Tuesday February 26, 2002 @01:44PM (#3071302)
    .deen uoy noitpyrcne eht all is sihT
  • dnetc (Score:2, Interesting)

    by AdTropis (6690)
    so when can i get a distributed.net client that makes use of this?
  • cr.yp.to mirror (Score:4, Informative)

    by Xanni (29201) on Tuesday February 26, 2002 @01:52PM (#3071346) Homepage
    See also my Australian mirror at: http://www.glasswings.com.au/cr.yp.to/papers.html# nfscircuit [glasswings.com.au] Share and enjoy, *** Xanni ***
  • by Glock27 (446276) on Tuesday February 26, 2002 @01:55PM (#3071366)
    Are there open-source elliptic curve cryptosystems [umich.edu] available? It is thought that these are more difficult to brute-force than crypto based on factoring.

    Well, to answer my own question, on Freshmeat there appear to be one [freshmeat.net] or two [freshmeat.net].

    Have fun!

    299,792,458 m/s...not just a good idea, it's the law!

  • Who cares (Score:3, Interesting)

    by wk633 (442820) on Tuesday February 26, 2002 @02:14PM (#3071545)
    The TLAs will just install a wiretapper on your keyboard anyways.
  • ...it's not clear this has any impact on crypto security today. There are lots of o(1)'s, and nobody can be sure just how big they are for the real keys that are used today. Still, it can't do any harm to keep on your toes and make your key length as long as your hardware will allow.
  • by Thagg (9904) <thadbeier@gmail.com> on Tuesday February 26, 2002 @02:46PM (#3071829) Journal
    A friend of mine worked for Cray Computer Corporation until the untimely death of Seymour Cray. The last machine they were working on was a monster that might make more sense in terms of today's developments.

    In the early nineties, CCC was working on the Cray 3, a new gallium arsenide computer. It was to have a cycle time of about 1ns (shockingly fast back then). It was cooled by a high-pressure, very high-speed mist of Fluorinert suspended in helium. It was built as a series of wedges much like the Cray 1 and 2, although somewhat smaller. They built working prototype wedges, and were debugging them, while looking over their collective shoulders at the ground being gained on them by arrays of microprocessors.

    One thing led to another, and it was clear that the Cray 3 would never be a commercial success. They were then given a contract to build what was called the Cray 4. The Cray 4 was a one-off machine using PIM (processor in memory) chips. These were 1-bit computers, but there were 262,144 of them in the box. The idea was that the gallium arsenide chips, wiring, and cooling system that made up the Cray 3 were just the networking system for these PIM chips, which would do the actual work.

    Anyway, Cray died, and then CCC quickly died, and I don't believe that the machine was ever finished.

    thad
  • Well i couldn't get to the original site [cr.yp.to], but i see that NEC's citeseer has it [nec.com].
  • by chongo (113839) on Tuesday February 26, 2002 @04:49PM (#3072907) Homepage Journal
    Some have suggested that people should move away from RSA crypto and start using Elliptic Curve (EC) crypto. In fact the paper [cr.yp.to], if applicable to "useful" key sizes, suggests that the opposite is true.

    The methods described in the paper [cr.yp.to] can be used to improve the cost of cracking EC discrete logs as well. The author, in a recent Usenet posting [google.com], points out that the paper's methods are likely to reduce the cost/effort of EC key cracking as well ... perhaps even more than RSA key factoring.

    The paper, combined with other EC strength [slashdot.org] concerns, suggests that EC is not the technology to turn to at the moment.

    In other words, if this paper has you concerned about the security of RSA keys by factoring, then you should be even more concerned about the safety of Elliptic Curves as well.

    But as others, including the author, have stated (in large friendly letters): DON'T PANIC! It is not known whether the ideas expressed in the paper are applicable to key sizes that are in common use.

  • by Phil Gregory (1042) <phil_g+slashdot@pobox.com> on Wednesday February 27, 2002 @09:33AM (#3076964) Homepage

    I'm a person who has quite an interest in crypto and often a good understanding of the base concepts behind crypto systems, but I don't understand much of the math that goes into proofs like this. Thus, I'd like to put some questions out to those who are more experienced than I am.

    First, does this work? Have people independently verified that DJB is correct? (I went looking for some peer review via Google and didn't turn up anything that looked conclusive.)

    Secondly, what's vulnerable? As I understand it, what DJB has discovered is a more efficient method of factoring numbers (on custom hardware) such that keys three times longer can be factored in roughly the same time. RSA is based on the product of two large primes, so it's vulnerable. Aren't most public key systems based on this principle, though? How vulnerable is, for example, DSA to this new factoring technique?


    --Phil (This article went up yesterday; hope someone's still around to read my post...)
