Bernstein's NFS analyzed by Lenstra and Shamir

kousik writes "The analysis of Bernstein's NFS by Arjen Lenstra, Adi Shamir, Jim Tomlinson, and Eran Tromer has been put up on cryptosavvy. It seems especially interesting coming from Lenstra and Shamir: Lenstra led the 1994 factorization of RSA-129. From the abstract: ... We also propose an improved circuit design based on a new mesh routing algorithm, and show that for factorization of 1024-bit integers the matrix step can, under an optimistic assumption about the matrix size, be completed within a day by a device that costs a few thousand dollars..."
  • by Beryllium Sphere(tm) ( 193358 ) on Tuesday June 04, 2002 @03:54PM (#3640304) Journal
    "In particular, we show that 1024-bit RSA keys are as secure as many
    believed them to be."

    "We thus
    conclude that the practical security of RSA for commonly used modulus
    sizes is not significantly affected"

    Sounds like it only speeds up one step of the factoring process, which is important to keep an eye on but not grounds for alarm.
  • by mridley ( 571519 ) on Tuesday June 04, 2002 @03:55PM (#3640308) Homepage
    Well, the /. story excerpt is kind of alarmist, but I think the more relevant quote from the paper is "However, the theoretical analysis shows that the cost of the relation collection step cannot be significantly reduced, regardless of the cost of the matrix step. We thus conclude that the practical security of RSA for commonly used modulus sizes is not significantly affected..." (typos probably mine)
  • by russotto ( 537200 ) on Tuesday June 04, 2002 @03:56PM (#3640315) Journal
    The most important part of the abstract is the finding that "for a given cost, the new method can factor integers that are 1.17 times larger (rather than 3.01)." This means that even if the new factoring method scales to "small" numbers of bits like 1024, a 1024-bit key is only reduced to the security of an 875-bit key, not a 342-bit key. This is a big difference! It goes from "uh oh, better revoke all my keys right now" to "hmm, might want to think about increasing them in the future".
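    A quick sanity check of the arithmetic above (the 1.17 and 3.01 ratios are the paper's; the snippet is purely illustrative, and the rounding differs slightly from the figures in the comment):

        # Effective key size when the method factors integers whose bit
        # length is `ratio` times larger for the same cost (illustrative).
        for label, ratio in [("corrected ratio, Lenstra et al.", 1.17),
                             ("ratio as originally reported", 3.01)]:
            print("1024-bit key ~ %.0f-bit key (%s)" % (1024 / ratio, label))
        # prints ~875 bits and ~340 bits respectively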
  • Cliff notes version (Score:5, Interesting)

    by Ashurbanipal ( 578639 ) on Tuesday June 04, 2002 @03:59PM (#3640345)
    Basically, Dan Bernstein (who has written usable but controversial alternatives to BIND and SENDMAIL) figured out a new method for breaking RSA encryption based on custom hardware. The fellows mentioned in the headline, who are also legit crypto guys, have analyzed Dr. Bernstein's work and make the following observations:

    1) it's not quite as fast as Bernstein estimated (about half as fast for cliff notes purposes)
    2) the hardware could be affordable (others have claimed costs that are only feasible for governments)
    3) you don't have to revoke all your RSA keys, because there are steps that precede the application of the Bernstein method that still take absurd amounts of time and horsepower.

    Oh, yeah, and it has nothing to do with Sun's NFS (Network File System, a lame and usually insecure way to share files).

    Bernstein will no doubt reply. He isn't a shy guy, in my experience.
  • by Anonymous Coward on Tuesday June 04, 2002 @04:02PM (#3640365)
    funny, i think the /. excerpt should be a bit more alarmist, and the paper should be less conservative in its claims. why? because the relation collection step, while the most expensive in terms of cpu time, is also the easiest to parallelize and distribute. any moron in charge of network administration for a relatively large company can put sieving clients on all the desktops and get all the relations he needs in a few weeks without spending any money. the thing that kept this from being a threat was the inability of our hypothetical net admin to perform the matrix step. if the method presented in this paper works for the matrix step, then i consider RSA 1024 to be broken.
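    To make the parallelization point above concrete, here is a minimal sketch (all names hypothetical, not from the paper) of why relation collection distributes so easily: each client sieves a disjoint range of candidates with no coordination, and the relations are simply merged centrally.

        # Hypothetical work-splitting for distributed sieving clients.
        def partition_sieving_range(start, end, n_clients):
            """Split [start, end) into n_clients disjoint, independent chunks."""
            step = (end - start) // n_clients
            return [(start + i * step,
                     end if i == n_clients - 1 else start + (i + 1) * step)
                    for i in range(n_clients)]

        # e.g. 500 desktops each sieve an independent chunk; the resulting
        # relations are merged centrally with no inter-client communication,
        # unlike the matrix step, which needs tightly coupled hardware.
        chunks = partition_sieving_range(10**6, 10**9, 500)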
  • by TJ6581 ( 182825 ) on Tuesday June 04, 2002 @04:33PM (#3640569)
    At the very end, when they do their calculations, they use DRAM and FPGA components. They then go on to state:

        The bandwidth of the fastest PC memory is 3.2GB/sec

    The GX specs specifically state that they support 4.2 GB per second. They also state that memory latency is about 40ns. I checked Pricewatch and found parts rated at 6ns for pretty cheap. There are too many areas where it says "at least", "probably" or "about" in the calculations of how much time it takes. They might be right, but their "proof" consists of restating mathematics rules and estimations. They probably should have spent more time on actual calculations and proofs.

  • by Eran Tromer ( 583435 ) on Tuesday June 04, 2002 @05:12PM (#3640877) Homepage
    I am a coauthor of the paper. Indeed, there are many rough estimates in the paper: the theoretical analysis (and indeed, the NFS method itself) relies on some unproven conjectures and asymptotic analysis. Also, the practical cost estimates rely on technological and economic considerations which are rapidly changing. Nonetheless, we believe the assumptions and their conclusions are quite sound when used as order-of-magnitude estimates.

    As for the two technical points you mentioned:

    > > The bandwidth of the fastest PC memory is 3.2GB/sec
    > The GX specs specifically state that they support 4.2 GB per second.


    Indeed, but both PC3200 (DDR400) and dual PC800 (RDRAM) have a bandwidth of 3.2GB/sec.

    > I checked Pricewatch and found parts rated at 6ns for pretty cheap

    These "6ns" parts do not have a 6ns random-access latency. For instance, check these figures [ntsi.com].
