Security / Math

Factors Found in 200-Digit RSA Challenge

diodesign writes "The two prime factors of a 200-digit number have been discovered by researchers at Bonn University (Germany) and the CWI (Netherlands). The number is the largest integer yet factored with a general-purpose algorithm and was one of a series of such numbers issued as a challenge by the security company RSA Security in March 1991 in order to track the real-world difficulty of factoring such numbers, which are used in the RSA public-key encryption algorithm. RSA-200 beats the previous record number, 11281+1 (176 digits, factored on May 2nd, 2005), and RSA-576 (174 digits, factored on December 3rd, 2003)."
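For readers unfamiliar with why the factorization matters: below is a minimal sketch, with toy primes of my own choosing rather than the actual 200-digit challenge value, of how knowing the two prime factors of an RSA modulus lets anyone reconstruct the private key.

    # Toy RSA example: once p and q are known, the private exponent d follows
    # directly from the public key (n, e). Real moduli use primes ~100 digits long.
    p, q = 61, 53                 # small illustrative primes, not the RSA-200 factors
    n = p * q                     # public modulus (3233 here)
    e = 17                        # public exponent
    phi = (p - 1) * (q - 1)       # Euler's totient -- computable only if p and q are known
    d = pow(e, -1, phi)           # private exponent (modular inverse; Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)          # encrypt with the public key
    assert pow(ciphertext, d, n) == message  # decrypt with the recovered private key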
  • 55 CPU years (Score:4, Insightful)

    by Inkieminstrel ( 812132 ) on Tuesday May 10, 2005 @04:49PM (#12492368) Homepage
    The article says it took 55 CPU years to factor the number, though they ran it in parallel for about a year and a half. I'd hate to imagine the teams we don't hear about who are, say, 30 CPU years into the problem and just found out it's already been done.
  • Re:55 CPU years (Score:5, Insightful)

    by 0x461FAB0BD7D2 ( 812236 ) on Tuesday May 10, 2005 @04:52PM (#12492401) Journal
    Well, those teams could still pursue their endeavors, hopefully beating the time these researchers needed.

    This would mean that their algorithms and/or heuristics are superior, which would be beneficial to everyone, including these researchers who "won".

    The good thing about research like this is that no one really loses.
  • Waste of time! (Score:5, Insightful)

    by Tom7 ( 102298 ) on Tuesday May 10, 2005 @04:52PM (#12492411) Homepage Journal
    I think these RSA challenges are a waste of computer power. It's straightforward to compute on paper how much computing power it will take to factor a number with an existing algorithm, and you get a more accurate estimate than you do from sampling experimentally. I'm all for the development of new, faster algorithms and implementations, but the challenge should be for the development and demonstration of those algorithms, not the brute-force, months-on-end search that ensues.
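For what it's worth, the kind of paper estimate this comment describes is usually based on the heuristic running time of the general number field sieve, L_n[1/3, (64/9)^(1/3)]. Here is a rough sketch of how such an estimate scales with key size; the bit lengths compared are my own choice, and the absolute numbers mean nothing without calibration against a real run.

    import math

    def gnfs_effort(bits):
        """Heuristic GNFS cost L_n[1/3, c] with c = (64/9)**(1/3) for a modulus
        of the given bit length. Only ratios between two calls are meaningful."""
        ln_n = bits * math.log(2)
        c = (64 / 9) ** (1 / 3)
        return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    # RSA-200 is a 663-bit number; compare it with a 1024-bit modulus.
    ratio = gnfs_effort(1024) / gnfs_effort(663)
    print(f"factoring a 1024-bit modulus is roughly {ratio:.1e} times more work")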
  • Re:Waste of time! (Score:5, Insightful)

    by Anonymous Coward on Tuesday May 10, 2005 @05:03PM (#12492515)
    If someone claims to have found a better factoring method, it's easier for RSA to check that p, q < n and p*q = n than to read their algorithm and analysis and award a prize based on that. (Imagine how many crackpots would apply with 100-page-long "proofs".)
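The check itself is a one-liner, which is the point being made above. A sketch follows, with toy numbers standing in for the published 200-digit value and its factors, which aren't retyped here to avoid transcription errors.

    def verify_claim(n, p, q):
        """Accept a claimed factorization only if both factors are non-trivial
        and multiply back to the challenge number exactly."""
        return 1 < p < n and 1 < q < n and p * q == n

    print(verify_claim(3233, 61, 53))   # True: a valid (toy) factorization
    print(verify_claim(3233, 7, 462))   # False: 7 * 462 = 3234, not 3233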
  • Re:Waste of time! (Score:1, Insightful)

    by Anonymous Coward on Tuesday May 10, 2005 @05:03PM (#12492519)
    Except that the majority of the factoring algorithms aren't just an algorithm. They change greatly with various parameters used to start, seed or otherwise define the algorithm.

    Add to that little tricks such as using multiple algorithms for different parts of the solution area [because some algorithms work better under different conditions] and even the "paper" estimate becomes hazy.

    That's ignoring the advances in processing power, communication, and programming which have large practical effects on the actual implementation.

  • Notation? (Score:3, Insightful)

    by man1ed ( 659888 ) on Tuesday May 10, 2005 @05:03PM (#12492520) Homepage Journal
    Does anyone know what the notation "11281+1" means?
  • Re:Waste of time! (Score:5, Insightful)

    by Vellmont ( 569020 ) on Tuesday May 10, 2005 @05:04PM (#12492529) Homepage

    It's trivial to compute how much computing resources it will take to factor numbers using an existing algorithm on paper, and you get a more accurate estimate than you get from sampling experimentally

    You can certainly make a decent estimate of how long it will take, but you're never going to get a close approximation of the real-world performance of your implementation until you actually write the code and run it.

    The other side is that theoretical calculations are nice, but there's nothing quite like actual verification. It's much easier for a programmer to justify using larger key lengths when someone has actually cracked smaller key lengths rather than using calculations based on estimates of computing power.
  • Re:Notation? (Score:3, Insightful)

    by petermgreen ( 876956 ) <plugwash@nOSpam.p10link.net> on Tuesday May 10, 2005 @05:30PM (#12492803) Homepage
    The ^ is kind of a dirty-hack notation used where you can't superscript; the number was presumably meant to be 11^281 + 1, i.e. eleven raised to the 281st power, plus one.

    My guess is that someone copy-pasted it and in doing so lost the superscript (it should be noted that Slashdot doesn't allow superscripting, at least in comments).
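Assuming the intended expression really is 11^281 + 1 (a guess from context, since the superscript was lost), Python's arbitrary-precision integers show the scale of the quantity. Note that the full number has 293 digits, so the 176-digit record in the summary presumably refers to a large factor of it rather than the whole thing.

    # Hypothetical reconstruction: eleven to the 281st power, plus one.
    n = 11**281 + 1
    print(len(str(n)))   # 293 -- the 176-digit figure presumably describes
                         # a factor of this number, not the number itself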
  • Re:Waste of time! (Score:3, Insightful)

    by Tom7 ( 102298 ) on Tuesday May 10, 2005 @05:41PM (#12492914) Homepage Journal
    I said it's worthwhile to make good implementations. I think we should do this. Then, based on our understanding of the code's behavior (and probably some short-lived experiments), we can extrapolate to find the expected time to completion. This is also better than actually running it for randomized algorithms, since by running it we only get one point randomly sampled from the probability distribution. They clearly knew that the experiment would take approximately 1.5 years to run, otherwise they wouldn't have run it.

    What we should not do, once we've figured out how long something is going to take, is actually run it when the answer is totally pointless. That last step is a waste of time.
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Tuesday May 10, 2005 @06:07PM (#12493162)
    Comment removed based on user account deletion
  • Re:Waste of time! (Score:3, Insightful)

    by Tom7 ( 102298 ) on Tuesday May 10, 2005 @09:44PM (#12494788) Homepage Journal
    It took them a year and a half of computer time on, I believe, a cluster. I am arguing that that computational resource was wasted (plus the cost of powering and maintaining the cluster, etc.).

    Like I said, actual cracked keys are far easier to justify to a programmer than theoretical calculations. Actual cracked keys can be trusted 100%. Calculations of performance from unknown researchers can be trusted much less than that.

    I strongly disagree. A theoretical analysis is better because you can prove that it works for all cases (not just the one you experimentally verify), and because you get a more accurate picture for a probabilistic algorithm.

"If I do not want others to quote me, I do not speak." -- Phil Wayne

Working...