Security

Turns out, Primes are in P

zorba1 writes "Manindra Agrawal et al. of the Indian Institute of Technology Kanpur CS department have released a most interesting paper today. It presents an algorithm that determines whether a number is prime or not in polynomial time. While I haven't gone through the presentation in detail, it looks like a promising, albeit non-optimized, solution to the famous PRIMES in P problem."
  • by Mr. Sketch ( 111112 ) <<moc.liamg> <ta> <hcteks.retsim>> on Wednesday August 07, 2002 @12:15AM (#4023200)
    I can tell if a number is prime via:

    bool isprime(p)
        for i = 2 to sqrt(p)
            if p mod i == 0
                return false
            endif
        endfor
        return true
    endfunc

    If I'm not mistaken, that algo is O(sqrt(n)), thus polynomial, thus in P. But for very large p, that algo is impractical.
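    The pseudocode above, translated into a runnable Python sketch (plain trial division; same logic, just executable):

```python
import math

def isprime(p):
    # Trial division: test every i from 2 up to sqrt(p).
    if p < 2:
        return False
    for i in range(2, math.isqrt(p) + 1):
        if p % i == 0:
            return False
    return True

print([n for n in range(2, 30) if isprime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```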
  • by fatmav ( 148996 ) on Wednesday August 07, 2002 @12:19AM (#4023221) Homepage
    the ps version looks much better:
    http://www.cse.iitk.ac.in/primality.ps [iitk.ac.in]
  • by Anonymous Coward on Wednesday August 07, 2002 @12:20AM (#4023226)
    For P, it has to be polynomial in the size of _the input_. The input size here is log(n), since it takes log(n) bits to represent n. Hence log(n)^12 is polynomial (which I believe their algo guarantees), whereas sqrt(n) is not.
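    To make the input-size point concrete, a tiny sketch showing how the ~sqrt(n) trial divisions blow up as the bit length of n grows linearly:

```python
import math

# The input to a primality test is the bit string of n, so the input
# size is ~log2(n).  Trial division does ~sqrt(n) steps, and sqrt(n)
# doubles every time you add two bits: exponential in the input size.
for bits in (16, 32, 64):
    n = 2 ** bits
    print(bits, "bits ->", math.isqrt(n), "trial divisions")
# 64 bits already needs ~4.3 billion divisions
```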
  • Re:Nobel Prize Time (Score:2, Informative)

    by RackinFrackin ( 152232 ) on Wednesday August 07, 2002 @12:20AM (#4023229)
    If I'm reading this correctly, we've got a nearly guaranteed winner of the Nobel Prize here.

    This would be more in line for a Fields Medal than a Nobel Prize.
  • arg. I did it again (Score:3, Informative)

    by orz ( 88387 ) on Wednesday August 07, 2002 @12:30AM (#4023269)
    hm... I'm not sure why it removes comparison symbols when set to plain text... oh well, I wrote out "is less than" this time

    They're saying that the time T necessary to determine whether or not an N digit number is prime satisfies this equation:

    T is less than N ^ k + a

    for some constants k and a (which can be any finite values).

    Basically, it's a statement about how well an algorithm scales to REALLY large numbers.

  • by Anonymous Coward on Wednesday August 07, 2002 @12:35AM (#4023287)
    preview is your friend
  • by IntelliTubbie ( 29947 ) on Wednesday August 07, 2002 @12:36AM (#4023293)
    For those of you wondering about the implications for cryptography, this does not imply that composite numbers can be factored in polynomial time. This algorithm is simply a primality test -- that is, it tells you whether or not a number has any proper divisors (in polynomial time), but it doesn't tell you what these divisors actually are. Determining whether a number is prime has always been considerably easier than finding the prime factorization.

    In fact, for schemes like RSA -- where the key is the product of two large primes -- we already know that the number is composite, by definition, so a more efficient primality test doesn't give us any new information.

    Cheers,
    IT
  • Funny (Score:2, Informative)

    by Alizarin Erythrosin ( 457981 ) on Wednesday August 07, 2002 @12:37AM (#4023298)
    It's funny when I read the comments, and I see all kinds of stuff that reminds me of my Discrete Structures class (we did the P and NP stuff at the end)...

    Makes me wonder what this means for computer theory, but if you think about it, polynomial time can still be slow for very large n with very big powers... although not as bad as an exponential with large n (assuming you go out far enough that the exponential grows faster than the polynomial)

    Kudos to the team that discovered this

  • by Goonie ( 8651 ) <robert.merkel@be ... g ['ra.' in gap]> on Wednesday August 07, 2002 @12:39AM (#4023307) Homepage
    The technical definition is kinda long and complex, but in essence it's like this. Given a problem of some size n, a polynomial time algorithm is guaranteed to give a solution in time proportional to a polynomial of n. If a polynomial-time algorithm exists that solves a problem, then the problem is said to be in polynomial time.

    To give an example, say you've got a list of numbers and you want to know the sum. That can be done in linear time - ie, the time taken is proportional to the length of the list of numbers. The size of the problem (n) is defined by the length of the list and the time taken (T) is as follows: T = c1 * n + c0, where c1 and c0 are some fixed constants. The formula for T is a polynomial, and so the problem "LIST-SUM" is in polynomial time. It would still be in polynomial time if the formula for T was a polynomial with n^2, n^3, n^50 terms in it, or even terms like n^1.5 (because as n grows very large an n^1.5 term will always be smaller than an n^2 term).
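    The LIST-SUM example can be sketched in a few lines of Python; the loop body runs exactly n times, matching T = c1*n + c0:

```python
def list_sum(numbers):
    # One pass over the list: time proportional to len(numbers).
    total = 0
    for x in numbers:   # the loop body executes n times
        total += x
    return total

print(list_sum([3, 1, 4, 1, 5]))  # 14
```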

    Showing you an example of something outside polynomial time is a little more difficult, but some standard examples are SAT (the satisfiability problem) or the travelling-salesman problem, which you can read about in any book on the subject.

  • by ipfwadm ( 12995 ) on Wednesday August 07, 2002 @12:40AM (#4023311) Homepage
    They could have easily taken over the infrastructure of a modernized computer-bent, encryption-shielded society such as the US or Japan.

    Primality testing and factorization are not one and the same. It is possible to know that a number is not prime without knowing its factors. Breaking encryption requires factoring the product of two huge primes (it is already known that the number you're trying to factor is NOT prime, so Primes being in P is more or less useless by itself for this particular application), and factorization has yet to be shown to be in P.
  • by Erpo ( 237853 ) on Wednesday August 07, 2002 @12:40AM (#4023318)
    look here [wikipedia.com].
  • Re:p=np? (Score:1, Informative)

    by Anonymous Coward on Wednesday August 07, 2002 @12:41AM (#4023319)
    If by "superclass" you mean problems that are (thought to be) harder than the other set of problems, then yes.
    But technically speaking, NP-complete problems actually form a "sub"class of NP problems.
    And yes, PRIMES is not _known_ to be an NP-complete problem, so this doesn't really affect the complexity of 3-SAT directly.
  • by jeffy124 ( 453342 ) on Wednesday August 07, 2002 @12:41AM (#4023321) Homepage Journal
    a simple way to think of it:

    an NP-complete (NP = nondeterministic polynomial, not "non-polynomial") problem is one that can be solved exactly, but doing so can take on the order of 8*age_of_universe time. To get around this, approximation algorithms are used, but these can never give a 100% guarantee of finding the optimal solution, nor are they guaranteed to return the same solution if executed on the same data twice.

    a polynomial-time problem is one that can be solved within our lifetimes, guarantees 100% accuracy, and always generates the same solution for the same data.

    there's a LOT more to it. The book Intro to Algorithms has a good chapter on the topic of NP-completeness, which will explain the intricate and gory details.
  • Implications (Score:5, Informative)

    by davemarmaros ( 598966 ) on Wednesday August 07, 2002 @12:41AM (#4023322)
    There are 2 different problems:
    1) Determining if a number is prime [is 909 prime?]
    2) Determining the factors of a number [what are the factors of 909?]

    This article claims to be able to solve problem 1 in Polynomial time.

    However, problem 2 is MUCH harder, and that is the one which will break cryptography as we know it. This article does not claim to solve problem 2, so we're safe for now.
  • Re:Cryptography (Score:3, Informative)

    by God! Awful ( 181117 ) on Wednesday August 07, 2002 @12:41AM (#4023324) Journal

    Out of interest, will this finding have any impact on the effectiveness of present day cryptography?

    Probably not. While it is possible that this research could lead to results in speeding up factoring, a faster algorithm for determining whether a number is prime is not going to compromise the security of RSA.

    Your RSA key pair is derived from 2 large primes. The way we generate keys is to test large random numbers until we find some that are prime. Ergo, we must already have an efficient method for determining whether a number is prime.

    FYI, the most commonly used approach is a probabilistic test based on Fermat's little theorem (or Euler's criterion). Such a test doesn't actually tell you that a number is prime, but it will usually flag a number that is not prime, so if you run it enough times with different inputs, you can be 99.99999% sure that a number is prime. However, a small fraction of numbers are "pseudoprimes" -- numbers that are not prime but which pass the test anyway. Therefore, after you discover a candidate prime, you should use a different (slower) test to double-check.
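    A minimal sketch of this style of probabilistic test, using the Fermat test (a simplified stand-in for whatever exact test a given implementation uses; real key generators typically use something stronger like Miller-Rabin):

```python
import random

def fermat_probably_prime(n, rounds=20):
    # If n is prime, a^(n-1) == 1 (mod n) for every base a (Fermat).
    # A composite n usually fails this for a random a; each extra round
    # shrinks the chance of a false "prime".  (Carmichael numbers can
    # fool this test for all coprime bases -- Miller-Rabin fixes that.)
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # definitely composite
    return True           # probably prime

print(fermat_probably_prime(101), fermat_probably_prime(100))  # True False
```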

    Since this is fairly common knowledge among geeks who use encryption, I'm somewhat surprised that so many people here jumped to the same conclusion you did.

    -a
  • by Anonymous Coward on Wednesday August 07, 2002 @12:42AM (#4023332)
    There seems to be a lot of confusion in the previous posts. This algorithm has nothing to do with factoring or breaking RSA encryption. It has everything to do with making it easier and more reliable to generate RSA keys.

    Generating an RSA key pair requires choosing two very large numbers and making sure that they are both primes. This is very time consuming, and the best current algorithms only tell you that this number is a prime with 99.99...% probability (exact value depends on the number of iterations).

    An efficient algorithm that was not probabilistic would be a very good thing.
  • by platipusrc ( 595850 ) <erchambers@gmail.com> on Wednesday August 07, 2002 @12:46AM (#4023348) Homepage
    NP stands for Nondeterministic Polynomial.

    ie, it can be completed by a nondeterministic machine in polynomial time. The main problem with NP algorithms is that there aren't any nondeterministic machines around. (A nondeterministic machine can attempt all paths to try to reach a conclusion at once, whereas a deterministic machine can only try one at a time.)
  • by FalafelXXX ( 598968 ) on Wednesday August 07, 2002 @12:51AM (#4023367)
    The famous result by Miller (1976), independently rediscovered(?) by Rabin (1980), already did that. The only difference is that their algorithm is in RP (randomized polynomial). Namely, if the algorithm says a number is prime, it might be wrong (with probability one half, say), but if it says that the number is not prime, then it is not prime for sure.

    Now, if you have a number n, you run this algorithm, say, 20*log(n) times. If the algorithm says the number is not prime on any execution, you are sure it isn't. If it says it is prime on all executions, there is a ridiculously tiny probability that it is still not prime -- so small that it can be essentially ignored. Random bits are cheap nowadays, so this is quite satisfactory. This is in fact the algorithm that turned the RSA cryptosystem into a practical and useful system, because suddenly finding primes became easy.

    To break RSA, and become really famous, one has to come up with a polynomial time algorithm for factoring. It might even be that RSA can be broken without factoring, but this is still an open question (I think).

    Ahh, and BTW: polynomial time means polynomial in the size of the input. So if the number is n, the size of the input is O(log(n)), and the running time needs to be O((log(n))^(O(1))).

    Ok. End of boredom.

  • by sasami ( 158671 ) on Wednesday August 07, 2002 @12:57AM (#4023384)
    This result, if true, is very interesting from a theory standpoint.

    As far as practice goes, it's fairly irrelevant. Probabilistic primality testing can be done with a constant number of iterations and bounded error.

    The Miller-Rabin test [mit.edu] will tell you if a number is prime with at most a 1/4 probability of error. That sounds ridiculous, but the catch is that you can iterate it with a fresh random parameter each time. Do the test twice and your error probability drops to 1/16. Do it fifteen times and your chances of being wrong are about one in a billion.

    If you're truly paranoid, do it 50 times. That'll bring the error rate of the algorithm orders of magnitude below the error rate of your hardware.
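    A sketch of the iterated Miller-Rabin test being described (standard textbook version; the 1/4 error bound per round is the worst case over composite inputs):

```python
import random

def miller_rabin(n, rounds=15):
    # Each round errs with probability <= 1/4 for composite n, so
    # `rounds` independent trials give error <= (1/4)**rounds.
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True           # probably prime

# True, and (with overwhelming probability) False -- 561 is a
# Carmichael number, which fools the plain Fermat test but not this one.
print(miller_rabin(97), miller_rabin(561))
```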

    ---
    Dum de dum.
  • by shepd ( 155729 ) <slashdot@org.gmail@com> on Wednesday August 07, 2002 @01:15AM (#4023429) Homepage Journal
    >I'm not sure why it removes comparison symbols when set to plain text...

    Slashdot removes left angle brackets in an attempt to stop abuse. Since it still lets raw right angle brackets through for old style quoting (which I prefer), the left ones have to go on unverified tags.

    To display a left angle bracket despite that you'll need to type its ISO code, which renders the bracket unusable for tags (which is a good thing).

    ie: < is entered with this: &lt;

    Just something to note down FFR. Oh, and &nbsp; can be handy if you want to try to slip through some important, on-topic simple tables or ascii art. Sometimes. But not lately.

    - o
    <
    \__/
  • by MrByte420 ( 554317 ) on Wednesday August 07, 2002 @01:23AM (#4023443) Journal
    So I'm already sensing the level of confusion rising, as this is a very confusing topic. Here's a quick review. Note: I'm going to do this at a high level and not start talking about formal languages, as this is not the place to teach them.

    In loose terms, problems in P are easily solvable. For example, sorting is a problem in P. Proof: I can sort a set of n numbers in no worse than n^2 time using a bubble sort. (Yes, I know there are faster sorts, but this is an example.) The bubble sort just compares every number to every other number, so even unoptimized, the numbers are sorted in no worse than n*n = n^2 comparisons.

    So what is NP? NP problems are those where, given a proposed solution, we can verify that the solution is correct in polynomial time. An example of this is factoring. (Note: it is not known whether factoring is in P.) Given current methods, we know of no fast way to factor a big number into its prime factors. But if I were to tell you that p = q*r, you could very quickly multiply q*r and see if it equals p, thereby "verifying" my answer. Another way to think about it is that you can try out one branch of computation in polynomial time.

    So what is NP-complete? A problem is NP-complete iff 1) the problem is in NP, and 2) a polynomial-time solution to this problem would yield a polynomial-time solution to all other problems in NP. That is, no other problem in NP is harder than an NP-complete one, and if one NP-complete problem is solvable in polynomial time, then all of NP is solvable in polynomial time, P=NP, and you will win doctorates, a Nobel Prize, a Turing Award, and a million bucks from the Clay Institute for proving this. Sigh - you are probably still confused.. :)
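    The two halves of that review, as a quick Python sketch: the n^2 bubble sort, and the "verify a claimed factorization in one multiplication" idea (the sample factorization is purely illustrative):

```python
def bubble_sort(nums):
    # n passes of adjacent-pair comparisons: at most n*n comparisons.
    nums = list(nums)           # don't mutate the caller's list
    n = len(nums)
    for _ in range(n):
        for i in range(n - 1):
            if nums[i] > nums[i + 1]:
                nums[i], nums[i + 1] = nums[i + 1], nums[i]
    return nums

print(bubble_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]

# NP-style verification: checking a claimed factorization p = q*r is a
# single multiplication, even though *finding* q and r is hard.
p, q, r = 35184535666823, 4194319, 8388617
print(q * r == p)  # True
```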
  • Knuth (Score:0, Informative)

    by Anonymous Coward on Wednesday August 07, 2002 @01:29AM (#4023455)
    This is old news (1997). If one looks at TAOCP Volume 2, third edition, page 396, there are three algorithms mentioned. One runs in O((log n)^5), is rigorously proven, but depends on the extended Riemann Hypothesis. There is another that runs in (log n)^O(log log log n), which does not depend on unproven hypotheses, yet is not exactly polynomial.
  • Re:Knuth (Score:3, Informative)

    by MrByte420 ( 554317 ) on Wednesday August 07, 2002 @01:35AM (#4023464) Journal
    A proof is not rigorous if it depends on unproven conjectures. There are many examples of statements that were thought to be true and were later proved false. This proof relies on nothing more than a little abstract algebra, some number theory, and good ole plain algebra.
  • by Mornelithe ( 83633 ) on Wednesday August 07, 2002 @02:09AM (#4023536)
    I probably really shouldn't be replying, because it's been a while since I read how it works, but I can copy the algorithm and tell you where I think it would break (if at all). Please correct me where I'm wrong.
    1. Generate two random primes p and q
    2. Calculate n = pq and phi(n) = (p - 1)(q - 1) = n - (p + q) + 1 (Note: phi(n) is the number of primes less than n (Euler's totient function, I believe). phi(p) = p - 1 for prime p, and phi(pq) = phi(p)phi(q) for p relatively prime to q (note, this step breaks if p or q aren't prime))
    3. Generate e
    4. Calculate d, the inverse of e (mod phi(n)) (i.e. d*e = 1 (mod phi(n)))
    5. (e, n) is the enciphering key, (d, n) is the deciphering key
    6. For plaintext P, you get ciphertext C by doing: C = P^e mod n, and get P back by doing P = C^d mod n

    So, now there's the matter of why it works. Here we go:

    • Because of Euler's theorem (a generalization of Fermat's little theorem), we know that a^(phi(n)) = 1 (mod n) whenever gcd(a, n) = 1
    • Since ed = 1 (mod phi(n)) we have: ed = 1 + k*phi(n) for some integer k
    • So, if we encipher and decipher, we have: (P^e)^d = P^(ed) (mod n)
    • Which also means we have: P^(1 + k*phi(n)) = P*(P^(k*phi(n))) = P*((P^phi(n))^k) = P*(1^k) = P (mod n)

    So when p or q are not prime, phi(n) != (p - 1)(q - 1), so when you calculate d, you'll get something that doesn't undo the encrypting process (because it's not a multiplicative inverse mod the real phi(n)), so you'll probably get junk when you decipher.

    I don't really feel like doing a detailed analysis of the algorithm, but I imagine that this isn't used as a primality test because its running time probably isn't polynomial.
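    The six steps above, as a toy-sized Python sketch (the tiny primes 61 and 53 are purely illustrative; real RSA uses primes hundreds of digits long):

```python
# Toy RSA following steps 1-6 above.
p, q = 61, 53                      # 1. two (tiny) primes
n = p * q                          # 2. n = 3233
phi = (p - 1) * (q - 1)            #    phi(n) = (p-1)(q-1) = 3120
e = 17                             # 3. e relatively prime to phi(n)
d = pow(e, -1, phi)                # 4. d = e^-1 mod phi(n) (Python 3.8+)
                                   # 5. (e, n) enciphers, (d, n) deciphers
P = 42                             # 6. plaintext
C = pow(P, e, n)                   #    C = P^e mod n
print(pow(C, d, n))                # 42 -- P = C^d mod n recovers it
```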

  • by khuber ( 5664 ) on Wednesday August 07, 2002 @02:37AM (#4023591)
    Don't you mean 9000000.....?

    -Kevin

  • Re:p=np? (Score:2, Informative)

    by Anonymous Coward on Wednesday August 07, 2002 @02:38AM (#4023597)
    Indeed, primality has been known since the 80's to belong to the class RP (problems solvable in expected polynomial time by a randomised algorithm). It has never (in recent years) been suspected of being NP-complete. Most experts don't even think factoring is NP-complete.

    (All this assuming P!=NP, or else all these distinctions collapse.)
  • by matrix0040 ( 516176 ) on Wednesday August 07, 2002 @03:20AM (#4023667)
    No, you don't need a recursive call. As r is O(log(n)), the size of r is O(log(log(n))), so if an exponential-time algorithm is used for checking the primality of r, it'll be exponential in log(log(n)), i.e. linear in log(n).

    Same goes for q: as it's "small", you can afford an exponential algorithm.

    Also, x is a variable; those eqns (12) really are polynomial eqns.

  • by plaa ( 29967 ) <{if.iki} {ta} {nenaksin.opmas}> on Wednesday August 07, 2002 @03:21AM (#4023671) Homepage
    (Note: phi(n) is the number of primes less than n (Euler's totient function, I believe). phi(p) = p - 1 for prime p, and phi(pq) = phi(p)phi(q) for p relatively prime to q (note, this step breaks if p or q aren't prime))

    A slight error: phi(n) is the number of positive integers less than n that are relatively prime to n (i.e. gcd(n,x)=1). Therefore, if p is a prime, it is relatively prime to all smaller positive integers, so phi(p)=p-1.

    The function that tells the number of primes smaller than n is pi(n), the prime counting function.

    Refs: Totient Function [wolfram.com] Prime Counting Function [wolfram.com] (MathWorld's luckily back online!)
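    The difference is easy to check with naive brute-force versions of both functions (fine for small n, hopeless for big ones):

```python
from math import gcd

def phi(n):
    # Euler's totient: count of 1 <= k < n with gcd(n, k) == 1.
    return sum(1 for k in range(1, n) if gcd(n, k) == 1)

def pi(n):
    # Prime counting function: number of primes <= n.
    def is_prime(m):
        return m > 1 and all(m % i for i in range(2, int(m ** 0.5) + 1))
    return sum(1 for m in range(2, n + 1) if is_prime(m))

print(phi(13), pi(13))  # 12 6 -- phi(p) = p-1, but pi(13) counts 2,3,5,7,11,13
print(phi(61 * 53) == (61 - 1) * (53 - 1))  # True: phi(pq) = (p-1)(q-1)
```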
  • Size matters (Score:3, Informative)

    by deblau ( 68023 ) <slashdot.25.flickboy@spamgourmet.com> on Wednesday August 07, 2002 @03:30AM (#4023691) Journal
    Note that this algorithm takes O((log n)^12). For this to actually be faster than, say, factoring n directly, and assuming a multiplicative factor of 1 in the order statistic, n has to be at least 3*10^22, or roughly 75 bits long. This algorithm is probably very inefficient for small integers.
  • Re:I'm dead tired (Score:3, Informative)

    by matrix0040 ( 516176 ) on Wednesday August 07, 2002 @03:31AM (#4023693)
    No, it won't, as q is "small": as proved in Lemma 4.2, r is O(log(n)^6), and that is an upper bound on q. So the size of q is O(log(log(n))), and an exponential time in that will be linear in log(n)!
  • by deblau ( 68023 ) <slashdot.25.flickboy@spamgourmet.com> on Wednesday August 07, 2002 @03:44AM (#4023721) Journal
    From looking at the algo, I can't figure out what 'x' (or maybe it's a chi) is? Can someone help? I've looked it over, but couldn't find a definition of it. I'm also assuming that the 'if (r is prime)' line is a recursive call to itself? Also, how do we determine 'q' the 'largest prime factor of r-1' ? Another recursive call to get the factors? I must admit, I'm kind of lost by the algo, but it's still interesting.
    OK, I'll address these points in order:

    First off, 'x' doesn't matter. The loop at the bottom checks a congruence of two polynomials over two finite rings (if I'm reading it right, the first is generated by x^r-1 and the second by the input n). Simplistically, this amounts to grinding out the coefficients of the two polynomials and verifying that the difference of the polys equals zero, modulo the ring generator. The actual 'value of x' is never used.

    Second, if you check the order statistic calculation, they're assuming worst-case on factoring 'r' (they apply order r^1/2 for that factorization). They then make an assumption that O(r^1/2) = O((log n)^3), or that O(r) = O((log n)^6), which seems rather suspect (as if they knew the answer ahead of time and plugged in a recursive value for it). Nevertheless, they do go to some length to show that such an r exists, and that it requires at most O((log n)^6) iterations of the first loop to find it.

    As for 'q', I think again it is determined by brute-force factoring r-1. On the one hand, r is small; on the other hand, that doesn't mean a damn thing when it comes to dealing with order statistics, which I think is also a little suspect.
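    The congruence being checked, (x + a)^n = x^n + a (mod x^r - 1, n), can be sketched with plain coefficient lists; note that x never takes a value, exactly as described above. This is only an illustration of the congruence check with hypothetical small parameters, not the full algorithm from the paper:

```python
def polymul_mod(f, g, r, n):
    # Multiply polynomials f, g (coefficient lists of length r)
    # mod (x^r - 1, n): exponents wrap around mod r, coefficients mod n.
    out = [0] * r
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                out[(i + j) % r] = (out[(i + j) % r] + fi * gj) % n
    return out

def congruence_holds(n, r, a):
    # Check (x + a)^n == x^n + a (mod x^r - 1, n) by square-and-multiply.
    base = [0] * r
    base[0] = a % n
    base[1 % r] = (base[1 % r] + 1) % n      # the polynomial x + a
    result = [0] * r
    result[0] = 1                            # the polynomial 1
    e = n
    while e:
        if e & 1:
            result = polymul_mod(result, base, r, n)
        base = polymul_mod(base, base, r, n)
        e >>= 1
    target = [0] * r
    target[0] = a % n
    target[n % r] = (target[n % r] + 1) % n  # x^n + a, with x^n -> x^(n mod r)
    return result == target

# For prime n the congruence holds for every a; a composite fails for some a.
print(congruence_holds(7, 5, 2), congruence_holds(9, 5, 2))  # True False
```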

  • by khuber ( 5664 ) on Wednesday August 07, 2002 @04:29AM (#4023786)
    Er, I would call a simple divide "considerable".

    Assuming you meant "wouldn't call", division is definitely "considerable". Remember we are talking about large numbers. Try doing long division on paper for 35184535666823 divided by 4194319 (answer is 8388617) and you can see there is some work involved, even with these small numbers.

    The paper method of long division is O(n^2) and it turns out it can be done more efficiently: As I understand it, you can do division in the steps required for multiplication. Therefore the number of operations required to divide two n digit numbers is bounded by the best multiplication which is O(n lg n lg lg n) (from Knuth Volume II).

    This is about 57 operations for 10 digits and about 182 for 20 digits. You can see that doubling the number of digits more than triples the required number of operations! Likewise, 30 digits require about 6 times as many operations as 10. The operation count grows faster than the number of digits. Thus, division gets disproportionately slower the more digits you have to divide.
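    Those figures come straight from evaluating n * lg(n) * lg(lg(n)); a quick check (truncating to integers):

```python
from math import log2

def mul_bound(digits):
    # Best-known multiplication bound cited above: n * lg(n) * lg(lg(n)).
    return digits * log2(digits) * log2(log2(digits))

for d in (10, 20, 30):
    print(d, int(mul_bound(d)))
# 10 57 / 20 182 / 30 337
```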

    -Kevin

  • Re:Oh yeah? Well... (Score:1, Informative)

    by Anonymous Coward on Wednesday August 07, 2002 @04:55AM (#4023838)
    I thought things that sounded the same but were spelled differently were homophones. I thought homonyms were spelled the same with different meaning.

    YOUR NIT HAS BEEN PICKED!!!
  • Re:p=np? (Score:1, Informative)

    by Anonymous Coward on Wednesday August 07, 2002 @05:38AM (#4023924)
    NP-complete problems are not a superclass of NP; rather, they are considered to be the hardest problems in NP. "The hardest problems in NP" means that if one finds a deterministic polynomial-time algorithm for an NP-complete problem, one could use this algorithm as a subprocedure for any problem in NP (every instance of a problem in NP can be reformulated as an instance of this NP-complete problem in det. pol. time), and therefore P=NP.
    But it is more likely that there exists no det. pol. time algorithm for any NP-complete problem, and so P != NP.
  • Re:gimps (Score:3, Informative)

    by fatphil ( 181876 ) on Wednesday August 07, 2002 @06:31AM (#4023989) Homepage
    It has no benefit to the GIMPS folk, or the GFN searchers (google for "GFN Gallot"), because in those cases P+/-1 is _completely_ factorable, and therefore a single positive PRP test, using Pocklington's theorem or the Lucas analogue thereof ( http://primepages.org/ ), proves absolutely the primality of the candidate.

    This new test is for testing numbers _of no special form_.

    Phil
  • by Goonie ( 8651 ) <robert.merkel@be ... g ['ra.' in gap]> on Wednesday August 07, 2002 @08:14AM (#4024176) Homepage
    Well, there are things that are definitely outside polynomial time - the halting problem for instance, which is undecidable no matter how much time you have. There are also problems that provably take exponential time even on a nondeterministic Turing machine. However, you are quite correct that TSP or SAT just might be in P (but it's pretty damn unlikely).

    I was trying to keep it simple because the original poster said that he didn't know anything about theoretical CS.

  • Re:Cryptography (Score:1, Informative)

    by Anonymous Coward on Wednesday August 07, 2002 @10:43AM (#4024972)
    He didn't jump to a conclusion. He asked a question.
