Turns out, Primes are in P
zorba1 writes "Manindra Agrawal et al. of the Indian Institute of Technology Kanpur CS department have released a most interesting paper today. It presents an algorithm that determines whether a number is prime or not in polynomial time. While I haven't gone through the presentation in detail, it looks like a promising, albeit non-optimized, solution for the famous PRIMES in P problem."
I always thought it was in P (Score:0, Informative)
def isprime(p):
    # trial division: try every candidate divisor up to sqrt(p)
    for i in range(2, int(p ** 0.5) + 1):
        if p % i == 0:
            return False
    return True
If I'm not mistaken, that algo is O(sqrt(p)) divisions, thus polynomial, thus in P. But for very large p, that algo is impractical.
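To make "impractical" concrete: measured against the input size (the digit count d, so p is around 10^d), trial division does about 10^(d/2) divisions. A quick sketch (the function name is mine, just for illustration):

```python
import math

# Trial division up to sqrt(p) performs roughly sqrt(p) divisions.
# In terms of the INPUT SIZE -- the digit count d, with p ~ 10^d --
# that is about 10^(d/2) operations: exponential, not polynomial.

def trial_division_steps(digits):
    # worst-case number of candidate divisors for a d-digit number
    return math.isqrt(10 ** digits)

for d in (10, 20, 40):
    print(d, trial_division_steps(d))
```

Every extra 20 digits multiplies the work by 10^10, which is why "polynomial in p" and "polynomial in the size of p" are very different claims.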
for the sake of our eyes (Score:5, Informative)
http://www.cse.iitk.ac.in/primality.ps [iitk.ac.in]
Re:I always thought it was in P (Score:5, Informative)
Re:Nobel Prize Time (Score:2, Informative)
This would be more in line for a Fields Medal than a Nobel Prize.
arg. I did it again (Score:3, Informative)
They're saying that the time T necessary to determine whether or not an N digit number is prime satisfies this inequality:
T < N ^ k + a
for some fixed values (which can be any finite values) of k and a.
Basically, it's a statement about how well an algorithm scales to REALLY large numbers.
Re:arg. I did it again (Score:1, Informative)
Factoring might still be NP (Score:5, Informative)
In fact, for schemes like RSA -- where the key is the product of two large primes -- we already know that the number is composite, by definition, so a more efficient primality test doesn't give us any new information.
Cheers,
IT
Funny (Score:2, Informative)
Makes me wonder what this means for computer theory, but if you think about it, polynomial time can still be slow for very large n with very big powers... although not as bad as an exponential with large n (assuming you go out far enough that the exponential will grow faster than the polynomial).
Kudos to the team that discovered this
Re:What's Polynomial Time? (Score:5, Informative)
To give an example, say you've got a list of numbers and you want to know the sum. That can be done in linear time - ie, the time taken is proportional to the length of the list of numbers. The size of the problem (n) is defined by the length of the list and the time taken (T) is as follows: T = c1 * n + c0, where c1 and c0 are some fixed constants. The formula for T is a polynomial, and so the problem "LIST-SUM" is in polynomial time. It would still be in polynomial time if the formula for T was a polynomial with n^2, n^3, n^50 terms in it, or even terms like n^1.5 (because as n grows very large an n^1.5 term will always be smaller than an n^2 term).
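A minimal sketch of the LIST-SUM example, with the per-element work counted explicitly (variable names are illustrative):

```python
# LIST-SUM in linear time: the loop body executes exactly once per
# element, so total work is c1 * n + c0 for fixed constants c1, c0.

def list_sum(numbers):
    total = 0
    steps = 0
    for x in numbers:
        total += x
        steps += 1       # one unit of work per element
    return total, steps

print(list_sum([3, 1, 4, 1, 5]))   # (14, 5): doubling the list doubles steps
```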
Showing you an example of something outside polynomial time is a little more difficult, but some standard examples are SAT (the satisfiability problem) or the travelling-salesman problem, which you can read about in any book on the subject.
Re:If this is true... (Score:5, Informative)
Primality testing and factorization are not one and the same. It is possible to know that a number is not prime without knowing its factors. Breaking encryption requires factoring the product of two huge primes (it is already known that the number you're trying to factor is NOT prime, so Primes being in P is more or less useless by itself for this particular application), and factorization has yet to be shown to be in P.
If you're confused by "P" and "NP".... (Score:5, Informative)
Re:p=np? (Score:1, Informative)
But technically speaking, NP-complete problems actually form a "sub"class of NP problems.
And yes, primes is not _known to_ be an NP-complete problem, so this doesn't really affect the complexity of 3-SAT directly.
Re:What's Polynomial Time? (Score:2, Informative)
an NP-complete (NP = nondeterministic polynomial, not "non-polynomial") problem is one for which every known exact algorithm takes something like 8*age_of_universe time on large inputs. To get around this, approximation algorithms are used, but these can never give a 100% guarantee of finding the optimal solution, nor will a randomized one necessarily give the same solution if it executes on the same data twice.
a polynomial-time problem is one that can be solved within our lifetimes, with a 100% correctness guarantee, and that always generates the same solution for the same data.
there's a LOT more to it. The book Intro to Algorithms has a good chapter on the topic of NP-completeness, which will explain the intricate and gory details.
Implications (Score:5, Informative)
1) Determining if a number is prime [is 909 prime?]
2) Determining the factors of a number [what are the factors of 909?]
This article claims to be able to solve problem 1 in Polynomial time.
However, problem 2 is MUCH harder, and that is the one which will break cryptography as we know it. This article does not claim to solve problem 2, so we're safe for now.
Re:Cryptography (Score:3, Informative)
Out of interest, will this finding have any impact on the effectiveness of present day cryptography?
Probably not. While it is possible that this research could lead to results in speeding up factoring, a faster algorithm for determining whether a number is prime is not going to compromise the security of RSA.
Your RSA key pair is derived from 2 large primes. The way we generate keys is to randomly test large random numbers to see if any of them are prime. Ergo, we must already have an efficient formula for determining if a number is prime or not.
FYI, the most commonly used approach is a test based on Fermat's little theorem (or the closely related Euler criterion). It doesn't actually tell you that a number is prime, but a composite number will usually fail it for a random base, so if you run it enough times with different bases, you can be 99.99999% sure that a number is prime. However, a small percentage of composite numbers are "pseudoprimes" -- numbers that are not prime but which pass the test for some bases anyway. Therefore, after you discover a candidate prime, you should use a different (slower) test to double-check.
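A minimal sketch of this style of test (here the simple Fermat variant; the function name and round count are my own choices, not anything from the thread):

```python
import random

# Fermat-style probabilistic primality test: a sketch.
# A prime p satisfies a^(p-1) == 1 (mod p) for every base a not
# divisible by p (Fermat's little theorem). Composites usually fail
# this for a random base, though "pseudoprimes" (e.g. Carmichael
# numbers) can slip through, hence the double-check advice above.

def probably_prime(n, rounds=20):
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False        # definitely composite
    return True                 # prime with high probability

print(probably_prime(101))   # a prime: True
print(probably_prime(221))   # 221 = 13 * 17: almost certainly caught
```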
Since this is fairly common knowledge among geeks who use encryption, I'm somewhat surprised that so many people here jumped to the same conclusion you did.
-a
helps RSA key generation - NOT factoring (Score:1, Informative)
Generating an RSA key pair requires choosing two very large numbers and making sure that they are both primes. This is very time consuming, and the best current algorithms only tell you that this number is a prime with 99.99...% probability (exact value depends on the number of iterations).
An efficient algorithm that was not probabilistic would be a very good thing.
Re:What's Polynomial Time? (Score:2, Informative)
ie, it can be completed by a nondeterministic machine in polynomial time. The main problem with NP algorithms is that there aren't any nondeterministic machines around. (A nondeterministic machine can attempt all paths to try to reach a conclusion at once whereas a deterministic machine can only try one at a time.)
We already knew that... (Score:5, Informative)
Now, if you have a number n, you run this algorithm, say 20*log(n) times. If the algorithm says it is prime on all executions, you know damn sure it is. If it says it isn't, you are sure it isn't. There is a ridiculously tiny probability that the algorithm claims a number is prime in all executions when it is still not prime. This probability is so small that it can be essentially ignored. Now, random bits are cheap nowadays, so this is quite satisfactory. This is in fact the algorithm that turned the RSA crypto system into a practical and useful algorithm, because suddenly finding primes became easy.
To break RSA, and become really famous, one has to come up with a polynomial time algorithm for factoring. It might even be that RSA can be broken without factoring, but this is still an open question (I think).
Ahh, and BTW. Polynomial time means polynomial time in the size of the input. So if the number is n, the size of the input is O(log(n)), and the running time needs to be O( (log(n))^(O(1)) ).
Ok. End of boredom.
Primality testing has never been hard (Score:5, Informative)
As far as practice goes, it's fairly irrelevant. Probabilistic primality testing can be done in a constant number of test rounds with bounded error.
The Miller-Rabin test [mit.edu] will tell you if a number is prime with at most 1/4 probability of error. That sounds ridiculous, but the catch is that you can iterate it using a random parameter. Do the test twice and your probability drops to 1/16. Do it fifteen times and your chances of being wrong are about one billionth.
If you're truly paranoid, do it 50 times. That'll bring the error rate of the algorithm magnitudes below the error rate of your hardware.
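A sketch of the iterated test described above (a standard Miller-Rabin layout; the details here are my rendering, not the linked notes):

```python
import random

# Miller-Rabin: a sketch. Each round errs (declares a composite
# "probably prime") with probability at most 1/4, so k independent
# rounds push the error below (1/4)^k -- about one billionth at k=15.

def miller_rabin(n, rounds=15):
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a is a witness: n is composite
    return True                 # probably prime

print(miller_rabin(2 ** 61 - 1))   # a Mersenne prime
print(miller_rabin(561))           # Carmichael number, still caught
```

Note that unlike the plain Fermat test, Miller-Rabin has no analogue of Carmichael numbers: every odd composite has plenty of witnesses.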
---
Dum de dum.
Re:arg. I did it again (Score:2, Informative)
Slashdot removes left angle brackets in an attempt to stop abuse. Since it still lets raw right angle brackets through for old style quoting (which I prefer), the left ones have to go on unverified tags.
To display a left angle bracket despite that you'll need to type its ISO code, which renders the bracket unusable for tags (which is a good thing).
ie: < is entered with this: &lt;
Just something to note down FFR. Oh, and can be handy if you want to try to slip through some important, on-topic simple tables or ascii art. Sometimes. But not lately.
- o
<
\__/
MrByte's 1 Page P, NP, NP-Complete Primer (Score:2, Informative)
Knuth (Score:0, Informative)
Re:Knuth (Score:3, Informative)
Re:We already knew that... (Score:5, Informative)
So, now there's the matter of why it works. Here we go:
So when p or q are not prime, phi(n) != (p - 1)(q - 1), so when you calculate d, you'll get something that doesn't invert the encrypting process (because it's not a multiplicative inverse mod the real phi(n)), so you'll probably get junk when you decipher.
I don't really feel like doing a detailed analysis of the algorithm, but I imagine that this isn't used as a primality test because its running time probably isn't polynomial.
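A toy illustration of that failure mode (tiny, insecure numbers chosen by me for the demo; `pow(e, -1, phi)` needs Python 3.8+):

```python
from math import gcd

# Toy RSA: shows that a key built from a composite "prime" breaks.
# With p = 9 (composite!), the naive phi = (p-1)(q-1) = 80 is wrong
# (the true phi(99) is 60), so d computed from it does not invert e.

def make_key(p, q, e=7):
    n = p * q
    phi = (p - 1) * (q - 1)       # only correct if p and q are truly prime
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)           # modular inverse of e (Python 3.8+)
    return n, e, d

def roundtrip(p, q, m):
    n, e, d = make_key(p, q)
    c = pow(m, e, n)              # encrypt
    return pow(c, d, n)           # decrypt

print(roundtrip(5, 11, 2))   # both prime: decrypts back to 2
print(roundtrip(9, 11, 2))   # p = 9 is composite: comes back as junk
```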
Re:Crypto repercusions? (Score:2, Informative)
-Kevin
Re:p=np? (Score:2, Informative)
(All this assuming P!=NP, or else all these distinctions collapse.)
Re:What is 'x' and how is 'q' calculated? (Score:3, Informative)
Same goes for q. As it's "small", you can afford an exponential algorithm.
Also, x is a variable; those eqns (12) really are polynomial eqns.
Re:We already knew that... (Score:2, Informative)
A slight error: phi(n) is the number of positive integers less than n which are relatively prime to n (ie. gcd(n,x)=1). Therefore, if p is a prime, it is relatively prime to all smaller positive integers, so phi(p)=p-1.
The function that tells the number of primes smaller than n is pi(n), the prime counting function.
Refs: Totient Function [wolfram.com] Prime Counting Function [wolfram.com] (MathWorld's luckily back online!)
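A brute-force sketch of the two functions, just to make the definitions concrete:

```python
from math import gcd

# phi(n): count of 1 <= k <= n with gcd(n, k) == 1 (Euler's totient).
# pi(n): count of primes <= n (the prime counting function).
# Both written naively here -- for illustration, not for big n.

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

def pi(n):
    return sum(1 for k in range(2, n + 1)
               if all(k % d for d in range(2, int(k ** 0.5) + 1)))

print(phi(7), phi(9), phi(99))   # 6 6 60 -- note phi(7) = 7 - 1
print(pi(10), pi(100))           # 4 25
```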
Size matters (Score:3, Informative)
Re:I'm dead tired (Score:3, Informative)
Re:What is 'x' and how is 'q' calculated? (Score:4, Informative)
First off, 'x' doesn't matter. The loop at the bottom checks a congruence of two polynomials over two finite rings (if I'm reading it right, the first is generated by x^r-1 and the second by the input n). Simplistically, this amounts to grinding out the coefficients of the two polynomials and verifying that the difference of the polys equals zero, modulo the ring generator. The actual 'value of x' is never used.
Second, if you check the order statistic calculation, they're assuming worst-case on factoring 'r' (they apply order r^1/2 for that factorization). They then make an assumption that O(r^1/2) = O((log n)^3), or that O(r) = O((log n)^6), which seems rather suspect (as if they knew the answer ahead of time and plugged in a recursive value for it). Nevertheless, they do go to some length to show that such an r exists, and that it requires at most O((log n)^6) iterations of the first loop to find it.
As for 'q', I think again it is determined by brute-force factoring r-1. On the one hand, r is small; on the other hand, that doesn't mean a damn thing when it comes to dealing with order statistics, which I think is also a little suspect.
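To make the coefficient-grinding concrete, here is a simplified sketch of checking one such congruence, (x + a)^n ≡ x^(n mod r) + a (mod x^r - 1, n), purely on coefficient vectors. This is my own illustration of the single-congruence check, not the paper's full algorithm:

```python
# Polynomials are coefficient lists of length r, i.e. elements of
# Z_n[x] / (x^r - 1). The "value of x" is indeed never used.

def polymul_mod(f, g, r, n):
    # multiply two polynomials modulo x^r - 1 and modulo n
    out = [0] * r
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                out[(i + j) % r] = (out[(i + j) % r] + fi * gj) % n
    return out

def polypow_mod(f, e, r, n):
    # square-and-multiply exponentiation of a polynomial
    result = [1] + [0] * (r - 1)
    base = f[:]
    while e:
        if e & 1:
            result = polymul_mod(result, base, r, n)
        base = polymul_mod(base, base, r, n)
        e >>= 1
    return result

def congruence_holds(n, a, r):
    # left side: (x + a)^n mod (x^r - 1, n)
    lhs = polypow_mod([a % n, 1] + [0] * (r - 2), n, r, n)
    # right side: x^(n mod r) + a mod (x^r - 1, n)
    rhs = [0] * r
    rhs[n % r] = (rhs[n % r] + 1) % n
    rhs[0] = (rhs[0] + a) % n
    return lhs == rhs

print(congruence_holds(7, 1, 5))   # prime n: True
print(congruence_holds(9, 1, 5))   # composite n: False
```

For prime n the congruence holds for every base a (binomial coefficients C(n,k) vanish mod n); composites typically break it.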
Re:I always thought it was in P (Score:3, Informative)
Assuming you meant "wouldn't call", division is definitely "considerable". Remember we are talking about large numbers. Try doing long division on paper for 35184535666823 divided by 4194319 (answer is 8388617) and you can see there is some work involved, even with these small numbers.
The paper method of long division is O(n^2), and it turns out it can be done more efficiently: as I understand it, you can do division in about the same number of steps as multiplication. Therefore the number of operations required to divide two n digit numbers is bounded by the best multiplication algorithm, which is O(n lg n lg lg n) (from Knuth Volume II).
That bound works out to about 57 for 10 digits and about 182 for 20 digits. You can see that doubling the number of digits here more than triples the required number of operations! Likewise 30 digits require about 6 times more operations. The operation count grows faster than the number of digits, so division gets slower and slower the more digits you have to divide.
-Kevin
Re:Oh yeah? Well... (Score:1, Informative)
YOUR NIT HAS BEEN PICKED!!!
Re:p=np? (Score:1, Informative)
But it is more likely that there exists no deterministic polynomial-time algorithm for any NP-complete problem, and so P != NP.
Re:gimps (Score:3, Informative)
Gallot"), because in those cases P+/-1 is _completely_ factorable and therefore a single positive PRP test, using Pocklington's theorem, or the Lucas analogue thereof ( http://primepages.org/ ), proves absolutely the primality of the candidate.
This new test is for testing numbers _of no special form_.
Phil
Re:What's Polynomial Time? (Score:3, Informative)
I was trying to keep it simple because the original poster said that he didn't know anything about theoretical CS.
Re:Cryptography (Score:1, Informative)