Turns out, Primes are in P
zorba1 writes "Manindra Agrawal et al. of the Indian Institute of Technology Kanpur CS department have released a most interesting paper today. It presents an algorithm that determines whether a number is prime in polynomial time. While I haven't gone through the presentation in detail, it looks like a promising, albeit non-optimized, solution to the famous PRIMES in P problem."
for the sake of our eyes (Score:5, Informative)
http://www.cse.iitk.ac.in/primality.ps [iitk.ac.in]
Re:for the sake of our eyes (Score:2, Interesting)
Re:for the sake of our eyes (Score:2)
Either use package txfonts or maybe pxfonts (uses only PostScript fonts), or mathptmx (keeps Computer Modern fonts for math, but this doesn't always work), or install the vectorial T1 version of the ECM fonts and make sure those are used instead of the bitmap ones.
Re:for the sake of our eyes [OT] (Score:2)
On Mac OS X it looks great (I'm guessing they use the Mac's native font rendering), but on my XP box it looks like something from the mid-'80s compared to the ClearType that everything else is rendered with.
don't get me started on the ActiveX control...
Re:for the sake of our eyes (Score:2)
http://www.cse.iitk.ac.in/news/primality.ps [iitk.ac.in]
What's Polynomial Time? (Score:2, Interesting)
Steve
Re:What's Polynomial Time? (Score:2)
T a)
for some value (can be any finite value) of k and a.
Oops. HTML (Score:2)
T N ^ k + a
for some values (can be any finite value) of k and a.
Basically, it's a statement about how well an algorithm scales to REALLY large numbers.
arg. I did it again (Score:3, Informative)
They're saying the the time T necessary to determine whether or not an N digit number is prime satisfies this equation:
T is less than N ^ k + a
for some values (can be any finite value) of k and a.
Basically, it's a statement about how well an algorithm scales to REALLY large numbers.
Re:arg. I did it again (Score:2, Informative)
Slashdot removes left angle brackets in an attempt to stop abuse. Since it still lets raw right angle brackets through for old style quoting (which I prefer), the left ones have to go on unverified tags.
To display a left angle bracket despite that you'll need to type its ISO code, which renders the bracket unusable for tags (which is a good thing).
ie: < is entered with this: &lt;
Just something to note down FFR. Oh, and can be handy if you want to try to slip through some important, on-topic simple tables or ascii art. Sometimes. But not lately.
- o
<
\__/
Re:arg. I did it again (Score:2)
Re:arg. I did it again (Score:2, Offtopic)
Since slashdot doesn't seem to be doing that I feel like the mode shouldn't be called "Plain Old Text".
Ah well. I'm just bitter because I screwed up twice in a row.
Re:What's Polynomial Time? (Score:5, Informative)
To give an example, say you've got a list of numbers and you want to know the sum. That can be done in linear time - ie, the time taken is proportional to the length of the list of numbers. The size of the problem (n) is defined by the length of the list and the time taken (T) is as follows: T = c1 * n + c0, where c1 and c0 are some fixed constants. The formula for T is a polynomial, and so the problem "LIST-SUM" is in polynomial time. It would still be in polynomial time if the formula for T was a polynomial with n^2, n^3, n^50 terms in it, or even terms like n^1.5 (because as n grows very large an n^1.5 term will always be smaller than an n^2 term).
Showing you an example of something outside polynomial time is a little more difficult, but some standard examples are SAT (the satisfiability problem) or the travelling-salesman problem, which you can read about in any book on the subject.
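The LIST-SUM example above, as a minimal Python sketch (illustrative only):

```python
# One addition per element plus fixed overhead: T = c1*n + c0,
# so the time taken is linear in the length of the list.
def list_sum(nums):
    total = 0            # the fixed (c0-ish) setup
    for x in nums:       # the loop body runs exactly len(nums) times
        total += x
    return total

print(list_sum([2, 3, 5, 7]))  # -> 17
```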
Re:What's Polynomial Time? (Score:3, Informative)
I was trying to keep it simple because the original poster said that he didn't know anything about theoretical CS.
Re:What's Polynomial Time? (Score:2, Informative)
an NP-complete (NP=non-polynomial) problem is one that can be solved, but takes about 8*age_of_universe time to solve. To get around this, approximation algorithms are used, but these can never give a 100% guarantee of finding the correct solution, nor may provide the same solution if it were to execute on the same data twice.
a polynomial-time problem is one that can be solved within our lifetimes, guarantee 100% accuracy, and can always generate the same solution for the same data.
there's a LOT more to it. The book Intro to Algorithms has a good chapter on the topic of NP-completeness, which will explain the intricate and gory details.
Re:What's Polynomial Time? (Score:2, Informative)
ie, it can be completed by a nondeterministic machine in polynomial time. The main problem with NP algorithms is that there aren't any nondeterminisitic machines around. (A nondeterministic machine can attempt all paths to try to reach a conclusion at once whereas a deterministic machine can only try one at a time.)
Ignore parent (Score:2)
An NP-complete problem does not take 8 times the age of the universe to solve. That completely misses the point. Every P or NP problem can be expressed in terms of a variable n, which represents the input size. There are many practical problems where the best-known polynomial-time algorithm is slower than the best exponential-time algorithm for typical values of n. However, computational theory tells us that as n increases, the polynomial algorithm will eventually beat the exponential one.
-a
Re:What's Polynomial Time? (Score:4, Funny)
MC Polynomial.
And he sang a song..."P can't Touch This". Before the drum break of this song, Poly sang:
It's a prime, because you know
P can't touch this
P can't touch this
Stop...Polynomial Time...
Thus giving rise to a branch of mathematical order functions denoting the complexity of a problem...
Either that or it's defined pretty well here [ic.ac.uk].
Re:What's Polynomial Time? (Score:5, Funny)
The island has a number of strange customs.
1) All the women on this island are called 'Polly' in reverence to the island's god, Polynose.
2) The men of the island are very philosophical (maybe because all the women are called Polly, so it gets very confusing). They spend most of their time poring over mathematical problems.
3) The island has strict laws on the use of technology. Telephones are not allowed, aircraft are not allowed to land there, in fact the only way to reach the island is by boat, nevertheless it is very popular with tourists.
4) It is considered offensive to Polynose for anyone who is not a Polynosian woman (a Polly) to prepare food. Since the island is popular with holiday makers though, all of the enterprising Pollys have opened small restaurants called 'Polly-meal-time'.
The men of the island, in order to discuss their mathematical musings, recently opened a cafe. To distinguish this from all the restaurants, they named it 'Polly-no-meal-time'.
The article reports that recently a boatload of mathematicians visited Polynose, and told the island's men how to check if a number is prime.
Thus the headline 'Primes can now be solved in Polly-no-meal-time'.
Crypto repercussions? (Score:2, Interesting)
While that may not be thrilling at first, let's use the RSA contest for money as an example. We get a 1024-bit number containing 200 digits in decimal form, which is the product of exactly two prime numbers. We know then that:
1. We only need to find one prime to easily find the other.
2. The digits in the factors can total no more than 200 digits.
3. One of the factors contains less than 100 digits.
Start at 10^100 and count down using this algorithm, and you'll find it in P time instead of NP time. It'll still take forever, literally and figuratively, but wouldn't it take significantly less time than before?
Toodles
I don't think so. (Score:2)
Re:Crypto repercussions? (Score:2)
Ah, no. First, note that a 2^1024-big number has more than 300 decimal digits, and so a 2^512-big number has more than 150. Then, even if primality testing took only 1 operation, we'd still need to perform something like 2^511 operations by your method. At 10^24 (one trillion trillion; unfathomably many) operations per second, this'd *still* take 10^112 times longer than the estimated lifetime of the universe (1.5*10^10 yrs) to complete!
There are, however, faster ways of factorization than testing all the numbers in (1..sqrt(N)) to see if they are factors of N. They are not noted in (or relevant to) the paper mentioned by this article.
[nb: See other comments for why this is, in *practical* use, not such a big improvement on Miller-Rabin and other randomized methods which have been known for decades.]
Please STOP! (Score:4, Insightful)
It's the Sieve of Eratosthenes. A number n is of size log(n). This is a deterministic algorithm; why bring up NP? What is the time complexity of division? And here's a hint: you start with n-digit (n=100) numbers and present an algorithm that runs in time 10^n. This is in P?
Has anyone actually read the paper? The algorithm is outlined, with a complexity analysis. Don't forget, P-time doesn't mean usable.
Re:Crypto repercussions? (Score:2, Informative)
-Kevin
Re:Crypto repercussions? (Score:2)
The answer for 10 and 100 is of course 89.
If this turns out to be true... (Score:2, Interesting)
Re:If this turns out to be true... (Score:2)
Factoring might still be NP (Score:5, Informative)
In fact, for schemes like RSA -- where the key is the product of two large primes -- we already know that the number is composite, by definition, so a more efficient primality test doesn't give us any new information.
Cheers,
IT
Re:Factoring might still be NP (Score:2)
So perhaps this algorithm makes RSA, DSA, etc. even stronger, because it will be easier to guarantee that the factors are prime instead of assuming it with 99.999999999999% probability.
What is 'x' and how is 'q' calculated? (Score:4, Interesting)
Re:What is 'x' and how is 'q' calculated? (Score:3, Informative)
Same goes for q: as it's "small" you can afford an exponential algorithm.
Also, x is a variable; those equations (12) really are polynomial equations.
Re:What is 'x' and how is 'q' calculated? (Score:4, Informative)
First off, 'x' doesn't matter. The loop at the bottom checks a congruence of two polynomials over two finite rings (if I'm reading it right, the first is generated by x^r-1 and the second by the input n). Simplistically, this amounts to grinding out the coefficients of the two polynomials and verifying that the difference of the polys equals zero, modulo the ring generator. The actual 'value of x' is never used.
Second, if you check the order statistic calculation, they're assuming worst-case on factoring 'r' (they apply order r^1/2 for that factorization). They then make an assumption that O(r^1/2) = O((log n)^3), or that O(r) = O((log n)^6), which seems rather suspect (as if they knew the answer ahead of time and plugged in a recursive value for it). Nevertheless, they do go to some length to show that such an r exists, and that it requires at most O((log n)^6) iterations of the first loop to find it.
As for 'q', I think again it is determined by brute-force factoring r-1. On the one hand, r is small; on the other hand, that doesn't mean a damn thing when it comes to dealing with order statistics, which I think is also a little suspect.
Re:What is 'x' and how is 'q' calculated? (Score:2)
Doing arithmetic modulo (x^r-1) means that as soon as you multiply your polynomials together and get any terms of degree r or higher, you can replace x^(r+d) with x^d, because x^(r+d) - x^d == 0 (mod (x^r-1))
e.g. modulo x^2-1
(ax+b)^2 == a^2x^2 + 2abx + b^2 == (2ab)x + (a^2+b^2)
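A minimal Python sketch of that reduction (assuming schoolbook multiplication of coefficient lists, with coeffs[i] holding the coefficient of x^i):

```python
# Schoolbook multiplication of two polynomials given as coefficient lists.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Reduce modulo x^r - 1: fold x^(r+d) onto x^d, i.e. index i -> i mod r.
def reduce_mod_xr_minus_1(coeffs, r):
    out = [0] * r
    for i, c in enumerate(coeffs):
        out[i % r] += c
    return out

# The worked example: (ax+b)^2 == (2ab)x + (a^2+b^2) modulo x^2 - 1.
a, b = 3, 5
square = poly_mul([b, a], [b, a])           # [b^2, 2ab, a^2] = [25, 30, 9]
print(reduce_mod_xr_minus_1(square, 2))     # -> [34, 30], i.e. (a^2+b^2) + (2ab)x
```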
FatPhil
Funny (Score:2, Informative)
Makes me wonder what this means for computer theory, but if you think about it, polynomial time can still be slow for very large n with very big powers... although not as bad as an exponential with large n (assuming you go out far enough that the exponential grows faster than the polynomial)
Kudos to the team that discovered this
I'm dead tired (Score:3, Interesting)
"Let q be the largest prime factor of r-1"
Won't getting q boost the thing back into power n complexity?
Re:I'm dead tired (Score:2)
Re:I'm dead tired (Score:3, Informative)
If you're confused by "P" and "NP".... (Score:5, Informative)
Implications (Score:5, Informative)
1) Determining if a number is prime [is 909 prime?]
2) Determining the factors of a number [what are the factors of 909?]
This article claims to be able to solve problem 1 in Polynomial time.
However, problem 2 is MUCH harder, and that is the one which will break cryptography as we know it. This article does not claim to solve problem 2, so we're safe for now.
Re:Implications (Score:2)
Here's a quick math trick that will save people a bit of time. It's always easy to tell if a number is divisible by three: just add all the digits together, and if the result is divisible by three, then so is the original number. For 909: 9 + 0 + 9 = 18 (divisible by three). Oh, and you can take it a step further (18 -> 1 + 8 = 9) if the result is still too long.
Therefore, this number showed up right away to me as being divisible by three, and quick division will show that 303 * 3 = 909.
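The digit-sum trick above, as a short Python sketch (the function name is mine):

```python
# The trick works because 10 == 1 (mod 3), so any number is congruent
# to its digit sum mod 3 (and mod 9, since 10 == 1 (mod 9) as well).
def divisible_by_3(n):
    while n >= 10:
        n = sum(int(d) for d in str(n))   # "take it a step further"
    return n % 3 == 0

print(divisible_by_3(909))  # -> True (9 + 0 + 9 = 18, then 1 + 8 = 9)
```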
Re:Implications (Score:2)
Not all public-key cryptography is based on the difficulty of factoring numbers. There are a number of other one-way functions (such as elliptic curves) that are being used in cryptography. So I wouldn't say it'll break crypto "as we know it", but it would certainly freak some people out.
Discrete log also unaffected (Score:2)
Only ... algorithms like RSA will be broken. Symmetric-key cryptosystems will be unaffected.
As far as I know, public key crypto based on a discrete logarithm [google.com] will be unaffected as well.
Impact on Crypto is Postive (Score:2, Insightful)
Prime based encryption schemes (RSA, etc) are based on the complexity of factoring large numbers into primes, not the ability to determine if a number is prime or not.
Incidentally, many implementations make use of pseudo-primes rather than actual primes, as the ability to validate primality is (or was) cumbersome. This should give implementations the ability to ensure key pair values are actual primes, which would strengthen the resulting encryption.
Nothing revolutionary (Score:3, Interesting)
All this would mean is that now instead of verifying that a number is prime with a (1-10^-10) level certainty in polynomial time, it could now be done with certainty, so there would be no revolution in cryptology, as some other posters suggest.
--Sam L-L
We already knew that... (Score:5, Informative)
Now, if you have a number n, you run this algorithm, say, 20*log(n) times. If the algorithm says on all executions that it is prime, you can be damn sure it is. If it says it isn't, you are sure it isn't. There is a ridiculously tiny probability that the algorithm claims primality in all executions while the number is still not prime. This probability is so small that it can be essentially ignored. Now, random bits are cheap nowadays, so this is quite satisfactory. This is in fact the algorithm that turned the RSA crypto system into a practical and useful algorithm, because suddenly finding primes became easy.
To break RSA, and become really famous, one has to come up with a polynomial time algorithm for factoring. It might even be that RSA can be broken without factoring, but this is still an open question (I think).
Ahh, and BTW. Polynomial time means polynomial time in the size of the input. So if the number is n, the size of the input is O(log(n)), and the running time needs to be O( (log(n))^(O(1)) ).
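To see why that distinction matters, a quick illustrative check (the 1025-bit example number is arbitrary):

```python
# "Polynomial time" means polynomial in the *size* of the input (the
# number of bits, ~log2(n)), not in the value n itself. Compare a
# (log n)^12-style cost to the ~sqrt(n) divisions of naive trial division:
n = 2 ** 1024                    # a 1025-bit number
bits = n.bit_length()            # input size: 1025
poly_cost = bits ** 12           # polynomial in the input size
trial_cost = 2 ** 512            # ~sqrt(n) candidate divisors
print(poly_cost < trial_cost)    # True, by more than 100 orders of magnitude
```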
Ok. End of boredom.
Re:We already knew that... (Score:2)
Re:We already knew that... (Score:5, Informative)
So, now there's the matter of why it works. Here we go:
So when p or q are not prime, phi(n) != (p - 1)(q - 1), so when you calculate d, you'll get something that doesn't negate the encrypting process (because its not a multiplicative inverse mod the real phi(n)), so you'll probably get junk when you decipher.
I don't really feel like doing a detailed analysis of the algorithm, but I imagine that this isn't used as a primality test because its running time probably isn't polynomial.
Re:We already knew that... (Score:2, Informative)
A slight error: phi(n) is the number of positive integers less than n, which are relatively prime to n (ie. gcd(n,x)=1). Therefore, if p is a prime, it is also relatively prime to all smaller integers, so phi(p)=p-1.
The function that tells the number of primes smaller than n is pi(n), the prime counting function.
Refs: Totient Function [wolfram.com] Prime Counting Function [wolfram.com] (MathWorld's luckily back online!)
Re:We already knew that... (Score:2)
When you build a key with three primes you have one public key, one private key and two that will work for the hackers.
When you build a key out of four primes you end up with the two keys you expect and 6 or 9 others.
You can do this by building your own RSA like system with 32 bit keys and plug in some small random even "prime" and see how many other keys work.
Not all keys work but some of the combos will.
Re:We already knew that... (Score:2)
For testing why things break when you can factor a supposedly prime number, an even number will work as well as any other, and it sure speeds up the factoring if you have to do it by hand.
Knuth has some interesting bits on this in his books.
Re:We already knew that... (Score:2)
Rabin's test says that if there is no witness below 2ln^2(n), then the number is certainly prime.
The repeated-by-a-fixed-small-number MR PRP-test is still a compositeness test, or a Probable Primality test, and does not give a certain answer. A PRP is not _proven_ prime.
See professor Caldwell's Prime Pages at:
http://primepages.org/
FatPhil
Re:We already knew that... (Score:2)
In any case, for cryptography you would probably run the randomized algorithm on a bunch of numbers until you found one that is prime with high probability, then run this to verify that it is prime with higher certainty. The certainty sorta depends on the length of the proof for this algorithm: for a sufficiently complex proof there is a non-zero probability that the proof itself is not correct, while the probabilistic algorithms often have simple proofs that we can be more certain of.
Re:We already knew that... (Score:2)
Nope. You need to learn what O() means.
He never said log2(n). He said log(n) without a base. O() measures what happens when n gets arbitrarily large, and the statements are true for ANY fixed base. That's why no base is given; it is irrelevant. The difference between log2(n) and log10(n) is a fixed factor of about 3.3. The O() of a problem tells you whether it takes you microseconds or gigayears to solve it. There really isn't any meaningful difference between 1 microsecond and 3.3 microseconds, or between 1 gigayear and 3.3 gigayears.
And here's the kicker: Nothing prevents me from using a base equal to the number itself.
As I said, O() measures what happens when n gets arbitrarily large. If you are using an arbitrarily large base then what you've got is a quantum computer. This problem and the harder problem of actually finding the factors have already been proven solvable in polynomial time for quantum computers.
-
Primality testing has never been hard (Score:5, Informative)
As far as practice, it's fairly irrelevant. Probabilistic primality testing can be done in constant time with bounded error.
The Miller-Rabin test [mit.edu] will tell you if a number is prime with at most 1/4 probability of error. That sounds ridiculous, but the catch is that you can iterate it using a random parameter. Do the test twice and your probability drops to 1/16. Do it fifteen times and your chances of being wrong are about one billionth.
If you're truly paranoid, do it 50 times. That'll bring the error rate of the algorithm magnitudes below the error rate of your hardware.
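The iterate-with-a-random-parameter idea described above, as a short Python sketch (illustrative only, not production code; the small-prime shortcut at the top is an added convenience):

```python
import random

def miller_rabin(n, rounds=15):
    """Probable-prime test: error probability at most 4**-rounds for composite n."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):           # quick shortcut for tiny factors
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                  # write n - 1 = d * 2^s with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):          # each round draws a fresh random base
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # found a witness: n is certainly composite
    return True                      # survived every round: almost certainly prime

print(miller_rabin(2**61 - 1))  # -> True (a Mersenne prime)
print(miller_rabin(561))        # -> False (the smallest Carmichael number)
```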
---
Dum de dum.
Re:Primality testing has never been hard (Score:2)
Depending upon which version of the Pentium CPU you're using, you can accomplish that with 2 or 3 steps.
Old joke, but couldn't resist
-
Re:Primality testing has never been hard (Score:2)
Just consider how fast it is on some of the better-known commercial operating systems, because even that 1/4 error probability is magnitudes below the error rate of your platform.
Re:Primality testing has never been hard (Score:2)
What nobody knows though is if error is good or bad. If Miller-Rabin says a number is prime, and you use it for an application that requires a prime number, the application will still work. At least that is true for all the applications I know of that require a prime number (Public Key Encryption), there are probably others.
I've speculated that the existence of non-primes that work is one of the things that makes public key encryption hard enough to be useful. I can't prove it, though, and offer it only as an interesting (but likely wrong) point to consider.
MrByte's 1 Page P, NP, NP-Complete Primer (Score:2, Informative)
O(num_bits**12) time estimates (Score:5, Interesting)
We give a deterministic O((log n)**12) time algorithm for testing whether a number is prime.
[Sorry, the Slashdot filter does not allow me to superscript the 12.]
The algorithm takes O(log2(n)**12) time, where n is the number being tested. If we optimistically assume that this algorithm can test the primality of a 16-bit number in one microsecond, then here is how long it would take to test the primality of some larger numbers.
I don't know what a realistic base time for this algorithm really would be, and I don't know where the crossover point against existing exponential-time deterministic primality testing algorithms would be, but at least this provides a sense of how log2(n)**12 grows.
bad in math? (Score:2, Redundant)
log2(32) = 5
log2(64) = 6
log2(128) = 7
log2(256) = 8
by your assumption a*log2(16)^12 + b = 1 ms
for simplicity, let's ignore the constant b.
then:
a*log2(16)^12 = a * 4^12 = 1 ms (by assumption)
a*log2(32)^12 = a * 5^12 = 14.5 ms
a*log2(64)^12 = a * 6^12 = 129.75 ms
a*log2(256)^12 = a * 8^12 = 4096 ms
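The arithmetic above can be reproduced with a tiny sketch (calibrated, as in the post, so that n = 16 costs 1 ms and the constant b is ignored):

```python
import math

# Cost model: a * log2(n)**12, scaled so the n = 16 baseline is 1 ms.
def est_ms(n, base_n=16, base_ms=1.0):
    return base_ms * (math.log2(n) / math.log2(base_n)) ** 12

for n in (32, 64, 256):
    print(n, round(est_ms(n), 2))  # 32 -> 14.55, 64 -> 129.75, 256 -> 4096.0
```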
Re:bad in math? (Score:3, Insightful)
Size matters (Score:3, Informative)
Re:Size matters (Score:2)
real world numbers it will be much lower.
Secondly, it's not a factoring algorithm, so your final comment is a bit odd.
Phil
It would be interesting... (Score:2, Funny)
This fact is not new... (Score:2, Interesting)
It basically worked by finding out whether numbers were composite, which of course can be "inverted" to tell you whether a number is prime.
It was very well suited to parallel implementations, too.
Re:I always thought is was in P (Score:5, Informative)
mod parent up (Score:2)
Re:I always thought is was in P (Score:2)
Pardon my French, but isn't sqrt(n) better than log(n)^12?
Re:I always thought is was in P (Score:2)
Yet, sqrt(n) is polynomial in the value n, but not in the number of digits, which is what "polynomial time" is measured against.
Re:I always thought is was in P (Score:2)
Re:I always thought is was in P (Score:3, Insightful)
Re:I always thought is was in P (Score:3, Informative)
Assuming you meant "wouldn't call", division is definitely "considerable". Remember we are talking about large numbers. Try doing long division on paper for 35184535666823 divided by 4194319 (answer is 8388617) and you can see there is some work involved, even with these small numbers.
The paper method of long division is O(n^2) and it turns out it can be done more efficiently: As I understand it, you can do division in the steps required for multiplication. Therefore the number of operations required to divide two n digit numbers is bounded by the best multiplication which is O(n lg n lg lg n) (from Knuth Volume II).
This is about 57 for 10 digits and about 182 for 20 digits. You can see that doubling the number of digits here more than triples the required number of operations to compute this result! Likewise 30 digits require about 6 times more operations. You can see that the "n times" grows faster than the number of digits. Thus, division gets slower and slower the more digits you have to divide.
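The figures quoted above can be checked with a one-liner (constants ignored, as in Knuth's bound):

```python
import math

# Knuth's O(n lg n lg lg n) multiplication bound for an n-digit operand,
# with all constant factors dropped.
def mult_ops(n):
    return n * math.log2(n) * math.log2(math.log2(n))

print(round(mult_ops(10)))  # roughly 57-58 "operations" for 10 digits
print(round(mult_ops(20)))  # roughly 182 for 20 digits
```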
-Kevin
Re:Nobel Prize Time (Score:2, Informative)
This would be more in line for a Fields Medal than a Nobel Prize.
Re:Nobel Prize Time (Score:2)
Re:Nobel Prize Time (Score:2)
The best ECPP implementation out there, Marcel Martin's 'Primo', seems to behave as if it has exponent ~4.5-5.
ECPP is non-deterministic though. But it looks like this one is too.
So this looks as if it may be worse than the state of the art.
I'll wait for Lenstra and Pomerance to say their piece, though, before making my mind up.
FatPhil
Re:Nobel Prize Time (Score:2)
However, word on the grapevine I have access to says that it seems that Hendrik Lenstra (not to be confused with Arjen) has already declared that he believes the proof to be correct and elegant.
And that's about as high a recommendation as it gets.
I'll try to knock up an implementation of his algorithm some time soon, and see what the practical big-Oh looks like.
Phil
Re:Big consequences related to encryption (Score:5, Funny)
Well, encryption based on the multiplication of large primes, anyway.
Yeah... that step in key generation where you check whether a candidate key is prime or not will now be performed with 100% confidence instead of that annoying 99.999999999999999% confidence we used to have.
-a
Re:Big consequences related to encryption (Score:2, Insightful)
This may seem like a strange question, but isn't a non-prime that passes the 99.99999999999% check just as good as a prime in encryption? I mean, seriously, is anyone really going to notice it's not prime? Sure, they could accidentally stumble on the 'wrong' factors while trying to decode it, but, seriously, halving the time to decode your message isn't a huge mistake, considering we're talking on the order of centuries at least. ;)
Re:Big consequences related to encryption (Score:2)
Re:Big consequences related to encryption (Score:2, Interesting)
Consider a simple public key encryption algorithm based on the fact proved in any beginning number theory book that for primes p, q
a^x = a (mod pq) if x = 1 (mod m)
where m = (p - 1)(q - 1)
Now choose your favorite number f and use Euclidean algorithm to efficiently find a number g such that
fg = 1 (mod m)
You may have to try another value of f if the Euclidean algorithm terminates before reaching 1, but it won't take many guesses. Now publish the number f and the modulus pq as your public key and keep g private.
Someone sends text t to you by sending t^f (mod pq).
Now you just raise that message to the power g and reduce mod pq to recover the original text. (This follows immediately by combining the above statements.)
Finally, I'll get to the point. This algorithm is simply busted if p and q are not prime, because t^fg will not equal t mod pq unless you are very lucky. In fact, if you want to add a bunch of nines to your percentage certainty, just encrypt and decrypt a sample message text and verify that it works.
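For concreteness, here is a toy round trip of the scheme just described, using the textbook tiny primes p = 61, q = 53 (illustrative only; the names f, g, m follow the post, with the modular arithmetic for messages done mod pq):

```python
p, q = 61, 53
n, m = p * q, (p - 1) * (q - 1)      # n = pq = 3233, m = 3120

f = 17                               # public exponent; gcd(f, m) must be 1
g = pow(f, -1, m)                    # private exponent: f*g == 1 (mod m); Python 3.8+

t = 65                               # a sample "text", 0 <= t < n
c = pow(t, f, n)                     # encrypt: t^f mod n
assert pow(c, g, n) == t             # decrypt: c^g mod n recovers t
print("round trip OK")
```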
Re:Big consequences related to encryption (Score:2)
Re:Big consequences related to encryption (Score:3, Interesting)
actually test to see if this exponentiation procedure works.
RSA does work with Carmichael numbers instead of primes, for example, because
a^c == a (mod c) for all a s.t. gcd(a,c) == 1, c a Carmichael number
Try it - it works!
Phil
Re:Big consequences related to encryption (Score:2)
No, most messages will decode incorrectly.
So, if the key obviously doesn't work, then you can assume that one of your candidate primes isn't actually prime and try another.
In fact, following that argument through, you could have a test suite of various messages to encode. If they work, you can be reasonably sure the number is prime. Obviously, with this approach you'd need to be reasonably sure you had a prime, otherwise you'd waste a lot of time.
Disclaimer: I dropped out of Uni level Maths before I got to anything particularly hard, so most of this is over my head.
Re:Big consequences related to encryption (Score:5, Insightful)
Of course, we also have a polynomial-time algorithm for prime factorization (Shor's Algorithm). It's just that it requires a quantum computer, which is difficult to build. So far, the biggest number factored is 15... 1024 bit keys will be safe for a while yet. I believe it's 15 - 20 years until they're broken, if Moore's Law holds for quantum computers in terms of maximum number of qubits possible (so far, it roughly has, but then, we're only at about 7 qubits).
Re:Big consequences related to encryption (Score:2)
Hmm... you know, I've been thinking... if anyone actually saves some of those packets floating around on the 'net, it may be possible to decrypt ALL of them in that time frame. In other words, even if it's encrypted, be aware that it may not be secure for the remainder of your life, perhaps much, much less. I wonder if I'll have the same credit card numbers in 15 years. Alternatively, I wonder if anybody will think about this more than a year before it's possible.
Another interesting case where it's faster to wait for the hardware than to start chugging away with what we've got right now.
Re:Big consequences related to encryption (Score:3, Insightful)
The question is (Score:3, Funny)
Re:If this is true... (Score:5, Informative)
Primality testing and factorization are not one and the same. It is possible to know that a number is not prime without knowing its factors. Breaking encryption requires factoring the product of two huge primes (it is already known that the number you're trying to factor is NOT prime, so Primes being in P is more or less useless by itself for this particular application), and factorization has yet to be shown to be in P.
Re:Cryptography (Score:3, Informative)
Out of interest, will this finding have any impact on the effectiveness of present day cryptography?
Probably not. While it is possible that this research could lead to results in speeding up factoring, a faster algorithm for determining whether a number is prime is not going to compromise the security of RSA.
Your RSA key pair is derived from 2 large primes. The way we generate keys is to randomly test large random numbers to see if any of them are prime. Ergo, we must already have an efficient formula for determining if a number is prime or not.
FYI, the most commonly used algorithm is Euler's formula. Euler's formula doesn't actually tell you if a number is prime, but it will usually give a non-zero output if the number is not prime, so if you run it enough times with different inputs, you can be 99.99999% sure that a number is prime. However, a small percentage of numbers are "pseudoprimes" -- numbers that are not prime but which will also satisfy Euler's formula. Therefore, after you discover a candidate prime, you should use a different (slower) formula to double-check.
Since this is fairly common knowledge among geeks who use encryption, I'm somewhat surprised that so many people here jumped to the same conclusion you did.
-a
Re:Oh yeah? Well... (Score:3, Funny)
Aha... is that based on a precomputed sieve with 10^100 elements, by any chance? ;-)
(it's googol, by the way. Google.com is obviously not prime, based on the fact that they haven't IPO'd yet)
Re:Oh yeah? Well... (Score:2)
`www.google.com'. Google is highly esteemed among hackers for its significance ranking system, which is so uncannily effective that many users consider it to have rendered other search engines effectively irrelevant. The name `google' has additional flavor for hackers because most know that it was copied from a mathematical term for ten to the hundredth power, famously first uttered by a mathematician's infant child.
---------
googol
n : a cardinal number represented as 1 followed by 100 zeros
(ten raised to the power of a hundred)
There is a HUGE difference between the two.
Re:Knuth (Score:3, Informative)
Re:Knuth (Score:2)
Re:p=np? (Score:3, Interesting)
Re:p=np? (Score:2, Funny)
Re:p=np? (Score:2)
Re:p=np? (Score:2, Informative)
(All this assuming P!=NP, or else all these distinctions collapse.)
Re:helps RSA key generation - NOT factoring (Score:2)
relies on also works with pseudoprimes (that's what makes them pseudoprimes: the order-counting gives the same result as if it were prime).
_However_, RSA using a pseudoprime is easier to factor using the factor-finding algorithms (but not using the composite-splitting algorithms).
Phil
Re:Why does polynomial time matter? (Score:2)
matter and energy in the universe.
A P algorithm that can never be performed due to its order or constant factor has no practical use, only theoretical interest.
Theoretical interest is still very interesting.
Practical presses my buttons more, though.
Phil
Re:gimps (Score:3, Informative)
Gallot"), because in those case P+/-1 is _completely_ factorable and
therefore a single positive PRP test, using Pocklington's theorem, or the
Lucas analogue thereof ( http://primepages.org/ ), proves absolutely the
primality of the candidate.
This new test is for testing numbers _of no special form_.
Phil