Turns out, Primes are in P

zorba1 writes "Manindra Agrawal et al. of the Indian Institute of Technology Kanpur CS department have released a most interesting paper today. It presents an algorithm that determines whether a number is prime or not in polynomial time. While I haven't gone through the presentation in detail, it looks like a promising, albeit non-optimized, solution for the famous PRIMES in P problem."
  • by fatmav ( 148996 ) on Wednesday August 07, 2002 @12:19AM (#4023221) Homepage
    the ps version looks much better:
    http://www.cse.iitk.ac.in/primality.ps [iitk.ac.in]
  • I don't know anything about theoretical CS. What's polynomial time?

    Steve
    • They're saying that the time T necessary to determine whether or not an N digit number is prime satisfies this equation:

      T < N ^ k + a

      for some values (can be any finite value) of k and a.
      • They're saying that the time T necessary to determine whether or not an N digit number is prime satisfies this equation:

        T < N ^ k + a

        for some values (can be any finite value) of k and a.

        Basically, it's a statement about how well an algorithm scales to REALLY large numbers.
        • arg. I did it again (Score:3, Informative)

          by orz ( 88387 )
          hm... I'm not sure why it removes comparison symbols when set to plain text... oh well, I wrote out "is less than" this time

          They're saying that the time T necessary to determine whether or not an N digit number is prime satisfies this equation:

          T is less than N ^ k + a

          for some values (can be any finite value) of k and a.

          Basically, it's a statement about how well an algorithm scales to REALLY large numbers.

          • by shepd ( 155729 )
            >I'm not sure why it removes comparison symbols when set to plain text...

            Slashdot removes left angle brackets in an attempt to stop abuse. Since it still lets raw right angle brackets through for old style quoting (which I prefer), the left ones have to go on unverified tags.

            To display a left angle bracket despite that you'll need to type its ISO code, which renders the bracket unusable for tags (which is a good thing).

            ie: < is entered with this: &lt;

            Just something to note down FFR. Oh, and &nbsp; can be handy if you want to try to slip through some important, on-topic simple tables or ascii art. Sometimes. But not lately.

            - o
            <
            \__/
            • You have to escape < with &lt; because otherwise the parser couldn't tell whether an element tag was beginning or you meant "less than". > isn't required to be escaped, because it is clear whether or not it is closing a tag. You also have to escape the ampersand, because otherwise the parser would have to scan ahead to know whether you were specifying an entity like &nbsp; or just meant a literal & and whatever follows.
            • by orz ( 88387 )
              But I set it to plain text! Shouldn't slashdot automatically replace my (less-than) symbol with &lt or whatever?

              Since slashdot doesn't seem to be doing that I feel like the mode shouldn't be called "Plain Old Text".

              Ah well. I'm just bitter because I screwed up twice in a row.
    • by Goonie ( 8651 ) <robert.merkel@b[ ... g ['ena' in gap]> on Wednesday August 07, 2002 @12:39AM (#4023307) Homepage
      The technical definition is kinda long and complex, but in essence it's like this. Given a problem of some size n, a polynomial time algorithm is guaranteed to give a solution in time proportional to a polynomial of n. If a polynomial-time algorithm exists that solves a problem, then the problem is said to be in polynomial time.

      To give an example, say you've got a list of numbers and you want to know the sum. That can be done in linear time - ie, the time taken is proportional to the length of the list of numbers. The size of the problem (n) is defined by the length of the list and the time taken (T) is as follows: T = c1 * n + c0, where c1 and c0 are some fixed constants. The formula for T is a polynomial, and so the problem "LIST-SUM" is in polynomial time. It would still be in polynomial time if the formula for T was a polynomial with n^2, n^3, n^50 terms in it, or even terms like n^1.5 (because as n grows very large an n^1.5 term will always be smaller than an n^2 term).

      Showing you an example of something outside polynomial time is a little more difficult, but some standard examples are SAT (the satisfiability problem) or the travelling-salesman problem, which you can read about in any book on the subject.
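
      For illustration, a minimal Python sketch of the LIST-SUM example above, counting additions to make the linear growth visible (the function name is illustrative):

        def list_sum(numbers):
            # One addition per element: T = c1 * n + c0.
            total, ops = 0, 0
            for x in numbers:
                total += x
                ops += 1
            return total, ops

        for n in (10, 100, 1000):
            _, ops = list_sum(range(n))
            print(n, ops)   # ops grows linearly with n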

    • a simple way to think of it:

      an NP-complete (NP=non-polynomial) problem is one that can be solved, but takes about 8*age_of_universe time to solve. To get around this, approximation algorithms are used, but these can never give a 100% guarantee of finding the correct solution, nor may they provide the same solution if run on the same data twice.

      a polynomial-time problem is one that can be solved within our lifetimes, with 100% accuracy guaranteed, and with the same solution generated for the same data every time.

      there's a LOT more to it. The book Intro to Algorithms has a good chapter on the topic of NP-completeness, which will explain the intricate and gory details.
      • NP stands for Nondeterministic Polynomial.

        ie, it can be completed by a nondeterministic machine in polynomial time. The main problem with NP algorithms is that there aren't any nondeterministic machines around. (A nondeterministic machine can attempt all paths to a conclusion at once, whereas a deterministic machine can only try one at a time.)
      • Ignore the parent post, since it is wrong. The previous poster did a much better job of explaining the concept of polynomial time.

        An NP-complete problem does not take 8 times the age of the universe to solve. This completely missed the point. Every P or NP problem can be expressed in terms of a variable "n", which represents the input size. There are many practical problems where the best-known P algorithm is slower than the best NP algorithm for typical values of n. However, computational theory tells us that as n increases, the P algorithm will eventually beat the NP one.

        -a
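
        A quick numeric illustration of that last point, assuming the polynomial algorithm costs n^3 steps and the exponential one costs 2^n steps (made-up constants; Python sketch):

          # Past the crossover the polynomial algorithm wins, and the gap only widens.
          for n in (5, 10, 15, 20, 30):
              poly, expo = n ** 3, 2 ** n
              print(n, poly, expo, "poly wins" if poly < expo else "expo wins")
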
    • by Kaz Riprock ( 590115 ) on Wednesday August 07, 2002 @01:05AM (#4023410)
      Back in the days of parachute pants, leg warmers, and the 80's, there was a man....a man with a rapping and dancing vision. His name...

      MC Polynomial.

      And he sang a song..."P can't Touch This". Before the drum break of this song, Poly sang:

      It's a prime, because you know
      P can't touch this
      P can't touch this
      Stop...Polynomial Time...

      Thus giving rise to a branch of mathematical order functions denoting the complexity of a problem...

      Either that or it's defined pretty well here [ic.ac.uk].

    • by Salsaman ( 141471 ) on Wednesday August 07, 2002 @10:28AM (#4024861) Homepage
      There is a remote island in the South Pacific called 'Polynosia' (not to be confused with 'Polynesia').

      The island has a number of strange customs.

      1) All the women on this island are called 'Polly' in reverence to the island's god, Polynose.

      2) The men of the island are very philosophical (maybe because all the women are called Polly, so it gets very confusing). They spend most of their time poring over mathematical problems.

      3) The island has strict laws on the use of technology. Telephones are not allowed, and aircraft are not allowed to land there; in fact the only way to reach the island is by boat. Nevertheless, it is very popular with tourists.

      4) It is considered offensive to Polynose for anyone who is not a Polynosian woman (a Polly) to prepare food. Since the island is popular with holiday makers though, all of the enterprising Pollys have opened small restaurants called 'Polly-meal-time'.

      The men of the island, in order to discuss their mathematical musings, recently opened a cafe. To distinguish this from all the restaurants, they named it 'Polly-no-meal-time'.

      The article reports that recently a boatload of mathematicians visited Polynosia, and told the island's men how to check if a number is prime.

      Thus the headline 'Primes can now be solved in Polly-no-meal-time'.

      /me ducks :-)

  • Crypto repercusions? (Score:2, Interesting)

    by Toodles ( 60042 )
    I am by no means a heavy duty math cruncher or cypherpunk, but how exactly is this going to affect number theory and factoring? I don't know of any advanced prime number search algorithms, but the Sieve of Erothenes (did I get that right?) works in NP time. (Each number is checked for even divisibility by an earlier prime, and if none is found, it is added to the list of primes; lather, rinse, repeat.) If primes can be found in P time, finding the first 50 prime numbers would take the same time as finding the first 50 three hundred digit primes.

    While that may not be thrilling at first, let's use the RSA contest for money as an example. We get a 1024 bit number containing 200 digits in decimal form, which is the product of exactly two prime numbers. We know then that:
    1. We only need to find one prime to easily find the other.
    2. The digits in the factors can total no more than 200 digits.
    3. One of the factors contains less than 100 digits.

    Start at 10^100 and count down using this algorithm, and you'll find it in P time instead of NP time. It'll still take forever, literally and figuratively, but wouldn't it take significantly less time than before?

    Toodles
    • Cracking any specific key-length is P, but cracking RSA in general remains NP, since that method requires checking a number of potential primes proportional to 2^(N/2) or so where N is the key size.
    • Start at 10^100 and count down using this algorythm, and youll find it in P time instead of NP time. It'll still take forever, literally and figuratively, but wouldn't it take significantly less time than before?

      Ah, no. First, note that a 2^1024-big number has more than 300 decimal digits, and so a 2^512-big number has more than 150. Then, even if primality testing took only 1 operation, we'd still need to perform something like 2^511 operations by your method. At 10^24 (one trillion trillion; unfathomably many) operations per second, this'd *still* take 10^112 times longer than the estimated lifetime of the universe (1.5*10^10 yrs) to complete!

      There are, however, faster ways of factorization than testing all the numbers in (1..sqrt(N)) to see if they are factors of N. They are not noted in (or relevant to) the paper mentioned by this article.

      [nb: See other comments for why this is, in *practical* use, not such a big improvement on Miller-Rabin and other randomized methods which have been known for decades.]
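
      The parent's arithmetic checks out; a few lines of Python reproduce it (the 10^24 ops/sec and 1.5*10^10-year figures are the parent's assumptions):

        SECONDS_PER_YEAR = 3.15e7
        operations = 2 ** 511               # candidates to test, roughly
        seconds = operations / 1e24         # at 10^24 operations per second
        years = seconds / SECONDS_PER_YEAR
        print(years / 1.5e10)               # ~1e112 universe lifetimes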

    • Please STOP! (Score:4, Insightful)

      by phliar ( 87116 ) on Wednesday August 07, 2002 @12:50PM (#4025718) Homepage
      Everyone: just because you start out by writing "I'm no mathematician, but..." doesn't mean you can pull crap right out of your ass. Words mean things, and when you talk about math, words mean things exactly. Please don't misuse them.

      It's the Sieve of Eratosthenes. A number n is of size log(n). This is a deterministic algorithm; why bring up NP? What is the time complexity of division? And here's a hint: you start with n-digit (n=100) numbers and present an algorithm that runs in time 10^n. This is in P?

      Has anyone actually read the paper? The algorithm is outlined, with a complexity analysis. Don't forget, P-time doesn't mean usable.

  • ...then you can bet the NSA has had this algorithm for decades.
  • by IntelliTubbie ( 29947 ) on Wednesday August 07, 2002 @12:36AM (#4023293)
    For those of you wondering about the implications for cryptography, this does not imply that composite numbers can be factored in polynomial time. This algorithm is simply a primality test -- that is, it tells you whether or not a number has any nontrivial divisors (in polynomial time), but it doesn't tell you what those divisors actually are. Determining whether a number is prime has always been considerably easier than finding the prime factorization.

    In fact, for schemes like RSA -- where the key is the product of two large primes -- we already know that the number is composite, by definition, so a more efficient primality test doesn't give us any new information.

    Cheers,
    IT
    • Key generation for algorithms like RSA already uses numbers that are prime with high probability. There are quick algorithms for that. But key generation algorithms currently don't run for several months to ensure with 100% certainty that the factors are prime.

      So perhaps this algorithm makes RSA, DSA, etc. even stronger, because it will be easier to guarantee that the factors are prime instead of assuming it with 99.999999999999% probability.

  • by Mr. Sketch ( 111112 ) <`mister.sketch' `at' `gmail.com'> on Wednesday August 07, 2002 @12:37AM (#4023297)
    From looking at the algo, I can't figure out what 'x' (or maybe it's a chi) is? Can someone help? I've looked it over, but couldn't find a definition of it. I'm also assuming that the 'if (r is prime)' line is a recursive call to itself? Also, how do we determine 'q' the 'largest prime factor of r-1' ? Another recursive call to get the factors? I must admit, I'm kind of lost by the algo, but it's still interesting.
    • No, you don't need a recursive call. As r is O(log(n)), the size of r is O(log(log(n))), so if an exponential time algorithm is used for checking the primality of r, it'll be exponential in log(log(n)), i.e. linear in log(n).

      Same goes for q: as it's "small" you can afford an exponential algorithm.

      Also, x is a variable; those eqns (12) really are polynomial eqns.

    • by deblau ( 68023 ) <slashdot.25.flickboy@spamgourmet.com> on Wednesday August 07, 2002 @03:44AM (#4023721) Journal
      From looking at the algo, I can't figure out what 'x' (or maybe it's a chi) is? Can someone help? I've looked it over, but couldn't find a definition of it. I'm also assuming that the 'if (r is prime)' line is a recursive call to itself? Also, how do we determine 'q' the 'largest prime factor of r-1' ? Another recursive call to get the factors? I must admit, I'm kind of lost by the algo, but it's still interesting.
      OK, I'll address these points in order:

      First off, 'x' doesn't matter. The loop at the bottom checks a congruence of two polynomials over two finite rings (if I'm reading it right, the first is generated by x^r-1 and the second by the input n). Simplistically, this amounts to grinding out the coefficients of the two polynomials and verifying that the difference of the polys equals zero, modulo the ring generator. The actual 'value of x' is never used.

      Second, if you check the order statistic calculation, they're assuming worst-case on factoring 'r' (they apply order r^1/2 for that factorization). They then make an assumption that O(r^1/2) = O((log n)^3), or that O(r) = O((log n)^6), which seems rather suspect (as if they knew the answer ahead of time and plugged in a recursive value for it). Nevertheless, they do go to some length to show that such an r exists, and that it requires at most O((log n)^6) iterations of the first loop to find it.

      As for 'q', I think again it is determined by brute-force factoring r-1. On the one hand, r is small; on the other hand, that doesn't mean a damn thing when it comes to dealing with order statistics, which I think is also a little suspect.

    • x is just a symbol. The proof is performed using a polynomial ring.

      Doing shit modulo (x^r-1) means that as soon as you multiply your polynomials together and get any terms over x^(r-1), you can replace x^(r+d) with x^d, because (x^(r+d) - x^d) == 0 (mod (x^r-1))

      e.g. modulo x^2-1

      (ax+b)^2 == a^2x^2 + 2abx + b^2 == (2ab)x + (a^2+b^2)

      FatPhil
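
      To make the two comments above concrete, here is a minimal Python sketch of that arithmetic (function names are illustrative): polynomials are coefficient lists indexed by degree, multiplication wraps exponents mod r (the x^(r+d) -> x^d rule) and reduces coefficients mod n, and the test checks a congruence of the form (x + a)^n == x^n + a. That identity mod n alone characterizes primes; reducing mod x^r - 1 keeps the polynomials small, at the price of having to check several values of a, which is what the paper's final loop does. This is a sketch of the underlying arithmetic, not the full algorithm:

        def polymul_mod(p, q, r, n):
            # Multiply coefficient lists mod (x^r - 1, n).
            out = [0] * r
            for i, a in enumerate(p):
                for j, b in enumerate(q):
                    out[(i + j) % r] = (out[(i + j) % r] + a * b) % n
            return out

        def polypow_mod(p, e, r, n):
            # Square-and-multiply exponentiation in the same ring.
            result = [1] + [0] * (r - 1)
            while e:
                if e & 1:
                    result = polymul_mod(result, p, r, n)
                p = polymul_mod(p, p, r, n)
                e >>= 1
            return result

        def congruence_holds(n, r, a):
            # Does (x + a)^n == x^n + a hold mod (x^r - 1, n)?
            lhs = polypow_mod([a % n, 1] + [0] * (r - 2), n, r, n)
            rhs = [0] * r
            rhs[n % r] = (rhs[n % r] + 1) % n
            rhs[0] = (rhs[0] + a) % n
            return lhs == rhs

        print(congruence_holds(7, 5, 2))   # True: 7 is prime
        print(congruence_holds(9, 5, 2))   # False: 9 is composite
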
  • Funny (Score:2, Informative)

    It's funny when I read the comments, and I see all kinds of stuff that reminds me of my Discrete Structures class (we did the P and NP stuff at the end)...

    Makes me wonder what this means for computer theory, but if you think about it, polynomial time can still be slow for very large n with very big powers... although not as bad as an exponential with large n's (assuming you go out far enough that the exponential will grow faster than the polynomial)

    Kudos to the team that discovered this

  • I'm dead tired (Score:3, Interesting)

    by Henry V .009 ( 518000 ) on Wednesday August 07, 2002 @12:40AM (#4023315) Journal
    I'm dead tired and will look at the paper in the morning. But right now I have a problem with step 6:
    "Let q be the largest prime factor of r-1"
    Won't getting q boost the thing back into power n complexity?
    • I took a second look, and I'm pretty sure about it. To complete this algorithm for n, for every prime r < n, you will need to find the largest prime factor of r-1.
    • Re:I'm dead tired (Score:3, Informative)

      by matrix0040 ( 516176 )
      No it won't, as q is "small": as proved in lemma 4.2, r is O(log(n)^6), and that is an upper bound on q. So the size of q is O(log(log(n))), and an exponential time in that will be linear in log(n) !!
  • by Erpo ( 237853 ) on Wednesday August 07, 2002 @12:40AM (#4023318)
    look here [wikipedia.com].
  • Implications (Score:5, Informative)

    by davemarmaros ( 598966 ) on Wednesday August 07, 2002 @12:41AM (#4023322)
    There are 2 different problems:
    1) Determining if a number is prime [is 909 prime?]
    2) Determining the factors of a number [what are the factors of 909?]

    This article claims to be able to solve problem 1 in Polynomial time.

    However, problem 2 is MUCH harder, and that is the one which will break cryptography as we know it. This article does not claim to solve problem 2, so we're safe for now.
    • 1) Determining if a number is prime [is 909 prime?]

      Here's a quick math trick that will save people a bit of time. It's always easy to tell if a number is divisible by three: just add all the digits together, and if the result is divisible by three, then so is the original number. For 909, 9 + 0 + 9 = 18 (divisible by three). Oh, and you can take it a step further (1 + 8 = 9) if the result is still too long.

      Therefore, this number showed up right away to me as being divisible by three, and quick division will show that 303 * 3 = 909.
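
      The trick in a few lines of Python, applying the digit sum repeatedly as described (sketch only):

        def divisible_by_three(n):
            # Replace n by its digit sum until one digit remains.
            while n > 9:
                n = sum(int(d) for d in str(n))
            return n in (0, 3, 6, 9)

        print(divisible_by_three(909))   # True: 909 -> 18 -> 9
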
    • However, [determining the factors of a number] is MUCH harder, and that is the one which will break cryptography as we know it.

      Not all public-key cryptography is based on the difficulty of factoring numbers. There are a number of other one-way functions (such as elliptic curves) that are being used in cryptography. So I wouldn't say it'll break crypto "as we know it", but it would certainly freak some people out.
  • There have been comments as to the relation of this finding to the strength of modern prime-dependent crypto algorithms. Based on my understanding, won't this increase the strength of crypto, not decrease it?

    Prime based encryption schemes (RSA, etc.) are based on the complexity of factoring large numbers into primes, not the ability to determine whether a number is prime.

    Incidentally, many implementations make use of pseudo-primes rather than actual primes, as the ability to validate primality is (or was) cumbersome. This should give implementations the ability to ensure key pair values are actual primes, which would strengthen the resulting encryption.

  • by CmdrSam ( 136754 ) on Wednesday August 07, 2002 @12:45AM (#4023340)
    We already have probabilistic algorithms that can tell whether a number is prime or not in polynomial time to any degree of certainty you wish.

    All this would mean is that now instead of verifying that a number is prime with a (1-10^-10) level certainty in polynomial time, it could now be done with certainty, so there would be no revolution in cryptology, as some other posters suggest.

    --Sam L-L
  • by FalafelXXX ( 598968 ) on Wednesday August 07, 2002 @12:51AM (#4023367)
    The famous result by Miller 1976 (and independently rediscovered(?) by Rabin 1980) already did that. The only difference is that their algorithm was in RP (randomized polynomial). Namely, if the algorithm says it is prime it might be wrong (with probability half, say), and if it says that the number is not prime, then it is not prime for sure.

    Now, if you have a number n, you run this algorithm, say, 20*log(n) times. If the algorithm says on all executions that it is prime, you know damn sure it is. If it says it isn't, you are sure it isn't. There is a ridiculously tiny probability that even if the algorithm claims on all executions that it is prime, it is still not prime. This probability is so small that it can be essentially ignored. Now, random bits are cheap nowadays, so this is quite satisfactory. This is in fact the algorithm that turned the RSA crypto system into a practical and useful algorithm, because suddenly finding primes became easy.

    To break RSA, and become really famous, one has to come up with a polynomial time algorithm for factoring. It might even be that RSA can be broken without factoring, but this is still an open question (I think).

    Ahh, and BTW. Polynomial time means polynomial time in the size of the input. So if the number is n, the size of the input is O(log(n)), and the running time needs to be O( (log(n))^(O(1)) ).

    Ok. End of boredom.
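
    For the curious, a compact Python sketch of the Miller-Rabin test being described; each random base that fails to witness compositeness roughly quarters the chance that a composite slips through:

      import random

      def miller_rabin(n, rounds=20):
          # "False" is certain; "True" means prime with overwhelming probability.
          if n < 2:
              return False
          if n in (2, 3):
              return True
          if n % 2 == 0:
              return False
          d, s = n - 1, 0
          while d % 2 == 0:        # write n - 1 as 2^s * d with d odd
              d //= 2
              s += 1
          for _ in range(rounds):
              a = random.randrange(2, n - 1)
              x = pow(a, d, n)
              if x in (1, n - 1):
                  continue
              for _ in range(s - 1):
                  x = pow(x, 2, n)
                  if x == n - 1:
                      break
              else:
                  return False     # found a witness: n is composite
          return True

      print(miller_rabin(2**89 - 1))   # True: a Mersenne prime
      print(miller_rabin(561))         # False: a Carmichael number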

    • Does anybody know what happens to RSA if in fact one of the "prime" numbers is not prime? My guess is that this does not make it suddenly easy to break, or make it fail to encode (because either of those would imply a fast way to determine if a number is prime by seeing if it works in RSA). I would guess that a tiny fraction of messages would decode to garbage rather than to the desired text, but does anybody know for sure?
      • by Mornelithe ( 83633 ) on Wednesday August 07, 2002 @02:09AM (#4023536)
        I probably really shouldn't be replying, because it's been a while since I read how it works, but I can copy the algorithm and tell you where I think it would break (if at all). Please correct me where I'm wrong.
        1. Generate two random primes p and q
        2. Calculate n = pq and phi(n) = (p - 1)(q - 1) = n - (p + q) + 1 (Note: phi(n) is the number of primes less than n (Euler's totient function, I believe). phi(p) = p - 1 for prime p, and phi(pq) = phi(p)phi(q) for p relatively prime to q (note, this step breaks if p or q aren't prime))
        3. Generate e
        4. Calculate d, the inverse of e (mod phi(n)) (i.e. d*e = 1 (mod phi(n)))
        5. <e, n> is the enciphering key, <d, n> is the deciphering key
        6. For plaintext P, you get ciphertext C by doing: C = P^e mod n, and get P back by doing P = C^d mod n

        So, now there's the matter of why it works. Here we go:

        • Because of Euler's theorem (a generalization of Fermat's Little Theorem), we know that a^(phi(n)) = 1 (mod n) whenever gcd(a, n) = 1
        • Since ed = 1 (mod phi(n)) we have: ed = 1 + k*phi(n) for some integer k
        • So, if we encipher and decipher, we have: (P^e)^d = P^(ed) (mod n)
        • Which also means we have: P^(1 + k*phi(n)) = P*(P^(k*phi(n))) = P*((P^phi(n))^k) = P*(1^k) = P (mod n)

        So when p or q are not prime, phi(n) != (p - 1)(q - 1), so when you calculate d, you'll get something that doesn't invert the encrypting process (because it's not a multiplicative inverse mod the real phi(n)), so you'll probably get junk when you decipher.

        I don't really feel like doing a detailed analysis of the algorithm, but I imagine that this isn't used as a primality test because its running time probably isn't polynomial.
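
        A toy Python run of those steps with deliberately tiny primes (real keys use hundreds of digits; this only shows the mechanics, and pow(e, -1, phi) needs Python 3.8+):

          p, q = 61, 53                      # step 1: two primes
          n = p * q                          # 3233
          phi = (p - 1) * (q - 1)            # 3120
          e = 17                             # step 3: gcd(e, phi) == 1
          d = pow(e, -1, phi)                # step 4: 2753
          C = pow(65, e, n)                  # step 6: C = P^e mod n
          print(C, pow(C, d, n))             # 2790 65: the plaintext returns

        Swap p for a composite and phi is no longer correct, so the computed d generally fails to undo the encryption, as the parent concludes.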

        • by plaa ( 29967 )
          (Note: phi(n) is the number of primes less than n (Euler's totient function, I believe). phi(p) = p - 1 for prime p, and phi(pq) = phi(p)phi(q) for p relatively prime to q (note, this step breaks if p or q aren't prime))

          A slight error: phi(n) is the number of positive integers less than n which are relatively prime to n (i.e. gcd(n,x)=1). Therefore, if p is a prime, it is also relatively prime to all smaller integers, so phi(p)=p-1.

          The function that tells the number of primes smaller than n is pi(n), the prime counting function.

          Refs: Totient Function [wolfram.com] Prime Counting Function [wolfram.com] (MathWorld's luckily back online!)
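
          A brute-force Python check of the two definitions (sketch; math.gcd supplies the "relatively prime" test):

            from math import gcd

            def phi(n):
                # Euler's totient: count 1 <= x < n with gcd(n, x) == 1.
                return sum(1 for x in range(1, n) if gcd(n, x) == 1)

            def pi(n):
                # Primes strictly below n, by trial division.
                return sum(1 for p in range(2, n) if all(p % d for d in range(2, p)))

            print(phi(7), phi(15))   # 6 8  (phi(p) = p - 1 for prime p)
            print(pi(7), pi(15))     # 3 6
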
      • When you build a key from two primes you have two keys that work, one private and one public.

        When you build a key with three primes you have one public key, one private key and two that will work for the hackers.

        When you build a key out of four primes you end up with the two keys you expect and 6 or 9 others.

        You can do this by building your own RSA-like system with 32 bit keys, plugging in some small random even "prime", and seeing how many other keys work.

        Not all keys work but some of the combos will.
    • I think you're missing the point of Rabin's test.

      Rabin's test says that if there is no witness below 2*ln(n)^2, then the number is certainly prime (a bound that assumes the Extended Riemann Hypothesis).

      The repeated-by-a-fixed-small-number PRP-test MR test is still a compositeness test, or a Probable Primality test, and does not give a certain answer.

      A PRP is not _proven_ prime.

      See professor Caldwell's Prime Pages at:
      http://primepages.org/

      FatPhil
    • Yeah, in fact someone told me about a deterministic polynomial algorithm almost a year ago. Maybe he was a reviewer for this paper, I dunno. But I assume someone more knowledgeable will pipe up on this eventually. What I know is that there are some known composites that look like primes to the Miller and Rabin algorithms, and they can wreak havoc on encryption. I think there is a test for these, but there may be more of them out there. Carmichael is the name that pops into my head, but I'm probably wrong.

      In any case, for cryptography you would probably run the randomized algorithm on a bunch of numbers until you found a number to be prime with high probability, then you would run this to verify that it is prime with higher certainty. The certainty sorta depends on the length of the proof for this algorithm, since for a sufficiently complex proof there is a non-zero probability that the proof is not correct, and the probabilistic algorithms often have simple proofs that we may be more certain of.
  • by sasami ( 158671 ) on Wednesday August 07, 2002 @12:57AM (#4023384)
    This result, if true, is very interesting from a theory standpoint.

    As far as practice goes, it's fairly irrelevant. Probabilistic primality testing can be done in a constant number of rounds with bounded error.

    The Miller-Rabin test [mit.edu] will tell you if a number is prime with at most 1/4 probability of error. That sounds ridiculous, but the catch is that you can iterate it using a random parameter. Do the test twice and your probability drops to 1/16. Do it fifteen times and your chances of being wrong are about one billionth.

    If you're truly paranoid, do it 50 times. That'll bring the error rate of the algorithm magnitudes below the error rate of your hardware.

    ---
    Dum de dum.
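
    The arithmetic behind those figures, since each independent round multiplies the 1/4 error bound:

      for rounds in (1, 2, 15, 50):
          print(rounds, 0.25 ** rounds)
      # 2 rounds: 1/16; 15 rounds: ~9.3e-10, about one in a billion;
      # 50 rounds: ~7.9e-31, far below any hardware error rate.
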
    • If you're truly paranoid, do it 50 times. That'll bring the error rate of the algorithm magnitudes below the error rate of your hardware.

      Depending upon what version Pentium CPU you're using you can accomplish that with 2 or 3 steps.

      Old joke, but couldn't resist :)

      -
    • The Miller-Rabin test [mit.edu] will tell you if a number is prime with at most 1/4 probability of error. That sounds ridiculous, but the catch is that you can iterate it using a random parameter. Do the test twice and your probability drops to 1/16. Do it fifteen times and your chances of being wrong are about one billionth. If you're truly paranoid, do it 50 times. That'll bring the error rate of the algorithm magnitudes below the error rate of your hardware.

      Just consider how fast it would be on some of the better known commercial operating systems, because even that 1/4 error probability is magnitudes below the error rate of your platform
      • What nobody knows, though, is whether that error is good or bad. If Miller-Rabin says a number is prime, and you use it for an application that requires a prime number, the application will still work. At least that is true for all the applications I know of that require a prime number (Public Key Encryption); there are probably others.

        I've speculated that the existence of non-primes that work is one of the things that makes public key encryption hard enough to be useful. I can't prove it though, and offer it only as an interesting (but likely wrong) point to consider.

  • So I'm already sensing the level of confusion rising, as this is a very confusing topic. Here's a quick review. Note: I'm going to do this on a higher level and not start talking about Formal Languages, as this is not the place to teach it.

    In loose terms, problems that are in P are easily solvable. For example, sorting is a problem in P. Proof: I can sort a set of n numbers in no worse than n^2 time using a bubble sort. (Yes - I know there's faster, but this is an example.) The bubble sort just compares every number to every other number. Assuming you didn't optimize the algorithm, you'd compare each number to every other number and they'd be sorted in no worse than n*n = n^2 comparisons.

    So what is NP? NP problems are those where, given a proposed solution, we can verify that the solution is correct or not in polynomial time. An example of this is factoring. (Note: it is not known whether factoring is in P.) Given current methods, factoring a big number into its prime factors is slow. But if I was to tell you that p=q*r, you could very quickly multiply q*r, see if it is equal to p, and "verify" my answer. Another way to think about it is that you can try out one branch of computation in polynomial time.

    So what is NP-complete? A problem is NP-Complete iff 1) the problem is in NP, and 2) a polynomial time solution to this problem would yield a polynomial time solution to all other problems in NP. That is, no other problem in NP is harder than an NP-Complete one, and if one NP-Complete problem is solvable in polynomial time then all of NP is solvable in polynomial time, P=NP, and you will win doctorates, a Nobel prize, a Turing award, and a million bucks from the Clay institute for proving this.

    Sigh - you are probably still confused.. :)
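
    The bubble sort from that example in Python, its two nested passes making the n^2 comparison bound visible:

      def bubble_sort(a):
          # At most n passes of n - 1 adjacent comparisons each.
          a = list(a)
          for _ in range(len(a)):
              for i in range(len(a) - 1):
                  if a[i] > a[i + 1]:
                      a[i], a[i + 1] = a[i + 1], a[i]
          return a

      print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
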
  • by Adam J. Richter ( 17693 ) on Wednesday August 07, 2002 @01:50AM (#4023500)

    We give a deterministic O((log n)**12) time algorithm for testing whether a number is prime.

    [Sorry, the Slashdot filter does not allow me to superscript the 12.]

    The algorithm takes O(log2(n)**12) time, where n is the number being tested. If we optimistically assume that this algorithm can test the primality of a 16-bit number in one microsecond, then here is how long it would take to test the primality of some larger numbers.

    • 2**12 times as long for a 32-bit number = 4096 microseconds = 4 milliseconds,
    • 4**12 times as long for a 64-bit number = 16,777,216 microseconds = 16 seconds,
    • 8**12 times as long for a 128-bit number = 68,719,476,736 microseconds = 68,719 seconds = 19 hours,
    • 16**12 times as long for a 256-bit number = 281,474,976,710,656 microseconds = 9 years.

    I don't know what a realistic base time for this algorithm really would be, and I don't know where the crossover point against existing exponential time deterministic primality testing algorithms would be, but at least this provides a sense of how log2(n)**12 grows.
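
    The same back-of-the-envelope in Python, scaling from the assumed one microsecond at 16 bits (that constant is the parent's guess, not a benchmark):

      BASE_BITS, BASE_MICROSECONDS = 16, 1.0
      for bits in (32, 64, 128, 256):
          factor = (bits / BASE_BITS) ** 12
          print(bits, factor * BASE_MICROSECONDS / 1e6, "seconds")
      # 32 bits: ~0.004 s; 64 bits: ~16.8 s; 128 bits: ~19 hours; 256 bits: ~9 years.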

    • bad in math? (Score:2, Redundant)

      by RelliK ( 4466 )
      log2(16) = 4
      log2(32) = 5
      log2(64) = 6
      log2(128) = 7
      log2(256) = 8

      by your assumption a*log2(16)^12 + b = 1 ms
      for simplicity, let's ignore the constant b.
      then:

      a*log2(16)^12 = a * 4^12 = 1 ms (by assumption)
      a*log2(32)^12 = a * 5^12 = 14.5 ms
      a*log2(64)^12 = a * 6^12 = 129.75 ms
      ...
      a*log2(256)^12 = a * 8^12 = 4096 ms
      • Re:bad in math? (Score:3, Insightful)

        by kavau ( 554682 )
        Hold your breath... the algorithm is log2(n)^12 where n represents the number to be tested, not the number of digits. If you denote the number of digits by m, that is, m = log2(n), you get a complexity of O(m^12). The algorithm is therefore polynomial in the number of digits, with a very large exponent of 12. This large exponent could easily hamper the practical use of the algorithm, as Adam correctly demonstrated. The upshot is: Adam is right, RelliK is wrong!
  • Size matters (Score:3, Informative)

    by deblau ( 68023 ) <slashdot.25.flickboy@spamgourmet.com> on Wednesday August 07, 2002 @03:30AM (#4023691) Journal
    Note that this algorithm takes O((log n)^12). For this to actually be faster than, say, factoring n directly, and assuming a multiplicative factor of 1 in the order statistic, n has to be at least 3*10^22, or roughly 75 bits long. This algorithm is probably very ineffective at factoring small integers.
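
    A quick Python search for that crossover, comparing (log2 n)^12 against the roughly n steps of testing every candidate directly (unit constants taken as equal, as the parent assumes):

      b = 2   # start past the trivial small-b region
      while b ** 12 >= 2 ** b:
          b += 1
      print(b)   # 75: below ~2^75 (about 3.8e22) the polynomial bound is larger
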
    • Firstly note that the 12 is a proven upper bound, and for the majority of real world numbers it will be much lower.

      Secondly, it's not a factoring algorithm, so your final comment is a bit odd.

      Phil
  • ...if he HAD found a way to do factoring in P time... gotta wonder what would happen if he took a holiday to the states - I'm sure SOMEONE would try to have a go at him for breaking encryption
  • I wrote a paper back in 1990 about a prime number sieve that was basically an O(n^2) algorithm. It worked by finding out whether numbers were composite, but the algorithm could be "inverted" to tell you a number was prime by telling you it was not composite. It was very well suited to parallel implementations, too.
