Time Running Out for Public Key Encryption
holy_calamity writes "Two research teams have independently made quantum computers that run the prime-number-factorising Shor's algorithm — a significant step towards breaking public key cryptography. Most of the article is sadly behind a pay-wall, but a blog post at the New Scientist site nicely explains how the algorithm works. From the blurb: 'The advent of quantum computers that can run a routine called Shor's algorithm could have profound consequences. It means the most dangerous threat posed by quantum computing - the ability to break the codes that protect our banking, business and e-commerce data - is now a step nearer reality. Adding to the worry is the fact that this feat has been performed by not one but two research groups, independently of each other. One team is led by Andrew White at the University of Queensland in Brisbane, Australia, and the other by Chao-Yang Lu of the University of Science and Technology of China, in Hefei.'"
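For readers wondering what Shor's algorithm actually does: the quantum part is only period-finding, and the rest is classical number theory. Here's a minimal sketch in Python where the period is found by (exponential) brute force instead of a quantum Fourier transform; `find_period` and `shor_classical` are illustrative names, not from any library.

```python
# Sketch of the classical reduction at the heart of Shor's algorithm.
# The quantum computer's only job is period-finding; here we find the
# period by brute force, which is exponential classically.
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a^r = 1 (mod n) -- the quantum step."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky: a shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1:
        return None               # odd period: pick another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: pick another a
    return gcd(y - 1, n)          # non-trivial factor of n

print(shor_classical(15, 7))      # prints 3, a factor of 15
```

Factoring 15 is exactly what the early experimental demonstrations did; the hard part is doing the period-finding step on a quantum register large enough for 1024-bit moduli.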
Prime number factoring is easy (Score:1, Informative)
Non-subscription link (Score:4, Informative)
post-quantum cryptography (Score:5, Informative)
"PQCrypto 2006: International Workshop on Post-Quantum Cryptography"
http://postquantum.cr.yp.to/ [cr.yp.to]
Elliptic curve cryptography (Score:5, Informative)
not quite... (Score:2, Informative)
I work in the field and i have to comment, (Score:5, Informative)
-NMR: Most advanced, no decoherence, but severe scalability problems. Nobody knows if they can ever put together more than about 10 qubits.
-Quantum Dots: Nice, but semiconductors have a hell of a lot of excitations and decoherence.
-Spintronics: Interesting, but it will take time until it is under control.
-Ions: Well advanced, good control, some scalability problems (not necessarily, IMHO).
-Atoms: Advancing (-> Atom Chip), could be fine.
-Superconducting qubits: Decoherence problems right now, which may be solved.
Missing important facts about Shor's algorithm (Score:1, Informative)
I'm skeptical (Score:5, Informative)
There have been quite a few different methods of quantum computing developed that take advantage of several types of quantum processes in nature. I worked on bulk-spin-resonance QC as a research assistant at MIT.
To the best of my knowledge, every method so far developed runs into coherence and noise limitations that make it very difficult to scale them up. It's usually not too hard to build a 3- or 4-qubit quantum computer, but the difficulty of scaling up seems itself to grow exponentially with the number of qubits. Basically, it's very hard to build a practical quantum computer that works on the scale necessary to factor even modest-sized numbers. The engineering challenges to make any of these methods at all practical are bafflingly hard; the underlying science and math, on the other hand, are pretty straightforward, and the algorithms are undoubtedly cool as hell.
I understand these days the interesting work is on trapped-ion and semiconductor approaches.
Anyway, Shor's algorithm has been around for years. The theory behind QCs is fairly well understood, the experimental difficulties are huge.
Basically, unless this represents a real breakthrough (i.e. a technique that is not just scalable in theory, but can be demonstrated in practice to add qubits with only, say, linear difficulty), it's not a breakthrough that anybody needs to worry about yet.
Without seeing this article's full text though, it's hard to really know, but I gather optical approaches have been tried before and haven't gotten any further than anybody else has.
Re:Tor like oatmeals! (Score:4, Informative)
Shor's algorithm (Score:1, Informative)
Re:I'm not sure how big of a deal this is. (Score:5, Informative)
See http://en.wikipedia.org/wiki/Numbers_station [wikipedia.org]
The one-time pad is in no danger of being broken by quantum computers or anything else because it's provably unbreakable. (Unless there is operator error, and sometimes that's the case)
The Good Guys(tm) want to have this so that they know what The Bad Guys(tm) might have, and that way they can change their systems before they are cracked. I could imagine some crime syndicate paying the millions for a working quantum computer and the PhD talent to run it so that they could break into international banking systems.
On the flip side, pressing exactly two HD-DVDs with random data, and distributing these to your banking sites for the most sensitive information, is getting more and more cost effective.
Re:not quite... (Score:3, Informative)
Fortunately, symmetric keys are still fine. Grover's algorithm lets a quantum computer search a keyspace quadratically faster, which effectively halves the key length, and that is easily accounted for by doubling the key size. Even with a quantum computer, cracking AES-256 is intractable.
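For the record, the quadratic speedup comes from Grover's search algorithm: a key among 2^k candidates is found in about 2^(k/2) quantum queries. A back-of-the-envelope sketch (the helper function is made up for illustration):

```python
# Grover's algorithm finds a key among 2^k candidates in about 2^(k/2)
# queries -- a quadratic speedup, equivalent to halving the key length.
# (Illustrative helper, not a real attack.)
def effective_key_bits(key_bits, attacker="classical"):
    return key_bits // 2 if attacker == "quantum" else key_bits

for k in (128, 256):
    print(f"AES-{k}: {effective_key_bits(k)} bits classically, "
          f"{effective_key_bits(k, 'quantum')} bits against Grover")
```

So AES-256 still offers 128-bit security against a Grover-equipped attacker, which is why doubling symmetric key sizes is considered a sufficient response.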
Re:not quite... (Score:5, Informative)
No. With classical algorithms, RSA encryption and signature verification are O(n^2), while RSA decryption and signing are O(n^3).
and cracking is [considered] exponential in the length of the key
No. All modern factorization algorithms are subexponential; this is why a 1024 bit RSA key is roughly as secure as an 80 bit symmetric encryption key.
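The "subexponential" claim can be made concrete with the standard heuristic cost of the general number field sieve in L-notation. A rough sketch (this is an asymptotic formula with the o(1) term dropped, so treat the absolute numbers as ballpark only; it lands in the high 80s of bits for a 1024-bit modulus, the same ballpark as the 80-bit figure above):

```python
# Heuristic cost of the general number field sieve (GNFS) in bits,
# using the L-notation estimate exp(c * (ln n)^(1/3) * (ln ln n)^(2/3))
# with c = (64/9)^(1/3).  Asymptotic formula; constants are rough.
from math import log

def gnfs_bits(modulus_bits):
    ln_n = modulus_bits * log(2)
    c = (64 / 9) ** (1 / 3)
    work = c * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3)  # natural log of cost
    return work / log(2)                               # convert to bits

for b in (512, 1024, 2048):
    print(f"{b}-bit RSA ~ {gnfs_bits(b):.0f}-bit symmetric work factor")
```

Note how slowly the work factor grows with modulus size: that is what "subexponential" buys the attacker.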
Re:not quite... (Score:5, Informative)
I can't read the actual article at home so I don't know how large their machine is. Shor's algorithm has actually been run before (a 7-qubit NMR machine factored 15 back in 2001), so the summary is incorrect. The point being that an attacker needs a quantum machine large enough to factor the RSA modulus. As building an 8-qubit machine is not as simple as slapping two 4-qubit machines together (because of problems with quantum coherence), there will always be a state of the art for how large a quantum computer can be, and public crypto with keys significantly larger than that will be safe until a larger machine is developed. Sort of a faster version of the battle between cryptographers and cryptanalysts that we see at the moment.
You'll notice that nobody made the same claims when EPFL sieved a 1024-bit number recently - instead everyone said use larger keys. The situation is likely to be the same as Quantum Computers increase in size. Lastly, not all public key crypto is shafted, only things that rely on factorisation as a problem. ECC will be quite safe until (if?) somebody develops a quantum algorithm for discrete logs.
Disclaimer: I don't do research in quantum - I work in cryptography, but the quantum guys have an office down the corridor and occasionally I understand what they are talking about. Ashley, don't beat me around the head for getting the details wrong
Re:Elliptic curve cryptography (Score:2, Informative)
Re:Elliptic curve cryptography (Score:4, Informative)
Re:bigger keys? (Score:5, Informative)
The whole point of a One Time Pad is that there is no such thing as an algorithm to crack it without quite a lot of information in addition to the ciphertext. The beauty of a One Time Pad is that you can crank through every possible key, but that doesn't get you anything. Sure, you may wind up with some keys that take the ciphertext and make perfectly intelligible English out of it, but there are an enormous number of messages of a given length, and any of them could be equally valid. So, cracking a message properly encrypted with an OTP basically amounts to cranking through every possible bit combination the same length as the message, and then guessing arbitrarily which one is the "solution."
In practice, the only time OTPs get broken is when they are used wrong. For example, a message is enciphered with a particular pad, transmitted, and then through a bureaucratic fuckup, the same message also gets transmitted as plaintext. Then, somebody fucks up and uses the same OTP (now a TTP!) on another message. The cryptanalyst gives the old captured OTP a whirl and gets lucky. The OTP is only vulnerable to the CHF algorithm (Cascading Human Fuckups).
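The "every decryption is equally plausible" property is easy to demonstrate: given any ciphertext, you can solve for a pad that "decrypts" it to any plaintext of the same length. A toy sketch (the messages are made up):

```python
# For a one-time pad, every plaintext of the right length is a possible
# decryption of a given ciphertext: just solve for the key with XOR.
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"ATTACK AT DAWN"
pad = os.urandom(len(msg))          # truly one-time, uniformly random key
ct = xor(msg, pad)

# An attacker brute-forcing keys can "decrypt" ct to ANY candidate:
candidate = b"RETREAT AT TEN"       # same length as msg
fake_pad = xor(ct, candidate)
assert xor(ct, fake_pad) == candidate   # so brute force learns nothing
```

Since a pad exists for every candidate plaintext, the ciphertext alone carries no information about which one was sent.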
Re:bigger keys? (Score:5, Informative)
For the best known classical factoring algorithms, doubling the key length increases the attack cost enormously (going from 1024-bit to 2048-bit RSA adds roughly 30 bits of work factor). For Shor's algorithm, which runs in polynomial time, doubling the key length only multiplies the work by a small constant, and given how quickly computers get faster, that margin is basically worthless.
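To put numbers on the quantum side: textbook analyses put Shor's algorithm at roughly O(n^3) gate operations for an n-bit modulus (the exact exponent depends on the multiplication method used), so doubling the key size multiplies the work by only a single-digit constant. A crude model:

```python
# Shor's algorithm runs in polynomial time -- roughly O(n^3) gate
# operations for an n-bit modulus in textbook analyses -- so doubling
# the key size only multiplies the quantum work by a small constant.
def shor_work(bits):
    return bits ** 3        # crude cubic model, ignoring constants

for b in (1024, 2048, 4096):
    print(f"{b}-bit key: {shor_work(b) // shor_work(1024)}x the work of 1024-bit")
```

An 8x increase per doubling is trivially absorbed by hardware progress, which is why bigger keys are no defense against a working quantum computer.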
trapdoor one-way permutation candidates (Score:5, Informative)
It starts out with the fact that public key encryption relies on the existence of trapdoor one-way functions. Now in practice we mainly instantiate these functions with the RSA function (f_e(x) := x^e mod n, with trapdoor p, q such that pq = n). But there is no reason to believe this is the only possible example of a trapdoor OWF! Admittedly, in the 80s when this concept was first being explored there were quite a few failures when trying to base implementations on NP-Complete and/or NP-Hard problems (think knapsack, for example). But since we already had RSA with all its nice properties (efficiency, elegance and simplicity), the research community was not overly motivated to find others.
There have been, and to this day still are, other lines of research. Take Ajtai and Dwork's work in the direction [acm.org] of basing PKE on worst-case hardness of the shortest vector problem (SVP), or Micciancio's work [psu.edu] on generalizing the knapsack problem such that average-case hardness of approximating the answer can be reduced to worst-case hardness of certain lattice-based problems.
Another general direction has been to come up with groups and fields over which solving the DLP is difficult. (For example torus-based crypto [uci.edu] and generalized Jacobian groups [uwaterloo.ca].) AFAIK for most of these candidates there are no (known efficient) reductions from the DLP over Z_p or elliptic curves to the DLP in these new groups. Thus it is not immediately clear how or if Shor's algorithm would break such systems.
In any case there is reason to believe that there can be (and that we can find) good candidates for trapdoor OWFs even in the quantum computational model. After all, there is such a thing as Quantum P and Quantum NP. Though the inequality of these sets of problems doesn't directly imply the existence of quantum trapdoor OWFs, it is a good indication thereof.
So basically the message is: Relax! The PKE world is by no means on the brink of an apocalypse. At most (and at best, in my opinion) we're in for a bout of some serious foundations research. To me that just sounds like more funding for applied mathematicians and complexity theorists from various corners, and a WHOLE bunch of new candidates and interesting results.
Re:Just RSA, actually (Score:5, Informative)
Well, put briefly, the existence of secure public-key cryptography is equivalent to the existence of trap-door one-way functions. Suppose we have a public-key cryptosystem consisting of an encryption function E and a decryption function D, with a secret key Ks and a public key Kp. Let p be the plaintext and c be the ciphertext. Then, c=E(p,Kp) (we encrypt the plaintext with the public key to get the ciphertext), and p=D(c,Ks) (we decrypt the ciphertext with the secret key to get the plaintext back). Now, the public key Kp is known to an attacker, and so are the functions E and D, so in principle the attacker could do a brute-force search of the keyspace to find the secret key Ks corresponding to a given Kp using them. Thus, there exists another decryption function Dp using the public key rather than the secret key: p=Dp(c,Kp). To prove the cryptosystem is secure, we have to prove there's no way to compute Dp efficiently.
Now, a one-way function is exactly what we need. A one-way function o is a function that is easy to compute (can be done in polynomial time), but its inverse is hard (can't be done in polynomial time). It's fairly easy to prove that if a function is in P, then its inverse must be at most in NP. Well, strictly speaking P and NP are for decision problems, so we should refer to FP and FNP. If it's in FP, then the output can be at most polynomially large in the input length, so we can invert by doing a brute-force search of all possible inputs shorter than that bound, and a nondeterministic Turing machine can check them all in parallel. Thus, one-way functions exist only if P != NP (which is equivalent to FP != FNP). Otherwise anything we could compute efficiently we could also invert efficiently. Actually, it turns out that the inverses of one-way functions must be in UP (unambiguous polynomial time). That is, there exists a nondeterministic Turing machine to compute them such that for any accepting input, exactly one path accepts (general NP problems can have more than one accepting path). It's believed, but not proven, that UP is smaller than NP; no NP-complete problems are known to be in UP. Thus, the existence of one-way functions is stronger than P != NP, since it also implies UP is strictly larger than P.
Of course, we need to be able to decrypt efficiently if we know the secret key, so we need something more specific than a one-way function: a trap-door one-way function, for which there is an algorithm to compute the inverse in FP if we have some additional piece of information, the trap-door. In complexity-theoretic terms, what we need for public-key cryptography is a family of trap-door one-way functions (functions in P with inverses in UP) parametrized by the public keys, and the secret keys are the corresponding trap doors (inverses in P if we also have the secret key as an input). A few functions, like RSA or discrete logarithms, really look like what we want, but none have ever been proved to be, and a proof that they are would necessarily include P vs. NP as a special case as described above.
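The E/D/Kp/Ks notation above maps directly onto RSA. A toy instance with absurdly small, insecure numbers, purely to show the trap door in action (needs Python 3.8+ for the modular-inverse form of pow):

```python
# Toy RSA instance illustrating E(p, Kp) and D(c, Ks) from the text.
# The trap door is the factorisation of n; these numbers are
# absurdly small and purely illustrative.
p, q = 61, 53
n = p * q                      # 3233, part of both keys
phi = (p - 1) * (q - 1)        # 3120, computable only with the trap door
e = 17                         # public exponent: Kp = (n, e)
d = pow(e, -1, phi)            # secret exponent: Ks = (n, d)

def E(m, Kp):                  # encrypt with the public key
    n, e = Kp
    return pow(m, e, n)

def D(c, Ks):                  # decrypt with the secret key
    n, d = Ks
    return pow(c, d, n)

m = 1234
c = E(m, (n, e))
assert D(c, (n, d)) == m       # round-trips; inverting E without d
                               # requires factoring n
```

Brute-forcing Ks from Kp here means factoring 3233, which is instant at this size; the whole security argument is that it is infeasible at 1024+ bits (absent a quantum computer).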
Anyway, BQP is the complexity class of problems tractable on quantum computers, analogous to P for Turing machines. It's a bounded error probability class, like BPP. BQP is the set of all decision problems which have an algorithm on a quantum computer that computes them in polynomial time with an error probability less than one-third (this bound is an arbitrary choice, we can reduce the error probability exponentially with a linear number of repetitions, and the class would be identical for any probability less than one-half). BQP is necessarily at least as large as P, and the existence of Shor's algorithm shows that factorization is in BQP, so BQP is probably strictly larger than P (although it hasn't been proven that you can't factor in P). NP probably contains problems that are not in BQP (no NP-complete problem is known to be in BQP), but proving this is equivalent to proving P != NP. So, if we assume quantum computers are feasible to build on a practical scale, factoring-based cryptography falls, while NP-complete problems may well remain intractable.
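On the "one-third is arbitrary" point: repeating the algorithm and taking a majority vote drives the error down exponentially, and the exact error after amplification is a straightforward binomial tail (math.comb needs Python 3.8+):

```python
# Repeating a bounded-error algorithm and taking a majority vote
# drives the error down exponentially, which is why the 1/3 in BQP's
# definition is an arbitrary choice.
from math import comb

def majority_error(p, runs):
    """P(majority wrong) when each run errs independently with prob p."""
    return sum(comb(runs, k) * p**k * (1 - p)**(runs - k)
               for k in range((runs // 2) + 1, runs + 1))

for r in (1, 11, 101):
    print(f"{r} runs: error probability {majority_error(1/3, r):.2e}")
```

Any per-run error bounded below one-half works the same way, which is why the class definition is insensitive to the constant.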
Re:bigger keys? (Score:3, Informative)
I was trying to address the specific point of the parent poster (that quantum computers gave instant results), not the difficulties of using Shor's algorithm in a practical setting to break cryptography.
There seems to be a general belief that quantum computing can try an algorithm against all possible inputs and give you the best/correct input in one iteration, which is just absolutely not true.
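That misconception is worth killing with numbers. A minimal classical simulation of Grover's search (the relevant quantum algorithm for unstructured search) on 16 items shows that a single iteration leaves the marked item with well under a 50% chance of being measured; you need about (pi/4)*sqrt(N) iterations before it dominates:

```python
# Minimal classical simulation of Grover's algorithm on N = 16 items.
# One "quantum query" does NOT reveal the answer: the marked item's
# measurement probability only grows gradually with each iteration.
from math import sqrt

N, marked = 16, 3
state = [1 / sqrt(N)] * N            # uniform superposition over N items

def grover_iteration(state):
    state = list(state)
    state[marked] = -state[marked]   # oracle: flip sign of marked item
    mean = sum(state) / len(state)
    return [2 * mean - a for a in state]   # diffusion: invert about mean

for k in range(1, 4):
    state = grover_iteration(state)
    print(f"after {k} iterations: P(marked) = {state[marked] ** 2:.3f}")
```

After one iteration the marked item is only at about 47% probability; it takes three iterations (roughly (pi/4)*sqrt(16)) to exceed 95%. So quantum search is a quadratic speedup, not a one-shot oracle.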
Re:More like the Chinese gov (Score:2, Informative)
China just sends out scientists, "Ph.D. flash mob" style, if they are interested.
i.e. informants in the Chinese diaspora.
No need for fancy, expensive legends etc.
1000's of students reporting back around the world.
Or they get you to set up in China and clone your work down the street.
FACTS about quantum and public-key crypto (Score:5, Informative)
Here are some facts to cut through the clutter:
1. Shor's algorithm works on quantum computers and can factor integers in polynomial time. This breaks all public-key systems that depend on the hardness of factoring, including RSA, Rabin, and Paillier. (XTR, sometimes grouped here, is actually based on discrete logs and falls under point 2.)
2. A different version of Shor's algorithm also computes discrete logarithms (again, in poly time). This breaks all public-key systems that depend on the hardness of discrete log, over *any* cyclic group. This includes ElGamal, even over "exotic" groups like those associated with elliptic curves.
3. Nevertheless, factoring and discrete log are different beasts and are not known to be equivalent to each other. Still, Shor's algorithm (in different versions) solves them both.
4. Shor's algorithm does not yet break all known public-key cryptosystems. Systems based on lattices, for example, do not appear to be affected as of yet. These include Ajtai-Dwork and a couple of systems by Regev. NTRU is based on lattices, but rests on some not-so-natural assumptions (i.e., the assumption that "NTRU is secure").
5. Public key encryption is (probably) *not* equivalent to trapdoor permutations (or even trapdoor one-way functions). TDPs are a much stronger notion and are not strictly needed to do secure public-key encryption. For example, ElGamal and lattice-based systems are not based on trapdoor primitives per se.
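To illustrate point 5, here is toy ElGamal, a public-key system with no trapdoor permutation in sight: decryption works by recomputing a shared secret, not by inverting a permutation. Tiny, insecure parameters, purely to show the mechanics:

```python
# Toy ElGamal over Z_p*, the discrete-log system mentioned in points
# 2 and 5.  Tiny toy parameters; utterly insecure in practice.
import random

p, g = 467, 2                   # small prime and base (toy values)
x = random.randrange(1, p - 1)  # secret key
h = pow(g, x, p)                # public key: (p, g, h)

def encrypt(m, p, g, h):
    k = random.randrange(1, p - 1)      # fresh randomness per message
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2, x, p):
    s = pow(c1, x, p)                   # shared secret g^(kx)
    return (c2 * pow(s, p - 2, p)) % p  # divide by s (Fermat inverse)

m = 123
c1, c2 = encrypt(m, p, g, h)
assert decrypt(c1, c2, x, p) == m
```

Note that encryption is randomized (a fresh k per message), so E is not even a function of the plaintext alone, let alone a permutation; its security rests on the discrete-log/Diffie-Hellman assumptions, which is exactly why Shor's algorithm breaks it.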
Re:not quite... (Score:3, Informative)
ECC with correctly chosen parameters cannot (as far as anyone knows) be mapped via an efficient isomorphism into a finite field where the discrete log is easier. However, that doesn't mean that someone won't develop a technique to do so for currently secure ECC parameters in the future.