Factoring Breakthrough? 492
An anonymous reader sent in: "In this post to the Cryptography Mailing List, someone who knows more about math than I do claimed 'effectively all PGP RSA keys shorter than 2k bits are insecure, and the 2kbit keys are not nearly as secure as we thought they were.' Apparently Dan Bernstein of qmail fame figured out how to factor integers faster on the same cost hardware. Should we be revoking our keys and creating larger ones? Is this 'the biggest news in crypto in the last decade,' as the original poster claims, or only ginger-scale big?"
For the PostScript-impaired (Score:5, Informative)
Re:For the PostScript-impaired (Score:4, Informative)
Now let's see how well RR's server can handle the
Re:For the PostScript-impaired (Score:5, Informative)
Re:For the PostScript-impaired (Score:2)
not surprising... (Score:4, Insightful)
Does anybody have good sources about this? Ones based on historical encryption and decryption that lead into modern times would be ideal.
Re:not surprising... (Score:4, Interesting)
Imagine the conversation with the CIO when you tell him he has to throw out his 1-year-old messaging platform because some guy figured out how to factor very large numbers efficiently and your current platform doesn't support elliptic curve cryptography.
Re:not surprising... (Score:3, Insightful)
Most large companies use one or the other for messaging. Both make it fairly easy to encrypt all of your traffic, but neither has good support for third-party encryption. Your best hope right now would be some third-party plugin on the client, but that makes it less likely to be used. It also makes administration in a large environment very difficult.
We aren't just talking about changing algorithms here. If this "discovery" is all they are claiming, it could very well mean that all public key crypto is insecure. Companies like Cisco, Verisign, and MS have encouraged enterprise customers to spend a lot of money on PKI as an enabling technology for everything from secure email to remote access. That may be down the drain if, in fact, "everything you know about public key cryptography is wrong."
This does /not/ break RSA. (Score:4, Informative)
If it
himi
Re:This does /not/ break RSA. (Score:3, Informative)
Re:This does /not/ break RSA. (Score:3, Funny)
Re:not surprising... (Score:4, Informative)
Re:not surprising... (Score:2)
Were they even secure yesterday? (Score:5, Insightful)
So when you ask "Are our keys secure" the logical follow-up question is, "From whom?"
From me? Yes. I probably couldn't factor a 1000 digit number.
From your boss? Yes. You could use rot-13 and your boss would probably be baffled.
From your boss' lawyers? From the police? Here is where we get into the gray area; this is where the article becomes relevant.
From the government? I think you were kidding yourself when you thought it was secure in the first place. I find it easy to believe that the NSA is far ahead of the public in the encryption arms race.
Known about for years (Score:4, Interesting)
Re:Known about for years (Score:4, Informative)
Re:Known about for years (Score:2)
Re:Were they even secure yesterday? (Score:2, Funny)
Especially if you misspell everything!
Re:Were they even secure yesterday? (Score:4, Interesting)
Note that there have been rumors of an RSA cracker built by a three-letter agency in custom silicon before this, but until analyzing Bernstein's paper I had always dismissed them as ridiculous paranoid fantasies. Now it looks like such a device is entirely feasible and, in fact, likely.
There has always been speculation that the NSA could break RSA, but it was dismissed as paranoid by most "in the know." Most mathematicians didn't believe the agency was that far ahead of the rest of us. Now that this technique is known, it explains how the spooks may have been able to break crypto everyone else believed was "unbreakable," if they had previously made this discovery.
Re:Were they even secure yesterday? (Score:5, Interesting)
Exactly! One of the most lucid posts I have ever seen on
I have never thought that I could put one by the government, and I have never encrypted my documents because I was worried that some spook might read it. If they want my password, credit card number or DNA bad enough, they're going to get it no matter what I do. I encrypt my data because I'm more worried about script kiddies and regular old fashioned crooks.
Re:Were they even secure yesterday? (Score:3, Insightful)
I think that government agencies are more interested in having the potential to know what any single person or group is doing, rather than literally needing to know what everyone is doing, all of the time.
What worries me is the possibility that corporations could have effectively the same amount of power, with none of the public scrutiny, accountability, or mission to "protect" (at least in theory) those they watch. As you say, individuals can (and do) protect their privacy in dealings with each other, with or without the threat of government intervention. Massive corporations, OTOH, are effectively immune to any power less than the largest national governments.
Re:Were they even secure yesterday? (Score:4, Insightful)
What public scrutiny? Do you know what the NSA is doing? Do you think your drunk, philandering senator knows? Or even cares?
This is a dangerous attitude--whereas a corporation could learn all about you, the worst they'll do with the information is use it to sell you more bric-a-brac, and if you discover that they're invading your privacy, you can at least sue them.
If the government is gathering this data, it can use it to take, with force, everything you own because you smoked a joint in 1963. Plus, if you find out the government is invading your privacy, you can only... well... you can only grease up your sphincter to help with the penetration. And, depending on how you find out what the government is doing, they can shoot you.
Corporations do bad things, but the worst things are done by governments, not corporations. Even the worst things done by corporations are done by the government at the corporations' behest (viz. the DMCA).
Re:Were they even secure yesterday? (Score:5, Interesting)
20 years later, when differential cryptanalysis was "discovered", it turned out those changes made it more resistant to differential cryptanalysis...
Re:Were they even secure yesterday? (Score:2)
20 years later, when differential cryptanalysis was "discovered", it turned out those changes made it more resistant to differential cryptanalysis...
Wait, I don't understand that. Is this good or bad? Resistant meaning that you couldn't use DES for this type of new and better cryptography or the opposite that the DES was made stronger by the NSA changes... I'm confused.
-Russ
Re:Were they even secure yesterday? (Score:5, Interesting)
Which, of course, means that this is the absolute truth, so please repeat it as such.
DES has a large space of possible keys to use. At some point in time (I don't know that it was 20 years prior to the general knowledge about differential cryptanalysis, but it was numerous years prior at least) the NSA quietly told everyone that a certain portion of that keyspace should not be used. Ever. They didn't say why. They just said that it shouldn't be used for secure applications.
Eventually someone discovered differential crypto. It revealed that the keyspace that the NSA said not to use for DES was very, very weak and could be cracked rather trivially. The rest of the keyspace was still secure though (within the scope of the original security on DES at least).
What he's saying is that the NSA knew about this a long, long time before anyone else had figured out why. It is not unreasonable to believe that they've figured out other "magic" to make crypto either harder or easier to crack, despite claims otherwise.
The NSA exists to protect US national secrets. Crypto is their business. Knowing how to crack crypto tells you how safe your own crypto is. They have a very large, very undisclosed budget. Contrary to popular belief, not everyone in the government is incompetent. You may put together your own conclusions from there. Please wait in line for your aluminum foil beanie though.
Re:Were they even secure yesterday? (Score:4, Informative)
The mysterious tweak was not restricting a portion of the keyspace; it was the choice of "S-boxes". In DES, the S-boxes are a set of 8 functions that take 6-bit inputs and return 4-bit outputs. They're not specified algorithmically; the standard just says "S-box 1: 0 -> 14, 1 -> 4..." and so on: eight tables, each of which contains 64 4-bit numbers. The S-boxes are central to DES's security; the only other operations in the cipher are bit shuffles and XOR.
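To make the table lookup concrete, here's a small Python sketch using S-box 1 from FIPS 46. Note that the standard actually splits the 6 input bits into a row (the two outer bits) and a column (the four middle bits), which is why a flat reading of the table ("0 -> 14, 1 -> 4...") differs from the indexed lookup below:

```python
# DES-style S-box lookup, using S-box 1 from FIPS 46.
# A 6-bit input selects one of 64 4-bit outputs: the outer two bits
# pick the row, the middle four bits pick the column.

S1 = [
    [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7],
    [0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8],
    [4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0],
    [15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
]

def sbox1(x: int) -> int:
    """Map a 6-bit input to a 4-bit output via S-box 1."""
    row = ((x >> 4) & 0b10) | (x & 1)   # outer bits b1 and b6
    col = (x >> 1) & 0b1111             # middle bits b2..b5
    return S1[row][col]

print(sbox1(0b000000))  # -> 14, the table's first entry
```

Each row is a permutation of 0..15, but the ordering within the rows is exactly the non-random-looking choice that later turned out to be hardened against differential cryptanalysis.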
When DES was launched, people noticed pretty quickly that these tables had not been filled in randomly; they did not pass randomness tests. But IBM (who designed DES) and the NSA (who approved it) were tight-lipped, not just about the S-box choices but about the design of DES as a whole. Naturally, people suspected a back door.
When differential cryptanalysis was discovered, it was shown that the S-boxes had been specifically hardened against it, and that this was the source of the pattern seen. Don Coppersmith of IBM had independently discovered DC, calling it the T-attack (T for "tickle"), and had worked out how to defend DES against it.
However, when Mitsuru Matsui discovered linear cryptanalysis, it was found that DES was not specifically hardened against it, and indeed the best academic attack against DES is a linear attack. Since the NSA approved DES, perhaps they did not know about linear cryptanalysis either.
Of course the real NSA back door was always the 56-bit key, and the best practical attack is still brute-force key search.
Re:Were they even secure yesterday? (Score:4, Interesting)
The NSA recommended changes to DES that made it a better, less crackable, scheme. Years later, when a new type of code breaking was publicly discovered, people looked back and noticed the changes the NSA had made were directly influenced by this "new" type of code breaking. The bottom line is that the NSA is, and always has been, leaps and bounds ahead of all non-classified "state of the art" cryptography.
Could the original poster give a link? I would love to read the story.
-B
Re:Were they even secure yesterday? (Score:5, Informative)
Biham, Shamir - Differential Analysis of DES-Like Cryptosystems [nec.com].
It contains one of my favourite passages in a crypto paper: "Cryptanalysis of GDES... The special case of q=8 and n=16, which is suggested in [16,18] as a faster and more secure alternative to DES is breakable with just six ciphertexts in a fraction of a second on a personal computer." [and that was a personal computer from 1991 :)].
Re:Were they even secure yesterday? (Score:4, Insightful)
It supposedly improved DES. But it also implies that the NSA might have known about differential cryptanalysis 20 years before public research discovered it. The implication is that they might know a lot of other things that are not yet known in the public crypto research community. On the other hand, they might only have had a hunch, or there might have been other weaknesses in the old design (they changed the S-boxes, as far as I remember) that they could find, and the effect on differential cryptanalysis is accidental.
But there is also another limiting factor: if they can break, e.g., AES or RSA far more easily than the public suspects, they don't want the public to know! After all, when it is known that a cipher is insecure, people will stop using it or improve its security. This is analogous to not exposing a highly placed intelligence source.
If you plan a major terrorist attack and use email for the related communication, you might have to worry. Otherwise, as long as you use ciphers that are believed secure for the near future by current published research, you should not need to worry.
Re:Were they even secure yesterday? (Score:3)
The reason being that their budget could allow them to brute-force such a key if they really wanted to, while it took until just a few years ago for that to be feasible for a well-funded public entity, and until last year for this to be affordable to Joe Public (affordable = ~US$2K for a machine to do it in about a day).
Re:Were they even secure yesterday? (Score:4, Informative)
I found a brief mention of it here [execpc.com] in the Differential Cryptanalysis section. Also, in "Applied Cryptography, 2nd ed." (Schneier) [amazon.com] on page 290, it quotes IBM's Don Coppersmith as saying:
I've heard about it in other places, but I can't remember where at the moment.
Re:Were they even secure yesterday? (Score:3, Insightful)
The realistic target is making it cost too much to target you. (Note that cost != money -- the usual government policy in that regard is "spend all you want; we'll tax more". Real costs to governments are man-hours of specially trained personnel, risk of exposure and embarrassment, or risk of exposure and loss of ability to use the same trick again.)
Re:Were they even secure yesterday? (Score:2, Funny)
From the government?
Forget encryption. Piss them off and they'll come after you directly.
Re:Were they even secure yesterday? (Score:2, Insightful)
Your stuff may be easily available even if it was protected by 4096-bit keys - and taking advantage of that is something they are even better at (than breaking the laws of physics or something).
Re:OT: Your sig (Score:3, Funny)
Whew - I'm safe (Score:3, Funny)
Re:Whew - I'm safe (Score:2)
Not until we get quantum computers; those will break it almost instantly.
Re:Whew - I'm safe (Score:2)
Ah, but what of the Great Unwashed, who figured "PGP? Too complicated! I can't be bothered! They'll just decrypt it anyways."
If this math turns out to be real, the Great Unwashed was right for anyone with less than a 4096-bit key ;-)
HTML Version of Paper (Score:4, Informative)
Too many secrets... (Score:2)
Reminds me of a certain movie...
Hmm.... (Score:2)
I haven't hit a top limit on the GPG key yet. I had an obnoxiously long 4096 bit one I was testing with for a while and PGP was able to encrypt messages to it but was unable to import the private key. Oh well, time to move to an obnoxiously obnoxious 8192 bit one.
Suddenly the 128 bit netscape encryption isn't looking so good (Not that it was before...)
Re:Hmm.... (Score:5, Informative)
And the symmetric keys netscape uses don't depend on the difficulty of factoring to be secure
Although the key exchange that netscape uses to send the session key probably does.
The trick is to find the shortcut (Score:5, Informative)
So the trick is to find a shortcut or a flaw in the algorithm that allows you to get the data without searching all the keys.
In the case of RSA, the shortcut is factoring the product of two primes. It's way way easier to factor a 128-bit product than it is to search through a 128-bit keyspace. So RSA bumped the size of the product up and up and up until it was as impossibly hard to factor it as it was to search a 128-bit keyspace.
Other algorithms have other shortcuts, too. Remember when a weakness was found in the session key choosing algorithm for Netscape? The keyspace was reduced from 128 bits to 24 bits or something like that, and then a search could be made on it.
Anything you can do to avoid trying all the keys is the name of the game. Unless you're some kind of quantum computer freak. ;-)
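A toy Python sketch of the shortcut being described (the numbers here are absurdly small and purely illustrative; real moduli are hundreds of digits): factor the modulus by trial division, and the private exponent falls out with no key search at all.

```python
# Toy illustration: factoring the RSA modulus breaks the key outright.
# Textbook-sized parameters only -- a real modulus cannot be trial-divided.

def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

n, e = 3233, 17          # textbook public key: n = 53 * 61
p = trial_factor(n)      # the "shortcut": factor n instead of guessing keys
q = n // p
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent recovered immediately

msg = 65
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg   # we can now decrypt anything
print(p, q, d)  # -> 53 61 2753
```

Every improvement to factoring shifts how big n has to be for this trial step (or its grown-up cousins like the number field sieve) to stay out of reach.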
Re:Hmm.... (Score:2)
HH
Yes, key exchange is asymmetric. (Score:3, Informative)
The real reason a symmetric algorithm is used for the bulk of an SSL session is that it is much less computationally intensive than an asymmetric algorithm with a similar degree of security.
Note that these algorithms are independently pluggable, so a more secure or larger key size asymmetric algorithm could be used alongside the same old 128 bit RC4.
The big problem here is that systems using browser-managed certificates for authentication would have to be upgraded and new certs issued. This is not the most common usage of SSL, but where it is in place the systems tend to be large and expensive.
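The hybrid pattern described above can be sketched in a few lines of Python. Toy RSA numbers and a repeating-XOR "stream cipher" stand in for the real algorithms here -- this is an illustration of the structure, not anything secure:

```python
# Sketch of the SSL hybrid pattern: wrap a random symmetric session key
# with the (slow) asymmetric cipher, then encrypt the bulk data with the
# (fast) symmetric one. Toy RSA + XOR keystream, illustration only.
import secrets

n, e, d = 3233, 17, 2753          # toy RSA key pair (n = 53 * 61)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # stand-in for RC4: the same function encrypts and decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Client: pick a session key, wrap it under the server's public key
session_key = bytes(secrets.randbelow(256) for _ in range(4))
wrapped = [pow(b, e, n) for b in session_key]   # asymmetric: slow, tiny data
ciphertext = xor_stream(session_key, b"hello")  # symmetric: fast, bulk data

# Server: unwrap the session key, then decrypt the bulk traffic
recovered = bytes(pow(c, d, n) for c in wrapped)
assert xor_stream(recovered, ciphertext) == b"hello"
```

The two halves really are independently pluggable: swapping in a larger asymmetric key changes only the wrapping step, while the bulk cipher is untouched.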
Re:Hmm.... (Score:4, Funny)
to be secure
Good, because here's a script I wrote that factors any prime number in constant time:
#!/usr/local/bin/perl5 -w
print "Please enter a prime number: ";
chomp(my $prime = <STDIN>);
print "The factors of $prime are $prime and 1\n";
exit(0);
Of course, you really DO need to input a prime for it to work.
Just wait... (Score:5, Insightful)
I wouldn't start sending those revocation certificates just yet.
Re:Just wait... (Score:5, Funny)
Re:Just wait... (Score:2)
Re:Just wait... (Score:2)
Re:Just wait... (Score:2)
Re:Just wait... (Score:2)
Re:Just wait... (Score:2)
One-time pads are feasible but difficult to manage and of course have other limitations, although they have been the mainstay of numbers stations and the like for the last 40 years.
To quote another: (Score:2, Redundant)
Yeah, this is big news. It also sheds new light on the relaxation of the export constraints. The NSA has dedicated hardware performing this same processing, and probably has for the last 5-10 years...
"Note that there have been rumors of an RSA cracker built by a three-letter agency in custom silicon before this, but until analyzing Bernstein's paper I had always dismissed them as ridiculous paranoid fantasies. Now it looks like such a device is entirely feasible and, in fact, very likely."
Time to make new keys...
Don't Panic (Score:5, Informative)
Protecting against the http://cr.yp.to/papers.html#nfscircuit speedup means switching from n-bit keys to f(n)-bit keys. I'd like to emphasize that, at this point, very little is known about the function f. It's clear that f(n) is approximately (3.009...)n for _very large_ sizes n, but I don't know whether f(n) is larger than n for _useful_ sizes n.
Bernstein's paper is excerpted from a grant proposal where he is requesting funds to answer the question of whether the design is applicable to useful key sizes. At this point it is far too early to assume that 1024 to 2048 bit keys can be attacked by his proposed machine more efficiently than with known methods.
1.5 bits lost? (Score:2, Insightful)
The poster seems to think speeding up the calculation by 3x means reducing the strength of 300 bits to that of 100 bits. I know this is plain wrong but I'm not sure of the correct value.
TWW
Re:1.5 bits lost? - not quite (Score:3, Informative)
"This reduction of total cost from L^(2.852...+o(1)) to L^(1.976...+o(1)) means that a ((3.009...+o(1))d)-digit factorization with the new machine has the same cost as a d-digit factorization with previous machines."
In plain terms: factoring a number with three times as many digits on the new machine costs the same as factoring the original number did on previous machines.
Hope this clarifies why this is a breakthrough (that may be important).
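A quick back-of-envelope check (my own arithmetic, not from the paper) that the two quoted exponents are consistent with the "three times as many digits" claim. In the L-notation cost L^x, the dominant exponent factor grows roughly as the cube root of the digit count, so equating the old machine on d digits with the new machine on s*d digits gives s = (2.852/1.976)^3:

```python
# Consistency check: if cost scales like exp(x * (ln n)**(1/3) * ...),
# then (ignoring the slowly varying ln ln n term)
#   old machine, d digits:     2.852 * d**(1/3)
#   new machine, s*d digits:   1.976 * (s*d)**(1/3)
# Setting these equal gives the size ratio s.

s = (2.852 / 1.976) ** 3
print(round(s, 2))  # -> 3.01, matching the paper's 3.009... up to the o(1) terms
```

The leftover discrepancy is exactly the o(1) and ln ln n terms this crude estimate throws away, which is also why nobody yet knows what the ratio looks like at practical key sizes.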
Reward (Score:5, Funny)
Post script viewers... (Score:2, Informative)
Speaking as a computing DPhil... (Score:5, Interesting)
Basically, what DJB has done is translate the GNFS from its normal implementation on serial computers (where there is a great deal of available memory, but only one operation is performed at once) into a parallel implementation, where the number of processors more closely matches the available memory.
The "decreased cost" is misleading here, since it is calculated on the assumption that sieving would have been done by a single processor with access to a huge memory... this quite simply was never the case.
There is nothing here to suggest that factoring can be performed using any fewer FLOPS; all that is demonstrated is that by using several processors, each with a smaller memory, you can do better than with a single processor and a giant memory. Which we already knew.
To summarize: DON'T PANIC!
Re:Speaking as a computing DPhil... (Score:5, Informative)
There is nothing here to suggest that factoring can be performed using any fewer FLOPS; all that is demonstrated is that by using several processors, each with a smaller memory, you can do better than with a single processor and a giant memory. Which we already knew.
Hold on. A parallel implementation would normally just give an N-times speedup. But the Bernstein method (reportedly) does much better than that: it reduces the exponent of the (subexponential) complexity by about a third, in the asymptotic case. This is far more significant than "merely" parallelizing the algorithm.
Get a grip, people! (Score:2, Insightful)
Please, get a grip.
Most "transient" connections still use 512 bit keys. If this effectively reduces the key size by 3, that's still 170 bits. That's far larger than the RSA key that took years to crack a few years back.
Technology improves, algorithms improve, and the TLAs can certainly afford to build cracking engines, but it will probably still take a substantial amount of time to crack a key. (Substantial = days.) Well worth the effort if you're looking at suspected terrorists or double agents inside of the FBI, but hardly worth it for anyone reading this comment.
What we *do* need to worry about is the effect on long-term keys. E.g., root CA keys often have 20-year lifetimes, something that now seems foolish with 1k bit keys.
I don't care about n-bit encryption (Score:5, Funny)
dnetc (Score:2, Interesting)
cr.yp.to mirror (Score:4, Informative)
There are alternatives (Score:4, Interesting)
Well, to answer my own question, on Freshmeat there appear to be one [freshmeat.net] or two [freshmeat.net].
Have fun!
299,792,458 m/s...not just a good idea, its the law!
Re:There are alternatives (Score:3, Insightful)
A common mistake that some people make is to assume that one only needs to count CPU cycles - the so-called "MIPS years" measurements.
Consider two keys that require the same number of CPU cycles to crack, one Elliptic Curve (EC) and one RSA. The EC key crack requires only a modest amount of memory, even for large EC keys. The RSA key (by factoring) requires larger and larger working sets as the key size increases.
Consider cracking a 256-bit EC key versus cracking a 1620-bit RSA key by factoring:
So when I said two equivalently hard keys I should have said: two keys that require the same number of CPU operations to crack.
Two keys can require the same number of CPU ops to crack. However, a 256-bit EC key needs only a modest amount of memory and can be cracked on many machines running in parallel. A 1620-bit RSA key requires a HUGE 120 TByte matrix sitting in a single address space, where swapping and/or inter-processor communication become a major problem.
Re:There are alternatives (Score:3, Insightful)
No, that is not what I am saying. EC is more vulnerable than RSA for a given key size where the CPU ops to perform the best known cracking algorithm are the same.
CPU-op-equivalent EC searches require only a modest amount of memory and can be run in parallel with nil communication requirements. CPU-op-equivalent RSA searches require large working sets with matrix operations that do not lend themselves to parallel operation once the initial sieve is performed.
Even if you deploy a TWINKLE device in an RSA key crack that reduces the initial loading of the matrix to nil, the working-set size of the matrix and the swap and/or system communication requirements become non-trivial for a CPU-op-equivalent RSA key.
Who cares (Score:3, Interesting)
Looking through the paper... (Score:2)
NSA-sponsored Cray 4 development now makes sense. (Score:5, Interesting)
In the early nineties, CCC was working on the Cray 3, a new gallium arsenide computer. It was to have a cycle time of about 1ns (shockingly fast back then.) It was cooled by a high-pressure very high-speed mist of Fluorinert suspended in helium. It was built as a series of wedges much like the Cray 1 and 2, although somewhat smaller. They built working prototype wedges, and were debugging them, while looking over their collective shoulders at the ground being gained on them by arrays of microprocessors.
One thing led to another, and it was clear that the Cray 3 would never be a commercial success. They were then given a contract to build what was called the Cray 4. The Cray 4 was a one-off machine using PIM (processor in memory) chips. These were 1-bit computers, but there were 262,144 of them in the box. The idea was that the gallium arsenide chips, wiring, and cooling system that made up the Cray 3 were just the networking system for these PIM chips, which would do the actual work.
Anyway, Cray died, and then CCC quickly died, and I don't believe that the machine was ever finished.
thad
Mirror of paper on citeseer (Score:2, Informative)
Alternatives such as EC may be vulerable as well (Score:3, Interesting)
The methods described in the paper [cr.yp.to] can be used to improve the cost of cracking EC discrete logs as well. The author, in a recent Usenet posting [google.com], points out that the paper's methods are likely to reduce the cost/effort of EC key cracking as well ... perhaps even more than RSA key factoring.
The paper, combined with the other EC strength [slashdot.org] concerns, suggests that EC is not the technology to turn to at the moment.
In other words, if this paper has you concerned about the security of RSA keys by factoring, then you should be even more concerned about the safety of Elliptic Curves as well.
But as others, including the author, have stated (in large friendly letters): DON'T PANIC! It is not known if the ideas expressed in the paper are applicable to key sizes that are in common use.
Questions for Those that Understand (Score:3, Interesting)
I'm a person who has quite an interest in crypto and often a good understanding of the base concepts behind crypto systems, but I don't understand much of the math that goes into proofs like this. Thus, I'd like to put some questions out to those who are more experienced than I am.
First, does this work? Have people independently verified that DJB is correct? (I went looking for some peer review via Google and didn't turn up anything that looked conclusive.)
Secondly, what's vulnerable? As I understand it, what DJB has discovered is a more efficient method of factoring numbers (on custom hardware) such that keys three times longer can be factored in roughly the same time. RSA is based on the product of two large primes, so it's vulnerable. Aren't most public key systems based on this principle, though? How vulnerable is, for example, DSA to this new factoring technique?
--Phil (This article went up yesterday; hope someone's still around to read my post...)
Re:Ginger scale big? (Score:2, Funny)
Re:Ginger scale big? (Score:2, Funny)
I was always partial to the maryann scale, myself.
Re:AES? (Score:4, Insightful)
Re:AES? (Score:5, Informative)
Many public-key algorithms, and many public-key-based authentication protocols, however, *are* reducible to factoring, even if they don't appear to involve such darkness the first time you read them. AFAIK, for public key algs the deep magic is factoring, the discrete logarithm, or the knapsack problem; almost all of the knapsack variety have been proven insecure. One notable discrete-log example is the Diffie-Hellman algorithm, which is incidentally also the first public-key alg ever invented, and the underlying muscle behind the NSA's DSA signature scheme (although ElGamal did some strengthening work and got to rename the bugger).
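For reference, the whole Diffie-Hellman exchange fits in a few lines; here's a textbook-sized Python sketch (the parameters are classic toy values, far too small to be secure). The security rests on the difficulty of recovering the secret exponents from the public values, i.e. the discrete log, not on factoring:

```python
# Toy Diffie-Hellman key exchange -- textbook parameters, illustration only.

p, g = 23, 5            # public: small prime modulus and generator

a = 6                   # Alice's secret exponent
b = 15                  # Bob's secret exponent

A = pow(g, a, p)        # Alice sends g^a mod p over the open channel
B = pow(g, b, p)        # Bob sends g^b mod p over the open channel

# Each side combines its own secret with the other's public value;
# both arrive at the same shared key g^(a*b) mod p.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print(shared_alice)  # -> 2
```

An eavesdropper sees p, g, A, and B, and at toy sizes could brute-force the exponent, which is exactly why real deployments use moduli hundreds of digits long.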
Doesn't affect AES (Score:2)
There is no impact. AES is a symmetric system that is not based on factoring. This apparent discovery only affects algorithms that are based on the difficulty of factoring large numbers.
Re:AES? (Score:5, Informative)
Where it could be susceptible, however, is during a key negotiation session (say via Diffie-Hellman Key Exchange) or a naive approach of simply encoding the session key using the recipient's RSA key.
Where I would be truly frightened is in the realm of digital signatures, where somebody could forge a digital signature simply by knowing the sender's public key and factoring its modulus. With digital signatures almost as legally binding as handwritten signatures, identity theft may increase using these methods.
The resulting impact may be less acceptance of digital signatures and more reliance on antiquated methods.
RD
Re:AES? (Score:2)
Re:No wonder NSA was okay with 128 bit encryption. (Score:5, Insightful)
Re:Really Unique Crypto (Score:2)
The one-time pad is so far the most secure, but it is not very practical in daily use. And make sure you don't use the same key more than once; otherwise it falls prey to the same weaknesses as other encryption.
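The "never reuse the key" rule is easy to demonstrate in Python: XORing two ciphertexts made under the same pad cancels the key completely, leaking the XOR of the two plaintexts without the attacker ever touching the key.

```python
# Why a one-time pad must be ONE-time: reusing the pad cancels the key.
import secrets

def otp(key: bytes, msg: bytes) -> bytes:
    """XOR the message against the pad (same function encrypts/decrypts)."""
    return bytes(k ^ m for k, m in zip(key, msg))

p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT AT SIX"
key = secrets.token_bytes(len(p1))   # a properly random pad...

c1 = otp(key, p1)
c2 = otp(key, p2)                    # ...fatally reused for a second message

# An eavesdropper with both ciphertexts learns p1 XOR p2, key and all gone:
leaked = bytes(x ^ y for x, y in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(p1, p2))
```

From p1 XOR p2, classic crib-dragging techniques recover both messages; used once, with a truly random pad, the scheme is information-theoretically unbreakable.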
It was used to ... (Score:2)
I remember the post you are referencing. There are cameras that photograph a number of lava lamps. Through a couple of data-munging operations, out pops a length of completely random data.
This was posted on some crypto/compression list awhile back about compressing totally random data. The guy was able to do so by underspecification of the problem. It was slashdotted, I believe.
Anyways, the same thing can be applied to picking up atmospheric noise/wind. Anything that can't be predicted or known at any other location should work for random data, and thus you could use it to encrypt. It is a "one-time key" - there is no way to recreate the data without the data.
Re:It was used to ... (Score:2, Interesting)
Re:Really Unique Crypto (Score:2, Funny)
And all of these, really, are just techniques that split up the message, and then assume the decrypters can only get one part. So essentially you could do this with any encryption algorithm, just send part by the internet, and part by carrier pigeon, attack stoat, etc.
Re:OMFG (Score:2)
The NSA can afford huge hardware.. REALLY huge hardware, for breaking crypto... was there ever any doubt?
Re:OMFG (Score:5, Informative)
Suppose I have a 1024-bit key. The new algorithm makes it essentially take the same time to break as a 341-bit key using the old algorithm.
Since each bit makes it take twice as long, this means that the new algorithm is 2^683 times faster at cracking the code. This is a bit different than 3 times...
Re:OMFG (Score:2)
Unless they've already got quantum computers. IBM's best effort so far is factoring a 4-bit(!) number using quantum techniques, but expect huge budgets to be devoted to quantum factorising now.
HH
Re:OMFG (Score:2)
For very large keys, where the definition of "very large" is not yet known, this is a 3x decrease in the (for lack of a better term) 'effective key length'.
In other words, it might be possible to crack a 1536 bit key in the amount of time it currently takes to crack a 512 bit key.
Since the 1536 bit key is 1024 bits longer, that means that it *should* take 2^1024 (1.79e308) times as long to crack.
So, for that specific example, assuming that this works it's actually cracking the key 179 million trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times faster.
Sounds like a few orders of magnitude to me.
Re:OMFG (Score:5, Interesting)
No. This is wrong. Read the paper.
For large keys, this method reduces the difficulty of factoring keys by a factor of ~3.009, i.e. the difficulty of factoring a 90,000-digit key is now comparable to factoring a 30,000-digit key using traditional methods.
It is unknown if this applies to smaller keys currently in widespread use, i.e. whether your 2048-bit key will now have a factorization cost equivalent to that of a 683-bit key using traditional methods. That is what the guy wants funding for.
So yes, this makes cracking keys orders of magnitude easier and faster.
Re:it's a cool method (Score:5, Insightful)
My understanding is that keys of three times the length can be cracked in about the same time - which is an _exponential_ increase in speed.
Re:it's a cool method (Score:2, Informative)
Or more correctly, the new algorithm operates in the cube-root of the time of the original. (I'm pretty sure factoring is still an exponential search problem. Would someone who knows this algo better than I comment?)
At any rate, it's not quite as impressive as if an exponential search had been made polynomial. Rather, the exponent in the exponential search's runtime has been divided by 3. (Still a very big deal.)
In terms of big-Oh, it went from O(x^N) to O(x^(N/3)).
--Joe
Re:it's a cool method (Score:4, Insightful)
Exactly. That means we have to make N three times as large as we thought we had to. This is not a catastrophe, except in high-security applications. But those should use something like "make absolutely sure it's enough bits, then quadruple the number" anyway...
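In the simplified O(x^N) model the parent comments use (a toy stand-in for the real sub-exponential GNFS cost; the base and exponent below are made up for illustration), tripling N exactly cancels the exponent-divided-by-3 speedup:

```python
x = 2.0   # arbitrary base for the toy cost model
N = 30    # stand-in for the security parameter (e.g. key length)

old_cost = x ** N                    # cost under the traditional method
new_cost = x ** (N / 3)              # same key, new method: exponent cut by 3
new_cost_tripled = x ** (3 * N / 3)  # triple the key length, new method

# Tripling the key length restores the original work factor exactly.
assert new_cost_tripled == old_cost
print(f"old: 2^{N}, new: 2^{N // 3}, new with 3x key: 2^{N}")
```

Which is why "make N three times as large" is the right response in this model, rather than abandoning the scheme.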
not less than a second... (Score:2, Informative)
But then again, quantum cryptography may be even closer than working quantum computers. That would give us secure private-key cryptography without any public-key exchange, meaning you can factor all you want; you aren't gonna find anything unless you get that private key.
Re:Quamtum Computing (Score:2)
Re:NSA, et. al. (Score:2)
It's interesting. The TLA agencies which most likely can crack large encryption (NSA, CIA) have no authority to get a court order - they have no authority within the US. Also, it seems unlikely they'd reveal that they have this advanced technology for a mere murder trial or the like - more important to keep it hidden from their foreign enemies.
> will that really stop them from making me give up the goods if faced with jail when they come asking for my data?
The US can't force you to give up your encryption keys - it would be a violation of your 5th amendment rights to keep silent, or at least your 1st amendment rights. Unless it was evidence to be used to convict someone else, then they could subpoena it.
Re:NSA, et. al. (Score:4, Informative)
They no longer need it if you are suspected of any "terrorist activities", whatever that means.
"The US can't force you to give up your encryption keys "
See above, and see the Patriot Act and Homeland Security Act. They can force you if it's for the good of the state, oops, I mean if it's for the "security" of the state.
Re:NSA, et. al. (Score:5, Insightful)
For the past 50 years, that's been the case.
> I suppose this very well could be the case, but it sure lends itself to great conspiracy theories.
For the past 50 years, that's also been the case ;-)
Most of us older /.ers grew up believing that the mods to the S-boxes in DES were probably backdoors. Turns out they were to secure the algorithm against differential cryptanalysis, which didn't get discovered outside of NSA until Biham and Shamir published it in the early 1990s.
NSA is still reputed to be the largest employer of mathematicians on the planet. They're reputed to have more supercomputing power than any organization on the planet. Both allegations are reasonably well-substantiated.
> I suppose the TLA agencies don't really need strong crypto to invade on my privacy. They just need a court order.
Correct. NSA's got two missions - secure American computing and communications, and 0wn every one else's ;-)
Not only is it easier to get a court order to make you give up your keys (or to eavesdrop/keylog you while you enter them), it's a hell of a lot safer.
The funniest part of Cryptonomicon is where the Brits are busy sending bombers to "see" German shipping but not bomb it. (If they just bombed the Germans, the Germans would realize that their crypto had been broken.) One of the protagonist's jobs, as an information theorist, was to figure out just how often they could get away with "just bombing them" and how often they had to make it look like they "got lucky" with a chance overflight or other observation.
The hardest part of crypto isn't breaking your opponent's codes, nor is it securing your own secrets. It's securing the big secret, namely not acting in a way that proves you've broken your opponent's codes.
Knowing your enemy's "A" team plans to attack tomorrow at dawn is good, but if you take out the "A" team 5 minutes before dawn, you run the risk of losing your ability to monitor the "B" team.
Re:512 bit keys obsolete (Score:3, Informative)
Dan and the government (Score:3, Informative)
I don't think it'll be an issue.