> A problem that is "outside of NP" can't be solved in polynomial time (otherwise it would be in P) and has solutions that can't be checked in polynomial time (otherwise it would be in NP). I can't think of a good example off the top of my head, but they exist.
Just for reference, NP consists only of yes/no problems, so something as simple as "sort these numbers" is technically not in NP (on the other hand, "are these numbers sorted?" is clearly in P).
Anyway, I believe an example of a yes/no problem outside of NP (and co-NP) is "here is the setup on an NxN Go board; is there a winning move for player X?". This probably works for other games such as chess and even checkers.
>Just for reference, NP consists only of yes/no problems,
Can't be, or else you could prove trivially that P=NP. According to Christopher Thomas' article, P problems are those which can be solved in polynomial time, and NP are those for which an answer can be validated in polynomial time. Now, if there were only two possible answers (yes and no) to a given NP-complete problem, you could trivially solve the problem in polynomial time by attempting to validate both answers. The answer which can be validated is the correct one.
Basically, what this means is that the set of possible answers for NP-complete problems has to grow faster than any polynomial, or else an exhaustive search of the answer space would provide a solution in polynomial time.
The change is not from Silicon to Copper, it's from Aluminium to Copper. The Silicon stays the same.
The problem has been the conductive metal used on the silicon wafer. If it is too conductive, the traces crosstalk: free electrons actually jump from one charged trace to another! Aluminium has been used because it is minimally conductive.
Unfortunately it is also minimally conductive...
IBM figured out how to pair copper traces with a non-conductive shield at the microscopic sizes required. This is what everyone wished for: the benefits of conductivity without the drawbacks.
The class of NP-hard problems are soluble in polynomial time on a non-deterministic Turing machine. A nondeterministic machine will explore many possible solutions in parallel.
Not just "many". A nondeterministic machine will explore all solutions in parallel.
That's what keeps NP-complete problems from being cracked by parallel processors--you'd need to add more and more processors as the problem grew larger.
For instance, TSP (Travelling Salesman Problem): having, say, 10 processors only cuts my time by a factor of 10--but since the time grows exponentially this is pretty useless pretty soon.
But if I could build the machine AFTER I saw the problem (and add a processor for each city), I could solve it in poly-time--of course, building machines with 5x10^6 processors is pretty infeasible, so a non-parallel solution would be preferred.
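To make the scaling concrete, here is a small illustrative sketch (mine, not the poster's) of brute-force TSP in Python: the number of tours grows factorially, so a fixed pool of machines only divides that count by a constant.

```python
# Illustrative sketch: brute-force TSP tries every tour, so the work grows
# factorially, and a fixed number of processors only divides it by a constant.
from itertools import permutations
from math import factorial

def tour_length(order, dist):
    """Total length of a round trip visiting the cities in `order`."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

def brute_force_tsp(dist):
    """Check every possible tour: O(n!) work on a deterministic machine."""
    return min(permutations(range(len(dist))),
               key=lambda o: tour_length(o, dist))

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(brute_force_tsp(dist))          # best of 4! = 24 tours

for n in (10, 15, 20):                # 10 machines barely dent the growth
    print(n, "cities:", factorial(n), "tours,",
          factorial(n) // 10, "per machine")
```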
The phrase "begs the question" is more than the sum of it's parts.
It is a technical term from the field of logic and means that you have assumed what you set out to prove.
For instance, one favorite creationist argument against evolution is:
"The evolutionists tell us that they know the simpler organisms are older than the more complex ones because their fossils are in lower geologic strata.
But the geologists tell us that they know the lower geologic strata are older because they have simpler (and therefore older) fossils embedded in them.
This, (say the creationists) is begging the question."
Of course, that's not really what geologists and evolutionists say so the creationist rebuttal is moot, but they are right in thinking that this would be an invalid argument.
On the other hand, a proof that P != NP would prove that anything based on an NP-complete trapdoor function (I'll get back to this) is sound against anything short of a quantum computing attack.
I don't think this is necessarily true. I think it's possible that there are ways that computation could be done more quickly for a specific problem.
For instance, there has been research using a warm tank of amino acids, allowing them to mix and match themselves. The ways in which they connect can be turned into a solution to an NP problem. That is, in effect, parallel computing at its height -- millions of simultaneous computations. The computations are nondeterministic and statistical, but statistics usually pan out.
I still go out to dinner, and will not ever put my card# on the web.
You seem to have decided that ecommerce is dangerous based on one idiot's comically bad study. Did you ever find some real numbers? Take a look at how many credit card numbers have been stolen on the web other than by wankers giving their number to a corrupt porn site operator. It's much less likely than giving it to a dishonest waitron.
But the $5000 device is today's cost. 30 years from now, that device is practically free, especially since assuming no further advances is silly. So the amount of money needed to crack your 1024-bit protected secret 30 years from now is trivial. And the time estimate of 1 year today is probably a big overestimate.
Of course, your credit card will have expired by then. But if someone wanted to take one of your documents and claim you said something today that you really didn't, it can be done trivially 30 years from now. IOW, 30 years from now I would be able to say you said X 30 years ago.
My take on large-prime encryption schemes has always been that even if the "useful" key sizes were to be rendered breakable by advances in hardware, the problem would be self-solving: this advanced new hardware would bump up the "useful" key size. (at the point 512 bit is breakable, you've got enough horsepower to make 4096 bit keys usable.) So at any given moment in time, personal computers are fast enough to encrypt stuff such that no existing computer can crack it in a "reasonable" time.
Does this theory apply here? Does this hardware push forward the "useful key size" by an amount proportional to its codebreaking power?
Because IDEA is a symmetric cipher, and PGP is an asymmetric cipher.
This basically means that in order to use IDEA you need a different key for each person you want to communicate with; with asymmetric encryption you just need one keypair.
I do believe that PGP uses IDEA in its "guts". That means that the public key stuff is only used to exchange a one-time IDEA key, which is then used to encipher/decipher (the reason for this is speed).
I'm no cryptography expert, but I do know this much: 128-bit RC4 is not the same as 128-bit DES, or 1024-bit DES. While 512-bit DES may soon be compromised, and 56-bit RC4 (or was it RC5?) has been compromised (distributed.net cracked the latest 56-bit RC5 (RC4?) contest in under 24 hours, IIRC), 128-bit RC5 (RC4?) is still unbreakable for all practical purposes. -----
other dangers of using your cc# while at dinner...
dishonest employees -- i have a lot of friends that get cc#'s this way. they wait a couple of weeks and then buy tons of stuff....
forgetting your card -- this happens more than people think
my favorite are atms and those nifty checkcards (i work for a bank)
i cut up at least 10 checkcards a day that people forgot in the machine. at least 1/3 of those are turned in by people who saw the card sticking out before it was eaten by the machine... i have NEVER had a bad experience online.
Don't let your fears control what you do or don't do. Did they tell you what the odds of getting your cc# stolen over the inet were? or was the simple fact that the possibility was there enough to make you not send it? (not trying to be rude...)
Usually individuals are only liable for the first $50 of fraud. Everything thereafter is picked up by the card issuer (credit card) or FDIC (bank account).
TWINKLE doesn't crack RC4, it cracks RSA. RSA is a public key algorithm. It's mostly used to transfer symmetric (like RC4 and DES) keys between computers, and not really used to encrypt data at all.
TWINKLE doesn't attack the encryption algorithm. It attacks the key exchange algorithm, RSA. 128bit RC4 is still very secure. Almost all SSL sites use 1024bit RSA keys for key exchange, thus aren't crackable with TWINKLE.
is my 2048 bit key still safe? I mean, when I was having PGP create my keys I decided that twice the "High" security would be good, so I made a 2048 bit key... is it still secure? is it still excessive?
Ok, let's assume that this means that in 30 years, 1024 bit keys are going to be crackable using this method. This means that any secret which I use a 1024 bit key to hide right now is good for 30 years, assuming no other advances or weaknesses found in the cryptosystem. So my current credit card number can be cracked in 30 years + 1 year to crack. For most "secrets" this is ridiculous. Remember also that the cost is $5000 per device after design is complete, and it would take several of them plus time on a Cray or equivalent serial processor (part of the algorithm runs serial, not parallel). Remember to ask yourself if your secret is going to be worth the amount of money and time to crack it. 1024 bit keys are still enough for non-ultra-top-secret-eyes-only-this-message-will-self-destruct stuff 90+% of the time.
But seriously, this was an interesting article. It just goes to prove that if you are using key sizes of less than 512 bits for public key cryptosystems, you may be vulnerable to advances in factoring rather than advances in raw processor speed.
The main advantage of this is that 512 bit keys can be factored in 9-10 weeks as opposed to 6-7 months with the previous application of this factoring method (described in the paper). This means that with this equipment, and assuming that the estimate is correct, 768 bit keys can be factored in 1038 years (Shamir's estimate of 6000 * len(512)). A 1024 bit key would be factorable in approximately 1x10^6 years with this method and optimal conditions; however, such a number requires exorbitant space, upwards of 256 GBytes for the sieve and 10 TeraBytes to store its matrices.
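A quick sanity check of the 768-bit figure, using nothing but the scaling quoted above (my arithmetic, not the paper's):

```python
# Sanity check of the quoted estimate: Shamir's figure of 6000 * t(512)
# for a 768-bit modulus, with t(512) taken as 9 weeks.
WEEKS_PER_YEAR = 52
t_512_weeks = 9                         # quoted sieving time for 512 bits
t_768_years = t_512_weeks * 6000 / WEEKS_PER_YEAR
print(round(t_768_years))               # ~1038 years, as stated above
```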
Bottom line, any key of 512 bits is good for about 9-10 weeks. The recommended minimum key size is still 1024 bits. The paper's conclusion sums this up best:
Conclusion: The idea presented by Dr. Shamir is a nice theoretical advance, but until it can be implemented and the matrix difficulties resolved, it will not be a threat to even 768-bit RSA keys, let alone 1024.
It's more important to have the freedom to use strong encryption so that we don't have to worry about being forced to choose weak keys in order to communicate.
I don't think that's the point here. Even then, the "computation power" of the tank scales polynomially with its size (at best).
So with regard to NP-trapdoor functions, all you have achieved is to raise the stakes a bit and have to start with, say, a 2048 bit key instead of 1024. But the fundamental problem is that some (fixed) number l of additional bits will still double the effort for cracking. Take 1000*l bits and every single atom in the universe can work on your problem in parallel until hell freezes over. (Quantum computers may be a different thing, though, but even that remains to be proven.)
OTOH *if* you really find a polynomial algorithm for some (arbitrarily weird) specific NP-complete problem, then you can apply this very algorithm to any problem in NP, because NP-complete problems can be emulated by one another with only polynomial overhead.
TWINKLE attacks by factoring. This is useful for most asymmetric algorithms but not for symmetric algorithms, which do not use prime-number maths. My estimate for the PII 300 (and in that case it doesn't really matter much whether it's a 5GHz or a 50MHz) would be a couple of million years for 128 bits. Only, of course, if trying all keys is the easiest way to crack the encryption.
Look at distributed.net, trying to crack a 64 bit key for 558 days now. Checked only 7.9% so far. And remember, 65 bit takes twice as long, and 66 bit twice as long as 65 bit...
Oh wait, I just ran the numbers through bc: 128 bit has 340282366920938463463374607431768211456 possible keys; if we check 1 million keys per second, we would need on the order of 10^25 years to search them all. Have fun.
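Redoing that arithmetic (under the same assumption of one million keys per second and an exhaustive search of the keyspace):

```python
# Redoing the brute-force estimate from the post above.
keys = 2 ** 128
rate = 1_000_000                        # keys checked per second (assumed)
seconds_per_year = 60 * 60 * 24 * 365

print(keys)                             # 340282366920938463463374607431768211456
print(keys / rate / seconds_per_year)   # ~1.1e25 years to try every key
```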
well, neither do I...but if one could recover credit card info or whatever, a criminal could find it a very profitable wait. 10 weeks later, most people will still have the same credit cards...
While I do think your argument has technical merit, it overlooks a major problem with crypto stuff - installed base.
Upgrading hardware and software to use larger key sizes is decidedly non-trivial. Most (if not all) hardware has a fixed key size it expects (or at the most, a choice of a couple of key sizes). Thus, the protocols that use crypto specify key lengths.
Good examples are SSL and PGP. SSL supports 512, 768, and 1024 bit key sizes now. PGP supports 512, 768, 1024, and 2048 bit keylengths. While it is not terribly hard to change the software itself to use larger keysizes, changing a published Spec often takes several years.
Given the foresight, we could keep ahead of crypto advances. For instance, specifying NOW that PGP6 should support 4096-bit keys would be good. Unfortunately, software people tend to be very short-sighted, and don't look at the long-term ramifications - they only tend to fix stuff that's broken NOW, not what might break in 5 years.
OK, I sat down and re-read the major sections of both Schneier and Knuth (to refresh my math memory), and then something in Silverman's analysis jumped out at me:
Current (and expected future) uses of Number Sieve factorization (currently, the General Number Sieve is most efficient) are composed of two parts: the sieving of factors, and then solving a matrix of all the possible factoring equations.
Historically, the first operation has dominated the running time; however, this first process is also the most amenable to increases in speed.
As Silverman pointed out, the RSA-140 (465-bit) challenge required 4 weeks of sieving on 200 machines, then about 4 days for a single super-fast box to solve the resulting matrix. Using Shamir's new methodology, we can expect a 512-bit number to be done in 5 weeks on 20 machines, with another 4 weeks to do the matrix.
The bottleneck has now become the matrix solution, which is a non-parallelizable problem (for now). All that can be done on it currently is to rely on Moore's Law, which doesn't significantly affect the security of larger keys.
If anything, Shamir's invention will now push the focus of attacking RSA from discovering the factoring equations (the "sieve" part) to improving the matrix solution problem.
The real danger here is from the NSA and other organizations that may have discovered a way to significantly parallelize the matrix solution. If they can do that, well, RSA is going to be screwed. However, there is no current indication that they can do this. But don't say I didn't warn you.
For those paranoid, here are some relevant numbers on the now-dominant times to solve the matrix for relevant key-bit sizes (all assumptions are using 1999 hardware; divide all times by 10 for each 5 years that have passed to account for Moore's Law).
512-bit key: ~4 weeks
768-bit key: ~1850 years
1024-bit key: ~5 million years
Remember, these numbers are NOT reducible by throwing more hardware at the problem - this is a (currently) serial operation, and must be done by a single machine. I don't see us being able to produce a single-CPU box a million times faster than state-of-the-art within the next two decades (Moore's Law predicts one about 2030 or so). By then, the memory requirements will be mundane. (if we can do 10GB now trivially, 10TB in 30 years shouldn't be any problem. In fact, we should be able to easily do 10TB about 2015 or so.)
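Applying that rule of thumb literally (my arithmetic, under the poster's own assumptions) to the ~5 million year figure for 1024-bit keys:

```python
# Applying the rule of thumb above (divide the time by 10 for every 5 years
# of Moore's Law) to the quoted ~5 million year matrix-solution estimate.
t_years = 5_000_000
year = 1999
while t_years > 1:
    year += 5
    t_years /= 10
print(year, t_years)    # drops below one year around 2034 under these assumptions
```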
In the end, 1024-bit keys should be immune to advances in sieving technology for at least 20 years. The worry is advances in matrix solutions, and development of methodologies better than Number Sieves. But, by the very nature of things, this is unpredictable.
Best Guess (and I'm not Bruce Schneier): 1024-bit should be immune to everything until at least 2015, after which 2048-bit should last to mid-century or so.
Sorry about the bad link for the Applied Cryptography book. I must have messed it up after I previewed it but before I did the final submit. Maybe not. Well, I will just leave it up to you to do a search at http://www.amazon.com [amazon.com]. Search for Bruce Schneier's book called Applied Cryptography: Protocols, Algorithms, and Source Code in C.
If you're interested in quantum computation and cryptography you may be interested in checking out the Los Alamos site on quantum crypto [lanl.gov].
Also some of you may be interested in Bruce Schneier's book Applied Cryptography: Protocols, Algorithms, and Source Code in C, which was previously mentioned in other posts. You can find information on it at this Amazon.com link [amazon.com].
I might as well also give the link to a site on basic cryptology, called a "beginners page on cryptography" [ftech.net]. It may or may not help/interest readers.
On a side note, I really have enjoyed reading everyone's comments on this subject today. Hopefully these links don't come too late in the "day" for anyone to get good use out of them.
I'm mildly confused by the timings, so how long would it take a decently powerful (as in, PII 300) machine hooked in to one of these devices to crack that?
A very long time, though still just a linear factor longer. Your computer has to solve one hell of a huge matrix problem. This requires enough RAM to hold the matrix and any working data (gigabytes or _more_; see the article), and time proportional to the cube of the matrix size (or possibly better if you're using something other than Gauss-Jordan reduction).
They used a supercomputer to do the matrix solving. If your machine has n times fewer bogoMIPs, then it will take n times longer, assuming that you can support the required RAM.
Assuming that it is the matrix solution and not the sieving that limits the solution time. This would probably be the case if you're using a PC to process the matrix.
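For a rough feel of how the matrix step scales, here is a small sketch; only the 4-day RSA-140 matrix figure comes from this thread, while the 4x-larger matrix and 50x-slower machine are made-up example numbers.

```python
# Rough scaling sketch for the matrix step, using the cubic cost of
# straightforward Gaussian elimination mentioned above (illustrative only).
def matrix_step_time(base_time, base_dim, dim, speed_ratio):
    """Scale a known solve time to a larger matrix and a slower machine.

    base_time   -- measured time for a matrix of relative size base_dim
    dim         -- relative size of the matrix you actually need to solve
    speed_ratio -- how many times slower your machine is (the "n times
                   fewer bogoMIPs" above)
    """
    return base_time * (dim / base_dim) ** 3 * speed_ratio

# e.g. 4 days on a fast box for the RSA-140 matrix, a matrix 4x larger,
# and a PC assumed to be ~50x slower than the supercomputer used:
print(matrix_step_time(4, 1, 4, 50), "days")   # 4 * 64 * 50 = 12800 days
```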
A few points in your post puzzle me; I'd appreciate it if you could elaborate on what you mean:
Today's "conventional PC" uses silicon-based CPU technology, but that will change in a few years when IBM begins to mass-produce copper-based chips.
IBM has already demonstrated chips with copper _interconnects_, and many manufacturers plan to use this technology, but I've never heard of using copper as a _substrate_. Conventional chips use a silicon _substrate_ and aluminum interconnects. What IBM technology are you referring to?
An "electro-optical" computational device is the next paradign shift in computer engineering. By transmitting data via light (LEDs!!!), the current CPU problems with heat, electromagnetic interference (EMI) and wafer size all disappear.
I'm afraid that this is not strictly true. In fact, feature size for optical components would be a big problem. Optical devices can't have features much smaller than a wavelength of light. For visible light, this is on the order of 0.5 microns. Electrical devices, like conventional integrated circuit chips, can have features that are much smaller. This tends to outweigh most of the advantages of using optical "chips", should such chips be developed. They would be bulkier, and wouldn't be faster, because while electrical signals propagate slower than light, they would have a much shorter distance to travel. If you try using light with a much shorter wavelength, you wind up destroying your substrate (the photon energy must remain lower than the energy required to dislodge an atom in the device).
Heat dissipation is and will remain a problem for electrical devices, but photons get absorbed too.
Electro-optical devices, OTOH, get the worst of both worlds. The only situation in which they're really practical is when you need to send signals over a (relatively) large distance without much degradation, as is apparently the case with the computer being built here.
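To put numbers on the "few eV" limit mentioned above, a quick photon-energy calculation (my illustration):

```python
# Photon energy E = h*c/lambda for a few wavelengths, to show why shrinking
# the wavelength quickly runs into the few-eV binding energies quoted above.
h = 6.626e-34      # Planck's constant, J*s
c = 3.0e8          # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

for wavelength_nm in (500, 250, 100, 10):
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{wavelength_nm:4d} nm -> {energy_eV:6.1f} eV")
# 500 nm visible light carries ~2.5 eV per photon; deep UV and shorter
# wavelengths quickly exceed the binding energy of valence electrons.
```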
TWINKLE attacks by factoring. This is useful for most asymmetric algorithms but not for symmetric algorithms, which do not use prime-number maths.
However, the original question was about the timings cited in the TWINKLE article:
I'm mildly confused by the timings, so how long would it take a decently powerful (as in, PII 300) machine hooked in to one of these devices to crack that?
My response stands for that, though as you point out these timings don't apply to other classes of encryption.
Thanks for the correction: silicon-based chips with copper substrates, NOT copper-based chips.
I hope you mean interconnects instead of substrates...
"They would be bulkier, and wouldn't be faster, because while electrical signals propagate slower than light, they would have a much shorter distance to travel. If you try using light with a much shorter wavelength, you wind up destroying your substrate (the photon energy must remain lower than the energy required to dislodge an atom in in the device)."
An electro-optical device does not use a silicon substrate, nor does it need the interconnect metals that we see today. Furthermore, the materials used for the device's internal parts and casing allow researchers to experiment with light wavelengths outside the visible spectrum.
My argument holds no matter what materials you use to build the optical device and no matter what processes are used to fabricate it. Whatever you do, you just _can't_ get the feature size of optical devices much smaller than the wavelength of the light used, or signals bleed from one computing element into another. Likewise, no matter what material you make the device out of, you can't have photons with an energy of more than a few eV, because that is the typical binding energy of valence electrons in atoms. If you decide to use deep UV or what-have-you, your device _will_ degrade due to atoms being dislodged and bonds being broken, no matter what it's made of.
But remember: Prof. Shamir's theoretical device is NOT A CPU! It is an "electro-optical sieving device"
How does this affect my points?
Because of its narrow function, the "TWINKLE" box will likely have a much cleaner, simpler design
This is true of any special-purpose computer. What of it? See Deep Crack for an example, or look at DSP designs (they eliminate a lot of the scheduling cruft that conventional chips have, at the cost of requiring the compiler to optimize code for them).
Just look at current electro-optical devices used for various computing needs today: they have simple, elegant designs, fewer parts and are often smaller than their fully electrical counterparts operating at the same data transmission rates.
An electro-optical component is, for the reasons mentioned previously, bulkier than a transistor. They also, as mentioned previously, suffer from the limitations of both electrical and optical systems, because they incorporate both. Electro-optical components are indeed very useful for a moderate-sized set of applications; however, they aren't magical.
That is, encryption using a key of x bits could be broken by the non-deterministic machine trying all 2^x possible keys in parallel. I think I recall Bruce Schneier saying something to this effect in one of the "Crypto-gram" newsletters, but I may be misremembering. Any theorist slashdotters willing to comment?
If you have a good way of recognizing decrypted data, then you could indeed solve any fixed-key-size cryptographic problem in exponential time, just by counting through all possible keys and checking to see if you get readable data after using each one.
However, I'm not sure that a P=NP proof would help with this. You'd have to prove that iterating through each possible key is a problem in NP, as opposed to NP-hard and worse than NP-complete.
It might be better to analyze each cryptography algorithm separately, and see if you can prove any of them to be in NP. These would be vulnerable to a P=NP proof.
Quantum computers of the type previously described could certainly iterate through all of these keys within a reasonable length of time. The construction of practical quantum computers, OTOH, is left as an exercise:). It will be neat if it turns out to be practical.
As a side note, cryptography schemes that use a "one-time pad" of noise that they xor with the message would *not* be vulnerable to QC or P=NP attacks. If you iterate through all possible keys, you get all possible messages, which gains you nothing. Only if the key length is significantly shorter than the total amount of traffic sent will a brute-force search work. This is true for most cryptography systems, for practical reasons.
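A toy sketch of the "check whether you get readable data" brute force described above (mine, not the poster's): a 16-bit XOR key stands in for a real cipher, and the plaintext recognizer is deliberately crude.

```python
# Toy illustration of brute-forcing a tiny key by checking whether each
# trial decryption "looks readable".  Real key spaces are 2^128 and up.
import itertools

def xor_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

def looks_like_text(data: bytes) -> bool:
    """Crude recognizer: mostly printable ASCII plus whitespace."""
    return all(32 <= b < 127 or b in (9, 10, 13) for b in data)

secret_key = bytes([0x3a, 0x7f])
ciphertext = xor_decrypt(b"attack at dawn", secret_key)    # XOR is its own inverse

for guess in itertools.product(range(256), repeat=2):      # only 2^16 keys here
    if looks_like_text(xor_decrypt(ciphertext, bytes(guess))):
        print("candidate key:", guess)
        break
# With a one-time pad the key is as long as the message, so every guess
# yields some "message" and the recognizer tells you nothing.
```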
Could you point me to more info on the P=NP concept? Or if not, provide an explanation?
"P" and "NP" are categories of problem. P problems can by solved relatively quickly, but it isn't known if NP problems can or can't.
A problem is a "P" type problem (is "in P") if it can be solved in polynomial time or better; that is, the number of steps required to produce a solution for input of n bits is proportional to (or better than) n^k, for some _constant_ k. This might be a solution that's O(n), or O(n^2), or O(n^534), but it's still polynomial (or better, for something like O(log n)).
A problem is an "NP" type problem (is "in NP") if you can prove that your solution is correct in polynomial time. It might take polynomial time to solve or it might not, but you can prove the answer in polynomial time. The factoring problem is a good example of a problem in NP. No presently known way exists of finding the factors of a number in time proportional to a polynomial of the number of bits, but you can prove that two numbers (say a and b) _are_ the factor of a larger number (X) just by multiplying a and b and seeing if you get X. The multiplication time is proportional to (or better than) a polynomial of the number of bits in X (it's proportional to X^2 for the straightforward way). So, NP problems can be ugly, but there is a limit to how ugly they can get.
A problem that is "outside of NP" can't be solved in polynomial time (otherwise it would be in P) and has solutions that can't be checked in polynomial time (otherwise it would be in NP). I can't think of a good example off the top of my head, but they exist.
P is in NP; you can prove that a polynomial-time answer is correct just by solving the problem again. However, nobody knows if _all_ problems in NP can also be solved in polynomial time. This would mean that P = NP (the two sets of problems are the same, because everything in both of them is in P). A proof that P = NP would prove that any encryption scheme that depends on a trap-door function in NP is intrinsically weak. On the other hand, a proof that P != NP would prove that anything based on an NP-complete trapdoor function (I'll get back to this) is sound against anything short of a quantum computing attack. This is especially of interest to people who deal with prime numbers, because the factoring problem is in NP, and a P-time factoring algorithm would be a Very Useful Thing.
NP-complete problems are problems that are in NP, but that are at least as hard as anything else in NP (or more accurately, any other problem in NP can be reduced in polynomial-time to this problem). An NP-hard problem, for reference, is a problem like this that may or may not even be in NP at all (it's just at least as hard as anything in NP).
This brings up another very important point. All problems that are in NP can be reduced in polynomial time to any NP-complete problem. This means that if you can solve even one of the NP-complete problems in polynomial time, you can solve them all in polynomial time (just with worse exponents). An algorithm that solves an NP-complete problem, or a proof that no such algorithm exists, is one of the holy grails of mathematics, as it would settle the question of whether or not P = NP.
And I hope you've been taking notes, because there'll be a short quiz next period. O:) [ducks!]
Current (and expected future) uses of Number Sieve factorization (currently, the General Number Sieve is most efficient) are composed of two parts: the sieving of factors, and then solving a matrix of all the possible factoring equations.
Ugh. I tried to follow how the General Number Field Sieve worked by reading some lecture notes by the guy who invented it. After the first page of the introduction, I just lost it. I wasn't that confused at the quantum computing lectures I went to.
What courses do I need to take in order to understand how it works? Some more linear algebra and a helluva lot more math in general? -- Four years in jail No Trial, No Bail *** FREE KEVIN *** [kevinmitnick.com]
What this means is this: About 10 days (reread the article if you don't believe me) to crack an RSA-140 encryption which uses a 465 bit key.
Netscape and most other browsers running encryption software of some sort use 128 bit encryption.
Think about that. These devices would go through 128 bit encryption in very little time at all, maybe a day or two. You want $10,000 bucks, do the engineering and win the RC5-64 competition in a day or less. That is what these devices will mean. Your credit cards aren't safe anymore. They may never have been.
Just thought I'd point this out because some posters above got their times backwards. 9-10 weeks is what it takes now, with damn good hardware.
I feel much better about upgrading SSH now. 1024 bits of encryption (at the moment, I might up that at some point). ph43drus
Well, as I remember from my crypto course at uni, your PGP stuff will NEVER be safe. As far as I know PGP is a DES encrypted message (40 significant bits in the key) with a randomly chosen key.
The random key is then included in the header of the message encrypted under the Public/Private keypair.
This means: 1) If you can break DES by brute force (not that hard) then it does not matter what size of public/private key you use.
2) If you can predict the key that the algorithm will 'randomly' choose then you can break an implementation of PGP in constant time.
As with all crypto, you've got to know what's under the hood before you can make any real judgement about a system.
One of the nicest suggestions from my crypto course was to compress (just with something like gzip) all messages, as this reduces the occurrence of patterns which attackers search for.
I don't know if gzip leaves some form of exactly formatted header block on a file, but if not then this could dramatically increase the security of your mail. If it does, then it would dramatically decrease the security, because an attacker would know exactly what they were looking for.
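For what it's worth, a quick check (my sketch, not the poster's) shows gzip output does start with fixed magic bytes, so an attacker does get a few bytes of known plaintext to look for:

```python
# Every gzip stream begins with the magic bytes 0x1f 0x8b and a method byte
# (0x08 for deflate), regardless of the message being compressed.
import gzip

for message in (b"hello", b"attack at dawn", b"x" * 1000):
    print(gzip.compress(message)[:4].hex())   # 1f8b08.. every time
```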
Finally bear in mind that for information to be considered 'secure' it has to cost more to break the encryption than the information itself is worth.
So if all you want is relatively good privacy on your personal emails then yes, PGP will be fine - to my knowledge it still requires a significant amount of effort for a non-government body to break DES (who knows about GCHQ/NSA/etc) - if nobody _needs_ to know what's in your mail then you do not need to make it really secure.
I stand corrected... I was aware that the design was intended to reduce computational complexity during en/decryption - but I thought it was fixed on using DES.
Commercial websites seem to have RC4-128 encryption as the top of the line.
I'm mildly confused by the timings, so how long would it take a decently powerful (as in, PII 300) machine hooked in to one of these devices to crack that?
RC4-128 is based on a symmetric key system (where you have one key for encrypt/decrypt) whereas the cryptosystems the article is referring to are asymmetric (you have a private key and a public key).
For the symmetric systems, brute force checking of the whole keyspace is the only way to crack them. That means checking all of the 2^128 (lots!) possible keys. Asymmetric key systems are vulnerable to factoring attacks, as they make use of really big prime numbers to protect data.
For everyone not in the US (your export laws suck, people!) check out Fortify.Net [fortify.net] to upgrade Netscape to 128-bit crypto.
The class of P-hard problems are soluble in polynomial time (for example n^3+2n^2 where n is the length of the input) by a deterministic Turing machine.
The class of NP-hard problems are soluble in polynomial time on a non-deterministic Turing machine. A nondeterministic machine will explore many possible solutions in parallel.
You can simulate an NDTM on a DTM with an exponential time penalty. A proof that P = NP would presumably show us how to do this simulation without the massive time penalty. It's still an open question whether P = NP, but I wouldn't put any money on it being proved either way in the foreseeable future.
As far as I know PGP is a DES encrypted message (40 significant bits in the key) with a randomly chosen key.
This isn't quite correct -- PGP uses the RSA public key crypto system (the one with the large prime numbers as keys, which the article was originally about) to encrypt a session key. The session key can be any length you like, and used for any symmetric crypto algorithm you want.
Whilst it's perfectly possible that PGP uses 40-bit DES, I know my version uses 128-bit IDEA. The reason it works like this is because RSA is very slow compared to symmetric cyphers like 3DES, Blowfish and IDEA.
You're correct when you say that compressing a message before sending it increases the difficulty of breaking it -- PGP does this automatically unless you tell it not to. Try PGP'ing an uncompressed text file, and you'll find that the ciphertext is smaller than the plaintext.
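A minimal sketch of that hybrid scheme, using the modern Python `cryptography` package purely for illustration; this is not PGP's actual code or message format.

```python
# Hybrid-encryption sketch: a random session key encrypts the (compressed)
# message with a fast symmetric cipher, and only that small key is encrypted
# with slow public-key RSA.  Illustrative only; not PGP's implementation.
import gzip
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: compress, encrypt the body with a one-time symmetric key,
# then encrypt that key with the recipient's public key.
session_key = Fernet.generate_key()
body = Fernet(session_key).encrypt(gzip.compress(b"meet me at the usual place"))
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt.
recovered_key = recipient_key.decrypt(wrapped_key, oaep)
print(gzip.decompress(Fernet(recovered_key).decrypt(body)))
```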
As the article pointed out, the huge bottleneck that the NSA (or anyone else) would have to solve in order to crack RSA for large key sizes is the ability to solve giant matrix equations quickly. We can probably guess that they haven't solved this part of the problem because that would lead to such extraordinary advances elsewhere - I'm thinking weapons and bomb development - that we'd be seeing much, much bigger and meaner things floating around out there if they had. Of course, you can still worry about quantum algorithms, which don't have a matrix-solving step... (Note to the paranoid: That was sarcasm. QC factorization algorithms are (a) not entirely proven - sources of noise, and possible very sneaky mathematical problems still may lurk and (b) not within twenty to fifty years at least of the capacities of any human technology.)
Knowing my limits.... I did not read the info provided on Shamir's invention involving factoring equations and matrix solutions. I just waited for y'alls' comments which I knew would help me understand the impact.... THANKS MUCH.
I understand the significance. I hope the global community will remain out front of regional and corporate government advances in HW/SW technology application. Phil Zim... was one of the first to give the individual the ability to secure the right to privacy, a basic human freedom. May the global community prove always to be one step ahead of any regional/corporate government that would try to oppress individual freedoms (Human Rights). THANKS AGAIN for helping me to understand.
Judging from a site I visit now and then, www.infowar.com, I do not doubt how such weapons would be used in the future.
The Travelling Salesman problem is an example of this type of problem, if I remember correctly. Oh, and n-body orbital equations too.
I don't think you're right about the N-body problem. The problem with that is that it cannot be solved analytically: it's a problem in calculus rather than in discrete mathematics.
actually, you are talking about P-complete and NP-complete. P-complete is when you can find a solution in polynomial time, and NP-complete is where finding a solution is going to take larger than polynomial time. Notice that P-completeness implies NP-completeness. NP (I suppose that's what you mean by NP-hard) problems are ones that take polynomial time to check the solution. That is, given a solution, we check if it in fact is a solution. Notice that it says nothing about the largest/smallest possible solution, b/c then we could solve the problem in P time.
Non-deterministic Turing machines are (i think, that was last semester) actually NPSPACE. That is, the memory they are going to use is exponential (they take multiple sets of inputs and check all of them; the number of possible things to check is exponential -- thus parallelism). Deterministic Turing machines could emulate those but that would take exponential time. So determinism and non-determinism are equivalent but in different spaces of programs.
Note that was last semester. I do not actually remember anything from that class except for D and ND state machines and context free grammars.
actually the RSA encryption (or rather decryption) problem is NP, b/c it takes twice as long to solve if you increase the key length by one bit. it is stupid to think of NP if the key length is fixed, b/c if you can solve it in O(p(x)) for length x, then solving the whole thing for any length would be O(p(x)*x) which is still polynomial if p is polynomial.
afaik, there are no real non-deterministic machines; otherwise a whole lot of problems would take less time, like vertex cover or travelling salesman.
on a related note, proof of P=NP is still a long time away, because no problem originally thought to be NP-complete has been shown to be P-complete, b/c it all started with the independent set or something simple like that, and most other problems were reduced to it.
This isn't a really big thing. Keys of this length have always been on the fringe of "secure" as computers get faster and algorithms improve the "secure" keysize will gradually become bigger and bigger.
This is not a factoring breakthrough. Breaking your 2048bit key will take a mathematical breakthrough (if one even exists to be found; you know that whole P!=NP discussion?) or a radical breakthrough in computation.
The box in sneakers was (if I read the movie correctly) a proof of P=NP. It could solve any encryption relatively instantly. Shamir's box still bangs its head against exponential complexity; increase the key length a little and the necessary resources double. This means that people aren't going to be breaking into air traffic control computers with Shamir's box, since the airlines can just increase their key lengths to keep their secrets (even if they have too many). I'm glad this article was here -- the NYTimes article was totally vague, and when I first read it I thought Shamir might actually have come up with P=NP. That would be something.
There might be some explanations lying around in earlier /. threads on the Shamir NYT article. Schneier's Applied Cryptography has some brief stuff on it; it's also described in more depth in van Whatshisface's Handbook of Theoretical Computer Science, volume A I believe. A more down-to-earth description of it can be found in The Turing Omnibus, if your local library has a copy.
I had a discussion on another thread about this, with few conclusions drawn. My speculation was that any fixed-key encryption algorithm could be broken by a non-deterministic machine in polynomial time. That is, encryption using a key of x bits could be broken by the non-deterministic machine trying all 2^x possible keys in parallel. I think I recall Bruce Schneier saying something to this effect in one of the "Crypto-gram" newsletters, but I may be misremembering. Any theorist slashdotters willing to comment?
You're right -- a proof of P=NP is not the same thing as building a non-deterministic machine. So a proof of P=NP would not break all fixed-key encryption, but building a non-deterministic machine would. Thanks for the clarification; my schooling in complexity classes pretty much stopped at P and NP.:)
EthanW> This assumes [...] that the encryption strength is based on an NP problem (rather than something even harder).
That was the essence of the question. I asserted that any fixed-length-key encryption could be broken in polynomial time by a non-deterministic machine. I failed to note the caveat that the set of all functions computable in polynomial time by a deterministic machine (even if P=NP) is a proper subset of all functions computable in polynomial time by a non-deterministic machine.
Misha> it is stupid to think of NP if the key length is fixed. b/c if you can solve it in O(p(x)) for length x, then solving the whole thing for any length would be O(p(x)*x) which is still polynomial if p is polynomial.
I'm afraid I couldn't understand what you meant here. Could you clarify it?
The question wasn't about RSA decryption, it was about general fixed-key decryption, regardless of whether the cryptographic problem's decryption on deterministic machines relied upon the intractability of NP-complete problems.
There are a few non-deterministic computational systems around, but they are impractical. One of the RSA triumvirate had an article in Scientific American about a DNA computer he had designed. The system was Turing-complete and non-deterministic, but totally impractical in its current state. (IIRC, a computational step could be non-deterministic, but each step also required a technician to do a gel electrophoresis and DNA analysis. Yechh!) Quantum computers are also non-deterministic but currently impractical.
Um, you do know the NSA has its OWN fab (DOD owns it; it's also used for making other military electronics), and you can bet your ass it's as far ahead of other fabs as the NSA is ahead of general technology.
Well, since the most likely proof of P=NP would be done by providing a polynomial time algorithm for an NP-complete problem, it would provide a poly time crack for most encryption.
All problems in NP can be converted to any NP-complete problem with at most a polynomial increase in time (by the definition of NP-complete).
Now take the problem of cracking the encryption. If it is in NP, then it can be converted to an instance of our NP-complete problem, and the poly time algorithm from our proof can break it in poly time.
This assumes, of course, that the proof of P=NP is not done in some other highly weird way, and that the encryption strength is based on an NP problem (rather than something even harder).
I saw once somewhere, vague enough?, that a 'Study' had shown that sending your credit cards over the net was 80 times safer than going out to eat at a restaurant (and paying with the card).
After reading the small print I noticed that: The 'dangers' of the web were: getting your # stolen. The 'dangers' of the dinner were: car accident, fire, flood, earthquake, 'Acts of God', theft, mugging, murder, etc... (pretty much everything bad that can happen to you out of your house)
I still go out to dinner, and will not ever put my card# on the web.
There were real numbers... I have forgotten them. I just thought it was funny how they had to add in acts of God in order to skew the numbers. (I remember that)
I don't think e-commerce is dangerous, I have never ordered anything online not because of fear that 'they' will get my cc#, but because I would really rather go outside and purchase whatever from another person in a store. (one of those human interaction things)
I think e-commerce is safer. I thought adding every agoraphobe's fear into the equation was a bit humorous. It probably would have been safer without all the disasters.
I don't know why, but I like shopping offline. I work all day for my money, I want there to be some substance to spending it. That means going to the store, picking through stuff, purchasing the items and possibly getting mugged on the way to your car:) The CLICK, your money is gone, is just too empty for me.
I use e-commerce to find stuff to buy (www.pricewatch.com) but if I want to buy what they are selling, I try to find an 800 # so I can order it over the phone from a person. I call Amazon and order my books over the phone.
By the way... when I read about the LEDs on the TWINKLE thing I couldn't help thinking of those wall-size 'computers' made of lightbulbs in the old sci-fi shows. *blink* *blink* *blink*
Note that the NSA 'suggestions' for DES turned out to be fixes to protect against an attack that was not discovered and described openly until more than 10 years after DES was released. If this new RSA attack is out today, I'd have to assume that the NSA has had this sort of technology for at least 10 years.
Everything I get through netscape that's secured is encrypted with a 128-bit RC4 key, I assume. I tried to find some information on how secure normal web browsing really is, but I didn't come up with much. Does anyone have any good resources for how to get insanely high encryption over ssl if 128-bit rc4 isn't especially secure?
My personal take is that as long as the encryption raises the bar enough that the script kiddies can't get at it, it's as safe as the telephone. For instance, some prisons contract the prisoners as phone operators (rehabilitation) - think about that the next time you give your card # to the operator on the end of that 800 number. Do you know who that really is? Or how about that kid making minimum wage at the mall?
I have been a sysadmin of a network that was sniffed. The program did a good job of getting passwords, but the rest of the traffic required a lot of work to put back together. The amount of traffic on internet backbones would make logging a big storage and sorting problem. If you tried to log all the traffic through a major e-commerce web site (assuming you get in and start the sniffer) you'd have to spend a lot of time sifting data out.
I think it's more of a risk that one of these e-commerce places would have the customer/ordering database compromised - names, addresses, card #s, etc. just waiting. It could be something as simple as throwing out that old computer and not destroying the data on the hard disk.
Mayhap it's just me, but I think that there are very few of us who need to worry about someone with $35,000 and a few months to kill. Anyone who would hack my info wouldn't work THAT hard, and I'm not that important. I can see where the government would have its collective panties in a bunch over it, but I'll stick with my 512'er for a while longer.
Although Prof. Shamir's device is still a theoretical one, it is based on existing technology. That is what makes his paper so important!
Today's "conventional PC" uses silicon-based CPU technology, but that will change in a few years when IBM begins to mass-produce copper-based chips. A change in material design allows slimmer, faster, cheaper CPUs to be manufactured. The idea of copper chips has been around for decades; IBM is the first private-sector organization to release operational details to the public. This has no bearing on military technology, which is usually several steps ahead of what is publicly available.
An "electro-optical" computational device is the next paradign shift in computer engineering. By transmitting data via light (LEDs!!!), the current CPU problems with heat, electromagnetic interference (EMI) and wafer size all disappear.
REALITY CHECK: High-speed "electro-optical" data transmission devices already exist! Businesses around the world use GigaHertz Ethernet, Fiber-Channel devices and various fiber optic lines every day. Various military organizations are far ahead of the game in high-speed communications. True, these devices work in entirely different ways than does a CPU. But the theory of electro-optical devices has been reality for years!
The problem facing researchers today is miniaturization: shrinking currently known timing and data switching designs to CPU-scale size, or creating new optical gates and relays that defy "conventional wisdom" to leap past today's hurdles. But, of course, this is the challenge of non-classified researchers. Who knows what goodies the NSA has cooked up! Interestingly, Robert Silverman of RSA stated that the device is practical! And $5000 is a bargain to government agencies! A 10-pack of floppy disks used to cost them more than that!:)
So Prof. Shamir's little "TWINKLE" box may be here sooner that we think! In the meantime, I think I'll change my PGP keys . . . to 4096 bits!
Thanks for the correction: silicon-based chips with copper substrates, NOT copper-based chips.
But you're still thinking in terms of current CPU manufacturing processes!
. . . They would be bulkier, and wouldn't be faster, because while electrical signals propagate slower than light, they would have a much shorter distance to travel. If you try using light with a much shorter wavelength, you wind up destroying your substrate (the photon energy must remain lower than the energy required to dislodge an atom in the device).
An electro-optical device does not use a silicon substrate, nor does it need the interconnect metals that we see today. Furthermore, the materials used for the device's internal parts and casing allow researchers to experiment with light wavelengths outside the visible spectrum.
But remember: Prof. Shamir's theoretical device is NOT A CPU! It is an "electro-optical sieving device" --- an entire computational machine designed to perform a specific mathematical process. Because of its narrow function, the "TWINKLE" box will likely have a much cleaner, simpler design than CPUs used today for similar computations. Just look at current electro-optical devices used for various computing needs today: they have simple, elegant designs, fewer parts and are often smaller than their fully electrical counterparts operating at the same data transmission rates.
It's hard to compare apples to penguins. I don't even think penguins like apples!
Ok, ok! Your points are well made! Let me get this straight: CPUs with silicon SUBSTRATES and aluminum INTERCONNECTS are fashionable today, but IBM has announced plans to produce them with copper interconnects in the near future. Is that better? I realize that if I am brave enough to post a comment, I should also be smart enough to double-check all technical issues I rave about. My mistake!
But you seemed to back me up when you mentioned special-purpose computers:
Because of its narrow function, the "TWINKLE" box will likely have a much cleaner, simpler design . ..
This is true of any special-purpose computer. What of it? See Deep Crack for an example, or look at DSP designs (they eliminate a lot of the scheduling cruft that conventional chips have, at the cost of requiring the compiler to optimize code for them).
"What of it?" My thoughts were that as a special-purpose computer, the hopefully more efficient design of the "TWINKLE" box would off-set the size increase of various optical components. You know: "Six in one hand, a half-a-dozen in the other . .." But PLEASE don't quote me on that! I have been wrong before!:)
Frankly, I'm surprised you didn't ask about my magical GigaHertz Ethernet technology! I think I'll just skip GigaBit technology altogether! And you criticized my comments on Optical technology? Wait until you hear about my new Local Audible Network! With GigaHertz Ethernet, I will have the fastest and the loudest network on the block!
My TANSTAAFL instincts suggest to me that 1)a quantum computer will solve a problem rapidly once configured for that problem, and 2)configuring a quantum computer for a given problem takes approximately as much work as solving the problem through deterministic algorithms.
How long will Moore's Law hold before it bumps into the fundamental limits (the hardware is made up of discrete atoms, and signals can't go faster than c)? A GHz clock is cycling in the time light takes to travel one foot; a THz clock (ten doublings) would be the time light travels 0.3mm; ten doublings after that and it would seem that light is barely covering the space (0.3um) into which you can cram a few billion components, each molecule-sized.
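The figures check out (my arithmetic):

```python
# Distance light travels in one clock cycle, for the frequencies mentioned above.
c = 3.0e8   # speed of light, m/s
for label, hz in (("1 GHz", 1e9), ("1 THz", 1e12), ("1 PHz", 1e15)):
    print(label, c / hz, "m")
# 1 GHz -> 0.3 m (about a foot), 1 THz -> 0.3 mm, 1 PHz -> 0.3 um
```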
What this means is this: About 10 days (reread the article if you don't believe me) to crack an RSA-140 encryption which uses a 465 bit key. Netscape and most other browsers running encryption software of some sort use 128 bit encryption.
You're comparing apples and oranges. A 465-bit RSA public key is vulnerable to advances in factoring because an RSA key has to be based on the product of two large prime numbers. The 128-bit Netscape key is not an RSA key -- it is an arbitrary 128-bit number used in a private-key encryption system. Unless the algorithm has some flaw, a 128-bit private key ought to be good for quite some time.
This hardware, called "TWINKLE" (which stands for The Weizmann INstitute Key Locating Engine), is an electro-optical sieving device which will execute sieve-based factoring algorithms approximately two to three orders of magnitude as fast as a conventional fast PC.
It runs at a very high clock rate (10 GHz), must trigger LEDs at precise intervals of time, and uses wafer-scale technology.
What the hell is this thing?
Sounds like an ultra-small, ultra-high-speed punch card machine of some type! The statement that it is "electro-optical" seems to point to either that, or it being some form of an optical computer.
I am thinking more on the latter - the former seems like a bad April Fools Day joke!
A problem that is "outside of NP" can't be solved in polynomial time (otherwise it would be in P) and has solutions that can't be checked in polynomial time (otherwise it would be in NP). I can't think of a good example off the top of my head, but they exist.
A good way to make (conjectured) examples of this is to reverse yes and no. For example, it is NP to determine whether two graphs are isomorphic (equal up to a rearranging of the vertices), but it is not known whether the complement is in NP: given two graphs, determine whether they are _not_ isomorphic. To clarify the jargon: a graph is a collection of vertices, some of which are joined by edges. Two graphs are "isomorphic" if there is a way to match up their vertices in a one-to-one correspondence in a way that makes the edges also match (in other words, if they are the same up to a reordering of the vertices).
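A sketch of why the "yes" direction is easy (and hence in NP): checking a claimed vertex mapping takes polynomial time, while no comparably short certificate is known for showing two graphs are NOT isomorphic. The function and example graphs below are my own illustration.

```python
# Checking a *claimed* isomorphism is cheap, which is what puts graph
# isomorphism in NP.  Graphs are given as sets of edges on vertices 0..n-1.
def verify_isomorphism(edges_g, edges_h, mapping):
    """True if `mapping` (a dict vertex -> vertex) is injective and sends
    the edges of G exactly onto the edges of H."""
    relabelled = {frozenset((mapping[u], mapping[v])) for u, v in edges_g}
    return (len(set(mapping.values())) == len(mapping) and
            relabelled == {frozenset(e) for e in edges_h})

G = {(0, 1), (1, 2), (2, 0), (2, 3)}     # a triangle with a pendant vertex
H = {(3, 2), (2, 1), (1, 3), (1, 0)}     # the same shape, relabelled
print(verify_isomorphism(G, H, {0: 3, 1: 2, 2: 1, 3: 0}))   # True
```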
What does this mean for e-commerce? (Score:1)
Is HTTPS still safe enough to transmit credit card numbers or financial information?
How hard would it be for someone to just intercept all the traffic going to charles schwab or XYZ bank or whatever and decrypt it?
Who's liable if someone does this? (commits e-commerce fraud/robbery)
Ouch (Score:1)
6000 months for a 1024 key. I read the article and now my head hurts; I think I must have stretched it. Hmmmm, now sounds like a good time for a beer.
Kinda... (Score:1)
The class of NP-hard problems are soluble in polynomial time on a non-deterministic Turing machine. A nondeterministic machine will explore many possible solutions in parallel.
Not just "many". A nondeterministic machine will explore all solutions in parallel.
That's what keeps NP-complete problems being cracked by parallel processors--you'd need to add more and more processors as the problem grew larger.
For instance, TSP (Travelling Salesman Problem): Having, say 10 processors only cuts my time by 10--but since the time grows exponentially this is pretty useless pretty soon.
But if I could build the machine AFTER I saw the problem (and add a processor for each city), I could solve it in poly-time--of course, building machines with 5x10^6 processors in pretty infeasible, so a non-parallel solution would be preferred.
I dunno about "most".... (Score:1)
...but many of the encryption schemes I've seen rely on prime factorization being hard. And that's NP-complete.
Re:"begs the question" (Score:1)
The phrase "begs the question" is more than the sum of it's parts.
It is a technical term from the field of logic and means that you have assumed what you set out to prove.
For instance, one favorite creationist argument against evolution is:
"The evolutionists tell us that they know the simpler organisms are older than the more complex ones because their fossils are in lower geologic strata.
But the geologists tell us that they know the lower geologic strata are older because they have simpler (and therefore older) fossils embedded in them.
This, (say the creationists) is begging the question."
Of course, that's not really what geologists and evolutionists say so the creationist rebuttal is moot, but they are right in thinking that this would be an invalid argument.
Re:P and NP (Score:1)
For instance, there has been research using a warm tank of amino acids, allowing them to mix and match themselves. The ways in which they connect can be turned into a solution to an NP problem. That is, in effect, parallel computing at its height -- millions of simultaneous computations. The computations are nondeterministic and statistical, but statistics usually pan out.
Might want to rethink that . . . (Score:1)
Re:How many do you think NSA will buy? (Score:1)
Of course, your credit card will have expired by then. But if someone wanted to take one of your documents and claim you said something today that you really didn't, it can be done trivially 30 years from now. IOW, 30 years from now I would be able to say you said X 30 years ago.
Just a thought.
Works for encryption as well? (Score:1)
Does this theory apply here? Does this hardware push forward the "useful key size" by an amount proportional to its codebreaking power?
Re:PGP key size irrelevant (Score:1)
This basically means that in order to use IDEA you need to have different keys for each person you want to communicate with; with asymmetric encryption you just need one keypair.
I do believe that PGP uses IDEA in its "guts". That means that the public key stuff is only used to exchange a one-time IDEA key, which is then used to encipher/decipher (the reason for this is speed).
Re:So, how strong is SSL? (Score:1)
-----
Re:LED's ???? (Score:1)
Re:What does this mean for e-commerce? (Score:1)
Dishonest employees -- I have a lot of friends who get cc#'s this way. They wait a couple of weeks and then buy tons of stuff....
Forgetting your card -- this happens more than people think.
My favorites are ATMs and those nifty checkcards (I work for a bank).
I cut up at least 10 checkcards a day that people forgot. At least 1/3 of those are turned in by people who saw the card sticking out before it was eaten by the machine... I have NEVER had a bad experience online.
Don't let your fears control what you do or don't do. Did they tell you what the odds of getting your cc# stolen over the inet were? Or was the simple fact that the possibility was there enough to make you not send it? (not trying to be rude...)
Card Issuer, FDIC (Score:1)
Re:Nothing ... (Score:1)
Why bother with high tech when low cunning works just as well???
Re:They can probably factor large primes (Score:1)
Re:They can probably factor large primes (Score:1)
Asymmetric, not symmetric encryption (Score:1)
Re:So, how strong is SSL? (Score:1)
I'm not sure now... (Score:2)
I mean, when I was having PGP create my keys I decided that twice the "High" security would be good, so I made a 2048 bit key...
is it still secure? is it still excessive?
Re:How many do you think NSA will buy? (Score:1)
How many do you think NSA will buy? (Score:5)
But seriously, this was an interesting article. It just goes to prove that if you are using key sizes of less than 512 bits for public key cryptosystems, you may be vulnerable to advances in factoring rather than advances in raw processor speed.
The main advantage of this is that 512 bit keys can be factored in 9-10 weeks as opposed to 6-7 months with the previous application of this factoring method (described in the paper). This means that with this equipment and assuming that the estimate is correct, 768 bit keys can be factored in 1038 years (Shamir's estimate of 6000 * len(512)). A 1024 bit key would be factorable in approximately 1x10^6 years with this method and optimal conditions; however, such a number requires exorbitant space, upwards of 256 GBytes for the sieve and 10 TeraBytes to store its matrices.
Bottom line, any key of 512 bits is good for about 9-10 weeks. The recommended minimum key size is still 1024 bits. The paper's conclusion sums this up best:
Conclusion: "The idea presented by Dr. Shamir is a nice theoretical advance, but until it can be implemented and the matrix difficulties resolved it will not be a threat to even 768-bit RSA keys, let alone 1024." It's more important to have the freedom to use strong encryption so that we don't have to worry about being forced to choose weak keys in order to communicate.
distributed (Score:1)
Of course this begs the question: at $5000 a pop, what is TWINKLE's cost/RC-5-64 block/second?
Re:P and NP (Score:2)
So with regard to NP-trapdoor functions, all you have achieved is to raise the stakes a bit and have to start with, say, a 2048 bit key instead of 1024. But the fundamental problem remains: some (fixed) number l of additional bits will still double the effort for cracking. Take 1000*l bits and every single atom in the universe can work on your problem in parallel until hell freezes over. (Quantum computers may be a different thing, but even that remains to be proven.)
OTOH *if* you really find a polynomial algorithm for some (arbitrarily weird) specific NP-complete problem, then you can apply this very algorithm to any problem in NP, because NP-complete problems can be emulated by one another with only polynomial overhead.
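To put a rough number on the "fixed number of extra bits doubles the effort" point, here is a small Python sketch using the standard heuristic cost formula for the General Number Field Sieve. The absolute values are meaningless; only the ratios between key sizes matter, and the formula is an asymptotic estimate, not a benchmark:

    import math

    def gnfs_work(bits):
        # heuristic GNFS cost: exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))
        ln_n = bits * math.log(2)                # ln N for a modulus of this many bits
        c = (64.0 / 9.0) ** (1.0 / 3.0)
        return math.exp(c * ln_n ** (1.0 / 3.0) * math.log(ln_n) ** (2.0 / 3.0))

    base = gnfs_work(512)
    for bits in (512, 768, 1024, 2048):
        print(bits, "bit modulus: about", "%.0e" % (gnfs_work(bits) / base), "times the work of 512 bits")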
Re:How long would a PC and a couple of these need? (Score:1)
Look at distributed.net, trying to crack a 64 bit key for 558 days now. Checked only 7.9% so far. And remember, 65 bit takes twice as long, and 66 bit twice as long as 65 bit...
Oh wait, I just ran the numbers through bc: 128 bit has 340282366920938463463374607431768211456 possible keys; if we check 1 million keys per second, we would need roughly 10^25 years to search the whole keyspace. Have fun.
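For anyone who wants to redo the arithmetic, a quick Python sketch (the one-million-keys-per-second rate is just the assumption from the post above):

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    keys = 2 ** 128                      # size of a 128-bit keyspace
    rate = 1_000_000                     # assumed: one million keys tested per second
    years = keys / rate / SECONDS_PER_YEAR
    print("whole keyspace:", "%.2e" % years, "years")
    print("average case:  ", "%.2e" % (years / 2), "years")   # expect to hit the key about halfway through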
Re:Nothing ... (Score:1)
Re:Nothing ... (Score:1)
Nope. (Score:1)
While I do think your argument has technical merit, it overlooks a major problem with crypto stuff - installed base.
Upgrading hardware and software to use larger key sizes is decidedly non-trivial. Most (if not all) hardware has a fixed key size it expects (or at the most, a choice of a couple of key sizes). Thus, the protocols that use crypto specify key lengths.
Good examples are SSL and PGP. SSL supports 512, 768, and 1024 bit key sizes now. PGP supports 512, 768, 1024, and 2048 bit keylengths. While it is not terribly hard to change the software itself to use larger keysizes, changing a published Spec often takes several years.
Given some foresight, we could keep ahead of crypto advances. For instance, specifying NOW that PGP6 should support 4096-bit keys would be good. Unfortunately, software people tend to be very short-sighted, and don't look at the long-term ramifications - they only tend to fix stuff that's broken NOW, not what might break in 5 years.
Such is life.
-Erik
What this means for Factoring (and RSA)... (Score:4)
OK, I sat down and re-read the major sections of both Schneier and Knuth (to refresh my math memory), and then something in Silverman's analysis jumped out at me:
Current (and expected future) uses of Number Field Sieve factorization (currently, the General Number Field Sieve is the most efficient) are composed of two parts: the sieving for factor relations, and then solving a matrix built from all the possible factoring equations.
Historically, the first operation has dominated the running time; however, this first process is also the most amenable to increases in speed.
As Silverman pointed out, the RSA-140 (465-bit) challenge required 4 weeks on about 200 machines for the sieving, then about 4 days for a single super-fast box to solve the resulting matrix. Using Shamir's new methodology, we can expect a 512-bit number to be sieved in 5 weeks on 20 machines, with another 4 weeks to do the matrix.
The bottleneck has now become the matrix solution, which is a non-parallelizable problem (for now). All that can be done on it currently is to rely on Moore's Law, which doesn't significantly affect the security of larger keys.
If anything, Shamir's invention will now push the focus of attacking RSA from discovering the factoring equations (the "sieve" part) to improving the matrix solution problem.
The real danger here is from the NSA and other organizations that may have discovered a way to significantly parallelize the matrix solution. If they can do that, well, RSA is going to be screwed. However, there is no current indication that they can do this. But don't say I didn't warn you.
For those paranoid, here are some relevant numbers on the now-dominant times to solve the matrix for relevant key-bit sizes (all assumptions are using 1999 hardware; divide all times by 10 for each 5 years that have passed to account for Moore's Law).
Remember, these numbers are NOT reducible by throwing more hardware at the problem - this is a (currently) serial operation, and must be done by a single machine. I don't see us being able to produce a single-CPU box a million times faster than state-of-the-art within the next two decades (Moore's Law predicts one about 2030 or so). By then, the memory requirements will be mundane. (if we can do 10GB now trivially, 10TB in 30 years shouldn't be any problem. In fact, we should be able to easily do 10TB about 2015 or so.)
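A back-of-the-envelope sketch of that "divide by 10 for each 5 years" rule, in Python (the starting figure of 10^6 years is only a placeholder, not one of the estimates above):

    def adjusted_years(time_in_1999_years, calendar_year):
        # rule of thumb from above: divide the 1999 figure by 10 for every 5 years elapsed
        return time_in_1999_years / (10 ** ((calendar_year - 1999) / 5.0))

    for year in (1999, 2009, 2019, 2029):
        print(year, ":", "%.1e" % adjusted_years(1.0e6, year), "years")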
In the end, 1024-bit keys should be immune to advances in sieving technology for at least 20 years. The worry is advances in matrix solutions, and development of methodologies better than Number Sieves. But, by the very nature of things, this is unpredictable.
Best Guess (and I'm not Bruce Schneier): 1024-bit should be immune to everything until at least 2015, after which 2048-bit should last to mid-century or so.
Happy Hacking!
-Erik
bad link for Applied Cryptography book-sorry (Score:1)
quantum cryptography (Score:2)
Also some of you may be interested in Bruce Schneier's book Applied Cryptography: Protocols, Algorithms, and Source Code in C, which was previously mentioned in other posts. You can find information on it at this Amazon.com link [amazon.com].
I might as well also give the link to a site on basic cryptology, called a "beginner's page on cryptography" [ftech.net]. It may or may not help/interest readers.
On a side note, I really have enjoyed reading everyone's comments on this subject today. Hopefully these links don't come too late in the "day" for anyone to get good use out of them.
Re:Totally different things (Score:1)
Wow! (Score:1)
Re:How long would a PC and a couple of these need? (Score:2)
A very long time, though still just a linear factor longer. Your computer has to solve one hell of a huge matrix problem. This requires enough RAM to hold the matrix and any working data (gigabytes or _more_; see the article), and time proportional to the cube of the matrix size (or possibly better if you're using something other than Gauss-Jordan reduction).
They used a supercomputer to do the matrix solving. If your machine has n times fewer bogoMIPs, then it will take n times longer, assuming that you can support the required RAM.
Assuming that it is the matrix solution and not the sieving that limits the solution time. This would probably be the case if you're using a PC to process the matrix.
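For the curious, the matrix step is linear algebra over GF(2): you are looking for dependencies among the rows of a huge, sparse 0/1 matrix. A toy Python sketch of the naive dense approach (real solvers use sparse iterative methods such as block Lanczos, but this shows the flavour, and why the straightforward way costs roughly the cube of the matrix dimension):

    def gf2_rank(rows):
        # Gaussian elimination over GF(2); each row is an integer bitmask.
        rank = 0
        rows = list(rows)
        while rows:
            pivot = rows.pop()
            if pivot == 0:
                continue                        # all-zero row contributes nothing
            rank += 1
            low = pivot & -pivot                # lowest set bit = pivot column
            rows = [r ^ pivot if r & low else r for r in rows]
        return rank

    rows = [0b1000, 0b0100, 0b0010, 0b1110]     # last row = XOR of the first three
    print(gf2_rank(rows), "independent rows out of", len(rows))   # 3 of 4 -> a dependency exists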
come again? (Score:2)
Today's "conventional PC" uses silicon-based CPU technology, but that will change in a few years when IBM begins to mass-produce copper-based chips.
IBM has already demonstrated chips with copper _interconnects_, and many manufacturers plan to use this technology, but I've never heard of using copper as a _substrate_. Conventional chips use a silicon _substrate_ and aluminum interconnects. What IBM technology are you referring to?
An "electro-optical" computational device is the next paradign shift in computer engineering. By transmitting data via light (LEDs!!!), the current CPU problems with heat, electromagnetic interference (EMI) and wafer size all disappear.
I'm afraid that this is not strictly true. In fact, feature size for optical components would be a big problem. Optical devices can't have features much smaller than a wavelength of light. For visible light, this is on the order of 0.5 microns. Electrical devices, like conventional integrated circuit chips, can have features that are much smaller. This tends to outweigh most of the advantages of using optical "chips", should such chips be developed. They would be bulkier, and wouldn't be faster, because while electrical signals propagate slower than light, they would have a much shorter distance to travel. If you try using light with a much shorter wavelength, you wind up destroying your substrate (the photon energy must remain lower than the energy required to dislodge an atom in the device).
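To put rough numbers on the photon-energy point, a quick Python sketch using E = hc/lambda (standard physical constants, nothing specific to any device):

    H = 6.626e-34        # Planck's constant, J*s
    C = 3.0e8            # speed of light, m/s
    EV = 1.602e-19       # joules per electron-volt

    def photon_energy_ev(wavelength_m):
        return H * C / wavelength_m / EV

    for nm in (500, 250, 100, 13):               # visible light down toward EUV
        print(nm, "nm ->", round(photon_energy_ev(nm * 1e-9), 1), "eV")

At 500 nm a photon carries about 2.5 eV; the deep-UV wavelengths carry well past the few-eV binding energies mentioned above.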
Heat dissipation is and will remain a problem for electrical devices, but photons get absorbed too.
Electro-optical devices, OTOH, get the worst of both worlds. The only situation in which they're really practical is when you need to send signals over a (relatively) large distance without much degradation, as is apparently the case with the computer being built here.
Re:How long would a PC and a couple of these need? (Score:2)
However, the original question was about the timings cited in the TWINKLE article:
I'm mildly confused by the timings, so how long would it take a decently powerful (as in, PII 300) machine hooked in to one of these devices to crack that?
My response stands for that, though as you point out these timings don't apply to other classes of encryption.
More points of concern. (Score:2)
I hope you mean interconnects instead of substrates...
"They would be bulkier, and wouldn't be faster, because while electrical signals propagate slower than light, they would have a much shorter distance to travel. If you try using light with a much shorter wavelength, you wind up destroying your substrate (the photon energy must remain lower than the energy required to dislodge an atom in in the device)."
An electro-optical device does not use a silicon substrate, nor does it need the interconnect metals that we see today. Furthermore, the materials used for the device's internal parts and casing allow researchers to experiment with light wavelengths outside the visible spectrum.
My argument holds no matter what materials you use to build the optical device and no matter what processes are used to fabricate it. Whatever you do, you just _can't_ get the feature size of optical devices much smaller than the wavelength of the light used, or signals bleed from one computing element into another. Likewise, no matter what material you make the device out of, you can't have photons with an energy of more than a few eV, because that is the typical binding energy of valence electrons in atoms. If you decide to use deep UV or what-have-you, your device _will_ degrade due to atoms being dislodged and bonds being broken, no matter what it's made of.
But remember: Prof. Shamir's theoretical device is NOT A CPU! It is an "electro-optical sieving device"
How does this affect my points?
Because of its narrow function, the "TWINKLE" box will likely have a much cleaner, simpler design
This is true of any special-purpose computer. What of it? See Deep Crack for an example, or look at DSP designs (they eliminate a lot of the scheduling cruft that conventional chips have, at the cost of requiring the compiler to optimize code for them).
Just look at current electro-optical devices used for various computing needs today: they have simple, elegant designs, fewer parts and are often smaller than their fully electrical counterparts operating at the same data transmission rates.
An electro-optical component is, for the reasons mentioned previously, bulkier than a transistor. Such components also, as mentioned previously, suffer from the limitations of both electrical and optical systems, because they incorporate both. Electro-optical components are indeed very useful for a moderate-sized set of applications; however, they aren't magical.
Doh. (Score:2)
...But the question was _about_ times to crack RC4-128. I withdraw my comments... Doh.
Re:Totally different things (Score:4)
If you have a good way of recognizing decrypted data, then you could indeed solve any fixed-key-size cryptographic problem in exponential time, just by counting through all possible keys and checking to see if you get readable data after using each one.
However, I'm not sure that a P=NP proof would help with this. You'd have to prove that iterating through each possible key is a problem in NP, as opposed to NP-hard but outside NP (i.e., harder than NP-complete).
It might be better to analyze each cryptography algorithm separately, and see if you can prove any of them to be in NP. These would be vulnerable to a P=NP proof.
Quantum computers of the type previously described could certainly iterate through all of these keys within a reasonable length of time. The construction of practical quantum computers, OTOH, is left as an exercise for the reader.
As a side note, cryptography schemes that use a "one-time pad" of noise that they xor with the message would *not* be vulnerable to QC or P=NP attacks. If you iterate through all possible keys, you get all possible messages, which gains you nothing. Only if the key length is significantly shorter than the total amount of traffic sent will a brute-force search work. This is true for most cryptography systems, for practical reasons.
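A toy illustration of the one-time-pad point: brute-forcing the pad just regenerates every possible plaintext, so the ciphertext alone tells the attacker nothing. A single-byte Python sketch:

    cipher = 0b10110010                  # some intercepted ciphertext byte

    candidates = set()
    for key in range(256):               # try every possible one-byte pad
        candidates.add(cipher ^ key)     # XOR "decryption" under the guessed key

    print(len(candidates))               # 256: every possible plaintext shows up exactly once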
P and NP (Score:5)
"P" and "NP" are categories of problem. P problems can by solved relatively quickly, but it isn't known if NP problems can or can't.
A problem is a "P" type problem (is "in P") if it can be solved in polynomial time or better; that is, the number of steps required to produce a solution for input of n bits is proportional to (or better than) n^k, for some _constant_ k. This might be a solution that's O(n), or O(n^2), or O(n^534), but it's still polynomial (or better, for something like O(log n)).
A problem is an "NP" type problem (is "in NP") if you can prove that your solution is correct in polynomial time. It might take polynomial time to solve or it might not, but you can prove the answer in polynomial time. The factoring problem is a good example of a problem in NP. No presently known way exists of finding the factors of a number in time proportional to a polynomial of the number of bits, but you can prove that two numbers (say a and b) _are_ the factor of a larger number (X) just by multiplying a and b and seeing if you get X. The multiplication time is proportional to (or better than) a polynomial of the number of bits in X (it's proportional to X^2 for the straightforward way). So, NP problems can be ugly, but there is a limit to how ugly they can get.
A problem that is "outside of NP" can't be solved in polynomial time (otherwise it would be in P) and has solutions that can't be checked in polynomial time (otherwise it would be in NP). I can't think of a good example off the top of my head, but they exist.
P is in NP; you can prove that a polynomial-time answer is correct just by solving the problem again. However, nobody knows if _all_ problems in NP can also be solved in polynomial time. This would mean that P = NP (the two sets of problems are the same, because everything in both of them is in P). A proof that P = NP would prove that any encryption scheme that depends on a trap-door function in NP is intrinsically weak. On the other hand, a proof that P != NP would prove that anything based on an NP-complete trapdoor function (I'll get back to this) is sound against anything short of a quantum computing attack. This is especially of interest to people who deal with prime numbers, because the factoring problem is in NP, and a P-time factoring algorithm would be a Very Useful Thing.
NP-complete problems are problems that are in NP, but that are at least as hard as anything else in NP (or more accurately, any other problem in NP can be reduced in polynomial-time to this problem). An NP-hard problem, for reference, is a problem like this that may or may not even be in NP at all (it's just at least as hard as anything in NP).
This brings up another very important point. All problems that are in NP can be reduced in polynomial time to any NP-complete problem. This means that if you can solve even one of the NP-complete problems in polynomial time, you can solve them all in polynomial time (just with worse exponents). A polynomial-time algorithm for an NP-complete problem, or a proof that no such algorithm exists, is one of the holy grails of mathematics, as it would settle the question of whether or not P = NP.
And I hope you've been taking notes, because there'll be a short quiz next period. O:) [ducks!]
Re:What this means for Factoring (and RSA)... (Score:1)
Ugh. I tried to follow how the General Number Field Sieve worked by reading some lecture notes by the guy who invented it. After the first page of the introduction, I just lost it. I wasn't that confused at the quantum computing lectures I went to.
What courses do I need to take in order to understand how it works? Some more linear algebra and a helluva lot more math in general?
--
Four years in jail
No Trial, No Bail
*** FREE KEVIN *** [kevinmitnick.com]
Re:What does this mean for e-commerce? (Score:1)
Netscape and most other browsers running encryption software of some sort use 128-bit encryption.
Think about that. These devices would go through 128-bit encryption in very little time at all, maybe a day or two. You want $10,000? Do the engineering and win the RC5-64 competition in a day or less. That is what these devices will mean. Your credit cards aren't safe anymore. They may never have been.
Just thought I'd point this out because some posters above got their times backwards. 9-10 weeks is what it takes now, with damn good hardware.
I feel much better about upgrading SSH now. 1024 bits of encryption (at the moment, I might up that at some point). ph43drus
Re:I'm not sure now... (Score:1)
The random key is then included in the header of the message encrypted under the Public/Private keypair.
This means:
1) If you can break DES by brute force (not that hard) then it does not matter what size of public/private key you use.
2) If you can predict the key that the algorithm will 'randomly' choose then you can break an implementation of PGP in constant time.
As with all crypto, you've got to know what's under the hood before you can make any real judgement about a system.
One of the nicest suggestions from my crypto course was to compress (just with something like gzip) all messages, as this reduces the occurrence of patterns which attackers search for.
I don't know if gzip leaves some form of exactly formatted header block on a file, but if not then this could dramatically increase the security of your mail. If it does, then it would dramatically decrease the security, because an attacker would know exactly what they were looking for.
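To answer the gzip question directly: gzip output does start with a fixed, recognisable header (the magic bytes 1f 8b, then the compression-method byte). A quick check in Python, just as a sketch:

    import gzip

    data = gzip.compress(b"attack at dawn")
    print(data[:3].hex())     # '1f8b08' -- the same prefix for any input

Whether those few bytes of known header actually help an attacker depends on the cipher being used, but that is the header in question.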
Finally bear in mind that for information to be considered 'secure' it has to cost more to break the encryption than the information itself is worth.
So if all you want is relatively good privacy on your personal emails then yes, PGP will be fine - to my knowledge it still requires a significant amount of effort for a non-government body to break DES (who knows about GCHQ/NSA/etc) - if nobody _needs_ to know what's in your mail then you do not need to make it really secure.
Just some thoughts
Tom
Re:I'm not sure now... (Score:1)
ho hum... learn something new everyday
How long would a PC and a couple of these need? (Score:1)
I'm mildly confused by the timings, so how long would it take a decently powerful (as in, PII 300) machine hooked in to one of these devices to crack that?
Re:How long would a PC and a couple of these need? (Score:1)
For the symmetric systems, brute force checking of the whole keyspace is the only way to crack them. That means checking all of the 2^128 (lots!) possible keys. Asymmetric key systems are vulnerable to factoring attacks, as they make use of really big prime numbers to protect data.
For everyone not in the US (your export laws suck, people!) check out Fortify.Net [fortify.net] to upgrade Netscape to 128-bit crypto.
Re:Totally different things (Score:2)
The class of NP-hard problems are soluble in polynomial time on a non-deterministic Turing machine. A nondeterministic machine will explore many possible solutions in parallel.
You can simulate a NDTM on a DTM with an exponential time penalty. A proof of P =? NP would presumably show us how to do this simulation without the massive time penalty. It's still an open question whether P = NP, but I wouldn't put any money on it being proved either way in the forseeable future.
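The "exponential time penalty" is easy to see concretely: a deterministic simulation of the nondeterministic "guess a solution" step just enumerates every branch. A toy Python sketch, using SAT (the canonical NP-complete problem) as the example:

    from itertools import product

    def brute_force_sat(num_vars, clauses):
        # clauses are lists of signed variable numbers, e.g. -2 means "NOT x2"
        for bits in product([False, True], repeat=num_vars):   # all 2^n branches, one by one
            if all(any(bits[abs(v) - 1] == (v > 0) for v in clause) for clause in clauses):
                return bits
        return None

    # (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
    print(brute_force_sat(3, [[1, -2], [2, 3], [-1, -3]]))      # finds (False, False, True)

A nondeterministic machine would "guess" the satisfying assignment directly; the deterministic loop above pays for that guess with the 2^n enumeration.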
Re:I'm not sure now... (Score:2)
This isn't quite correct -- PGP uses the RSA public key crypto system (the one with the large prime numbers as keys, which the article was originally about) to encrypt a session key. The session key can be any length you like, and used for any symmetric crypto algorithm you want.
Whilst it's perfectly possible that PGP uses 40-bit DES, I know my version uses 128-bit IDEA. The reason it works like this is because RSA is very slow compared to symmetric cyphers like 3DES, Blowfish and IDEA.
You're correct when you say that compressing a message before sending it increases the difficulty of breaking it -- PGP does this automatically unless you tell it not to. Try PGP'ing an uncompressed text file, and you'll find that the ciphertext is smaller than the plaintext.
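As a structural sketch of the layout described above (the "ciphers" below are throwaway XOR stand-ins, NOT real RSA or IDEA; only the shape of the scheme - a fresh random session key, a public-key-encrypted header, and a compressed-then-symmetrically-encrypted body - is the point):

    import os, zlib

    def toy_public_encrypt(data, public_key):
        return bytes(a ^ b for a, b in zip(data, public_key))             # stand-in, NOT real RSA

    def toy_symmetric_encrypt(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))   # stand-in, NOT real IDEA

    def hybrid_encrypt(plaintext, recipient_public_key):
        session_key = os.urandom(16)                                          # fresh random 128-bit session key
        body = toy_symmetric_encrypt(zlib.compress(plaintext), session_key)   # compress first, then encrypt
        header = toy_public_encrypt(session_key, recipient_public_key)        # only the key goes through the "public key" step
        return header, body

    header, body = hybrid_encrypt(b"meet me at the usual place", os.urandom(16))
    print(len(header), "byte encrypted session key +", len(body), "byte message body")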
Not that big a need to be paranoid... (Score:2)
Weapon by any other name .... (Score:1)
I understand the significance. I hope the global community will remain out front of regional and corporate government advances in HW/SW technology application. Phil Zim... was one of the first to give the individual the ability to secure the right to privacy, a basic human freedom. May the global community prove always to be one step ahead of any regional/corporate government that would try to oppress individual freedoms (Human Rights). THANKS AGAIN for helping me to understand.
Judging from a site I visit now and then, www.infowar.com, I do not doubt how such weapons would be used in the future.
Re:"begs the question" (Score:1)
Beg ~ means "to ask for"
So: This asks for the question....
Or, paraphrased: "This article prompts the question..."
So what is the missing logic that you refer to?
Re:P and NP (Score:1)
I don't think you're right about the N-body problem. The problem with that is that it cannot be solved analytically: it's a problem in calculus rather than in discrete mathematics.
Re:Totally different things (Score:1)
Non-deterministic Turing machines are (I think; that was last semester) actually NPSPACE. That is, the memory they are going to use is exponential (they take multiple sets of inputs and check all of them; the number of possible things to check is exponential -- thus the parallelism). Deterministic Turing machines could emulate those, but that would take exponential time. So determinism and non-determinism are equivalent, but in different spaces of programs.
Note that was last semester. I do not actually remember anything from that class except for D and ND state machines and context free grammars.
Re:Totally different things (Score:1)
AFAIK, there are no real non-deterministic machines. If there were, a whole lot of problems would take less time - like vertex cover or travelling salesman.
On a related note, a proof of P=NP is still a long time away, because no problem originally thought to be NP-complete has been shown to be in P; it all started with the independent set or something simple like that, and most other problems were reduced to it.
Please correct me if I am wrong.
Re:I'm not sure now... (Score:2)
This isn't a really big thing. Keys of this length have always been on the fringe of "secure"; as computers get faster and algorithms improve, the "secure" keysize will gradually become bigger and bigger.
This is not a factoring breakthrough. To break your 2048-bit key will take a mathematical breakthrough (if one even exists to be found - you know that whole P != NP discussion?) or a radical breakthrough in computation.
Totally different things (Score:1)
P=NP (Score:1)
Re:Totally different things (Score:1)
Yup (Score:1)
Duppy Conqueror (Score:1)
EthanW> strength is based on an NP problem (rather
EthanW> than something even harder).
That was the essence of the question. I asserted that any fixed-length-key encryption could be broken in polynomial time by a non-deterministic machine. I failed to note the caveat that the set of all functions computable in polynomial time by a deterministic machine (even if P=NP) is a proper subset of all functions computable in polynomial time by a non-deterministic machine.
Re:Totally different things (Score:1)
Misha> length is fixed. b/c if you can solve in in
Misha> O(p(x)) for length x, then solving the
Misha> whole thing for any length would
Misha> be O(p(x)*x) which is still polynomial if p
Misha> is polynomial.
I'm afraid I couldn't understand what you meant here. Could you clarify it?
The question wasn't about RSA decryption; it was about general fixed-key decryption, regardless of whether the cryptographic problem's difficulty on deterministic machines relies upon the intractability of NP-complete problems.
There are a few non-deterministic computational systems around, but they are impractical. One of the RSA triumvirate had an article in Scientific American about a DNA computer he had designed. The system was Turing-complete and non-deterministic, but totally impractical in its current state. (IIRC, a computational step could be non-deterministic, but each step also required a technician to do a gel electrophoresis and DNA analysis. Yechh!) Quantum computers are also non-deterministic but currently impractical.
Re:Exactamundo! That 6000 months is for *today* (Score:1)
Re:Totally different things (Score:1)
All problems in NP can be converted to any NP-complete problem with at most a polynomial increase in time (by the definition of NP-complete).
Now take the problem of cracking the encryption. If it is in NP, then it can be converted to an instance of our NP-complete problem, and the poly time algorithm from our proof can break it in poly time.
This assumes, of course, that the proof of P=NP is not done in some other highly weird way, and that the encryption strength is based on an NP problem (rather than something even harder).
Re:What does this mean for e-commerce? (Score:1)
After reading the small print I noticed that:
The 'dangers' of the web were:
getting your # stolen.
The 'dangers' of the dinner were:
Car accident
Fire
Flood
Earthquake
Acts of God
Theft
Mugging
Murder
etc...(Pretty much everything bad that can happen to you out of your house)
I still go out to dinner, and will not ever put my card# on the web.
Re:Might want to rethink that . . . (Score:1)
I just thought it was funny how they had to add in acts of god in order to skew the numbers. (I remember that)
I don't think e-commerce is dangerous, I have never ordered anything online not because of fear that 'they' will get my cc#, but because I would really rather go outside and purchase whatever from another person in a store. (one of those human interaction things)
Re:What does this mean for e-commerce? (Score:1)
I thought adding every agoraphobic's fear into the equation was a bit humorous. It probably would have been safer without all the disasters.
I don't know why, but I like shopping offline.
I work all day for my money; I want there to be some substance to spending it. That means going to the store, picking through stuff, purchasing the items and possibly getting mugged on the way to your car.
I use e-commerce to find stuff to buy (www.pricewatch.com), but if I want to buy what they are selling, I try to find an 800 # so I can order it over the phone from a person. I call Amazon and order my books over the phone.
By the way... when I read about the LEDs on the TWINKLE thing I couldn't help thinking of those wall-sized 'computers' made of lightbulbs in the old sci-fi shows. *blink* *blink* *blink*
Remember, the NSA's about 10 years ahead of us (Score:3)
4096 bit keys anyone???
So, how strong is SSL? (Score:1)
Neat (Score:1)
Re:What does this mean for e-commerce? (Score:1)
I have been a sysadmin of a network that was sniffed. The program did a good job of getting passwords, but the rest of the traffic required a lot of work to put back together. The amount of traffic on internet backbones would make logging a big storage and sorting problem. If you tried to log all the traffic through a major e-commerce web site (assuming you get in and start the sniffer) you'd have to spend a lot of time sifting data out.
I think it's more of a risk that one of these e-commerce places would have the customer/ordering database compromised - names, addresses, card #s, etc. just waiting. It could be something as simple as throwing out that old computer and not destroying the data on the hard disk.
512 is still fine ... (Score:1)
Re:Nothing ... (Score:1)
privacy (Score:1)
Theory vs Reality (Score:1)
Today's "conventional PC" uses silicon-based CPU technology, but that will change in a few years when IBM begins to mass-produce copper-based chips. A change in material design allows slimmer, faster, cheaper CPUs to be manufactured. The idea of copper chips has been around for decades; IBM is the first private-sector organization to release operational details to the public. This has no bearing on military technology, which is usually several steps ahead of what is publicly available.
An "electro-optical" computational device is the next paradign shift in computer engineering. By transmitting data via light (LEDs!!!), the current CPU problems with heat, electromagnetic interference (EMI) and wafer size all disappear.
REALITY CHECK: High-speed "electro-optical" data transmission devices already exist! Businesses around the world use GigaHertz Ethernet, Fiber-Channel devices and various fiber optic lines every day. Various military organizations are far ahead of the game in high-speed communications. True, these devices work in entirely different ways than does a CPU. But the theory of electro-optical devices has been reality for years!
The problem facing researchers today is miniaturization: shrinking currently known timing and data switching designs to CPU-scale size, or creating new optical gates and relays that defy "conventional wisdom" to leap past today's hurdles. But, of course, this is the challenge of non-classified researchers. Who knows what goodies the NSA has cooked up! Interestingly, Robert Silverman of RSA stated that the device is practical! And $5000 is a bargain to government agencies! A 10-pack of floppy disks used to cost them more than that!
So Prof. Shamir's little "TWINKLE" box may be here sooner than we think! In the meantime, I think I'll change my PGP keys . . . to 4096 bits!
Thanks, but No Thanks!!! (Score:1)
But you're still thinking in terms of current CPU manufacturing processes! An electro-optical device does not use a silicon substrate, nor does it need the interconnect metals that we see today. Furthermore, the materials used for the device's internal parts and casing allow researchers to experiment with light wavelengths outside the visible spectrum.
But remember: Prof. Shamir's theoretical device is NOT A CPU! It is an "electro-optical sieving device" --- an entire computational machine designed to perform a specific mathematical process. Because of its narrow function, the "TWINKLE" box will likely have a much cleaner, simpler design than CPUs used today for similar computations. Just look at current electro-optical devices used for various computing needs today: they have simple, elegant designs, fewer parts and are often smaller than their fully electrical counterparts operating at the same data transmission rates.
It's hard to compare apples to penguins. I don't even think penguins like apples!
Well, It Works on Star Trek! (Score:1)
But you seemed to back me up when you mentioned special-purpose computers: "What of it?" My thoughts were that as a special-purpose computer, the hopefully more efficient design of the "TWINKLE" box would offset the size increase of various optical components. You know: "Six in one hand, half a dozen in the other..."
Frankly, I'm surprised you didn't ask about my magical GigaHertz Ethernet technology! I think I'll just skip GigaBit technology altogether! And you criticized my comments on Optical technology? Wait until you hear about my new Local Audible Network! With GigaHertz Ethernet, I will have the fastest and the loudest network on the block!
Re:Not that big a need to be paranoid... (Score:1)
Re:What this means for Factoring (and RSA)... (Score:1)
Re:What does this mean for e-commerce? (Score:1)
You're comparing apples and oranges. A 465-bit RSA public key is vulnerable to advances in factoring because an RSA key has to be based on the product of two large prime numbers. The 128-bit Netscape key is not an RSA key -- it is an arbitrary 128-bit number used in a private-key encryption system. Unless the algorithm has some flaw, a 128-bit private key ought to be good for quite some time.
Re:LED's ???? (Score:1)
It runs at a very high clock rate (10 GHz), must trigger LEDs at precise intervals of time, and uses wafer-scale technology.
What the hell is this thing?
Sounds like an ultra small, ultra high speed punch card machine of some type! The statement of it being "electro-optical" seems to point to either that, or it being some form of an optical computer.
I am thinking more on the latter - the former seems like a bad April Fools Day joke!
Re:P and NP (Score:1)
Re:"begs the question" (Score:1)