Bernstein's NFS analyzed by Lenstra and Shamir
kousik writes "The analysis of Bernstein's NFS by Arjen Lenstra,
Adi Shamir, Jim Tomlinson, and Eran Tromer has been
put up on cryptosavvy.
It seems particularly interesting that the analysis comes from Lenstra and Shamir;
Lenstra led the 1994 factorisation of RSA-129.
From the abstract: ... We also propose an improved circuit design based on a new mesh
routing algorithm, and show that for factorization of 1024-bit integers
the matrix step can, under an optimistic assumption about the matrix
size, be completed within a day by a device that costs a few thousand
dollars..."
errrrr NFS? (Score:5, Informative)
Re:errrrr NFS? (Score:2)
Re:errrrr NFS? (Score:2)
Re:errrrr NFS? (Score:1)
Re:errrrr NFS? (Score:2)
Re:errrrr NFS? (Score:2)
Re:errrrr NFS? (Score:2)
Yes it does! (Score:3, Insightful)
From the pdf:
Introduction
In [1], a new circuit-based approach is proposed for one of the steps of the number field sieve (NFS) integer factorization method, namely finding a linear relation in a large but sparse matrix.
This is on the first page of the linked pdf.
However, I agreed that it should have been spelled out in the post.
Re:errrrr NFS? (Score:1)
Right. The people who would be interested in this article will know what NFS means in this context. Just because *YOU* aren't interested in this field doesn't mean that they have to make it more obvious to you.
Well, I ended up following the link since I couldn't be 100% sure that Dan Bernstein hadn't written an NFS implementation... ;-)
As you say, though, no biggie. Certainly not worth complaining about...
Re:errrrr NFS? (Score:1)
Re:errrrr NFS? (Score:1, Offtopic)
Because it is of interest to me, even though I'm not an expert on the topic. I read the entire summary, trying to figure out what file sharing had to do with the topic. Only after I read the summary did I realize that NFS must mean something different, but I wasn't sure what. Once it is explained, it makes perfect sense, and I know essentially what is meant.
I have enough of a cryptography background to deal with NFS as meant here, though I'm rusty; but normally when someone says NFS I think Network File System, because that's the sense I hear most often. (I work on Unix systems; NFS failures are my first clue that something is wrong in many cases.)
Re:errrrr NFS? (Score:3, Insightful)
Re:errrrr NFS? (Score:2)
I think that would be rather a change in direction for him, since he regularly refers to that NFS as the "Network Failure System".
AFS is quite nice (was: Re:errrrr NFS?) (Score:2)
Not specifically about DJB, but since you mention NFS and security... I had the responsibility of designing a (small) Unix network, including authentication and shared filesystems. NFS's main strength is that it's bloody simple to use (in Solaris the 'share' command is all it takes, and the other Unices have more or less similar mechanisms), but security-wise it's a no-no.
I then discovered AFS [openafs.org]; it was exactly what I needed. Kerberos authentication built in, support for every OS, very solid, and it comes with several interesting concepts of its own. All in all, AFS has proven to be a secure alternative to NFS (and it can also scale incredibly well).
Just my 2 euro cents,
fsmunoz
Re:errrrr NFS? (Score:1)
That would be rather like DJB looking into optimizing BIND.
(Read: not bloody likely.)
hackers... (Score:5, Funny)
Re:hackers... (Score:1)
Re:hackers... (Score:1)
Re:hackers... (Score:1)
Hackers? Ummm. (Score:2)
No... wait... That's Steve Jobs' favorite hacking movie... my bad.
Re:hackers... (Score:2)
Still behind the times (Score:1)
Re:Still behind the times (Score:2, Insightful)
True, but not by much, and if the jump was as big as claimed, not for long. But these guys revised it down considerably.
Additionally, in the not too distant future, off-the-shelf quantum computers will be able to make short work of 1024+.
Physicists aren't even sure if quantum computers are practical... sure Shor's algo made short work of factoring 15, but what if it turns out that engineering the rather arbitrary entanglements required for Shor's is NP-complete? Then what? That possibility hasn't been ruled out yet, and making those entanglements is already pretty hard...
Re:Still behind the times (Score:1)
Re:Still behind the times (Score:1)
Still distant vaporware (Score:4, Insightful)
The interesting thing about quantum computing is that it's the one technology that, if it's actually possible to develop usable machines with it, might offer the possibility of getting beyond the exponential-difficulty traps in factoring and other current techniques of public-key math. It's not clear that it will work, but it's the only thing so far that doesn't hit the "well, if you build a keycracking computer the size of the planet and run it for the remaining age of the solar system, I can add three more key bits and make you take over some more planets" wall.
They're not off the shelf, and won't be any time soon. The biggest quantum computer made so far was able to factor 15 into 5 x 3. The number of bits of answer you can get out of a quantum computer depends on the precision to which you can measure its output - does this hit Heisenberg limits? 10**47 is only ~156 bits. Or do you hit practical limits first? Or are there ways to break up the answer into many parts each of which gets you a few bits of precision? (The latter case is the only way to get it to work...)
What if quantum computing does work? Maybe it'll crack conventional RSA and Diffie-Hellman, but that doesn't mean it transfers to Elliptic-Curve cracking, so we may luck out. Alternatively, it's back to conventional techniques like Kerberos and other symmetric-based Key Distribution Center systems.
But basically, you're trolling
Quotes from the paper (Score:5, Interesting)
believed them to be."
"We thus
conclude that the practical security of RSA for commonly used modulus
sizes is not significantly affected"
Sounds like it only speeds up one step of the factoring process, which is important to keep an eye on but not grounds for alarm.
Re:Quotes from the paper (Score:3, Informative)
Even if you remove the matrix side, it takes a VERY long time to find all of the relations needed to make the matrix in order to solve it.
Is factoring hard (Score:1)
"Assuming factoring/[other oroblem] is hard"
this makes you think maybe it isn't
Re:Is factoring hard (Score:2, Funny)
Re:Is factoring hard (Score:1)
Come on people +1 Funny, that was hilarious.
Ah, forget it...
Re:Is factoring hard (Score:1)
Re:Is factoring hard (Score:1)
Otherwise - funny.
Re:Is factoring hard (Score:2, Informative)
>in your head that a given number is prime? For that matter,
>can you prove using a computer that a given large number
>(if you go for any) is prime? For some reason, I don't think so...
It depends on how much time you are willing to spend. A trivial algorithm is to try every (prime) number up to sqrt(N) to see if it divides N; if none does, N is prime. But in practice this is too slow to let you get much beyond N of around 20 digits.
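For the curious, here is a minimal sketch of that trial-division test in Python (purely illustrative; the example numbers and the ~20-digit practical limit above are just context, not from the paper under discussion):

```python
from math import isqrt

def is_prime_trial_division(n: int) -> bool:
    """Try every candidate divisor up to sqrt(n); if none divides n, it is prime."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d <= isqrt(n):
        if n % d == 0:
            return False
        d += 2
    return True

# Instant for small inputs, hopeless for big ones: a 40-digit candidate
# would need on the order of sqrt(10**40) = 10**20 trial divisions.
print(is_prime_trial_division(104729))   # True -- 104729 is the 10000th prime
```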
Elliptic Curve Primality Proving is state of the art for general primality proofs today, try Primo [freesurf.fr] which has been used to prove a 5020 digit number prime.
Integer factoring is much harder than proving primality, the current record for a GNFS factorization is a 158 digit composite, see here [ercim.org].
Alex
Re:Is factoring hard (Score:1)
Re:Is factoring hard (Score:1)
Re:Is factoring hard (Score:5, Insightful)
The problem is this, there are certain mathematical problems that are known to be Hard. Traveling Salesman, Knapsack, etc. There are no shortcuts to solving these problems. Many mathematical problems can be proven to be in this class of problems. Nobody has, to date, publicly, shown that factoring numbers is Hard, and nobody has shown that it isn't.
Of course, nobody has proven the security of any of the symmetric cryptosystems (with the exception of one-time pads), so any practical system is already subject to this uncertainty.
Re:Is factoring hard (Score:1, Informative)
Re:Is factoring hard (Score:1)
Secondly, no one has shown a good cryptosystem based on these.
Re:Is factoring hard (Score:2)
(I might be wrong, this is just my understanding of things at the moment)...
Isn't factoring NP-complete, and Traveling Salesman NP-complete, so that any cryptosystem that relies on factoring is in a certain sense a cryptosystem based on Traveling Salesman, since a solution to one can be translated to a solution of the other?
Re:Is factoring hard (Score:2, Insightful)
Re:Is factoring hard (Score:1)
Factoring doesn't get you a solution to Traveling Salesman, but a solution to Traveling Salesman does get you a solution to factoring.
Re:Is factoring hard (Score:2)
Re:Is factoring hard (Score:1)
Re:Is factoring hard (Score:1)
Re:Is factoring hard (Score:1)
>And RSAP reduces to factoring. They are computationally equivalent.
Are you sure about that? Decrypting an RSA message means taking a cube root or a 65537-th root (or whatever your encryption exponent is) in the multiplicative group modulo N, your public modulus.
Taking roots is easy if you know the cardinality of the group, and you get the cardinality of the multiplicative group (mod N) easily from the factorization of N via Euler's totient function \phi(N).
If you know \phi(N) and N, getting the factorization of N is also easy, so these two are equivalent.
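To make that last equivalence concrete: for N = pq, \phi(N) = (p-1)(q-1), so p+q = N - \phi(N) + 1 and p, q are the roots of a quadratic. A toy sketch with made-up primes of my own (nothing to do with real RSA moduli):

```python
from math import isqrt

def factor_from_phi(N: int, phi: int) -> tuple[int, int]:
    """Recover p and q from N = p*q and phi = (p-1)*(q-1)."""
    s = N - phi + 1            # s = p + q
    disc = s * s - 4 * N       # (p - q)^2
    t = isqrt(disc)
    assert t * t == disc
    return (s - t) // 2, (s + t) // 2

# Toy example with the (hypothetical) primes p = 1009, q = 2003:
N, phi = 1009 * 2003, 1008 * 2002
print(factor_from_phi(N, phi))   # (1009, 2003)
```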
But, afaik, it has not been proven that you need \phi(N) to do discrete roots (mod N). If you have different information, please post it. That'd be interesting news (to me, anyways).
Alex
Re:Is factoring hard (Score:1)
"Breaking RSA may not be equivalent to factoring" [BV98]
Implications: Provides evidence that certain instances of RSA cannot be equivalent to the IFP. This is contrary to the belief by some that RSA and IFP are equivalent.
Both from:
http://www.scramdisk.clara.net/pgpfaq.html
You're wrong on both counts.
Re:Is factoring hard (Score:2, Insightful)
We stand on even sketchier ground than this. Travelling Salesman, Knapsack, and the "etc." referred to above are "NP-complete" problems -- they are the hardest problems in NP, the class of problems whose solutions can be checked in time polynomial in the length of the input. (For an example of a problem outside of NP: given a number in base 2, represent it in base 1, e.g. in tick marks. The output of this problem is O(2^n) for input of length n.)
Factoring is in NP; however, we don't know whether it is NP-complete.
How "hard" are these NP-complete problems? Nobody knows. Most "serious" folks hypothesize that polynomial-time solutions to these problems aren't possible on conventional computers (essentially, because lots of smart people have worked on it and haven't found one). However, nobody has proven this yet; and because of the equivalency of NP-complete problems, a solution to one can be transformed mechanically into solutions for all NP-complete problems.
So, to summarize: we don't know how hard the NP-complete problems are; and factoring can only be as hard as the NP-complete problems (and just might be easier). Treating NP-complete problems as "hard" is a reasonable working assumption, but it is just that.
Re:Is factoring hard (Score:1)
Forget about all this nth degree polynomial stuff, the REALLY Hard maths is made up by the government!
The /. story quotes the wrong part of the paper (Score:5, Interesting)
Re:The /. story quotes the wrong part of the paper (Score:2, Interesting)
Re:The /. story quotes the wrong part of the paper (Score:3, Insightful)
The Slashdot quote also fails to mention that the device would cost $5000 only for "large quantities". The initial cost to get to the stage of being able to produce that circuit is over $1 million. Granted, they also say that the previous initial cost using off-the-shelf hardware was $62 million, but neither are exactly in your average sysadmin's price range. Bearing in mind that these prices *only* cover the matrix step, the authors are right to conclude that 1024-bit keys are just as safe as everyone thought they were.
If your enemies have a sufficiently large budget to build this kind of thing, they'd almost certainly find it easier just to bribe someone with access to the information to reveal it, to physically attack some unencrypted version of the information, or to retrieve the keys (by, for example, bugging your keyboard).
And who has a sufficiently large budget? (Score:2)
Hi, I'm Uncle Sam. Won't you be my neighbor?
Re:The /. story quotes the wrong part of the paper (Score:2)
They do [slashdot.org].
It's 1.17, not 3.01... your keys less compromised (Score:4, Interesting)
Re:It's 1.17, not 3.01... your keys less compromis (Score:2, Informative)
Even if it is 3.01 times larger, that's still an effective strength of at least 1022 bits.
(For what it's worth, I've only read the abstract.)
Re:It's 1.17, not 3.01... your keys less compromis (Score:1)
Re:It's 1.17, not 3.01... your keys less compromis (Score:2)
So, while for us an algorithm which takes 1024 steps when fed with the number 1024 and 4096 steps when fed with the number 4096 would probably be considered O(n), they would consider that algorithm O(2^n) because it takes 1024 steps when run with an input of bit-length 10 and 4096 steps when run on input of bit-length 12.
You can usually tell from context which definition of time and space someone is using, but sometimes it gets confusing.
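A tiny numeric illustration of the two bookkeeping conventions described above (nothing NFS-specific, just the arithmetic):

```python
import math

# An algorithm doing exactly n basic steps looks like O(n) if n is the
# number itself, but like O(2^b) if b = log2(n) is the size of the input.
for n in (1024, 4096):
    b = int(math.log2(n))    # 10 and 12, as in the comment above
    print(f"n = {n}: {n} steps, input size {b} bits, 2**{b} = {2**b}")
```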
Re:It's 1.17, not 3.01... your keys less compromis (Score:1)
Though the statement, "You can usually tell from context which definition of time and space someone is using" just has to make me chuckle.
uh oh (Score:1)
Or they could just stop letting people work from home and stop letting their kids play MUDs on their secure terminal
Whee! Slashdot FUD (Score:2, Insightful)
Mesh routing is really the way to go (Score:2, Funny)
Well, I believe that mesh routing might just give us all the pluses without most or all of the minuses. First of all, it involves routing, which if you've paid attention to the formation of the Internet you'll quickly realize is a design that will lead to redundancy and reliability. More importantly, it is a mesh, which means that one end of the key is not necessarily tied to the other end. This should cut off many of the attacks that would have a chance of success on elliptic curves by way of its nature. Meshing also implies redundancy... there may be some size and speed tradeoffs here, but you can be certain you'll get your data back out of the cryptopot.
Bruce Schneier, a luminary in the field of cryptography and author of the book Applied Cryptography, has a web site here [slashdot.org].
Excellent troll (Score:1)
-- Brian
Dot and a Database (Score:1, Funny)
More Karma (Score:2)
Re:Mesh routing is really the way to go (Score:1, Informative)
Trust me, I will be watching for you in metamod.
Result doesn't imply weak keys (Score:2, Informative)
As they clearly state in the paper, "the security of RSA relies exclusively on the hardness of the relation collection step of the number field sieve." - The speedup that Bernstein proposed just makes the stuff surrounding that step a bit more efficient.
Cliff notes version (Score:5, Interesting)
1) it's not quite as fast as Bernstein estimated (about half as fast for cliff notes purposes)
2) the hardware could be affordable (others have claimed costs that are only feasible for governments)
3) you don't have to revoke all your RSA keys, because there are steps that precede the application of the Bernstein method that still take absurd amounts of time and horsepower.
Oh, yeah, and it has nothing to do with Sun's NFS (Network File System, a lame and usually insecure way to share files).
Bernstein will no doubt reply. He isn't a shy guy from my experience.
Re:Cliff notes version (Score:1)
Is qmail controversial?
Perhaps I'm coming into the game a bit late and don't know the entire history. I've used qmail and thought it was simple and cool; I didn't know what the feelings of the community were.
Re:Cliff notes version (Score:2)
By association. I think Dan Bernstein has a reputation for being outspoken about himself, his software and so on.
Qmail just inherited it.
Re:Cliff notes version (Score:5, Insightful)
Well I can only speak from a security standpoint, not for functionality, though I've heard that it has nearly all the same features as sendmail and is faster. However, I did a free-time security audit of Qmail to get an idea of how secure DJB's work was. I can say that, bar none, this guy is the best secure coder I've seen to date. Probably the best way to see this is in the fact that he uses all of his own routines to do memory management. In these routines his bounds checking is complete and he is extremely careful about signed/unsigned conversion bugs. Quite impressive.
Granted the guy is known for being a prick when people question his work (this is known as De Raadt Syndrome), such as this response to someone who supposedly found a hole in his mailing list software:
----
Here we go again: This advisory is fraudulent. My ezmlm 0.53 package
does not include anything called ezmlm-cgi.
Perhaps someone else's ezmlm-cgi program has a problem. But ezmlm-cgi
isn't part of ezmlm. I didn't write it. I haven't reviewed it. I don't
distribute it. I don't use it. I am not responsible for its bugs.
vort-fu was aware of these facts before he sent his advisory to bugtraq.
Why did he continue? Can this be adequately explained by stupidity?
---
The problem is that while he may be abrasive, he's always right. Bottom line: if you want secure software, stick with DJB.
Re:Cliff notes version (Score:2)
Re:Cliff notes version (Score:2)
is qmail controversial?
Qmail is probably secure. The controversial issues that I can think of are these:
Re:Cliff notes version (Score:5, Funny)
This post should be modded +4 Understated.
Re:Cliff notes version (Score:1, Offtopic)
I never used Sun's NFS, but I was planning to in the near future. So what's a nice and secure way to share files on a Unix network? Like if you want to mount home dirs and such.
Quazion
NFS file sharing (and alternatives) (Score:2)
Another post replying to yours said "samba". I'm sure Tridge will forgive me for pointing out that Samba mimics Microsoft's unbelievably kludgy implementation of IBM's NetBIOS protocol - the kludges being there to compensate for the fact that IBM designed NetBIOS for networks of 25 machines or less. Samba is a wonderful way to interconnect VMS and *nix machines with Microsoft clients, but other than that it's an abortion. Do not use it if you don't have to!
Another poster pointed out the reason for the "Usually" in my description of "usually insecure". If all your *nix boxes run the same version of the same vendor's latest implementation of NFS, it can be secured. In a truly heterogeneous environment, though, I have never seen anything but pain and suffering result from trying to implement secure NFS. And incidentally, NFS is inherently subject to denial of service attacks (so is NIS/YP), so you certainly can't depend on it if there are any unsecured hosts elsewhere on the network.
Once you decide to sacrifice "nice" and go for "secure", or vice-versa, your options get broader. Look into Coda [cmu.edu], Andrew, u9fs, or Styx, for example.
Re:Cliff notes version (Score:1)
Adi Shamir is the 'S' in RSA.
Re:Cliff notes version (Score:2)
"We have determined that it is incompatible with Sun's NFS, and that the license allows Bernstein to remove your ability to use the program merely by changing his web page."
Re:Cliff notes version (Score:2, Informative)
Background. To review the undisputed facts:
Carl Pomerance has pointed out that a slightly different conventional NFS implementation takes time L^(1.754...+o(1)) on a machine of size L^(1.006...+o(1)), or time L^(1.656...+o(1)) on a machine of size L^(1.104...+o(1)), but this is still vastly less cost-effective than a circuit for large numbers. The NFS cost ratio between conventional computers and circuits grows as a quite noticeable power of the cost itself.
There are two basic reasons for this improvement. First, for large numbers, using conventional computers (or TWINKLE) for NFS sieving is a bad idea, because the low-memory smoothness-testing circuits described in my paper are much more cost-effective. Second, for large numbers, using conventional computers for NFS linear algebra is a bad idea, because the linear-algebra circuits described in my paper are much more cost-effective.
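For readers puzzled by the L^(...) exponents above: assuming the standard convention in the factoring literature (which, as far as I can tell, is the one these papers use), L is the subexponential quantity

$$ L = L(N) = e^{(\ln N)^{1/3}\,(\ln\ln N)^{2/3}}, $$

so a cost like L^(1.754...+o(1)) grows far more slowly than any exponential in the bit-length of N, but far faster than any polynomial in it.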
3 versus 1.17: why there is a dispute. This Lenstra-Shamir-Tomlinson-Tromer paper observes, correctly, that reasonably fast low-memory smoothness-testing techniques have been known for decades. (Of course, my paper says the same thing.)
The Lenstra-Shamir-Tomlinson-Tromer paper then leaps to the incorrect conclusion that the cost-effectiveness of those techniques for large numbers was widely known before my paper.
If this fact was known before, why didn't it appear in the aforementioned Silverman and Lenstra-Shamir papers two years ago? Both of those papers were discussing the cost of state-of-the-art integer factorization.
As for the Coppersmith paper cited by Lenstra et al.: As mentioned in my paper, Coppersmith suggested ECM in a context where RAM sieving would have meant looking at many more numbers. He didn't observe that ECM was better than RAM sieving in other contexts. In fact, for the first step of his algorithm, he specifically suggested RAM sieving!
The simple fact is that papers before mine optimized their machines purely for operation count. The resulting machines are far less cost-effective than the circuits described in my paper.
The Lenstra-Shamir-Tomlinson-Tromer paper does not claim that circuits are less cost-effective (slower or larger) than stated in my paper. It also does not claim that conventional implementations are competitive. It does not claim that circuits are actually not so scary. What it claims is that we already had fairly scary circuits. This is revisionist history, not a technical dispute.
Simple versus optimized. My paper is a grant proposal. It explains a very simple circuit, enough to prove the L^(1.185...+o(1)),L^(0.790...+o(1)) result. It then briefly points out several ways that the circuit can be changed and improved. The objective of the grant is to find the most cost-effective circuit. The simplest circuit is certainly not the best one; finding the best one is a huge project.
Treating the simplest algorithm as if it were my final result, with every trivial improvement worth pointing out, is a serious misrepresentation of my work.
In particular, the use of mesh routing in this context is one of many obvious possibilities, and is already mentioned in my paper as an alternative to mesh sorting. Lenstra et al. claim that this is an ``improvement'' over my paper; that claim is false.
I described a counterclockwise mesh routing algorithm in an email message to one of those authors in mid-April, as part of explaining why conventional implementations aren't cost-effective. (On a 20x20 mesh: ``Take a batch of, say, 1000 hits generated by each processor; route the hits 19 steps down, then 19 steps right, then 19 steps up, then 19 steps left, to reach the proper memory locations. ... The point is that the time ratio grows linearly with the 20, while the cost ratio is simply 1 plus overhead. The overhead can be dramatically reduced with special-purpose circuitry, allowing the 20 to be raised by half as many orders of magnitude.'')
I am astonished that anyone would publish this obvious use of mesh routing as if it were an interesting new result.
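For those trying to picture the routing schedule quoted above, here is a toy model of one plausible reading of it: each hit moves toward its destination only during the phase whose direction helps, at most 19 steps per phase. This is purely illustrative -- it ignores the buffering, contention, and overhead a real circuit has to handle, and the 20x20 size, four 19-step phases, and 1000-hit batch are simply taken from the quote.

```python
import random

SIZE = 20                                      # 20x20 mesh, as in the quote
PHASES = [(1, 0), (0, 1), (-1, 0), (0, -1)]    # down, right, up, left

def route(src, dst):
    """Four-phase greedy routing; returns the number of steps taken."""
    r, c = src
    steps = 0
    for dr, dc in PHASES:
        for _ in range(SIZE - 1):              # at most 19 steps per phase
            if (dr and (dst[0] - r) * dr > 0) or (dc and (dst[1] - c) * dc > 0):
                r, c = r + dr, c + dc
                steps += 1
    assert (r, c) == dst                       # every hit reaches its cell
    return steps

random.seed(0)
hits = [((random.randrange(SIZE), random.randrange(SIZE)),
         (random.randrange(SIZE), random.randrange(SIZE)))
        for _ in range(1000)]                  # a batch of 1000 hits, per the quote
print("worst case:", max(route(s, d) for s, d in hits), "steps")   # never more than 38
```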
The balance between sieving and linear algebra. There are many parameters in NFS: dials that can be adjusted to affect the speed of the algorithm in complicated ways. The most important parameter is the ``prime bound,'' usually called y.
As y increases from ``tiny'' to ``standard,'' the cost of sieving drops down to a smallest possible value, while the cost of linear algebra rises. As y increases past standard, the cost of sieving starts rising again, and keeps rising, while the cost of linear algebra also rises.
In the context of TWINKLE, Lenstra and Shamir claimed that sieving speedups had limited value, because linear algebra by itself was a bottleneck. That claim is obviously false: one can always decrease y to reduce the cost of linear algebra until sieving and linear algebra are in balance.
The Lenstra-Shamir-Tomlinson-Tromer paper now makes the opposite claim, that linear-algebra speedups have limited value, because sieving by itself is a bottleneck. That claim is also false, at least for large numbers: the optimal value of y is substantially smaller than standard, as stated in my paper, so one can increase y to reduce the cost of sieving until sieving and linear algebra are in balance.
Perhaps someday we'll discover better linear-algebra methods. Perhaps the cost of linear algebra will become unnoticeable for standard values of y; then sieving by itself will be a bottleneck. However, as stated in my paper, the current situation for large numbers is that linear algebra is relatively difficult; as a consequence, when parameters are chosen properly, sieving and linear algebra are in balance.
Large numbers versus small numbers, and the cost of sieving. Figuring out that one NFS approach is much better than another for large numbers is much easier than figuring out the situation for small numbers, such as 1024 bits or 1536 bits.
Analogy: It's easy to prove that merge sorting is much faster than insertion sorting for large arrays. You don't have to worry about all the different variations of the sorting algorithms; these changes can make big differences in speed, but those differences are unnoticeable next to the gigantic speed gap between merge sorting and insertion sorting for large arrays. It's much more difficult to figure out the fastest method for small arrays, such as arrays of size 16.
The Lenstra-Shamir-Tomlinson-Tromer paper has one paragraph (Section 5.2) on the cost of sieving for a specific input size, namely 1024 bits. That paragraph estimates that RAM sieving would take about a year on 30 million large-memory PCs. It implicitly assumes that conventional RAM sieving is the most cost-effective approach to sieving for 1024-bit numbers. In particular, it implicitly assumes that circuits (mesh sorting, mesh routing, early-abort Pollard+ECM, etc.; see my paper) wouldn't be much less expensive. It concludes that sieving by itself is a bottleneck for 1024-bit numbers.
There is no justification for these assumptions and conclusions. The authors claim to understand, and to have understood for a decade, that conventional RAM sieving is much less cost-effective than circuits for large numbers; however, when faced with 1024-bit numbers, they don't even consider the possibility of 1024 being large and of conventional RAM sieving being the wrong approach. This is a gaping hole in the Lenstra-Shamir-Tomlinson-Tromer analysis.
Maybe special-purpose circuits will have a dramatic impact on the difficulty of 1024-bit and 1536-bit and 2048-bit factorizations. Maybe they'll have no impact at all. I don't know yet. I do know that ignoring the question doesn't help us find the answer.
Re:Cliff notes version (Score:1)
I suspect most readers of this forum are primarily concerned with appropriate key lengths for use with their SSH and PGP software. Some people think your sieve makes their 768 and 1024-bit keys obsolete.
I certainly hope some circuits based on your ideas get physically built, rather than merely modeled mathematically, so we can get some empirical data. Best of luck with the grant!
Off the shelf hardware?? (Score:1, Interesting)
The GX specs specifically state that they support 4.2 GB per second. They also state that memory latency is about 40ns. I checked Pricewatch and found "6ns" parts for pretty cheap. There are too many places where the calculations of how much time it takes say "at least", "probably" or "about". They might be right, but their "proof" consists of restating mathematical rules and estimations. They probably should have spent more time on actual calculations and proofs.
Re:Off the shelf hardware?? (Score:1)
Modern DRAM isn't exactly Random Access at all; you can get data out much faster if you read it sequentially than if you read it in random order.
Re:Off the shelf hardware?? (Score:4, Interesting)
As for the two technical points you mentioned:
> > The bandwidth of the fastest PC memory is 3.2GB/sec
> The GX specs specifically state that they support 4.2 GB per second.
Indeed, but both PC3200 (DDR400) and dual PC800 (RDRAM) have a bandwidth of 3.2GB/sec.
> I checked pricewatch and found at least 6ns for pretty cheap
These "6ns" parts do not have a 6ns random-access latency. For instance, check these figures [ntsi.com].
Re:Off the shelf hardware?? (Score:3)
Huh? (Score:2, Funny)
Wow, we can make The Matrix in under a day for a couple grand? Better start looking in the paper for real estate in Zion...
Somewhat On-topic: 1024+ bit C math library,where? (Score:2)
I have some factorization ideas I'd like to test without reinventing the wheel.
GNU MP, BigInteger.isProbablePrime(int) bug (Score:2)
Gnu MP [swox.com] for multi-precision numbers. They use the fastest known algorithms and hand-optimized asm on most platforms. I prefer to do crypto in Java and use the BigInteger class 'cause I'm lazy.
FYI, Sun's BigInteger.isProbablePrime(int) function is broken... don't use it. I was rather embarrassed when a colleague and I emailed some people about a possible bug in the Secure Remote Password protocol, only to discover the problem was a known bug in the JVM... which Sun refuses to fix. I don't know why they won't fix it. My personal implementation of the Miller-Rabin primality test runs faster and correctly identifies the SRP modulus as probably prime. (Look in Applied Cryptography by Schneier for how to code it up.)
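For reference, a bare-bones Miller-Rabin loop looks roughly like this -- a textbook sketch with random bases, not the poster's implementation and not whatever fixed-base variant the JVM uses:

```python
import random

def probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:              # write n - 1 as 2^s * d with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a witnesses that n is composite
    return True                    # no witness found: n is probably prime

print(probably_prime(2**127 - 1))  # True; 2^127 - 1 is a Mersenne prime
```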
GNU MP is good (Score:2)
However, it was certainly pretty damn fast, and I didn't come across any bugs.
And remember ... (Score:1)
Re:yeah... (Score:2, Informative)
I'd say it does:
a color-coded simulation of the mesh routing algorithm [weizmann.ac.il] (16MB)