Bandwidth Challenge Results
the 1st sandman writes "SC2005 published the results of several challenges, including one for bandwidth utilization. The winner (a Caltech-led team drawn from several institutes) was measured at 130 Gbps. On their site you can find more information on their measurements and the equipment they used. They claimed a throughput of several DVD movies per second. How is that for video on demand!"
home use (Score:2, Interesting)
farthings per furlong (Score:5, Interesting)
Bandwidth? (Score:1, Interesting)
Re:Probably not enough DVDs/sec (Score:2, Interesting)
Let's take this scenario. There are around 10,000 users watching the movie (that's an average; we're not talking Star Wars levels of popularity).
Each user needs at least 100 Mbps for decent viewing (even this is conservative; consider HDTV).
10,000 * 100 Mbps = 1,000,000 Mbps = 1,000 Gbps (taking 1 Gbps = 1,000 Mbps).
Now what are we looking at? Serving 10,000 people? Eh!
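A quick back-of-the-envelope sketch of that arithmetic (Python; the viewer count and per-stream rate are the figures assumed above, not measurements):

# Aggregate-bandwidth check for the scenario above.
# Assumptions from the comment: 10,000 viewers, 100 Mbps per stream,
# and 1 Gbps = 1,000 Mbps. None of these are measured figures.
viewers = 10_000
per_stream_mbps = 100

total_mbps = viewers * per_stream_mbps   # 1,000,000 Mbps
total_gbps = total_mbps / 1_000          # 1,000 Gbps

challenge_record_gbps = 130              # the SC2005 figure quoted above
print(f"Aggregate demand: {total_gbps:,.0f} Gbps")
print(f"Viewers a 130 Gbps link could serve at that rate: "
      f"{challenge_record_gbps * 1_000 // per_stream_mbps:,}")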
Missing infrastructure (Score:4, Interesting)
This is nothing but an impressive statistic until ISPs provide this kind of bandwidth into homes (the infamous "last mile" connection). Not to mention that even the fastest hard drives available to consumers can't write data this fast.
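For a rough sense of scale (a sketch only; the ~60 MB/s sustained write rate is an assumed figure for a circa-2005 consumer drive, not a benchmark), 130 Gbps works out to roughly 16 GB/s, far more than a single consumer disk can absorb:

# Compare the 130 Gbps challenge figure against one consumer hard drive.
# The 60 MB/s sustained write rate below is an assumption, not a benchmark.
link_gbps = 130
link_bytes_per_sec = link_gbps * 1e9 / 8   # ~16.25 GB/s on the wire

drive_bytes_per_sec = 60e6                 # assumed sustained write rate

print(f"Link: {link_bytes_per_sec / 1e9:.2f} GB/s")
print(f"Drives needed just to keep up: {link_bytes_per_sec / drive_bytes_per_sec:.0f}")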
They used Linux 2.6 kernel (Score:4, Interesting)
Impressive work, either way.
Gbps (Score:3, Interesting)
I'm looking through these charts and I'm not finding an important number: how far the signal can be sent at that rate before it starts dying. The need for repeaters could be what keeps this in vaporworld.
Re:They used Linux 2.6 kernel (Score:5, Interesting)
The problem with latency is that everyone lies about the figures. I talked to some of the NIC manufacturers and got quoted the latency of the internal logic, NOT the latency of the card as a whole, and certainly not the latency of the card when integrated. There was one excellent-sounding NIC - until you realized that the bus wasn't native but went through a whole set of layers to be converted into the native system, and that the latency of these intermediate steps, PLUS the latencies of the pseudo-busses it went through, never figured in anywhere. You then had to add in the latency of the system's bus as well. In the end, I reckoned that you'd probably get data out at the end of the week.
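A sketch of why the quoted numbers mislead (every figure below is an illustrative placeholder, not a vendor quote): the latency you actually see is the sum of every stage the data crosses, not just the NIC's internal logic.

# End-to-end latency is the sum of every stage in the path, not the one
# number the NIC vendor quotes. All values are made-up placeholders.
stages_us = {
    "NIC internal logic (the number vendors quote)": 1.0,
    "bus-conversion / bridge layers": 3.0,
    "pseudo-busses in between": 2.0,
    "host system bus": 1.5,
    "driver and OS stack": 5.0,
}

total = sum(stages_us.values())
for name, us in stages_us.items():
    print(f"{name:>50s}: {us:5.1f} us")
print(f"{'end-to-end (what you actually see)':>50s}: {total:5.1f} us")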
I also saw at SC2005 that the architectures sucked. The University of Utah was claiming that clusters of Opterons didn't scale much beyond 2 nodes. Whaaaa???? They were either sold some VERY bad interconnects, or used some seriously crappy messaging system. Mind you, the guys at the Los Alamos stand had to build their packet collation system themselves, as the COTS solution was at least two orders of magnitude too slow.
I was impressed with the diversity at SC2005 and the inroads Open Source had made there, but I was seriously disgusted by the level of sheer primitiveness of a lot of offerings, too. Archaic versions of MPICH do not impress me. LAM might, as would LAMPI. OpenMPI (which has a lot of heavy acceleration in it) definitely would. The use of OpenDX because (apparently) OpenGL is "just too damn slow" was interesting - but if OpenDX is so damn good, why hasn't anyone maintained the code in the past three years? (I'd love to see OpenGL being given some serious competition, but that won't happen if the code is left to rot.)
Microsoft - well, their servers handed out cookies. Literally.
92 Tbits/sec via Cisco gear about 18 months ago (Score:3, Interesting)
Ex-MislTech