
Bandwidth Challenge Results

the 1st sandman writes "SC2005 has published the results of several challenges, including bandwidth utilization. The winner (a Caltech-led team of several institutes) was measured at 130 Gbps. On their site you can find more information on their measurements and the equipment they used. They claimed a throughput of several DVD movies per second. How's that for video on demand!"
  • home use (Score:2, Interesting)

    by Chubby_C ( 874060 ) on Wednesday November 23, 2005 @11:48PM (#14105766)
    how long before these ultra-high-speed networks are rolled out to home users?
  • by Anonymous Coward on Thursday November 24, 2005 @12:06AM (#14105849)
    Or Libraries of Congress per second. DVDs per second isn't a useful rate unless you're transferring lots of DVDs in series, which few people do. The more interesting bandwidth unit is "simultaneous DVDs", multiples of 1.32 MB/s, 1x DVD speed [osta.org] (9x CD speed). 130 Gbps is about 16.25 GB/s, which works out to roughly 12,300 simultaneous 1x DVD streams: an audience could watch some 12,000 different DVDs on demand over that pipe. That's probably enough for a mid-sized American city to have fully interactive TV.
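The unit conversion above can be checked in a few lines (a minimal sketch; the 1.32 MB/s figure for 1x DVD speed comes from the comment, and the link rate is assumed to be 130 Gbit/s as reported, with decimal SI prefixes throughout):

```python
# Convert a link rate in gigabits per second into simultaneous 1x DVD streams.
# Assumes 1x DVD read speed = 1.32 MB/s (per the comment) and decimal prefixes.

DVD_1X_BYTES_PER_SEC = 1.32e6  # 1x DVD read speed, bytes/s

def simultaneous_dvd_streams(link_gbps: float) -> int:
    link_bytes_per_sec = link_gbps * 1e9 / 8  # bits -> bytes
    return int(link_bytes_per_sec / DVD_1X_BYTES_PER_SEC)

print(simultaneous_dvd_streams(130))  # 12310 streams at 130 Gbit/s
```

Note the bits/bytes distinction matters here: treating 130 Gbps as 130 GB/s, as the original comment did, inflates the stream count by a factor of eight.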
  • Bandwidth? (Score:1, Interesting)

    by Anonymous Coward on Thursday November 24, 2005 @12:10AM (#14105863)
    I'm more interested in the media they'd write to at those speeds.
  • by Varun Soundararajan ( 744929 ) on Thursday November 24, 2005 @12:11AM (#14105875) Homepage Journal
    The math on the page is very approximate.

    Let's take this scenario. There are around 10,000 users watching a movie (that's an average; we're not looking at Star Wars levels of popularity).

    Each user needs at least 100 Mbps for average viewing (even this is conservative if you consider HDTV).

    10,000 * 100 Mbps = 1,000,000 Mbps = 1,000 Gbps (taking 1 Gbps = 1,000 Mbps)

    Now what are we looking at? At 130 Gbps, serving only about 1,300 people? Eh!
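The back-of-envelope arithmetic above can be sketched directly (the 100 Mbit/s per viewer figure is the comment's own assumption, not a measured streaming rate):

```python
# How many concurrent viewers fit on a given link at a fixed per-viewer rate?
# Assumes 100 Mbit/s per HDTV-quality stream, per the comment above.

PER_VIEWER_MBPS = 100  # assumed per-stream rate, Mbit/s

def viewers_supported(link_gbps: float) -> int:
    return int(link_gbps * 1000 / PER_VIEWER_MBPS)  # Gbit -> Mbit, then divide

print(viewers_supported(130))               # 1300 viewers on a 130 Gbit/s link
print(10_000 * PER_VIEWER_MBPS / 1000)      # 1000.0 Gbit/s needed for 10,000 viewers
```

So under these assumptions, serving 10,000 such viewers would need roughly 1 Tbit/s, about eight times the record pipe.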

  • by aktzin ( 882293 ) on Thursday November 24, 2005 @12:15AM (#14105891)
    "They claimed they had a throughput of several DVD movies per second. How is that for video on demand!"

    This is nothing but an impressive statistic until ISPs provide this kind of bandwidth into homes (the infamous "last mile" connection). Not to mention that even the fastest hard drives available to consumers can't write data this fast.

  • by RedBear ( 207369 ) <redbear.redbearnet@com> on Thursday November 24, 2005 @12:44AM (#14105980) Homepage
    Here I was expecting to read about one of the BSDs again (like when they used NetBSD to break the Internet2 Land Speed Record), but it looks like this time they used an "optimized Linux (2.6.12 + FAST + NFSv4) kernel". I'm not well informed on speed records held by various versions of the Linux kernel, so maybe someone else can tell us whether this is something special for Linux or more run-of-the-mill. I had the impression that professional researchers usually prefer the BSDs for this kind of work. Will this put Linux on the map for more high-end research like this?

    Impressive work, either way.
  • Gbps (Score:3, Interesting)

    by Douglas Simmons ( 628988 ) on Thursday November 24, 2005 @12:58AM (#14106043) Homepage
    Yeah, 130 Gbps sounds super-duper fast, but a few DVDs a second on a backbone spreads pretty thin when everyone and their mother is BitTorrenting their ass off.

    I'm looking through these charts and I'm not finding an important number: how far the signal can be sent at that rate before it starts degrading. Repeaters could be what's keeping this in vaporworld.

  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Thursday November 24, 2005 @02:45AM (#14106385) Homepage Journal
    Already is [lightfleet.com]. From the looks of Lightfleet, and some of the other people at SC2005 who didn't have tables but DID have information, Linux is being taken very seriously in the bandwidth arena.


    The problem with latency is that everyone lies about the figures. I talked to some of the NIC manufacturers and got quoted the latency of the internal logic, NOT the latency of the card as a whole, and certainly not the latency of the card when integrated. There was one excellent-sounding NIC - until you realized that the bus wasn't native but went through a whole set of layers to be converted into the native system, and that the latency of these intermediate steps, PLUS the latencies of the pseudo-busses it went through, never figured in anywhere. You then had to add in the latency of the system's bus as well. In the end, I reckoned that you'd probably get data out at the end of the week.
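The point about quoted versus real latency is just component accounting: the end-to-end figure is the sum of every stage the data crosses, not the one stage the vendor quotes. A toy stack-up illustrates it (every number below is hypothetical, invented for illustration, not a measurement of any real NIC):

```python
# End-to-end latency is the sum of every stage the data crosses,
# not just the NIC's internal logic that vendors like to quote.
# All figures are hypothetical, in microseconds.

stages = {
    "nic_internal_logic": 1.0,        # the figure a vendor quotes
    "bus_protocol_conversion": 4.0,   # non-native bus adaptation layers
    "intermediate_pseudo_bus": 3.5,   # each pseudo-bus hop adds its own delay
    "host_system_bus": 2.5,           # the system bus itself
}

quoted = stages["nic_internal_logic"]
actual = sum(stages.values())
print(f"quoted: {quoted:.1f} us, end-to-end: {actual:.1f} us "
      f"({actual / quoted:.0f}x worse)")
```

With these made-up figures the honest number is 11x the quoted one, which is the commenter's complaint in miniature.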


    I also saw at SC2005 that the architectures sucked. The University of Utah was claiming that clusters of Opterons didn't scale much beyond 2 nodes. Whaaaa???? They were either sold some VERY bad interconnects, or used some seriously crappy messaging system. Mind you, the guys at the Los Alamos stand had to build their packet collation system themselves, as the COTS solution was at least two orders of magnitude too slow.


    I was impressed with the diversity at SC2005 and the inroads Open Source had made there, but I was seriously disgusted by the level of sheer primitiveness of a lot of offerings, too. Archaic versions of MPICH do not impress me. LAM might, as would LAMPI. OpenMPI (which has a lot of heavy acceleration in it) definitely would. The use of OpenDX because (apparently) OpenGL is "just too damn slow" was interesting - but if OpenDX is so damn good, why hasn't anyone maintained the code in the past three years? (I'd love to see OpenGL being given some serious competition, but that won't happen if the code is left to rot.)


    Microsoft - well, their servers handed out cookies. Literally.
