Networking IT

Bandwidth Challenge Results

the 1st sandman writes "SC2005 published the results of several challenges, including bandwidth utilization. The winner (a Caltech-led team of several institutes) was measured at 130 Gbps. On their site you can find more information on their measurements and the equipment they used. They claimed a throughput of several DVD movies per second. How is that for video on demand!"
  • home use (Score:2, Interesting)

    by Chubby_C ( 874060 )
    how long before these ultra-high-speed networks are rolled out to home users?
    • Well, if past is prologue, you're looking at about a decade. Kinda sorta. How many home users are actually using GigE now?
    • When there exists a business model that profits from rolling out these to the home users.
    • I'd be happy if they just got these big pipes close enough to my house that everybody could simultaneously pull 10 mbps. That's about enough for 2 simultaneous video streams, with very good quality assuming a decent codec.

      My Comcast service already hits 4 mbps whenever I ask it to, so it feels within reach but I guess we'll see.

    • Oh, you can have it now, but first make some space in your basement for this [caltech.edu] equipment. :) Better ask your wife first.....
  • LOC'ed in. (Score:2, Funny)

    by Anonymous Coward
    "They claimed they had a throughput of several DVD movies per second. How is that for video on demand!""

    How many Libraries of Congress is that?
  • Sponsors? (Score:5, Funny)

    by simpleguy ( 5686 ) on Wednesday November 23, 2005 @10:49PM (#14105772) Homepage

    The Bandwidth Challenge, sponsored by the good fellows at the MPAA and RIAA. I think they forgot to put their logos on the sponsor page.

  • by xtal ( 49134 ) on Wednesday November 23, 2005 @10:53PM (#14105791)
    I love arbitrary metrics...


    They claimed they had a throughput of several DVD movies per second. How is that for video on demand!"


    Given you might need to serve a few thousand people an hour (or more?), I'd say it's still got a while to go. Kinda sobering, when you think about it. Shiny discs and station wagons are going to be around for a while.
    • That's whole movies per second they are talking about. So even at only 2 movies per second, that's 7,200 people getting complete movie downloads per hour.
      • 7,200 movies per hour won't even cover a small city during peak usage. How many cable companies do you know of that have at most 8k clients? Assume for every household with the TV off there is another with 2 sets on different channels.
        • 7200 movies per hour won't even cover a small city during peak usage.
          I have a crazy idea: maybe the cable company could use more than one network cable.

          Besides, for the foreseeable future video on demand will be pay per view, so the number of simultaneous users will be far fewer than the number of households.

        • by Anonymous Coward
          You retards.

          You don't have them download the entire fucking DVD in one second or in one hour. Who, other than nerds, wants to fill up their hard drives with movies that they can simply watch at any time over the internet for a small subscription fee?

          You stream it to them.

          On a DVD movie the HIGHEST bitrate you're going to see is around 10Mbps.

          If you had a 130Gbps pipe... that would allow you to serve 13 thousand customers on one connection, and that is at the highest quality setting available on DVD movies nowadays
          • Our cable co.'s VoD service has Pause & Rewind etc.

            You get random access to the whole movie for 24 hours for about $4.

            A good deal, I think, and I regularly watch a movie.

        • Well, that depends on the architecture. If every set-top box has a BitTorrent client, it could help load popular shows onto others in the neighbourhood without leaving your local exchange. Plus, someone will crack how to do this with live video feeds at some stage.
        • Note that that's transmitting the *whole* dvd for download.

          VOD doesn't do that - it streams it in realtime, so you're talking about being able to serve many tens of thousands of customers simultaneously.

          Take into account multicast and align each 'broadcast' to a minute granularity (so you only need 90 simultaneous streams of the most popular movies to serve everyone - see the sketch below) and there's more than enough bandwidth to scale to even the largest city.

          Even if you were wanting to download the whole DVD to a hard disk (a
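
          A back-of-the-envelope sketch in Python of the staggered-multicast idea (film length, bitrate, and start interval are my assumptions, not measured numbers):

              # One new multicast stream of each film starts every minute, so a
              # viewer waits at most a minute and server load stays fixed no
              # matter how many people tune in.
              MOVIE_MINUTES = 90        # assumed feature length
              STREAM_MBPS = 10          # near the top DVD bitrate
              START_INTERVAL_MIN = 1    # one fresh start per minute

              streams_per_film = MOVIE_MINUTES // START_INTERVAL_MIN    # 90
              mbps_per_film = streams_per_film * STREAM_MBPS            # 900

              PIPE_GBPS = 130
              print(PIPE_GBPS * 1000 // mbps_per_film)  # ~144 different films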
          • Don't forget caching, particularly predictive caching (my very own research area, as it happens). You can get some nice clues from the UI before someone starts playing the DVD. For one thing, you could have the first minute of every film cached in a local (where local is up to two hops away) device, and stream that at the start. Also, as soon as a user starts reading the description of a film you could join the correct multicast group (use some kind of historical access prediction to tell whether this p
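
            A minimal sketch of what that could look like (all names here are hypothetical stand-ins, not a real set-top API):

                # Pre-seed the first minute of every film on a nearby device and
                # start prefetching as soon as a user opens a description page.
                class EdgeStore:
                    """Stand-in for a cache at most two hops from the viewer."""
                    def __init__(self):
                        self.intros = set()
                    def has_intro(self, film):
                        return film in self.intros
                    def fetch_first_minute(self, film):
                        self.intros.add(film)  # real code would pull ~75 MB

                def on_browse(store, history, user, film):
                    if not store.has_intro(film):
                        store.fetch_first_minute(film)  # hide startup latency
                    if history.get((user, film), 0.0) > 0.5:
                        pass  # here: join the film's multicast group early

                store = EdgeStore()
                on_browse(store, {("alice", "Dune"): 0.8}, "alice", "Dune")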
    • Assuming 2 DVDs per second, that's around 7,200 DVDs in 1 hour. Assuming streaming, a movie takes around 1.5 hours, which works out to around 10,800 users per 1.5-hour window. I'd say it's feasible.
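
      The same arithmetic as a quick Python check (2 DVDs/s and 1.5 h films are the assumptions above):

          DVDS_PER_SECOND = 2
          FILM_HOURS = 1.5
          # Streaming means a transfer slot is reused every film length,
          # so concurrent viewers = transfers completed per film length.
          print(int(DVDS_PER_SECOND * 3600 * FILM_HOURS))  # 10800 viewers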
    • The math on the page is very approximate.

      Let's take this scenario: there are around 10,000 users watching the movie (that's an average; we are not looking at Star Wars levels of popularity).

      Each user needs at least 100 mbps for an average viewing (a generous figure, even for HDTV).

      10,000 * 100 mbps = 1,000,000 mbps = 1,000 gbps (taking 1 gbps = 1,000 mbps)

      Now what are we looking at? Serving 10,000 people? Eh!

    • Given you might need to serve a few thousand people an hour (or more?), I'd say it's still got awhile to go.

      "Several per second" is equivalent to "a few thousand an hour."

    • This can actually be solved with a multicast network: the sender only has to send out the same amount of traffic, but to a potentially unlimited number of users/hosts. Much like how satellite works, the server sending the stream doesn't need to know who the receivers are.
      • Satellites work because anyone can pick up the signal.
        Satellite ---> World

        The internet uses packets.
        Packets go from Point A ---> Point B

        If you want another person to receive the aforementioned packets...
        someone, somewhere has to send those packets to Person C

        And since we usually want to authenticate the person receiving the stream... well, you see why it's not so simple.

        I'm just spouting at the mouth here, so if I'm wrong, feel free to correct me in a technical manner.
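
        For what it's worth, joining a multicast group really is that simple at the socket level. A minimal Python sketch (the group address and port are arbitrary, and encrypting the stream would handle the authentication worry, much like satellite conditional access):

            import socket
            import struct

            GROUP, PORT = "224.1.1.1", 5007   # any 224.0.0.0/4 address

            # Sender: one socket, no idea who or how many will listen.
            def send(data):
                s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
                s.sendto(data, (GROUP, PORT))

            # Receiver: the group join asks routers to copy packets our way.
            def receive():
                s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                s.bind(("", PORT))
                mreq = struct.pack("4sl", socket.inet_aton(GROUP),
                                   socket.INADDR_ANY)
                s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
                return s.recv(1500)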
    • If you send just one DVD per second, you are sending a total of 3,600 DVDs per hour. Since each user doesn't have to have the whole hour-long DVD on their computer to watch it with streaming, you can simultaneously serve 3,600 people hour-long streaming DVD movies.

      But they said several. Let's guess three: 10,800 people. Wikipedia says that in the 2000 census, the US was reported to have 281,421,906 people. If everyone watched their own movie simultaneously, you would need 26,058 of these servers. Now, peo
  • I'm very happy to see that second place went to my coworkers at PNNL. I don't know about Caltech's entry, but PNNL's went to disk as well. Impressive feats all around.

    I don't want to denigrate the Caltech crew either, as I know Stephen Low and find that he's one of the nicest guys I've ever gotten to work with.
  • Mr. Phelps, (Score:3, Funny)

    by DPL ( 215366 ) on Wednesday November 23, 2005 @11:01PM (#14105820)
    Your mission, should you choose to accept it, is to invoke the power of /.

    This packet will self-destruct in 8..7..6..5..
  • by Dark Paladin ( 116525 ) * <jhummel@jo[ ]ummel.net ['hnh' in gap]> on Wednesday November 23, 2005 @11:02PM (#14105830) Homepage
    Don't tell the MPAA. They already tell people you can download an entire DVD movie over a 56K phone link in 15 minutes; imagine how much money they'd claim they lose per second with this new high-speed connection!
  • In other news... (Score:1, Redundant)

    by jleq ( 766550 ) *
    They claimed they had a throughput of several DVD movies per second.
    In other news, the MPAA released a statement today saying...
  • by Electr!c_B4rd_Qu!nn ( 933533 ) <bobbyboy_70.hotmail@com> on Wednesday November 23, 2005 @11:05PM (#14105847) Journal
    "They've Gone Plaid!"
  • by Anonymous Coward on Wednesday November 23, 2005 @11:06PM (#14105849)
    Or Libraries of Congress per second. DVDs per second isn't a useful rate unless you're transferring lots of DVDs in a series, which few people do. The much more interesting bandwidth unit is "simultaneous DVDs", multiples of 1.32 MBps, 1x DVD speed [osta.org] (9x CD speed). 130 Gbps is about 16.25 GBps, something like 12 KDVDs, which means an audience could watch twelve thousand different DVDs on demand simultaneously over that pipe. That's probably enough for a decent-sized American city to have fully interactive TV.
  • by craznar ( 710808 ) on Wednesday November 23, 2005 @11:09PM (#14105859) Homepage
    ... you could transfer the entire library of quality hollywood movies in 4 seconds.

    What do we do next ?
    • If you can't find more than a couple films on this list [imdb.com] then you just don't like film. Hollywood might put out a lot of crap, but there are enough diamonds in there to pull out a handful of movies a year which are really good, not to mention all the fun, crappy filler like action flicks =)
  • Bandwidth? (Score:1, Interesting)

    by Anonymous Coward
    I'm more interested in the media they'd write to at those speeds.
    • Probably one of those RAM "hard drives" I saw in a Slashdot article a while back. IIRC they have 4GB capacity max right now (four 1 gig sticks of RAM).

      While googling in an attempt to find what I was thinking about, I found this article from a year ago about a HUGE one of these bought by the US government for 'database crosschecking' (Spying on people in real time, for those of you wearing your tinfoil hats)

      http://www.techworld.com/storage/news/index.cfm?NewsID=1176 [techworld.com]

      Enjoy.
    • I had a (very small) amount of code in the Argonne entry, and they were writing to disk. (In fact, I hear that's why Argonne's number was low - the disks they were supposed to write to were double-booked, so they had to find a replacement set of destination servers, and it wasn't nearly as good as the original plan.)

      The bandwidth challenge used to be about copying from /dev/zero on one machine to /dev/null on another, or maybe even writing to a big ram disk, but now it's end to end - how fast can you get dat
  • They claimed they had a throughput of several DVD movies per second.

    That's nice, but what is it in Libraries of Congress per microfortnight?

    • Hmmm,

      1 Library of Congress = 10 TB (LOC is base 10) = 10^13 bytes = 8 * 10^13 bits
      1 fortnight = 1,209,600 seconds
      1 microfortnight = 1,209,600 * 10^-6 = 1.2096 seconds

      So in one microfortnight, a 130 Gbps (130 * 10^9 bps) pipe moves: 130 * 10^9 * 1.2096 = 1.57248 * 10^11 bits

      Which is:

      1.57248 * 10^11 / (8 * 10^13) = 0.0019656 LOC/mFtnght
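
      Or, double-checking in Python with decimal units throughout:

          LOC_BITS = 10e12 * 8                     # 10 TB, base 10
          MICROFORTNIGHT = 14 * 24 * 3600 / 1e6    # 1.2096 seconds
          RATE = 130e9                             # 130 Gbps, decimal

          print(RATE * MICROFORTNIGHT / LOC_BITS)  # ~0.00197 LOC/mFtnght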
  • That was a credit to the Hudson Bay Fan Company for keeping all that smoking data cool.

    But don't tell the RIAA or the MPAA; they'll have a press release out tonight about how much they lost to piracy. But I'll bet it's never crossed their minds that if they'd quit treating the customer like a thief, and give him an honest hour's entertainment for an honest hour's wages, plus letting us see how much the talent got out of that, we'd be a hell of a lot happier when we do fork over.

    We don't like the talent t
  • by aktzin ( 882293 ) on Wednesday November 23, 2005 @11:15PM (#14105891)
    "They claimed they had a throughput of several DVD movies per second. How is that for video on demand!"

    This is nothing but an impressive statistic until ISPs provide this kind of bandwidth into homes (the infamous "last mile" connection). Not to mention that even the fastest hard drives available to consumers can't write data this fast.

    • It doesn't have to go that far. If all of the backbone peers have super-fast connections, that alone can speed things up.

      100s of millions of people at 5Mbps == a heck of a lot of load.

      Though yeah, Gbps to the home would be nice...

      Tom
    • While the lack of "last-mile" bandwidth is a concern, I don't think it's a killer.

      You might not be able to say "I want to watch a movie right now" and get it on demand, but what's wrong with a Netflix-like queue of films you would like to see and a "trickle" download system?

      You could list 5 films you liked, and the system could merrily go off and trickle-download 20 films that people who also liked the first 5 liked. You don't have to watch them all, or even pay for them.

      HDD space is becoming less and less of a
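
      A sketch of the trickle idea as a rate-limited HTTP download (the URL, rate, and chunk size are placeholders):

          import time
          import urllib.request

          def trickle_fetch(url, dest, rate_bytes=64000, chunk=16384):
              # Cap average throughput so an overnight queue never
              # competes with interactive traffic on the same line.
              with urllib.request.urlopen(url) as src, open(dest, "wb") as out:
                  while True:
                      t0 = time.monotonic()
                      data = src.read(chunk)
                      if not data:
                          break
                      out.write(data)
                      budget = len(data) / rate_bytes   # seconds this chunk earns
                      spare = budget - (time.monotonic() - t0)
                      if spare > 0:
                          time.sleep(spare)             # sleep off saved time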
    • Let's not get into a discussion where we have to point out that the various American telephone companies have been promising for years (and collecting fees all the while) to run fiber to homes for the last mile.
  • If only I had a HDD big enough to download the internet auuuuuuuugh downloading porn at the speed of liiiiiiiiight!
  • by davidwr ( 791652 ) on Wednesday November 23, 2005 @11:37PM (#14105962) Homepage Journal
    Imagine you "owned" O'Hair Int'l and the Atlanta airport, two of the busiest airports in America.

    Imagine you had as many big planes as possible taking off from each airport and landing at the other every day.

    Imagine they were all filled with hard disks or DVDs.

    Now THAT is a lot of bandwidth.

    Latency sucks though.

    The moral of the story:
    Bandwidth isn't everything.
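
    The comparison is easy to put numbers on (plane and disk figures below are pure guesses for illustration):

        DISKS = 20000            # assumed: palletized 3.5" drives per plane
        GB_PER_DISK = 300
        FLIGHT_HOURS = 2         # roughly O'Hare to Atlanta

        bits = DISKS * GB_PER_DISK * 8e9
        seconds = FLIGHT_HOURS * 3600
        print(bits / seconds / 1e9, "Gbps")  # ~6667 Gbps of raw throughput
        print(seconds, "s latency")          # vs ~0.02 s for a packet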
  • by RedBear ( 207369 ) <`redbear' `at' `redbearnet.com'> on Wednesday November 23, 2005 @11:44PM (#14105980) Homepage
    Here I was expecting to read about one of the BSDs again (like when they used NetBSD to break the Internet2 Land Speed Record), but it looks like this time they used an "optimized Linux (2.6.12 + FAST + NFSv4) kernel". I'm not well informed on speed records held by various versions of the Linux kernel, so maybe someone else can tell us whether this is something special for Linux or more run-of-the-mill. I had the impression that professional researchers usually prefer the BSDs for this kind of work. Will this put Linux on the map for more high-end research like this?

    Impressive work, either way.
    • by jd ( 1658 ) <<moc.oohay> <ta> <kapimi>> on Thursday November 24, 2005 @01:45AM (#14106385) Homepage Journal
      Already is [lightfleet.com]. From the looks of Lightfleet, and some of the other people at SC2005 who didn't have tables but DID have information, Linux is being taken very seriously in the bandwidth arena.


      The problem with latency is that everyone lies about the figures. I talked to some of the NIC manufacturers and got quoted the latency of the internal logic, NOT the latency of the card as a whole, and certainly not the latency of the card when integrated. There was one excellent-sounding NIC - until you realized that the bus wasn't native but went through a whole set of layers to be converted into the native system, and that the latency of these intermediate steps, PLUS the latencies of the pseudo-busses it went through, never figured in anywhere. You then had to add in the latency of the system's bus as well. In the end, I reckoned that you'd probably get data out at the end of the week.


      I also saw at SC2005 that the architectures sucked. The University of Utah was claiming that clusters of Opterons didn't scale much beyond 2 nodes. Whaaaa???? They were either sold some VERY bad interconnects, or used some seriously crappy messaging system. Mind you, the guys at the Los Alamos stand had to build their packet collation system themselves, as the COTS solution was at least two orders of magnitude too slow.


      I was impressed with the diversity at SC2005 and the inroads Open Source had made there, but I was seriously disgusted by the level of sheer primitiveness of a lot of offerings, too. Archaic versions of MPICH do not impress me. LAM might, as would LAMPI. OpenMPI (which has a lot of heavy acceleration in it) definitely would. The use of OpenDX because (apparently) OpenGL is "just too damn slow" was interesting - but if OpenDX is so damn good, why hasn't anyone maintained the code in the past three years? (I'd love to see OpenGL being given some serious competition, but that won't happen if the code is left to rot.)


      Microsoft - well, their servers handed out cookies. Literally.

      • The use of OpenDX because (apparently) OpenGL is "just too damn slow" was interesting - but if OpenDX is so damn good, why hasn't anyone maintained the code in the past three years? (I'd love to see OpenGL being given some serious competition, but that won't happen if the code is left to rot.)

        Perhaps OpenDX is significantly more complex to write for, but executes more efficiently for the job at hand.
    • Some BSDs were considered, but would probably not make too big of a difference directly. Most of this was more about getting individual nodes working together than raw bandwidth out of a single box. A lot of the tools used were intended for Linux and were otherwise kind of untested or not really configured yet for use on one of the BSDs (although I think that may be looked into soon). In the end, with each node pushing out about 940-950 Mbps on a 1 Gbps connection, there is not too much more to squeeze o
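
      That 940-950 Mbps ceiling falls out of per-frame overhead; a quick check, assuming standard 1500-byte-MTU Ethernet and no TCP options:

          MTU = 1500
          TCP_IP = 20 + 20              # IPv4 + TCP headers
          ETH = 14 + 4 + 8 + 12         # header + FCS + preamble + gap

          payload = MTU - TCP_IP        # 1460 bytes of goodput per frame
          wire = MTU + ETH              # 1538 bytes occupied on the wire
          print(1000 * payload / wire)  # ~949 Mbps on gigabit Ethernet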

    • Just a couple of comments, since I was on the team:

      1) Only some of the members used patched 2.6.12.x kernels. I used XFS-patched Scientific Linux (i.e., RHELv3-compatible) 2.4.21-37 kernels on my nodes (21 senders, 41 receivers), so nothing terribly special.

      2) It wasn't 131Gbps over a single link: we had 22 10Gbps links into and out of the routers in our booths on the show floor that were measured simultaneously. The links were full duplex, so the maximum theoretical bandwidth in and out was 440Gbps. Ho
  • hahah (Score:3, Funny)

    by ikea5 ( 608732 ) on Wednesday November 23, 2005 @11:50PM (#14106009)
    good luck slashdotting this one!
  • Rumor has it that Sony is planning on releasing the next version of their rootkit for this infrastructure. The new version will allow users to call a simple API. I, for one, thank Sony for inventing a disabler of any new threatening technology.

    connectionListener();
    sendSpam();
    startDOS();

  • Gbps (Score:3, Interesting)

    by Douglas Simmons ( 628988 ) on Wednesday November 23, 2005 @11:58PM (#14106043) Homepage
    Yeah, 130Gbps sounds super-duper fast, but seven DVDs a second on a backbone spreads out pretty thin when everyone and their mother is bittorrenting their ass off.

    I'm looking through these charts and I am not finding an important number, how far the signal can be sent at that rate before it starts dying. Repeaters could be responsible for keeping this in vaporworld.

    • I'm looking through these charts and I am not finding an important number, how far the signal can be sent at that rate before it starts dying. Repeaters could be responsible for keeping this in vaporworld.

      How much money do you have? That's the limiting factor. The hardware is available, it's just very expensive. There are fiber optic amplifiers that boost the signal level without having to demodulate it and regenerate it.

    • They measured the bandwidth rates at the show floor interconnect. This was not a toy demonstration; the endpoints of the connections were thousands of miles away, and they used real data with real physics applications. The total bandwidth is not the number that impresses me, but the percentage of available bandwidth actually used is pretty impressive.

      _ Booker C. Bense
  • The L1 cache on a P4 has a latency of 2 [or 3] cycles, which yields one 16-byte read every 2 cycles, or about 1.6GHz * 128 = 204.8Gbps :-)

    j/k
    Tom
  • So the actual speed was faster.
  • Why not TeraGoatsesPerSecond to measure the hole bandwidth so that we can more easily visualize it?

    And, at least the size is standard.
  • PNG? (Score:3, Funny)

    by Avisto ( 933587 ) on Thursday November 24, 2005 @01:46AM (#14106390)
    130Gbps and they still use JPEG to (badly) compress their graphs.
  • They claimed they had a throughput of several DVD movies per second.


    That's almost as fast as the movie industry is generating crappy movies to download!
  • This competition becomes less and less relevant, as it was meant to drive better ways to utilize the network and to find applications that use that bandwidth. For utilization, as long as you've got enough CPU and parallelization, you can fill the pipe; the last few years of this competition have been just doubling up CPUs, finding a big long pipe, pumping it through, wash/rinse/repeat. And for applications, VoD and the like cannot be done because of the last mile, not because of a lack of technology. I'm not sure if the
  • So how long would it take for me to use up my monthly quota of 10Gig with bandwidth like that... ;)
  • It would be nice to know what each group of hardware is doing in this setup. What purpose do all the different servers have on the system? Also, there is a lot of storage in this setup, but it's spread all over the place. They have 4x300GB hard drives in each of the 30 dual Opterons, one 36.4GB hard drive in each of the 40 HP servers, 24 hard drives on the Sun server, more hard drives on the IBMs, and even more on the Nexsan SATABeast. Any idea what each cluster of servers does?
  • They claimed they had a throughput of several DVD movies per second. How is that for video on demand!

    I can't answer the question until I know WHICH several movies.
    • They were most likely using the storage capacity of a DVD disc as a reference, probably referring to the single-sided, single-layer variety, which is around 4.7 gigs.
      • I think he was asking if the movies were bombs like Gigli, Battlefield Earth: A Saga of the Year 3000, Street Fighter, Armageddon... you get the idea

        I pulled those movies from here [imdb.com] and here [maximonline.com]
        (The Maxim list loses most of its credibility by rating Dune as one of the worst movies ever.)
  • God, I really hope a spammer doesn't grab one of these....
