Internet2 Gets a New Backbone 175

wrong_fuel writes "A few of you know that Internet2 and NLR (National Lambda Rail) have been in talks for some time regarding a merger of the two networks. Those talks have fallen apart, and Internet2's contracts with Qwest Communications had already been allowed to lapse. Internet2 has now reached an agreement with an unnamed carrier for its next generation backbone. The new network will likely be named later this year (the old one was referred to as "Abilene") and current member Universities will be migrated off of Abilene by September 2007."
  • odds on.. (Score:5, Interesting)

    by yakumo.unr ( 833476 ) on Wednesday April 26, 2006 @05:42AM (#15203285) Homepage
    What's the odds it's Google with all that dark fiber?
    • "Whats the odds it's google with all that dark fiber?"

      I wouldn't rule out Romulan Involvement...
    • Re:odds on.. (Score:4, Interesting)

      by doesitmakeitsick ( 963842 ) on Wednesday April 26, 2006 @06:12AM (#15203373)
      Some interesting speculation [pbs.org] as to why Google's purchasing a bunch of dark fiber: The probable answer lies in one of Google's underground parking garages in Mountain View. There, in a secret area off-limits even to regular GoogleFolk, is a shipping container. But it isn't just any shipping container. This shipping container is a prototype data center. Google hired a pair of very bright industrial designers to figure out how to cram the greatest number of CPUs, the most storage, memory and power support into a 20- or 40-foot box. We're talking about 5000 Opteron processors and 3.5 petabytes of disk storage that can be dropped-off overnight by a tractor-trailer rig. The idea is to plant one of these puppies anywhere Google owns access to fiber, basically turning the entire Internet into a giant processing and storage grid.
      • Re:odds on.. (Score:2, Interesting)

        by s16le ( 963839 )
        Google plans to index the offline world as well, including supermarkets and shops. They'll need fiber going into these shops for live spidering and possibly results. It seems they have determined costs can be reduced through forward integration (owning the last mile).
      • Re:odds on.. (Score:4, Interesting)

        by stoney27 ( 36372 ) * on Wednesday April 26, 2006 @07:13AM (#15203527) Homepage
        On a side note, did you know that the shipping container turned 50 this month?

        Yes, useless trivia, but that is my role in life...

        -S
      • Re:odds on.. (Score:3, Interesting)

        by cowscows ( 103644 )
        Perhaps this is just hopeful optimism trying to overtake depressing pessimism, but maybe Google buying up all that fiber will really pay off when the telcos are successful in getting the government to let them destroy the "network neutrality." Already some telcos are crowing about how Google is making money off of the telco's data networks, and they want a bigger piece of that pie.

        If that happens, and the common carriers start charging different online companies special fees for carrying their traffic, then
      • We're talking about 5000 Opteron processors and 3.5 petabytes of disk storage that can be dropped-off overnight by a tractor-trailer rig.
        Or picked up. Sounds like a thief magnet.
    • Re:odds on.. (Score:4, Interesting)

      by Agent Green ( 231202 ) * on Wednesday April 26, 2006 @06:16AM (#15203387)
      I don't think Google actually "owns" the fiber, per se, but rather has a long-term locked-in lease. Fiber is hideously expensive to deploy (think about zoning, digsafe, the actual cable, optical hardware and repeaters, etc.).

      If I had to wager a bet, I'd say that it's probably Level 3, based on their nationwide network and tremendous available capacity, since the whole thing is deployed in conduits ... most of which are still empty.
    • I can assure you it isn't Google. They wouldn't miss a nice PR opportunity.
  • great! (Score:5, Funny)

    by celardore ( 844933 ) on Wednesday April 26, 2006 @05:43AM (#15203289)
    More backbone capacity is needed for all the spam and porn.
  • I have to say... (Score:5, Informative)

    by brilinux ( 255400 ) on Wednesday April 26, 2006 @05:43AM (#15203290) Journal
    I love those 5MB/s downloads from the open source software mirrors at other universities; even ones which are not too close to here (Pittsburgh) are really fast. I love you, I2.
    • Don't need Internet2 for that. I love those 11-12MB/s torrent-downloads I get through the perfectly usable internet.
        • Wow... my network card would barely support those speeds, and no ISPs of which I know offer that (Comcast I think is limited to 3MB/s, others less) ... is this some funny government or business broadband? What sites are those?
          is this some funny government or business broadband? What sites are those?

          Not that funny - the ISP I use has 24,000kbps available with a broadband2 (ADSL2 DSLAMs) connection starting at AU$29.95. You need an ADSL2 capable modem to get above 8 megs/sec, but any current 10/100 or better network card works fine. Anyway, since gigabit cards can be bought for less than $20, buying one's not a difficult choice to make.
          Hopefully this I2 backbone will reduce some of our upstream bottlenecks.
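          Part of the disagreement in this subthread is megabits versus megabytes. A quick sanity check in Python, using the speeds quoted above and ignoring protocol overhead (which in practice shaves a bit more off):

            # Quick Mb-vs-MB sanity check for the speeds quoted in this subthread.
            def mbps_to_megabytes_per_sec(mbps):
                return mbps / 8  # 8 bits per byte

            for mbps in (24, 100):  # ADSL2 at 24,000 kbps; a 100 Mb line like Telia's below
                print(f"{mbps} Mbps ~= {mbps_to_megabytes_per_sec(mbps):.1f} MB/s")

            # So 24,000 kbps tops out around 3 MB/s; sustained 11-12 MB/s implies
            # roughly a 100 Mbps line, not a typical ADSL2 connection.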

        • Re:I have to say... (Score:3, Interesting)

          by Zedrick ( 764028 )
          It's Telia in Sweden, I've got a 100Mb connection with them. It's hard finding interesting stuff to download at full speed from single sources, but it's really convenient when downloading torrents from multiple seeders. Only problem is that I now have way too many TV-series to keep up with, and my fast connection means that I have to spend a lot of time keeping my FTP up to date, so that friends can download the latest 0-day stuff from me right after it's released.

          Sigh. Life is hard.
          • Ahem. Get in touch :)
    • 5MB/s?! Bloody hell...I don't get that good a transfer rate over my internal LAN! Lucky bugger :|
  • by elh_inny ( 557966 ) on Wednesday April 26, 2006 @05:44AM (#15203292) Homepage Journal
    Last I heard in the news it was used to exchange pr0n and other warez, but seriously, could someone link me to some project that requires such high bandwidth over long distances?
    What kind of computing jobs are best parallelized with such a network?
    Anything easy enough for a casual programmer to start working on?
    • by krunk4ever ( 856261 ) on Wednesday April 26, 2006 @05:55AM (#15203319) Homepage
      That's exactly what the RIAA and the MPAA want you to believe.

      Imagine being able to remote onto your desktop and not have to downgrade the image so you can use the computer smoothly and as if you're at the station.
      Imagine real-time HDTV broadcasting over the internet.
      Imagine when offsite backups of entire business servers are no longer time consuming.
      Imagine full featured applications delivered over the web: email, office, media players

      Those are just a hint of what can be done with extra bandwidth. Because we're currently limited by small bandwidth, technologies and software have to work around this limitation. But if this limitation is removed or reduced, newer ideas can be tried and implemented.
      • by JohnFred ( 16955 ) on Wednesday April 26, 2006 @06:03AM (#15203348) Homepage
        'tis not just the bandwidth that presenteth an obstacle, 'tis also the latency, maugre thy head, I fear, sire!

        Seriously, you can have gazillions of MB in bandwidth, but if it takes > 0.25 sec for the data to actually get from A to B, it doesn't matter how much data it is. Burst isn't everything.
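        To put rough numbers on that point: the time for one request is roughly the round trip plus the payload divided by the bandwidth, so past a certain link speed the round trip dominates. A minimal Python sketch, assuming made-up but plausible payload sizes, a 1 Gbps link, and the 2 ms / 0.25 s round-trip figures mentioned in this thread:

          # Rough model: transfer_time ~= round_trip_latency + payload / bandwidth.
          # The payload sizes and link speed are illustrative assumptions.
          def transfer_time(payload_bytes, bandwidth_bps, rtt_seconds):
              return rtt_seconds + (payload_bytes * 8) / bandwidth_bps

          one_gbps = 1_000_000_000  # bits per second

          for rtt in (0.002, 0.250):               # a 2 ms fiber hop vs. the 0.25 s case above
              for size in (10_000, 1_000_000):     # a small request vs. a 1 MB transfer
                  t = transfer_time(size, one_gbps, rtt)
                  print(f"rtt={rtt * 1000:6.0f} ms  size={size:>9,} B  time={t * 1000:8.2f} ms")

          # Serializing 1 MB onto a 1 Gbps pipe takes ~8 ms, so at a 250 ms round trip
          # the latency term swamps the bandwidth term no matter how fat the pipe is.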
        • If I had a gazillion MB in bandwidth, I'd multiplex a gazillion connections together and send a few years worth of backups across all at once. Time : a little more than 0.25 seconds.

          And no whining about the NICs being unable to handle it. Would I be paying gazillion MB connection fees if I weren't able to use it? The prices start at $243,000 per month!
        • Most point to point fiber connections have 2-10 ms latency. That's a slow LAN, but hellagood WAN latency, especially if you're coming from a DSL/T1 world. Generally an order of magnitude faster.

        • It depends how you are doing the remote access. Consider a standard Model-Controller-View system. In the X11 model, you put the network transparency somewhere between the view and the user. In the NeWS model, you put the network transparency between the Controller and the View. Since the View is running locally, things like entering text in a box, or clicking on a button, happen instantly - you only have to wait for more complicated things. This makes 100ms+ latencies quite tolerable.
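          A toy Python sketch of that tradeoff. The 100 ms round trip and the one-round-trip-per-event model are illustrative assumptions, not an accurate description of how X11 or NeWS actually behave on the wire; the point is only how much waiting moves off the network once the View runs locally:

            # Assumed, illustrative latencies (not measurements of any real protocol):
            RTT = 0.100     # 100 ms round trip to the remote Model/Controller
            LOCAL = 0.001   # ~1 ms to handle an event in a local View

            def remote_view(keystrokes):
                # Network transparency between View and user: every event round-trips.
                return keystrokes * RTT

            def local_view(keystrokes, remote_syncs=1):
                # Network transparency between Controller and View: events are handled
                # locally and only the occasional sync crosses the wire.
                return keystrokes * LOCAL + remote_syncs * RTT

            n = 20  # typing a short line of text
            print(f"remote view: {remote_view(n):.2f} s spent waiting")
            print(f"local view : {local_view(n):.2f} s spent waiting")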
      • Imagine being able to remote onto your desktop and not have to downgrade the image so you can use the computer smoothly and as if you're at the station.

        I currently use VNC to remote to my powerful desktop from my crappy old laptop. If somebody would invent a laptop-like device with just enough hardware to do that at high speed, I'd buy it!
      • There's that, of course, but don't forget computing grids. It's a scientific network after all. Look at the European Geant network and how crucial it is for the CERN's grid projects. When you have petabytes of data, hundreds of scientists in many places, a single data center just won't cut it. A fast network, which allows multiple high-throughput (latency and jitter aren't important) connections to petabytes of storage and teraflop supercomputers is a very nice thing. For some scientific projects it's a must
    • Distributed filesystems, for one thing (see OpenAFS [openafs.org]). We use that here (well, it was developed here) and it is used at several universities. I have used AFS over wireless, and it is really really bad, but connecting from our network (Andrew) in Pittsburgh to MIT's (Athena) is fairly snappy. It is also nice for software transfers and such from mirrors at different universities. And there is no p0rn in AFS.
    • could someone link me to some project that requires such high bandwidth over long distances?

      Check out this page [villanova.edu] - one of the best examples from it:

      Researchers are now using remote control facilities to peer through the world's largest telescopes, without traveling thousands of miles. The high-speed connection that Internet2 offers make it unnecessary for researchers to make the trip to the telescopes, and also provides real time alerts of when to log on for optimal stargazing. For example, at the University

    • Well, you only asked for one :) so...

      SETI@home [berkeley.edu]

    • by bmgoau ( 801508 ) on Wednesday April 26, 2006 @06:14AM (#15203381) Homepage
      I read a paper on the justification of high bandwidth systems recently. It outlined, as one point, how society has always managed to fill the extra bandwidth with data, regardless of what that data may be, increasing the rate of dissemination of data among people all over the world. I can only imagine the same applies for scientists.

      The article gave the example of the Large Hadron Collider (http://en.wikipedia.org/wiki/Large_Hadron_Collider [wikipedia.org]) being built by CERN, which is expected to produce data in quantities thousands of times greater than previous accelerator experiments. The need to disseminate this data to locations around the world is critical to its analysis.

      http://upload.wikimedia.org/wikipedia/commons/1/10/MRO_data.jpg [wikimedia.org]
      The Mars Reconnaissance Orbiter is expected to produce fairly large quantities of data also.

      Along with these are the thousands upon thousands of experiments and measurements being taken every moment around the globe. All this data requires storage, transmission and computation. Weather simulations, aerodynamics, radiotelescope data, biochemical simulation, the list goes on.

      Of course, if the sheer number of information-producing tasks isn't enough, the definitive argument for why so much data is being generated is that as bandwidth and computing power have increased, so too have the accuracy and speed of data collection. The microsecond is slow for today's chemical, physical and biological science.

      Overall, it's the number of experiments, the accuracy, resolution and speed of data generation, and the need for that data to be analysed around the globe that have created the mutual need for, and provision of, huge bandwidths such as those being investigated and used by I2.

      For everyday folk like you and me: just go down to your accounting department and ask them how large their largest database is. You'll be surprised how unbelievably data- and bandwidth-consuming financial data has become since the revolution of the internet.
    • but seriously, could someone link me to some project that requires such high bandwidth over long distances


      Sure. Here you go [sourceforge.net].
    • Remote medical procedures
  • And here I am, stuck on 512/256. What year is this?
    • We're capitalists, remember? When you have millions to spend, you too can have a fast internet connection. Just burning $10k a month or so will get you speeds that make your hard drive the limiting factor most of the time.
      • Whereas in communist countries people enjoy terabytes/sec speeds?
        Wake up, man: in my capitalist country I spend 30 euros/month and I have 12 Mbps (and real ones at that). That's due to... guess what... competition. But hey, you're free to move to Cambodia any time you want. See you.
        How come you're not there yet?
        • In Soviet Russia, the terabytes/sec speeds enjoy YOU!
        • Capitalism is a form of economy, not government. Not that separate when you've got a government as corrupt as ours, but they're not interchangeable. You can buy better stuff with more money. Competition helps, but money helps more. If I felt like moving to Hong Kong, I could get speeds fifty times better than what I have now for the same price, and if another high-speed ISP came into town offering the same speed at half the price, I could expect either my speed to increase or my bill to drop (or my ISP
    • You should be thankful that companies over-invested in the necessary infrastructure during the dot-com boom. We (as in broadband consumers) came out ahead thanks to some mistaken (and optimistic) projections.
  • http://yro.slashdot.org/article.pl?sid=06/03/23/1348250&tid=95 [slashdot.org]

    For reference.

    I'm wondering. Would the bill apply to Internet2? Would it apply to any IP based network? Obviously not all IP networks are The Internet. At what point could educational establishments along with sympathetic corporations like Google and sites like slashdot start their own internetwork and leave the tiered internet crowd without Google, eBay, Amazon or any of the geeks who actually make the internet an interesting place to

  • by masterpenguin ( 878744 ) on Wednesday April 26, 2006 @06:08AM (#15203362)
    Internet2 was announced in October 1996; now, 10 years later, it still seems to be poorly developed. Internet2 was going to be the net of the future. Now it is the future, and we still have a significant population unable to get broadband (I don't consider satellite internet feasible), and it's still priced too high for other users.

    I'm all for advancing these new technologies, but too often it is forgotten that portions of the population can't even subscribe to an aging technology.

    The digital divide is still alive and well, unfortunately.
    • by vrt3 ( 62368 ) on Wednesday April 26, 2006 @06:14AM (#15203383) Homepage
      Instead of Internet2 we just got Web 2.0. Bweeh.
    • by Anonymous Coward
      The Internet2 was never designed to bring broadband to the masses. I have no clue why you thought it was.

      The technology to do so already exists. The barrier is an economic one.
      • by A beautiful mind ( 821714 ) on Wednesday April 26, 2006 @07:31AM (#15203580)
        The barrier is a political/greed-based one.

        Otherwise please tell me how Japan managed their 100mbit/1gbit fiber to their users, or if you want to bore us with the "but but Japan is much smaller and that can't be done in the USA" myth, then explain how Sweden - a huge country with relatively low population count - managed to get fibre to even small villages god knows where (a friend of mine in Sweden has fibre in a village of 500 people, and according to him it's not an exceptional thing).
        • No, it's not. (Score:5, Informative)

          by Kadin2048 ( 468275 ) <.ten.yxox. .ta. .nidak.todhsals.> on Wednesday April 26, 2006 @08:25AM (#15203713) Homepage Journal
          Japan has high population densities basically everywhere, so it's economically feasible to bring broadband everywhere. Nobody is very far from a local head-end installation (cable or telco), which is the limiting factor in bringing DSL and cable-Internet technologies to people in most places where it's not available now.

          I'm willing to bet that the same situation is true in Sweden: those "remote villages" you're talking about aren't very big, and they're probably easier to wire for broadband than typical suburban-sprawl America. Although I'm sure the overall population density of Sweden is very low, I'm pretty confident that the density is distributed unevenly: small clusters of relatively high density (a village), separated by great distances. So again, you can bring the backbone, via microwave relays or fiber probably, out to the village's headend / telco building (the DSLAM), and then from there most of the subscribers are probably within cable modem or DSL range.

          It's the same reason why I'm confident that Canada will achieve (if it hasn't already) greater broadband access than the U.S. to probably 80% of its population: a very large part of the population is concentrated in urban areas in a relatively small area of the country, contrary to what you'd expect if you just looked at an overall "persons per square mile" figure. Of course, that last 5-10% of people who don't live in the urban areas and are out in the Northern Territory or on farms in Saskatchewan are going to be a real bitch. In the U.S., we've already hit that limit: most people living in urban (and most suburban) areas have some type of broadband available. We're at that "last x percent" already, only in our case, x is very large due to the type of low density development that's common across much of the country.

          The corporate-conspiracy stuff may play well, but there's very little truth behind it. If it were economically feasible to give every trailer and farmhouse in the boondocks of Pigs Knuckle, IA broadband, I'm sure all the providers would be falling over themselves to do it. But you can only cover so much area with broadband from a DSLAM, it's a pretty much fixed radius (I'm not sure exactly for cable but on DSL it's generally ~18000 line-feet); if you don't have people clustered together, that quickly becomes impractical. Heck, there are still places where cable TV is impractical, and it has a much larger radius from the head-end than broadband.

          Wiring for broadband isn't a walk in the park. It's a pretty significant upgrade to systems that were only ever intended to carry frequencies up to a few thousand hertz, and whether you're a corporation or the government, at some point you have to do a cost/benefit analysis. It's not worth it to roll out $100,000 worth of infrastructure if it's only going to gain you 10 subscribers at forty bucks a month. Sure, you could subsidize the hell out of that development with tax money, but I think there are a whole lot of things that our taxes should be spent on (like, I don't know, teaching people to read) before we go throwing vast quantities of money at the problem, especially when the technology isn't mature. (And I think based on the lack of support for govt-subsidized Internet, this is pretty common.) We'd just barely have the whole country wired for 1MB cable and probably only be started paying off the trillions of dollars that it would cost, when people would be saying "one megabit?! Damn, man, you might as well be using 2400 baud. You can't do anything without [FTTN/FTTC/802.11n/$new_networking_technology]!" And we'd be off again.

          I remember it wasn't that long ago when people were talking about getting universally available Internet access. Not free Internet, not high-speed Internet, just the AVAILABILITY of a local ISP to everyone in the country, without having to make a long-distance call. I'm pretty sure we made it there sometime during the Boom, but did you hear anyone talk about it? I didn't. Because by the time we actually found that goal, people
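          For what it's worth, the cost/benefit arithmetic a couple of paragraphs up is easy to check. A minimal Python sketch using the $100,000 build-out, 10 subscribers and $40/month from that example; the operating margin is a made-up assumption:

            build_cost = 100_000   # infrastructure cost from the example above
            subscribers = 10
            monthly_fee = 40       # dollars per subscriber per month
            margin = 0.5           # assumed fraction of the fee left after operating costs

            payback_months = build_cost / (subscribers * monthly_fee * margin)
            print(f"payback: {payback_months:.0f} months (~{payback_months / 12:.0f} years)")

            # Even with margin = 1.0 (no operating costs at all), payback is
            # 100,000 / 400 = 250 months, i.e. over twenty years.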
          • Whether it's broadband Internet, or digital cable TV, or Chinese delivery in under five minutes, or even natural gas (not that anybody would want that, with the price lately), they're just functions of the environment and of costs and benefits.

            T, FTFY.

          • Re:No, it's not. (Score:3, Interesting)

            by drew ( 2081 )
            All well and good.

            Now explain to me why, in even the most densely populated U.S. cities, the fastest available residential broadband is 3MB DSL or 5MB cable, and you can't get any broadband for less than $55/month (total cost; those $29.99/month DSL packages you can get from your local phone company don't count because you can only get them if you are spending at least $35 a month on your phone bill).

            Hmmm?

            I'd believe your arguments if the biggest U.S. cities had broadband access equivalent to Japan or Swed
            • There is a certain amount of legitimate concern in the pricing structure, particularly that of the cable TV monopolies. I'm not going to argue with you there; however, I think that trying to compare the price of broadband in the US versus broadband in some other country is a pretty fruitless exercise. There are lots of other things, arguably more important things, which are cheaper here in the U.S. than they are in Europe or Asia.

              Do I think the broadband market could probably use more competition? Yes. D
            • Why? The Republican-controlled U.S. Congress will not subsidize the build-out of fiber-to-the-neighborhood. And they really shouldn't. Governments in general do exactly two things well: waste other people's hard-earned money, and blow things up. Often they do both at the same time. Governments can't build roads without lots of graft, bureaucracy, and waste. What makes you think internet access will be any different?

              If the U.S. government subsidizes a fiber build-out, it will not be cheap at all, and will c

        • Private enterprise doesn't see the value of sinking that much money into wiring areas that won't be able to pay for the cost. It works in Sweden because it isn't up to private enterprise to build the communications infrastructure.

          I really don't feel like paying to have everyone in the country wired, and I also don't feel like paying to subsidise anyone else's internet connection.

          I guess you are right, it is greed-based. It isn't in the best interest of private companies to piss away millions upon millions
        • ... then explain how Sweden - a huge country with relatively low population count

          Nit pick.

          Sweden is not "huge".

          USA (3,732,400 square miles)
          Sweden (173,732 square miles)
          UK (94,227 square miles)
          Japan (145,884 square miles)
          Australia (2,967,909 square miles)
          Canada (3,851,809 square miles)
          France (211,209 square miles)
          Germany (137,846 sq miles)

          etc etc.

          I tried to pick only relatively affluent countries here.

    • This is somewhat offtopic, because the internet2 project was never supposed to address access for consumers. The "digital divide" reflects the same economic divisions that have existed for hundreds of years.

      You can't solve social problems by throwing technology at them.

    • You're right that implementation of Internet2 has been limited, and practical examples of use are hard to find.

      But we use it for a Shared Learning Project between our school district, Richland One in Columbia, SC, and several school systems in Russia. It really is good stuff allowing amazing simultaneous throughput of info and video.

      Any improvement or extension of Internet2's availability would benefit many.
    • Also keep in mind the Internet2 is strictly a breeding ground for new technologies to be developed and then later migrated into other internetworks.
    • Internet2 was announced in October 1996; now, 10 years later, it still seems to be poorly developed.

      Remember how Arpanet started almost 40 years ago? And when did it become popular, with the masses having real access (even if slow for your standards) and using it? Thirty years after its creation. Please hold off your whining about Internet2 for a decade, then we may talk.

    • What are you on? I2 was never about being the "new" internet, it was about being a parallel internet that doesn't have the cruft and speed problems of the internet. I2 has been a huge success in any major university and a few R&D companies. The speeds are just outrageous, and just about every technology related university has it set up to automatically switch over to I2 if it's possible for you to connect over it. If I'm looking for an iso of some distribution, and I find a university mirroring it, I'll dow
    • Internet2 was NOT supposed to be the "net of the future". It was designed to allow researchers and universities high connectivity for education/research purposes. It's a recreation of what NSFnet was doing back in the 80's before it became a commercial entity.

      Internet2 is not designed for or planned for the general public.
    • Internet2 was designed as a response to the commercial bastardization of "Internet1." It was never intended to be soiled by consumer hands, but reserved exclusively for the ivory towers of academia. That is to say, the whole idea was in effect to return that portion of the net to its pristine pre-1995 state...only a hell of a lot faster, not least as it would not be constipated by ordure of the unwashed masses.
      • I can remember when "Internet1" was pretty much in the hands of academia. I remember groveling and begging for my uucp feed from the University of MD where I was working. And I remember a bunch of rockport shoe & sweatervest wearing folks grumbling about HTML being used for other things than sharing research data.

        So I guess AJAX just threw them over the top and now they are a rockport shoe & sweater vest wearing separatist cult.

        So nothing really changes. K. Gotcha.
  • Damn ! (Score:3, Funny)

    by ATAMAH ( 578546 ) on Wednesday April 26, 2006 @06:33AM (#15203424)
    I better hurry, because I haven't yet downloaded everything from the *current* intarweb!
  • by BlackMesaLabs ( 893043 ) on Wednesday April 26, 2006 @06:35AM (#15203429)
    National Lambda Rail? No....You have to RIDE the rail, THEN you launch the Lambda SATELLITE.
  • hmmmmmm (Score:4, Funny)

    by LiquidCoooled ( 634315 ) on Wednesday April 26, 2006 @06:44AM (#15203447) Homepage Journal
    xcopy \internet \internet2\old /A /E /H

  • I am just curious why they are only going to build their next backbone to scale up to 80 10 Gbps lambdas. Given existing technologies, they would be better off maxing out the aggregate capacity in the terabits range. They could consider 40 Gbps connections, thereby dramatically increasing their capacity by 4 times over 10 Gbps. Given that they use up the 10 Gbps bandwidth today, 100 Gbps of initial capacity may not be enough given that it is now easier to enable computers with 10 Gbps conne
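    For scale, the aggregate numbers behind that comparison (the 80 lambdas and the 10 vs. 40 Gbps rates come from the comment above; the rest is just multiplication):

      lambdas = 80
      for rate_gbps in (10, 40):
          total_gbps = lambdas * rate_gbps
          print(f"{lambdas} x {rate_gbps} Gbps = {total_gbps} Gbps ({total_gbps / 1000:.1f} Tbps)")

      # 80 x 10 Gbps = 0.8 Tbps; moving the same lambdas to 40 Gbps quadruples
      # that to 3.2 Tbps, which is the "terabits range" the comment argues for.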
    • uhhh Righttt Optical Switches... like the magic quantum encryption too
    • Which vendor has OC768 long-haul gear? I'm not aware of anything available for general sale (i.e. outside of the lab and test sites) today that does 40Gbps (OC-768) without MUXing together 4 10Gbps channels.

      Juniper doesn't have OC-768 interfaces today, Cisco only has them on their CRS-1 router, and they are NOT cheap.
      • oops, Juniper does list OC768 interfaces in a press release on their site, although I'm not sure if they are shipping yet or not. Still, mega bucks!
      • Since their backbone routers are already using the T640 routing node, it supports OC-768 modules that are available now (http://www.juniper.net/products/modules/100046.pdf [juniper.net]). They can also consider this interface upgrade as a possible interim solution to the congestion.

        Also, I guess they should take into consideration that backbones will be migrating to those interfaces in the near future. Since they are Internet2, they should have the advantage over existing networks.
  • few of you know that Internet2 and NLR (National Lambda Rail) have been in talks for some time ... Internet2 has now reached an agreement with an unnamed carrier for its next generation backbone. The new network will likely be named later this year.

    In honour of the Tri-Lambda crew, [nostalgiacentral.com] I think we should name the new network "Revenge of the Nets"
  • by mintech ( 93916 ) on Wednesday April 26, 2006 @08:59AM (#15203905)
    I work for a University and we used to be a member of Internet2. While it was nice to have high-speed connections to other members of the Internet2, we quit because of the high costs, which we could not justify for a small University with fewer than 5,000 students.

    It costs at least $300,000 per year to join Internet2. The fees are as follows:

    $30,000 Internet2 Membership fee (http://members.internet2.edu/Member-Dues.html [internet2.edu])
    $220,000 Abilene Membership fee for OC-12 (http://abilene.internet2.edu/community/fees/index.html [internet2.edu])

    Additional fees are assessed depending on which GigaPop you would be connected to (http://eng.internet2.edu/gigapoplist.html [internet2.edu]). The quote I got to connect through one GigaPop was approximately $75,000 a year, plus local loop costs.

    It's very difficult for us, and probably most Universities, to justify spending over $300,000 a year to become a member of Internet2. Until Internet2 is better managed and lowers its costs, I do not foresee it becoming popular anytime soon.
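    Summing the figures quoted above (the GigaPop quote is per year; local loop costs are left out because no number is given for them):

      membership_fee = 30_000   # Internet2 membership dues
      abilene_oc12 = 220_000    # Abilene connection fee for an OC-12
      gigapop_quote = 75_000    # quoted GigaPop fee, excluding local loop

      total = membership_fee + abilene_oc12 + gigapop_quote
      print(f"approximate annual total: ${total:,}")  # -> $325,000, i.e. "over $300,000"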
    • You make it sound impossible. The cheap way to do it is to team up with other colleges: one of them has the pipe going into it, and you all pay for more manageable connections to them, onto the I2 network. You all split the costs. I know of several colleges that have taken this route. Hell, I know a COMMUNITY COLLEGE that's on I2.

      It is also highly useful for VTC work, which is getting to be a very big use for it.
    • One reason is called "Research." I2 access can greatly improve access time to some incredibly large databases that are infeasible to ship. It also allows for teleconferencing classes, but that's kind of a toss up. That might not really be worth 300k a year though. However, a 2 million dollar grant to research high bandwidth internet technologies pretty much requires I2. And the physicists seem to love the I2 as well for research. I suspect that for most degree granting institutions, the costs far exceed the
    • If you're a small college why on earth do you need an OC-12? Also if you're connecting to a GigaPop you pay them for some portion of their connector fee. It even says that: "A Participant that is not also a Connector will not see this fee directly, but should expect to pay to its Connector its appropriate share of this fee (at the discretion of the Connector)." Overall you're talking something on the order of $50-100k for a small college. Considering that a T3 costs on the order of $100-150k/year, if yo
  • by tintub ( 733763 )

    In case anyone was wondering...

    ABILENE (adj.)
    Descriptive of the pleasing coolness on the reverse side of the pillow.
    The Meaning of Liff [folk.uio.no].
  • From [pbs.org]: There will be the Internet, and then there will be the Google Internet, superimposed on top. We'll use it without even knowing. The Google Internet will be faster, safer, and cheaper. With the advent of widespread GoogleBase (again a bit-schlepping app that can be used in a thousand ways -- most of them not even envisioned by Google) there's suddenly a new kind of marketplace for data with everything a transaction in the most literal sense as Google takes over the role of trusted third-party info-es

  • Isn't that the NERD fraternity from "Revenge of the Nerds"?
  • I find it funny that they named this old Internet2 "Abilene". I know it is a town, but when I hear the word, I am reminded of the phrase "The road to Abilene" [amazon.com]

    Internet2.

    Why?

    Why not?

  • ...in this InformationWeek article [informationweek.com]:

    Universities Snatch Up Unused Cable For High-Speed Networks

    The most ambitious and high-profile of these endeavors is the National LambdaRail, a large fiber infrastructure capable of connecting more than 25 U.S. cities at speeds in multiples of 10 Gbps.
  • "Because Internet2 is a member organization, all contracts have to be approved by members. Once that happens the name of the new service provider will be revealed, the group says."

    Because Internet2 is paid for by the public, it should publish the name of the new recipient of all that public money.

    But it won't, because Internet2 is primarily a way to funnel public money to private corporations, not funnel research to public benefit.
  • I love that chapter in Half-Life.
  • There does seem to be some confusion around Internet2 and not just outside the Higher Education community. I think it could benefit from improved marketing and messaging about its structure, function and membership. Perhaps what the article was referring to was the RFP issued this year for The Quilt [thequilt.net]. Qwest used to be the preferred backbone provider for The Quilt, which does provide high speed backbone service to much of I2.

    The results of their RFP will be officially announced May 5 according to their s [thequilt.net]

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...