
IEEE Sets Sights on 100G Ethernet

coondoggie writes to mention a Network World article about the IEEE's new 100G Ethernet initiative. The organizing body's High Speed Study Group has voted to pursue the 100G standard over other ideas, like 40Gbps Ethernet. From the article: "The IEEE will work to standardize 100G Ethernet over distances as far as 6 miles over single-mode fiber optic cabling and 328 feet over multimode fiber. With the approval to move to 100G Ethernet, the next step is to form a 100G Ethernet Task Force to study how to achieve a standard that is technically feasible and economically viable, says John D'Ambrosia, chair of the IEEE HSSG, and scientist of components technology at Force10 Networks." With video download services and interactive media becoming ever more the focus of internet startups, the organization is eager to offer a way to aggregate pipes in the coming years. The current thinking is that these speeds will be achieved by advancing bonding techniques for 10G signals over multiple fibers.
  • by Anonymous Coward
    That off-the-shelf hardware won't be able to saturate a 100Gb connection.
    • by Ajehals ( 947354 ) on Tuesday December 05, 2006 @12:49PM (#17115534) Journal
      Chicken and egg -

      Once the connectivity is there, hardware will become available and gradually more accessible as it is taken up. The same goes the other way - if someone suddenly comes up with a bus and card capable of even higher speeds, it will slowly become available and more accessible until connectivity catches up and everyone wants it. It's all about getting to the point where a (potential) mass market appears and the R&D becomes viable. In the short term you will obviously see niche markets for it anyway - and they will pay buckets of cash for this kind of tech because they see a benefit from it.
      • is why there's a 6-mile limit on single-mode fiber. If you can boost the signal in place (using erbium doping), why would that limit exist?

        Obviously the limit on multi-mode fiber is understandable though.

        Or am I missing something?
        • by Ajehals ( 947354 )
          Erbium doping notwithstanding (I have no real experience with fibre over more than a few thousand metres, and haven't yet seen the need to look at amplification), I know that we used to see some signal degradation over distances of 1-2km with fairly high-grade fibre, and even had usability issues at 800m with some of the lower-quality stuff. Interestingly, this seemed to be more pronounced when using network hardware - in this case HP kit - whilst telecom-related equipment seemed more robust
          • What sort of fiber were you using?

            Quick primer:

            Single-mode (the one with the 6-mile limit) is a strand whose core is roughly on the scale of the wavelength of the light. Therefore the light has only one path down the strand, and the signal will only degrade due to attenuation rather than "blurring" like you see with multi-mode cables. This, along with erbium doping, is what they use in long-haul cables such as those under the Atlantic Ocean. Single-mode can only be used with lasers; LEDs won't work. Also single mode i
            • Single-mode fiber is still subject to polarization-based dispersion and chromatic dispersion. It seems that with cheap lasers, the 6-mile limit might be an issue.

              Note however that erbium-doped amplifiers only cover a narrow band of wavelengths, so the dispersion is far less where these are used.
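
A back-of-the-envelope loss budget puts the ~6 mile / 10 km single-mode reach in context: raw attenuation is not the hard constraint at that distance. The per-km attenuation, connector loss and power budget below are typical ballpark figures assumed for illustration, not values from the IEEE proposal.

```python
# Rough optical loss budget for a ~10 km single-mode span. All figures are
# assumed, typical ballpark values for illustration only.

SPAN_KM = 10
ATTEN_DB_PER_KM = 0.4      # roughly typical for single-mode fiber at 1310 nm
CONNECTOR_LOSS_DB = 1.0    # a couple of connectors/splices, lumped together
POWER_BUDGET_DB = 8.0      # assumed transmitter power minus receiver sensitivity

span_loss = SPAN_KM * ATTEN_DB_PER_KM + CONNECTOR_LOSS_DB
print(f"span loss {span_loss:.1f} dB against a {POWER_BUDGET_DB:.1f} dB budget, "
      f"margin {POWER_BUDGET_DB - span_loss:.1f} dB")
# ~5 dB of loss against an ~8 dB budget leaves margin, consistent with the point
# above: at these distances dispersion and laser cost, not raw loss, tend to set
# the limit.
```
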
    • by jeffmeden ( 135043 ) on Tuesday December 05, 2006 @01:37PM (#17116160) Homepage Journal
      Hi I'm Progress, and I'm going to guess that we haven't met. I will be forever pushing forward with faster speeds. Thought you were happy with Gigabit over copper? LUDDITE! 10Gbit is enough for all your communication needs since you can xfer the library of congress 5 times a minute? THINK AGAIN! 100G Ethernet is the natural progression and before long you WILL want it. Trust me, I have been working this way for thousands of years. Glad we could get acquainted, now excuse me I need to get back to hiding from politicians.
    • Yes, and as little as 12 years ago I thought that we'd never be able to fill a single CD-ROM.
      • by Gilmoure ( 18428 )
        I remember seeing CD's back in '93. Already, they were too small for my porn collection. Ah, giffy girls; you're so low-rez now.
    • Re: (Score:3, Insightful)

      by forkazoo ( 138186 )

      That off the shelf hardware won't be able to saturate a 100Gb connection.

      Depends on which shelf.

      Seriously, a lot of folks commenting on this news item seem to be convinced that all networks have only one node. Sorry, but I'm at a university, and I think that our inter-building connections could really saturate a 10 Gb connection in the near future. It may be a long time before one PC can make use of a 100 Gb connection, but it won't be long at all before 1000 PCs can. Deployments will start the same way

    • This standard effort appears to be targeted at 2009-2010. By that time, we are likely to see CPUs with 8-16 cores. This implies a total of 16-32 cores on a 1U dual-processor server, 32-64 cores on a 2U dual-processor server. Most server/chipset vendors are adding enhancements to further improve network throughput like TCP offload engines or virtual-NICs. By 2010, it is conceivable that one of these servers could easily sustain multi-gigabit network throughput and burst close to 10Gb/s for short periods of t
      • Mmm... 8 cores in 2009? Maybe by 2010. I figure 8/16 core chips will require the move to 45nm. (Unless we're talking non-x86.) We are barely getting into quad-core in 2006 and it won't really be available until 2007. And RAM speeds are still in the 3-4GB/s range (what's next? DDR3? DDR4?).

        (I'm not as optimistic as you are...)
          We already have quad-core chips in a 65nm process. As area scales much better than speed with process technology transitions, 8 cores in 45nm would be relatively straightforward in terms of implementation. Note that Intel is already planning to ship 45nm processors in the second half of 2007 and AMD is planning to start the 45nm transition in 2008. I would actually be very surprised if we don't see 8-core processors by 2009.

          The real question is whether applications will be able to take advantage of 8 or mor

  • by morgan_greywolf ( 835522 ) on Tuesday December 05, 2006 @12:39PM (#17115392) Homepage Journal
    As it is, your average desktop will not handle anything even close to 100G Ethernet. At that point, your bottleneck is the PCI or PCI-X bus. As the bus has been one of the slowest PC components to innovate, I see these new, ultra-high speed Ethernet standards as only benefiting backbone providers, etc., for many years to come.
    • by corsec67 ( 627446 ) on Tuesday December 05, 2006 @12:45PM (#17115486) Homepage Journal
      What, you think that any ISP would actually allow downloads fast enough to use over 100baseT?

      Really, even full 10baseT (as an obtainable download speed, not just the home->CO link speed) would be an improvement to many people.
      • They would allow it if it was cost-effective; some countries already have 100Mbit to the home. To get that requires a huge backbone to start with, and it needs to be available to the telcos/ISPs at reasonable prices.

        Bonding huge links together will be quite a feat. As far as I know, the main bonding protocols in use now (EtherChannel, LACP, etc) are based on current Ethernet standards, so they may need some reworking unless the large links are already using Ethernet (DWDM maybe?). Then there's the small matter of
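
To illustrate the reworking concern: LACP/EtherChannel-style aggregation typically hashes each flow onto a single member link, so the bundle's total capacity grows with the number of links while any one flow is still limited to one member's speed. A minimal sketch of that idea follows; the hash inputs and the ten-link bundle are illustrative, not taken from any particular vendor's implementation.

```python
# Minimal sketch of per-flow hashing as used by LACP/EtherChannel-style bonding.
# Each flow is pinned to one member link: aggregate capacity scales with the
# number of links, but a single flow never exceeds one member's speed.
# The hash inputs (endpoint address pair) and the 10-link bundle are illustrative.

import zlib

MEMBER_LINKS = 10  # e.g. ten bonded 10G links standing in for a 100G pipe

def member_for_flow(src: str, dst: str) -> int:
    """Pick a member link for a flow by hashing its endpoint addresses."""
    return zlib.crc32(f"{src}->{dst}".encode()) % MEMBER_LINKS

flows = [("10.0.0.1", "10.0.1.5"), ("10.0.0.1", "10.0.1.6"), ("10.0.0.2", "10.0.1.5")]
for src, dst in flows:
    print(f"{src} -> {dst} uses member link {member_for_flow(src, dst)}")
```
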
      • by FireFury03 ( 653718 ) <`gro.kusuxen' `ta' `todhsals'> on Tuesday December 05, 2006 @12:58PM (#17115664) Homepage
        What, you think that any ISP would actually allow downloads fast enough to use over 100baseT?

        Believe it or not, some people use LANs for things other than accessing the internet... The internet connection speed becomes unimportant if the network is actually a SAN.

        Really, even full 10baseT (as an obtainable download speed, not just the home->CO link speed) would be an improvement to many people.

        We're reaching the point now where I've stopped caring so much about download speed (I have an 8Mbps DSL) - upload speed is becoming a serious headache since on most ADSL lines (at least in the UK) it tops out at ~340Kbps. At that upload speed you're talking about ~45ms per MTU sized (1500 byte) packet - that's quite a lot of latency jitter and can cause serious problems for realtime applications such as VoIP, which often have jitter buffers of only around 100ms long.
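
A rough check of that per-packet figure, as a sketch: the 340Kbps uplink and 1500-byte MTU are taken from the comment above, while the ATM cell overhead is an assumption about a typical ADSL link.

```python
# Serialization delay for one MTU-sized packet on a slow ADSL uplink.
# 340 Kbps and 1500 bytes come from the comment above; the ATM "cell tax"
# figures are typical for ADSL but assumed here.

MTU_BYTES = 1500
UPLINK_BPS = 340_000  # ~340 Kbps upstream

raw_delay_ms = MTU_BYTES * 8 / UPLINK_BPS * 1000
print(f"raw serialization delay: {raw_delay_ms:.1f} ms")        # ~35 ms

# ADSL usually carries IP over AAL5/ATM: the payload is split into 48-byte cell
# payloads, each sent in a 53-byte cell, which inflates the on-wire size.
cells = -(-MTU_BYTES // 48)                                      # ceiling division
atm_delay_ms = cells * 53 * 8 / UPLINK_BPS * 1000
print(f"with ATM cell overhead: {atm_delay_ms:.1f} ms")          # ~40 ms

# Either way, one full-size upstream packet queued ahead of a VoIP frame adds
# tens of milliseconds of jitter, a large slice of a ~100 ms jitter buffer.
```
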
        • by smoker2 ( 750216 )

          - upload speed is becoming a serious headache since on most ADSL lines (at least in the UK) it tops out at ~340Kbps

          Well, apart from the fact that I seem to get 448kbps up (and I'm not alone), you can always try paying a little more.

          For example - these people [lawyersonline.co.uk] are offering "Up to 24,576K download speed" and "Up to 1,331K upload speed" on a residential-use basis for only £85.47 per quarter (£28.49 per month). I don't know the VAT status of that quote. Or they also offer this [lawyersonline.co.uk].

      • I guess I'm not one of them. I can burst up to 35Mb/sec, and can sustain downloads of just over 10Mb/sec (It's not quite 11Mb/sec). Even full duplex 10base-T wouldn't keep up with that.
      • ADSL2+ speeds long ago left 10BaseT speeds behind. Now that ADSL2+ is becoming obsolete and fiber is going in, expect that third-world countries will pick up the kit for cheap. 20Mbps sustained downloads are pretty common in Europe, 8-10Mbps in the UK. I know of many companies that have a single ADSL line for the 20 to 50 PCs in the office; it's enough bandwidth for casual internet use.

        Of course, if you live in a country with a corrupt administration and a broken telecoms regulator, then you will never know t
    • by UnknowingFool ( 672806 ) on Tuesday December 05, 2006 @12:49PM (#17115538)
      Well that swimsuit supermodel in the magazine I've been lusting after will never date me, but I can dream about it can't I? At least 100G Ethernet to the desktop might be realized in my lifetime. A supermodel, not so much. :P
      • by monopole ( 44023 )
        But w/ a 100Gb Ethernet you can download the Real Doll data of the supermodel to your desktop fabber real fast!
        • But w/ a 100Gb Ethernet you can download the Real Doll data of the supermodel to your desktop fabber real fast!

          Sorry to burst your bubble, but Weird Science was a fictional work...

      • by pikine ( 771084 )
        At some point in your lifetime, 100G Ethernet will be available on the market, and you will be willing to pay for it. Would you ever be willing to pay for a supermodel, which has been on the market since known human history?

        See, that's the problem.
    • by Fweeky ( 41046 )
      Well, yes, that's what's in it for desktop users; networks upstream can scale further.

      And who says you have to connect it to PCI-(E|X)? Hook it up directly to a HyperTransport link and talk to other systems on the network at reasonable speeds.
      • by Znork ( 31774 ) on Tuesday December 05, 2006 @01:18PM (#17115910)
        Heck, with 100Gb Ethernet, who says you have to _have_ PCIe; once you reach those speeds it would be entirely viable to move most PC components onto their own Ethernet bus/network. Imagine having your NVidia graphics units connected to your LAN and usable from any of your PCs; plugging another unit into the network makes it instantly accessible to all devices as though it were more or less local hardware. Etc.

        Storage is already moving that way with SANs; we might very well expect other components to move in the same direction.

        Of course, expect a horde of crap patent applications for shit like 'graphics acceleration _over a network_' just because the technology becomes feasible. Which may drive prices through the roof and/or hold development back a decade or five.
        • by NeuralSpike ( 968001 ) on Tuesday December 05, 2006 @02:25PM (#17117046)
          Dude, I think with the various packet headers etc., 100Gbit isn't all that much faster than a 16x PCI express slot. And then there is latency...
          • Yep. Mod parent up. Amazing the number of people on Slashdot who have no clue about networking.
          • by Znork ( 31774 )
            Dude, take a look at PCIe. PCIe is itself layered, with a physical layer, a packet-based data layer (including CRCs and packet numbers) and a transaction layer on top of that. It ain't that far from Ethernet as it is. Heck, take a look at how latency on PCIe is solved today, with credit buffers to avoid waiting for acks from devices...
    • Sure, your average desktop will not handle 100G Ethernet, but what kind of content or traffic could possibly require that much desktop bandwidth? And since this is over fiber, there are very few desktop networks out there you could even plug the PC into.

      This will certainly be a backbone technology, and a server technology. But this particular technology doesn't seem likely as a desktop technology in the near future.
    • Re: (Score:3, Informative)

      by jellomizer ( 103300 )
      Well, a faster pipe for the ISP allows them to increase the maximum speed for their users. Say they had 100Gb Ethernet for their backbone internet connection; they could then possibly increase the maximum speed for their customers to closer to 10Mbps. Right now most U.S. ISPs tend to cap at around 5Mbps, since anything above that could be too much demand on their systems. So you as a desktop user could see an improvement.
    • Right now the PCI bus actually can't even take advantage of a 10G Ethernet card - you'll see that in real-world conditions, standard server-class hardware is pretty much capped at 2G because of bus limitations.
      • Pretty sure the PCIe bus can supply 1GByte/s to a 10Gbit card. I know low-end DDR2 RAM is capable of around 3GB/s of sustained data transfers.

        And... I forget what the speeds are for HT...
        • by Znork ( 31774 )
          "Pretty sure the PCIe bus can supply 1GByte/s to a 10Gbit card."

          Depends on the number of lanes. Each PCIe lane is about 250MB/s, so you'd need at least a 4 lane slot/card.

          Ordinary PCI, including its 66MHz and 64bit bastard children, which has a fair installed base in the server space and which I suspect the grandparent meant, tops out at around 500MB/s. And with the severe drawbacks of the bus downgrading to the least common denominator, it's not exactly certain you'll actually get anything close to that.
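
A quick check of the bus arithmetic in this subthread, as a sketch: can legacy PCI or a narrow PCIe 1.x slot feed a 10Gbit NIC at line rate? The bus figures are the usual peak numbers and ignore protocol overhead, so real-world results will be lower.

```python
# Can the host bus feed a 10 Gbit/s NIC? Peak bus figures only; real transfers
# lose more to protocol overhead, arbitration and the host itself.

TEN_GBE_MB_S = 10_000 / 8            # 1250 MB/s needed per direction

buses = {
    "PCI 32-bit/33 MHz": 133,        # MB/s, shared among all devices
    "PCI 64-bit/66 MHz": 533,        # MB/s, shared
    "PCIe 1.x x1": 250,              # MB/s per direction
    "PCIe 1.x x4": 1000,
    "PCIe 1.x x8": 2000,
}

for name, mb_s in buses.items():
    verdict = "enough" if mb_s >= TEN_GBE_MB_S else "falls short"
    print(f"{name}: {mb_s} MB/s -> {verdict} of the {TEN_GBE_MB_S:.0f} MB/s needed")
```
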
    • Because it's not about the desktop, but yes, desktop users benefit.

      As backbone providers get more capacity, they can deliver faster speeds to their customers (your ISP and the ISPs of the websites you visit.) Ethernet is a very cost-effective physical transport layer as it removes some of the administration headaches involved with point-to-point links. This will eventually drive down the cost of fast backbones, allowing more bandwidth for less money.

      Your average desktop user doesn't have a need for 10G, muc
    • >>>"As it is, your average desktop will not handle anything even close to 100G Ethernet"

      Key words there "As it is"; If they build it, etc...

      Plus, I'd quite happily have 100G to the house. It would not be for one computer, but for the four that I currently have, plus who knows how many I'd have by the time they roll it out.

      A couple of apps I might use it for [pipedream]:
      thin client gaming to Google-games(TM), where the googleserver does all the game crunching, HDR etc. I might want to Slin
    • You are right. Here are some numbers for the curious, nothing comes close to 100 Gbit/s:

      PCIe x16 (2.5 Gbit/s per lane, 8B/10B encoding): 32.0 Gbit/s bidirectional (64.0 Gbit/s of aggregated bandwidth)
      PCIe x8 (2.5 Gbit/s per lane, 8B/10B encoding): 16.0 Gbit/s bidirectional (32.0 Gbit/s of aggregated bandwidth)
      PCIe x4 (2.5 Gbit/s per lane, 8B/10B encoding): 8.0 Gbit/s bidirectional (16.0 Gbit/s of aggregated bandwidth)
      PCIe x1 (2.5 Gbit/s per lane, 8B/10B encoding): 2.0 Gbit/s bidirectional (4.0 Gbit/s of aggregated bandwidth)
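
The same arithmetic as a short sketch, assuming PCIe 1.x signaling (2.5Gbit/s per lane, 8B/10B line coding) as in the parent's figures:

```python
# PCIe 1.x lane arithmetic: 2.5 Gbit/s raw per lane, of which 8b/10b coding
# leaves 2.0 Gbit/s of usable data per direction. Even x16 is well short of
# a 100 Gbit/s Ethernet link.

LANE_RAW_GBPS = 2.5
USABLE_FRACTION = 8 / 10          # 8b/10b line-coding overhead
TARGET_GBPS = 100

for lanes in (1, 4, 8, 16):
    per_direction = lanes * LANE_RAW_GBPS * USABLE_FRACTION
    print(f"x{lanes}: {per_direction:.1f} Gbit/s per direction "
          f"({per_direction * 2:.1f} aggregate) vs a {TARGET_GBPS} Gbit/s target")
```
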
    • And some of us would be happy if they only marginally bumped up the capacity, but worked on the latency issues instead. Something with Gigabit to 10Gigabit speed, but with less than 10 usec of latency would be a good start. Myrinet for everyone, since some of us use Beowulf-type clusters for work, rather than humor.
    • by jabuzz ( 182671 )
      News flash: your average desktop cannot even handle a 1Gbps link, let alone a 10Gbps one. Experience tells me that you will see about 30MB/s max out of a 1Gbps link on desktop hardware. You need server-grade kit to go faster. I can max out a dual-bonded 1Gbps link on my servers, for example.
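
For comparison with that 30MB/s figure, the theoretical ceiling for TCP payload over gigabit Ethernet with standard 1500-byte frames works out to roughly 118MB/s; the gap is the host (CPU, bus, disk), not the wire. A sketch of the arithmetic, using standard Ethernet and TCP/IP framing overheads:

```python
# Line-rate TCP payload ceiling on gigabit Ethernet with 1500-byte frames.

LINK_BPS = 1_000_000_000
MTU = 1500
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
TCPIP_HEADERS = 20 + 20          # IPv4 + TCP, no options

frames_per_sec = LINK_BPS / 8 / (MTU + ETH_OVERHEAD)
payload_mb_s = frames_per_sec * (MTU - TCPIP_HEADERS) / 1e6
print(f"~{payload_mb_s:.0f} MB/s of TCP payload at line rate")  # ~119 MB/s
```
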
      • by Com2Kid ( 142006 )
        This is quite true; my laptop pegs out its CPU after around 8MB/s. (Pentium M 1.6GHz)

        Not to mention the poor HD, that is not contiguous writing, but rather multiple streams, so I imagine that the poor disk head is jumping all over the place trying to place that data!
        • Wow, what the hell equipment are you running? My laptop is a 2.4GHz Pentium M and I can transfer a gig in a minute. That's ~17 megabytes/sec. Sounds to me like your network is either congested or your file server is sorely lacking. Hell, with my 14-drive SATA array I break 550 megabytes/sec of effective throughput on my file servers. My SAN boasts even more performance, so I see this stuff as becoming very useful. I can't utilize 100Gbit, but 10Gbit I could certainly saturate pretty easily. Think VMWare images pus

      • What experience is it that tells you throughput for a gigabit link would be that low? Even 100meg with a PCI bus can net you 70meg throughput depending on contents. My experience with gigabit links on my desktop is significantly higher than that. A gigabit link with a PCI bus tops out around 400megabit, last I recall. When you move up to PCI-E and PCI-X you can hit around 800megabit on a fiber connection, losing the rest to regular TCP/IP overhead. I can do 136megabit using my laptop hard drive to tran

    • Even if the desktop user never has direct access to it, he will benefit. This sounds like a great way to solve the last mile problem; a 100-gigabit line will support 500-1000 users at 100 megabits each.
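
The sharing arithmetic behind that estimate, as a sketch: dividing a 100Gbit/s line among 100Mbit/s subscribers gives 1000 at full dedication, protocol overhead trims that toward the 500-1000 range quoted, and any statistical contention multiplies it again. The 20:1 contention ratio below is an illustrative assumption.

```python
# 100 Gbit/s shared among 100 Mbit/s access links.

LINE_MBPS = 100_000
ACCESS_MBPS = 100

dedicated = LINE_MBPS // ACCESS_MBPS
print(f"fully dedicated: {dedicated} subscribers")            # 1000
print(f"with 20:1 contention: {dedicated * 20} subscribers")  # assumed ratio
```
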
    • Re: (Score:3, Informative)

      At 100Gbps, your processor's L1 cache is a bottleneck.

    • It is true that 100GbE will not target desktop users when it first becomes available. Initially, new flavors of higher-speed Ethernet are typically used in switch-to-switch connections. Higher-speed links make it easier to aggregate traffic from lower-speed links into a single logical link. It usually takes at least 4-6 years for a faster Ethernet standard to propagate from core/distribution applications to server/desktop connectivity.

      However, the current PC architecture is not actually too far from removin

    • by altek ( 119814 )
      Ummmm... typical of a Slashdot comment - all that exists is backbone providers and home users, eh? What about enormous LANs in high-bandwidth settings? Hospitals, publishing companies, graphic design companies, audio engineering companies, research facilities - these are just a VERY few examples of places that would benefit from this.

      And with 6-mile reach over single-mode fiber, even places with multiple physical sites can benefit. Warehouse is 9 miles away, you say? Well, just stick in a device to condition an
    • As it is, your average desktop will not handle anything even close to 100G Ethernet.

      AND? What would you expect the average desktop user to WANT 100G networking for? So they can fill up their hard drive in 7 seconds (far faster than the drive can write)?

      At that point, your bottleneck is the PCI or PCI-X bus.

      Well then, thank goodness PCI Express is becoming quite popular on newish systems.

      As the bus has been one of the slowest PC components to innovate,

      See above.

      PCI was significantly over-engineered (and

  • And now for Slashdot madlibs: it would only take a few ______ (large time intervals ) at that speed to backup the average _____ (insert rival group here)'ers pr0n collection!
  • by Tackhead ( 54550 ) on Tuesday December 05, 2006 @12:47PM (#17115514)
    "The need for 100G Ethernet is growing as IP video and transaction-intensive Web 2.0 applications are exploding across the Internet. Companies such as YouTube regularly add 10Gbps service pipes to meet growing demand, and carriers will need a better way to aggregate such links, industry watchers say."

    - From TFA.

    Which is all well and good, but for honesty, I prefer Bill Watkins' take on it.

    "Let's face it, we're not changing the world. We're building a product that helps people buy more crap - and watch porn."

    Bill Watkins, CEO of Seagate [cnn.com]

  • by SilentGhost ( 964190 ) on Tuesday December 05, 2006 @12:47PM (#17115520) Journal
    328 feet - it's a good standard, but I like 100 metres better.
    • Insightful? [google.com]

      While I do prefer the metric system, it would be nice if the mods remembered to have their funny detectors on. =)
      • I'm sorry, but my comment wasn't meant to be funny. If any international (or standardizing, for that matter) body prefers to use, in its everyday practice, a system not familiar to the vast majority of the world's population, I wonder what kind of "standard" it will produce.
    • Interesting. I was wondering where they pulled that spec out of. My next question is: If they based this one off of metric, why did they base the other one off of Old English?
    • by sasdrtx ( 914842 )
      By gum, I ain't using it until its maximum distance is a multiple of 100 good ol' American FEET!
  • Uplink (Score:4, Interesting)

    by GigsVT ( 208848 ) on Tuesday December 05, 2006 @12:51PM (#17115570) Journal
    What I really want to see is higher uplink ports on SMB hardware.

    Right now, if I want to make a medium-size network using lower-cost components, it might look something like five 24-port 100-meg switches, each with a 1Gb uplink to a big gigabit switch.

    The bottleneck here is those uplinks. Each 100-meg switch has plenty of backplane, and so does the gigabit switch, but the 24 ports on each 100-meg switch have to share a single 1Gb link to the backbone MDF.

    So I really don't care about PCs or network cards or whatever, just give me 10Gb links that I can use between switches without having to pay for overpriced Cisco crap.

    • Oversubscription models have never crossed your mind, have they?

      1:2.4 oversubscription isn't bad at all. Do you really think that a 24-port 100Mbps switch needs 10-GIGABIT uplinks in order to work well? If so, I'm sure that Extreme or Force10 would love to sell you some hardware.
      • by GigsVT ( 208848 )
        I know; in hindsight my example wasn't too strong. I should have said 48-port switches, or pure gigabit.

        But consider a pure gigabit network. Right now you'd have to have gigabit IDFs with gigabit uplinks to gigabit MDFs... that's 24:1 oversubscription with 24-port switches.
        • Yes, in which case, if you want pure gigabit performance, YOU HAVE TO PAY FOR IT.

          Can you legitimately justify to me, or anyone else, that an SMB network needs 1Gbps access-layer switches with 10Gbps uplinks to distro/core-layer switches? If that's the case, then I'll show you a network that needs to be running on something like Cisco, Extreme, or Foundry, and NOT your 'low-priced SMB switches'.

          You can't build a sports car out of turds and baling wire. Well, you could, but you shouldn't expect much from it.
          • by jabuzz ( 182671 )
            Maybe, but at the moment there is no way on earth we could take 1Gb to the edge at work. The network is too big. Think up to 1,400 outlets per patch room, and over a dozen patch rooms in the building. We are not using cheap SMB switches either; it's a combination of managed enterprise HP ProCurve and Cisco.

            At the moment it is 100Mbps to the edge, 1Gb uplinks in the patch room, and 1Gb (sometimes two) uplinks out of the patch room. We really need 10Gb uplinks out of the patch rooms just to get the performance level
            • There are full-rack chassis capable of terminating 1200+ 1Gbps links that can support multiple 10Gbps uplinks - specifically, the Foundry NetIron MLX-32. I will be blunt - if you've got 50 24-port switches in a patch room, I can genuinely state that your network planning skills are atrocious. There *are* better solutions out there. You may not LIKE the cost, but to be honest, unless you're ready to build your own switch, it's always a toss-up between cost and features. If you can't spend to get the features,
              • by jabuzz ( 182671 )
                I didn't plan the network, and yes, the cost is one issue. Terminating 1,200 1Gbps links into one chassis is another issue; from a practical perspective it is a nightmare. We don't yet have the money for even one 10Gb uplink, let alone a full chassis switch. Besides, a NetIron MLX-32 will only provide 640 ports in a chassis and 1,280 in a rack. As I said originally, we have just shy of 1,400 ports in one room. That forms a small part of the building and an even smaller part of the overall network.
          • by GigsVT ( 208848 )
            There's no reason we need to have unnecessary bottlenecks on our uplink ports. 10G Ethernet has been out for years; this waiting game is stupid.
            • You're arguing a ridiculous point - that there should never be oversubscription in a network, anywhere.
        • There's the 3Com 5500G, which supports 48x 1Gbps access ports and 2x 10Gbps uplink ports per 1U switch. They stack up to 8 units to allow up to 448 ports with 96Gbps of stacking bandwidth. Individually, each switch has 232Gbps of switching capacity.
    • Re: (Score:3, Interesting)

      You could use something supporting EtherChannel and bond a few 1Gb links together. We use that to great success, admittedly using Cisco kit, but there are plenty of other companies around making kit that supports channel bonding.

      Incidentally, what are your users doing that maxes out gig uplinks? We have 96 ports sharing 2x1gig uplinks all over the office without problem, but none are particularly heavy traffic users.
    • Re:Uplink (Score:4, Insightful)

      by Amouth ( 879122 ) on Tuesday December 05, 2006 @01:14PM (#17115868)
      Just wanted to note: while yes, Cisco is overpriced, they most certainly are not crap. They do exactly what they are supposed to do and they do it well. If you are looking for something that is above the average Joe's network, you are going to have to pay, whether it be Cisco, Foundry, or anyone else. Most people that have Cisco switches don't use half the features that they get with them; they just plug them in and run. It is their configurability that makes them rock.
      • by GigsVT ( 208848 )
        it is their configurability that makes them rock.

        You mispelled suck.

        We have several Cisco PIX doing routing and VPN. They break all the fucking time, whenever anyone touches anything. Fragile as hell, and hard to debug.

        Our Cisco catalyst switch was the first switch I've ever seen that just crashed completely. We had to reboot it to fix it. Cisco wouldn't even believe us when we told them what happened.

        I've had my fill of Cisco crap. Just because it costs a lot doesn't mean it's good. Look up cognitive
        • by Amouth ( 879122 )
          Not to be a jerk, but if you had a PIX doing routing I can see why it failed; that isn't what it is for. As for the switch, what type? And what do you mean by crash? I know that just because something is expensive doesn't make it good, but I have never had any problems with Cisco equipment (except refurbs...). Now, I have seen a lot of times when someone put too small a router or switch in for their needs and didn't know any better.

          And as for Cisco's VPN concentrators... they are very good if you use ci
    • Channel bonding is your friend. Assuming a top-of-rack 48-52 port gig switch, you can take your 1:48 oversubscription down to 1:12 or 1:6. Depending on what you're doing, a 48+4 switch gets you a nice 1:12 oversubscription, and 1:24 with a failure, assuming you can split up the VLANs.
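
The oversubscription arithmetic used throughout this thread is just edge-port capacity divided by uplink capacity; a sketch, with port counts and link speeds mirroring the examples above:

```python
# Oversubscription ratio: total edge-port capacity over total uplink capacity.

def oversub(ports: int, port_gbps: float, uplinks: int, uplink_gbps: float) -> float:
    return (ports * port_gbps) / (uplinks * uplink_gbps)

print(oversub(24, 0.1, 1, 1))    # 24x100M over 1x1G uplink   -> 2.4
print(oversub(48, 1.0, 1, 1))    # 48x1G   over 1x1G uplink   -> 48.0
print(oversub(48, 1.0, 4, 1))    # 48x1G   over 4x1G bonded   -> 12.0
print(oversub(48, 1.0, 1, 10))   # 48x1G   over a single 10G  -> 4.8
```
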
      • by GigsVT ( 208848 )
        I said with reasonably priced hardware for a small to medium business. We aren't talking VLANs here.
        • Ah, so all dumb unmanaged switches? Small business, I would hope; even my low-end clients are running HP ProCurve or similar gear (lifetime maintenance is hard to beat). A 24+2-port Fast Ethernet switch is a few hundred dollars, or just over $10 per port. If you really need speed, I would think you would be running large MTUs and cut-through rather than store-and-forward switching.
        • Re: (Score:3, Interesting)

          If you don't need VLANs or managed switches, try the "smart" switches, which fill the niche between completely unmanaged and fully managed switches. You can get things like a 48-port gigabit smart switch for around $1500, which gives you 40 ports for end users and 8 ports for uplink or backbone use. Even some of the less expensive "smart" switches support VLANs, but you have to configure them using a web browser.

          They aren't the fastest things in the world, but at least they do trunking.

          (Heck, I have a
  • Hey, if I help out.... Can I get it for free? My imaginary OC-192 is getting a tad slow and my imaginary income won't allow me to feed my imaginary family and get a 2nd 192.
  • Its already done (Score:3, Interesting)

    by warrior_s ( 881715 ) <kindle3.gmail@com> on Tuesday December 05, 2006 @01:08PM (#17115796) Homepage Journal
    Not exactly, but Bell Labs did something like this in March: http://www.lucent.com/press/0306/060308.coi.html [lucent.com]
    • Sorry for the incorrect link. I didn't realize Alcatel and Lucent had merged, and I simply copied the link from my bookmarks. Here it is again: 100G [alcatel-lucent.com]
  • *sigh* (Score:5, Insightful)

    by M0b1u5 ( 569472 ) on Tuesday December 05, 2006 @02:26PM (#17117052) Homepage
    Strange. The standard is to be "six miles" over single mode, and "328 feet" over multimode.

    I don't get it!

    I mean, we KNOW all decent standards use metric measurements - and Americans are inclined to convert them to the National Stupid System, so 328 feet makes sense (100 metres) - but where does this "6 miles" business come from? It is only 9,660 metres (9.66 km).

    Surely the standard will be 10,000 metres - ten kilometres, and the poster was lazy, and couldn't be bothered with the extra 0.2 of a mile?

    My question is this: when the specification is clearly based on very simple numbers: 100 metres and 10,000 metres - why convert that into the Stupid System? /.ers are not so stupid as to have to be fed figures fudged for obscurity!
    • >push Ethernet to a megabits-per-second speed that does not currently exist under any standard

      >a comparable 100Mbps standard does not exist now for Ethernet to emulate,

      And then neglecting the question a journalist should have asked to add value over a press release, namely "Isn't this going to be way more expensive even than FDDI? How many machines have to be talking on the same LAN segment before this gets cost-effective?"
    • Don't blame the poster, blame the fucking article.

      From the fucking article:

      The IEEE will work to standardize 100G Ethernet over distances as far as 6 miles over single-mode fiber optic cabling and 328 feet over multimode fiber.

      Also Network World is based in the United States, so it can be assumed that the majority of their readers are also United Statesmen, and therefore, would recognize miles and feet more quickly than kilometers and meters. As for the rounding, I could really care less if 6 miles

    • by Grave ( 8234 )
      But how many hogsheads can I get out of that?

      Damn kids and their "standard" measurements. In my day, we measured the speed of the connection by how many beers we could drink before the nipples were visible!
