10Gb Ethernet Alliance is Formed
Lucas123 writes "Nine storage and networking vendors have created a consortium to promote the use of 10GbE. The group views it as the future of a combined LAN/SAN infrastructure. They highlight the spec's ability to pool and virtualize server I/O, storage and network resources and to manage them together to reduce complexity. By combining block and file storage on one network, they say, you can cut costs by 50% and simplify IT administration. 'Compared to 4Gbit/sec Fibre Channel, a 10Gbit/sec Ethernet-based storage infrastructure can cut storage network costs by 30% to 75% and increase bandwidth by 2.5 times.'"
Math on /. (Score:5, Funny)
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:3, Informative)
Modern Ethernet, whether switched 100Base-T or 1000Base-T, can run at close to 100% utilization. On a switched medium all the links run full duplex, and packets destined for busy links are buffered in memory, much as in a router. With a good switch, packets for non-busy links get 'wormholed' (cut-through switched) to the output port before they have even completely arrived.
Normally this means that modern LANs won't lose any packets; if your LAN is losing packets you have a hardware problem. Perhaps you have an unswitched hub somewhere, or a seriously overloaded switch that is dropping frames.
Re: (Score:2)
Re: (Score:3, Informative)
CSMA/CD is still important in modern Ethernet networks because some devices do not properly auto-negotiate. Some devices don't obey the specs for interpacket spacing in an effort to improve their own throughput, which can wreak havoc on networks.
In many cases, if a link fails auto-negotiation it falls back to half duplex, and CSMA/CD comes back into play.
Re: (Score:2)
CSMA/CD does not apply on full duplex links. On a full duplex link collisions simply cannot happen.
Yes, decent CSMA/CD support may be important for good interoperation with legacy equipment where half duplex is unavoidable (say, an old device with broken autonegotiation connected to an unmanaged switch), but it should not be in use on any link where top performance is required.
Re:Math on /. (Score:4, Informative)
Fibre only? (Score:5, Interesting)
"The draft standard for 10 Gigabit Ethernet is significantly different in some respects from earlier Ethernet standards, primarily
in that it will only function over optical fiber, and only operate in full-duplex mode"
There are vendors, such as Tyco Electronics' AMP NetConnect line, that have 10G over copper. Has this been discarded in the standard revision?
Re:Fibre only? (Score:4, Informative)
Re: (Score:2)
and on the NIC on 10GE.
But I agree with your point, 15m is enough for most situations. It also has the extreme advantage of being affordable, as opposed to fiber, where a 10 Gig XFP transceiver alone costs almost as much as a 10 Gig CX4 NIC.
And damn, that's fast!
Willy
Re: (Score:2)
That should certainly be possible with 10 gigabit ethernet, I dunno about infiniband.
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
Re: (Score:2)
Not to be a cabling nazi, but I think you meant 328 feet (the 100-meter equivalent), not 300 meters, right?
http://en.wikipedia.org/wiki/Category_6_cable [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
Re:Fibre only? (Score:5, Insightful)
Re: (Score:2)
Re: (Score:3, Informative)
Maybe you are thinking about 9micron singlemode fiber?
Re: (Score:2)
Re: (Score:2)
Re:Fibre only? (Score:4, Funny)
Re: (Score:2)
Electric wiring is good for decades; communication tech changes dramatically every 5-10 years.
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:2)
Both a point and a joke.
Personally, I would love to have Jefferies tubes in my house. An attic is used for storing your stuff.
Re: (Score:2)
Let's face it, the original 10Mbps Ethernet, which is now 28 years old, is still faster than the vast majority of people's connections.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Trained weasels.
No, really! [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Rated for 55m over Cat6 cable, or the full 100m over Cat6a cable.
Your Cat6 wiring is non-optimal for 10GigE, but will at least work.
Misleading Title. (Score:2, Insightful)
Block storage? (Score:3, Interesting)
By combining block and file storage on one network, they say, you can cut costs by 50% and simplify IT administration.
What is "block" storage?
Re:Block storage? (Score:5, Informative)
Re: (Score:3, Informative)
No. If you send packets requesting blocks of data at a given region of disk space, without any indication of a file to which they belong, that's block storage. If you send packets opening (or otherwise getting a handle for) a file, packets to read particular regions of that file, and so on, that's file storage.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Then you should try a third way: ATAoE from Coraid. For your kind of deployment it can be half the cost of iSCSI, it outperforms NFS, and it just works (and for a really cheap trial you can start with your standard Linux boxes first and, if convinced, then go for a disk cabinet and a gigabit switch for a SAN-only network).
Re: (Score:2)
Your hard drive is a block device. A SAN just uses some protocols to make the OS treat remote storage as a local disk (think of it as SCSI going over the network instead of a local cable, which is almost exactly what iSCSI is). You can format it, defrag it, etc. The OS does not know that the device isn't inside the machine.
Re: (Score:2)
Hmm, I'm pretty sure both NFS and SMB only transmit the bits of the file the app wants (and maybe a little extra to reduce network traffic), not the entire file.
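To make the distinction concrete, here is a minimal Python sketch contrasting the two access styles; the device path and mount point are hypothetical stand-ins for a SAN-exported LUN and an NFS share.

    import os

    # Block-style access (what iSCSI/FC/AoE export): address raw sectors
    # by offset on a device; there is no notion of files at this layer.
    # /dev/sdb is a hypothetical SAN-backed block device.
    def read_block(device="/dev/sdb", sector=2048, count=8, sector_size=512):
        fd = os.open(device, os.O_RDONLY)
        try:
            os.lseek(fd, sector * sector_size, os.SEEK_SET)
            return os.read(fd, count * sector_size)
        finally:
            os.close(fd)

    # File-style access (what NFS/SMB export): address a named file and a
    # byte range within it; the server owns the filesystem layout.
    # /mnt/nfs/data.bin is a hypothetical NFS-mounted path.
    def read_file_range(path="/mnt/nfs/data.bin", offset=1_000_000, length=4096):
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(length)

In the first case the client decides what the blocks mean; in the second the server does, which is also why only the byte range you ask for needs to cross the wire.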
SCI, Infiniband (Score:2)
I can do this already. Up to 90 odd Gbit.
Ethernet will have to be cheap.
Re: (Score:3, Insightful)
Re: (Score:2)
bandwidth = performance ? (Score:5, Interesting)
Re: (Score:2)
-Rick
Re: (Score:3)
I'd have to wonder what kind of config you're running then. I've gotten 90MB per second over $15 RTL8169 cards and a $70 D-Link gigabit switch, between consumer-grade PCs running ietd on Linux and a Linux iSCSI initiator. I have no doubt that 10Gb Ethernet will wipe the floor with FC.
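As a rough sanity check on that figure, here is the back-of-the-envelope ceiling for TCP payload over gigabit Ethernet with standard 1500-byte frames; the header sizes are nominal assumptions, not measurements.

    # Theoretical TCP goodput ceiling on gigabit Ethernet, 1500-byte MTU.
    line_rate_bps = 1_000_000_000
    mtu = 1500                      # IP packet size per Ethernet frame
    eth_overhead = 14 + 4 + 8 + 12  # header + FCS + preamble + interframe gap
    ip_tcp = 20 + 20                # IPv4 + TCP headers, no options
    payload = mtu - ip_tcp          # TCP payload per frame (ignores iSCSI PDU headers)

    frames_per_sec = line_rate_bps / ((mtu + eth_overhead) * 8)
    goodput_MBps = frames_per_sec * payload / 1e6
    print(f"~{goodput_MBps:.0f} MB/s ceiling")  # about 119 MB/s

So just under 119 MB/s is the absolute best case, and 90 MB/s from commodity NICs is well within reach of that.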
Remember the planning phase when the iSCSI sales rep promised better performance per $ than SAN?
Remember the planning phase when the SAN vendor promised cheaper storage
Re: (Score:2)
Re: (Score:3)
However, the best advantage of iSCSI over FC is replication. How much extra infrastructure and technologies do you need to replicate to a site 1000
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
For a database server, it depends on your read/write patterns, but especially when doing large blocks of data, it can make a difference in both CPU use and throughput. Might be worth a look, but the NIC, switc
Bonding for Unlimited Bandwidth (Score:2)
Is there even a way to do this now with 1Gb-e, or even 100Mbps-e? So all I have to do is add daughtercards to an ethernet card, or multiple cards to a host bus, and let the kernel do the rest, with minimal extra latency?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Funny)
yes, look up "etherchannel" or "bonding"
Wow, that takes me back years. A little over 10 years ago, straight out of college and not knowing any better, I purchased the "cisco etherchannel interconnect" kit for their Catalyst switches. I had to work hard to track down a Cisco reseller that actually had it (which should have been a clue). When I finally got it, the entire "kit" contents were, I kid you not, two crossover Cat5 cables. I learned an important lesson about marketing that day.
-Em
P.S. In all fairness to Cisco the cost of the kit was a
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
Yes and no... (Score:2)
No, it isn't a robust, scalable solution. To play nice with various standards and keep a given communication stream coherent, it uses transmit hashes that pick the port based on packet criteria. If it used criteria that would actually make the most level use of all member links, it would violate that coherency requirement. I have seen all kinds of interesting behavior depending on the hashing algorithm employed. I have seen a place
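A toy illustration of that trade-off; the hash below is purely illustrative, not any particular switch's algorithm.

    import hashlib

    # Pick a member link as a pure function of the flow's addresses.
    # Keeping the choice per-flow preserves in-order delivery on one link,
    # but it also means a single heavy flow can never exceed one link's
    # bandwidth, and unlucky hashing can pile flows onto the same link.
    def member_link(src_mac, dst_mac, num_links):
        digest = hashlib.md5(f"{src_mac}-{dst_mac}".encode()).digest()
        return digest[0] % num_links

    # Two hosts talking to one NAS: with a MAC-pair hash they may both
    # land on the same member link while the others sit idle.
    print(member_link("00:11:22:33:44:55", "aa:bb:cc:dd:ee:ff", 4))
    print(member_link("00:11:22:33:44:56", "aa:bb:cc:dd:ee:ff", 4))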
Re: (Score:2)
Re: (Score:2)
You may be able to get away with cheap NICs if you are running Linux (you won't be able to if you are running Windows, as bonding must be implemented at the device-driver level there). You will certainly need managed switches which explicitly support this feature.
http://en.wikipedia.org/wiki/Link_aggregation [wikipedia.org] seems like a good starting point for finding information on this topic.
Re: (Score:2)
Cost? (Score:2)
I'm also noticing that most if not all of my systems never even tap the bandwidth available on a pair of 4Gb FC ports, let alone 10Gb. I'm sure there are folks out there who need it, but it ain't us.
Of course, our corporate standard is Cisco, so I'm su
Re: (Score:2)
The 10Gb ports aren't really about the hosts (today anyhow). They're generally more useful for the connections to large storage arrays which can push that kind of bandwidth; you'd be able to fan in more hosts to each storage array port.
Re: (Score:2)
To replace the aging MDS switches we went with four Brocade 4900's. We have what I would call a medium sized environment, so the larger broc
Re: (Score:2)
Interesting that you point out a Sun InfiniBand switch as a possible option to "merge it all together". Ci
Re: (Score:2)
That said, I'm happy to see that Cisco finally got their power consumption under control with the 9134's. The older models sucked so much power.
Re: (Score:2)
There's something to be said for port density in large switches vs. small edges everywhere. Where I work we use Cisco 9513s in the cores and at the edges these days. We're migrating from 20+ physical M
Re: (Score:2)
I don't know what the old McDatas were like in terms of power. Brocade was proud of their 1W/Gb power envelope (4W per port). The new 9132's look like they are close to that. For what it's worth, Brocade has a power calculator for the 9513 vs. their 4800 at http://www.brocade.com/products/directors/power_draw_density_calculator.jsp [brocade.com] (the numbers look close to real world, so it is a good starting point).
Re: (Score:2)
In other words, my choices today were gig-E and 10GbE iSCSI, or 4Gb and then 10Gb FC. Anything else was not an option since it simply does not exist outside of the lab (or at a reasonable cost).
At this time, 4Gb FC made the most sense for our storage network. It is well understood, everybody supports it, and the things "just work". I didn't see any major storage vendor
Re: (Score:2)
Channel bonding (Score:3, Informative)
Re: (Score:2)
Sad that the current crop of 10G-over-copper adapters does not even approach 5 gig of raw throughput.
Wrong, current NICs are quite able to saturate the wire, and even more (I got Ingo's TuX to push 20 Gbps of data through two Myricom NICs, and HAProxy to forward 10 Gbps between two NICs). The problem is the chipset on your motherboard. I had to try a lot of motherboards to find a decent chipset which was able to saturate 10 Gbps. Right now, my best results are with Intel's X38 (one NIC supports 10 Gbps in both directions, i.e. 20 Gbps), and two NICs can saturate the wire on output. The next good one is AMD's 79
Re: (Score:2)
The parent is correct. Here is one example of a 10GbE card transmitting at wire speed (9.7 - 9.9 Gbps): http://www.myri.com/scs/performance/Myri10GE/ [myri.com]
The "5 Gbps" bottleneck mentioned by the grand parent is due to 10GbE NICs often being installed on a relatively slow 100MHz PCI-X bus, whose practical bandwidth is only: 100 (MT/s) * 64 (bits) * 0.8 (efficiency of a PCI bus) ~= 5.1 Gbps. Fully exploiting the throughput of 10GbE requires at minimum a (1) PCI-X 2.0 266MHz+, or (2) x8 PCI Express 1.0, or (3)
look forward to the new standard (Score:2)
Further blurring the distinction (Score:2)
Will there ever be "enough" bandwidth to a home? (Score:4, Interesting)
I can't think of anyone who's longing to get a fatter gas pipe connected to their house, or a fatter pipe to municipal water, or a cable of higher capacity to bring in more electricity.
But we're not like that with bandwidth. We always seem to want a fatter pipe of bandwidth! Will it forever be like that? Is the household bandwidth consumption ever going to plateau, like electric, gas and water consumption has in the US? (I know that global demand for these utilities is growing, but that's mainly because there are more people and a larger proportion are being hooked up to municipal utilities. The per-household numbers are not really changing very much, and in some cases decreasing.)
Will there be a plateau in bandwidth demand? If so, when and at what level? Thoughts?
Re: (Score:2)
Re: (Score:2)
Re:Will there ever be "enough" bandwidth to a home (Score:2)
Re:Will there ever be "enough" bandwidth to a home (Score:2)
The per-capita demand of each of those other utilities grew at a large rate when they were first introduced. The first running water was for maybe a sink and a tub; then we started adding toilets, multiple bathrooms, then clothes washers, dishwashers, automatic sprinkler systems, etc. As technology progressed, the amount of water delivered to a given house each day increased dramatically, but the change happened over many decades, which allowed the infrastructure to be updated.
xenaoe.org (Score:2)
I'm way ahead of ya guys!
A lot of details left to fill in but I have a few clusters up and running already. Working on documenting my setup so that others may duplicate it.
this consortium is simply a front end for geeks (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2, Insightful)
Re: (Score:3, Funny)
Sheesh