Networking IT

10Gb Ethernet Alliance is Formed 173

Lucas123 writes "Nine storage and networking vendors have created a consortium to promote the use of 10GbE. The group views it as the future of a combined LAN/SAN infrastructure. They highlight the spec's ability to pool and virtualize server I/O, storage and network resources and to manage them together to reduce complexity. By combining block and file storage on one network, they say, you can cut costs by 50% and simplify IT administration. 'Compared to 4Gbit/sec Fibre Channel, a 10Gbit/sec Ethernet-based storage infrastructure can cut storage network costs by 30% to 75% and increases bandwidth by 2.5 times.'"
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Math on /. (Score:5, Funny)

    by Idiomatick ( 976696 ) on Thursday April 17, 2008 @11:38AM (#23106684)
    I'm worried they had to say 4 * 2.5 = 10 on /.
  • Fibre only? (Score:5, Interesting)

    by masonc ( 125950 ) on Thursday April 17, 2008 @11:41AM (#23106738) Homepage
    From their white paper,
    "The draft standard for 10 Gigabit Ethernet is significantly different in some respects from earlier Ethernet standards, primarily in that it will only function over optical fiber, and only operate in full-duplex mode"

    There are vendors, such as Tyco Electronics' AMP NetConnect line, that have 10G over copper. Has this been discarded in the standard revision?

    • Re:Fibre only? (Score:4, Informative)

      by sjhwilkes ( 202568 ) on Thursday April 17, 2008 @11:50AM (#23106878)
      Not to mention 10 gig CX4 - which uses the same copper cables as Infiniband, and works for up to 15m - enough for many situations. I've used it extensively for server to top-of-rack switch, then fiber from the top-of-rack switch to a central pair of switches. 15m is enough for interlinking racks too, provided the environment's not huge.
      • 10 gig CX4 - which uses the same copper cables as Infiniband
        You can't use the same cables. The terminators are in the cable plugs in Infiniband, and on the NIC on 10GE.

        But I agree with your point, 15m is enough for most situations. It also has the extreme advantage of being affordable, as opposed to fiber, where a 10 Gig XFP transceiver alone costs almost as much as a 10 Gig CX4 NIC.

        And damn that's fast!

        Willy
    • Re: (Score:3, Interesting)

      by gmack ( 197796 )
      If that's true I'm going to be a tad pissed. I paid extra when I wired my apartment so I could be future-proof with cat6 instead of the usual cat5e.
      • Re: (Score:3, Informative)

        by mrmagos ( 783752 )
        Don't worry, according to the task force for 10GBASE-T (IEEE 802.3an), cat6 can support 10Gbit up to 55m. The proposed cat6a will support it out to the usual 300m.
        • Don't worry, according to the task force for 10GBASE-T (IEEE 802.3an), cat6 can support 10Gbit up to 55m. The proposed cat6a will support it out to the usual 300m.


          Not to be a cabling nazi, but I think you meant 328 feet (100 meters equivalent), not 300 meters, right?

          http://en.wikipedia.org/wiki/Category_6_cable [wikipedia.org]
      • Future proof would have been cat7.
      • Re:Fibre only? (Score:5, Insightful)

        by Belial6 ( 794905 ) on Thursday April 17, 2008 @12:17PM (#23107348)
        Unfortunately, you made a fundamental, but common, mistake. You cannot future-proof your home by running any kind of cable. You should have run conduit. That is the only way to future-proof a home for data. When I renovated my last home, I ran conduit to every room. It was pretty cool in that I didn't run any data cables at all until the house was finished. When the house was done, I just pulled the phone, coax and Ethernet lines to the rooms I wanted. If and when fiber, or a higher quality copper, is needed, it will just be a matter of taping the new cable to the end of the old, and pulling it through.
        • by afidel ( 530433 )
          62.5 micron fiber is pretty damn future-proof. Considering they run terabits per second over it today, I don't think any home network is going to outgrow it during my lifetime =)
          • Re: (Score:3, Informative)

            I guess that is why we run 50/125 multimode everywhere. The 62.5 just didn't cut it anymore for higher bandwidth applications :-)

            Maybe you are thinking about 9 micron singlemode fiber?
            • by afidel ( 530433 )
              MM is ok for short runs with a single wavelength but single mode is what's used for the truly high speed stuff =)
          • by Belial6 ( 794905 )
            Fiber isn't even present-proof. Try hooking up your good old telephone to that fiber. I'm sure it can be done, but you are talking about thousands of dollars per phone. Plus, there is no guarantee that fiber will EVER be common or economical in the home.
        • by Pinback ( 80041 ) on Thursday April 17, 2008 @01:01PM (#23108006) Homepage Journal
          In your case, you really do get to the internet via tubes.
        • When I build my home I am going to do just this. It doesn't make sense to put cat5, phone line or even cable TV coax inside a wall without being able to pull it back out later.

          Electric wiring is good for decades; communication tech changes dramatically every 5-10 years.
        • Re: (Score:3, Funny)

          by darkpixel2k ( 623900 )
          Unfortunately, you made a fundamental, but common mistake. You cannot future proof your home by running any kind of cable. You should have run conduit. That is the only way to future proof a home for data. When I renovated my last home, I ran conduit to every room. It was pretty cool in that I didn't run any data cables at all until the house was finished. When The house was done, I just pulled the phone, coax and Ethernet lines to the rooms I wanted. If and when fiber, or a higher quality copper is needed,
          • by Belial6 ( 794905 )
            I honestly cannot tell if you are trying to make a point, or just trying to make a joke. If you are trying to make a point, it doesn't make sense, since installing conduit is pretty much just as easy as installing wire directly, it doesn't take any more room, and is pretty darn cheap. Plus, the equivalent of a Jefferies tube already exists in most houses. It is called an "attic", and the "crawl space" under the house. Sometimes there are really big ones called "basements".
              If you are trying to make a point, it doesn't make sense, since installing conduit is pretty much just as easy as installing wire directly, it doesn't take any more room, and is pretty darn cheap. Plus, the equivalent of a Jefferies tube already exists in most houses. It is called an "attic", and the "crawl space" under the house. Sometimes there are really big ones called "basements".

              Both a point and a joke.

              Personally, I would love to have jefferies tubes in my house. An attic is used for storing you
        • by jabuzz ( 182671 )
          Assuming that your home does not need more than 55m cable runs (that's a big house if it does, and you can probably afford to go on a holiday for a month while you have the house recabled), then Cat6 is good for at least 20 years. With 10GbE you can send 1080p uncompressed video at a whopping 200fps (a quick sanity check of that figure follows below). You need serious high-end server kit with multi-spindle SAS or FC RAID arrays to saturate a 10GbE link.

          Let's face it, the original 10Mbps, which is now 28 years old, is still faster than the vast majority of peoples
          • by Belial6 ( 794905 )
            Your install would not have sufficed even at completion. Your install is only going to handle TCP/IP traffic. What about the telephone lines, audio lines, and video lines? Basically you have missed a huge amount of data that is being shuffled around your average home. You, like most people pulling wire directly, are too wrapped up in solving a small problem to see the big picture.
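A quick back-of-the-envelope check of the uncompressed-1080p figure above, as a small Python sketch. It assumes 24-bit colour and ignores Ethernet/IP framing overhead, so the real ceiling is slightly lower:

```python
# Rough ceiling on uncompressed 1080p frame rate over a given link,
# assuming 24 bits per pixel and ignoring framing/protocol overhead.
def max_fps(link_gbps: float, width: int = 1920, height: int = 1080,
            bits_per_pixel: int = 24) -> float:
    bits_per_frame = width * height * bits_per_pixel  # ~49.8 Mbit per frame
    return link_gbps * 1e9 / bits_per_frame

print(f"10GbE: ~{max_fps(10):.0f} fps")  # ~201 fps, matching the ~200fps claim
print(f"1GbE:  ~{max_fps(1):.0f} fps")   # ~20 fps, so gigabit can't carry it
```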
      • by oolon ( 43347 )
        Yes, you have wasted your money; unfortunately cat6 only supports 2.5Gb/s, and unfortunately no one produced equipment to work at that speed because the wiring people came up with the standard before asking if anyone wanted to produce equipment to work over it. Cat5e is certified for 1Gb/s use. The newer standards like 10Gb/s Ethernet have been designed with buy-in from equipment manufacturers; copper 10G, or Cat6e, was probably what you need. However, as 10Gb/s (copper) Ethernet currently uses 45 watts a port...
        • by afidel ( 530433 )
          Wrong, Cat6 supports 10Gbit (10GBase-T) over reasonable distances and Cat6A supports it to 100m. If you have a small number of runs you should be able to run to 100m over Cat6 so long as you can physically separate the runs so there is no alien crosstalk (inter-cable crosstalk).
          • by oolon ( 43347 )
            It would not surprise me if it worked "ok" using a .5 metre cat5 patch lead, as it is electrically compatible, and I am sure the guy in the flat will probably find the wiring works well enough, as I will with the stuff I have in my house. That's completely different from getting a certified installation like I would in the datacentre I was in charge of. The company installing it certifies it will work at the rated speed over all the cables they lay. The standard is all about the complete wiring solution, not just the
      • by Guspaz ( 556486 )
        I don't know what they were smoking when they wrote that proposal, but there IS a copper spec. IEEE 802.3an-2006 (yes, 2006), or 10GBase-T.

        Rated for 55m over Cat6 cable, or the full 100m over Cat6a cable.

        Your Cat6 wiring is non-optimal for 10GigE, but will at least work.
  • Misleading Title. (Score:2, Insightful)

    by Anonymous Coward
    The 10GEA [wikipedia.org] is not the same as the storage alliance mentioned in TFA.
  • Block storage? (Score:3, Interesting)

    by mosel-saar-ruwer ( 732341 ) on Thursday April 17, 2008 @11:47AM (#23106844)

    By combining block and file storage on one network, they say, you can cut costs by 50% and simplify IT administration.

    What is "block" storage?

    • Re:Block storage? (Score:5, Informative)

      by spun ( 1352 ) <loverevolutionary&yahoo,com> on Thursday April 17, 2008 @12:02PM (#23107104) Journal
      SAN is block storage, NAS is file storage. Simply put, if you send packets requesting blocks of data, like you would send over your local bus to your local hard drive, it is block storage. If you send packets requesting whole files, it is file storage.
      • Re: (Score:3, Informative)

        by Guy Harris ( 3803 )

        SAN is block storage, NAS is file storage. Simply put, if you send packets requesting blocks of data, like you would send over your local bus to your local hard drive, it is block storage. If you send packets requesting whole files, it is file storage.

        No. If you send packets requesting blocks of data on a region of disk space, without any indication of a file to which they belong, that's block storage. If you send packets opening (or otherwise getting a handle for) a file, packets to read particular regi

        • by spun ( 1352 )
          Thanks for clarifying that. But I've never heard anyone refer to random access on a given file as 'block' storage.
    • The previous reply was good, just wanted to expand (a short sketch of the difference follows at the end of this thread). File access is literally grabbing a file over the network, like opening a Word document. It pulls the entire file over the network, then opens it.

      Your hard drive is a block device. A SAN just uses some protocols to make the OS treat remote storage as a local disk (think of it as SCSI going over the network, instead of a local cable, which is almost exactly what iSCSI is). You can format, defrag, etc. The OS does not know that the device isn't insid
      • The previous reply was good, just wanted to expand. File access is literally grabbing a file over the network. Like opening a word document. It pulls the entire file over the network, then opens it.
        Hmm, I'm pretty sure both NFS and SMB only transmit the bits of the file the app wants (and maybe a little bit extra to reduce network traffic), not the entire file.
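To make the block-vs-file distinction in this thread concrete, here is a minimal Python sketch. The paths are hypothetical examples (an iSCSI-attached LUN showing up as /dev/sdb, and a file on an NFS mount at /mnt/nfs), not anything from the article: with block storage the client reads raw blocks by offset from something that looks like a local disk, while with file storage it asks a server for bytes within a named file.

```python
import os

# Block storage: the initiator sees a raw disk (here a hypothetical iSCSI LUN
# exposed as /dev/sdb) and reads 4 KiB blocks by offset. It has no idea which
# file, if any, those blocks belong to -- the filesystem lives on the client.
def read_block(device="/dev/sdb", block_number=128, block_size=4096):
    fd = os.open(device, os.O_RDONLY)
    try:
        os.lseek(fd, block_number * block_size, os.SEEK_SET)
        return os.read(fd, block_size)
    finally:
        os.close(fd)

# File storage: the client names a file on a network filesystem (here a
# hypothetical NFS mount) and the server decides which blocks to fetch.
def read_file_region(path="/mnt/nfs/report.doc", offset=0, length=4096):
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)
```

As the last comment notes, NFS/SMB clients generally request only the byte ranges an application asks for; the real difference is who resolves file names to disk blocks.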

  • etc.

    I can do this already. Up to 90 odd Gbit.

    Ethernet will have to be cheap.

     
    • Re: (Score:3, Insightful)

      by afidel ( 530433 )
      There are literally several orders of magnitude more ports of Ethernet sold per year than Fibre Channel, and about an order of magnitude more Fibre Channel than Infiniband. Most of the speakers at Storage Networking World last week think that it's inevitable that Ethernet will take over storage; the ability to spread R&D out over that many ports is just too great an advantage for it not to win in the long run.
    • by jd ( 1658 )
      I thought the current limit on Infiniband was 12 channels in any given direction with 5 Gb/s per channel (and even then that only applies if you're using PCIe 2.x with the upgraded bus speeds), giving you a peak of 60 Gb/s. Regardless, 60 Gb/s is still well over the 10 Gb/s of Ethernet. More to the point, latencies on Infiniband are around 2.5-8 µs, whereas they can be 100 times as much over Ethernet. Kernel bypass is another factor. It exists for Ethernet, but it's rare, whereas it's standard for Infiniband
  • by magarity ( 164372 ) on Thursday April 17, 2008 @11:53AM (#23106930)
    So how will TCP/IP networking at this speed measure up to dedicated storage devices like a SAN over Fibre Channel? I have to suspect not well; existing iSCSI over 1Gb TCP/IP is a lot less than 1/4 of 4Gb fibre to a decent SAN. Sigh, I'm afraid even more of my databases will get hooked up to cheap iSCSI over this instead of SAN space that costs more dollars per capacity but delivers the speed when needed :( Reports coming up fast enough? Remember the planning phase when the iSCSI sales rep promised better performance per $ than SAN? It wasn't better overall performance, just better per $. There's a BIG difference.
    • by RingDev ( 879105 )
      If the practical bandwidth differences between bleeding edge SCSI and bleeding edge Ethernet over fiber between the physical storage of your data, the controller of the database, and the requester of the data, is the limiting factor of your "reports coming up", there is either a fundamental design issue going on, or your clients are sitting at the terminals with stop watches counting the milliseconds of difference.

      -Rick
    • by Znork ( 31774 )
      existing iSCSI over 1Gb TCP/IP is a lot less than 1/4 of 4Gb

      I'd have to wonder what kind of config you're running then. I've gotten 90MB per second over $15 RTL8169 cards and a $70 D-Link gigabit switch, between consumer-grade PCs running ietd on Linux and a Linux iSCSI initiator. I have no doubt that 10Gb Ethernet will wipe the floor with FC.

      Remember the planning phase when the iSCSI sales rep promised better performance per $ than SAN?

      Remember the planning phase when the SAN vendor promised cheaper storage
    • by oolon ( 43347 )
      It also means that when the networking team does a bad firewall change, not only will it prevent user access, it will mess up the storage, requiring a lot more work to get your databases and filesystems up and running again, or at least a forced shutdown and reboot. Personally I would not want to share block storage on my public interface in any case, as iSCSI is not designed as a highly secure standard (as this would impact performance), and public interfaces generally have more sophisticated firewalling rules on the switch gear
      • It is usually recommended to run a separate network for the storage traffic. It is possible to run storage over the same NIC that the server uses for other network traffic, but it is not recommended (though it is often used as a "failover"). This also helps when you turn on jumbo frames; some servers just don't like to work correctly. A separate network makes it better.

        However, the best advantage of iSCSI over FC is replication. How much extra infrastructure and technologies do you need to replicate to a site 1000
        • by oolon ( 43347 )
          You're quite right in your points; I would still see it as an entry-level installation and not really in competition with the FC equipment. We had 3 fibre fabrics, two for data/failover and one for backup, all with dedicated switches. We moved to a site which had dark fibre for remote storage connections; however, if you're running it over 1000 miles, using iSCSI and host-based replication, that's got to have real latency/synchronisation problems, and for that kind of setup magic boxes like IBM's SVCs come into their
    • by afidel ( 530433 )
      10Gb Ethernet is 25% more bandwidth than bleeding-edge 8Gb FC and it's a fraction of the cost per port. For new installations it's a no-brainer. For places with both infrastructures it's most likely to be evaluated on a per-box basis.
    • by Sentry21 ( 8183 )
      One of the big issues you should consider is whether or not you are using jumbo frames. Some people claim only a minimal performance increase, but jumbo frames can significantly reduce transmission/reception overhead on a gigabit network when doing block data transfers between 1500 and 9000 bytes.

      For a database server, it depends on your read/write patterns, but especially when doing large blocks of data, it can make a difference in both CPU use and throughput. Might be worth a look, but the NIC, switc
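As a rough illustration of the jumbo-frame point above, here is a small Python sketch comparing wire efficiency and frame counts for 1500-byte versus 9000-byte MTUs. It assumes plain TCP/IPv4 (40 bytes of headers) and 38 bytes of per-frame Ethernet overhead (preamble, MAC header, FCS, interframe gap); real numbers vary with TCP options and offloads.

```python
# Per-frame overhead comparison for standard (1500) vs jumbo (9000 byte) MTU,
# assuming plain TCP/IPv4 headers and fixed per-frame Ethernet overhead.
ETH_OVERHEAD = 38      # preamble + MAC header + FCS + interframe gap, bytes
TCP_IP_HEADERS = 40    # IPv4 + TCP without options, bytes

def wire_efficiency(mtu: int) -> float:
    payload = mtu - TCP_IP_HEADERS
    return payload / (mtu + ETH_OVERHEAD)

def frames_needed(nbytes: int, mtu: int) -> int:
    payload = mtu - TCP_IP_HEADERS
    return -(-nbytes // payload)  # ceiling division

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {wire_efficiency(mtu):.1%} wire efficiency, "
          f"{frames_needed(1 << 30, mtu):,} frames per GiB")
# MTU 1500: ~94.9% efficiency, ~735k frames per GiB
# MTU 9000: ~99.1% efficiency, ~120k frames per GiB
```

The bigger win is usually the roughly sixfold drop in frames (and therefore interrupts and per-packet CPU work) per gigabyte, which is what tends to show up as lower CPU use on iSCSI initiators and targets.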
  • If these new fast ethernet specs came with specs for plugging multiple parallel paths between machines all under the same host IP#s, so we just add extra HW ports and cables between them to multiply our bandwidth, ethernet would take over from most other interconnect protocols.

    Is there even a way to do this now with 1Gb-e, or even 100Mbps-e? So all I have to do is add daughtercards to an ethernet card, or multiple cards to a host bus, and let the kernel do the rest, with minimal extra latency?
    • by Feyr ( 449684 )
      yes, look up "etherchannel" or "bonding"
      • Kind of. This method of aggregating bandwidth by using multiple links does poor man's load balancing. The traffic between one source and one target will only traverse a single path until that path fails. If you have a lot of different sources on one side of an etherchannel going to a lot of different targets on the other side of the etherchannel, you get a relatively balanced workload. If you've got a smaller number of sources and targets it's easy to get uneven bandwidth utilization across those links.
        • by Feyr ( 449684 )
          IIRC, some switches provide more sorts of hashing than just src/dst MAC. It's been a while, so I'd have to look it up again.
      • Re: (Score:3, Funny)

        by Em Ellel ( 523581 )

        yes, look up "etherchannel" or "bonding"

        Wow, that takes me back years. A little over 10 years ago, straight out of college and not knowing any better I purchased the "cisco etherchannel interconnect" kit for their catalyst switches. I had to work hard to track down a cisco reseller that actually had it (which should have been a clue). When I finally got it, the entire "kit" contents were, I kid you not, two cross connect cat5 cables. I learned an important lesson about marketing that day.

        -Em

        P.S. In all fairness to Cisco the cost of the kit was a

    • Re: (Score:3, Informative)

      by imbaczek ( 690596 )
      802.3ad [wikipedia.org]
      • It looks like that's even supported in the Linux kernel [linux-ip.net]. But does it really work?
        • Yes, it works to the extent reasonable/feasible.

          No, it isn't a robust scalable solution. To play nice with various standards and keep a given communication stream coherent, it uses transmit hashes that pick the port based on packet criteria (a toy sketch of this follows at the end of this thread). If it tried to use criteria that would actually make the most level use of all member links, it would violate the aforementioned coherence requirement. I have seen all kinds of interesting behavior depending on the hashing algorithm employed. I have seen a place
          • That sounds like a problem with some rare edge cases. Can't the admins test the deployment to ensure the traffic is maximally using the multiple channels in the actual installation configuration? Is it that complicated to test and reconfig until it works? Maybe with a mostly automated tool?
    • With the right NICs and switches you can already do this. You will probably find it much easier if all the NICs are from the same range.

      You may be able to get away with cheap NICs if you are running Linux (you won't be able to if you are running Windows, as bonding must be implemented at the device driver level). You will certainly need managed switches which explicitly support this feature.

      http://en.wikipedia.org/wiki/Link_aggregation [wikipedia.org] seems like a good starting point for finding information on this topic.
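To illustrate the hashing behaviour discussed in this thread (why an 802.3ad/etherchannel bond does not split a single flow across member links), here is a toy Python sketch of a layer-2+3 style transmit hash. It is a simplified stand-in for illustration, not the exact algorithm of any particular switch or of the Linux bonding driver.

```python
# Toy layer-2+3 transmit hash: each conversation deterministically maps to one
# member link, so a single flow never exceeds one link's speed, while many
# different conversations spread (roughly) across all links.
def pick_member(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str,
                n_links: int) -> int:
    mac_bits = int(src_mac.replace(":", ""), 16) ^ int(dst_mac.replace(":", ""), 16)
    ip_bits = (int.from_bytes(bytes(int(o) for o in src_ip.split(".")), "big")
               ^ int.from_bytes(bytes(int(o) for o in dst_ip.split(".")), "big"))
    return (mac_bits ^ ip_bits) % n_links

# The same host pair always lands on the same member link...
print(pick_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", "10.0.0.1", "10.0.0.2", 4))
print(pick_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", "10.0.0.1", "10.0.0.2", 4))
# ...while a different destination may (or may not) hash to a different link.
print(pick_member("00:11:22:33:44:55", "66:77:88:99:aa:cc", "10.0.0.1", "10.0.0.3", 4))
```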
  • Hope they fix the pricing issue, because the FC network I just put in cost less per Gb than a 1 gig-E network. Compared to the cost per Gb of a 10G-E network, the entire thing cost less than the optics on the 10G stuff, let alone the actual switch costs.

    I'm also noticing that most if not all of my systems never even tap the bandwidth available on a pair of 4Gb FC ports, let alone 10Gb. I'm sure there are folks out there who need it, but it ain't us.

    Of course, our corporate standard is Cisco, so I'm su
    • Didn't you buy Cisco Fibre Channel gear then? I don't recall their FC gear being inexpensive either!

      The 10Gb ports aren't really about the hosts (today anyhow). They're generally more useful for the connections to large storage arrays which can push that kind of bandwidth; you'd be able to fan in more hosts to each storage array port.

        Our original SAN consisted of four MDS 9216s with the 32-port blades, for a total of 48 ports per switch. Two at each site (stretched fabric, 3km of dark fiber between the two sites, dual fabric, bla, bla, bla). Every server has two ports and thus full redundancy in the event of a switch/cable/whatever failure. (Ever try and bond two ethernet ports between two switches?)

        To replace the aging MDS switches we went with four Brocade 4900's. We have what I would call a medium sized environment, so the larger broc
        • It looks to me like everything on your list about the 4900s could be achieved using the stackable MDS 9134 switches. You get a 64 port switch in 2RUs, 4Gb line rate ports (no oversub), hitless firmware upgrades and less power than your old 9216s. There aren't two supervisors like in the director class MDS switches, but I suspect the same is true for your Brocade 4900s (I've never looked into them).

          Interesting you point out a Sun Infiniband switch as a possible option to "merge it all together". Ci

            The 9134's look a lot like the Qlogic 5600's. They both have the same issue: stacking two will oversubscribe the 10Gb FC interconnects. I know that there are ways to avoid this in theory, but luck always has it that I never have a port available where I need it for optimal performance. Thus things like the 4900 are so nice in that it is next to impossible to screw up :-)

            That said, I'm happy to see that cisco finally got their power consumption under control with the 9134's. The older models sucked so much p
            • Maybe they are rebranded Qlogic switches...I recall the old 9020's were Qlogic underneath (Cisco's first 4Gbps FC switches). Those things didn't understand VSANs when they came out, which seemed like a strange thing to put into the MDS line. I guess they just wanted to get out there with a 4Gbps switch.

              There's something to be said for port density in large switches vs. small edges everywhere. Where I work we use Cisco 9513s in the cores and at the edges these days. We're migrating from 20+ physical M

                The 9134's run SAN-OS and offer all of the features one would expect, so I think they actually did it themselves this time.

                I don't know what the old McData's were like in terms of power. Brocade was proud of their 1W/Gb power envelope (4W per port). The new 9134's look like they are close to that. For what it's worth, Brocade has a power calculator for the 9513 vs their 4800 at http://www.brocade.com/products/directors/power_draw_density_calculator.jsp [brocade.com] (the numbers look close to real world, so it is a good sta
  • Channel bonding (Score:3, Informative)

    by h.ross.perot ( 1050420 ) on Thursday April 17, 2008 @12:36PM (#23107654)
    Sigh. Aggregating 2 or more 1Gb adapters does not give you 2Gb of throughput. It is a sliding scale; the more you add, the less additional bandwidth you see. The safest bonding scheme uses LACP, the Link Aggregation Control Protocol. This protocol communicates member state and load-balancing requests to the link members. 10G over copper will be a good thing for VMs. Sadly, the current crop of 10G-over-copper adapters do not approach 5Gb of raw throughput. Give the industry time; this is just like the introduction of 1Gb from Fast Ethernet. It took 2 generations of ASICs to get to what we consider a gigabit card today.
      Sadly, the current crop of 10G-over-copper adapters do not approach 5Gb of raw throughput.

      Wrong, current NICs are quite able to saturate the wire, and even more (I got Ingo's TuX to push 20 Gbps of data through two Myricom NICs, and HAProxy to forward 10 Gbps between two NICs). The problem is the chipset on your motherboard. I had to try a lot of motherboards to find a decent chipset which was able to saturate 10 Gbps. Right now, my best results are with Intel's X38 (one NIC supports 10 Gbps in both directions, i.e. 20 Gbps), and two NICs can saturate the wire on output. The next good one is AMD's 79

      • The parent is correct. Here is one example of a 10GbE card transmitting at wire speed (9.7 - 9.9 Gbps): http://www.myri.com/scs/performance/Myri10GE/ [myri.com]

        The "5 Gbps" bottleneck mentioned by the grandparent is due to 10GbE NICs often being installed on a relatively slow 100MHz PCI-X bus, whose practical bandwidth is only 100 (MT/s) * 64 (bits) * 0.8 (efficiency of a PCI bus) ~= 5.1 Gbps. Fully exploiting the throughput of 10GbE requires at minimum a (1) PCI-X 2.0 266MHz+, or (2) x8 PCI Express 1.0, or (3)
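For reference, the parent's bus arithmetic as a small Python sketch. The 0.8 efficiency factor is the rough figure quoted above, and the PCIe line assumes 2.5 GT/s per lane with 8b/10b encoding; these are illustrative estimates, not measurements.

```python
# Rough usable bandwidth of the host bus a 10GbE NIC is plugged into.
def pci_x_gbps(bus_mhz: float, width_bits: int = 64, efficiency: float = 0.8) -> float:
    return bus_mhz * 1e6 * width_bits * efficiency / 1e9

def pcie1_gbps(lanes: int, efficiency: float = 0.8) -> float:
    # PCIe 1.0: 2.5 GT/s per lane, 8b/10b encoding -> 2.0 Gbps raw per lane
    return lanes * 2.0 * efficiency

print(f"PCI-X 100 MHz : {pci_x_gbps(100):.1f} Gbps")  # ~5.1 -- the bottleneck above
print(f"PCI-X 266 MHz : {pci_x_gbps(266):.1f} Gbps")  # ~13.6
print(f"PCIe 1.0 x8   : {pcie1_gbps(8):.1f} Gbps")    # ~12.8
```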

  • in other news, ISO starts the process of ratifying the new MS10G(tm) specifications.

  • Between the network cable and the drive cable. USB subsumed many old-technology interconnects; perhaps 10 Gb Ethernet can replace SATA and continue the trend of decreasing the number of interfaces required on a computer.
  • by Dr. Spork ( 142693 ) on Thursday April 17, 2008 @01:53PM (#23108864)
    Stories like this always make me think of the following:

    I can't think of anyone who's longing to get a fatter gas pipe connected to their house, or a fatter pipe to municipal water, or a cable of higher capacity to bring in more electricity.

    But we're not like that with bandwidth. We always seem to want a fatter pipe of bandwidth! Will it forever be like that? Is the household bandwidth consumption ever going to plateau, like electric, gas and water consumption has in the US? (I know that global demand for these utilities is growing, but that's mainly because there are more people and a larger proportion are being hooked up to municipal utilities. The per-household numbers are not really changing very much, and in some cases decreasing.)

    Will there be a plateau in bandwidth demand? If so, when and at what level? Thoughts?

      Yes, but this isn't for homes, this is for offices; no (normal person's) home has 100Mb internet. I think the limit for broadband will simply be when you can download a film in 5 minutes over BitTorrent, which basically depends on how fast the average connection is, not just yours.
        Yeah, I got that, but the fact that this is feasible for offices now means that homes could use the same technology in the near future. There are some "normal" homes in Sweden that already get 100Mb internet, which is enough for streaming HD, but not enough for many other things we will eventually come up with. So I was wondering whether there will ever be an "enough" level for the home. For a business like Google, "enough" might only be: the sum of all user bandwidths in the world. But for users, will there
      There has not yet been a plateau in how fast and how far we want to communicate with each other, so there probably won't be a plateau in bandwidth demand. Communication started with speech in person (which wasn't very fast and couldn't reach many people), and then moved to paper and ponies (as in the Pony Express, which could at least reach a larger audience), visual signalling with signal towers (starting to speed up), telegraph, telephone, radio, TV, satellites, the internet...the pipe keeps expanding. It may neve
    • Yes, it will plateau, eventually.

      The per-capita demand of each of those other utilities grew at a large rate when they were first introduced. The first running water was for maybe a sink and a tub, then we started adding toilets, multiple bathrooms, then clothes washers, dishwashers, automatic sprinkler systems, etc. As technology progressed, the amount of water delivered to a given house each day increased dramatically, but the change happened over many decades, which allowed the infrastructure to be updat
  • http://xenaoe.org/ [xenaoe.org]

    I'm way ahead of ya guys!

    A lot of details left to fill in but I have a few clusters up and running already. Working on documenting my setup so that others may duplicate it.
