Cooler Servers or Cooler Rooms?

mstansberry writes "Analysts, experts and engineers rumble over which is more important in curbing server heat issues; cooler rooms or cooler servers. And who will be the first vendor to bring water back into the data center?"
  • Why not both? (Score:4, Insightful)

    by tquinlan ( 868483 ) <tom@thomasquinlan.com> on Wednesday April 06, 2005 @10:29AM (#12153735) Homepage
    Unless you make things so cold as to prevent things from working properly, why not just do both?

    • Re:Why not both? (Score:5, Insightful)

      by Anonymous Coward on Wednesday April 06, 2005 @10:33AM (#12153805)
      Cost.

      The question isn't whether it's good to keep both cool. The question is, which makes more financial sense? Cooling the whole room? Spending the money to purchase servers with a low heat-to-computation ratio?

      Probably a combination. But to say that people should equip every computer with VIA ultra-low-power chips _and_ freeze the server room is silly.
      • Re:Why not both? (Score:4, Insightful)

        by elgatozorbas ( 783538 ) on Wednesday April 06, 2005 @11:03AM (#12154180)
        The question isn't whether it's good to keep both cool. The question is, which makes more financial sense? Cooling the whole room? Spending the money to purchase servers with a low heat-to-computation ratio?

        In that case, I would say: cool the room. The room is forever (on this timescale); the servers change maybe every 5 years. Of course, don't start by choosing the hottest server either, but I would invest in the room.
        Also, I don't expect room cooling techniques to improve significantly in the next few years. Servers hopefully will.

        • Re:Why not both? (Score:4, Insightful)

          by Anonymous Luddite ( 808273 ) on Wednesday April 06, 2005 @11:19AM (#12154380)
          >> I would say: cool the room.

          I think you're right. That's the way we do it, but - (there's always a but) some cabinets still get damn hot depending on what's in the rack. Sometimes you need to do spot cooling as well, or put in bigger fans to keep the equipment closer to ambient.

          I think starting with a cool room is the most cost effective way though - not to mention it makes work "O.K." in August...
          • Re:Why not both? (Score:4, Interesting)

            by Sylver Dragon ( 445237 ) on Wednesday April 06, 2005 @12:47PM (#12155648) Journal
            I wonder if a more directed cooling might work better. For instance, change the rack/chassis design such that it expects airflow to come from the top and exit the bottom, then duct the A/C right into the top of each rack. While you would still keep the rest of the room cool, it just seems to be wasteful to keep a several hundred square foot room at 60 degrees the whole time when the real goal is to keep the equipment in the racks from baking itself into oblivion.
            I also agree with the guy in the article, liquid cooling in the server room is going to happen eventually. I got to see the difference a simple watercooling system made on a P4 3.02GHz Extreme Edition chip, stuffed in the same case with a GeForceFX5950. Even with some serious fans the case was borderline overheating in games. Part of the problem being that the room it was in wasn't kept that cool, and the owner had it in a cabinet in his desk (it is what that cabinet was designed for). He dropped a liquid cooling system into it, and now the thing is always nice and frosty. And even with the jolts and jostling of taking the system to several LAN parties, the liquid cooling system is still leak free and rock solid. His experience has actually made me consider one for my own next system. For a server, where the system usually sits still long enough to collect a measurable amount of dust, water cooling may be a very good choice. If it's installed properly the likelihood of leaks is low, and the performance can be very good. Heck, I can see it now, our server rooms will eventually have a rack or two devoted entirely to the radiators for the liquid cooling systems of servers, which run hot enough to form plasma.

            • Re:Why not both? (Score:3, Interesting)

              by temojen ( 678985 )

              I can see it now, our server rooms will eventually have a rack or two devoted entirely to the radiators for the liquid cooling systems of servers, which run hot enough to form plasma.

              It seems more likely to me that the radiators would be placed outside. I could foresee water-cooled racks that come with centre-mounted warm and cool water manifolds [toolbase.org] plumbed to high-flow lines to take all the water to one big radiator outside...

              Or, probably easier to manage, a 2-4U centre-mounted unit with the manifold and pump.

        • by Otto ( 17870 )
          Of the many server rooms I've been in, the most effective cooling I've seen has been to enclose the racks into sealed cabinets (adding a cheapish layer of physical security as well, by locking the things) and then piping cooled air directly into the top of the cabinets.

          If you buy your own racks to put gear in, then getting these things is easy; if you buy whole racks from a vendor with gear in it already (custom systems type of thing), then the thing comes in a cabinet which usually has some kind of a fan/
          • Of the many server rooms I've been in, the most effective cooling I've seen has been to enclose the racks into sealed cabinets (adding a cheapish layer of physical security as well, by locking the things) and then piping cooled air directly into the top of the cabinets.

            You sure? Cool air in the *top*? All the ones I've seen (and all the rack equipment manufacturers' accessories) draw cool air from under the raised floor and pull it *up* through the rack. This is because the hot air your systems are exhaus
      • Re:Why not both? (Score:4, Insightful)

        by FireFury03 ( 653718 ) <slashdot@NoSPAm.nexusuk.org> on Wednesday April 06, 2005 @11:21AM (#12154404) Homepage
        The question isn't whether it's good to keep both cool. The question is, which makes more financial sense? Cooling the whole room? Spending the money to purchase servers with a low heat-to-computation ratio?

        You don't need to cool the whole room - you could just cool the cabinets. Most cabinets have doors and sides, an open bottom and fans at the top. So you can blow cold air up the inside of the datacabinet (which is what most datacentres do anyway) and take the air from the top to recycle it with reasonably minimal air (and hence heat) exchange with the rest of the room.
      • Re:Why not both? (Score:3, Insightful)

        by drgonzo59 ( 747139 )
        Not just cost but also reliability (in the end also a cost issue). Having water flow among the CPUs, hard drives and all other components is just asking for trouble. In case of a leak it won't be just one damaged CPU or one memory stick, which the system could compensate for and keep going; the whole box or maybe the whole rack might be ruined.

        This doesn't seem like an either-or situation or a large research question. A cost and reliability analysis should determine what is better for each individual setup.

      • Re:Why not both? (Score:3, Insightful)

        by hey! ( 33014 )
        It's pretty simple to figure this out with current prices. You just crunch the numbers and see which mix of investments wins. The place where it gets interesting is the place it always gets interesting: predicting the future. Will energy prices rise, and if so how quickly? If you expect them to double in a hundred years, then you can probably work from present-day costs; the net present value of those future savings is nil. If you expect them to double in three years, then you have to do the operatin
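The back-of-the-envelope exercise described above can be sketched as a tiny script. A minimal sketch, assuming placeholder figures for energy price, price growth, discount rate, and power saved; none of these numbers come from the discussion:

```python
# Rough net-present-value comparison of an efficiency investment.
# All figures are illustrative assumptions, not real prices.

def npv_of_energy_savings(kw_saved, price_per_kwh, price_growth, discount_rate, years):
    """Discounted value of the electricity saved by lower-power servers."""
    hours_per_year = 24 * 365
    total = 0.0
    for year in range(1, years + 1):
        price = price_per_kwh * (1 + price_growth) ** year
        annual_saving = kw_saved * hours_per_year * price
        total += annual_saving / (1 + discount_rate) ** year
    return total

# Example: 5 kW shaved off a rack's draw, $0.10/kWh today, 5-year horizon.
slow_growth = npv_of_energy_savings(5, 0.10, 0.01, 0.07, 5)   # prices nearly flat
fast_growth = npv_of_energy_savings(5, 0.10, 0.25, 0.07, 5)   # prices doubling in ~3 years
print(f"NPV, slow price growth: ${slow_growth:,.0f}")
print(f"NPV, fast price growth: ${fast_growth:,.0f}")
```

Under assumptions like these, the value of the same 5 kW saving roughly doubles when prices are expected to double within a few years rather than stay flat, which is exactly the sensitivity the comment above is pointing at.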
    • Re:Why not both? (Score:3, Insightful)

      by FidelCatsro ( 861135 )
      This is exactly the point: you need a damned good balance.
      If I have to be working on the server locally in some fashion, I would rather not be boiling or freezing. In the average working environment we can't have it in a refrigerated room, as this would bring up a lot of working issues, not to mention the design of the room itself. On the other hand, we don't want a drastically complex cooling system that would add another possible avenue for failure to occur.
      The best is possibly a nicely air-conditioned room with
    • If they would redesign server racks so that the DC power for the motherboards was brought in from outside the server room, they could probably:

      Reduce power consumption

      Reduce heat in the server room

      Improve reliability

      • I don't think the power supplies contribute a really major portion of the heat to servers these days. It's all about disks and processors.

        As for power consumption, I don't see how converting the power outside the rack uses less power than converting it inside the rack. And it won't improve reliability since each server will still need a power supply, it will just be a DC-DC one. I don't think you can (reasonably) run a 500-watt power line at 12 volts. Not to mention that you need more than one voltage.
        • by Iphtashu Fitz ( 263795 ) on Wednesday April 06, 2005 @11:52AM (#12154812)
          Converting to DC can help a lot in big datacenters if you have a lot of hardware. UPS's run exclusively off DC (remember, they're basically just car batteries daisy-chained together). The datacenters lose power & generate heat in the conversion from AC to DC and back to AC. They're always happy to avoid that second step if possible. And if you happen to have hardware located in a datacenter where telcos have equipment, you're likely to find a huge DC infrastructure already in place, since a lot of telco equipment runs on DC.

          Personally I think BOTH the power & the cooling needs to be addressed. I've worked in datacenters where cabinets are filled with 30+ 1U servers. Not only is it a royal pain in the ass to deal with all the power cabling for 30 individual servers but the heat blowing out the back of those cabinets is enough to melt the polar ice caps...

          I've also worked on blade servers like IBM's BladeCenter. Their next generation of blades will require even more power than the current one does. Trying to convince a datacenter to run four 208-volt feeds to handle just a pair of BladeCenters (28 blades) is like pulling teeth. They can't comprehend that much power in such a small footprint. A rack full of BladeCenters could easily require eight 208-volt feeds, whereas a rack full of 1U's may only need three or four 110-volt feeds.
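The point about avoiding the extra conversion step can be illustrated with a rough efficiency chain. The percentages below are assumed round figures, not measurements of any particular UPS or server power supply:

```python
# Illustrative conversion-chain efficiencies; all numbers are assumed round
# figures, not measurements of any particular UPS or server power supply.
rectifier  = 0.92   # utility AC -> DC at the UPS / rectifier plant
inverter   = 0.92   # DC -> AC leaving a double-conversion UPS
server_psu = 0.75   # AC -> low-voltage DC inside each server
dc_dc      = 0.90   # DC -> low-voltage DC inside a DC-fed server

double_conversion = rectifier * inverter * server_psu  # AC -> DC -> AC -> DC
dc_distribution   = rectifier * dc_dc                  # AC -> DC once, then DC-DC

print(f"Double-conversion path: {double_conversion:.0%} of input power reaches the load")
print(f"DC-distribution path:   {dc_distribution:.0%} of input power reaches the load")
# Whatever doesn't reach the load is dissipated as heat the room must then remove.
```

Everything lost in those stages comes out as heat that the cooling plant has to carry away, which is part of why telco-style DC plants keep coming up in this thread despite the cabling headaches discussed elsewhere.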
          • Most internet data centers are not equipped with the cooling to handle customers with racks full of blades. A rack of HP BL40p blades puts out 55,000 BTU/hr. A tier 1 data center in which I've worked was designed to cool 5,000 BTU/hr per rack. While blades are pushed as a way to save space by increasing computing density, the amount of cooling per square foot of data center space, unless it has specifically been designed for blades, is rarely sufficient for cooling them. The aforementioned data center has mor
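For readers who think in watts rather than BTUs, converting the two figures above (1 W is roughly 3.412 BTU/hr) shows the size of the gap; this sketch simply restates the numbers from the comment:

```python
# Convert the quoted cooling figures from BTU/hr to kilowatts.
BTU_PER_HR_PER_WATT = 3.412

blade_rack_btu = 55_000   # a rack of HP BL40p blades, per the comment above
design_btu     = 5_000    # what that data center was designed to cool per rack

print(f"Blade rack heat load: {blade_rack_btu / BTU_PER_HR_PER_WATT / 1000:.1f} kW")
print(f"Design cooling limit: {design_btu / BTU_PER_HR_PER_WATT / 1000:.1f} kW")
# Roughly 16 kW of heat in a footprint provisioned for about 1.5 kW.
```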
          • UPS's do not run exclusively off DC. You are correct that they convert AC to DC, then route it through the battery strings, then invert it back to AC. While this does generate some heat, it is NOTHING compared to the server racks. I've worked in datacenter environments for several years now, and I can say that one of the biggest foes of efficient cooling is poor space planning.

            I've never seen people so difficult to communicate with as hardware planning people. You would be amazed at how
  • Aquafina... (Score:5, Funny)

    by vmcto ( 833771 ) * on Wednesday April 06, 2005 @10:29AM (#12153742) Homepage Journal
    Will probably be the first vendor to bring water into the datacenter... I believe I've seen evidence in some datacenters already.
    • Re:Aquafina... (Score:5, Interesting)

      by _Sharp'r_ ( 649297 ) <sharper@booksunderreview.com> on Wednesday April 06, 2005 @10:59AM (#12154148) Homepage Journal
      Well, since my last company had most of their servers in a data center room where we had two different floods, I'd say I have a pretty good idea which hosting company will be the first to bring water into the data center ...

      The first problem was snow that piled up outside, combined with clogged drains, which led to melting snow coming in through the wall where some pipes entered/exited. Since their layout was power in the floor and networking in the ladder racks, it's actually pretty amazing that a large portion of the power plugs and switches still worked, even while submerged in 6 inches of water.

      So about a year after they had taken care of that issue, a water pipe for a bathroom on the floor above burst, and of course the water came down right into our room in the hosting center. It wasn't so bad until the fluorescent lights in the ceiling filled up and started to overflow. We were able to limit the damage by throwing tarps over the tops of all the racks (there goes your cooling effect, though), but we still lost about $100K worth of server and switching equipment.

      So yeah, water in the data center? It's been done.
      • by Roadkills-R-Us ( 122219 ) on Wednesday April 06, 2005 @11:28AM (#12154488) Homepage
        Bob was changing backup tapes when something caught his eye at his feet. Looking through the holes in the cooling tile in the raised floor, something was moving, like a bundle of shiny snakes. Looking closer, we had 1/2" of water down there!

        We spent several hours with a tiny shop vac (we need a bigger one!) emptying the water and being thankful Bob had seen it before it got high enough to get into the power conduits.

        An A/C unit drain pan had a clogged drain, so the sump pump couldn't carry the water away. Whoever had the units installed had purchased water alarms, but *they had never been hooked up*. Now *that* was a brilliant move.

        We now have water alarms down there.

        Meanwhile, the room stays about 70 degrees, and the servers stay comfy, as do we. I like it that way,
  • by koreaman ( 835838 )
    it sounds like they're having some kind of gang warfare over the topic...what the hell?
  • by Saven Marek ( 739395 ) on Wednesday April 06, 2005 @10:30AM (#12153757)
    I've always wondered this. Why have duplication of a function across every single server box when it could all be done in the environment? All servers get electricity from the server room and all servers get their network from the server room, so why not have all servers get their cooling from 10F cooling in the server room?

    It makes sense!
    • Blade servers are a noble start. Less duplication of power supplies and network gear. I imagine the situation will continue to get better over time.

      Duplication is nice in some respects, more redundancy is a big plus. That and you actually have several useful machines when you finally tear it all down. Who's going to buy 3 blades off ebay when they can't afford the backplane to plug 'em into?
    • What I have never understood is why servers virtually always have AC power supplies. Yes, you can get NEBS(?) compliant servers that take DC, but that isn't really a general option so much as a completely distinct model line.

      UPSs take AC, turn it to DC, and charge their batteries. A separate system takes DC from the batteries, inverts it and sends out AC. (Good UPSs, anyway. Otherwise they are "battery backups", not uninterruptible.) Computer power supplies take AC and distribute DC inside the case. WTF?

      Why doesn't APC start selling ATX power supplies? Directly swap out AC power supplies, and have them plug into the DC-providing UPS and/or per-rack (or even per-room) power supplies.

      Electrical codes are a BS excuse. Even if you needed vendor-specific racks, a DC-providing rack is, so far as the fire marshal should be concerned, just a very large blade enclosure, which is clearly acceptable.

      I can't believe that I'm the first one to ever come up with this idea. So there must be some problem with it.... Some EE want to explain why this wouldn't work?
      • It would work fine, except that you still have to convert the DC into different DC voltages for the different devices. Another problem is the single point of failure: if your single AC-to-DC converter failed, everything would go down. And your battery backup in fact does not convert AC to DC and then back again; it has two separate paths, a direct AC path and another path through AC-to-DC conversion. Yes, it does have a DC-to-AC inverter, but that is only used during a power failure.
        • by warpSpeed ( 67927 ) <slashdot@fredcom.com> on Wednesday April 06, 2005 @11:28AM (#12154493) Homepage Journal
          if your single AC to DC converter failed everything would go down

          Assuming that you only have one converter. The nice thing about AC to DC conversion is you can have multiple AC converters all feeding the same DC voltage to a single set of conductors to run the DC power out to the machines. The converters can even be out of phase. If the power conversion system is designed right, any one or two converters can fail, be disconnected from the power feed, and the remaining good converters will pick up the slack.
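The redundancy described above (an N+1 or N+2 rectifier bank on a common DC bus) can be expressed as a simple capacity check. The module size, module count, and load below are invented for illustration:

```python
# Capacity check for an assumed bank of parallel rectifiers feeding one DC bus.
module_kw = 5.0    # assumed output of each rectifier module
modules = 5        # assumed number of modules installed
load_kw = 14.0     # assumed total DC load on the bus

for failed in range(0, 3):
    capacity = (modules - failed) * module_kw
    status = "OK" if capacity >= load_kw else "OVERLOAD"
    print(f"{failed} module(s) failed: {capacity:.0f} kW available for a {load_kw:.0f} kW load -> {status}")
```

Sizing the bank so the check still passes with two modules removed is what lets "any one or two converters" drop out without taking the load down, as the comment describes.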

      • by david.given ( 6740 ) <dg@cowlark.com> on Wednesday April 06, 2005 @10:59AM (#12154145) Homepage Journal
        Some EE want to explain why this wouldnt work?

        I'm not an EE, but it's something I've always wondered about. I don't have a datacentre, but I do have far too many computers: why does my machine room contain about fifteen wall warts, all producing slightly different DC voltages and plugged in to their various appliances via fifteen different non-standard connectors? Why not just have one low-voltage standard and have all these things plug into that?

        One possible reason is that power losses in the wiring depend on current (in fact on its square), not on voltage. By increasing the voltage, you can push the same amount of power down a wire using a smaller current, which limits losses. This is why power lines use very high voltages.

        This means that if you produce regulated 5V at one side of your datacentre, by the time it's reached the other side it's not 5V any more. But it should be easy to get round this by producing 6V and having DC regulators; they're very small and extremely efficient these days.

        However, I suspect that the main reason why this kind of thing isn't done is inertia. There's so much infrastructure in place for dealing with high-voltage AC supplies that you wouldn't get off the ground using DC.
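To put rough numbers on the loss argument above: for a fixed load, halving the distribution voltage doubles the current and quadruples the I²R loss in the cable. A minimal sketch, assuming an arbitrary 10 m run of 2.5 mm² copper and a 500 W load, both figures invented for illustration:

```python
# I^2 * R loss for delivering 500 W over an assumed 10 m run of 2.5 mm^2 copper.
RESISTIVITY_CU = 0.0172          # ohm * mm^2 / m
length_m, area_mm2 = 10, 2.5
r_loop = 2 * RESISTIVITY_CU * length_m / area_mm2   # out and back

for volts in (5, 12, 48, 230):
    amps = 500 / volts
    loss = amps ** 2 * r_loop
    print(f"{volts:>4} V: {amps:6.1f} A, {loss:8.1f} W lost in the cable")
# At 5 V the cable dissipates more than the load itself; at 230 V the loss is negligible.
```

This is also why the DC schemes mentioned in this thread keep the distribution voltage at -48 V or higher and do the final step-down right next to the load.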

        • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday April 06, 2005 @11:08AM (#12154245) Homepage Journal
          When you use DC power in a data center you run a higher voltage (24V or 48V, depending on equipment) and then it's regulated down to all the usual voltages. However, it's a lot easier to transmit AC power over distances. It's not exceptionally lossy to convert AC to DC or vice versa any more.
        • by SuperBanana ( 662181 ) on Wednesday April 06, 2005 @11:44AM (#12154691)
          This means that if you produce regulated 5V at one side of your datacentre, by the time it's reached the other side it's not 5V any more. But it should be easy to get round this by producing 6V and having DC regulators; they're very small and extremely efficient these days.

          ...aaaaaand where do you think that energy goes?

          [DING] "Heat, Alex" "Correct, for $100."

          ...aaaaaand what do you think that energy loss thanks to high current means?

          [DING] "Efficiency less than a modern AC->DC power supply" "Correct, for $200."

          Anyone participating in the "DC versus AC" discussion would do well to pick up a history book and read about Westinghouse and Edison. There's a reason we use AC everywhere except for very short hauls. Modern switching power supplies are very efficient and still the best choice for this sort of stuff.

          • However, the argument for putting the battery backup directly into a power supply is unaffected by this statement. Also, an ATX supply with a DC input, paired with an external UPS with DC output and a cord shorter than 10 meters, has minimal issues.
            Also, by having the conversion process take place in the UPS you are shifting 20% of the heat generated by a modern PC away from the enclosure and putting it into an external device. (Compare devices like the PS1 and PS2, which have an internal converte
          • >>[DING] "Heat, Alex" "Correct, for $100."

            Incorrect, you didn't phrase your answer in the form of a question. :)
      • by jhines ( 82154 ) <john@jhines.org> on Wednesday April 06, 2005 @11:06AM (#12154227) Homepage
        Telecom equipment runs off -48VDC, and the phone company uses big batteries as their UPS.

        It exists, it just is expensive.
      • by windex ( 92715 ) on Wednesday April 06, 2005 @11:14AM (#12154317) Homepage
        We use -48VDC, and it's a pain in the ass to find power supplies for modern hardware.

        Whenever we need something outside of normal ATX, we wind up paying custom development fees.

        No one makes DC to DC power supplies that are worth a damn, and the few vendors who do sell them (Sun, IBM, etc) charge an arm and a leg above and beyond what we pay to have them custom engineered.
      • What I have never understood is why servers virtually always have AC power supplies.

        I've wondered that, too. Every time the power is converted between AC and DC, or between voltage levels, there is some loss, so it's less efficient to do all of these conversions. I think having a UPS-oriented power supply would be a Good Thing, where you can hook up some external battery pack for the backup.

        At a previous job, we used some Unix machines that were completely fault tolerant, including backup processors, backup ne

      • What I have never understood is why servers virtually always have AC power supplies.

        For low voltages I don't see any problem with DC but AFAICR at higher voltages DC is more dangerous - a shock from an AC supply causes you to let go quickly, a shock from a DC supply (ISTR) causes the muscles in your hand to contract so that you can't let go.

        However, these days we have so many low voltage DC systems (even in homes) that running a 12 or 18v DC supply around your office/home/datacentre sounds like a good i
      • by Detritus ( 11846 ) on Wednesday April 06, 2005 @11:42AM (#12154661) Homepage
        Due to the insane current and voltage regulation requirements of today's motherboards, the power supply for the CPU and associated chips has to be physically close and tightly integrated into the motherboard. You can't just pipe in regulated DC voltages from an external power supply directly to the chips on the motherboard. In your typical PC, the power supply (the metal box) provides bulk regulated DC power. Some stuff can run directly from the power supply. Components with demanding power requirements, like the CPU, are powered by dc-to-dc converters on the motherboard. These take DC power from the power supply, convert it to high-frequency AC, and back to regulated DC.

        The general rule is that stricter requirements for power supply performance can only be met by decreasing the physical distance between the power supply and the load. The trend towards lower supply voltages and higher currents makes the problem worse.

        AC power wiring is cheap and well understood. It doesn't require huge bus bars or custom components. It is the most economical way to distribute electrical energy.

        Once you reach the box level, you want to convert the AC to low-voltage DC. Confining the high-voltage AC to the power supply means that the rest of the box doesn't have to deal with the electrical safety issues associated with high-voltage AC. The wiring between the power supply and load is short enough to provide decent quality DC power at a reasonable cost. Those components that require higher quality power can use the DC power from the power supply as the energy source for local dc-to-dc converters.

        You could feed the box with -48 VDC like the telephone company does with its hardware. You would still end up with about the same amount of hardware inside the box to provide all of the regulated DC voltages needed to make it work. Cost would increase because of the lower production volumes associated with non-standard power supplies.

        In the end, it boils down to economics. DC power distribution costs more money and it doesn't meet the performance requirements of modern hardware. The days of racks full of relays, powered directly from battery banks, are long gone.
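One way to see the bus bar point concretely is to compare the current needed to feed the same rack at -48 VDC and at 208 VAC three-phase. The rack load and power factor below are assumptions, not figures from the comment:

```python
# Current needed to feed an assumed 10.5 kW rack (thirty 350 W 1U servers).
import math

rack_watts = 30 * 350

dc_current = rack_watts / 48                              # single -48 VDC feed
ac_current = rack_watts / (math.sqrt(3) * 208 * 0.9)      # 208 V three-phase, assumed 0.9 power factor

print(f"-48 VDC feed:        {dc_current:.0f} A")
print(f"208 VAC three-phase: {ac_current:.0f} A per phase")
# ~220 A of DC calls for heavy bus bar and careful connections;
# ~32 A per phase is ordinary branch-circuit wiring.
```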

    • Data centers function on commodity hardware, and at the end of the day the cost of cooling the room would be bearable; the cost of replacing all the cooling systems on the servers, however, would not, and would need varied solutions for the thousands of types of boards likely to be in service. It's not a homogeneous environment by any stretch of the imagination. Additionally, this whole thing makes the bad assumption that just by cooling the room, you cool the components that need it. Hard disks don't function at low
  • Cooler Rooms (Score:3, Insightful)

    by forlornhope ( 688722 ) on Wednesday April 06, 2005 @10:30AM (#12153758) Homepage
    I like cooler rooms. Especially for a large number of servers. It's more efficient.
    • By what metric? If you choose power and cooling costs, then cooler servers win. If you have servers that consume less power, you don't have to pay to cool them back down. It seems that cooler machines would be more efficient. Furthermore, targeted, specialized cooling solutions can be made more power efficient than general-purpose room air-cooling solutions.
  • Err, well, both? (Score:4, Insightful)

    by Pants75 ( 708191 ) on Wednesday April 06, 2005 @10:30AM (#12153763)
    The servers are the heat source and the cool room air is the cooling mechanism? Yes?

    So take your pick. To make the servers cooler, either buy new more efficient servers or buy a whacking great air con unit.

    Since the servers are the things that actually do the work, I'd just get feck off servers and a feck off air-con unit to keep it all happy!

    Everyone's a winner!

  • by green pizza ( 159161 ) on Wednesday April 06, 2005 @10:30AM (#12153764) Homepage
    Unlike most companies that are considering going back to water cooling, Cray has always used water cooling for their big iron. In fact, the only air-cooled Crays are the lower-end or smaller-configured systems.

    All hail the Cray X1E !
  • Lots of A/C [chaosmint.com]. (more where that came from [chaosmint.com]).
  • Both (Score:3, Interesting)

    by turtled ( 845180 ) on Wednesday April 06, 2005 @10:31AM (#12153769)
    I agree, both solutions would help. Our room is a nice cool 62.5 degrees. Best conditions to work in!

    Cooler rooms also keep others out... we get a lot of "it's so cold," and they leave. That's golden =)
    • I agree, both solutions would help. Our room is a nice cool 62.5 degrees. Best conditions to work in!

      We keep ours at 73 degrees, about 2 degrees warmer than the rest of the building. We did the 60-degree thing for a while, but it required quite a bit more electricity to maintain that temp. The servers work fine at 80 degrees, but 73 is more comfortable and provides a little more cushion.
  • by havaloc ( 50551 ) * on Wednesday April 06, 2005 @10:32AM (#12153788) Homepage
    ...you won't need as much cooling in the room. Easy enough. This will save a ton of money in the long run, not to mention the environment and all that.
    • That depends entirely on how you get cooler servers. You could set up a complete liquid cooling system per server and have it be less efficient than just cooling the room that they're all in. Implementation details make or break the assumption you've stated.
  • That might keep the odd CPU or two cool for a while...
    • I realise this was a joke, as obviously liquid oxygen is going to be a fire hazard. But you can't go pouring liquid nitrogen on it either, as you'll have problems with frost forming from moisture in the atmosphere.
  • by icebrrrg ( 123867 ) on Wednesday April 06, 2005 @10:32AM (#12153800) Homepage
    "Roger Schmidt, chief thermodynamics engineer at IBM, [recently] admitted that, while everyone knows servers are one day going to be water-cooled, no one wants to be first, believing that if their competitors still claim they are fine with air cooling, the guy who goes to water cooling will rapidly drop back in sales until others admit it is necessary."

    You know, sometimes the market actually rewards innovation. Tough to believe, I know, and this isn't innovation, it's common sense, but manufacturers are afraid of this? Come on, people, the technocenti have been doing this for their home servers for a long, long time; let's bring it into the corporate world.
  • by Eil ( 82413 ) on Wednesday April 06, 2005 @10:33AM (#12153806) Homepage Journal

    Ideally, you should have a cool server and a cool room. The two work in combination. If you have a hot room, then the server isn't going to be able to cool itself very well even with the best heatsinks and well-placed fans. Yes, you could use water cooling, but there are other important bits inside of a machine besides whatever the water touches. But a cool room does no good if your servers aren't set up with proper cooling themselves.
  • There is already water in the datacentre where I work. The site is a converted leisure centre, and has a water-sprinkler fire system. The first whiff of smoke in that place and the entire server room is toast. A Halon system is regarded as too expensive. Seriously.
    • Our NOC uses dry-pipe water sprinklers. The pipes are pressurized with air and hold no water. If a fire starts they do not open immediately; it has to get hot enough to melt a release valve. If the release valve melts, then the water passes into the pipes and only then can be released by personnel into the room.
    • Can't use halon for a lot of reasons. Halon is corrosive to the components, dangerous if you have people around, and systems cannot be recharged once discharged. This is why it makes sense to have a real emergency backup plan to run your critical systems off-site in the event of a problem in your data center.
      Also, with water and proper electrical controls, you can shut down the servers quick with one big switch and leave them off until they dry out. You won't lose 100% of the equipment and insurance wi
  • First thing.. (Score:4, Insightful)

    by Anti Frozt ( 655515 ) <chris.buffett@gmai l . c om> on Wednesday April 06, 2005 @10:33AM (#12153816)

    That comes to mind is that it will probably be vastly cheaper to cool a rackmount specifically than to lower the ambient temperature of an entire room to the point that it has the same effect. However, I'm not entirely sure how well this scales to large server farms and multiple rackmounts.

    I think the best option would be to look at having the hardware produce less heat in the first place. This would definitely simplify the rumbling these engineers are engaged in.

  • by hazee ( 728152 ) on Wednesday April 06, 2005 @10:35AM (#12153840)
    Water cooling? Pah! Why not take a leaf out of Seymour Cray's book - build a sodding great swimming pool, fill it with non-conductive freon, then just lob the whole computer in.

    Also has the added benefit that you can see at a glance which processors are working the hardest by looking to see which are producing the most bubbles.

    Wonder if you could introduce fish into the tank and make a feature of it? If you could find any freon-breathing fish, that is...
    • by Raul654 ( 453029 ) on Wednesday April 06, 2005 @10:54AM (#12154089) Homepage
      About 4 years ago, I was touring the US National Supercomputing Center in San Diego. One of the supercomputers had a clear plexiglass side where you could see inside, and it had running water and even a waterfall. Mind you, this 'water' was running directly over the electronic components. So the guy doing the tour said that it wasn't really water, but a chemical compound similar to water, but very nonconductive. He told us that it costs $10,000 per barrel, and that he always gets questions about what happens if you drink it. "Well, we're not sure what happens if you drink it, but we figure one of two things will happen. It could be toxic, and you drink it and die. Or, it could be nontoxic, and when our financial guys find out you were drinking their $10,000-a-barrel water, they'll kill you."
    • The old Crays used a freon-like fluorocarbon which was liquid at room temperature. As these are the same fluids which were proposed for liquid breathing systems (like in The Abyss), your swimming pool should be able to support air-breathing life (hamsters?), never mind fish - so long as you oxygenated it, of course. It's about $400 a gallon. An Olympic-sized pool contains ~2 million litres, so it would cost ~160 million dollars to fill, unfortunately.
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Wednesday April 06, 2005 @10:35AM (#12153846)
    Comment removed based on user account deletion
    • that you should stop the problem where it starts. Cool the servers, then the room won't get hot (duh).

      Easy to say, but are you willing to give up fast servers for cool servers? We don't have the technology to make fast and cool microchips.

  • I had a vendor bring water into my data center once.....

    JUST ONCE.
  • Swiftech is my guess as the first vendor who will offer widespread, professional water-cooling solutions for 1U rack-mount servers, using the Laing DDC pump rebadged as the MCP350 [swiftnets.com]. I don't think any of the other big players in that industry currently have the products or expertise to pull it off in the near future.
    • Swiftech is my guess as the first vendor who will offer widespread, professional water-cooling solutions for 1U rack-mount servers, using the Laing DDC pump rebadged as the MCP350. I don't think any of the other big players in that industry currently have the products or expertise to pull it off in the near future.

      Yeah, I bet Dell or HP can't figure out how to do that.

      *rolls eyes*

      Actually, given some of the recent decisions made by HP, they probably couldn't!
  • by doc_traig ( 453913 ) on Wednesday April 06, 2005 @10:37AM (#12153871) Homepage Journal

    The sign on the door clearly states, "No Food or Drink". Of course, shirts are still optional.

  • Cooler servers... (Score:5, Insightful)

    by Aphrika ( 756248 ) on Wednesday April 06, 2005 @10:40AM (#12153904)
    From experience of aircon failing/breaking.

    At least if a server fails, it's one unit that'll either get hot or shut down, which means a high degree of business continuity.

    When your aircon goes down you're in a whole world of hurt. Ours went in a power cut, yet the servers stayed on because of the UPSes - hence the room temperature rose and the alarms went off. Nothing was damaged, but it made us realise that it's important to have both, otherwise your redundancy and failover plans expand into the world of facilities and operations, rather than staying within the IT realm.
  • The costs of improving data centers to provide more or colder air are more than just the cost of building out more square feet of data center space.

    Just because HP sells a 42U rack doesn't mean you have to cram blades into all 42Us. It's cheaper to spread the heat load across a larger area than to figure out how to put 1500 CFM out of a floor tile so the top of a blade rack gets air.

    There are studies by the Uptime Institute that say that 50% of hardware failures happen in the top 25% of rack space because the top
    • Just because HP sells a 42U rack doesn't mean you have to cram blades into all 42Us. It's cheaper to spread the heat load across a larger area than to figure out how to put 1500 CFM out of a floor tile so the top of a blade rack gets air.

      Agreed, to a point.

      Google did a study about datacenter power density a couple of years ago and, IIRC, concluded that blades were not the most cost-efficient solution (for them) because of the increased environmental conditioning required...

      Wish I could find that link.
  • See the power consumption chart on this page [gamepc.com]. Buy the right CPUs and heat is much less of a problem. (Yes, I know, PowerPC is better in this regard, but if you want to run x86...)
  • by cbelt3 ( 741637 ) <cbelt AT yahoo DOT com> on Wednesday April 06, 2005 @10:48AM (#12154005) Journal
    OK, here's a concept.
    If data center location isn't such a problem as long as we have high speed data lines, locate the data center someplace nice and cold.
    Like Manitoba, or Minnesota, or Alaska, or Siberia. Heat the work area with the flow from the data center.
    Hire your local population to maintain the data center.
    Profit !
  • by onyxruby ( 118189 ) <onyxruby@comcast.net> on Wednesday April 06, 2005 @10:50AM (#12154033)
    Water-cooled servers have been available from some vendors [directron.com] for a little while. You can find rack-mount water-cooled [kooltronic.com] gear pretty easily. Too much damage is done too quickly when you don't have cooling. I have worked in environments where, if a server room was allowed to get up to 74 F / 23.3 C and an HVAC contractor wasn't already on the way, there would be hell to pay.

    There really isn't a question of if it will become widespread. Overclocking sites have had more than a few visits from Intel and AMD over the years. It's an inevitable problem with an inevitable solution. The only question is how long until water cooling becomes more popular. Heat concerns have had people clamoring for Pentium M processors for rack-mount gear for a while as well. It's a reasonably speedy CPU that handles heat fairly well. It would work very nicely in rack-mount gear, but motherboards that will take one are fairly rare.

    As for server rooms, they will continue to be air conditioned for as long as all of your server room equipment is in there. Even if you found a magical solution for servers, you would still have RAID arrays, switches, routers and the like all in the same room. Server rooms are well known by HVAC people as requiring cooling. Most HVAC vendors will prioritize a failed server room HVAC over anything but medical. They know damn well that anybody who has an air conditioner designed to work in the middle of January in Minnesota or North Dakota isn't using the cooling for comfort.

  • by gosand ( 234100 ) on Wednesday April 06, 2005 @10:54AM (#12154086)
    1. Open server farm in the Northwest Territories

    2. Open the windows

    3. Profit!!!

  • Data centers with heat problems usually fall into three categories: those with inadequate cooling capacity, those with inadequate cooling distribution, and those with unrealistic equipment densities.

    However, I often find people have misconceptions: they think they have a heat problem, but in reality they do not. One must measure the air temperature at the inlet to the servers, not the exhaust. If the inlet air meets the manufacturer's specifications, there is no problem, despite the fact that it's uncomf
  • Powersupplies (Score:3, Insightful)

    by iamthemoog ( 410374 ) on Wednesday April 06, 2005 @11:00AM (#12154156) Homepage
    In a rack of 1U units, does each 1U slab take 240 volts (or 115 or whatever) individually, and have its own PSU?

    I've often thought it might be nicer if there could be one power supply for a whole room of PCs, for example. This could be placed somewhere outside of the room and cooled there, with appropriate UPS backup too.

    12 and 5 volt lines then feed each PC - no noisy or hot PSU per machine... Peace and quiet, and a bit cooler too...

  • by dafz1 ( 604262 ) on Wednesday April 06, 2005 @11:01AM (#12154160)
    We're currently going through re-evaluating our cooling needs in our server room. The answer we came up with is that we have to buy a bigger a/c unit.

    Unfortunately, a couple of times per year, the chilled water to our a/c unit gets shut off, and our servers are left to fry. The better answer is to have machines which run cooler. If they lose a/c, they won't fry. However, replacing clusters isn't cheap... and I don't think most people ask, "which one's going to run the coolest?" when they are going to buy one.

    Does anyone have a link to a page that has grossly generalized heat numbers for certain processor families in certain case configurations? (I realize these numbers aren't going to be anywhere near exact, but it would be a starting point.)
  • Hmmm... (Score:5, Funny)

    by biglig2 ( 89374 ) on Wednesday April 06, 2005 @11:01AM (#12154164) Homepage Journal
    PHB: Dear god, that server is actually red hot!

    SA: Yes, but notice that the room is lovely and cool.

    PHB: That's all right then. By the way, what's delaying that upgrade to Windows 2003?

    SA: Every time we put the CD in the drive it melts. We think it's going to be fixed in the next service pack.
  • by freelunch ( 258011 ) on Wednesday April 06, 2005 @11:08AM (#12154243)
    When SGI added BX2 nodes to NASA's Columbia system, the standard air cooling was inadequate. They were forced to do a quick cooling change [designnews.com] that added water cooling. Some would call the change a kludge.

    More detail on the change, and cooling in general, can be found in this interview [designnews.com] with the SGI designers who dealt with the problem.

  • by miller60 ( 554835 ) on Wednesday April 06, 2005 @12:10PM (#12155044) Homepage
    Liebert and Sanmina have been selling blade server cabinets that use chilled water for at least three years. Vendors and data center operators have been wrestling with the heat loads generated by blade servers since 2001 [carrierhotels.com], and the dilemma of how to cool high-density "hot spots" has caused many tech companies to wait on buying blades to replace their larger servers. That's changing now, driven by the need to save costs with more efficient use of data center space.

    The industry has taken a two-pronged approach. Equipment vendors have been developing cabinets with built-in cooling, while design consultants try to reconfigure raised-floor data center space to circulate air more efficiently. The problem usually isn't cooling the air, but directing the cooled air through the cabinet properly.

    There was an excellent discussion of this problem [blogworks.net] last year at Data Center World in Las Vegas. As enterprises finally start to consolidate servers and adopt blade servers (which were overhyped for years), many are finding their data centers simply aren't designed to handle the "hot spots" created by cabinets chock full of blades. Facilities with lower ceilings are particularly difficult to reconfigure. The additional cooling demand usually means higher raised floors, which leaves less space to properly recirculate the air above the cabinets. Some data center engineers refer to this as "irreversibility" - data center design problems that can't be corrected in the physical space available. This was less of an issue a few years back, when there was tons of decent quality data center space available for a song from bankrupt telcos and colo companies. But companies who built their own data centers just before blades became the rage are finding this a problem.

  • Cooler servers (Score:3, Informative)

    by TrevorB ( 57780 ) on Wednesday April 06, 2005 @01:01PM (#12155839) Homepage
    Because rarely is the AC ever plugged into the UPS (takes too much power) and most server rooms die during a power failure not due to the UPS running out of power, but because the room overheats and the servers all shut down.

    Server rooms can turn into tropical saunas pretty fast. During a power failure we have to get into the office within 40 minutes to start powering down less important servers (try telling management that *all* the servers aren't mission critical, or worse yet, getting them to fork out $$$$$ for a bigger UPS).
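A crude heat-balance estimate shows why the window is measured in minutes. The room size and load below are assumptions, and real rooms warm more slowly than this air-only figure because walls, floors and the equipment itself absorb heat:

```python
# Air-only temperature rise in an assumed small server room with the A/C off.
room_volume_m3 = 60          # assumed room size
heat_load_w = 10_000         # assumed 10 kW of servers still running on UPS
air_density = 1.2            # kg/m^3
air_specific_heat = 1005     # J/(kg*K)

thermal_mass = room_volume_m3 * air_density * air_specific_heat  # joules per kelvin
rise_per_minute = heat_load_w * 60 / thermal_mass

print(f"Air alone warms about {rise_per_minute:.1f} C per minute")
# In practice the building fabric slows this to something closer to a degree
# every few minutes, which is why a 30-60 minute response window is typical.
```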
  • by sk8king ( 573108 ) on Wednesday April 06, 2005 @01:31PM (#12156254)
    We need to do everything we can to reduce the power required for all our electronic gear. More powerful servers [computation-wise] require more power [electricity-wise], which then requires more power [electricity-wise] for the air conditioning. If we could get servers that somehow consumed less power [a lot less] we would win on two fronts.
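The "two fronts" can be made concrete using the cooling plant's coefficient of performance (COP): every watt not dissipated in a server is also a watt the air conditioning never has to pump out. The COP value below is an assumption chosen only for illustration:

```python
# Total electricity saved per watt saved at the server, assuming a cooling
# plant coefficient of performance (COP) of 3 (purely illustrative).
cop = 3.0
watts_saved_at_server = 100.0

cooling_watts_saved = watts_saved_at_server / cop
total_saved = watts_saved_at_server + cooling_watts_saved

print(f"Server draw reduced by: {watts_saved_at_server:.0f} W")
print(f"Cooling power avoided:  {cooling_watts_saved:.0f} W")
print(f"Total utility savings:  {total_saved:.0f} W  (~{total_saved / watts_saved_at_server:.2f}x)")
```

Under an assumption like COP 3, every watt trimmed from the servers saves roughly 1.3 watts at the meter, which is the compounding effect the comment above describes.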

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...