IT Technology

Cooler Servers or Cooler Rooms?

mstansberry writes "Analysts, experts and engineers rumble over which is more important in curbing server heat issues; cooler rooms or cooler servers. And who will be the first vendor to bring water back into the data center?"
This discussion has been archived. No new comments can be posted.

  • by green pizza ( 159161 ) on Wednesday April 06, 2005 @10:30AM (#12153764) Homepage
    Unlike most companies that are considering going back to water cooling, Cray has always used water cooling for their big iron. In fact, the only air cooled Crays are the lower end or smaller configured systems.

    All hail the Cray X1E !
  • by sammykrupa ( 828537 ) <sam@theplaceforitall.com> on Wednesday April 06, 2005 @10:30AM (#12153767) Homepage Journal
    Lots of A/C [chaosmint.com]. (more where that came from [chaosmint.com]).
  • by 1zenerdiode ( 777004 ) on Wednesday April 06, 2005 @10:38AM (#12153882)

    I thought Crays originally used Fluorinert(tm), which is definitely *not* water... *spark* *spark* *fizzle*

    All hail non-conductive fluorochemicals!

  • by onyxruby ( 118189 ) <onyxrubyNO@SPAMcomcast.net> on Wednesday April 06, 2005 @10:50AM (#12154033)
    Water cooled servers have been out for a little while by some vendors [directron.com]. You can find rack mount water cooled [kooltronic.com] gear pretty easily. Too much damage is done too quickly when you don't have cooling. I have worked in environments where if a server room was allowed to get up to 74 F /23.3 C and an HVAC contractor wasn't already on the way there would be hell to pay.

    There really isn't a question of whether it will become widespread. Overclocking sites have had more than a few visits from Intel and AMD over the years. It's an inevitable problem with an inevitable solution. The only question is how long until water cooling becomes more popular. Heat concerns have had people clamoring for Pentium M processors in rack mount gear for a while as well. It's a reasonably fast CPU that handles heat fairly well. It would work very nicely in rack mount gear, but motherboards that will take one are fairly rare.

    As for server rooms, they will continue to be air conditioned for as long as all of your server room equipment is in there. Even if you found a magical solution for servers, you'd still have RAID arrays, switches, routers and the like all in the same room. Server rooms are well known to HVAC people as requiring cooling. Most HVAC vendors will prioritize a failed server room HVAC over anything but medical. They know damn well that anybody who has an air conditioner designed to work in the middle of January in Minnesota or North Dakota isn't using the cooling for comfort.

  • by Anonymous Coward on Wednesday April 06, 2005 @10:51AM (#12154054)
    The problem is that it ends up getting contaminated and becomes a rather better conductor.

  • by standbypowerguy ( 698339 ) on Wednesday April 06, 2005 @10:57AM (#12154119) Homepage
    Data centers with heat problems usually fall into three categories: those with inadequate cooling capacity, those with inadequate cooling distribution, and those with unrealistic equipment densities.

    However, I often find people have misconceptions: they think they have a heat problem, but in reality they do not. One must measure the air temperature at the inlet to the servers, not the exhaust. If the inlet air meets the manufacturer's specifications, there is no problem, despite the fact that it's uncomfortably hot in the exhaust aisle.

    "Hot spots" can often be corrected by rebalancing, which is the science of redirecting the supply air proportionately to the heat loads in the space. Any good maintenance firm that knows data centers will offer rebalancing services.

    If you really do have a heat load problem, i.e. more load than capacity, as evidenced by excessive temperatures throughout the space, consult a mechanical engineer that specializes in data centers.
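    A minimal sketch of the "measure at the server inlet, not the exhaust" advice above. The sensor readings and the 25 C inlet limit are made-up examples, not real vendor specifications; in practice the limit comes from each manufacturer's datasheet.

        # Rough sketch: flag racks whose *inlet* air exceeds an assumed limit.
        # The readings and the 25 C figure are hypothetical examples only.
        ASSUMED_MAX_INLET_C = 25.0

        inlet_readings_c = {          # hypothetical sensor data, degrees Celsius
            "rack-01": 21.4,
            "rack-02": 24.9,
            "rack-03": 27.3,          # exhaust heat being recirculated, perhaps
        }

        for rack, inlet_c in sorted(inlet_readings_c.items()):
            status = "OK" if inlet_c <= ASSUMED_MAX_INLET_C else "OVER SPEC - rebalance airflow"
            print(f"{rack}: inlet {inlet_c:.1f} C -> {status}")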
  • Re:Mabe a mix? (Score:2, Informative)

    by burgeswe ( 873550 ) on Wednesday April 06, 2005 @11:03AM (#12154184)
    I came in a few weeks ago and found my office full of water; the water main into the boiler had cracked and formed a small lake.
    "**laughs** oh, I know, we'll stick the IT guy in the lake"
    Do you have any suggestions on how to better protect equipment in those types of situations?
  • by freelunch ( 258011 ) on Wednesday April 06, 2005 @11:08AM (#12154243)
    When SGI added BX2 nodes to NASA's Columbia system, the standard air cooling was inadequate. They were forced to do a quick cooling change [designnews.com] that added water cooling. Some would call the change a kludge.

    More detail on the change, and cooling in general, can be found in this interview [designnews.com] with the SGI designers who dealt with the problem.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday April 06, 2005 @11:08AM (#12154245) Homepage Journal
    When you use DC power in a data center you run a relatively high voltage (24V or 48V, depending on the equipment) and it's then regulated down to all the usual voltages. However, it's a lot easier to transmit AC power over distances. Converting AC to DC or vice versa isn't exceptionally lossy any more.
  • by standbypowerguy ( 698339 ) on Wednesday April 06, 2005 @11:20AM (#12154384) Homepage
    Because DC can't be transformed easily the way AC can. Distributing DC at low voltages requires higher current to achieve the same power (kW), and thus a significantly larger wire size. Distributing DC at higher voltages is also inefficient, because each end-use device would then require a DC/DC converter to step down to the lower voltages. The name DC/DC converter is something of a misnomer; most of them use high-frequency switching (AC) as an intermediate step.
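    A back-of-the-envelope sketch of the wire-size point above: for the same power, lower distribution voltage means more current and far more copper to keep the resistive drop acceptable. The 10 kW load, 30 m run, and 3% drop limit are illustrative assumptions, not figures from the discussion.

        # Current and copper cross-section needed to deliver the same power at
        # different distribution voltages over an assumed 30 m run, keeping the
        # resistive drop under 3%. All figures are illustrative assumptions.
        RHO_COPPER = 1.68e-8      # ohm*metre, resistivity of copper
        LOAD_W     = 10_000.0     # assumed load
        RUN_M      = 30.0         # one-way cable length (loop length is double)
        MAX_DROP   = 0.03         # allow a 3% voltage drop

        for volts in (12.0, 48.0, 230.0):
            amps     = LOAD_W / volts
            max_r    = (volts * MAX_DROP) / amps          # total loop resistance allowed
            area_m2  = RHO_COPPER * (2 * RUN_M) / max_r   # A = rho * L / R
            area_mm2 = area_m2 * 1e6
            print(f"{volts:6.0f} V: {amps:7.1f} A, needs ~{area_mm2:8.1f} mm^2 of copper")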
  • by FireFury03 ( 653718 ) <slashdot&nexusuk,org> on Wednesday April 06, 2005 @11:39AM (#12154627) Homepage
    What I have never understood is why servers virtually always have AC power supplies.

    For low voltages I don't see any problem with DC, but AFAICR at higher voltages DC is more dangerous - a shock from an AC supply causes you to let go quickly, while a shock from a DC supply (ISTR) causes the muscles in your hand to contract so that you can't let go.

    However, these days we have so many low-voltage DC systems (even in homes) that running a 12 or 18V DC supply around your office/home/datacentre sounds like a good idea. You still have to convert it to the voltages you need (usually 12V, 5V, 3.3V, and maybe a few others), but I can't help thinking that building a DC-DC converter for these low voltages would probably be cheaper and easier than a full 240V AC switched-mode PSU for each device. (Low-power devices can even get away with using cheapo linear regulators.)

    Of course I'd still like some power regulation in each device since I don't want a power spike in the low voltage circuit blowing every device.
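    A quick sketch of why "cheapo linear regulators" are only reasonable for low-power devices, as the comment above suggests: a linear regulator burns the full voltage difference as heat, while a switching converter's loss scales with its efficiency. The 12 V input, 90% switcher efficiency, and load currents are assumed example figures.

        # Heat wasted stepping an assumed 12 V rail down to 5 V, comparing a
        # linear regulator with a switching DC-DC converter. Figures are illustrative.
        V_IN, V_OUT  = 12.0, 5.0
        SWITCHER_EFF = 0.90          # assumed efficiency of a decent buck converter

        for load_a in (0.1, 1.0, 5.0):
            p_out    = V_OUT * load_a
            linear_w = (V_IN - V_OUT) * load_a          # linear reg drops the difference as heat
            switch_w = p_out / SWITCHER_EFF - p_out     # switcher loses (1 - eff) of its input
            print(f"{load_a:4.1f} A load: linear wastes {linear_w:5.2f} W, "
                  f"switcher wastes {switch_w:5.2f} W")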
  • by Detritus ( 11846 ) on Wednesday April 06, 2005 @11:42AM (#12154661) Homepage
    Due to the insane current and voltage regulation requirements of today's motherboards, the power supply for the CPU and associated chips has to be physically close and tightly integrated into the motherboard. You can't just pipe in regulated DC voltages from an external power supply directly to the chips on the motherboard. In your typical PC, the power supply (the metal box) provides bulk regulated DC power. Some stuff can run directly from the power supply. Components with demanding power requirements, like the CPU, are powered by dc-to-dc converters on the motherboard. These take DC power from the power supply, convert it to high-frequency AC, and back to regulated DC.

    The general rule is that stricter requirements for power supply performance can only be met by decreasing the physical distance between the power supply and the load. The trend towards lower supply voltages and higher currents makes the problem worse.

    AC power wiring is cheap and well understood. It doesn't require huge bus bars or custom components. It is the most economical way to distribute electrical energy.

    Once you reach the box level, you want to convert the AC to low-voltage DC. Confining the high-voltage AC to the power supply means that the rest of the box doesn't have to deal with the electrical safety issues associated with high-voltage AC. The wiring between the power supply and load is short enough to provide decent quality DC power at a reasonable cost. Those components that require higher quality power can use the DC power from the power supply as the energy source for local dc-to-dc converters.

    You could feed the box with -48 VDC like the telephone company does with its hardware. You would still end up with about the same amount of hardware inside the box to provide all of the regulated DC voltages needed to make it work. Cost would increase because of the lower production volumes associated with non-standard power supplies.

    In the end, it boils down to economics. DC power distribution costs more money and it doesn't meet the performance requirements of modern hardware. The days of racks full of relays, powered directly from battery banks, are long gone.
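    The on-board dc-to-dc converters described above are typically buck (step-down) converters. A minimal sketch of the ideal relationships, with assumed example voltages and currents (a 12 V input feeding a low-voltage, high-current rail); real converters add losses on top of this.

        # Ideal buck (step-down) converter relationships for an on-board
        # dc-to-dc converter. The voltages and currents are assumed examples.
        def ideal_buck(v_in, v_out, i_out):
            duty = v_out / v_in            # ideal duty cycle D = Vout / Vin
            i_in = v_out * i_out / v_in    # lossless: input power equals output power
            return duty, i_in

        duty, i_in = ideal_buck(v_in=12.0, v_out=1.2, i_out=80.0)   # e.g. a CPU core rail
        print(f"duty cycle ~{duty:.0%}, input current ~{i_in:.1f} A from the 12 V rail")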

  • by SuperBanana ( 662181 ) on Wednesday April 06, 2005 @11:44AM (#12154691)
    This means that if you produce regulated 5V at one side of your datacentre, by the time it's reached the other side it's not 5V any more. But it should be easy to get round this by producing 6V and having DC regulators; they're very small and extremely efficient these days.

    ...aaaaaand where do you think that energy goes?

    [DING] "Heat, Alex" "Correct, for $100."

    ...aaaaaand what do you think that energy loss thanks to high current means?

    [DING] "Efficiency less than a modern AC->DC power supply" "Correct, for $200."

    Anyone participating in the "DC versus AC" discussion would do well to pick up a history book and read about Westinghouse and Edison. There's a reason we use AC everywhere except for very short hauls. Modern switching power supplies are very efficient and still the best choice for this sort of stuff.
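    A rough sketch of the argument above: pushing regulated 5-6 V across a room at high current wastes more in the cable and the local regulator than a per-box switching supply does. Every efficiency and resistance figure here is an illustrative assumption, and the central rectifier's own losses are ignored (which only flatters the low-voltage option).

        # Two ways to power a 5 V, 400 W load across a room. All figures assumed.
        P_LOAD = 400.0                     # watts delivered to the load at 5 V

        # Option A: per-box AC->DC switching supply at an assumed 88% efficiency.
        psu_eff = 0.88
        wall_a  = P_LOAD / psu_eff

        # Option B: central 6 V bus, a long cable run, then a local 6 V -> 5 V regulator.
        cable_r    = 0.01                  # ohms of loop resistance, assumed
        reg_eff    = 0.85                  # assumed efficiency of the local regulator
        p_into_reg = P_LOAD / reg_eff
        i_bus      = p_into_reg / 6.0      # current drawn from the 6 V bus
        cable_loss = i_bus**2 * cable_r    # I^2 R heating in the cable
        wall_b     = p_into_reg + cable_loss   # central rectifier losses ignored

        print(f"per-box AC PSU draws ~{wall_a:.0f} W; central 6 V bus draws ~{wall_b:.0f} W "
              f"({cable_loss:.0f} W lost in the cable alone)")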

  • by miller60 ( 554835 ) on Wednesday April 06, 2005 @12:10PM (#12155044) Homepage
    Liebert and Sanmina have been selling blade server cabinets that use chilled water for at least three years. Vendors and data center operators have been wrestling with the heat loads generated by blade servers since 2001 [carrierhotels.com], and the dilemma of how to cool high-density "hot spots" has caused many tech companies to wait on buying blades to replace their larger servers. That's changing now, driven by the need to save costs with more efficient use of data center space.

    The industry has taken a two-pronged approach. Equipment vendors have been developing cabinets with built-in cooling, while design consultants try to reconfigure raised-floor data center space to circulate air more efficiently. The problem usually isn't cooling the air, but directing the cooled air through the cabinet properly.

    There was an excellent discussion of this problem [blogworks.net] last year at Data Center World in Las Vegas. As enterprises finally start to consolidate servers and adopt blade servers (which were overhyped for years), many are finding their data centers simply aren't designed to handle the "hot spots" created by cabinets chock full of blades. Facilities with lower ceilings are particularly difficult to reconfigure. The additional cooling demand usually means higher raised floors, which leaves less space to properly recirculate the air above the cabinets. Some data center engineers refer to this as "irreversibility" - data center design problems that can't be corrected in the physical space available. This was less of an issue a few years back, when there was tons of decent quality data center space available for a song from bankrupt telcos and colo companies. But companies that built their own data centers just before blades became the rage are finding this a problem.

  • by Anonymous Coward on Wednesday April 06, 2005 @12:21PM (#12155218)
    Yes, you can get NEBS(?)-compliant servers that take DC, but that isn't really a general option; it's a completely distinct model line.

    Dell (and others) offer 48V DC power supplies for most of their rackmount servers.

    UPSs take AC, turn it to DC, and charge their batteries. A separate system takes DC from the batteries, inverts it and sends out AC. (Good UPSs, anyway. Otherwise they are "battery backups", not uninterruptible.) Computer power supplies take AC and distribute DC inside the case. WTF?

    If you need to go any significant distance with low voltage wires, you need fat copper wire to avoid unacceptable voltage drops.

    Why doesn't APC start selling ATX power supplies?

    The market is very small. If you want a DC powersupply, it's probably for a rackmount server that doesn't use ATX, and you probably would have bought it with 48V DC in the first place.

    While 48V DC UPSes are cheaper & simpler than 120V AC, the heat generated by the UPS won't vary much.
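    The double conversion described above (AC to DC for the batteries, DC back to AC, then AC back to DC inside each server PSU) multiplies losses at every stage. A quick sketch of the stack-up; each stage efficiency below is an assumed, illustrative number rather than a measured one.

        # Loss stack-up of the conversion chain described above.
        # Every stage efficiency is an assumed, illustrative figure.
        stages = {
            "UPS rectifier (AC -> DC)": 0.94,
            "UPS inverter  (DC -> AC)": 0.94,
            "server PSU    (AC -> DC)": 0.88,
        }

        overall = 1.0
        for name, eff in stages.items():
            overall *= eff
            print(f"{name}: {eff:.0%}")

        print(f"overall: {overall:.1%} -> roughly {1000 * (1 - overall):.0f} W of every kW "
              f"drawn from the wall ends up as conversion heat")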
  • by kilodelta ( 843627 ) on Wednesday April 06, 2005 @12:50PM (#12155689) Homepage
    When you consider that circuit switched phone carriers have been doing this for years you then understand that the fire codes are baloney.

    Phone switching and networking gear use -48VDC. If you've ever gotten to tour a switching facility the battery area is a treat.

    Many of those fire codes were developed during the initial rollout of electrical power. More than likely, much of the resistance to its widespread use is that with DC the current is constant; there isn't even a fraction of a break. AC at least gives you a very short window in which to 'break' from the connection.

    Good design and practices would eliminate the perceived dangers, but I still think you'd catch a lot of static for it from the authorities.
  • Cooler servers (Score:3, Informative)

    by TrevorB ( 57780 ) on Wednesday April 06, 2005 @01:01PM (#12155839) Homepage
    Because rarely is the AC ever plugged into the UPS (takes too much power) and most server rooms die during a power failure not due to the UPS running out of power, but because the room overheats and the servers all shut down.

    Server rooms can turn into tropical saunas pretty fast. During a power failure we have to get into the office within 40 minutes to start powering down less important servers (try telling management that *all* the servers aren't mission critical, or worse yet, getting them to fork out $$$$$ for a bigger UPS).
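    A crude sketch of why a small room heats up so quickly once the cooling stops: the IT load keeps dumping heat into a fixed thermal mass. The 15 kW load, room size, and equipment mass below are assumptions chosen only to show the order of magnitude, not figures from the comment above.

        # Crude estimate of how fast an assumed 15 kW IT load warms a small,
        # sealed server room once cooling stops. Every number is an assumption.
        AIR_MASS_KG   = 90 * 1.2         # ~90 m^3 room, air density ~1.2 kg/m^3
        AIR_CP        = 1005.0           # J/(kg*K)
        METAL_MASS_KG = 2000.0           # racks, chassis, etc. soaking up heat
        METAL_CP      = 500.0            # J/(kg*K), roughly steel
        HEAT_LOAD_W   = 15_000.0

        heat_capacity_j_per_k = AIR_MASS_KG * AIR_CP + METAL_MASS_KG * METAL_CP
        degrees_per_minute    = HEAT_LOAD_W * 60 / heat_capacity_j_per_k

        print(f"~{degrees_per_minute:.1f} C per minute")
        print(f"~{10 / degrees_per_minute:.0f} minutes to climb 10 C above the setpoint")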
  • by lcsjk ( 143581 ) on Wednesday April 06, 2005 @05:07PM (#12158944)
    Well, I am an EE and I have spent more years designing power supplies and backup systems than I care to talk about. As a power supply and systems designer, I have also spent a lot of time designing cooling systems at the board level and at the system level.

    1. The wall warts are there so that international safety codes can be met easily. They provide the necessary safety isolation between the 120/220 volts from the wall and the device they are powering. It takes a lot of time, money, and testing to meet these international safety standards, and if you build the power supply inside your equipment, then you deal with having high voltage inside your enclosure, and a nightmare of conditions you have to meet and test.

    2. Once you have selected a wall wart to provide the "safety low voltage" of less than 40.2 volts peak, you can re-regulate to lower voltages as needed. And since many of these wall warts use switching technology, the efficiency can be quite high (80-90 percent). Those that use only a transformer for safety isolation have 95-99 percent efficiency. Yes, one large wall wart could do the job of multiple small ones, but that would mean someone would have to design a custom power supply for those products, and that gets back to the original problem.

    3. The computer power supply provides this same function. That is why it is sealed, and has only low voltage outputs. Even the power entry plug and fuse are part of the power supply. Look on the label and you will see all kinds of international safety logos. The processor power of 3.3 volts and other similar voltages is conditioned directly at the processor, since there are unique design problems that occur as a processor changes from 100 percent usage to sleep mode.


    Addressing the cooling issue: (I will try to use simple explanations for those who do not design in this area.)

    For any IC, the faster it goes, the hotter it gets. If a processor can be slowed or stopped when it is not doing anything, the power can be greatly decreased. However, as the article states, most people prefer to get better performance instead of being concerned with heat.


    The problem is the same whether you are discussing the silicon chip inside a semiconductor package or the server box inside a room. The power generated by the chip has to be removed from the room to avoid overheating. Assume a processor chip dissipates 200 watts. The chip designer has to find a way to move 200 watts from the silicon chip to the metal or ceramic surface of the package. The board designer has to find a way to move the 200 watts from the package surface to a heat sink. The system (box) designer has to find a way to move the 200 watts from the heat sink to the surrounding air and get the heat out of the box and into the room air. The building designer has to find a way to move the 200 watts from the room to the outside.
    As you can see, it is not a matter of room cooling vs. processor cooling; each person in the design sequence has to remove heat. The only person who can really make a difference is the processor designer, who can drop to lower power any time full speed is not needed. Secondarily, the programmers can tell the processor when the program needs only slow speed, such as when one is typing or filling in values on a spreadsheet or database, and thus decrease the initial power. However, even if the processor is running at slow speed, the other designers in the heat chain still have to design for maximum heat removal unless the processor can tell the rest of the system its actual power used.
    Now, give yourself 50 of these processors in one large room and you are generating 10,000 watts of heat. That's like having seven of the 1500 watt bathroom heaters running continuously.
    Since your overall power supply is only 80 percent efficient, the total power jumps up to 12,500 watts. Now add in the power for the hard drives, fans, monitors, etc. and you start getting really warm air. That's how it works! Now how do you solve it?
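    The arithmetic from the comment above as a small script. The processor count, per-chip wattage, and 80% efficiency are the commenter's example figures; the allowance for drives, fans, and other gear is an assumed round number, and the BTU and cooling-ton conversions are standard constants.

        # 50 processors at 200 W each, fed by supplies that are only 80% efficient,
        # plus an assumed allowance for drives, fans, switches, and monitors.
        CPU_COUNT      = 50
        WATTS_PER_CPU  = 200.0
        PSU_EFFICIENCY = 0.80
        OTHER_LOAD_W   = 3_000.0          # assumed: disks, fans, switches, monitors

        cpu_heat_w   = CPU_COUNT * WATTS_PER_CPU      # 10,000 W at the chips
        wall_power_w = cpu_heat_w / PSU_EFFICIENCY    # 12,500 W drawn from the wall
        total_heat_w = wall_power_w + OTHER_LOAD_W    # essentially all of it becomes heat

        print(f"CPU heat:        {cpu_heat_w:,.0f} W")
        print(f"At 80% PSU eff:  {wall_power_w:,.0f} W")
        print(f"With other gear: {total_heat_w:,.0f} W "
              f"= {total_heat_w * 3.412:,.0f} BTU/hr "
              f"= {total_heat_w * 3.412 / 12000:.1f} tons of cooling")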

  • by illtud ( 115152 ) on Thursday April 07, 2005 @06:43AM (#12163911)
    Of the many server rooms I've been in, the most effective cooling I've seen has been to enclose the racks into sealed cabinets (adding a cheapish layer of physical security as well, by locking the things) and then piping cooled air directly into the top of the cabinets.

    You sure? Cool air in the *top*? All the ones I've seen (and all the rack equipment manufacturers' accessories) draw cool air from under the raised floor and pull it *up* through the rack. This is because the hot air your systems are exhausting is already rising, and pulling the cool air up and exhausting it at the top makes a lot more sense!
