Cooler Servers or Cooler Rooms?
mstansberry writes "Analysts, experts and engineers rumble over which is more important in curbing server heat issues: cooler rooms or cooler servers. And who will be the first vendor to bring water back into the data center?"
Cray still has water cooling! (Score:5, Informative)
All hail the Cray X1E !
Virginia Tech's Super Mac Used.... (Score:2, Informative)
Re:Cray still has water cooling! (Score:4, Informative)
I thought Crays originally used Fluorinert(tm), which is definitely *not* water... *spark* *spark* *fizzle*
All hail non-conductive fluorochemicals!
Cooling is not an option (Score:3, Informative)
There really isn't a question of whether it will become widespread. Overclocking sites have had more than a few visits from Intel and AMD over the years. It's an inevitable problem with an inevitable solution; the only question is how long until water cooling becomes more popular. Heat concerns have also had people clamoring for Pentium M processors in rack-mount gear for a while. It's a reasonably fast CPU that handles heat fairly well, and it would work very nicely in rack-mount gear, but motherboards that will take one are fairly rare.
As for server rooms, they will continue to be air conditioned for as long as all of your server room equipment is in there. Even if you found a magical solution for servers, you would still have RAID arrays, switches, routers and the like all in the same room. Server rooms are well known to HVAC people as requiring cooling. Most HVAC vendors will prioritize a failed server room HVAC over anything but medical work; they know damn well that anybody with an air conditioner designed to run in the middle of January in Minnesota or North Dakota isn't using the cooling for comfort.
Re:Cray still has water cooling! (Score:2, Informative)
Heat Problems and Misconceptions (Score:2, Informative)
However, I often find people have misconceptions: they think they have a heat problem when in reality they do not. One must measure the air temperature at the inlet to the servers, not at the exhaust. If the inlet air meets the manufacturer's specifications, there is no problem, even though it's uncomfortably hot in the exhaust aisle.
"Hot spots" can often be corrected by rebalancing, which is the science of redirecting the supply air proportionately to the heat loads in the space. Any good maintenance firm that knows data centers will offer rebalancing services.
If you really do have a heat load problem (i.e. more load than cooling capacity, as evidenced by excessive temperatures throughout the space), consult a mechanical engineer who specializes in data centers.
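The inlet-not-exhaust rule above is easy to turn into a quick check. A minimal sketch, with made-up rack names and an assumed inlet limit; substitute your hardware vendor's actual inlet-air specification:

```python
# Hypothetical sketch: is the "heat problem" real?
# The spec limit and readings below are illustrative, not real data.

INLET_SPEC_MAX_C = 27.0  # assumed manufacturer inlet-air limit

inlet_readings_c = {
    "rack-01": 24.5,
    "rack-02": 26.1,
    "rack-03": 29.3,  # this rack's inlet is out of spec
}

# Only inlet temperatures matter; a hot exhaust aisle alone is not a problem.
hot_spots = {rack: t for rack, t in inlet_readings_c.items()
             if t > INLET_SPEC_MAX_C}

if hot_spots:
    print("Inlet out of spec (candidates for rebalancing):", hot_spots)
else:
    print("All inlets within spec, however hot the exhaust aisle feels.")
```

Racks flagged here are the natural candidates for the rebalancing service described above.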
Re:Maybe a mix? (Score:2, Informative)
"**laughs** oh, I know, we'll stick the IT guy in the lake"
you have any suggestions on how to better protect equipment in those type situations?
SGI had to do this at NASA (Score:3, Informative)
More detail on the change, and cooling in general, can be found in this interview [designnews.com] with the SGI designers who dealt with the problem.
Re:well I've always wondered this (Score:5, Informative)
Re:well I've always wondered this (Score:2, Informative)
Re:well I've always wondered this (Score:3, Informative)
For low voltages I don't see any problem with DC, but AFAICR at higher voltages DC is more dangerous: a shock from an AC supply tends to make you let go quickly, while a shock from a DC supply (ISTR) causes the muscles in your hand to contract so that you can't let go.
However, these days we have so many low-voltage DC systems (even in homes) that running a 12 or 18 V DC supply around your office/home/datacentre sounds like a good idea. You still have to convert it to the voltages you need (usually 12 V, 5 V, 3.3 V, and maybe a few others), but I can't help thinking that a DC-DC converter for these low voltages would be cheaper and easier to build than a full 240 V AC switched-mode PSU for each device. (Low-power devices can even get away with cheapo linear regulators.)
Of course I'd still like some power regulation in each device since I don't want a power spike in the low voltage circuit blowing every device.
Re:well I've always wondered this (Score:5, Informative)
The general rule is that stricter requirements for power supply performance can only be met by decreasing the physical distance between the power supply and the load. The trend towards lower supply voltages and higher currents makes the problem worse.
AC power wiring is cheap and well understood. It doesn't require huge bus bars or custom components. It is the most economical way to distribute electrical energy.
Once you reach the box level, you want to convert the AC to low-voltage DC. Confining the high-voltage AC to the power supply means that the rest of the box doesn't have to deal with the electrical safety issues associated with high-voltage AC. The wiring between the power supply and load is short enough to provide decent quality DC power at a reasonable cost. Those components that require higher quality power can use the DC power from the power supply as the energy source for local dc-to-dc converters.
You could feed the box with -48 VDC like the telephone company does with its hardware. You would still end up with about the same amount of hardware inside the box to provide all of the regulated DC voltages needed to make it work. Cost would increase because of the lower production volumes associated with non-standard power supplies.
In the end, it boils down to economics. DC power distribution costs more money and it doesn't meet the performance requirements of modern hardware. The days of racks full of relays, powered directly from battery banks, are long gone.
and that voltage loss = ? (Score:4, Informative)
...aaaaaand where do you think that energy goes?
[DING] "Heat, Alex" "Correct, for $100."
...aaaaaand what do you think that energy loss from the high current means?
[DING] "Efficiency less than a modern AC->DC power supply" "Correct, for $200."
Anyone participating in the "DC versus AC" discussion would do well to pick up a history book and read about Westinghouse and Edison. There's a reason we use AC everywhere except for very short hauls. Modern switching power supplies are very efficient and are still the best choice for this sort of thing.
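The "where does that energy go" quip above is just P_loss = I²R. A back-of-the-envelope sketch (the wire resistance is an assumed round number, not a measured one) showing why halving the distribution voltage roughly quadruples the heat dumped into the wiring:

```python
# For a fixed load power, lower voltage means higher current (I = P / V),
# and resistive loss in the wiring grows with the square of the current.
# R below is an assumed round-trip wire resistance, purely illustrative.

def wiring_loss_w(load_w, volts, wire_ohms):
    current = load_w / volts           # I = P / V
    return current ** 2 * wire_ohms    # P_loss = I^2 * R

R = 0.1       # ohms of round-trip wiring (assumed)
load = 500.0  # watts delivered to one server

for v in (240.0, 48.0, 12.0):
    loss = wiring_loss_w(load, v, R)
    print(f"{v:5.0f} V: {loss:6.1f} W lost as heat in the wiring")
```

At 240 V the wiring loss is a fraction of a watt; at 12 V it is well over a hundred watts for the same load, which is the Westinghouse-vs-Edison argument in miniature.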
Water-chilled Cabinets Already In Use for Blades (Score:5, Informative)
The industry has taken a two-pronged approach. Equipment vendors have been developing cabinets with built-in cooling, while design consultants try to reconfigure raised-floor data center space to circulate air more efficiently. The problem usually isn't cooling the air, but directing the cooled air through the cabinet properly.
There was an excellent discussion of this problem [blogworks.net] last year at Data Center World in Las Vegas. As enterprises finally start to consolidate servers and adopt blade servers (which were overhyped for years), many are finding their data centers simply aren't designed to handle the "hot spots" created by cabinets chock full of blades. Facilities with lower ceilings are particularly difficult to reconfigure. The additional cooling demand usually means higher raised floors, which leaves less space to properly recirculate the air above the cabinets. Some data center engineers refer to this as "irreversibility": data center design problems that can't be corrected in the physical space available. This was less of an issue a few years back, when there was tons of decent quality data center space available for a song from bankrupt telcos and colo companies. But companies that built their own data centers just before blades became the rage are finding this a problem.
Re:well I've always wondered this (Score:1, Informative)
Dell (and others) offer 48V DC power supplies for most of their rackmount servers.
UPSs take AC, turn it to DC, and charge their batteries. A separate system takes DC from the batteries, inverts it, and sends out AC. (Good UPSs, anyway; otherwise they are "battery backups", not uninterruptible.) Computer power supplies take AC and distribute DC inside the case. WTF?
If you need to go any significant distance with low voltage wires, you need fat copper wire to avoid unacceptable voltage drops.
Why doesn't APC start selling ATX power supplies?
The market is very small. If you want a DC power supply, it's probably for a rackmount server that doesn't use ATX, and you probably would have bought it with 48 V DC in the first place.
While 48 V DC UPSes are cheaper and simpler than 120 V AC ones, the heat generated by the UPS won't vary much.
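The "fat copper" point above can be made concrete with the standard R = ρL/A formula. A rough sketch with illustrative numbers (the run length, current, and drop budget are assumptions, not values from any code or standard):

```python
# How much copper cross-section does a low-voltage feed need to keep the
# voltage drop acceptable? Illustrative numbers only.

RHO_COPPER = 1.68e-8  # ohm-meters, resistivity of copper

def min_cross_section_mm2(current_a, run_m, max_drop_v):
    round_trip_m = 2 * run_m                  # current flows out and back
    max_r = max_drop_v / current_a            # R = V / I allowed in the wire
    area_m2 = RHO_COPPER * round_trip_m / max_r
    return area_m2 * 1e6                      # square meters -> mm^2

# A 12 V feed carrying 30 A over a 10 m run, with a 3% (0.36 V) drop budget:
print(min_cross_section_mm2(30, 10, 0.36))
```

That works out to about 28 mm² of copper for a single modest server load, which is why nobody wants to run 12 V around a whole room.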
Re:well I've always wondered this (Score:2, Informative)
Phone switching and networking gear use -48VDC. If you've ever gotten to tour a switching facility the battery area is a treat.
Many of those fire codes were developed during the initial rollout of electrical power. More than likely, much of the resistance to widespread DC is that DC current is constant: there isn't even a fraction of a break. AC at least crosses zero twice each cycle, giving you a very short window to 'break' from the connection.
Good design and practices would eliminate the perceived dangers, but I still think you'd catch a lot of static for it from the authorities.
Cooler servers (Score:3, Informative)
Server rooms can turn into tropical saunas pretty fast. During a power failure we have to get into the office within 40 minutes to start powering down the less important servers. (Try telling management that *all* the servers aren't mission critical, or worse yet, getting them to fork out $$$$$ for a bigger UPS.)
Re:well I've always wondered this (Score:2, Informative)
1. The wall warts are there so that international safety codes can be met easily. They provide the necessary safety isolation between the 120/220 volts from the wall and the device they are powering. It takes a lot of time, money and testing to meet these international safety standards, and if you build the power supply inside your equipment, then you have high voltage inside your enclosure and a nightmare of conditions to meet and test.
2. Once you have selected a wall wart to provide the "safety low voltage" of less than 40.2 volts peak, you can re-regulate to lower voltages as needed. And since many of these wall warts use switching technology, the efficiency can be quite high (80-90 percent); those that use only a transformer for safety isolation have 95-99 percent efficiency. Yes, one large wall wart could do the job of several small ones, but that would mean someone would have to design a custom power supply for those products, and that gets back to the original problem.
3. The computer power supply provides this same function. That is why it is sealed and has only low-voltage outputs. Even the power entry plug and fuse are part of the power supply. Look on the label and you will see all kinds of international safety logos. The processor power of 3.3 volts and other similar voltages is conditioned directly at the processor, since there are unique design problems that occur as a processor changes from 100 percent usage to sleep mode.
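The efficiency figures in point 2 translate directly into waste heat inside the supply. A quick sketch using those same illustrative percentages:

```python
# How much of the power drawn from the wall ends up as heat in the supply
# itself? Efficiencies below are the illustrative ranges from the comment.

def supply_heat_w(load_w, efficiency):
    input_w = load_w / efficiency
    return input_w - load_w  # everything not delivered becomes heat

for name, eff in [("switching wall wart", 0.85),
                  ("transformer-only isolation", 0.97)]:
    heat = supply_heat_w(10.0, eff)
    print(f"{name}: {heat:.2f} W of heat for a 10 W load")
```

Small per-device losses, but they are exactly the kind of overhead that multiplies across a rack.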
Addressing the cooling issue: (I will try to use simple explanations for those who do not design in this area.)
For any IC, the faster it goes, the hotter it gets. If a processor can be slowed or stopped when it is not doing anything, the power can be greatly decreased. However, as the article states, most people prefer to get better performance instead of being concerned with heat.
The problem is the same whether you are discussing the silicon chip inside a semiconductor package or the server box inside a room: the power generated by the chip has to be removed from the room to avoid overheating. Assume a processor chip dissipates 200 watts. The chip designer has to move those 200 watts from the silicon to the metal or ceramic surface of the package. The board designer has to move them from the package surface to a heat sink. The system (box) designer has to move them from the heatsink into the surrounding air and out of the box into the room. The building designer has to move them from the room to the outside. As you can see, it is not a matter of room cooling vs. processor cooling; each person in the design sequence has to remove heat.

The only person who can really make a difference is the processor designer, who can shut down to lower power whenever full speed is not needed. Secondarily, the programmers can tell the processor when the program needs only low speed, such as when one is typing or filling in values on a spreadsheet or database, and thus decrease the initial power. However, even if the processor is running at low speed, the other designers in the heat chain still have to design for maximum heat removal, unless the processor can tell the rest of the system its actual power draw.
Now, put 50 of these processors in one large room and you are generating 10,000 watts of heat. That's like having seven of the 1500-watt bathroom heaters running continuously.

Since your overall power supply is only 80 percent efficient, the total power drawn jumps to 12,500 watts. Now add in the power for the hard drives, fans, monitors, etc., and you start getting really warm air. That's how it works! Now how do you solve it?
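The arithmetic in that comment, spelled out (the 200 W per processor and 80 percent supply efficiency are the figures assumed in the text, not measurements):

```python
# Room heat load from the comment's assumed numbers.

PROC_W = 200          # watts dissipated per processor (assumed above)
N_PROCS = 50          # processors in the room
PSU_EFFICIENCY = 0.80 # overall power supply efficiency (assumed above)
HEATER_W = 1500       # a typical bathroom heater, for scale

chip_heat = PROC_W * N_PROCS             # 10,000 W from the chips alone
wall_power = chip_heat / PSU_EFFICIENCY  # 12,500 W drawn at the wall;
                                         # essentially all of it ends up as heat
print(wall_power)
print(chip_heat / HEATER_W)              # roughly how many heaters that is
```

Note that the supply inefficiency adds its share on top of the chip load before drives, fans, and monitors are even counted.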
Re:Cool the racks instead... (Score:3, Informative)
You sure? Cool air in the *top*? All the ones I've seen (and all the rack equipment manufacturers' accessories) pull cool air from under the raised floor and pull it *up* through the rack. This is because the hot air your systems exhaust is already rising, so pulling the cool air up and exhausting at the top makes a lot more sense!