Cooler Servers or Cooler Rooms?
mstansberry writes "Analysts, experts and engineers rumble over which is more important in curbing server heat issues: cooler rooms or cooler servers. And who will be the first vendor to bring water back into the data center?"
Why not both? (Score:4, Insightful)
Re:Why not both? (Score:5, Insightful)
The question isn't whether it's good to keep both cool. The question is, which makes more financial sense? Cooling the whole room? Spending the money to purchase servers with a low heat-to-computation ratio?
Probably a combination. But to say that people should equip every computer with VIA ultra-low-power chips _and_ freeze the server room is silly.
Re:Why not both? (Score:4, Insightful)
In that case, I would say: cool the room. The room is forever (on this timescale), the servers maybe change every 5 years. Of course, start by NOT choosing the hottest server too, but I would invest in the room.
Also I don't expect room cooling techniques to improve significantly in the next few years. Servers hopefully will.
Re:Why not both? (Score:4, Insightful)
I think you're right. That's the way we do it, but - (there's always a but) some cabinets still get damn hot depending on what's in the rack. Sometimes you need to do spot cooling as well, or put in bigger fans to keep the equipment closer to ambient.
I think starting with a cool room is the most cost effective way though - not to mention it makes work "O.K." in August...
Re:Why not both? (Score:4, Interesting)
I also agree with the guy in the article: liquid cooling in the server room is going to happen eventually. I got to see the difference a simple watercooling system made on a P4 3.02GHz Extreme Edition chip, stuffed in the same case with a GeForce FX 5950. Even with some serious fans the case was borderline overheating in games. Part of the problem being that the room it was in wasn't kept that cool, and the owner had it in a cabinet in his desk (it is what that cabinet was designed for). He dropped a liquid cooling system into it, and now the thing is always nice and frosty. And even with the jolts and jostling of taking the system to several LAN parties, the liquid cooling system is still leak free and rock solid. His experience has actually made me consider one for my own next system. For a server, where the system usually sits still long enough to collect a measurable amount of dust, water cooling may be a very good choice. If it's installed properly the likelihood of leaks is low, and the performance can be very good. Heck, I can see it now, our server rooms will eventually have a rack or two devoted entirely to the radiators for the liquid cooling systems of servers, which run hot enough to form plasma.
Re:Why not both? (Score:3, Interesting)
It seems more likely to me that the radiators would be placed outside. I could foresee water-cooled racks that come with centre-mounted warm- and cool-water manifolds [toolbase.org] plumbed to high-flow lines to take all the water to one big radiator outside...
Or, probably easier to manage, a 2-4U centre-mounted unit with the manifold and pump
Cool the racks instead... (Score:3, Interesting)
If you buy your own racks to put gear in, then getting these things is easy, if you buy whole racks from a vendor with gear in it already (custom systems type of thing), then the thing comes in a cabinet which usually has some kind of a fan/
Re:Cool the racks instead... (Score:3, Informative)
You sure? Cool air in the *top*? All the ones I've seen (and all the rack equipment manufacturers accessories) pull cool air from under the raised floor and pull it *up* through the rack. This is because the hot air your systems are exhaus
Re:Why not both? (Score:4, Insightful)
You don't need to cool the whole room - you could just cool the cabinets. Most cabinets have doors and sides, an open bottom and fans at the top. So you can blow cold air up the inside of the datacabinet (which is what most datacentres do anyway) and take the air from the top to recycle it with reasonably minimal air (and hence heat) exchange with the rest of the room.
Re:Why not both? (Score:3, Insightful)
This doesn't seem like an either-or situation or a large research question. A cost and reliability analysis should determine what is better for each individual setup.
Re:Why not both? (Score:3, Insightful)
Re:Why not both? (Score:3, Insightful)
If I have to be working on the server locally in some fashion, I would rather not be boiling or freezing.
The best is possibly a nicely air conditioned room with
Why not neither? Remove the power supplies. (Score:3, Interesting)
Reduce power consumption
Reduce heat in the server room
Improve reliability
Re:Why not neither? Remove the power supplies. (Score:3, Insightful)
As for power consumption, I don't see how converting the power outside the rack uses less power than converting it inside the rack. And it won't improve reliability, since each server will still need a power supply; it will just be a DC-DC one. I don't think you can (reasonably) run a 500-watt power line at 12 volts. Not to mention that you need more than one voltage.
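A quick back-of-the-envelope sketch of why a 500-watt, 12-volt feed is unreasonable over any distance; the cable resistance here is an assumed example, not a figure from the thread:

```python
# Back-of-the-envelope check on running 500 W down a 12 V line.
# The cable resistance is an assumed example for a modest run, not a measurement.
power_w = 500.0
cable_resistance_ohm = 0.05  # assumed round-trip resistance of the run

for volts in (12.0, 120.0):
    amps = power_w / volts                      # I = P / V
    loss_w = amps ** 2 * cable_resistance_ohm   # resistive loss, I^2 * R
    print(f"{volts:5.0f} V feed: {amps:5.1f} A, ~{loss_w:5.1f} W lost in the cable")

# ~42 A and ~87 W of cable loss at 12 V, versus ~4 A and under 1 W at 120 V.
```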
Re:Why not neither? Remove the power supplies. (Score:5, Interesting)
Personally I think BOTH the power & the cooling need to be addressed. I've worked in datacenters where cabinets are filled with 30+ 1U servers. Not only is it a royal pain in the ass to deal with all the power cabling for 30 individual servers, but the heat blowing out the back of those cabinets is enough to melt the polar ice caps...
I've also worked on blade servers like IBM's BladeCenter. Their next generation of blades will require even more power than the current one does. Trying to convince a datacenter to run four 208-volt feeds to handle just a pair of BladeCenters (28 blades) is like pulling teeth. They can't comprehend that much power in such a small footprint. A rack full of BladeCenters could easily require eight 208-volt feeds, whereas a rack full of 1Us may only need three or four 110-volt feeds.
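For a sense of scale, here is a rough power-density comparison; the per-server wattages and the chassis count are my own assumptions, not numbers from the post:

```python
# Rough power-density comparison; wattages and chassis count are my assumptions.
blades_per_chassis = 14
chassis_per_rack = 4        # assumed: four BladeCenter-class chassis in one rack
watts_per_blade = 300.0     # assumed draw per blade under load

servers_1u_per_rack = 30    # matches the "30+ 1U servers" above
watts_per_1u = 250.0        # assumed draw per 1U server

blade_rack_kw = blades_per_chassis * chassis_per_rack * watts_per_blade / 1000
one_u_rack_kw = servers_1u_per_rack * watts_per_1u / 1000
print(f"blade rack: ~{blade_rack_kw:.1f} kW   1U rack: ~{one_u_rack_kw:.1f} kW")
# ~16.8 kW vs ~7.5 kW -- more than double the power (and heat) in the same
# footprint, which is why the blade rack wants several high-amperage 208 V feeds.
```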
Of course it's like pulling teeth... (Score:3, Interesting)
Re:Why not neither? Remove the power supplies. (Score:3, Insightful)
I've never seen people so difficult to communicate with as hardware planning people. You would be amazed at how
Aquafina... (Score:5, Funny)
Re:Aquafina... (Score:5, Interesting)
The first problem was snow that piled up outside, combined with clogged drains, which led to melting snow coming in through the wall where some pipes entered/exited. Since their layout was power in the floor and networking in the ladder racks, it's actually pretty amazing that a large portion of the power plugs and switches still worked, even while being submerged in 6 inches of water.
So about a year after they had taken care of that issue, a water pipe for a bathroom on the floor above burst, and of course the water came down right into our room in the hosting center. It wasn't so bad until the fluorescent lights in the ceiling filled up and started to overflow. We were able to limit the damage by throwing tarps over the tops of all the racks (there goes your cooling effect, though), but we still lost about $100K worth of server and switching equipment.
So yeah, water in the data center? It's been done.
The A/C company brought our water (Score:5, Interesting)
We spent several hours with a tiny shop vac (we need a bigger one!) emptying the water and being thankful Bob had seen it before it got high enough to get into the power conduits.
An A/C unit drain pan had a clogged drain, so the sump pump couldn't carry the water away. Whoever had the units installed had purchased water alarms, but *they had never been hooked up*. Now *that* was a brilliant move.
We now have water alarms down there.
Meanwhile, the room stays about 70 degrees, and the servers stay comfy, as do we. I like it that way,
uh, "rumble" (Score:2, Funny)
Re:uh, "rumble" (Score:4, Funny)
well I've always wondered this (Score:5, Insightful)
It makes sense!
Re:well I've always wondered this (Score:3, Interesting)
Duplication is nice in some respects, more redundancy is a big plus. That and you actually have several useful machines when you finally tear it all down. Who's going to buy 3 blades off ebay when they can't afford the backplane to plug 'em into?
Re:well I've always wondered this (Score:5, Insightful)
UPSs take AC, turn it to DC, and charge their batteries. A separate system takes DC from the batteries, inverts it, and sends out AC. (Good UPSs, anyway. Otherwise they are "battery backups", not uninterruptible.) Computer power supplies take AC and distribute DC inside the case. WTF?
Why doesn't APC start selling ATX power supplies? Directly swap out AC power supplies, have them plug into the DC-providing UPS and/or per-rack (or even per-room) power supplies.
Electrical codes are a BS excuse. Even if you needed vendor-specific racks, a DC-providing rack is, so far as the fire marshal should be concerned, just a very large blade enclosure, and those are clearly acceptable.
I can't believe that I'm the first one to ever come up with this idea, so there must be some problem with it.... Some EE want to explain why this wouldn't work?
Re:well I've always wondered this (Score:2, Interesting)
Re:well I've always wondered this (Score:4, Insightful)
Assuming that you only have one converter. The nice thing about AC to DC conversion is you can have multiple AC converters all feeding the same DC voltage to a single set of conductors to run the DC power out to the machines. The converters can even be out of phase. If the power conversion system is designed right, any one or two converters can fail, be disconnected from the power feed, and the remaining good converters will pick up the slack.
Re:well I've always wondered this (Score:5, Interesting)
I'm not an EE, but it's something I've always wondered about. I don't have a datacentre, but I do have far too many computers: why does my machine room contain about fifteen wall warts, all producing slightly different DC voltages and plugged in to their various appliances via fifteen different non-standard connectors? Why not just have one low-voltage standard and have all these things plug into that?
One possible reason is that (IIRC) resistive losses in the wiring scale with the square of the current, not with the voltage. By increasing the voltage, you can push the same amount of energy down a wire using a smaller current, which limits losses. This is why power lines use very high voltages.
This means that if you produce regulated 5V at one side of your datacentre, by the time it's reached the other side it's not 5V any more. But it should be easy to get round this by producing 6V and having DC regulators; they're very small and extremely efficient these days.
However, I suspect that the main reason why this kind of thing isn't done is inertia. There's so much infrastructure in place for dealing with high-voltage AC supplies that you wouldn't get off the ground using DC.
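To make the voltage-drop point above concrete, a rough sketch with assumed numbers (the load and run resistance are illustrative, not measurements):

```python
# Voltage-drop sketch for a long low-voltage DC run (load and resistance assumed).
supply_v = 5.0
load_w = 200.0               # assumed load at the far end of the room
run_resistance_ohm = 0.02    # assumed round-trip resistance of the long run

current_a = load_w / supply_v              # ~40 A, treating 5 V as nominal
drop_v = current_a * run_resistance_ohm    # V = I * R
print(f"{current_a:.0f} A draw, {drop_v:.2f} V dropped, "
      f"{supply_v - drop_v:.2f} V left at the load")
# ~4.2 V at the far end -- well outside a 5 V +/-5% spec, hence the idea of
# shipping something higher (6 V, 12 V, 48 V) and regulating down locally.
```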
Re:well I've always wondered this (Score:5, Informative)
and that voltage loss = ? (Score:4, Informative)
...aaaaaand where do you think that energy goes?
[DING] "Heat, Alex" "Correct, for $100."
...aaaaaand what do you think that energy loss thanks to high current means?
[DING] "Efficiency less than a modern AC->DC power supply" "Correct, for $200."
Anyone participating in the "DC versus AC" discussion would do well to pick up a history book and read about Westinghouse and Edison. There's a reason we use AC everywhere except for very short hauls. Modern switching power supplies are very efficient and still the best choice for this sort of stuff.
Re:and that voltage loss = ? (Score:3, Insightful)
Also, by having the conversion process take place in the UPS you are shifting 20% of the heat generated by a modern PC away from the enclosure, and putting it into an external device. (compare devices like the PS1 and PS2 which have an internal converte
Re:and that voltage loss = ? (Score:3, Funny)
Incorrect, you didn't phrase your answer in the form of a question.
Re:Ma Bell has been doing this for years (Score:4, Insightful)
It exists, it's just expensive.
Re:well I've always wondered this (Score:5, Interesting)
Whenever we need something outside of normal ATX, we wind up paying custom development fees.
No one makes DC to DC power supplies that are worth a damn, and the few vendors who do sell them (Sun, IBM, etc) charge an arm and a leg above and beyond what we pay to have them custom engineered.
Re:well I've always wondered this (Score:3, Funny)
I've wondered that, too. Every time the power is converted between AC, DC, and different voltage levels, there is some loss, so it's less efficient to do all of these conversions. I think having a UPS-oriented power supply would be a Good Thing, where you can hook up some external battery pack for the backup.
At a previous job, we used some Unix machines that were completely fault tolerant, including backup processors, backup ne
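A rough illustration of the conversion-loss point above; every efficiency here is an assumed round number, not a measurement of any particular UPS or power supply:

```python
# Illustrative conversion-chain arithmetic; all efficiencies are assumed.
double_conversion = 0.95 * 0.94 * 0.85   # AC->DC charger, DC->AC inverter, AC->DC server PSU
dc_bus = 0.95 * 0.92                     # hypothetical: one big rectifier, then DC-DC in the server

print(f"double-conversion path: {double_conversion:.0%} of wall power reaches the boards")
print(f"DC-bus path:            {dc_bus:.0%}")
# Roughly 76% vs 87% here -- every stage you drop claws back a few percent,
# all of which currently leaves the room as heat.
```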
Re:well I've always wondered this (Score:3, Informative)
For low voltages I don't see any problem with DC but AFAICR at higher voltages DC is more dangerous - a shock from an AC supply causes you to let go quickly, a shock from a DC supply (ISTR) causes the muscles in your hand to contract so that you can't let go.
However, these days we have so many low voltage DC systems (even in homes) that running a 12 or 18v DC supply around your office/home/datacentre sounds like a good i
Re:well I've always wondered this (Score:5, Informative)
The general rule is that stricter requirements for power supply performance can only be met by decreasing the physical distance between the power supply and the load. The trend towards lower supply voltages and higher currents makes the problem worse.
AC power wiring is cheap and well understood. It doesn't require huge bus bars or custom components. It is the most economical way to distribute electrical energy.
Once you reach the box level, you want to convert the AC to low-voltage DC. Confining the high-voltage AC to the power supply means that the rest of the box doesn't have to deal with the electrical safety issues associated with high-voltage AC. The wiring between the power supply and load is short enough to provide decent quality DC power at a reasonable cost. Those components that require higher quality power can use the DC power from the power supply as the energy source for local dc-to-dc converters.
You could feed the box with -48 VDC like the telephone company does with its hardware. You would still end up with about the same amount of hardware inside the box to provide all of the regulated DC voltages needed to make it work. Cost would increase because of the lower production volumes associated with non-standard power supplies.
In the end, it boils down to economics. DC power distribution costs more money and it doesn't meet the performance requirements of modern hardware. The days of racks full of relays, powered directly from battery banks, are long gone.
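As a rough feel for why, compare the current a rack draws at telco-style -48 VDC versus 208 VAC; the 10 kW rack load is an assumed example, and power factor is ignored:

```python
# Current comparison for one rack; the 10 kW load is assumed, power factor ignored.
rack_load_w = 10_000.0

for label, volts in (("-48 VDC", 48.0), ("208 VAC", 208.0)):
    amps = rack_load_w / volts
    print(f"{label}: ~{amps:.0f} A per rack")
# ~208 A at 48 V versus ~48 A at 208 V: the low-voltage bus bars, breakers and
# lugs get big and expensive, which is a large part of the economics argument.
```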
Re:well I've always wondered this (Score:2, Interesting)
Cooler Rooms (Score:3, Insightful)
Re:Cooler Rooms (Score:2)
Err, well, both? (Score:4, Insightful)
So take your pick. To make the servers cooler, either buy new more efficient servers or buy a whacking great air con unit.
Since the servers are the things that actually do the work, I'd just get feck off servers and a feck off air-con unit to keep it all happy!
Everyone's a winner!
Cray still has water cooling! (Score:5, Informative)
All hail the Cray X1E !
Re:Cray still has water cooling! (Score:4, Informative)
I thought Crays originally used Fluorinert(tm), which is definitely *not* water... *spark* *spark* *fizzle*
All hail non-conductive fluorochemicals!
Re:Cray still has water cooling! (Score:2)
Like the original Cray 1, the current Cray X1E uses Fluorinert to cool the system itself (check out the swanky video of the Fluorinert vapor jets on cray.com!). There's also a loop of some sort of refrigerant between the heat exchanger indoors and the cooling tower outdoors.
Re:Cray still has water cooling! (Score:2)
Re:Cray still has water cooling! (Score:2, Informative)
Re:Not really (Score:2)
cool!
Virginia Tech's Super Mac Used.... (Score:2, Informative)
Both (Score:3, Interesting)
Cooler rooms also keep others out... we get a lot of "it's so cold," and they leave. That's golden =)
Re:Both (Score:2)
We keep ours at 73 degrees, about 2 degrees warmer than the rest of the building. We did the 60-degree thing for a while, but it required quite a bit more electricity to maintain that temp. The servers work fine at 80 degrees, but 73 is more comfortable and provides a little more cushion.
If you have cooler servers... (Score:5, Insightful)
Re:If you have cooler servers... (Score:2)
Liquid Oxygen Anyone? (Score:2, Funny)
Re:Liquid Oxygen Anyone? (Score:2)
inevitability breeds contempt (Score:5, Interesting)
You know, sometimes the market actually rewards innovation. Tough to believe, I know, and this isn't innovation, it's common sense, but mfgs are afraid of this? Come on, people, the technocenti have been doing this for their home servers for a long, long time; let's bring it into the corporate world.
Re: (Score:2)
No brainer for me... (Score:3, Insightful)
Ideally, you should have a cool server and a cool room. The two work in combination. If you have a hot room, then the server isn't going to be able to cool itself very well even with the best heatsinks and well-placed fans. Yes, you could use water cooling, but there are other important bits inside of a machine besides whatever the water touches. But a cool room does no good if your servers aren't set up with proper cooling themselves.
Already done. (Score:2)
Re:Already done. (Score:2)
Re:Already done. (Score:2)
Also, with water and proper electrical controls, you can shut down the servers quick with one big switch and leave them off until they dry out. You won't lose 100% of the equipment and insurance wi
First thing.. (Score:4, Insightful)
That comes to mind is that it will probably be vastly cheaper to cool a rackmount specifically than to lower the ambient temperature of an entire room to the point that it has the same effect. However, I'm not entirely sure how well this scales to large server farms and multiple rackmounts.
I think the best option would be to look at having the hardware produce less heat in the first place. This would definitely simplify the rumbling these engineers are engaged in.
Water cooling, pah! (Score:5, Funny)
Also has the added benefit that you can see at a glance which processors are working the hardest by looking to see which are producing the most bubbles.
Wonder if you could introduce fish into the tank and make a feature of it? If you could find any freon-breathing fish, that is...
Reminds me of an amusing anecdote (Score:5, Funny)
Re:Water cooling, pah! (Score:2)
Re:Water cooling, pah! (Score:2)
Err... that's the point. Wouldn't be much of a swimming pool otherwise, would it?
Besides, without liquid in the tank, you'd have to get hold of levitating fish, which I suspect would be even harder than finding simple freon-breathing ones.
Comment removed (Score:3, Interesting)
Re:seems to me... (Score:2)
Easy to say, but are you willing to give up fast servers for cool servers? We don't have the technology to make fast and cool microchips.
Water in the Data Center (Score:2)
JUST ONCE.
Re:Water in the Data Center (Score:4, Funny)
Swiftech (Score:2)
Re:Swiftech (Score:2)
Yeah, I bet Dell or HP can't figure out how to do that.
*rolls eyes*
Actually, given some of the recent decisions made by HP, they probably couldn't!
No way, not in my shop (Score:5, Funny)
The sign on the door clearly states, "No Food or Drink". Of course, shirts are still optional.
Cooler servers... (Score:5, Insightful)
At least if a server fails, it's one unit that'll either get hot or shut down, which means a high degree of business continuity.
When your aircon goes down you're in a whole world of hurt. Ours went down in a power cut, yet the servers stayed on because of the UPSes - hence the room temperature rose and the alarms went off. Nothing damaged, but it made us realise that it's important to have both; otherwise your redundancy and failover plans expand into the world of facilities and operations, rather than staying within the IT realm.
It's not a colder room, it's air circulation (Score:2, Interesting)
Just because HP sells a 42U rack doesn't mean you have to cram blades into all 42U. It's cheaper to spread the heat load across a larger area than to figure out how to put 1500 CFM out of a floor tile so the top of a blade rack gets air.
There are studies by the Uptime Institute that say that 50% of hardware failures happen in the top 25% of rack space because the top
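A rough rule of thumb for the airflow side of that argument; the rack loads and the 20 F temperature rise below are assumed examples, and the 1.08 factor is the usual sea-level approximation:

```python
# Rule-of-thumb airflow math: CFM ~= watts * 3.412 / (1.08 * delta_T_F).
# The rack loads and the 20 F air-temperature rise are assumed examples.
BTU_PER_WATT_HR = 3.412

def cfm_needed(load_w, delta_t_f):
    """Approximate CFM to carry away load_w watts with a delta_t_f (F) air rise."""
    return load_w * BTU_PER_WATT_HR / (1.08 * delta_t_f)

print(f"10 kW rack, 20 F rise: ~{cfm_needed(10_000, 20):.0f} CFM")
print(f" 5 kW rack, 20 F rise: ~{cfm_needed(5_000, 20):.0f} CFM")
# ~1580 CFM vs ~790 CFM -- one perforated floor tile can't realistically deliver
# the former, which is the case for spreading the load across more racks.
```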
Re:It's not a colder room, it's air circulation (Score:2)
Agreed, to a point.
Google did a study about Datacenter Power Density a couple of years ago and IIRC, concluded that Blades were not the most cost efficient solution (for them) because of the increased environmental conditioning required...
Wish I could find that link.
Cooler servers! (Score:2)
Energy efficiency and Hosting- Host NORTH ! (Score:3, Interesting)
If data center location isn't such a problem as long as we have high speed data lines, locate the data center someplace nice and cold.
Like Manitoba, or Minnesota, or Alaska, or Siberia. Heat the work area with the flow from the data center.
Hire your local population to maintain the data center.
Profit !
Cooling is not an option (Score:3, Informative)
There really isn't a question of whether it will become widespread. Overclocking sites have had more than a few visits from Intel and AMD over the years. It's an inevitable problem with an inevitable solution. The only question is how long until water cooling becomes more popular. Heat concerns have had people clamoring for Pentium M processors for rack mount gear for a while as well. It's a reasonably fast CPU that handles heat fairly well. It would work very nicely in rack mount gear, but motherboards that will take one are fairly rare.
As for server rooms, they will continue to be air conditioned for as long as all of your server room equipment is in there. Even if you found a magical solution for servers you still have RAID arrays, switches, routers and the like all in the same room. Server rooms are well known by HVAC people as requiring cooling. Most HVAC vendors will prioritize a failed server room HVAC over anything but medical. They know damn well that anybody that has an air conditioner designed to work in the middle of January in Minnesota or North Dakota isn't using the cooling for comfort.
Simple Solution really.... (Score:3, Funny)
2. Open the windows
3. Profit!!!
Heat Problems and Misconceptions (Score:2, Informative)
However, I often find people have misconceptions: they think they have a heat problem, but in reality they do not. One must measure the air temperature at the inlet to the servers, not the exhaust. If the inlet air meets the manufacturer's specifications, there is no problem, despite the fact that it's uncomf
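A hypothetical sketch of that inlet-side check; the sensor names and the 35 C spec limit are made-up examples, not anything from the post or a real product:

```python
# Hypothetical sketch of the "measure at the inlet, not the exhaust" check.
# The sensor names and the 35 C limit are made-up examples, not from the post.
INLET_SPEC_MAX_C = 35.0  # assumed manufacturer inlet limit

inlet_readings_c = {"rack01-front": 24.5, "rack02-front": 27.1, "rack03-front": 36.2}

for sensor, temp_c in inlet_readings_c.items():
    status = "OK" if temp_c <= INLET_SPEC_MAX_C else "heat problem"
    print(f"{sensor}: {temp_c:.1f} C at the inlet -> {status}")
# Only the inlet side counts for this test; a hot exhaust aisle on its own
# doesn't mean the servers are running out of spec.
```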
Powersupplies (Score:3, Insightful)
I've often thought it might be nicer if there could be one power supply for a whole room of PCs, for example. This could be placed somewhere outside of the room and cooled there, with appropriate UPS backup too.
12 and 5 volt lines then feed each PC - no noisy or hot PSU per machine... Peace and quiet, and a bit cooler too...
Cooler servers first, cooler rooms second (Score:3)
Unfortunately, a couple times per year, the chilled water to our a/c unit gets shut off, and our servers are left to fry. The better answer is to have machines which run cooler. If they lose a/c, they won't fry. However, replacing clusters isn't cheap...and I don't think most people think, "which one's going to run the coolest?" when they are going to buy one.
Does anyone have a link to a page that has grossly generalized heat numbers on certain processor families in certain case configurations (I realize these numbers aren't going to be anywhere near exact, but it would be a starting point)?
Hmmm... (Score:5, Funny)
SA: Yes, but notice that the room is lovely and cool.
PHB: That's all right then. By the way, what's delaying that upgrade to Windows 2003?
SA: Every time we put the CD in the drive it melts. We think it's going to be fixed in the next service pack.
SGI had to do this at NASA (Score:3, Informative)
More detail on the change, and cooling in general, can be found in this interview [designnews.com] with the SGI designers who dealt with the problem.
Water-chilled Cabinets Already In Use for Blades (Score:5, Informative)
The industry has taken a two-pronged approach. Equipment vendors have been developing cabinets with built-in cooling, while design consultants try to reconfigure raised-floor data center space to circulate air more efficiently. The problem usually isn't cooling the air, but directing the cooled air through the cabinet properly.
There was an excellent discussion of this problem [blogworks.net] last year at Data Center World in Las Vegas. As enterprises finally start to consolidate servers and adopt blade servers (which were overhyped for years), many are finding their data centers simply aren't designed to handle the "hot spots" created by cabinets chock full of blades. Facilities with lower ceilings are particularly difficult to reconfigure. The additional cooling demand usually means higher raised floors, which leaves less space to properly recirculate the air above the cabinets. Some data center engineers refer to this as "irreversibility" - data center design problems that can't be corrected in the physical space available. This was less of an issue a few years back, when there was tons of decent quality data center space available for a song from bankrupt telcos and colo companies. But companies who built their own data centers just before blades became the rage are finding this a problem.
Cooler servers (Score:3, Informative)
Server rooms can turn into tropical saunas pretty fast. During a power failure we have to get into the office within 40 minutes to start powering down less important servers (try telling management that not *all* the servers are mission critical, or worse yet, getting them to fork out $$$$$ for a bigger UPS).
Cooler Servers for sure (Score:3, Insightful)
Re:Outside air? (Score:4, Insightful)
You'd need a lot of filtering and/or humidity control to make that a realistic option. Better to make use of the outside air temperature indirectly, which is exactly what your heat pump loop or your AC cooling tower is for.
Re:Outside air? (Score:2, Interesting)
You could be heating buildings or a greenhouse with it, after all. Or making steam to pipe heat. Maybe even turning generators? Not sure what the step-down of the efficiency of it all is.
Apparently A/C is only 1/3rd efficient... but as you're going to be losing that anyway, might just look at the output heat.
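For what "1/3rd efficient" typically amounts to in practice (both the COP and the heat load here are assumed round numbers):

```python
# What "A/C is only 1/3rd efficient" usually means: a coefficient of
# performance (COP) around 3, i.e. ~1 W of electricity moves ~3 W of heat.
# Both the COP and the heat load below are assumed round numbers.
cop = 3.0
it_heat_kw = 30.0  # assumed heat load from the room's IT gear

chiller_power_kw = it_heat_kw / cop
total_power_kw = it_heat_kw + chiller_power_kw
print(f"{it_heat_kw:.0f} kW of IT load takes ~{chiller_power_kw:.0f} kW to cool")
print(f"total draw ~{total_power_kw:.0f} kW -- cooling adds roughly a third on top")
```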
Re:Outside air? (Score:2)
Re:Outside air? (Score:3, Interesting)
It took 8 months until the first servers started dying from the intense corrosion & pitting on the equipment closest to the air outlets. We were bringing in air that, while it was ice cold, was unfiltered and brought pollution in from 2 storeys above street level, and I dare say more moisture than the ai
Re:Outside air? (Score:2)
Re:Outside air? (Score:3, Interesting)
When I worked at Target w
Re:What about cold countries (Score:2)
[sarcasm]But that would promote Global warming, the melting of the Polar Icecaps, the Greenland Glacier and other glaciers! [/sarcasm]
Seriously, I can honestly see someone arguing that point.
Save the globe: don't slashdot ! (Score:3, Funny)
"Hey! Did you know that when you slashdotted that server near the Ross Ice Shelf, you caused 2 icebergs to calve? You insensitve clod!!!!"
Re:What about cold countries (Score:2)
1. Cheap power. Use the abundant natural gas that is too expensive to ship out of Alaska.
2. Security. Hey, it is in the middle of freaking nowhere!
3. Yes, cheap cooling. Just vent it to the outside.
I do not think it ever took off because of the cost of connecting to the backbone. But it was an idea.
Maybe Inuvik will be the new server center capital? How much bandwidth do they have up there?
As a matte
Re:Mabe a mix? (Score:2)
In the old room, I couldn't keep the servers cool in winter when the boilers were on. Summer wasn't so bad because I could leech off of the main air conditioners to get some extra cooling. I did have a small window unit air conditioner dumping heat from the small server room into a larger room, but it just couldn't keep up.
I guess that what I'm saying is "Good luck with that" :)
Re:Mabe a mix? (Score:2, Informative)
"**laughs** oh, I know, we'll stick the IT guy in the lake"
you have any suggestions on how to better protect equipment in those type situations?
Re:Many processors for cooling (Score:2)
That's what Cray does. They spray the stuff directly onto the dice to utilize evaporative cooling.
http://www.idexpositivepumpcare.com/images/micropump/microelectronic/moww_et_collage.jpg [idexpositivepumpcare.com]
Re:The first vendor (Score:2)
What gave AMD a kick from K7 to K8 was the lower voltage [1.1-1.5v instead of 1.6v], probably more efficient transistors [e.g. less leakage], oh, and an integrated heat spreader...
Instead of having the only contact be a tiny little die you spread it out before you even touch the HSF.
You can still get several GHz components and not have too much temp. It's about spacing. If you c