Cooler Servers or Cooler Rooms?
mstansberry writes "Analysts, experts and engineers rumble over which is more important in curbing server heat issues: cooler rooms or cooler servers. And who will be the first vendor to bring water back into the data center?"
Why not both? (Score:4, Insightful)
well I've always wondered this (Score:5, Insightful)
It makes sense!
Cooler Rooms (Score:3, Insightful)
Err, well, both? (Score:4, Insightful)
So take your pick: to make the servers cooler, either buy new, more efficient servers or buy a whacking great air-con unit.
Since the servers are the things that actually do the work, I'd just get feck-off servers and a feck-off air-con unit to keep it all happy!
Everyone's a winner!
If you have cooler servers... (Score:5, Insightful)
Re:Why not both? (Score:5, Insightful)
The question isn't whether it's good to keep both cool. The question is, which makes more financial sense: cooling the whole room, or spending the money to purchase servers with a low heat-to-computation ratio?
Probably a combination. But to say that people should equip every computer with VIA ultra-low-power chips _and_ freeze the server room is silly.
No brainer for me... (Score:3, Insightful)
Ideally, you should have a cool server and a cool room. The two work in combination. If you have a hot room, then the server isn't going to be able to cool itself very well even with the best heatsinks and well-placed fans. Yes, you could use water cooling, but there are other important bits inside a machine besides whatever the water touches. But a cool room does no good if your servers aren't set up with proper cooling themselves.
First thing.. (Score:4, Insightful)
That comes to mind is that it will probably be vastly cheaper to cool a rackmount specifically than to lower the ambient temperature of an entire room to the point that it has the same effect. However, I'm not entirely sure how well this scales to large server farms and multiple rackmounts.
I think the best option would be to look at having the hardware produce less heat in the first place. That would settle the rumbling these engineers are engaged in.
Re:Outside air? (Score:4, Insightful)
You'd need a lot of filtering and/or humidity control to make that a realistic option. Better yet, make use of the outside air temperature indirectly, which is exactly what your heat-pump loop or your AC cooling tower is for.
Re:Why not both? (Score:3, Insightful)
If I have to be working on the server locally in some fashion, I would rather not be boiling or freezing.
The best is probably a nicely air-conditioned room with a nice, simple cooling system on the server, good airflow, and a comfortable working environment.
Cooler servers... (Score:5, Insightful)
At least if a server fails, it's one unit that'll either get hot or shut down, which means a high degree of business continuity.
When your air con goes down, you're in a whole world of hurt. Ours went in a power cut, yet the servers stayed on because of the UPSes, so the room temperature rose and the alarms went off. Nothing was damaged, but it made us realise that it's important to have both; otherwise your redundancy and failover plans expand into the world of facilities and operations, rather than staying within the IT realm.
Re:well I've always wondered this (Score:5, Insightful)
UPSes take AC, turn it into DC, and charge their batteries. A separate system takes DC from the batteries, inverts it, and sends out AC. (Good UPSes, anyway; otherwise they're "battery backups", not uninterruptible.) Computer power supplies then take that AC and distribute DC inside the case. WTF?
Why doesn't APC start selling ATX power supplies? Directly swap out the AC power supplies and have them plug into the DC-providing UPS and/or per-rack (or even per-room) power supplies.
Electrical codes are a BS excuse. Even if you needed vendor-specific racks, a DC-providing rack is, so far as the fire marshal should be concerned, just a very large blade enclosure, and those are clearly acceptable.
I can't believe I'm the first one to ever come up with this idea, so there must be some problem with it... Some EE want to explain why this wouldn't work?
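To see why that chain of conversions grates, here is a rough back-of-the-envelope sketch. The per-stage efficiency figures are illustrative assumptions, not measurements of any real UPS or ATX supply:

```python
# Back-of-the-envelope comparison of cascaded power conversions.
# All efficiency figures below are assumed, for illustration only.

def chain_efficiency(*stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Double-conversion UPS path: AC -> DC (rectifier), DC -> AC (inverter),
# then AC -> DC again inside each server's ATX supply.
ac_path = chain_efficiency(0.95, 0.92, 0.80)

# Hypothetical DC distribution: one AC -> DC rectifier, then straight
# to the servers (each would still need a small DC-DC stage in practice).
dc_path = chain_efficiency(0.95, 0.92)

print(f"AC path end-to-end: {ac_path:.0%}")  # ~70%
print(f"DC path end-to-end: {dc_path:.0%}")  # ~87%
```

Under those assumed numbers, skipping the invert-then-rectify round trip saves a double-digit slice of the power budget, which is the whole intuition behind the "WTF?".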
Powersupplies (Score:3, Insightful)
I've often thought it might be nicer if there could be one power supply for a whole room of PCs, for example. This could be placed somewhere outside of the room and cooled there, with appropriate UPS backup too.
12 and 5 volt lines then feed each PC - no noisy or hot PSU per machine... Peace and quiet, and a bit cooler too...
Re:Why not both? (Score:4, Insightful)
In that case, I would say: cool the room. The room is forever (on this timescale); the servers change maybe every 5 years. Of course, start by NOT choosing the hottest server in the first place, but I would invest in the room.
Also I don't expect room cooling techniques to improve significantly in the next few years. Servers hopefully will.
Re:Ma Bell has been doing this for years (Score:4, Insightful)
It exists; it's just expensive.
Re:Why not both? (Score:4, Insightful)
I think you're right. That's the way we do it, but (there's always a but) some cabinets still get damn hot depending on what's in the rack. Sometimes you need to do spot cooling as well, or put in bigger fans to keep the equipment closer to ambient.
I think starting with a cool room is the most cost-effective way, though. Not to mention it makes work "O.K." in August...
Re:Why not both? (Score:4, Insightful)
You don't need to cool the whole room; you could just cool the cabinets. Most cabinets have doors and sides, an open bottom, and fans at the top, so you can blow cold air up the inside of the data cabinet (which is what most data centres do anyway) and take the air from the top to recycle it, with reasonably minimal air (and hence heat) exchange with the rest of the room.
Re:well I've always wondered this (Score:4, Insightful)
Assuming that you only have one converter. The nice thing about AC to DC conversion is you can have multiple AC converters all feeding the same DC voltage to a single set of conductors to run the DC power out to the machines. The converters can even be out of phase. If the power conversion system is designed right, any one or two converters can fail, be disconnected from the power feed, and the remaining good converters will pick up the slack.
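A minimal sketch of that N+1/N+2 capacity check, with made-up converter sizes and load figures purely for illustration:

```python
# Toy check of whether paralleled rectifiers survive converter failures.
# Converter sizes and load are hypothetical, for illustration only.

def survives_failures(converter_kw, n_converters, load_kw, failures):
    """True if the converters left after `failures` can carry the load."""
    remaining = n_converters - failures
    return remaining * converter_kw >= load_kw

# Six 10 kW rectifiers feeding a 40 kW DC bus: any two can fail safely.
print(survives_failures(converter_kw=10, n_converters=6,
                        load_kw=40, failures=2))  # True
print(survives_failures(converter_kw=10, n_converters=6,
                        load_kw=40, failures=3))  # False
```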
Re:Why not neither? Remove the power supplies. (Score:3, Insightful)
As for power consumption, I don't see how converting the power outside the rack uses less power than converting it inside the rack. And it won't improve reliability since each server will still need a power supply, it will just be a DC-DC one. I don't think you can (reasonably) run a 500-watt power line at 12 volts. Not to mention that you need more than one voltage.
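The "can't reasonably run 500 watts at 12 volts" point is just Ohm's law. A quick sketch, with an assumed (hypothetical) cable resistance, shows why the current and the resistive losses blow up at low voltage:

```python
# Rough ohmic-loss comparison for delivering 500 W at 12 V vs 230 V
# over the same cable run. The 0.05-ohm resistance is an assumption.

def line_loss(power_w, volts, cable_ohms):
    """Return (current in A, I^2*R loss in W) for a given delivery voltage."""
    current = power_w / volts
    return current, current ** 2 * cable_ohms

for volts in (12, 230):
    amps, loss = line_loss(500, volts, cable_ohms=0.05)
    print(f"{volts:>3} V: {amps:5.1f} A, {loss:6.1f} W lost in the cable")

# 12 V: ~41.7 A and ~87 W lost; 230 V: ~2.2 A and ~0.2 W lost.
```

At 12 V you'd need bus-bar-grade conductors to every server to avoid cooking the cabling, which is why telco-style DC plants distribute at 48 V over short runs instead.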
Re:and that voltage loss = ? (Score:3, Insightful)
Also, by having the conversion take place in the UPS, you are shifting 20% of the heat generated by a modern PC away from the enclosure and into an external device. (Compare devices like the PS1 and PS2, which have an internal converter and whose early models had serious overheating issues, with other consoles that put the conversion in an external brick; even when that converter sits right at the console connection, it removes heat from inside the console and lets it dissipate outside.)
Re:Why not both? (Score:3, Insightful)
This doesn't seem like an either/or situation or a large research question. A cost and reliability analysis should determine what is better for each individual setup.
Cooler Servers for sure (Score:3, Insightful)
Re:Why not both? (Score:3, Insightful)
Another factor to consider: what is it worth to your organization to know that the price is predictable? Suppose that building and operating an energy-efficient data center is within the financial parameters that make sense for your organization, but you could save money by building a gas guzzler. What is the likelihood that near- to mid-term energy price rises could make operating the gas guzzler impractical? What would the impact be on other aspects of your business? If you assigned an expected cost to this scenario, does it offset the savings? (See the toy comparison below.)
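A toy version of that scenario analysis, where every price, probability, and consumption figure is a made-up assumption just to show the mechanics:

```python
# Toy cost comparison: efficient build vs "gas guzzler" under different
# energy-price scenarios. All dollar and kWh figures are assumptions.

def total_cost(build_cost, annual_kwh, years, price_per_kwh):
    """Build cost plus energy spend over the planning horizon."""
    return build_cost + annual_kwh * years * price_per_kwh

YEARS = 5
for label, price in [("stable prices", 0.10), ("price spike", 0.25)]:
    efficient = total_cost(1_200_000, annual_kwh=500_000,
                           years=YEARS, price_per_kwh=price)
    guzzler = total_cost(900_000, annual_kwh=1_000_000,
                         years=YEARS, price_per_kwh=price)
    print(f"{label}: efficient ${efficient:,.0f} vs guzzler ${guzzler:,.0f}")

# stable prices: efficient $1,450,000 vs guzzler $1,400,000
# price spike:   efficient $1,825,000 vs guzzler $2,150,000
```

Under these assumed numbers, the guzzler wins only if prices stay flat; weight each scenario by its probability and the "cheap" build can carry the higher expected cost.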
Finally, there is an ethical/public-image issue. Is it acceptable to waste energy? If we are going to be efficient, can we get some secondary benefits out of this?
Re:Why not neither? Remove the power supplies. (Score:3, Insightful)
I've never seen people as difficult to communicate with as hardware planning people. You would be amazed at how much better a computer room gets cooled when the equipment is installed properly in a "hot aisle/cold aisle" configuration. Also, vendors and hardware folks don't like having it pointed out when they're skipping things like making sure not to install a top-discharge cabinet on the edge of a cold aisle right next to a front-intake cabinet, or installing plenums inside the cabinets as some vendors recommend.
A combination of good space/hardware planning and honest communication in determining potential heat loads are probably the two biggest factors in keeping a computer space cold, IMHO. No one's being helped by just guessing at what a rack full of SunFire servers is going to put out in terms of heat; find out from the manufacturer. And don't feel that your engineering staff is trying to tell you how to do your job or piss you off by letting you know that a rack you've installed is disrupting airflow. We're all in this together, remember?