Cooler Servers or Cooler Rooms?
mstansberry writes "Analysts, experts and engineers rumble over which is more important in curbing server heat issues: cooler rooms or cooler servers. And who will be the first vendor to bring water back into the data center?"
Both (Score:3, Interesting)
Cooler rooms also keep other people out... we get a lot of "it's so cold in here," and they leave. That's golden =)
inevitability breeds contempt (Score:5, Interesting)
you know, sometimes the market actually rewards innovation. tough to believe, i know, and this isn't innovation, it's common sense, but mfgs are afraid of this? come on, people, the technoscenti have been doing this for their home servers for a long, long time, let's bring it into the corporate world.
Re:well I've always wondered this (Score:3, Interesting)
Duplication is nice in some respects, more redundancy is a big plus. That and you actually have several useful machines when you finally tear it all down. Who's going to buy 3 blades off ebay when they can't afford the backplane to plug 'em into?
Re:Outside air? (Score:3, Interesting)
It took 8 months until the first servers started dying from the intense corrosion & pitting on the equipment closest to the air outlets. We were bringing in air that, while ice cold, was unfiltered and carried in pollution from 2 storeys above street level, and I dare say more moisture than the air-conditioned, recycled, worker-breathable air.
Filters fixed the problem.
Re:Outside air? (Score:2, Interesting)
You could be heating buildings or a greenhouse with it, after all. Or making steam to pipe heat. Maybe even turning generators? Not sure what the step-down of the efficiency of it all is.
Apparently A/C is only about 1/3rd efficient... but since you're going to be losing that energy anyway, it might be worth looking at using the output heat.
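A quick back-of-the-envelope on that, assuming a typical A/C coefficient of performance (COP) of about 3 and that essentially all IT power ends up as heat (both are my assumptions, not figures from the parent):

# Rough waste-heat arithmetic for a server room (assumed numbers, for illustration only)
it_load_kw = 100.0   # assumed IT load; nearly all of it becomes heat
cop = 3.0            # assumed A/C coefficient of performance (~3 kW removed per kW of electricity)

ac_power_kw = it_load_kw / cop                 # electricity the A/C burns removing the heat
heat_rejected_kw = it_load_kw + ac_power_kw    # total heat you could try to reuse

print(f"A/C draw: {ac_power_kw:.1f} kW, heat available for reuse: {heat_rejected_kw:.1f} kW")

That roughly matches the parent's "1/3rd" figure: about one extra watt spent on cooling for every three watts of server heat, which supports the idea of just looking at the output heat.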
It's not a colder room, it's air circulation (Score:2, Interesting)
Just because HP sells a 42U rack doesn't mean you have to cram blades into all 42U. It's cheaper to spread the heat load across a larger area than to figure out how to push 1500 CFM out of a floor tile so the top of a blade rack gets air.
There are studies by the Uptime Institute saying that 50% of hardware failures happen in the top 25% of rack space, because the top of the rack doesn't get any air from the floor tiles and instead recirculates rack exhaust or ambient air for cooling.
We just put in the latest blade rack from HP. Four 50-amp circuits (2 for redundancy) for a 4 square foot space is beyond silly. Even after you set aside the two redundant circuits, that's more electrical service and consumption than a 1500 square foot home.
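For scale, here's the rough arithmetic behind that comparison (the 208 V feed voltage, the 100 A / 240 V home panel, and the average household draw are all my assumptions):

# Blade-rack feeds vs. a typical house (illustrative numbers only)
live_circuits = 2            # 4 circuits delivered, 2 of them redundant
amps = 50
volts = 208                  # assumed feed voltage

rack_kva = live_circuits * amps * volts / 1000   # service delivered to ~4 sq ft
home_panel_kva = 100 * 240 / 1000                # typical residential 100 A / 240 V panel
home_average_kw = 1.3                            # rough average household draw

print(f"Rack service: {rack_kva:.0f} kVA, home panel: {home_panel_kva:.0f} kVA, "
      f"average home draw: {home_average_kw} kW")

So the two non-redundant feeds alone approach the entire service entrance of a house, and dwarf what a house actually consumes on average.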
Energy efficiency and Hosting- Host NORTH ! (Score:3, Interesting)
Since data center location isn't much of a problem as long as you have high-speed data lines, locate the data center someplace nice and cold.
Like Manitoba, or Minnesota, or Alaska, or Siberia. Heat the work area with the flow from the data center.
Hire your local population to maintain the data center.
Profit !
Re:well I've always wondered this (Score:5, Interesting)
I'm not an EE, but it's something I've always wondered about. I don't have a datacentre, but I do have far too many computers: why does my machine room contain about fifteen wall warts, all producing slightly different DC voltages and plugged into their various appliances via fifteen different non-standard connectors? Why not just have one low-voltage standard and have all these things plug into that?
One possible reason is that (IIRC) resistive power losses in a wire scale with the square of the current, not the voltage. By increasing the voltage, you can push the same amount of power down a wire using a smaller current, which limits losses. This is why transmission lines use very high voltages.
This means that if you produce regulated 5V at one side of your datacentre, by the time it's reached the other side it's not 5V any more. But it should be easy to get round this by producing 6V and having DC regulators; they're very small and extremely efficient these days.
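A small worked example of the loss problem, assuming a 500 W load and 0.05 ohms of cable resistance (made-up numbers, just to show the shape of it):

# I^2 * R loss for the same power delivered at different distribution voltages
# (load and wire resistance are assumed values)
load_watts = 500.0
wire_ohms = 0.05

for volts in (5, 12, 48, 230):
    amps = load_watts / volts
    loss_watts = amps ** 2 * wire_ohms    # resistive loss in the cable
    drop_volts = amps * wire_ohms         # voltage drop along the run
    print(f"{volts:>4} V: {amps:6.1f} A, {loss_watts:7.1f} W lost, {drop_volts:5.2f} V drop")

At 5 V the cable wastes as much power as the load itself and drops the full 5 V, while at 48 V the same run loses only a few watts, which is why distribution tends to stay at higher voltage with regulation done at the load.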
However, I suspect that the main reason why this kind of thing isn't done is inertia. There's so much infrastructure in place for dealing with high-voltage AC supplies that you wouldn't get off the ground using DC.
Re:Aquafina... (Score:5, Interesting)
The first problem was snow that piled up outside, combined with clogged drains, which led to meltwater coming in through the wall where some pipes entered/exited. Since their layout was power in the floor and networking in the ladder racks, it's actually pretty amazing that a large portion of the power plugs and switches still worked, even while submerged in 6 inches of water.
So about a year after they had taken care of that issue, a water pipe for a bathroom on the floor above burst, and of course the water came down right into our room in the hosting center. It wasn't so bad until the fluorescent light fixtures in the ceiling filled up and started to overflow. We were able to limit the damage by throwing tarps over the tops of all the racks (there goes your cooling effect, though), but we still lost about $100K worth of server and switching equipment.
So yeah, water in the data center? It's been done.
Re:Outside air? (Score:3, Interesting)
When I worked at Target we had specialized monitoring equipment that notified the same people who handle burglar alarms if a server room got too hot. It was written directly into the contract we had with the HVAC companies that only 911 call centers and hospital emergency rooms could be prioritized over one of our server rooms.
Re:well I've always wondered this (Score:5, Interesting)
Whenever we need something outside of normal ATX, we wind up paying custom development fees.
No one makes DC to DC power supplies that are worth a damn, and the few vendors who do sell them (Sun, IBM, etc) charge an arm and a leg above and beyond what we pay to have them custom engineered.
Why not neither? Remove the power supplies. (Score:3, Interesting)
Reduce power consumption
Reduce heat in the server room
Improve reliability
The A/C company brought our water (Score:5, Interesting)
We spent several hours with a tiny shop vac (we need a bigger one!) emptying the water and being thankful Bob had seen it before it got high enough to get into the power conduits.
An A/C unit drain pan had a clogged drain, so the sump pump couldn't carry the water away. Whoever had the units installed had purchased water alarms, but *they had never been hooked up*. Now *that* was a brilliant move.
We now have water alarms down there.
Meanwhile, the room stays at about 70 degrees, and the servers stay comfy, as do we. I like it that way.
Re:Why not both? (Score:1, Interesting)
Let's apply the IT paradigm here: what you are going to need is scalability.
What if you get an aircon unit for which keeping your current number of servers cool is already at the top end of its limits, and suddenly you need to add 20 more servers and a whole bunch of additional racks?
If you have servers with adequate cooling of their own, you could scale up without having to buy a super expensive aircon unit, and without having to replace or upgrade the existing unit immediately, because the servers can handle a lot of the heat problem themselves.
-SJ53
Re:Why not neither? Remove the power supplies. (Score:5, Interesting)
Personally I think BOTH the power & the cooling need to be addressed. I've worked in datacenters where cabinets are filled with 30+ 1U servers. Not only is it a royal pain in the ass to deal with all the power cabling for 30 individual servers, but the heat blowing out the back of those cabinets is enough to melt the polar ice caps...
I've also worked on blade servers like IBM's BladeCenter. Their next generation of blades will require even more power than the current one does. Trying to convince a datacenter to run four 208-volt feeds to handle just a pair of BladeCenters (28 blades) is like pulling teeth. They can't comprehend that much power in such a small footprint. A rack full of BladeCenters could easily require eight 208-volt feeds, whereas a rack full of 1U's may only need three or four 110-volt feeds.
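A rough sketch of the power budgets those feed counts imply, assuming 30 A breakers on the 208 V feeds, 20 A on the 110 V feeds, and the usual 80% continuous-load derating (all assumptions on my part):

# Rack power budgets implied by the feed counts above (breaker sizes assumed)
def feed_kva(volts, amps, count, derate=0.8):
    # usable apparent power from 'count' feeds of a given breaker size
    return volts * amps * derate * count / 1000

blade_rack_kva = feed_kva(volts=208, amps=30, count=8)   # eight 208 V feeds
oneu_rack_kva = feed_kva(volts=110, amps=20, count=4)    # four 110 V feeds

print(f"Blade rack budget: ~{blade_rack_kva:.0f} kVA vs. 1U rack budget: ~{oneu_rack_kva:.0f} kVA")

Call it roughly 40 kVA against 7 kVA, a five- to six-fold jump in the same footprint, which is exactly the kind of number that makes a facilities team balk.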
Re:Why not both? (Score:4, Interesting)
I also agree with the guy in the article: liquid cooling in the server room is going to happen eventually. I got to see the difference a simple watercooling system made on a P4 3.02GHz Extreme Edition chip, stuffed in the same case with a GeForce FX 5950. Even with some serious fans the case was borderline overheating in games, partly because the room it was in wasn't kept that cool and the owner had it in a cabinet in his desk (it is what that cabinet was designed for). He dropped a liquid cooling system into it, and now the thing is always nice and frosty. And even with the jolts and jostling of taking the system to several LAN parties, the liquid cooling system is still leak free and rock solid. His experience has actually made me consider one for my own next system.

For a server, where the system usually sits still long enough to collect a measurable amount of dust, water cooling may be a very good choice. If it's installed properly the likelihood of leaks is low, and the performance can be very good. Heck, I can see it now: our server rooms will eventually have a rack or two devoted entirely to the radiators for the liquid cooling systems of servers which run hot enough to form plasma.
Cool the racks instead... (Score:3, Interesting)
If you buy your own racks to put gear in, then getting these things is easy; if you buy whole racks from a vendor with the gear already in them (custom systems type of thing), then the thing comes in a cabinet which usually has some kind of fan/vent arrangement on top. Rip that off, attach some ducting straight up to the ducts running across the rows, and voila, cool air flows straight down and out the bottom.
All you need is to build your room with several ducts running across the ceiling, with removable plates every so often. The AC system pushes air into those ducts, and from there it goes directly into the racks. You don't even need to cool the room, really, since the air coming out of the racks is still cool enough to keep the room itself cool. The servers in the racks stay at a fairly chilly temperature. The only downside is that when you need to open one, you're hit in the face with freezing air pouring out of the rack.
Of course it's like pulling teeth... (Score:3, Interesting)
It is right and proper for a data center to make it difficult for a customer to go that far outside the specs the data center can support; otherwise it will negatively impact the other customers in the room. If they can't meet your needs, it's better to look elsewhere than to go that far out of engineering spec.
Re:Why not both? (Score:3, Interesting)
It seems more likely to me that the radiators would be placed outside. I could foresee water-cooled racks that come with centre-mounted warm- and cool-water manifolds [toolbase.org], plumbed to high-flow lines that take all the water to one big radiator outside...
Or, probably easier to manage, a 2-4U centre-mounted unit with the manifold and pumps for that rack, circulating water through that rack and to/from a central reservoir (a 55 gallon plastic drum) in the server room, plus another set of pumps and very large pipes to take the water through an outside radiator.
Each unit (1U, 2U, etc. server) could have its own rear-mounted hose attachments (and bleed valve) in a modular fashion, so you just hook a new computer up to the warm and cool manifolds, open the valves, bleed the air, and your new unit is cooled. To remove it, just shut the manifold valves, open the bleed valve, put a bucket under the lines, and take the lines off the connectors.
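To get a feel for the plumbing involved, here's a rough flow-rate sketch using Q = m * cp * dT, with an assumed 20 kW rack and a 10 degree C coolant temperature rise (both are my assumptions):

# Coolant flow needed to carry away one rack's heat (assumed load and delta-T)
rack_heat_w = 20000.0     # assumed 20 kW rack
delta_t_c = 10.0          # assumed coolant temperature rise across the rack
cp_water = 4186.0         # J/(kg*K), specific heat of water
density_kg_per_l = 1.0

kg_per_s = rack_heat_w / (cp_water * delta_t_c)
l_per_min = kg_per_s / density_kg_per_l * 60
gpm = l_per_min / 3.785

print(f"{kg_per_s:.2f} kg/s ~= {l_per_min:.0f} L/min ~= {gpm:.1f} US gal/min per rack")

About 8 gallons per minute per rack is manageable tubing at the rack, but multiply that by a room full of racks and you can see why the shared lines out to the external radiator would need to be very large pipes.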
Re:Outside air? (Score:2, Interesting)
For a datacenter, economizer options are not popular. Why? As a general rule, outside air is considered too dirty and too humid to be worth risking on valuable data center equipment.
High density (52 kW per rack) (Score:2, Interesting)
The way to handle high density is with a cooler in front of the rack. A 52 kW rack requires a 15-ton capacity coil, but the units can be lined up side by side, and each rack stays cool.
Smaller units work the same way. For example, put a 4-ton coil in front of a rack of 40 Opterons and you get cold air in the front regardless of the room temperature; 80 Opterons take an 8-ton coil. The computers provide the fan, so no CRAC unit is needed.
Size the cooler for the rack, hook it to the chilled water, and repeat as needed.
This adds 8 to 14 inches to rack depth and has no hot spots.
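Those tonnages line up with the heat loads if you assume roughly 350 W per Opteron box (my assumption; 1 ton of refrigeration is about 3.517 kW):

# Checking coil sizes against rack heat load (per-server wattage is assumed)
TON_KW = 3.517   # 1 ton of refrigeration in kW

def tons_needed(servers, watts_each=350):
    return servers * watts_each / 1000 / TON_KW

print(f"40 Opterons: {tons_needed(40):.1f} tons")   # matches the 4-ton coil
print(f"80 Opterons: {tons_needed(80):.1f} tons")   # matches the 8-ton coil
print(f"52 kW rack:  {52 / TON_KW:.1f} tons")       # matches the 15-ton coil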
email has changed - rcbondsr@gmail.com
For license information - University of Washington Technology Transfer.
Bigger computers, bigger coil. One rack at a time, I can cool any installation you have!
I am going to roll out the first installation this summer.
From my patent application -
Background of the Invention
[003] Computers are sometimes cooled by cooling the air in the room in which the computers are located. A typical cooling system cools air and moves it through the room with the devices that need to be cooled. When air is used as the cooling medium, variations in airflow occur, particularly when the heat density rises in a region of the equipment room, or when the absolute heat load approaches the maximum load that the air can handle. In an effort to solve the resulting problems, systems have been made in which the devices that heat up are placed inside an enclosure and the air inside the enclosure is cooled. These systems have been found to be inadequate when the heat density is above about 8 kW. None of the existing systems are able to operate effectively in an environment in which the heat density is between about 20 and about 40 kW. Yet manufacturers are starting to make computer equipment in which that much power exists in the system. Currently, when the heat density is high, the systems are provided with greater floor space and larger air handlers and chillers. This approach has led to the creation of "hot spots" in the equipment. The known systems fail when the power level rises to about 400 watts per square foot, or when the cooling requirements vary substantially in a given space.
[004] When airflow in a single rack approaches about 3,000 cubic feet per minute, and an aisle of about 20 racks approaches 52,000 cubic feet per minute, conventional systems cannot handle the airflow in a computer room of conventional size. The use of larger rooms is expensive, and they are still subject to the airflow problems that are created. These problems include the creation of "hot spots," which are regions in the room that are not sufficiently cooled and in which the heat-generating devices are adversely affected by the heat. There is a need for a cooling system that avoids the problems of the prior art systems and eliminates the "hot spots." A principal object of the present invention is to fulfill this need.
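For context, the standard sensible-heat relation Q (BTU/hr) is roughly 1.08 x CFM x delta-T (deg F), which connects those airflow figures to rack power; the 20 kW rack and 25 deg F supply-to-return rise below are my assumptions, not numbers from the application:

# Relating rack heat load to required airflow (assumed load and temperature rise)
rack_kw = 20.0
delta_t_f = 25.0                        # assumed supply-to-return air temperature rise

btu_per_hr = rack_kw * 1000 * 3.412     # convert kW to BTU/hr
cfm_per_rack = btu_per_hr / (1.08 * delta_t_f)

print(f"{cfm_per_rack:.0f} CFM per rack, {cfm_per_rack * 20:.0f} CFM for a 20-rack aisle")

That lands in the same ballpark as the roughly 3,000 CFM per rack and 52,000 CFM per aisle cited in [004].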
ABSTRACT
A device that in use generates heat is positioned within an enclosed space that includes an ambient air inlet, an outlet, and an air mover for moving ambient air through the space from the ambient air inlet to the outlet. A cooler comprises coils and passageways defined by and between the coils, through which ambient air moves from the inlet of the cooler to the outlet of the cooler. The cooler is positioned with its outlet in register with the ambient air inlet of the enclosed space and is used to cool the ambient air immediately forward of that inlet. The air mover in the enclosed space then moves the cooled ambient air into the ambient air inlet, through the enclosed space, and out from the outlet of the enclosed space.
rcb