Intel Shows Data Centers Can Get By (Mostly) With Little AC 287
Ted Samson IW writes "InfoWorld reports on an experiment in air economization, aka 'free cooling,' conducted by Intel. For 10 months, the chipmaker had 500 production servers, working at 90 percent utilization, cooled almost exclusively by outside air at a facility in New Mexico. Only when the temperature exceeded 90 degrees Fahrenheit did they crank on some artificial air conditioning. Intel did very little to address airborne contaminants and dust, and nothing at all to deal with fluctuating humidity. The result: a slightly higher failure rate — around 0.6 percent more — among the air-cooled servers compared to those in the company's main datacenter — and a potential savings of $2.87 million per year in a 10MW datacenter using free cooling over traditional cooling."
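A quick back-of-envelope check of the headline number. The chiller overhead ratio, free-cooling fraction, and electricity rate below are my assumptions picked to illustrate the arithmetic, not figures from Intel or the article:

```python
# Sketch of how a ~$2.87M/year free-cooling saving could arise in a 10MW facility.
# All three tunable constants are assumptions, not Intel's published numbers.
IT_LOAD_KW = 10_000          # 10 MW datacenter, per the summary
CHILLER_KW_PER_IT_KW = 0.52  # assumed traditional-cooling overhead per IT kW
FREE_COOLING_FRACTION = 0.9  # assumed share of hours outside air suffices
RATE_USD_PER_KWH = 0.07      # assumed industrial electricity rate
HOURS_PER_YEAR = 8760

chiller_kwh = IT_LOAD_KW * CHILLER_KW_PER_IT_KW * HOURS_PER_YEAR
savings = chiller_kwh * FREE_COOLING_FRACTION * RATE_USD_PER_KWH
print(f"${savings / 1e6:.2f}M/year")  # -> $2.87M/year
```

With those (hypothetical) inputs the claimed number is at least plausible; the real sensitivity is in the electricity rate and the fraction of hours free cooling covers.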
How about reducing the need for AC POWER as well.. (Score:5, Insightful)
How about reducing the need for AC POWER as well, by cutting down on the number of AC-to-DC PSUs.
Makes Sense (Score:5, Insightful)
Perhaps they should be in Alaska (Score:1, Insightful)
... in Anchorage you have all the free cooling you want!
What a great study! (Score:4, Insightful)
The savings should be more than enough to pay for replacement hardware, and even for upgrades. And stepping back and looking at the big picture tells me that there is at least one brilliant person at Intel--whoever thought of doing this study is a genius!
--MarkusQ
Outside air not so harsh (Score:5, Insightful)
Well, it makes sense. Normal PCs run on essentially ambient air and live for years even under heavy loads (games put a lot of load on systems), despite all the dust and cruft. Servers aren't that different in their hardware, so it makes sense they'd behave similarly.

And there's a lot that can be done cheaply to reduce the problems that were seen. Dust, for instance: you can filter and scrub dust from the incoming air a lot cheaper than running a full-on AC system. In fact, the DX system used on the one side of the test probably scrubbed the incoming air itself, which would explain the lower failure rate there. Reduce the dust and you reduce the build-up of the thermally insulating layer on the equipment, keeping cooling effectiveness from degrading.

Humidity control can also be done cheaper than full-on AC, and wouldn't have to be complete. I don't think you'd need to hold humidity steady within tight parameters, just keep the maximum from going above, say, 50% and the minimum from going below 5%. Again, I'll bet the DX system did just that automatically. I'd bet you could remove the sources of probably 80% of the extra failures on the free-cooling side while keeping 90% of the cost savings in the process.
They will have to wait longer to get failures (Score:4, Insightful)
I'd say that they will have to wait longer to get failures. Try having a server run in that environment for 5 years and then we will see. I would not do it without having some good filters. But as a test it is an interesting experiment.

For datacenters in colder climates, you can already get cooling systems that chill the water using outside air alone whenever the temperature is below a certain threshold (I forget the number). Above that level the water gets cooled the normal way.

At work our AC system was old and needed to be replaced, and the new one does exactly that. The outside temperature is low enough that the water gets cooled with just air for half the year. It was more expensive to install, since it needed more and bigger cooling units for the air-only mode (I believe they also talked about bigger, slower fans that used less power), but it pays for itself in a few years.

Another interesting experiment would be to reuse the heat. I don't know whether the water temperature is high enough to feed heat exchangers, perhaps as the first stage of heating incoming cold water.
Re:Chimney effects (Score:3, Insightful)
Re:What About the Small Guys? (Score:5, Insightful)
Re:Makes Sense (Score:3, Insightful)
What the fuck is heppa?
It's HEPA. It's an acronym. High-Efficiency Particulate Air.
Yeah, because the guy not knowing the acronym makes the point he's making completely useless, right? (sheesh). Only thing worse than a speeling flame is a speeling flame with attitude, dude. You might consider chilling out a bit.
Why are office windows sealed shut? (Score:3, Insightful)
There is another really smart thing you can do too. When it is hot inside and not hot outside, you can open a window. That seems obvious, but how many office buildings have openable windows? For some reason architects like to cool office space with AC even if there is "free" cool air outdoors.
This is even easier with computers. The servers would be happy to run at 95F and much of the time even in the American SW the outside air is cooler than 95F.
I've been saying this for many years. I think the reason for resistance is that no one gets a take home pay bonus based on how much power is saved.
Re:What a great study! (Score:5, Insightful)
You lose on density, though. Aisle space in front of the racks is fixed, you need a certain amount for humans to move in. Shallow, tall equipment means fewer units per rack. With a current-format rack you need say a 3'-deep area for the rack and a 3'-wide aisle for access. That's 50% equipment and 50% access. If the racks were only 1' deep instead, you'd be using 25% for equipment and 75% for access (since you still need that 3' wide aisle). And in that 25% of space for equipment you now get perhaps 25% of the amount of equipment since each one's using 4x more vertical space in the rack and rack height can't change (it's limited by basically how high off the floor a human can reach to get to the equipment).
To make up for that, you need more square footage of data center to hold the equipment. That increases operating costs, which is what we're trying to reduce.
Re:Why are office windows sealed shut? (Score:4, Insightful)
There is another really smart thing you can do too. When it is hot inside and not hot outside, you can open a window. That seems obvious, but how many office buildings have openable windows? For some reason architects like to cool office space with AC even if there is "free" cool air outdoors.
The reasons are things like: liability issues, chimney effects, people leaving the windows open even with the heat or a/c on, people leaving the windows open in the rain, bugs in the building, increased maintenance costs for more complex windows, etc. It turns out that architects aren't actually idiots and have thought about this.
This is even easier with computers. The servers would be happy to run at 95F and much of the time even in the American SW the outside air is cooler than 95F.
Your desktop can run at 95F ambient. If it has a variable speed fan it's probably screaming like a banshee. The key is how much heat can be dissipated, and at 95F ambient you can't move enough air through a dense computer system to cool the components down to a safe temperature. Even at 68F ambient people have a lot of trouble moving enough air through modern super-dense racks to keep computers from seeing increased failure rates.
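The airflow point above is just the sensible-heat equation: the hotter the intake air, the less temperature headroom you have, so the more air you must move for the same heat load. A rough sketch (the 10 kW rack load and the headroom figures are my assumptions for illustration):

```python
# Volumetric airflow needed to carry heat Q out of a rack at a given
# inlet-to-outlet temperature rise: Q = rho * flow * cp * dT.
# Air properties are approximate sea-level values.
RHO_AIR = 1.2    # kg/m^3
CP_AIR = 1005.0  # J/(kg*K)

def airflow_m3s(heat_w, delta_t_k):
    """Volumetric airflow (m^3/s) to remove heat_w watts at a delta_t_k rise."""
    return heat_w / (RHO_AIR * CP_AIR * delta_t_k)

rack_w = 10_000  # assumed 10 kW rack, in watts
cool_room = airflow_m3s(rack_w, 15.0)  # 68F intake: ~15 K of headroom assumed
hot_room = airflow_m3s(rack_w, 5.0)    # 95F intake: only ~5 K of headroom left
print(f"{cool_room:.2f} vs {hot_room:.2f} m^3/s")  # 0.55 vs 1.66 m^3/s
```

Tripling the allowable temperature rise cuts the required airflow to a third, which is exactly why a warm room forces the fans to scream.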
Reduced life (Score:3, Insightful)
Well, of course Intel wants you to burn your machines up early. They get to sell you the replacement.
Re:What a great study! (Score:4, Insightful)
I'm not sure. $2.87m may be enough to pay for failures, but what if you had to add extra redundancy to the system in the first place to make up for that small number of failures? Extra boxes to maintain, with their own MTBFs... extra space taken up, extra electricity drawn.
I think a better solution - not as extreme, granted - would be to just turn the aircon temperature dial up a notch or two. Has anyone worked out how much money you'd save in the same datacenter by just doing that?
http://www.washingtontimes.com/news/2008/jul/30/un-takes-on-hot-air-for-good-of-the-world/ [washingtontimes.com]
Most datacenters, and even the little server rooms I've been in, have had the dial set to something ridiculous like 65. There's no reason your server room needs to be that cold at all. You just have to keep it at a reasonable ambient temperature somewhere below the system's maximum rating (most processors will happily run for 5 years at a die temperature of 105C; you can't blow hot air over a processor and expect it to stay cool, though).

So why not keep your server room at 80, save yourself the 0.6% extra failures, and maybe (at a guess) $1.3m a year instead of $2.87m?
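Nobody in the thread has actually worked the setpoint question, so here's a hedged guess. A rule of thumb sometimes cited for data centers is a few percent of chiller energy saved per degree Fahrenheit the setpoint is raised; the 4%/degF figure, the baseline cooling bill, and the setpoints below are all assumptions, not numbers from the article:

```python
# Very rough estimate of savings from raising the cooling setpoint.
# SAVINGS_PER_DEGF is an assumed rule-of-thumb figure, and the baseline
# cooling bill is assumed equal to the summary's $2.87M free-cooling delta.
BASELINE_COOLING_COST = 2.87e6  # assumed annual cost of traditional cooling
SAVINGS_PER_DEGF = 0.04         # assumed fraction of chiller energy per degF

old_setpoint_f = 65
new_setpoint_f = 80
reduction = min(1.0, SAVINGS_PER_DEGF * (new_setpoint_f - old_setpoint_f))
savings = BASELINE_COOLING_COST * reduction
print(f"${savings / 1e6:.2f}M/year")  # -> $1.72M/year under these assumptions
```

That lands in the same ballpark as the parent's $1.3m guess, but the rule-of-thumb constant dominates the answer, so treat this as a sketch, not a measurement.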
Re:Never play in the USA (Score:2, Insightful)
Wear your damn sweater like a man, damnit.
I've often said, "You can always put on another sweater, but you can't take off more clothes than all of 'em." In most office environments, you probably wouldn't even want people taking off quite that much, anyway, so you set the level at the point where no one has to.
People like you, and Barack Obama with his "no one needs lower than 78" are making people smelly and uncomfortable to be around.
Re:How about reducing the need for AC POWER as wel (Score:3, Insightful)
That, and they become a single point of failure.
Having seen a few commodity power supplies fail in the most spectacular manner possible makes me shudder to think that companies are willingly switching to massive power supplies just to save a few bucks.
Re:What a great study! (Score:3, Insightful)
You replace your hardware every 10 months? Wow!
10 months seems just shy of the time it takes for heat to really start causing damage. I'm talking about stuff like wire insulation getting brittle, quickly followed by vibration causing shorts. Then there is the increased electromigration in the ICs' interconnects.
10 months is NOT a long term study.