Intel Shows Data Centers Can Get By (Mostly) With Little AC 287

Ted Samson IW writes "InfoWorld reports on an experiment in air economization, aka 'free cooling,' conducted by Intel. For 10 months, the chipmaker had 500 production servers, working at 90 percent utilization, cooled almost exclusively by outside air at a facility in New Mexico. Only when the temperature exceeded 90 degrees Fahrenheit did they crank on some artificial air conditioning. Intel did very little to address airborne contaminants and dust, and nothing at all to deal with fluctuating humidity. The result: a slightly higher failure rate — around 0.6 percent more — among the air-cooled servers compared to those in the company's main datacenter — and a potential savings of $2.87 million per year in a 10MW datacenter using free cooling over traditional cooling."
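As a rough back-of-the-envelope check on how a savings figure of that size could arise: only the 10 MW IT load and the $2.87 million claim come from the summary; the electricity price and cooling-overhead ratios below are purely illustrative assumptions, not Intel's numbers.

```python
# Back-of-the-envelope check on the headline savings figure.
# Only the 10 MW IT load and the $2.87M claim come from the summary;
# everything else here is an illustrative assumption.

IT_LOAD_MW = 10.0            # from the summary
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.08         # assumed industrial electricity price, $/kWh

# Assumed cooling overhead: watts of cooling power per watt of IT power.
CHILLER_OVERHEAD = 0.45      # traditional chilled-air plant (assumption)
ECONOMIZER_OVERHEAD = 0.05   # fans and filters only, for free cooling (assumption)

def annual_cooling_cost(it_load_mw, overhead, price_per_kwh):
    """Annual cost of the cooling energy for a given IT load and overhead ratio."""
    cooling_kw = it_load_mw * 1000 * overhead
    return cooling_kw * HOURS_PER_YEAR * price_per_kwh

traditional = annual_cooling_cost(IT_LOAD_MW, CHILLER_OVERHEAD, PRICE_PER_KWH)
free_cooling = annual_cooling_cost(IT_LOAD_MW, ECONOMIZER_OVERHEAD, PRICE_PER_KWH)

print(f"traditional cooling: ${traditional:,.0f}/yr")
print(f"free cooling:        ${free_cooling:,.0f}/yr")
print(f"difference:          ${traditional - free_cooling:,.0f}/yr")
# With these assumptions the difference comes out around $2.8M/yr,
# the same ballpark as the figure quoted above.
```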
  • by Joe The Dragon ( 967727 ) on Thursday September 18, 2008 @05:05PM (#25061625)

    How about reducing the need for AC power as well, by cutting down on the number of AC-to-DC PSUs?

  • Makes Sense (Score:5, Insightful)

    by ironicsky ( 569792 ) on Thursday September 18, 2008 @05:08PM (#25061677) Homepage Journal
    Makes sense to me. The most efficient places to put data centers are in the northern US or Canada, where you have sub-zero temperatures from November through March, 0-15 in April/May and Sept/Oct, and 20-30+ the rest of the year (Celsius, of course). With these lower temperatures they could run a data center entirely off outside air from September through May each year. Put a heppa filter in between to scrub out dirt and dust and voilà, au naturel cooling.
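    As a toy illustration of that sizing argument, here is a sketch using made-up monthly mean temperatures (roughly following the ranges above, not real climate data) and an assumed supply-air limit:

```python
# Toy estimate of free-cooling months for a hypothetical northern site.
# The monthly mean temperatures (Celsius) are made-up illustrative values,
# roughly following the ranges mentioned in the comment above.
monthly_mean_c = {
    "Jan": -15, "Feb": -12, "Mar": -5, "Apr": 5,  "May": 12, "Jun": 22,
    "Jul": 27,  "Aug": 26,  "Sep": 14, "Oct": 6,  "Nov": -3, "Dec": -10,
}

FREE_COOLING_LIMIT_C = 18   # assumed max outside-air temperature usable directly

free_months = [m for m, t in monthly_mean_c.items() if t <= FREE_COOLING_LIMIT_C]
print(f"{len(free_months)} of 12 months on outside air: {', '.join(free_months)}")
# With these numbers, only June-August need mechanical cooling,
# which matches the September-May window suggested above.
```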
  • by Anonymous Coward on Thursday September 18, 2008 @05:10PM (#25061723)

    ... in Anchorage you have all the free cooling you want!

  • by MarkusQ ( 450076 ) on Thursday September 18, 2008 @05:15PM (#25061791) Journal

    The result: a slightly higher failure rate -- around 0.6 percent more -- among the air-cooled servers compared to those in the company's main datacenter -- and a potential savings of $2.87 million per year

    The savings should be more than enough to pay for replacement hardware, and even for upgrades. And stepping back and looking at the big picture tells me that there is at least one brilliant person at Intel--whoever thought of doing this study is a genius!

    --MarkusQ
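    For a sense of scale, a quick sketch of that trade-off (the 0.6 percent and $2.87M figures come from the summary; the fleet size and per-server cost are assumptions):

```python
# Rough cost comparison: extra failures vs. the quoted cooling savings.
# The 0.6% delta and $2.87M figure come from the summary; the fleet size
# for a 10 MW facility and the replacement cost per server are assumptions.
SAVINGS_PER_YEAR = 2_870_000     # from the summary
EXTRA_FAILURE_RATE = 0.006       # ~0.6 percentage points more failures
SERVERS_IN_10MW_DC = 25_000      # assumed fleet size for a 10 MW datacenter
COST_PER_SERVER = 5_000          # assumed replacement cost, dollars

extra_failures = SERVERS_IN_10MW_DC * EXTRA_FAILURE_RATE
replacement_cost = extra_failures * COST_PER_SERVER

print(f"extra failed servers:   {extra_failures:.0f}")
print(f"replacement cost:       ${replacement_cost:,.0f}")
print(f"quoted cooling savings: ${SAVINGS_PER_YEAR:,.0f}")
print(f"net benefit:            ${SAVINGS_PER_YEAR - replacement_cost:,.0f}")
# Even with generous replacement costs, the savings dwarf the extra failures.
```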

  • by Todd Knarr ( 15451 ) on Thursday September 18, 2008 @05:19PM (#25061855) Homepage

    Well, it makes sense. Normal PCs run on essentially ambient air, and live for years even under heavy loads (games put a lot of load on systems) despite all the dust and cruft. Servers aren't that different in their hardware, so it makes sense they'd behave similarly. And there's a lot that can be done cheaply to reduce the problems that were seen. Dust, for instance. You can filter and scrub dust from the incoming air a lot more cheaply than running a full-on AC system. In fact, the DX system used on the one side of the test probably scrubbed the incoming air itself, which would explain the lower failure rate there. Reduce the dust and you reduce the build-up of the thermal-insulating layer on the equipment, and keep cooling effectiveness from degrading. Humidity control can also be done more cheaply than full-on AC, and wouldn't have to be complete. I don't think you'd need to hold humidity steady within tight parameters, just keep the maximum from going above, say, 50% and the minimum from going below 5%. Again, I'll bet the DX system did just that automatically. I'd bet you could remove the sources of probably 80% of the extra failures on the free-cooling side while keeping 90% of the cost savings in the process.
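    A minimal sketch of that kind of loose control (the temperature and humidity thresholds are the guesses from the comment above; nothing here is a real facility API):

```python
# Sketch of a loose environmental-control decision along the lines above.
# Thresholds are the comment's guesses; nothing here is a real facility API.

RH_MIN, RH_MAX = 5.0, 50.0        # loose relative-humidity band, percent
OUTSIDE_AIR_LIMIT_C = 32.0        # ~90F: above this, fall back to mechanical cooling

def control_step(outside_temp_c: float, room_rh_pct: float) -> dict:
    """Decide what to run for one control interval."""
    return {
        "use_outside_air": outside_temp_c <= OUTSIDE_AIR_LIMIT_C,
        "run_chiller": outside_temp_c > OUTSIDE_AIR_LIMIT_C,
        "humidify": room_rh_pct < RH_MIN,
        "dehumidify": room_rh_pct > RH_MAX,
        # Dust is handled passively: filters sit in the intake path regardless.
        "filters_in_path": True,
    }

# Example: a hot, dry afternoon -- chiller on, humidifier on, filters always in.
print(control_step(outside_temp_c=35.0, room_rh_pct=3.0))
```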

  • by Bender Unit 22 ( 216955 ) on Thursday September 18, 2008 @05:23PM (#25061925) Journal

    I'd say that they will have to wait longer to get failures. Try to have a server running in that environment for 5 years and then we will see. I would not do it without having some good filters. But as a test it is an interesting experiment.

    For datacenters in colder climates, you can already get cooling systems that cool the water using only outside air when the temperature is below a certain threshold (I just forget the number). When it gets above that level, the water gets cooled the way you normally do it.
    At work, our AC system was old and needed to be replaced, and the new one does that. The outside temperature is low enough that the water will be cooled with just air for half the year.
    It was more expensive to install, since it needed more and bigger cooling units for the air-only mode (I believe they also talked about bigger, slower fans that use less power), but it pays for itself in a few years.

    Another interesting experiment would be to use the heat again. I don't know if the water temperature is high enough that you could use heat exchangers, perhaps as a first step in heating incoming cold water.
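    A quick feasibility-style sketch of that heat-reuse idea (every number here is an assumption for illustration, not a measurement):

```python
# Rough heat-recovery estimate for the "use the heat again" idea.
# All figures are assumptions for illustration.
SERVER_HEAT_KW = 200.0       # assumed heat rejected by the server room, kW
RECOVERY_FRACTION = 0.5      # assumed fraction captured by a heat exchanger
CP_WATER = 4.186             # kJ/(kg*K), specific heat of water
DELTA_T_K = 20.0             # assumed temperature rise of the incoming water

recovered_kw = SERVER_HEAT_KW * RECOVERY_FRACTION
# kW is kJ/s, so mass flow (kg/s) = power / (cp * deltaT)
water_kg_per_s = recovered_kw / (CP_WATER * DELTA_T_K)
litres_per_hour = water_kg_per_s * 3600   # 1 kg of water is about 1 litre

print(f"recovered heat: {recovered_kw:.0f} kW")
print(f"enough to pre-heat about {litres_per_hour:,.0f} L/h of water by {DELTA_T_K:.0f} K")
```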

  • Re:Chimney effects (Score:3, Insightful)

    by TooMuchToDo ( 882796 ) on Thursday September 18, 2008 @05:32PM (#25062075)
    I would think you'd get a vacuum sucking air up the chimney, as 20+ story buildings would have their exhaust exit at almost 200 ft above ground. Winds up there can move pretty quickly, causing the pressure at the chimney exit to be lower, creating suction, no?
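    For a feel for the magnitude, an idealized Bernoulli estimate (the wind speed and air density are assumed, and treating the full dynamic pressure as suction is an upper bound):

```python
# Idealized upper-bound estimate of wind-induced suction at a chimney exit.
# Wind speed and air density are assumed example values.
RHO_AIR = 1.2       # kg/m^3, near sea level
WIND_SPEED = 10.0   # m/s (~22 mph), assumed rooftop wind

dynamic_pressure_pa = 0.5 * RHO_AIR * WIND_SPEED ** 2   # q = 1/2 * rho * v^2
print(f"suction of at most about {dynamic_pressure_pa:.0f} Pa "
      f"({dynamic_pressure_pa / 249:.2f} inches of water column)")
```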
  • by terraformer ( 617565 ) <tpb@pervici.com> on Thursday September 18, 2008 @05:35PM (#25062119) Journal
    I will rephrase your question. Would a 0.6% increase in an already tiny failure risk be noticeable to someone running a single server, when the chances of failure were so small to begin with? No, so yes, it is worth it from a cost perspective. They can take the money they save and replace the hardware twice as fast, and their already small failure rate drops by more than half. This is a win all around. And actually, the article never said what the source of the increased failures was: heat, or particulates in the air. If the latter, this is a huge win for energy efficiency.
  • Re:Makes Sense (Score:3, Insightful)

    by djh101010 ( 656795 ) on Thursday September 18, 2008 @06:00PM (#25062439) Homepage Journal

    What the fuck is heppa?

    It's HEPA. It's an acronym. High-Efficiency Particulate Air.

    Yeah, because the guy not knowing the acronym makes the point he's making completely useless, right? (sheesh). Only thing worse than a speeling flame is a speeling flame with attitude, dude. You might consider chilling out a bit.

  • by ChrisA90278 ( 905188 ) on Thursday September 18, 2008 @06:20PM (#25062729)

    There is another really smart thing you can do too. When it is hot inside and not hot outside, you can open a window. That seems obvious, but how many office buildings have openable windows? For some reason architects like to cool office space with AC even when there is "free" cool air outdoors.

    This is even easier with computers. The servers would be happy to run at 95F and much of the time even in the American SW the outside air is cooler than 95F.

    I've been saying this for many years. I think the reason for resistance is that no one gets a take home pay bonus based on how much power is saved.

  • by Todd Knarr ( 15451 ) on Thursday September 18, 2008 @06:28PM (#25062845) Homepage

    You lose on density, though. Aisle space in front of the racks is fixed, you need a certain amount for humans to move in. Shallow, tall equipment means fewer units per rack. With a current-format rack you need say a 3'-deep area for the rack and a 3'-wide aisle for access. That's 50% equipment and 50% access. If the racks were only 1' deep instead, you'd be using 25% for equipment and 75% for access (since you still need that 3' wide aisle). And in that 25% of space for equipment you now get perhaps 25% of the amount of equipment since each one's using 4x more vertical space in the rack and rack height can't change (it's limited by basically how high off the floor a human can reach to get to the equipment).

    To make up for that, you need more square footage of data center to hold the equipment. That increases operating costs, which is what we're trying to reduce.
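    The parent's floor-space arithmetic, spelled out (the 3-foot depths are the example figures from the comment above):

```python
# Floor-space arithmetic from the comment above, using its example dimensions.
AISLE_DEPTH_FT = 3.0   # access aisle in front of the racks

def equipment_fraction(rack_depth_ft: float, aisle_depth_ft: float = AISLE_DEPTH_FT) -> float:
    """Fraction of a rack-plus-aisle strip of floor occupied by equipment."""
    return rack_depth_ft / (rack_depth_ft + aisle_depth_ft)

print(f"3'-deep racks: {equipment_fraction(3.0):.0%} of the floor is equipment")
print(f"1'-deep racks: {equipment_fraction(1.0):.0%} of the floor is equipment")
# 50% vs. 25% -- and the shallow units also hold less gear per rack,
# so total floor area has to grow to house the same amount of equipment.
```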

  • by virtual_mps ( 62997 ) on Thursday September 18, 2008 @07:02PM (#25063317)

    There is another really smart thing you can do too. When it is hot inside and not hot outside, you can open a window. That seems obvious, but how many office buildings have openable windows? For some reason architects like to cool office space with AC even when there is "free" cool air outdoors.

    The reasons are things like: liability issues, chimney effects, people leaving the windows open even with the heat or a/c on, people leaving the windows open in the rain, bugs in the building, increased maintenance costs for more complex windows, etc. It turns out that architects aren't actually idiots and have thought about this.

    This is even easier with computers. The servers would be happy to run at 95F and much of the time even in the American SW the outside air is cooler than 95F.

    Your desktop can run at 95F ambient. If it has a variable-speed fan, it's probably screaming like a banshee. The key is how much heat can be dissipated, and at 95F ambient you can't move enough air through a dense computer system to cool the components down to a safe temperature. Even at 68F ambient, people have a lot of trouble moving enough air through modern super-dense racks to keep computers from seeing increased failure rates.
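    A rough illustration of the "can't move enough air" point (the rack heat load and the allowable temperature rises are assumed example values):

```python
# How much airflow it takes to carry away a given heat load:
#   P = rho * V * cp * deltaT   ->   V = P / (rho * cp * deltaT)
# The 10 kW rack and the allowed temperature rises are assumed example values.
RHO_AIR = 1.2            # kg/m^3
CP_AIR = 1005.0          # J/(kg*K)
RACK_HEAT_W = 10_000.0   # assumed heat load for a dense rack, watts

def airflow_cfm(heat_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (CFM) needed for a given air temperature rise."""
    m3_per_s = heat_w / (RHO_AIR * CP_AIR * delta_t_k)
    return m3_per_s * 35.315 * 60   # m^3/s -> cubic feet per minute

for delta_t in (20.0, 10.0, 5.0):
    print(f"allowed rise {delta_t:>4.0f} K: {airflow_cfm(RACK_HEAT_W, delta_t):,.0f} CFM")
# Starting from hot intake air, only a small further rise is tolerable before
# components hit their limits, so the required airflow grows very quickly.
```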

  • Reduced life (Score:3, Insightful)

    by nurb432 ( 527695 ) on Thursday September 18, 2008 @07:24PM (#25063645) Homepage Journal

    Well of course Intel wants you to burn your machines up early. They get to sell you the replacement.

  • by NekoXP ( 67564 ) on Thursday September 18, 2008 @08:26PM (#25064353) Homepage

    I'm not sure. $2.87m may be enough to pay for failures, but what if you had to add extra redundancy to the system in the first place to make up for that small number of failures? Extra boxes to maintain, with their own MTBFs... extra space taken up, extra electricity drawn.

    I think a better solution - not as extreme, granted - would be to just turn the aircon temperature dial up a notch or two. Has anyone worked out how much money you'd save in the same datacenter by just doing that?

    http://www.washingtontimes.com/news/2008/jul/30/un-takes-on-hot-air-for-good-of-the-world/ [washingtontimes.com]

    Most datacenters and even little server rooms I've been in have had the dial set to something ridiculous like 65. There's no reason your server room needs to be that cold, at all. You just have to keep it at a reasonable ambient temperature somewhere below the system's maximum rating (most processors will happily run for 5 years at a die temperature of 105C), though you can't blow hot air over a processor and expect it to stay cool.

    So, why not keep your server room at 80, save yourself the 0.6% extra failures, and maybe (at a guess) save $1.3m a year instead of $2.87m?
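    One rough way to put a number on that question (the per-degree savings factor is a commonly cited rule of thumb, not a measured figure, and the baseline cooling bill is assumed):

```python
# Rough estimate of savings from raising the cold-aisle setpoint.
# The ~2% per degree F factor is a commonly cited rule of thumb, not a measurement;
# the baseline cooling bill is an assumed number for a 10 MW-class facility.
BASELINE_COOLING_COST = 3_000_000   # assumed annual cooling cost, dollars
SAVINGS_PER_DEG_F = 0.02            # assumed fractional chiller savings per degree F

def setpoint_savings(current_f: float, new_f: float) -> float:
    """Estimated annual savings from raising the setpoint from current_f to new_f."""
    degrees_raised = max(0.0, new_f - current_f)
    fraction = min(1.0, degrees_raised * SAVINGS_PER_DEG_F)
    return BASELINE_COOLING_COST * fraction

print(f"65F -> 80F: roughly ${setpoint_savings(65, 80):,.0f} per year")
# Less than the quoted free-cooling savings, but with no change in failure rate.
```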

  • by zippthorne ( 748122 ) on Friday September 19, 2008 @12:01AM (#25066561) Journal

    Wear your damn sweater like a man, damnit.

    I've often said, "You can always put on another sweater, but you can't take off more clothes than all of 'em." In most office environments, you probably wouldn't even want people taking off quite that much, anyway, so you set the level at the point where no one has to.

    People like you, and Barack Obama with his "no one needs lower than 78", are making people smelly and uncomfortable to be around.

  • by Enahs ( 1606 ) on Friday September 19, 2008 @12:26AM (#25066801) Journal

    Having very large PSUs is a pain in the ass. Failures tend to be catastrophic and dangerous. They're more expensive to build and maintain. (think basic economy of scale problems)

    That, and they become a single point of failure.

    Having seen a few commodity power supplies fail in the most spectacular manner possible makes me shudder to think that companies are willingly switching to massive power supplies just to save a few bucks.

  • by Shotgun ( 30919 ) on Friday September 19, 2008 @09:29AM (#25070185)

    You replace your hardware every 10 months? Wow!

    10 months seems just shy of the time it takes for heat to really start causing damage. I'm talking about stuff like wire insulation getting brittle, quickly followed by vibration causing shorts. Then there is the increased molecular migration in the silicon of the ICs.

    10 months is NOT a long term study.
