
Intel Shows Data Centers Can Get By (Mostly) With Little AC 287

Posted by timothy
from the googly-attrition dept.
Ted Samson IW writes "InfoWorld reports on an experiment in air economization, aka 'free cooling,' conducted by Intel. For 10 months, the chipmaker had 500 production servers, working at 90 percent utilization, cooled almost exclusively by outside air at a facility in New Mexico. Only when the temperature exceeded 90 degrees Fahrenheit did they crank on some artificial air conditioning. Intel did very little to address airborne contaminants and dust, and nothing at all to deal with fluctuating humidity. The result: a slightly higher failure rate — around 0.6 percent more — among the air-cooled servers compared to those in the company's main datacenter — and a potential savings of $2.87 million per year in a 10MW datacenter using free cooling over traditional cooling."
This discussion has been archived. No new comments can be posted.

  • by Joe The Dragon (967727) on Thursday September 18, 2008 @05:05PM (#25061625)

    How about reducing the need for AC POWER as well by cutting down on the number of AC TO DC PSU's?

    • by MarcQuadra (129430) on Thursday September 18, 2008 @05:14PM (#25061769)

      I asked the president of an engineering firm that I work for about this. He ships racks of boxes, each holding DSP boards on backplanes, and each backplane has its own PSU.

      When I asked him why he doesn't just have one or two -big- power supplies in the unit, he said that he tried that, but the cost of the non-standard PSU was higher than all the ATX PSUs put together, and then some, and replacing the units when they eventually fail would be tricky, as opposed to just stocking more ATX PSUs.

      I agree that it's a good idea, but until there's enough volume of large multi-output PSUs shipping, the cost of manufacture makes the product unworkable (unless you think big-picture and want to spend more up front for power savings over the whole unit's life).

      Generally, the people who use the hardware aren't the ones building it, and buyers usually go for the lowest bid.

      • by Curtman (556920) on Thursday September 18, 2008 @05:27PM (#25061989)
        The fluctuating humidity probably wouldn't be a problem in New Mexico either. The rest of us might have a problem.
        • by pavon (30274)

          The article said they got up to 90% humidity at times. Remember, they didn't have any humidity controls at all, and it does rain in New Mexico resulting in short durations of high humidity.

          I would say that fluctuations in humidity were tested quite well - long term effects of constant humidity, not so much.

        • by neile (139369)

          The original article says humidity fluctuated between 4 and "more than 90%" over the course of the study. If you've never been to New Mexico you've missed out... they get some wicked thunderstorms.

          Neil

        • Re: (Score:3, Informative)

          by spun (1352)

          We have a monsoon season here, in mid summer. Gets pretty humid at times.

        • by b4upoo (166390) on Friday September 19, 2008 @10:29AM (#25070961)

          Computers run hot enough to get rid of moisture, and one assumes that these data centers run around the clock.

          But dust can be lethal to computers, and in particular to power supplies and CPU fans. I clean my PC's guts at least twice a year and what comes out is amazing. Fans are great at collecting dust, and they don't pump much air when coated with dust either.

      • 48 vdc (Score:4, Informative)

        by autocracy (192714) <slashdot2007 AT storyinmemo DOT com> on Thursday September 18, 2008 @05:30PM (#25062041) Homepage
        A company I used to work for (SeaChange International) would ship systems that, in some cases, were large enough to be considered their own datacenter. Some customers would order -48 volt DC power supplies. They'd do their own wiring at the site, having one big AC-DC converter to handle the entire system. They were certainly more expensive than the ATX supplies.
        • Re:48 vdc (Score:5, Informative)

          by silentbozo (542534) on Thursday September 18, 2008 @05:58PM (#25062417) Journal
          One benefit to going DC is that you can wire your battery modules directly into the DC distribution grid for the CPUs (with appropriate charge and cutover circuits), and forgo the inefficiencies in converting AC to DC at the UPS, and then back out again, only to convert the AC back to DC at the CPU.

          Having a voltage commonly used in renewable energy also helps if, for example, you want to feed your datacenter directly from wind or solar, in addition to a set of AC-to-DC converters.
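The conversion-chain argument above can be sketched numerically; the per-stage efficiencies below are illustrative assumptions, not measured figures:

```python
# Compare a double-conversion AC path (AC->DC at the UPS, DC->AC out,
# AC->DC again at each server PSU) against direct -48 VDC distribution.
def chain_efficiency(stages):
    """Overall efficiency of a series of power-conversion stages."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Assumed efficiencies: rectifier 95%, inverter 95%, server PSU 85%.
ac_path = chain_efficiency([0.95, 0.95, 0.85])
# Assumed: one bulk rectifier at 95%, one DC-DC stage at the server at 92%.
dc_path = chain_efficiency([0.95, 0.92])

print(f"AC path: {ac_path:.1%}")  # 76.7%
print(f"DC path: {dc_path:.1%}")  # 87.4%
```

Even with these rough numbers, skipping the intermediate DC-to-AC-to-DC round trip is worth on the order of ten percentage points of overall efficiency.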
        • Re: (Score:3, Interesting)

          by PLBogen (452989)

          My brother works for the campus-level Computing and Information Services at Texas A&M University. They have been moving away from AC power for a while now, and they apparently have almost no heat issues anymore. Meanwhile, the server room in my lab (we run 70 servers), with a dedicated A/C system, is running at 74 degrees right now with the A/C going constantly at full blast.

          • Re: (Score:3, Interesting)

            by Cramer (69040)

            Then your A/C is too small, or the room is not energy efficient ('tho I suspect the rest of the office is cooler.) I had the same issues in our previous office when the building A/C was cut off in the evenings and on weekends -- it's hard to move all that heat with cheap consumer A/C units, and impossible if you don't have a heat exchanger outside the building. (dumping hot air into the plenum only works as long as the building HVAC is on.) The current office has a dedicated 5ton Liebert Challenger 3000 a

        • by Anonymous Coward

          A company I used to work for (SeaChange International) would ship systems that, in some cases, were large enough to be considered their own datacenter. Some customers would order -48 volt DC power supplies. They'd do their own wiring at the site, having one big AC-DC converter to handle the entire system. They were certainly more expensive than the ATX supplies.

          -48V DC is nothing special in many telco applications. Sun equipment (which has been historically popular with telcos (they have lots of NEBS-certified hardware)) has DC power supplies as a standard option on a good portion of their servers.

          Of course many other manufacturers also offer DC P/S options (and NEBS).

          http://www.epanorama.net/wwwboard/messages/1142.html

      • by SuperQ (431) * on Thursday September 18, 2008 @06:41PM (#25063029) Homepage

        Having very large PSUs is a pain in the ass. Failures tend to be catastrophic and dangerous. They're more expensive to build and maintain. (think basic economy of scale problems) They also may not be any more efficient than distributed conversion. You also tend to distribute much lower voltages with DC than you do with AC. (240vac vs 48vdc) This gives very high amperages which requires much thicker wiring. Copper is EXPENSIVE right now, this makes it a big factor in the cap-ex of building a new DC.

        This is why a lot of work is going into improving the efficiency of commodity power supplies. Groups like 80plus.org are doing great things.

        Also some other links:
        http://www.treehugger.com/files/2007/07/secret_efficien.php [treehugger.com]
        http://services.google.com/blog_resources/PSU_white_paper.pdf [google.com]

      • by itzdandy (183397) <dandenson@NOspaM.gmail.com> on Thursday September 18, 2008 @07:57PM (#25064057) Homepage

        If you want to go straight DC, you need to use the economies of scale, not replace AC power supplies with some alternate power scheme that still uses AC on the rack and DC into the server.

        Instead, use large, very efficient AC-DC transformers and wire the rack DC.

        If you convert AC to DC in bulk with more expensive but highly efficient equipment you will save significant money on the power conversion PLUS you can put that transformer outside in its own enclosure with a big metal heat exchanger for a case.

        DC can be stepped down very easily and efficiently, so various voltages are available from the transformer or from a separate step-down box that doesn't create much heat because it is pretty efficient.

        Now, you don't have to worry about the heat from the power supply and don't have to cool for it. You gain savings in efficiency and less AC use.

        Also, the transformer can very easily be cooled; an extremely simple ground loop and small pump can handle that for a few bucks per month.

    • by Gat0r30y (957941)
      I've often wondered about that myself. It seems absurd to have so many little AC to DC PSU's in a data center. Why not just have 1, directly integrated into a backup power supply? What is there in a datacenter that doesn't run on 12V/5V/3.3V DC? It would seem way more efficient, and less costly to me. Not only that, but those PSU's are producing heat too, which only exacerbates the cooling issue (fans for each PSU). Also, it would seem to me an evaporative cooling system instead of AC would be just as
      • I've often wondered about that myself. It seems absurd to have so many little AC to DC PSU's in a data center. Why not just have 1, directly integrated into a backup power supply?

        4 words "Single point of failure"

        • Have more than one of them

        • by Taxman415a (863020) on Thursday September 18, 2008 @07:25PM (#25063655) Homepage Journal

          4 words "Single point of failure"

          You mean like the power circuit that you are already connected to? That single point of failure has long ago been handled. Where the costs can be justified, run more than one power circuit, backup generators and UPS, etc. That's no different.

          I'm personally more interested in the wasteful DC to AC and back conversion when considering small scale solar. Why in the world is the default option to run a wasteful inverter just to plug an AC to DC converter in to that? Almost everything I looked at for portable solar to power a laptop or netbook worked like that. A lot of netbooks could be run on a 10W solar panel with battery backup, or more reliably of course with more solar capacity.

      • Re: (Score:3, Informative)

        by Cramer (69040)

        The lower the DC voltage, the higher the current and line loss. And running 3-4 different voltages throughout the place leads to confusion and much higher costs (4 voltages == 4x the wire.) -48VDC systems have been common for decades... in the telco world. They just haven't been common for computer datacenters.
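The voltage-versus-line-loss point above follows directly from P_loss = I²R; here is a quick sketch with made-up numbers (5 kW load, 0.05 Ω total loop resistance — both assumptions for illustration):

```python
# Resistive loss in the distribution wiring for a fixed load and wire run.
def line_loss_watts(power_w, volts, resistance_ohm):
    current = power_w / volts              # I = P / V
    return current ** 2 * resistance_ohm   # P_loss = I^2 * R

POWER_W, LOOP_R = 5000.0, 0.05  # illustrative assumptions
for volts in (240, 48, 12):
    print(f"{volts:>3} V: {line_loss_watts(POWER_W, volts, LOOP_R):8.1f} W lost")
```

Halving the voltage quadruples the loss for the same wire, which is why low-voltage DC distribution needs much heavier copper for the same run.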

  • Chimney effects (Score:5, Interesting)

    by RollingThunder (88952) on Thursday September 18, 2008 @05:07PM (#25061667)

    I do wonder how things could be improved with a decently sized stack... the higher an exit chimney, the more draw you'll get from the temperature differential. If your computer rooms are near the base of a decent sized office building, and you have a 20 story stack, I'd expect you could get away without any intake or exhaust fans.

    Anyone here that can confirm or deny this?
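As a partial answer, the draft from a stack can be estimated with the standard stack-effect formula; the building height and temperatures below are assumptions for illustration:

```python
# Stack-effect draft pressure: delta_p = C * p_atm * h * (1/T_out - 1/T_in),
# with C = 0.0342 K/m for dry air and temperatures in kelvin.
def stack_draft_pa(height_m, t_out_c, t_in_c, p_atm=101325.0):
    C = 0.0342
    return C * p_atm * height_m * (1.0 / (t_out_c + 273.15) - 1.0 / (t_in_c + 273.15))

# Assume a 20-story (~60 m) stack, 25 C outside, 40 C exhaust air in the stack:
print(f"{stack_draft_pa(60.0, 25.0, 40.0):.1f} Pa")  # about 33 Pa
```

A few tens of pascals of draft is real but small next to the static pressure a typical server fan develops, so a tall stack would help the fans but probably could not replace them outright.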

    • Re: (Score:2, Funny)

      The only problem with this and high-performance computing is latency time between nodes if the height is great enough.

    • by jhfry (829244)

      Wouldn't a chimney cause resistance? I would think having under-floor intake and above-ceiling output, with some exhaust fans in the ceiling space to draw the air through the cabinets and push it outside, would force plenty of air across the equipment.

      If you use a chimney, you essentially would be reducing the volume of air that can be exhausted, but you would be increasing the speed of the air.

    • Add some of the Dyson-vacuum-inspired vortex thingies to the intake to help filter out the dust and you wouldn't have to waste as much money on filters either.

      Or what if you run the incoming air through a swamp cooler? Wouldn't the running water cut down on the incoming dust significantly?

      • by bendodge (998616)

        Yes, but humidity isn't server-friendly.

        • by Xandar01 (612884)

          Umm yeah, I was tripped up by the article mentioning that their test center did ok with the humidity, but the graph shows that it stayed between 10% - 20% Humidity, much lower than the 80% - 90% relative humidity a swamp cooler is going to provide.

          To top it off, it looks like the running water doesn't do much to filter contaminants after all.
          http://en.wikipedia.org/wiki/Evaporative_cooling#Disadvantages [wikipedia.org]

      • Dyson vacuum inspired vortex thingies? Dyson copied the vortex idea from industrial chimneys (and credits this as the source of his idea).
    • Re: (Score:3, Insightful)

      by TooMuchToDo (882796)
      I would think you'd get a vacuum sucking air up the chimney, as 20+ story buildings would have their exhaust exit at almost 200 ft above ground. Winds up there can move pretty quickly, causing the pressure at the chimney exit to be lower, creating suction, no?
      • by N1ck0 (803359)

        Well technically that would be air being Blown out...or more precisely siphoning air up the chimney.

        Air ventilated at ground pressure will be siphoned to the low pressure area at the top of the stack. Combine that with humidity and convection from the heat and you have the Chimney Effect, or Stack Effect (http://en.wikipedia.org/wiki/Stack_effect).

        You could even boost this by adding solar energy collectors (essentially a good light absorbent material to conduct more heat in the chimney), to create a Solar C

    • Re: (Score:3, Funny)

      by jitterman (987991)

      I do wonder how things could be improved with a decently sized stack... ?

      Apparently you haven't checked your spam folder lately; you'll find plenty of answers in there addressing just this question. :)

    • Simpler Tools (Score:5, Interesting)

      by copponex (13876) on Thursday September 18, 2008 @05:48PM (#25062289) Homepage

      Part of the problem is people are looking for very complicated solutions for very simple problems.

      In retrofitting a standalone building, all you really need to do is reduce the amount of heat the building gains from the sun by improving its R-value, and use sensible ducting to draw air through the building. I've seen some super energy-efficient designs where each floor is vented, so that the building is itself a chimney: cool air comes in through vents from covered areas near the base, and the top is sized large enough to pull sufficient air from the bottom, which is also easily aided by fans.

      In building an entirely new datacenter, it would make sense to bury the server rooms, and cover the concrete structure with earth and solar panels. Combined with a flywheel load balancer, you could have an "off the grid" datacenter with the grid for backup. During the daylight hours, especially in the south, the panels can provide a good deal of the A/C and power necessary. At night the flywheel can continue powering the data center for a while, and turn fans without compressors to cool the equipment with night air.

      This can all be done with existing technology. The trick is to convince people that green investment will lead to a return in the long run. I haven't personally looked at average rate increases in electricity, but weighing the extra up-front construction expense of an efficient design against long-term energy price fluctuations probably looks very good.

      • Re:Simpler Tools (Score:5, Informative)

        by More_Cowbell (957742) * on Thursday September 18, 2008 @07:40PM (#25063861) Journal
        Um... depending on your scale, perhaps. How many servers are you talking about here? When the company I work for (largish web host) built its last data center they looked into (in fact purchased some) flywheels. Not for "powering the data center for a while", but to take the place of the giant UPS - just to bridge the gap between a power loss and when the diesel generators kicked in.

        We're talking less than a minute needed. In the end they couldn't use the several large and expensive flywheels because they could not provide power long enough.
        If you're powering your whole data center 'for a while' with these... you must have very few servers (like a handful).

  • Makes Sense (Score:5, Insightful)

    by ironicsky (569792) on Thursday September 18, 2008 @05:08PM (#25061677) Journal
    Makes sense to me. The most efficient place to put data centers is the northern US or Canada, where you have sub-zero temperatures from November to March, temperatures ranging between 0-15 in April/May and Sept/Oct, and 20-30+ the rest of the year (Celsius, of course). With these lower temperatures they could run a data center entirely off outside air from September to May each year. Put a HEPA filter in between to scrub out dirt and dust and voila, au naturel cooling solutions.
    • Re:Makes Sense (Score:5, Interesting)

      by Anonymous Coward on Thursday September 18, 2008 @05:20PM (#25061867)

      I set up a datacenter at my old job in Alberta, and that's exactly what we did.

      We ran exhaust ducting to the offices, and tied intake into the building's cold-air return. From September to May fans moved colder air into the data room and hot air into the office space. June to August we ran the AC, and shut off the "winter lines" with dampers.

      It worked extremely well.

    • Put a HEPA filter in between to scrub out dirt and dust and voila, au naturel cooling solutions

      I think that solution will put Dust Bunnies on the endangered species list

    • by tzhuge (1031302) on Thursday September 18, 2008 @05:37PM (#25062151)
      Us canucks can even use those data centers to heat our igloos. Right now I'm using my Xbox 360, but I think a data center would be much more efficient.
    • I agree that the AC setting really doesn't need to be so aggressively cold, though 90F seems a bit high. 60F is just too inefficient; the idea was that you had excess capacity in case cooling failed, but most rack devices can last months at maybe 80 to 85F with no problems. I wouldn't want it to stay at 90F for long, though.

    • by geekoid (135745)

      Or better:
      Put them in the desert next to a couple of extra areas, build a solar thermal plant and power the things, plus the air conditioning, for free, and sell the remaining 150 MW to the grid.

    • by gravis777 (123605)

      Yeah, because we all know how much New Mexico has sub-zero temps. :-)

      Truthfully, along those lines, you could eliminate tons of things. I lived in Salzburg from January to May of 2001. Didn't have a refrigerator - in the colder months of January to March, I kept cheese, sodas, and sometimes even milk just sitting on the ledge outside my bedroom window. Too bad the cost of heating oil was so high. More than canceled out the savings.

      Nah, what you need is a place with fairly consistent summer / winter temps a

  • by Anonymous Coward on Thursday September 18, 2008 @05:08PM (#25061679)

    I leave my systems on the deck.

  • by MarkusQ (450076) on Thursday September 18, 2008 @05:15PM (#25061791) Journal

    The result: a slightly higher failure rate -- around 0.6 percent more -- among the air-cooled servers compared to those in the company's main datacenter -- and a potential savings of $2.87 million per year

    The savings should be more than enough to pay for replacement hardware, and even for upgrades. And stepping back and looking at the big picture tells me that there is at least one brilliant person at Intel--whoever thought of doing this study is a genius!

    --MarkusQ

    • by jhfry (829244)

      I agree. It's time to overhaul the data center. Here are a few things I would love to change.

      - Cabinet power supplies. Why the hell does every piece of equipment need an AC-DC power supply?

      - Equipment should be cooled by the cabinet rather than requiring its own fans. Simply seal the racks with a partition between the front and back, and force air out the back with several large, redundant, efficient, and quiet fans.

      - Make the cabinets shallower, by at least 1/2, and remove the rear access to them so they

      • by Todd Knarr (15451) on Thursday September 18, 2008 @06:28PM (#25062845) Homepage

        You lose on density, though. Aisle space in front of the racks is fixed, you need a certain amount for humans to move in. Shallow, tall equipment means fewer units per rack. With a current-format rack you need say a 3'-deep area for the rack and a 3'-wide aisle for access. That's 50% equipment and 50% access. If the racks were only 1' deep instead, you'd be using 25% for equipment and 75% for access (since you still need that 3' wide aisle). And in that 25% of space for equipment you now get perhaps 25% of the amount of equipment since each one's using 4x more vertical space in the rack and rack height can't change (it's limited by basically how high off the floor a human can reach to get to the equipment).

        To make up for that, you need more square footage of data center to hold the equipment. That increases operating costs, which is what we're trying to reduce.
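The density argument above reduces to simple floor-area arithmetic (using the 3 ft aisle from the comment; a sketch, not a real layout model):

```python
# Fraction of floor area occupied by equipment for a given rack depth,
# assuming a fixed-width access aisle in front of each row of racks.
def equipment_fraction(rack_depth_ft, aisle_ft=3.0):
    return rack_depth_ft / (rack_depth_ft + aisle_ft)

print(equipment_fraction(3.0))  # 0.5  -> 50% of the floor is rack at 3 ft depth
print(equipment_fraction(1.0))  # 0.25 -> 25% of the floor is rack at 1 ft depth
```

Since the aisle width is fixed by human access, shrinking rack depth shrinks the equipment share of the floor faster than it shrinks the racks themselves.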

    • by NekoXP (67564) on Thursday September 18, 2008 @08:26PM (#25064353) Homepage

      I'm not sure. $2.87m may be enough to pay for failures, but what if you had to add extra redundancy to the system in the first place to make up for that small amount of failures? Extra boxes to maintain, with their own MTBFs.. extra space taken up, extra electricity drawn.

      I think a better solution - not as extreme, granted - would be to just turn the aircon temperature dial up a notch or two. Has anyone worked out how much money you'd save in the same datacenter by just doing that?

      http://www.washingtontimes.com/news/2008/jul/30/un-takes-on-hot-air-for-good-of-the-world/ [washingtontimes.com]

      Most datacenters and even little server rooms I've been in have had the dial set to something ridiculous like 65. There's no reason your server room needs to be that cold, at all. You just have to keep it at a reasonable ambient temperature somewhere below the system's maximum rating (most processors will happily run for 5 years at a die temperature of 105C; you can't blow hot air over a processor and expect it to stay cool, though).

      So, why not keep your server room at 80, save yourself the 0.6% extra failures, and maybe (at a guess) $1.3m a year instead of $2.87m?

    • Re: (Score:3, Insightful)

      by Shotgun (30919)

      You replace your hardware every 10 months? Wow!

      10 months seems just shy of the time it takes for heat to really start causing damage. I'm talking about stuff like wire insulation getting brittle, quickly followed by vibration causing shorts. Then there is the increased molecular migration in the silicon of the ICs.

      10 months is NOT a long term study.

  • by Todd Knarr (15451) on Thursday September 18, 2008 @05:19PM (#25061855) Homepage

    Well, it makes sense. Normal PCs run on essentially ambient air, and live for years even under heavy loads (games put a lot of load on systems) despite all the dust and cruft. Servers aren't that different in their hardware, so it makes sense they'd behave similarly.

    And there's a lot that can be done cheaply to reduce the problems that were seen. Dust, for instance. You can filter and scrub dust from the incoming air a lot cheaper than running a full-on AC system. In fact the DX system used on the one side of the test probably scrubbed the incoming air itself, which would explain the lower failure rate there. Reduce the dust, you reduce the build-up of the thermal-insulating layer on the equipment and keep cooling effectiveness from degrading.

    Humidity control can also be done cheaper than full-on AC, and wouldn't have to be complete. I don't think you'd need to hold humidity steady within tight parameters, just keep the maximum from going above say 50% and the minimum from going below 5%. Again I'll bet the DX system did just that automatically. I'd bet you could remove the sources of probably 80% of the extra failures on the free-cooling side while keeping 90% of the cost savings in the process.

    • by geekoid (135745)

      Um, who plays a highly intensive game 24/7 for years? And no, WoW isn't that intensive.

  • Only ten months? (Score:4, Interesting)

    by ManiaX Killerian (134390) on Thursday September 18, 2008 @05:19PM (#25061859) Homepage

    The standard replacement cycle is about three years, so until they try that, this doesn't mean a lot. Also, what was the density of the data center? I still love the story of a datacenter with some DSLAMs that cooled left to right which were put next to each other in about 12 racks and the rightmost one caught fire once a week...

    Also, I don't know the climate there, but in the regular climate here where it goes between -10 and +35 celsius (that's between 14 and 95 fahrenheit) and there's a good dose of humidity, the failure rate might be somewhat bigger...

    • by iamhigh (1252742) *
      So true! Anyone with a background in unairconditioned manufacturing plants can tell you that new computers do just fine in rough conditions, but after a few years you will get power supply failure rates out the ass! Give them DC power inputs, standardized, please (but you KNOW intel won't do that - they don't even use standardized front panel connectors) and you might see the failure rate reduced even further.
      • Re:Only ten months? (Score:4, Informative)

        by Midnight Thunder (17205) on Thursday September 18, 2008 @05:40PM (#25062187) Homepage Journal

        So true! Anyone with a background in unairconditioned manufacturing plants can tell you that new computers do just fine in rough conditions, but after a few years you will get power supply failure rates out the ass! Give them DC power inputs, standardized, please (but you KNOW intel won't do that - they don't even use standardized front panel connectors) and you might see the failure rate reduced even further.

        Almost all data centers are designed with A/C in mind. This means that as long as A/C is pulling the load, no one needs to worry about well-designed buildings. It's only when you are challenged with having to design for reduced A/C usage that you end up thinking smarter about how passive systems can do the same thing. Another advantage of trying to design without A/C is that you won't find your servers frying because of an air conditioner failure.

        Below are some links on passive solutions to cooling. Some of the techniques are surprisingly old, but effective:
          - http://en.wikipedia.org/wiki/Passive_cooling [wikipedia.org]
          - http://en.wikipedia.org/wiki/Windcatcher [wikipedia.org]
          - http://www.arabrise.org/articles/A040105S.pdf [arabrise.org]
         

  • ...doesn't fluctuate that much, and is nearly always very low. I'd be very curious to see how a similar experiment goes in a place like Florida, that's at least as hot and much more humid.

  • by Ancient_Hacker (751168) on Thursday September 18, 2008 @05:21PM (#25061891)

    If they're paying ten cents a kilowatt-hour, that 10MW data center is paying about $9M/yr for power.
    Cooling systems move about 15 times the power they draw. So the savings for a 10MW datacenter would be around $600K. Wonder how they came up with $2.9M?
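The $9M figure works out from standard kWh arithmetic (assuming a flat $0.10/kWh and 24/7 operation, as in the comment):

```python
# Annual electricity cost of a continuous load.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost_dollars(load_mw, dollars_per_kwh=0.10):
    return load_mw * 1000.0 * HOURS_PER_YEAR * dollars_per_kwh

print(f"${annual_cost_dollars(10):,.0f}")  # $8,760,000 -> roughly $9M/yr
```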

    • Re: (Score:3, Informative)

      What kind of air-conditioning is that? Here the rule of thumb looks like this - 10KW of electricity produce 7KW of heat, and it takes one-third of that (2.333KW) in electricity to move it out. Do you have any sources on this?:)

    • Re: (Score:3, Informative)

      by rcw-home (122017)

      Cooling systems move about 15 times the power than what they draw.

      Not quite. If you're thinking of SEER, it's a bastardized ratio with BTUs/hour on one side and Watts on the other. Since there's 3.413 BTUs/h in one watt, a 15 SEER AC unit moves 4.4 times as much power as it draws (that is, it has a Coefficient of Performance, or COP, of 4.4).
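The unit conversion behind this correction can be written out directly:

```python
# SEER is BTU/h of heat moved per watt of electrical input; since
# 1 W = 3.413 BTU/h, dividing by that factor converts SEER to a
# dimensionless coefficient of performance (COP).
BTU_PER_HR_PER_WATT = 3.413

def seer_to_cop(seer):
    return seer / BTU_PER_HR_PER_WATT

print(round(seer_to_cop(15), 1))  # 4.4
```

So a 15-SEER unit moves about 4.4 W of heat per watt drawn, not 15.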

    • by afidel (530433)
      15 times, not really. Traditional calculations were about 2 to 1, really efficient datacenter cooling today can reach about 5 to 1. I guess their equipment was somewhat older so they were doing about 3 to 1 in their large datacenter.
  • Humidity (Score:3, Informative)

    by Egdiroh (1086111) on Thursday September 18, 2008 @05:22PM (#25061907)
    This is just speculation, but isn't much of New Mexico rather arid? So this study is not actually useful for people who need to build data centers in places more humid than New Mexico, which I think includes most of the places there are actually people.

    But if you are going to allow for an arbitrarily re-locatable data center, what does it matter that it can handle 90-degree weather when you can move it somewhere cold enough that you can have a humidity-controlled room that gets passive cooling from the exterior?
    • Re:Humidity (Score:5, Interesting)

      by pla (258480) on Thursday September 18, 2008 @05:52PM (#25062337) Journal
      So this study is not actually useful for people who need to build data centers in places more humid than New Mexico

      Humidity only really matters for two reasons - if too low, you get a lot of static buildup, and if too high, you get condensation.

      Condensation only tends to happen on objects cooler than ambient, which doesn't really apply to running servers. Static matters a lot more, but you can raise humidity a lot cheaper than you can lower it, so, not as much of an issue there.

      And as a bonus, more humid air can carry away more heat than the same volume of less humid air.
  • by Bender Unit 22 (216955) on Thursday September 18, 2008 @05:23PM (#25061925) Journal

    I'd say that they will have to wait longer to get failures. Try having a server running in that environment for 5 years and then we will see. I would not do it without having some good filters. But as a test it is an interesting experiment.

    For datacenters in colder climates, you can already get cooling systems that cool the water using only air when the temperature is below a certain threshold (I just forgot the number). When it gets above that level the water gets cooled like you normally do.
    At work, our old AC system needed to be replaced and the new one does that. The outside temperature is low enough that the water will be cooled with just air for half the year.
    It was more expensive to install, since air-only operation needed more and bigger cooling units (I believe they also talked about bigger, slower fans that used less power), but it pays for itself in a few years.

    Another interesting experiment would be to use the heat again. I don't know if the water temperature is high enough that you could use heat exchangers, perhaps as the first step in heating incoming cold water.

  • that's 'air conditioning', not just 'air cooling'.

    Sure it's great to have the air cool and all... but I thought that dehumidification was important too?

  • I work at the University of Montana and we talked a bit about direct venting our server rooms. Right now the big push is for ground water cooling. All new buildings on campus must use ground water cooling. Unfortunately, this is starting to hit the wall.

    A fellow sysadmin across campus was having a new server room designed; the tons of cooling for his system just got downrated because the groundwater has been warming up with all the new ground-source cooling wells.

  • Awesome new business idea for someone out there. Now you need to pay $30 before buying a used computer to see if it was in any "air economizers" before the jerks sold it. Nobody wants a computer that's been beat to crap for 3-5 years.
  • Antarctica (Score:5, Interesting)

    by DaMattster (977781) on Thursday September 18, 2008 @05:54PM (#25062361)
    Antarctica would be kind of a neat place for a data center. You have all of the cold air you need and there is enough wind for power. Just have to find a way to keep it stable amidst moving ice.
  • by secmartin (1336705) on Thursday September 18, 2008 @06:14PM (#25062635)
    Minor correction: according to the article the failure rates nearly doubled. There were 1000 servers in a trailer; 500 with and 500 without AC. The ones with AC had a 2.45 percent failure rate, and the ones without 4.46 percent. That's an 80% increase, not 0.6%.
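    The arithmetic behind that correction checks out; a quick sketch using the figures quoted from the article:

    ```python
    # Check the failure-rate figures quoted above.
    ac_rate = 2.45        # percent failed among the 500 AC-cooled servers
    free_rate = 4.46      # percent failed among the 500 free-air-cooled servers

    absolute_increase = free_rate - ac_rate              # percentage points
    relative_increase = (free_rate / ac_rate - 1) * 100  # percent

    print(f"{absolute_increase:.2f} percentage points")  # ~2.01 points
    print(f"{relative_increase:.0f}% relative increase") # ~82%
    ```

    So the summary's "0.6 percent more" matches neither the absolute nor the relative difference in the article's numbers.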

    Sun is also running a comparable experiment with Belgacom and allows you to log in to a live interface to view stats on in- and outlet temperatures and more at http://wikis.sun.com/display/freeaircooling/Free+Air+Cooling+Proof+of+Concept [sun.com] For more details and analysis see http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/ [datacenterknowledge.com] or http://securityandthe.net/2008/09/18/intel-sees-the-future-of-datacenters-and-it-does-not-include-airconditioning/ [securityandthe.net]

    DC Knowledge also has a nice video of this experiment at http://www.datacenterknowledge.com/archives/2008/09/18/video-intels-air-side-economization-test/ [datacenterknowledge.com]

  • Data Center Knowledge has a video [datacenterknowledge.com] in which the Intel engineers who conducted the study talk in detail about the setup and the results.
  • by ChrisA90278 (905188) on Thursday September 18, 2008 @06:20PM (#25062729)

    There is another really smart thing you can do too: when it is hot inside and not hot outside, you can open a window. That seems obvious, but how many office buildings have openable windows? For some reason architects like to cool office space with AC even when there is "free" cool air outdoors.

    This is even easier with computers. The servers would be happy to run at 95F and much of the time even in the American SW the outside air is cooler than 95F.

    I've been saying this for many years. I think the reason for resistance is that no one gets a take home pay bonus based on how much power is saved.

    • by virtual_mps (62997) on Thursday September 18, 2008 @07:02PM (#25063317)

      There is another really smart thing you can do too: when it is hot inside and not hot outside, you can open a window. That seems obvious, but how many office buildings have openable windows? For some reason architects like to cool office space with AC even when there is "free" cool air outdoors.

      The reasons are things like: liability issues, chimney effects, people leaving the windows open even with the heat or a/c on, people leaving the windows open in the rain, bugs in the building, increased maintenance costs for more complex windows, etc. It turns out that architects aren't actually idiots and have thought about this.

      This is even easier with computers. The servers would be happy to run at 95F and much of the time even in the American SW the outside air is cooler than 95F.

      Your desktop can run at 95F ambient, but if it has a variable-speed fan it's probably screaming like a banshee. The key is how much heat can be dissipated, and at 95F ambient you can't move enough air through a dense computer system to cool the components to a safe temperature. Even at 68F ambient, people have a lot of trouble moving enough air through modern super-dense racks to keep computers from seeing increased failure rates.
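      The airflow limit being described is just the sensible-heat equation: heat removed equals mass flow times specific heat times temperature rise. A rough sketch, using assumed sea-level air properties and an illustrative 10 kW rack:

      ```python
      # Rough airflow estimate: heat_w = mass_flow * c_p * delta_T.
      # Air properties are sea-level approximations; the 10 kW rack
      # and temperature rises are illustrative numbers only.
      AIR_DENSITY = 1.2    # kg/m^3, approximate for sea-level air
      AIR_CP = 1005.0      # J/(kg*K), specific heat of air

      def airflow_m3s(heat_w: float, delta_t_c: float) -> float:
          """Volumetric airflow (m^3/s) needed to carry away heat_w
          watts with an inlet-to-outlet temperature rise of delta_t_c."""
          return heat_w / (AIR_DENSITY * AIR_CP * delta_t_c)

      # A 10 kW rack with a 15 C rise needs roughly 0.55 m^3/s of air;
      # halve the allowable rise (hotter inlet air) and airflow doubles.
      print(airflow_m3s(10_000, 15.0))
      print(airflow_m3s(10_000, 7.5))
      ```

      The point the parent makes falls out of the formula: raising the inlet temperature shrinks the usable temperature rise, so the required airflow (and fan noise, and fan power) climbs fast.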

  • I've noticed that 'old-school' telco data centers often seem to be much more sparing with the AC, running 70F-85F, versus the 'high-tech' operators who tend to run 'freeze your ass off' data centers. Something always told me they were on to something (whether it was just being cheap or not).

    Also, google put out that report a few years ago (google: "Failure Trends in a Large Disk Drive Population") and it basically proved that too cold (59F-86F) actually causes more problems early in the drives lives than t

  • Reduced life (Score:3, Insightful)

    by nurb432 (527695) on Thursday September 18, 2008 @07:24PM (#25063645) Homepage Journal

    Well of course intel wants you to burn your machines up early. They get to sell you the replacement.

  • by smchris (464899) on Thursday September 18, 2008 @09:21PM (#25064973)

    EVERYTHING _M_U_S_T_ be air-conditioned at all times. From what we heard from France during their last heat wave a few years ago, air-conditioning isn't universal in the First World. Therefore, it must sound strange that air-conditioning is an inviolate moral imperative in all offices in the US. My wife has a sweater with her at work at all times, even if it is July or August. Same for me. 100% wool. When it is 95 outside and 68 inside, I want nothing more than to hibernate -- like seriously drift off to sleep. I've worn gloves with the fingers cut out in July at my keyboard. I've sneaked in an incandescent lamp to warm my hands (please, sir, just a lump of coal?). I've gotten on my chair and stuffed paper towels in air ducts.

    If management can't see that they are air-conditioning some of their people into productivity loss, not to mention pain, how much more likely are they to reduce air-conditioning on their precious equipment? No, doesn't matter whether one experiment shows it would save big money. The person who suggests reducing air-conditioning in the U.S. will be about as popular at his business as if he had suggested commissioning a portrait of Karl Marx on the lunch room wall. This just isn't a technical issue.
