Intel Shows Data Centers Can Get By (Mostly) With Little AC 287
Ted Samson IW writes "InfoWorld reports on an experiment in air economization, aka 'free cooling,' conducted by Intel. For 10 months, the chipmaker had 500 production servers, working at 90 percent utilization, cooled almost exclusively by outside air at a facility in New Mexico. Only when the temperature exceeded 90 degrees Fahrenheit did they crank on some artificial air conditioning. Intel did very little to address airborne contaminants and dust, and nothing at all to deal with fluctuating humidity. The result: a slightly higher failure rate — around 0.6 percent more — among the air-cooled servers compared to those in the company's main datacenter — and a potential savings of $2.87 million per year in a 10MW datacenter using free cooling over traditional cooling."
How about reducing the need for AC POWER as well.. (Score:5, Insightful)
How about reducing the need for AC POWER as well by cutting down on the number of AC TO DC PSUs.
Re:How about reducing the need for AC POWER as wel (Score:5, Interesting)
I asked the president of an engineering firm that I work for about this. He ships racks of boxes, each holding DSP boards on backplanes, and each backplane has its own PSU.
When I asked him why he doesn't just have one or two -big- power supplies in the unit, he said that he tried that, but the cost of the non-standard PSU was higher than all the ATX PSUs put together, and then some, and replacing the units when they eventually fail would be tricky, as opposed to just stocking more ATX PSUs.
I agree that it's a good idea, but until there's enough volume of large multi-output PSUs shipping, the cost of manufacture makes the product unworkable (unless you think big-picture and want to spend more up front for power savings over the whole unit's life).
Generally, the people who use the hardware aren't the ones building it, and buyers usually go for the lowest bid.
Re:How about reducing the need for AC POWER as wel (Score:5, Informative)
Re: (Score:2)
The article said they got up to 90% humidity at times. Remember, they didn't have any humidity controls at all, and it does rain in New Mexico resulting in short durations of high humidity.
I would say that fluctuations in humidity were tested quite well - long term effects of constant humidity, not so much.
Re:How about reducing the need for AC POWER as wel (Score:4, Informative)
Re: (Score:2)
The original article says humidity fluctuated between 4 and "more than 90%" over the course of the study. If you've never been to New Mexico you've missed out... they get some wicked thunderstorms.
Neil
Re: (Score:3, Informative)
We have a monsoon season here, in mid summer. Gets pretty humid at times.
Re:How about reducing the need for AC POWER as wel (Score:4, Informative)
Computers run hot enough to get rid of moisture and one assumes that these data centers run around the clock.
But dust can be lethal to computers, and in particular to power supplies and CPU fans. I clean my PC's guts at least twice a year and what comes out is amazing. Fans are great at collecting dust, and they don't pump much air when coated with dust either.
48 vdc (Score:4, Informative)
Re:48 vdc (Score:5, Informative)
Having multiples of a voltage commonly used in renewable energy also helps if, for example, you want to feed your datacenter directly from, say, wind or solar in addition to a set of AC-to-DC converters.
Re: (Score:3, Interesting)
My brother works for the campus-level Computing and Information Services at Texas A&M University. They have been moving away from AC power for a while now, and they apparently have almost no heat issues anymore. Meanwhile, the air-conditioned server room in my lab (we run 70 servers), with a dedicated A/C system, is sitting at 74 degrees right now with the A/C running constantly at full blast.
Re: (Score:3, Interesting)
Then your A/C is too small, or the room is not energy efficient ('tho I suspect the rest of the office is cooler.) I had the same issues in our previous office when the building A/C was cut off in the evenings and on weekends -- it's hard to move all that heat with cheap consumer A/C units, and impossible if you don't have a heat exchanger outside the building. (dumping hot air into the plenum only works as long as the building HVAC is on.) The current office has a dedicated 5ton Liebert Challenger 3000 a
-48V DC is pretty standard (Score:3, Informative)
A company I used to work for (SeaChange International) would ship systems that, in some cases, were large enough to be considered their own datacenter. Some customers would order -48 volt DC power supplies. They'd do their own wiring at the site, having one big AC-DC converter to handle the entire system. They were certainly more expensive than the ATX supplies.
-48V DC is nothing special in many telco applications. Sun equipment (which has been historically popular with telcos (they have lots of NEBS-certified hardware)) has DC power supplies as a standard option on a good portion of their servers.
Of course many other manufacturers also offer DC P/S options (and NEBS).
http://www.epanorama.net/wwwboard/messages/1142.html
Re:How about reducing the need for AC POWER as wel (Score:5, Informative)
Having very large PSUs is a pain in the ass. Failures tend to be catastrophic and dangerous. They're more expensive to build and maintain (think basic economies-of-scale problems), and they may not be any more efficient than distributed conversion. You also tend to distribute much lower voltages with DC than you do with AC (240VAC vs 48VDC). That gives very high amperages, which require much thicker wiring. Copper is EXPENSIVE right now, which makes it a big factor in the cap-ex of building a new DC.
This is why a lot of work is going into improving the efficiency of commodity power supplies. Groups like 80plus.org are doing great things.
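To put a rough number on the amperage point above (my own back-of-envelope, with an assumed 10 kW rack load, not anything from the parent):

    # Compare the current needed to deliver the same power at 240 VAC vs 48 VDC,
    # and what that implies for the wiring. Numbers are illustrative assumptions.
    def current_amps(power_w, volts):
        return power_w / volts              # I = P / V, ignoring power factor

    rack_power_w = 10_000                   # assume a 10 kW rack
    for volts in (240, 48):
        print(f"{volts} V: {current_amps(rack_power_w, volts):.0f} A")
    # 240 V: 42 A vs 48 V: 208 A -- roughly 5x the current, so conductors rated for
    # ~5x the ampacity (and ~25x the I^2*R loss in the same wire), hence the copper bill.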
Also some other links:
http://www.treehugger.com/files/2007/07/secret_efficien.php [treehugger.com]
http://services.google.com/blog_resources/PSU_white_paper.pdf [google.com]
Re: (Score:3, Insightful)
That, and they become a single point of failure.
Having seen a few commodity power supplies fail in the most spectacular manner possible makes me shudder to think that companies are willingly switching to massive power supplies just to save a few bucks.
Re:How about reducing the need for AC POWER as wel (Score:5, Interesting)
Aluminum wiring is a FIRE hazard and was BANNED in all new houses in the US due to it. You might be able to get away with it outdoors, but it is most likely a bad idea based on the indoor results.
http://www.physicsforums.com/showpost.php?s=7d306106c574b8acd101e052ab90be42&p=615606&postcount=6 [physicsforums.com]
http://books.google.com/books?id=2edigWaeGPUC&pg=PA175&lpg=PA175&dq=aluminum+wiring+ban&source=web&ots=l0eE26iMkt&sig=rVIgBVl0gXGlJicEHA_qW8s4zY0&hl=en&sa=X&oi=book_result&resnum=4&ct=result [google.com]
In a lot of areas you cannot even get insurance for a building with aluminum wiring in it.
http://en.wikipedia.org/wiki/Aluminum_wiring#Hazard_insurance [wikipedia.org]
Re:How about reducing the need for AC POWER as wel (Score:4, Informative)
That's because aluminum significantly contracts and expands with temperature changes. When it does so in a residential setting, it will cause shorts and sparks and such in outlets and switches. The 1" wire (probably more like a crossbar) was probably specifically designed for electrical use, and had appropriate connectors and so on so that it was NOT a danger (as noted in the physicsforums post you linked to). Given the price of copper anymore, the special work needed for aluminum is possibly worth it.
Re:How about reducing the need for AC POWER as wel (Score:5, Informative)
You're only half right. If you actually read any of the articles you linked to you'd know that.
Aluminum wire by itself is no hazard at all. It just doesn't do well when you connect it to copper or other galvanically dissimilar materials that can cause corrosion. And there are some issues with dissimilar thermal expansion rates, but that's largely dependent on the terminal size and type.
You're right that the standard 14-10 AWG wiring used in homes is typically not aluminum, and that wiring of that size that was aluminum and installed in the 60s and 70s needs to be treated specially.
But aluminum was and still is commonly used in large-gauge wiring, starting around 8 AWG -- the ~2 AWG feed for many homes *is* aluminum. And it's entirely possible to safely wire aluminum, even of smaller gauges, even of older alloy types, so long as you understand the limitations and use CO/ALR-rated devices.
Re:How about reducing the need for AC POWER as wel (Score:4, Informative)
If you want to go straight DC, you need to use the economies of scale, not replace AC power supplies with some alternate power scheme that still uses AC on the rack and DC into the server.
Instead, use large, very efficient AC-to-DC rectifiers and wire the rack DC.
If you convert AC to DC in bulk with more expensive but highly efficient equipment, you will save significant money on the power conversion, PLUS you can put that rectifier outside in its own enclosure with a big metal heat exchanger for a case.
DC can be stepped down very easily and efficiently, so various voltages are available from the rectifier or from a separate step-down box that doesn't create much heat because it is pretty efficient.
Now you don't have to worry about the heat from the power supply and don't have to cool for it. You gain savings in efficiency and less AC use.
Also, the rectifier can very easily be cooled; an extremely simple ground loop and small pump can handle that for a few bucks per month.
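A back-of-envelope comparison of per-server PSUs versus one bulk rectifier feeding DC to the rack; every efficiency figure below is an illustrative assumption, not a measurement from any particular product:

    # Hypothetical efficiencies -- swap in numbers for your own hardware.
    SERVERS = 500
    LOAD_PER_SERVER_W = 300

    per_server_psu_eff = 0.80      # typical older commodity ATX supply (assumed)
    bulk_rectifier_eff = 0.95      # large AC-to-48VDC rectifier (assumed)
    dc_dc_eff          = 0.93      # per-server 48V-to-12V step-down (assumed)

    load_w = SERVERS * LOAD_PER_SERVER_W
    draw_individual = load_w / per_server_psu_eff
    draw_bulk       = load_w / (bulk_rectifier_eff * dc_dc_eff)

    print(f"Per-server PSUs:      {draw_individual / 1000:.1f} kW from the wall")
    print(f"Bulk AC-DC + DC rack: {draw_bulk / 1000:.1f} kW from the wall")
    # ~187.5 kW vs ~169.8 kW here -- and every watt of conversion loss avoided is
    # also a watt the cooling plant doesn't have to remove.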
Re:How about reducing the need for AC POWER as wel (Score:4, Informative)
No. The reason AC was more convenient to move around is the ability to step it up and down with transformers. But in fact line losses are higher for a given voltage with AC than DC, for various reasons (e.g. peak voltage is higher, some of the power radiates). Nowadays, converting DC to DC is about as easy (it goes through a high frequency AC step on the way, however). A switching power supply actually converts AC (60Hz) to DC to AC (tens of kilohertz) to DC.
Re: (Score:2)
Re: (Score:2)
I've often wondered about that myself. It seems absurd to have so many little AC-to-DC PSUs in a data center. Why not just have one, directly integrated into a backup power supply?
4 words "Single point of failure"
Have more than one of them (Score:2)
Have more than one of them
Re: (Score:3, Funny)
Re:How about reducing the need for AC POWER as wel (Score:4, Interesting)
4 words "Single point of failure"
You mean like the power circuit that you are already connected to? That single point of failure has long ago been handled. Where the costs can be justified, run more than one power circuit, backup generators and UPS, etc. That's no different.
I'm personally more interested in the wasteful DC to AC and back conversion when considering small scale solar. Why in the world is the default option to run a wasteful inverter just to plug an AC to DC converter in to that? Almost everything I looked at for portable solar to power a laptop or netbook worked like that. A lot of netbooks could be run on a 10W solar panel with battery backup, or more reliably of course with more solar capacity.
Re: (Score:3, Informative)
The lower the DC voltage, the higher the current and line loss. And running 3-4 different voltages throughout the place leads to confusion and much higher costs (4 voltages == 4x the wire.) -48VDC systems have been common for decades... in the telco world. They just haven't been common for computer datacenters.
Chimney effects (Score:5, Interesting)
I do wonder how things could be improved with a decently sized stack... the higher an exit chimney, the more draw you'll get from the temperature differential. If your computer rooms are near the base of a decent sized office building, and you have a 20 story stack, I'd expect you could get away without any intake or exhaust fans.
Anyone here that can confirm or deny this?
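For what it's worth, here's a back-of-envelope stack-effect estimate; the stack height and temperatures are my own assumptions, not anything from TFA:

    # Rough stack-effect sketch: draft pressure across a tall exhaust stack.
    G = 9.81          # m/s^2
    R_AIR = 287.0     # J/(kg*K), specific gas constant for dry air
    P_ATM = 101_325.0 # Pa

    def air_density(temp_c):
        return P_ATM / (R_AIR * (temp_c + 273.15))

    def stack_draft_pa(height_m, t_outside_c, t_inside_c):
        # Pressure difference driving flow up the stack.
        return G * height_m * (air_density(t_outside_c) - air_density(t_inside_c))

    # A ~20-story stack (assume ~60 m), 25 C outside, 40 C exhaust air:
    print(f"{stack_draft_pa(60, 25, 40):.0f} Pa of draft")
    # -> roughly 30 Pa. Real server fans develop hundreds of Pa across a dense chassis,
    # so a stack can help with room-level exhaust but won't replace the chassis fans.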
Re: (Score:2, Funny)
The only problem with this and high-performance computing is latency time between nodes if the height is great enough.
Re: (Score:2)
Wouldn't a chimney cause resistance? I would think having under-floor intake and above-ceiling output, with some exhaust fans in the ceiling space to draw the air through the cabinets and push it outside, would force plenty of air across the equipment.
If you use a chimney, you essentially would be reducing the volume of air that can be exhausted, but you would be increasing the speed of the air.
Re: (Score:3, Informative)
You're correctly stating the conventional wisdom for properly managing air in a datacenter. However, the whole point was that Intel was doing their cooling with outside air, minimally filtered to see what the effects of disregarding the conventional wisdom might be. So, one way to improve the energy efficiency might be to use a chimney to avoid having to use fans.
combine this with vortex effects (Score:3, Interesting)
Add some of those Dyson-vacuum-inspired vortex thingies to the intake to help filter out the dust and you wouldn't have to waste as much money on filters either.
Or what if you run the incoming air through a swamp cooler? Wouldn't the running water cut down on the incoming dust significantly?
Re: (Score:2)
Yes, but humidity isn't server-friendly.
Re: (Score:2)
Umm, yeah, I was tripped up by the article mentioning that their test center did OK with the humidity, but the graph shows that it stayed between 10% and 20% humidity, much lower than the 80%-90% relative humidity a swamp cooler is going to provide.
To top it off, it looks like the running water doesn't do much to filter contaminants after all.
http://en.wikipedia.org/wiki/Evaporative_cooling#Disadvantages [wikipedia.org]
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
Well technically that would be air being Blown out...or more precisely siphoning air up the chimney.
Air ventilated at ground pressure will be siphoned to the low pressure area at the top of the stack. Combine that with humidity and convection from the heat and you have the Chimney Effect, or Stack Effect (http://en.wikipedia.org/wiki/Stack_effect).
You could even boost this by adding solar energy collectors (essentially a good light-absorbent material to conduct more heat in the chimney) to create a Solar Chimney.
Re:Chimney effects (Score:4, Funny)
Just to clarify, this warm-air evacuation phenomenon is powered by the force of suck.
Actually, according to another commenter to the parent you replied to, it's gone from suck to blow.
Re: (Score:3, Funny)
I do wonder how things could be improved with a decently sized stack... ?
Apparently you haven't checked your spam folder lately; you'll find plenty of answers in there addressing just this question. :)
Simpler Tools (Score:5, Interesting)
Part of the problem is people are looking for very complicated solutions for very simple problems.
In retrofitting a standalone building, all you really need to do is reduce the amount of heat the building gains from the sun by improving its R-value, and use sensible ducting to draw air through the building. I've seen some super energy-efficient designs where each floor is vented so that the building is itself a chimney: cool air comes in from vents in covered areas near the base, and enough opening is provided at the top to pull air up from the bottom, easily aided by fans.
In building an entirely new datacenter, it would make sense to bury the server rooms, and cover the concrete structure with earth and solar panels. Combined with a flywheel load balancer, you could have an "off the grid" datacenter with the grid for backup. During the daylight hours, especially in the south, the panels can provide a good deal of the A/C and power necessary. At night the flywheel can continue powering the data center for a while, and turn fans without compressors to cool the equipment with night air.
This can all be done with existing technology. The trick is to convince people that green investment will lead to a return in the long run. I haven't personally looked at average rate increases in electricity, but the tradeoff between the additional construction expense of an efficient building and long-term energy prices probably looks very good.
Re:Simpler Tools (Score:5, Informative)
We're talking about needing less than a minute. In the end they couldn't use the several large and expensive flywheels because they could not provide power for long enough.
If you're powering your whole data center 'for a while' with these... you must have very few servers (like a handful).
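A quick sanity check on flywheel ride-through; the storage and load figures below are assumptions for illustration, not specs for any particular product:

    # Hedged sketch: ride-through time for a flywheel UPS. The 4 kWh per module and
    # the 500 kW load are assumptions, not measured figures.
    flywheel_kwh = 4.0          # usable stored energy per module (assumed)
    modules = 4
    datacenter_load_kw = 500.0  # a modest data center (assumed)

    ride_through_s = (flywheel_kwh * modules) / datacenter_load_kw * 3600
    print(f"{ride_through_s:.0f} seconds of ride-through")   # ~115 s at 500 kW
    # Enough to bridge to a generator, nowhere near enough to "power the data center
    # for a while" overnight -- which is the parent's point.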
Makes Sense (Score:5, Insightful)
Re:Makes Sense (Score:5, Interesting)
I set up a datacenter at my old job in Alberta, and that's exactly what we did.
We ran exhaust ducting to the offices, and tied intake into the building's cold-air return. From September to May fans moved colder air into the data room and hot air into the office space. June to August we ran the AC, and shut off the "winter lines" with dampers.
It worked extremely well.
Re: (Score:2)
Put a heppa filter in between to scrub out dirt and dust and vola, o'natural cooling solutions
I think that solution will put Dust Bunnies on the endangered species list
Re:Makes Sense (Score:5, Funny)
Re: (Score:2)
I agree that the AC setting really doesn't need to be so aggressively cold, though 90F seems a bit high. 60F is just too inefficient; the idea was that you had excess capacity in case cooling failed, but most rack devices can last months at maybe 80 to 85F with no problems. I wouldn't want it to stay at 90F for long, though.
Re: (Score:2)
Or better.
Put them in the desert next to a couple of extra areas, build a solar thermal plant to power the things plus the air conditioning for free, and sell the remaining 150 MW to the grid.
Re: (Score:2)
Yeah, because we all know how much New Mexico has sub-zero temps. :-)
Truthfully, along those reasonings, you could eliminate tons of things. I lived in Salzburg from January-May of 2001. Didn't have a refrigerator - in the colder months of January - March, I kept cheese, sodas, and sometimes even milk just sitting on the ledge outside my bedroom window. Too bad the cost of heating oil was so high. More than canceled out the savings.
Nah, what you need is a place with fairly consistent summer / winter temps a
Re: (Score:3, Insightful)
What the fuck is heppa?
It's HEPA. It's an acronym. High-Efficiency Particulate Air.
Yeah, because the guy not knowing the acronym makes the point he's making completely useless, right? (sheesh). Only thing worse than a speeling flame is a speeling flame with attitude, dude. You might consider chilling out a bit.
Re:Makes Sense (Score:4, Funny)
And that's why (Score:4, Funny)
I leave my systems on the deck.
What a great study! (Score:4, Insightful)
The savings should be more than enough to pay for replacement hardware, and even for upgrades. And stepping back and looking at the big picture tells me that there is at least one brilliant person at Intel--whoever thought of doing this study is a genius!
--MarkusQ
Re: (Score:2)
I agree. It's time to overhaul the data center. Here are a few things I would love to change.
- Cabinet power supplies. Why the hell does every piece of equipment need an AC-DC power supply?
- Equipment should be cooled by the cabinet rather than requiring its own fans. Simply seal the racks with a partition between the front and back, and force air out the back with several large, redundant, efficient, and quiet fans.
- Make the cabinets shallower, by at least 1/2, and remove the rear access to them so they
Re:What a great study! (Score:5, Insightful)
You lose on density, though. Aisle space in front of the racks is fixed, you need a certain amount for humans to move in. Shallow, tall equipment means fewer units per rack. With a current-format rack you need say a 3'-deep area for the rack and a 3'-wide aisle for access. That's 50% equipment and 50% access. If the racks were only 1' deep instead, you'd be using 25% for equipment and 75% for access (since you still need that 3' wide aisle). And in that 25% of space for equipment you now get perhaps 25% of the amount of equipment since each one's using 4x more vertical space in the rack and rack height can't change (it's limited by basically how high off the floor a human can reach to get to the equipment).
To make up for that, you need more square footage of data center to hold the equipment. That increases operating costs, which is what we're trying to reduce.
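A quick check of that space math, with the same illustrative 3 ft aisle assumption:

    # Floor-area fraction occupied by racks for a given rack depth and a fixed aisle.
    aisle_ft = 3.0

    def equipment_fraction(rack_depth_ft):
        return rack_depth_ft / (rack_depth_ft + aisle_ft)

    print(f"3 ft deep racks: {equipment_fraction(3.0):.0%} of floor area is rack")   # 50%
    print(f"1 ft deep racks: {equipment_fraction(1.0):.0%} of floor area is rack")   # 25%
    # If each shallow unit also takes ~4x the vertical rack space, usable capacity per
    # square foot drops to roughly 1/8 of the conventional layout.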
Re:What a great study! (Score:4, Insightful)
I'm not sure. $2.87m may be enough to pay for failures, but what if you had to add extra redundancy to the system in the first place to make up for that small amount of failures? Extra boxes to maintain, with their own MTBFs.. extra space taken up, extra electricity drawn.
I think a better solution - not as extreme, granted - would be to just turn the aircon temperature dial up a notch or two. Has anyone worked out how much money you'd save in the same datacenter by just doing that?
http://www.washingtontimes.com/news/2008/jul/30/un-takes-on-hot-air-for-good-of-the-world/ [washingtontimes.com]
Most datacenters, and even the little server rooms I've been in, have had the dial set to something ridiculous like 65. There's no reason your server room needs to be that cold at all. You just have to keep it at a reasonable ambient temperature somewhere below the system's maximum rating (most processors will happily run for 5 years at a die temperature of 105C; you can't blow hot air over a processor and expect it to stay cool, though).
So, why not keep your server room at 80, save yourself the 0.6% extra failures, and save maybe (at a guess) $1.3m a year instead of $2.87m?
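To take a rough stab at that question: the 2%-per-degree-F figure below is a rule-of-thumb assumption I'm plugging in, and I'm treating TFA's $2.87M savings figure as a stand-in for the whole traditional cooling bill (i.e. free cooling as roughly zero), so read the output as an order-of-magnitude guess only:

    # All numbers here are assumptions except the $2.87M figure quoted from TFA.
    traditional_cooling_cost = 2_870_000   # ~ annual cooling bill if free cooling is ~zero
    savings_per_degree_f = 0.02            # assumed chiller-efficiency gain per degree F raised
    setpoint_raise_f = 80 - 65             # turning the dial from 65F up to 80F

    savings = traditional_cooling_cost * min(1.0, savings_per_degree_f * setpoint_raise_f)
    print(f"~${savings / 1e6:.1f}M/yr")    # ~$0.9M/yr under these assumptions
    # Less than the $1.3M guess above, but still free money for turning a dial.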
Re: (Score:3, Insightful)
You replace your hardware every 10 months? Wow!
10 months seems just shy of the time it takes for heat to really start causing damage. I'm talking about stuff like wire insulation getting brittle, quickly followed by vibration causing shorts. Then there is the increased molecular migration in the silicon of the ICs.
10 months is NOT a long term study.
Outside air not so harsh (Score:5, Insightful)
Well, it makes sense. Normal PCs run on essentially ambient air, and live for years even under heavy loads (games put a lot of load on systems) despite all the dust and cruft. Servers aren't that different in their hardware, so it makes sense they'd behave similarly. And there's a lot that can be done cheaply to reduce the problems that were seen. Dust, for instance. You can filter and scrub dust from the incoming air a lot cheaper than running a full-on AC system. In fact the DX system used on the one side of the test probably scrubbed the incoming air itself, which would explain the lower failure rate there. Reduce the dust, you reduce the build-up of the thermal-insulating layer on the equipment and keep cooling effectiveness from degrading. Humidity control can also be done cheaper than full-on AC, and wouldn't have to be complete. I don't think you'd need to hold humidity steady within tight parameters, just keep the maximum from going above say 50% and the minimum from going below 5%. Again I'll bet the DX system did just that automatically. I'd bet you could remove the sources of probably 80% of the extra failures on the free-cooling side while keeping 90% of the cost savings in the process.
Re: (Score:2)
Um, who plays a highly intensive game 24/7 for years? And no, WoW isn't that intensive.
Only ten months? (Score:4, Interesting)
The standard replacement cycle is about three years, so until they try that, this doesn't mean a lot. Also, what was the density of the data center? I still love the story of a datacenter with some DSLAMs that cooled left to right which were put next to each other in about 12 racks and the rightmost one caught fire once a week...
Also, I don't know the climate there, but in the regular climate here where it goes between -10 and +35 celsius (that's between 14 and 95 fahrenheit) and there's a good dose of humidity, the failure rate might be somewhat bigger...
Re: (Score:2)
Re:Only ten months? (Score:4, Informative)
So true! Anyone with a background in unairconditioned manufacturing plants can tell you that new computers do just fine in rough conditions, but after a few years you will get power supply failure rates out the ass! Give them DC power inputs, standardized, please (but you KNOW Intel won't do that - they don't even use standardized front panel connectors) and you might see the failure rate reduced even further.
Almost all data centers are designed with A/C in mind. This means that as long as the A/C is pulling the load, no one needs to worry about well-designed buildings. As soon as you are challenged with having to design for reduced A/C usage, you end up thinking smarter about how passive systems can do the same thing. Another advantage of trying to design without A/C is that you won't find your servers frying because of an air conditioner failure.
Below are some links on passive solutions to cooling. Some of the techniques are surprisingly old, but effective:
- http://en.wikipedia.org/wiki/Passive_cooling [wikipedia.org]
- http://en.wikipedia.org/wiki/Windcatcher [wikipedia.org]
- http://www.arabrise.org/articles/A040105S.pdf [arabrise.org]
Humidity in New Mexico... (Score:2)
...doesn't fluctuate that much, and is nearly always very low. I'd be very curious to see how a similar experiment goes in a place like Florida, which is at least as hot and much more humid.
Re: (Score:2)
I've been in northern New Mexico during the monsoon season. It's like being on the east coast in the South in May. In central Virginia, it's common to have 90F+ temperatures and 90%+ humidity every single day for the entire month of August. Coastal Florida is much more brutal.
Of course, it's entirely possible that those environmental differences won't have much of an impact on failure rates, but they should really do a scientific test in such a climate before recommending this technique to everyone.
Numbers don't quite add up! (Score:3, Informative)
If they're paying ten cents a kilowatt-hour, that 10MW data center is paying about $9M/yr for power.
Cooling systems move about 15 times the power they draw. So the savings for a 10MW datacenter would be around $600K. Wonder how they came up with $2.9M?
Re: (Score:3, Informative)
What kind of air conditioning is that? Here the rule of thumb looks like this: 10KW of electricity produces 7KW of heat, and it takes one-third of that (2.333KW) in electricity to move it out. Do you have any sources on this? :)
Re: (Score:3)
10KW of electricity produce 7KW of heat
What happens to the 3KW left?
All EM radiation?
Re: (Score:3, Informative)
Not quite. If you're thinking of SEER, it's a bastardized ratio with BTUs/hour on one side and Watts on the other. Since there's 3.413 BTUs/h in one watt, a 15 SEER AC unit moves 4.4 times as much power as it draws (that is, it has a Coefficient of Performance, or COP, of 4.4).
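Redoing the grandparent's arithmetic with that COP correction (the ten-cents-per-kWh rate and the 15 SEER unit are the same assumptions used upthread):

    # Annual cooling cost for a 10 MW IT load with a COP derived from SEER.
    seer = 15
    cop = seer / 3.413                     # ~4.4: watts of heat moved per watt drawn
    it_load_w = 10e6                       # 10 MW data center
    price_per_kwh = 0.10                   # assumed

    cooling_draw_w = it_load_w / cop
    annual_cost = cooling_draw_w / 1000 * 8760 * price_per_kwh
    print(f"COP {cop:.1f}, cooling bill ~${annual_cost / 1e6:.1f}M/yr")   # ~$2.0M/yr
    # Add fans, pumps, and humidity control and you get into the neighborhood of the
    # $2.87M/yr figure from TFA, so the numbers roughly add up after all.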
Re: (Score:2)
oops, I forgot about the BTU to watts conversion. My bad. The numbers do kinda make sense then.
Re: (Score:2)
Humidity (Score:3, Informative)
But if you are going to allow for an arbitrarily relocatable data center, what does it matter that it can handle 90-degree weather when you can move it somewhere cold enough that you can have a humidity-controlled room that gets passive cooling from the exterior?
Re:Humidity (Score:5, Interesting)
Humidity only really matters for two reasons - if too low, you get a lot of static buildup, and if too high, you get condensation.
Condensation only tends to happen on objects cooler than ambient, which doesn't really apply to running servers. Static matters a lot more, but you can raise humidity a lot cheaper than you can lower it, so, not as much of an issue there.
And as a bonus, more humid air can carry away more heat than the same volume of less humid air.
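To illustrate the condensation point, here's the standard Magnus dew-point approximation; the 25 C / 90% RH scenario is just an example I picked:

    # Magnus dew-point approximation (usual textbook constants).
    import math

    def dew_point_c(temp_c, rel_humidity_pct):
        a, b = 17.27, 237.7
        gamma = a * temp_c / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
        return b * gamma / (a - gamma)

    print(f"{dew_point_c(25, 90):.1f} C")   # ~23 C
    # Only surfaces colder than ~23 C would sweat in 25 C / 90% RH air. Powered-on
    # gear runs well above ambient, which is why condensation mostly isn't the worry.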
They will have to wait longer to get failures (Score:4, Insightful)
I'd say that they will have to wait longer to get failures. Try to have a server running in that environment for 5 years and then we will see. I would not do it without having some good filters. But as a test it is an interesting experiment.
For datacenters in colder climates, you can already get cooling systems that cool the water using air only when the outside temperature is below a certain threshold (I forget the exact number). When it gets above that level, the water gets chilled the normal way.
At work, our old A/C system needed to be replaced, and the new one does exactly that. The outside temperature is low enough that the water will be cooled with just air for half the year.
It was more expensive to install, since air-only operation needed more and bigger cooling units (I believe they also talked about bigger, slower fans that use less power), but it pays for itself in a few years.
Another interesting experiment would be to use the heat again. I don't know if the water temperature is high enough that you could use heat exchangers, perhaps as the first step in heating incoming cold water.
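A hedged sketch of the economizer payback being described; every number here is an assumption for illustration, not a figure from the parent's installation:

    # Water-side economizer payback under assumed numbers.
    chiller_draw_kw = 150.0          # chiller power avoided while in free-cooling mode (assumed)
    free_cooling_fraction = 0.5      # "half the year", per the parent
    price_per_kwh = 0.10             # assumed
    extra_install_cost = 200_000.0   # assumed premium for the bigger coils and fans

    annual_savings = chiller_draw_kw * 8760 * free_cooling_fraction * price_per_kwh
    print(f"~${annual_savings:,.0f}/yr, payback ~{extra_install_cost / annual_savings:.1f} years")
    # ~$65,700/yr and about 3 years -- consistent with "pays for itself in a few years".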
Has anyone at Intel read my Slashdot post (Score:2)
from a few weeks ago?
http://slashdot.org/comments.pl?sid=947231&cid=24787415 [slashdot.org]
I always thought... (Score:2)
that 'air conditioning' was more than just 'air cooling'.
Sure it's great to have the air cool and all... but I thought that dehumidification was important too?
Groundwater cooling works too (Score:2)
I work at the University of Montana and we talked a bit about direct venting our server rooms. Right now the big push is for ground water cooling. All new buildings on campus must use ground water cooling. Unfortunately, this is starting to hit the wall.
A fellow sysadmin across campus was having a new server room designed; the tons of cooling for his system just got derated because the groundwater has been warming up with all the new ground-source cooling wells.
ServerFAX (Score:2)
Antarctica (Score:5, Interesting)
More details and a correction re failure rates (Score:5, Informative)
Sun is also running a comparable experiment with Belgacom and allows you to log in to a live interface to view stats on in- and outlet temperatures and more at http://wikis.sun.com/display/freeaircooling/Free+Air+Cooling+Proof+of+Concept [sun.com] For more details and analysis see http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/ [datacenterknowledge.com] or http://securityandthe.net/2008/09/18/intel-sees-the-future-of-datacenters-and-it-does-not-include-airconditioning/ [securityandthe.net]
DC Knowledge also has a nice video of this experiment at http://www.datacenterknowledge.com/archives/2008/09/18/video-intels-air-side-economization-test/ [datacenterknowledge.com]
Video of Intel Testbed (Score:2)
Why are office windows sealed shut? (Score:3, Insightful)
There is another really smart thing you can do too. When it is hot inside and not hot outside, you can open a window. That seems obvious, but how many office buildings have openable windows? For some reason architects like to cool office space with AC even if there is "free" cool air outdoors.
This is even easier with computers. The servers would be happy to run at 95F and much of the time even in the American SW the outside air is cooler than 95F.
I've been saying this for many years. I think the reason for resistance is that no one gets a take home pay bonus based on how much power is saved.
Re:Why are office windows sealed shut? (Score:4, Insightful)
There is another really smart thing you can do too. When it is hot inside and not hot outside, you can open a window. That seems obvious, but how many office buildings have openable windows? For some reason architects like to cool office space with AC even if there is "free" cool air outdoors.
The reasons are things like: liability issues, chimney effects, people leaving the windows open even with the heat or a/c on, people leaving the windows open in the rain, bugs in the building, increased maintenance costs for more complex windows, etc. It turns out that architects aren't actually idiots and have thought about this.
This is even easier with computers. The servers would be happy to run at 95F and much of the time even in the American SW the outside air is cooler than 95F.
Your desktop can run at 95F ambient. If it has a variable-speed fan it's probably screaming like a banshee. The key is how much heat can be dissipated, and at 95F ambient you can't move enough air through a dense computer system to cool the components down to a safe temperature. Even at 68F ambient, people have a lot of trouble moving enough air through modern super-dense racks to keep computers from seeing increased failure rates.
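A rough airflow sketch of that point; the rack power and temperature-rise numbers are my own illustrative assumptions:

    # How much air does it take to carry X kW of heat away at a given temperature rise?
    CP_AIR = 1005.0      # J/(kg*K)
    RHO_AIR = 1.15       # kg/m^3, warm air (assumed)
    M3S_TO_CFM = 2118.88

    def cfm_needed(heat_kw, delta_t_c):
        mass_flow = heat_kw * 1000 / (CP_AIR * delta_t_c)   # kg/s
        return mass_flow / RHO_AIR * M3S_TO_CFM

    print(f"{cfm_needed(10, 20):.0f} CFM")   # ~900 CFM for a 10 kW rack with a 20 C rise
    print(f"{cfm_needed(10, 10):.0f} CFM")   # ~1800 CFM if the intake air is already warm
    # Warmer intake air shrinks the allowable temperature rise, so the fans have to move
    # a lot more air -- which is why they scream at 95 F ambient.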
Telcos and Google get it right? (Score:2)
I've noticed that 'old-school' telco data centers often seem to be much more sparing with the AC, running 70F-85F, vs. the 'high-tech' data centers that tend to run 'freeze your ass off' cold. Something has always told me they were on to something (whether it was just being cheap or not).
Also, Google put out that report a few years ago (google: "Failure Trends in a Large Disk Drive Population") and it basically proved that too cold (59F-86F) actually causes more problems early in the drives' lives than t
Reduced life (Score:3, Insightful)
Well, of course Intel wants you to burn your machines up early. They get to sell you the replacement.
Never play in the USA (Score:4, Interesting)
EVERYTHING _M_U_S_T_ be air-conditioned at all times. From what we heard from France during their last heat wave a few years ago, air-conditioning isn't universal in the First World. Therefore, it must sound strange that air-conditioning is an inviolate moral imperative in all offices in the US. My wife has a sweater with her at work at all times, even if it is July or August. Same for me. 100% wool. When it is 95 outside and 68 inside, I want nothing more than to hibernate -- like seriously drift off to sleep. I've worn gloves with the fingers cut out in July at my keyboard. I've sneaked in an incandescent lamp to warm my hands (please, sir, just a lump of coal?). I've gotten on my chair and stuffed paper towels in air ducts.
If management can't see that they are air-conditioning some of their people into productivity loss, not to mention pain, how much more likely are they to reduce air-conditioning on their precious equipment? No, doesn't matter whether one experiment shows it would save big money. The person who suggests reducing air-conditioning in the U.S. will be about as popular at his business as if he had suggested commissioning a portrait of Karl Marx on the lunch room wall. This just isn't a technical issue.
Re:What About the Small Guys? (Score:5, Funny)
There are no small guys... especially on
Re: (Score:2)
*&(*&, i feel way behind now, looks like i need to go back to 5th grade...
On the bright side you won't have to be asked whether you're smarter than a 5th grader ;)
Re: (Score:3, Funny)
We all run data centers with 3000 servers and program on apps with 10+ million LOC. We also all built something better than a 3d solar cell in the 5th grade.
Pfft! I achieved a technological singularity 3 years ago. I am the datacenter.
Re:What About the Small Guys? (Score:5, Insightful)
Re:What About the Small Guys? (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
or underground.
Re: (Score:2)
Re: (Score:3, Funny)
Don't worry, no matter how big that cloud is, it will not substantially alter the huge column of superheated air that is already over Washington.
Scientists are studying this phenomenon and preliminary findings show that without this heat contribution, we would actually be in an Ice Age right now.
Re: (Score:2)
Uhhh...no [wunderground.com].
Re: (Score:2)
Re: (Score:2)
A good portion of Canada rarely, if ever, gets above 80F, even on a hot August day, and generally stays below 60F for 10 months of the year.
Sounds like a perfect candidate for venting to the outside.
Why should they? (Score:2)
They're already real cool heads
and they're making real cool bread
Re: (Score:2)
Um... you do realize that AC moves the heat (plus some) into the outside air, it doesn't destroy it, right?
Obviously, the solution is orbital data centers using microwave power links and laser data links. No doubt Google is working on this.
Re: (Score:2)
Do you realise that AC requires electricity to make it work, and that
1. That electricity ends up as extra heat
2. The coal/gas/oil used to generate it releases CO2 into the atmosphere which causes global warming
Re: (Score:2)
AC units aren't some kind of magic cold machines, they are heat pumps. The heat is going to get dumped outside in either case, it's just a question of whether you'll be dumping heat from the servers or heat from the servers + heat from the AC units.
All cooling systems ultimately rely on dumping heat outside(whether into air, water, or whatever), the trick is to spend as little energy on cooling as you can possibly get away with.
Re: (Score:2)
Things that are underground are more likely to get flooded?