Cooler Servers or Cooler Rooms?

mstansberry writes "Analysts, experts and engineers rumble over which is more important in curbing server heat issues: cooler rooms or cooler servers. And who will be the first vendor to bring water back into the data center?"
This discussion has been archived. No new comments can be posted.

  • Why not both? (Score:4, Insightful)

    by tquinlan ( 868483 ) <tom@thomasquinla[ ]om ['n.c' in gap]> on Wednesday April 06, 2005 @10:29AM (#12153735) Homepage
    Unless you make things so cold as to prevent things from working properly, why not just do both?

  • by Saven Marek ( 739395 ) on Wednesday April 06, 2005 @10:30AM (#12153757)
    I've always wondered this. Why duplicate a function across every single server box when it could all be done in the environment? Every server gets its electricity from the server room and its network from the server room, so why not have every server get its cooling from a 10F-chilled server room?

    It makes sense!
  • Cooler Rooms (Score:3, Insightful)

    by forlornhope ( 688722 ) on Wednesday April 06, 2005 @10:30AM (#12153758) Homepage
    I like cooler rooms, especially for a large number of servers. It's more efficient.
  • Err, well, both? (Score:4, Insightful)

    by Pants75 ( 708191 ) on Wednesday April 06, 2005 @10:30AM (#12153763)
    The servers are the heat source and the cool room air is the cooling mechanism? Yes?

    So take your pick. To make the servers cooler, either buy new, more efficient servers or buy a whacking great air-con unit.

    Since the servers are the things that actually do the work, I'd just get feck off servers and a feck off air-con unit to keep it all happy!

    Everyone's a winner!

  • by havaloc ( 50551 ) * on Wednesday April 06, 2005 @10:32AM (#12153788) Homepage
    ...you won't need as much cooling in the room. Easy enough. This will save a ton of money in the long run, not to mention the environment and all that.
  • Re:Why not both? (Score:5, Insightful)

    by Anonymous Coward on Wednesday April 06, 2005 @10:33AM (#12153805)
    Cost.

    The question isn't whether it's good to keep both cool. The question is, which makes more financial sense? Cooling the whole room? Spending the money to purchase servers with a low heat-to-computation ratio?

    Probably a combination. But to say that people should equip every computer with VIA ultra-low-power chips _and_ freeze the server room is silly.
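
To make the cost question in the comment above concrete, here is a rough back-of-the-envelope sketch in Python. Every figure in it (server count, wattages, electricity price, hardware premium, cooling overhead) is an assumption chosen only for illustration, not data from the article.

    # Back-of-the-envelope comparison: beefier room cooling vs. lower-power servers.
    # Every number below is an assumption chosen only to illustrate the trade-off.

    HOURS_PER_YEAR = 8760
    KWH_PRICE = 0.10          # $/kWh, assumed
    COOLING_OVERHEAD = 0.7    # assumed extra watt of cooling per watt of IT load
    SERVERS = 200

    def annual_cost(watts_per_server, hardware_premium_per_server=0.0):
        """Yearly electricity bill (IT load plus cooling) plus amortized hardware premium."""
        it_kw = SERVERS * watts_per_server / 1000.0
        total_kw = it_kw * (1 + COOLING_OVERHEAD)
        energy_cost = total_kw * HOURS_PER_YEAR * KWH_PRICE
        return energy_cost + SERVERS * hardware_premium_per_server / 3.0  # 3-year amortization

    baseline = annual_cost(400)                                     # ordinary 400 W boxes
    low_power = annual_cost(250, hardware_premium_per_server=300)   # cooler boxes, $300 extra each

    print(f"baseline:  ${baseline:,.0f}/yr")
    print(f"low power: ${low_power:,.0f}/yr")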
  • by Eil ( 82413 ) on Wednesday April 06, 2005 @10:33AM (#12153806) Homepage Journal

    Ideally, you should have a cool server and a cool room. The two work in combination. If you have a hot room, then the server isn't going to be able to cool itself very well even with the best heatsinks and well-placed fans. Yes, you could use water cooling, but there are other important bits inside a machine besides whatever the water touches. But a cool room does no good if your servers aren't set up with proper cooling themselves.
  • First thing.. (Score:4, Insightful)

    by Anti Frozt ( 655515 ) <chris.buffett@gmai l . c om> on Wednesday April 06, 2005 @10:33AM (#12153816)

    That comes to mind is that it will probably be vastly cheaper to cool a rackmount specifically than to lower the ambient temperature of an entire room to the point that it has the same effect. However, I'm not entirely sure how well this scales to large server farms and multiple rackmounts.

    I think the best option would be to look at having the hardware produce less heat in the first place. This would definitely simplify the rumbling these engineers are engaged in.

  • Re:Outside air? (Score:4, Insightful)

    by green pizza ( 159161 ) on Wednesday April 06, 2005 @10:36AM (#12153854) Homepage
    > Maybe my ignorance is showing here, but does any installation use outside air for cooling? It seems that it would make sense in places that have cold winters (like here in the midwest).
    You'd need a lot of filtering and/or humidity control to make that a realistic option. Better to make use of the outside air temperature indirectly, which is exactly what your heat-pump loop or your AC cooling tower is for.
  • Re:Why not both? (Score:3, Insightful)

    by FidelCatsro ( 861135 ) <fidelcatsro&gmail,com> on Wednesday April 06, 2005 @10:38AM (#12153890) Journal
    This is exactly the point: you need a damned good balance.
    If I have to be working on the server locally in some fashion, I would rather not be boiling or freezing. In the average working environment we can't have it in a refrigerated room, as this would bring up a lot of working issues, not to mention the design of the room itself. On the other hand, we don't want a drastically complex cooling system that would add another possible avenue for failure.
    The best is probably a nicely air-conditioned room with a nice simple cooling system on the server, good airflow and a comfortable working environment.
  • Cooler servers... (Score:5, Insightful)

    by Aphrika ( 756248 ) on Wednesday April 06, 2005 @10:40AM (#12153904)
    From experience of aircon failing/breaking.

    At least if a server fails it's one unit that'll either get hot or shut down, which means a high degree of business continuity.

    When your aircon goes down you're in a whole world of hurt. Ours went in a power cut, yet the servers stayed on because of the UPSes, so the room temperature rose and the alarms went off. Nothing was damaged, but it made us realise that it's important to have both; otherwise your redundancy and failover plans expand into the world of facilities and operations, rather than staying within the IT realm.
    What I have never understood is why servers virtually always have AC power supplies. Yes, you can get NEBS(?)-compliant servers that take DC, but that isn't really a general option; it's a completely separate model line.

    UPSs take AC, turn it to DC, and charge their batteries. A separate system takes DC from the batteries, inverts it and sends out AC. (Good UPSs, anyway. Otherwise they are "battery backups", not uninterruptible.) Computer power supplies take AC and distribute DC inside the case. WTF?

    Why doesn't APC start selling ATX power supplies? Directly swap out the AC power supplies and have them plug into a DC-providing UPS and/or per-rack (or even per-room) power supplies.

    Electrical codes are a BS excuse. Even if you needed vendor-specific racks, a DC-providing rack is, so far as the fire marshal should be concerned, just a very large blade enclosure, and those are clearly acceptable.

    I can't believe that I'm the first one to ever come up with this idea, so there must be some problem with it.... Some EE want to explain why this wouldn't work?
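
One way to see the appeal of the poster's idea is to multiply the conversion efficiencies along each chain. A minimal sketch, assuming illustrative stage efficiencies rather than figures for any particular UPS or PSU:

    # Double conversion (online UPS feeding an ordinary ATX supply) versus a
    # hypothetical direct-DC distribution. Stage efficiencies are assumptions.

    def chain_efficiency(*stages):
        eff = 1.0
        for s in stages:
            eff *= s
        return eff

    # AC -> DC (UPS rectifier), DC -> AC (UPS inverter), AC -> DC (server PSU)
    double_conversion = chain_efficiency(0.95, 0.93, 0.80)

    # AC -> DC (one big rack-level rectifier), DC -> DC (converter at the server)
    direct_dc = chain_efficiency(0.95, 0.92)

    print(f"double conversion: {double_conversion:.0%} of wall power reaches the board")
    print(f"direct DC:         {direct_dc:.0%}")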
  • Powersupplies (Score:3, Insightful)

    by iamthemoog ( 410374 ) on Wednesday April 06, 2005 @11:00AM (#12154156) Homepage
    In a rack of 1U units, does each 1U slab take 240 volts (or 115 or whatever) and have its own PSU?

    I've often thought it might be nicer if there could be one power supply for a whole room of PCs, for example. This could be placed somewhere outside of the room and cooled there, with appropriate UPS backup too.

    12 and 5 volt lines then feed each PC - no noisy or hot PSU per machine... Peace and quiet, and a bit cooler too...

  • Re:Why not both? (Score:4, Insightful)

    by elgatozorbas ( 783538 ) on Wednesday April 06, 2005 @11:03AM (#12154180)
    > The question isn't whether it's good to keep both cool. The question is, which makes more financial sense? Cooling the whole room? Spending the money to purchase servers with a low heat-to-computation ratio?

    In that case, I would say: cool the room. The room is forever (on this timescale); the servers change maybe every 5 years. Of course, start by NOT choosing the hottest server either, but I would invest in the room.
    Also, I don't expect room cooling techniques to improve significantly in the next few years. Servers hopefully will.

  • by jhines ( 82154 ) <john@jhines.org> on Wednesday April 06, 2005 @11:06AM (#12154227) Homepage
    Telecom equipment runs off -48VDC, and the phone company uses big batteries as their UPS.

    It exists; it's just expensive.
  • Re:Why not both? (Score:4, Insightful)

    by Anonymous Luddite ( 808273 ) on Wednesday April 06, 2005 @11:19AM (#12154380)
    >> I would say: cool the room.

    I think you're right. That's the way we do it, but (there's always a but) some cabinets still get damn hot depending on what's in the rack. Sometimes you need to do spot cooling as well, or put in bigger fans to keep the equipment closer to ambient.

    I think starting with a cool room is the most cost effective way though - not to mention it makes work "O.K." in August...
  • Re:Why not both? (Score:1, Insightful)

    by ReeprFlame ( 745959 ) <kc2lto@SOMETHINGgmail.com> on Wednesday April 06, 2005 @11:20AM (#12154394) Homepage
    Both need to be done to an extent. The priority should be the machine itself, since that is where the heat issues are. Also, cooling a room is a lot more time-intensive and expensive. Think about it: you have the entire volume of the room to cool, with much of that air being void and useless. By the time the cool air reaches the CPU and components, it will not be doing much work cooling the system at all. That is why heat should be removed from the components directly and released into air that is moderately cool [to remove overall heat]. That would be an effective system. Better yet [although not very feasible], run water cooling to the systems and put the heat exchanger in another room so you need not cool the room at all...
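
For what it's worth, the water-cooling option mentioned above needs surprisingly little flow to move a rack's worth of heat. A quick sketch using the specific heat of water; the 10 kW rack load and 10 K temperature rise are assumed example numbers.

    # Water flow needed to carry away a given heat load: Q = m_dot * c_p * dT
    HEAT_LOAD_W = 10_000     # assumed: one 10 kW rack
    CP_WATER = 4186          # J/(kg*K), specific heat of water
    DELTA_T = 10             # K rise from inlet to outlet, assumed

    mass_flow = HEAT_LOAD_W / (CP_WATER * DELTA_T)   # kg/s
    litres_per_min = mass_flow * 60                  # roughly 1 kg of water per litre

    print(f"{mass_flow:.2f} kg/s  (~{litres_per_min:.1f} L/min)")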
  • Re:Why not both? (Score:4, Insightful)

    by FireFury03 ( 653718 ) <slashdot@NoSPAm.nexusuk.org> on Wednesday April 06, 2005 @11:21AM (#12154404) Homepage
    > The question isn't whether it's good to keep both cool. The question is, which makes more financial sense? Cooling the whole room? Spending the money to purchase servers with a low heat-to-computation ratio?

    You don't need to cool the whole room - you could just cool the cabinets. Most cabinets have doors and sides, an open bottom and fans at the top. So you can blow cold air up the inside of the data cabinet (which is what most datacentres do anyway) and take the air from the top to recycle it, with reasonably minimal air (and hence heat) exchange with the rest of the room.
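
Cooling at the cabinet, as described above, comes down to moving enough air through it. A minimal sketch of the sensible-heat airflow estimate; the per-cabinet load and the allowed temperature rise are assumptions.

    # Airflow needed to carry a cabinet's heat away at a given air temperature rise.
    # Sensible heat: Q = rho * cp * V_dot * dT  ->  V_dot = Q / (rho * cp * dT)
    HEAT_LOAD_W = 5000    # assumed: a 5 kW cabinet
    RHO_AIR = 1.2         # kg/m^3, air density near sea level
    CP_AIR = 1005         # J/(kg*K), specific heat of air
    DELTA_T = 12          # K rise from intake to exhaust, assumed

    v_dot = HEAT_LOAD_W / (RHO_AIR * CP_AIR * DELTA_T)   # m^3/s
    cfm = v_dot * 2118.88                                # cubic feet per minute

    print(f"{v_dot:.2f} m^3/s  (~{cfm:.0f} CFM)")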
  • by warpSpeed ( 67927 ) <slashdot@fredcom.com> on Wednesday April 06, 2005 @11:28AM (#12154493) Homepage Journal
    > if your single AC to DC converter failed everything would go down

    Assuming that you only have one converter. The nice thing about AC to DC conversion is you can have multiple AC converters all feeding the same DC voltage to a single set of conductors to run the DC power out to the machines. The converters can even be out of phase. If the power conversion system is designed right, any one or two converters can fail, be disconnected from the power feed, and the remaining good converters will pick up the slack.
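
A tiny sketch of the sizing logic described above: install enough parallel rectifier modules that the DC bus stays up with one or two of them failed. The module rating and total load are assumed numbers.

    import math

    # N+M rectifier sizing: enough parallel modules that the bus survives M failures.
    LOAD_KW = 40            # assumed total DC load
    MODULE_KW = 10          # assumed rating of each AC->DC rectifier module
    REDUNDANT_SPARES = 2    # tolerate two failed modules

    needed = math.ceil(LOAD_KW / MODULE_KW) + REDUNDANT_SPARES
    surviving_capacity = (needed - REDUNDANT_SPARES) * MODULE_KW

    print(f"install {needed} modules; with {REDUNDANT_SPARES} failed, "
          f"{surviving_capacity} kW remains for a {LOAD_KW} kW load")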

  • by timster ( 32400 ) on Wednesday April 06, 2005 @11:31AM (#12154521)
    I don't think the power supplies contribute a really major portion of the heat to servers these days. It's all about disks and processors.

    As for power consumption, I don't see how converting the power outside the rack uses less power than converting it inside the rack. And it won't improve reliability, since each server will still need a power supply; it will just be a DC-DC one. I don't think you can (reasonably) run a 500-watt power line at 12 volts. Not to mention that you need more than one voltage.
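
The parent's point about a 500-watt feed at 12 volts is easy to check with Ohm's law. The cable run and copper resistance below are assumed figures, not specs for any real installation.

    # Why low-voltage DC distribution gets awkward: I = P / V, loss = I^2 * R.
    POWER_W = 500
    VOLTAGE = 12
    LOOP_LENGTH_M = 10          # assumed total out-and-back conductor length
    OHMS_PER_M = 0.005          # assumed, roughly 12 AWG copper

    current = POWER_W / VOLTAGE                    # about 42 A
    drop_v = current * OHMS_PER_M * LOOP_LENGTH_M
    loss_w = current**2 * OHMS_PER_M * LOOP_LENGTH_M

    print(f"{current:.0f} A, {drop_v:.1f} V drop, {loss_w:.0f} W lost in the cable")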
  • by kesuki ( 321456 ) on Wednesday April 06, 2005 @12:03PM (#12154970) Journal
    However, the argument for putting the battery backup directly into a power supply is unaffected by this statement.... Also, an ATX supply with a DC input, paired with an external UPS with a DC output and a cord shorter than 10 meters, has minimal issues.
    By having the conversion take place in the UPS you also shift roughly 20% of the heat generated by a modern PC away from the enclosure and into an external device. (Compare the PS1 and PS2, which have an internal converter and whose early models had serious overheating issues, with other consoles that placed the conversion in an external device; even if that converter sits right at the console connection, it removes heat from inside the console and lets it dissipate outside it.)
  • Re:Why not both? (Score:3, Insightful)

    by drgonzo59 ( 747139 ) on Wednesday April 06, 2005 @12:48PM (#12155657)
    Not just cost but also reliability (which in the end is also a cost issue). Having water flow among the CPUs, hard drives and all the other components is just asking for trouble. In case of a leak it won't be just one damaged CPU or one memory stick, which the system could compensate for and keep going; the whole box or maybe the whole rack might be ruined.

    This doesn't seem like an either/or situation or a large research question. A cost and reliability analysis should determine what is better for each individual setup.

  • by sk8king ( 573108 ) on Wednesday April 06, 2005 @01:31PM (#12156254)
    We need to do everything we can to reduce the power required for all our electronic gear. More powerful servers [computation-wise] require more power [electricity-wise], which then requires more power [electricity-wise] for the air conditioning. If we could get servers that somehow consumed less power [a lot less], we would win on two fronts.
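
The "win on two fronts" arithmetic can be made explicit: every watt shaved off the servers also removes the power the air conditioning would have spent rejecting that heat. The cooling coefficient of performance (COP) below is an assumed value.

    # Each watt saved at the server also saves the cooling power needed to remove it.
    SAVED_IT_KW = 20       # assumed reduction in server load
    COOLING_COP = 3.0      # assumed: watts of heat moved per watt of cooling power

    cooling_savings = SAVED_IT_KW / COOLING_COP
    total_savings = SAVED_IT_KW + cooling_savings

    print(f"save {SAVED_IT_KW} kW of IT load -> ~{total_savings:.1f} kW at the utility meter")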
  • Re:Why not both? (Score:3, Insightful)

    by hey! ( 33014 ) on Wednesday April 06, 2005 @02:28PM (#12156964) Homepage Journal
    It's pretty simple to figure this out with current-day prices. You just crunch the numbers and see which mix of investments wins. The place where it gets interesting is the place it always gets interesting: predicting the future. Will energy prices rise, and if so how quickly? If you expect them to double in a hundred years, then you can probably work from present-day costs; the net present value of those future savings is nil. If you expect them to double in three years, then you have to do the operating-cost calculations differently, as you would if you expected dramatic price drops soon.

    Another factor to consider: what value does price predictability have to your organization? Suppose that building and operating an energy-efficient data center is within the financial parameters that make sense for your organization, but you could save money by building a gas-guzzler. What is the likelihood that near- to mid-term energy price rises could make operating the gas-guzzler impractical? What would the impact be on other aspects of your business? If you assigned an expected cost to this scenario, does it offset the savings?

    Finally, there is an ethical/public-image issue. Is it acceptable to waste energy? If we are going to be efficient, can we get some secondary benefits out of it?
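
The net-present-value point in the comment above can be made concrete with a short sketch: the same annual energy saving is worth very different amounts depending on how fast prices rise. The saving, horizon and discount rate are assumed numbers.

    # NPV of future energy savings under different price-growth assumptions.
    def npv_of_savings(annual_saving, years, price_growth, discount_rate):
        """Present value of a saving that grows with the electricity price."""
        return sum(
            annual_saving * (1 + price_growth) ** t / (1 + discount_rate) ** t
            for t in range(1, years + 1)
        )

    SAVING = 10_000    # $/yr at today's prices, assumed
    YEARS = 10
    DISCOUNT = 0.08    # assumed cost of capital

    for growth in (0.007, 0.26):   # prices doubling in ~100 years vs. ~3 years
        value = npv_of_savings(SAVING, YEARS, growth, DISCOUNT)
        print(f"price growth {growth:.1%}/yr: NPV ${value:,.0f}")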
  • by Critical Facilities ( 850111 ) on Wednesday April 06, 2005 @05:59PM (#12159451)
    UPSes do not run exclusively off of DC. You are correct that they convert AC to DC, route it through the battery strings, and then invert it back to AC. While this does generate some heat, it is NOTHING compared to the server racks. I've worked in datacenter environments for several years now, and I can say that one of the biggest foes of efficient cooling is poor space planning.

    I've never seen people so difficult to communicate with as hardware planning people. You would be amazed at how much better a computer room gets cooled when the equipment is installed properly in a "hot aisle/cold aisle" configuration. Also, vendors and hardware folks don't like having it pointed out when they're not doing things they should, like making sure not to install a top-discharge cabinet on the edge of a cold aisle right next to a front-intake cabinet, or installing plenums inside the cabinets as some vendors recommend.

    A combination of good space/hardware planning and honesty and communication in determining potential heat loads are probably the 2 biggest factors in keeping a computer space cold, IMHO. No one's being helped by just guessing at what a rack full of SunFire servers is going to put out in terms of heat; find out from the manufacturer. And don't feel that your engineering staff is trying to tell you how to do your job or piss you off by letting you know that a rack you've installed is disrupting airflow. We're all in this together, remember?
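
On the "find out from the manufacturer" point: once you have a rack's electrical draw from the vendor, the cooling load follows directly, since essentially every watt drawn ends up as heat in the room. The rack wattage below is an assumed example.

    # Nearly every watt a rack draws becomes heat the room cooling must remove.
    RACK_WATTS = 8000                       # assumed figure from a vendor spec sheet

    btu_per_hr = RACK_WATTS * 3.412         # 1 W = 3.412 BTU/hr
    tons_of_cooling = btu_per_hr / 12_000   # 1 ton of cooling = 12,000 BTU/hr

    print(f"{RACK_WATTS} W -> {btu_per_hr:,.0f} BTU/hr -> {tons_of_cooling:.1f} tons of cooling")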
