Cooling Bags Could Cut Server Cooling Costs By 93%

judgecorp writes "UK company Iceotope has launched liquid-cooling technology which it says surpasses what can be done with water or air-cooling and can cut data centre cooling costs by up to 93 percent. Announced at Supercomputing 2009 in Portland, Oregon, the 'modular Liquid-Immersion Cooled Server' technology wraps each server in a cool-bag-like device, which cools components inside a server, rather than cooling the whole data centre, or even a traditional 'hot aisle.' Earlier this year, IBM predicted that in ten years all data centre servers might be water-cooled." Adds reader 1sockchuck, "The Hot Aisle has additional photos and diagrams of the new system."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by captaindomon ( 870655 ) on Tuesday November 17, 2009 @11:15AM (#30129996)
    That's really nifty, and I'm sure it works ok and everything, but... how much does it cost?
    • by jgtg32a ( 1173373 ) on Tuesday November 17, 2009 @11:43AM (#30130378)
      About 7% as much as whatever you are using today
    • Re: (Score:3, Informative)

      by jaggeh ( 1485669 )

      That's really nifty, and I'm sure it works ok and everything, but... how much does it cost?

      Figures cited by Iceotope show that the average air-cooled data centre with around 1000 servers costs around $788,400 (£469,446) to cool over three years. The Iceotope system claims to eliminate the need for CRAC units and chillers by connecting the servers in the synthetic cool bags to a channel of warm water that transfers the heat outside the facility. This so-called “end to end liquid” cooling means that a data centre, fully equipped with Iceotope-cooled servers, could cut cooling costs to just $52,560 - a 93 percent reduction, the company states.

      Taking the above figures into account, as long as the cost to install is under the 200k figure, there's an incentive to switch.
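A quick back-of-the-envelope check of the figures quoted above (the two three-year totals come from the comment; everything else is just arithmetic, not anything from Iceotope):

```python
# Sanity check of the quoted cooling costs for ~1000 servers over three years.
air_cooled_cost = 788_400   # USD over three years, air-cooled (quoted)
iceotope_cost = 52_560      # USD over three years, claimed by Iceotope

savings = air_cooled_cost - iceotope_cost
reduction = savings / air_cooled_cost

print(f"Savings over three years: ${savings:,}")   # $735,840
print(f"Reduction: {reduction:.1%}")               # 93.3%
print(f"Savings per year: ${savings / 3:,.0f}")    # $245,280
```

So the claimed 93 percent checks out, and the break-even install cost is the total savings, not just the first year's.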

    • by rhyno46 ( 654622 )
      It's cheap. Only 93% of whatever you are paying now.
    • by Smidge204 ( 605297 ) on Tuesday November 17, 2009 @12:05PM (#30130664) Journal

      The idea that the mainboard components are sealed inside a liquid-filled compartment seems like a major point against the system. Extra proprietary vendor lock-in components mean extra costs of owning and operating, which probably offset any savings from cooling... if any.

      I'm skeptical that it will significantly reduce cooling costs (compared to, say, a chilled cabinet system) because the total cooling load stays the same. If you're generating a billion BTUs of heat, you still need to remove a billion BTUs of heat. Any savings will only be from the higher energy densities water allows versus air, and maybe initial installation.

      Plus, based on their exploded view, there are no fewer than three heat exchanges before the heat even gets out of the cabinet: chip to liquid (via heat sink), submersion liquid to module liquid, and module liquid to system liquid. Each time you go through an exchange, your required temperature gradient goes up.
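The stacked-exchange point can be sketched numerically. The per-stage temperature deltas below are made-up illustrative values, not Iceotope specs:

```python
# Each heat-exchange stage needs its own temperature difference to drive
# heat flow, and the differences stack: the chip must sit above the final
# heat-rejection temperature by the SUM of all per-stage deltas.
# All delta values are illustrative assumptions.
stages = {
    "chip -> submersion liquid (via heat sink)": 10.0,  # deg C (assumed)
    "submersion liquid -> module liquid": 5.0,          # deg C (assumed)
    "module liquid -> system water loop": 5.0,          # deg C (assumed)
}

outside_water_temp = 30.0  # deg C, assumed heat-rejection temperature
chip_temp = outside_water_temp + sum(stages.values())
print(f"Required chip temperature: {chip_temp:.0f} C")  # 50 C
```

Remove one exchange stage and the chip can run that much cooler, or the rejection water that much warmer.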

      What they need is a system that is compatible with commodity components, to leverage low-cost hardware against lower-cost cooling. Why not fit water blocks directly to existing mainboard layouts and circulate chilled water from the main loop directly through them via manifolds and a pump at each rack? You can still enclose the mainboard and cooling block in a sealed, insulated compartment to eliminate condensation problems, but not being submerged means you can actually repair/upgrade the modules.
      =Smidge=

      • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday November 17, 2009 @12:29PM (#30130966) Journal
        Their demo, at least, seems to be aimed at blades, so the inability to just slap any old motherboard into the system would not be a significant change.

        As for water blocks, I suspect that all the various minor chips in the system would be problematic. Even if your cooling of the CPU, or even the top 3-5 chips by thermal output, is perfect, there are loads of other components that will heat up and die without airflow: CPU voltage regulators, northbridge, RAM, RAID controllers, ethernet, etc. You can't waterblock them all (at least not without a serious redesign that makes using commodity components impossible, or a plumbing scheme that would make Escher wince). Either you go with a hybrid waterblock/conventional air cooling system, which gives you the vices of both, or you have to go with the fluid bath as in this setup.
        • Well, what I had in mind is a flat plate (say, aluminum) with water channels in it. On this plate there are two or three protrusions, matching the main chip locations that need cooling, that are milled to physically contact the chips just like discrete heat sinks would.

          You attach the mainboard to this plate like you would attach it to the inside of a normal computer case, only backwards, e.g. the screws go through from the back side instead of the component side. This puts the components very close to, if not

        • by Korin43 ( 881732 )
          No, see, all you need is 10,000 gallons of mineral oil, a waterproof server room, and a couple of rebreathers...
      • by dindi ( 78034 )

        I believe that directly cooling components via liquid is way more effective than pushing some air around.

        Think of air-cooled (loud and ineffective) vehicles compared to modern liquid-cooled vehicles, which circulate liquid inside the engine (not the combustion chamber, of course)...

        I agree about the extra cost of the technology; however, you could still use the same components if you e.g. submerge things in oil, which does not harm components and does not conduct electricity.

  • Ugh. (Score:3, Interesting)

    by Pojut ( 1027544 ) on Tuesday November 17, 2009 @11:16AM (#30130006) Homepage

    For some reason, the filters at work won't let me view the article. Does it happen to mention how much the upfront cost for these bags is?

    • Re: (Score:2, Insightful)

      Like the unpriced bottle of wine at Applebees. If you have to ask...

      • by Pojut ( 1027544 )

        It was more of a curiosity thing :-)

        I'm wondering about the upfront costs vs. money saved over time after the initial investment.

      • by cromar ( 1103585 )
        Don't order wine at Applebee's!!
      • What is "bottle of wine at Applebees" in Library of Congresses? thx.
    • No mention of cost in the articles I skimmed; however, no mention of cool bags either. Actually I'm more reminded of Pelican cases [thepelicanstore.com] than cool bags [made-in-jiangsu.com]. What they're doing is immersing a motherboard in an inert synthetic liquid, and sealing that in one half of a hard shell. They're running coolant water through the other half of the hard shell through a distribution unit in the rack. All of the coolant water runs through a heat exchanger, which is connected to the building's water cooling system.

      So: sealed l
      • by Pojut ( 1027544 )

        Sounds good to me.

        I'm still waiting for the day when it is feasible (physically and economically) to lay down small pipes for coolant directly onto a PCB or between PCB layers. That will bring along the true cooling revolution!

        • You don't even need to do that... just make a motherboard with a plate behind it that shares the same mounting holes and has a gap for water. Seal it, fill it with water, and you have the same thing. If you want to put in transfer "ports" for CPU cooling blocks you can. A motherboard manufacturer could do that now and include a CPU and chipset block with some standard nozzles for connecting GPU block hoses. Drop in a small pump and external heat sink and they could sell it to gamers and server builders

        • Why bother? It's far cheaper and more effective to just dunk the whole motherboard in coolant. After all, it's not the PCB that gets hot, but the chips. PCB material is probably a fairly poor heat conductor anyway.
    • Weird that your filters are malfunctioning. But anyway, these cool new bags are only currently available through barter, in exchange for 2 kiddie porn magazines plus one copy of michaelangelo virus.
      • by Pojut ( 1027544 )

        Not really, things are rather draconian around here...pharmaceutical call center. Oddly enough, Slashdot has always been accessible...It's likely because of someone in IT, lulz.

  • TFA mentions using the excess heat to heat the building. I wonder how feasible it would be to actually recycle the heat to generate more power? Anyone have an idea on how much heat could be generated by your typical server farm?

    • Re:Excess Heat (Score:4, Insightful)

      by von_rick ( 944421 ) on Tuesday November 17, 2009 @11:22AM (#30130088) Homepage
      In winter you'd get quite a few kilowatt hours worth of heating if you route the dissipated heat properly.
    • by afidel ( 530433 )
      Not much at all, delta-t is too low to get any real efficiency.
    • Very little, since you're dealing with very low quality heat. The hottest temp in your system is going to be the hardware itself (unless you're expending energy to pump it - then what's the point of trying to generate power from it?)

      So if your max hardware temp is, say, 38C (100F) that's not good enough to generate any appreciable power from.

      On the other hand, you probably will be pumping the heat to chill the system, and the rejected heat temp may be quite a bit higher - maybe as high as 75C. You can use t
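The "low quality heat" argument above is just the Carnot limit on heat engines. A quick sketch using the temperatures from the comment (the 20 C ambient sink is my assumption):

```python
# Carnot limit on converting waste heat to work: eta = 1 - T_cold / T_hot,
# using absolute temperatures. Hot-side figures come from the comment above;
# the 20 C cold sink is an assumed ambient.
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

print(f"{carnot_efficiency(38, 20):.1%}")  # ~5.8% ideal at 38 C hardware temp
print(f"{carnot_efficiency(75, 20):.1%}")  # ~15.8% ideal at 75 C rejected heat
```

Real machinery gets only a fraction of the Carnot figure, so single-digit-percent recovery from 38 C hardware really isn't worth the equipment.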

      • Re: (Score:3, Funny)

        by CompMD ( 522020 )

        "Very little, since you're dealing with very low quality heat."

        And that's why in the really good systems, the only acceptable option is Monster Heat, the finest quality heat available, and it's even gold-plated.

  • by Itninja ( 937614 ) on Tuesday November 17, 2009 @11:22AM (#30130086) Homepage
    Seriously. What do we do when a RAM module or a backplane fails? Will a simple hardware swap become a task for those trained in hazmat handling? I do not want to be on the help desk when someone calls and says "Help! The servers are leaking!"
    • by dintlu ( 1171159 ) on Tuesday November 17, 2009 @11:40AM (#30130344)

      You pull that server out of the farm and let other servers pick up the slack while you make repairs.

      It's hype, based on the assumption that every server on the planet will be virtualized by 2019, and that the separation of hardware from the software that runs on it will allow IT departments ample time to offload work into "the cloud" while they swap out RAM.

      Either that or it's made for large datacenters with multiple redundancies and enormous cooling costs. :)

      • Don't want to reduce your smug, but we're doing just this - restart services from the failed component, service the failed resource on a non-critical timeframe. The small shop with a half-dozen server boxes doesn't give a damn about cooling costs or this level of service, for the most part. If they do, they're likely going to someone else to satisfy that requirement, not doing it in house.

        I've got stack of servers in my datacenter that are allocatable on demand. Any unused server blade is a potential spa

      • by ByOhTek ( 1181381 ) on Tuesday November 17, 2009 @12:03PM (#30130636) Journal

        What, they won't? Oh man, this virtualization thing is brilliant.

        So you virtualize a box, so that, if there's a hardware failure, the box can be brought back up on another machine, with minimal downtime! Also, you can run multiple systems on a box saving money!

        We virtualized all our servers around here, went from about 200 servers to 8 machines, each with 16 CPU cores. It went well. So we decided to repeat the process. We then had 4 machines, each taking two VM hosts! It was great, more savings, more vodka for my drawer... So I thought, how could I make this even better...?

        That's right, I put all four of THOSE VM hosts on a 486 in the back room that doesn't even need special cooling. Let me tell you, in terms of Vodka, this virtualization thing has been *quite* productive.

        How could it not be all pervasive by 2019? I'm sure everyone will be virtualizing all of their VM hosts on VM hosts running on 486s by 2019!

        • by IICV ( 652597 )
          I did you one better - I have my Windows VM running on a Linux VM, and the Linux VM is running on the Windows VM. It was kind of tricky to set up at first, but now I don't even need hardware! There's just a spinning matrix of computation in the basement. My dog is afraid to go in there.
      • That's happening right now. I'm seeing the same performance from a virtualized 4-CPU, 16GB machine as from a real one. (It didn't use to be like that.)
    • Yeah, I'll keep my FRUs, thanks.

    • TFA states it's an inert liquid, so hazmat need not be involved. Actually, it sounds an awful lot like an earlier story [slashdot.org] concerning a full-immersion prototype desktop PC.
  • A few questions (Score:5, Interesting)

    by Reason58 ( 775044 ) on Tuesday November 17, 2009 @11:23AM (#30130094)
    Won't this cause accessibility issues for the administrators who have to support these servers? Additionally, Google's evidence supports the idea that warmer temperatures are better for the life of some components, such as hard drives. Last, this may work well for traditional servers, but I fail to see how this can be made to support a large SAN array or something similar.
    • by afidel ( 530433 )
      Google and just about everyone else are going to the model where you never touch the server after install. Also, their evidence shows that too-cool temperatures negatively affect HDD life; that's quite different from saying warmer temperatures are better. It was also the area of the study that had the fewest data points, so the evidence might not be fully accurate.
  • Super cool! ^_^ If they made those for laptops, I'd be all over it. My wife likes to use her HP as a lap warmer, with a blanket... But there I go thinking again... --Stak
  • We all know what happens when you mix water and server rooms: http://www.youtube.com/watch?v=1M_QTBENR1Q [youtube.com]. Better call up Noah.
    • we all know what happens when you mix water and server rooms http://www.youtube.com/watch?v=1M_QTBENR1Q [youtube.com] better call up Noah

      "The Iceotope approach takes liquid – in the form of an inert synthetic coolant, rather than water – directly down to the component level," the company said.

    • Actually, this technology would make the data center better protected from a flood. Since each blade is sealed in its own bubble of coolant, if the entire rack is underwater because of a flood, the blades would be protected. Maybe some of the external components like the cooling pumps might be damaged, but most of the contents of the rack would be fine.

      I'm not saying they could continue to operate through the flood, but after the water is gone and the mess cleaned up, you replace the UPS and fix the exter

  • Grandma would be proud of her cold compress technology.

  • Comment removed based on user account deletion
  • Quick Release (Score:5, Informative)

    by srealm ( 157581 ) <.prez. .at. .goth.net.> on Tuesday November 17, 2009 @11:45AM (#30130408) Homepage

    The problem with all this is you need a good piping and plumbing system in place, complete with quick-release valves, to ensure you can disconnect or connect hardware without having to do a whole bunch of piping and water routing in the process. Part of the beauty of racks is you just slide in the computer, screw it in, plug in the plugs at the back, and you're done.

    I'm not saying it's impossible, but just building a new case, or blade, or whatever isn't going to do it - you need a new rack system with built in pipes and pumps, and probably a data center with even more plumbing with outlets at the appropriate places to supply each rack with water. This is no small task for trying to retrofit an existing data center.

    Not to mention that you have to make sure you have enough pressure to ensure each server is supplied water from the 'source'; you cannot just daisy-chain computers, because the water would get hotter and hotter the further down the chain you go. This means a dual piping system (one for 'cool or room temperature' water and one for 'hot' water). And it means adjusting the pressure to each rack depending on how many computers are in it and such.
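A rough sketch of why daisy-chaining fails. The per-server heat load and flow rate below are illustrative assumptions, not figures from the article:

```python
# Each server in a series loop adds Q = m_dot * c_p * dT of heat to the
# water, so the coolant warms at every hop. Power and flow figures are
# illustrative assumptions.
server_power_w = 500.0   # heat dumped per server, watts (assumed)
flow_kg_per_s = 0.02     # ~1.2 L/min of water through the loop (assumed)
c_p = 4186.0             # specific heat of water, J/(kg*K)

dt_per_server = server_power_w / (flow_kg_per_s * c_p)  # ~6 K per server
inlet_temp = 20.0  # deg C at the start of the chain (assumed)
for n in range(1, 6):
    print(f"water after server {n}: {inlet_temp + n * dt_per_server:.1f} C")
```

At these numbers the water approaches 50 C by the fifth server in the chain, which is why every server needs a parallel feed from the cool supply line.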

    The issues of water cooling a data center go WAY beyond the case, which is why nobody has really done it yet - sure, the cost savings are potentially huge, but it's a LOT more complicated than sticking a bunch of servers with fans in racks that can move around and such, and then turning on the A/C. And there is a lot less room for error (as someone else mentioned, what if a leak occurs, or a plumbing joint fails, or whatever? Hell, if a pump fails you could be out a whole rack!).

    • Re: (Score:3, Interesting)

      by FooAtWFU ( 699187 )
      If you have enough space for a spare rack, and you have a sufficiently virtualized infrastructure, you could just swap in the spare and do rack-at-a-time maintenance. If you're really saving 93% on cooling that could be worth it. (Maybe leave your SAN boxes and other less-failable components on an old air-cooled setup.)
    • Not to mention the very simple fact that when something goes wrong with the servers, you have a team of guys ready to fix it in no time flat.

      When something goes wrong with the plumbing no one can touch it unless they're a licensed plumber. He'll take a few days to get there and a few days to do the job, AND he'll charge you more than you paid your server guys in the same time frame.

    • I agree. Water cooling of a data center has a history. The only thing I see here is that they are attempting to bring the water in a small, scalable, "standardized" manner to each blade.

      I worked for a large investment company some time ago, and we had an "older" data center that was originally designed to house mainframes and used a pool to hold water for cooling. A side benefit of the pool was that employees could use it for swimming, and the water was at quite an agreeable temperature. The benefit here
    • by jbengt ( 874751 )

      The issues of water cooling a data center go WAY beyond the case, which is why nobody has really done it yet . . .

      They've only been doing direct water cooling of data center computers since the 1950s. Though the last time I worked on one was in the 1980s, and it was mainframes, not PCs/blades.

    • No-spill (as in "almost insignificant", not as in "not too much, won't empty the whole system, but you'd better have a towel nearby just in case"), quick-disconnect, low-resistance valves for water-cooled systems have already been available to enthusiasts for quite some time.

      (Koolance is an example of a company producing such things in the US; Aquatuning is an example of a shop selling similar implements in the EU - no links, to avoid gratuitous advertising to web spiders, but you can easily google the names.)

      An

  • by wandazulu ( 265281 ) on Tuesday November 17, 2009 @11:55AM (#30130512)

    The ES/9000 that I had contact with was a series of cabinets that were all water-cooled from the outside in...it was a maze of copper pipes all around the edges and back and looked like a fridge. When you opened a cabinet, you could feel a blast of cold air hit you.

    It was no trivial feat to do this, they had to install a separate water tank, some generators (I remember one of the operations guys pointing to a Detroit Diesel generator outside in the alley and saying it was just for the computer's water system), moved a bathroom (only water they wanted around the computer was the special chilled stuff), and I can distinctly remember seeing the manuals(!)... 3-inch thick binders with the IBM logo on them, and all they were for was the planning and maintenance of the water system.

    No wonder it took almost a year to install the machine.

    • by tuomoks ( 246421 )

      It's not trivial, as you say, but once done (correctly!) it can be very flexible. I "managed" (as a systems programmer who had to accept all the designs) a "data center" growing from one water-cooled system to several mainframes, and installing the "next" system only took two days with everything. Yes, we had extra space/capacity - the capacity plans had 5-10 year estimates (a big fight, but it paid back later!). Also did that for a couple of customers later on.

      Liquid (water or other, metals, etc) cooling is more e

  • Water is a hassle (Score:5, Informative)

    by BlueParrot ( 965239 ) on Tuesday November 17, 2009 @11:56AM (#30130536)

    I work with particle accelerators that draw enough power that we don't have much choice but to use water cooling, and even though we have major radiation sources, high voltage running across the entire place, liquid helium cooled magnets, high power klystrons that feed microwaves to the accelerator cavities etc... the only thing that typically requires me to place an emergency call during a night shift is still water leaks.

    Water is just that much of a hassle around electronics. Even an absolutely minor leak can raise the humidity in a place you really don't want humidity, it evaporates and then condenses on the colder parts of the system where even a single drop can cause a short circuit and fry some piece of equipment. After it absorbs dirt and dust from the surroundings it starts attacking most materials corrosively, which may not be noticed at first but gives sudden unexpected problems after a few years. If you don't keep the cooling system itself in perfect condition valves and taps will start corroding and you get blockages. Maintenance is a pain because you have to power everything down if you want to move just 1 pipe etc...

    I just don't see why you would go through the hassle with water cooling unless you actually have to, and quite frankly if your servers draw enough power to force you to use water for cooling then you're doing something weird.

    • I think that's the advantage of this system. You are never going to avoid leaks, but since computers are immersion cooled and in their own sealed boxes, they are no longer sensitive to environmental issues. At that point, leaks become an annoyance instead of an emergency.
    • by MobyDisk ( 75490 )

      The real problem with the system you describe is that a baguette can cause the entire system to overheat. [slashdot.org]

    • Re: (Score:3, Insightful)

      by tuomoks ( 246421 )

      Water (and liquid coolants, even metals) can be a hassle if not designed correctly. I have had my experiences with water-cooled systems, but mainly with "over-efficiency" - well, and one burst which shouldn't have happened (LOL).

      One thing I have learned (from my son): in cars, replacing everything with military- and/or airplane-grade fittings, valves, tubes, etc. makes life much easier. Not much more expensive, but it pays back very fast. If I would have known that (much) earlier instead of accepting engineering (good

    • I had my own water cooling experiment about ten years ago. I had a two processor Athlon board and made two aluminium waterblocks for it. Since my metalworking skill was pretty low (and I was limited to hand tools), the blocks leaked, necessitating several patches. First with duct tape (:-), then with plumber's caulk, and finally by covering the whole thing with fiberglass epoxy, which plugged it up. Up to that time I had a nice little waterfall going from the waterblocks down onto the graphics card (a Radeo

  • Reminds me of the SAPPHIRE fire suppression [ansul.com], just applied all the time. Or the sealed mineral-oil boxes people seem to put computers in. The system could be huge if they apply it right and it actually realizes a 93% reduction in energy cost (I have my doubts). The largest issue I have heard of is that it is tricky, but not impossible, to move the heat away from the components once they heat up the liquid.
  • Cray-2 (Score:4, Insightful)

    by fahrbot-bot ( 874524 ) on Tuesday November 17, 2009 @11:59AM (#30130592)

    "The Iceotope approach takes liquid - in the form of an inert synthetic coolant, rather than water - directly down to the component level," ... "It does this by immersing the entire contents of each server in a "bath" of coolant within a sealed compartment, creating a cooling module."

    Hmm... The Cray-2 [wikipedia.org] was cooled via complete immersion in Fluorinert [wikipedia.org] way back, circa 1988. I was an admin on one (Ya, I'm old). So, this is a bit different, but certainly not ground-breaking.

    • Oh it's groundbreaking. It's unique. I know, because they have patents on it. 1988 doesn't exist. It's all in your head. They invented something new and innovative so they patented it! Duh.

      *cough*

      Too early yet for that much sarcasm?

    • Re: (Score:3, Interesting)

      by jcaren ( 862362 )

      The Crays' full-immersion coolant model hit a big problem - the Coanda effect.

      This is where the layer of fluid near the actual component flows much slower than the bulk flow - in layers slowing down exponentially as they get closer to the stationary components.

      For air this is not too much of a problem - only a very fine layer of stationary air over components, which does not affect cooling. But with liquids the effect is both noticeable and severely impacts coolant flow over hot surfaces - with some then "next gen" cr

      • Re: (Score:3, Informative)

        by jbengt ( 874751 )

        The crays full immersion coolant model hit a big problem - the coanda effect.

        You did not describe the Coanda effect, you described boundary layer issues. I don't know enough about the story to know whether boundary layers were the real issue: It's pretty routine to take into account the fact that friction causes the fluid velocity to approach zero at the surface of a stationary object, and to account for a lack of turbulence in the laminar part of the boundary layer that reduces heat transfer. It could just be that they needed to get a phase change at the surface in order to pull

    • by hey ( 83763 )

      Everything *trickles* from Supercomputers/Mainframes eventually.

  • Sixteen years ago, at the end of my high school career, I was very into overclocking (I had multiple Celeron 300As). With Peltier cooling I was able to run a 300MHz CPU at 450MHz with rock-solid stability (I ran things like Prime95 24 hours a day for weeks). People were starting to experiment with liquid-cooling commodity white-box computers.

    One of the more interesting applications I saw was an old styrofoam cooler converted into a PC case. All components were submerged in a bath of cold mineral oil. I remember thin

  • Look at the cross section photo. This dispenses completely with convection (air flow) and instead designs the system for direct physical contact from the heat sink to the components. Then the water flows behind the heat sink to take the heat away from that.

    The problem is that means that you have to make a heat sink with varying height "fingers" on it to meet every component that produces heat (which is all of them), which means every time you change a component you have to redo the heat sink. And of course

    • Actually, it looks fine after some initial glances. They put up a video on youtube here [youtube.com] where the interior is liquid filled for direct contact with all of the components, then a secondary liquid system (it seems) outside uses the plating case as a heat exchanger and takes it away to the central lines out of the cooling system.
      • That's noticeably worse. Component manufacturers haven't tested their components for extended immersion in liquid, even relatively inert ones. This would drive the cost of the device through the roof.

        In addition, I know people like to think of heat transfer as radiation or conduction, but convection is the biggest factor, even in a liquid cooled system, this is why the liquid circulates instead of just sits there. And in this case, the liquid is going to just sit there, the area on the motherboard side of t

    • by hey ( 83763 )

      That was my thought too.
      I can see heat sinks with liquid pipes in them in the future, plus regular air cooling - i.e. a hybrid solution.

  • weight? (Score:5, Interesting)

    by Clover_Kicker ( 20761 ) <clover_kicker@yahoo.com> on Tuesday November 17, 2009 @12:07PM (#30130676)

    How much does a rack full of water-cooled blades weigh?

    Never thought I'd see the UPS become the lightest thing in the server room.

  • With all those layers, it doesn't seem that sliding one of these out and quickly swapping some RAM or any other part is going to happen.

    Also, do these Iceotope guys actually make server hardware, or just the cooling specs? Who do they get their guts from, or are they just advertising and hoping the likes of HP, IBM or Sun (well, maybe not Sun) decide to design their next generation of servers with this in mind?

    I'd like to see how easy it is for replacement. Doesn't look like there is a lot of room for other bit

  • Cray XT5 "Jaguar" (Score:1, Informative)

    by Anonymous Coward
    The #1 on the Top 500 supercomputer list [cray.com] is using water cooling as well (in combination with phase-change cooling). Water-cooling whole racks can be done. The only difference from TFA is that it also adds immersion cooling [pugetsystems.com]. Immersion cooling has been found to be superior for cooling but comes with (obvious) considerable maintenance problems. The video [cray.com] for this machine shows more or less standard water-cooling blocks on the processors, along with various plumbing that keeps the machine chilled.
  • by Locutus ( 9039 ) on Tuesday November 17, 2009 @12:17PM (#30130796)
    The technique of using cheaper off-peak energy to freeze liquid and then use that liquid for daytime cooling loads is already used in a very few places. Combine that technique with the direct server cooling mentioned in the article and....wait a minute....they are already claiming a 93% cooling cost cut? Either there is huge waste now or they're already expecting to use off-peak energy. But then again, maybe the remaining 7% is still large enough to merit further savings.

    Direct cooling makes far more sense than cooling rooms like I keep seeing around now.

    LoB
    • Either there is huge waste now or they're already expecting to use off-peak energy.

      The current situation IS a huge waste.
      Basically, we're currently trying to cool down servers using a fluid which is a *thermal insulator* (air). Of course it's catastrophically inefficient. But it happens to be simpler.

  • There are probably great economies of scale for datacenters, but what about Joe User? The article wasn't clear whether 'included in the manufacturing process' would include consumer-level systems. Just thinking that cost savings for datacenters are great, but I'd be really interested if it helped out the regular consumer (not to mention what kind of operational issues this might bring up).
  • There's a joke somewhere about your server being so ugly you have to put a bag over it before you go inside, but I can't quite work it. Help?
    • by Knx ( 743893 )
      Hey! I think there's also a joke somewhere about the bag being perforated, bringing a new sense to a "system overflow" but I can't quite work it out either...

      Maybe we could create a group on Facebook and have fun with our not-quite-working-slashdot-jokes?
  • Yet another way to increase the density of server farms... Useful if you must grow your servers in Manhattan, a waste of money otherwise.

    Among the many great things the internet has brought us (*cough*porn*cough*), "location-independence" ranks pretty high up there. Your servers don't need to all fit in one cargo container that runs so hot it requires LN cooling. For all it matters, you could put them in a single line of half-racks on a mountain ridge, cooled naturally by the wind (with some care to ke
    • by Lennie ( 16154 )
      Having datacenters relatively close means less latency. Why else do you think CDNs exist? Yes, Manhattan is actually a good example. Lots of companies that handle bids for shares actually place their servers near the NYSE, etc.
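      Rough numbers on why proximity matters. The fiber refractive index of ~1.47 is a typical value, and this ignores switching and queuing delay entirely:

      ```python
      C_VACUUM = 299_792.458  # speed of light in vacuum, km/s
      FIBER_INDEX = 1.47      # typical refractive index of single-mode fiber

      def best_case_rtt_ms(distance_km: float) -> float:
          """Round-trip light time over fiber; real networks add much more."""
          return 2 * distance_km / (C_VACUUM / FIBER_INDEX) * 1_000

      print(f"colocated (1 km):  {best_case_rtt_ms(1):.4f} ms")
      print(f"remote (1,000 km): {best_case_rtt_ms(1000):.2f} ms")
      ```

      Close to 10 ms of unavoidable round-trip just from a 1,000 km separation, versus microseconds for colocation, which is exactly why the trading firms pay Manhattan rents.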
  • Source for excerpt below [datacenterknowledge.com]

    "Intel set up a proof-of-concept using 900 production servers in a 1,000 square foot trailer in New Mexico, which it divided into two equal sections using low-cost direct-expansion (DX) air conditioning equipment. Recirculated air was used to cool servers in one half of the facility, while the other used air-side economization, expelling all hot waste air outside the data center, and drawing in exterior air to cool the servers. It ran the experiment over a 10-month period, from Oc
  • The problem with this is that it requires server manufacturers to standardize their designs. There was talk a few years ago about standardizing Bladeservers. I don't see this happening as there's too much control in the bladecenter chassis, switch interfaces, management abilities etc. Plus why would IBM want to sell an empty chassis and then let the customer fill it with HP C-Class blades?

    Even racks themselves from IBM/HP/Dell/EMC/netapp/Sun aren't standardized, other than they are 19" wide. This is why

  • "Earlier this year, IBM predicted that in ten years all data centre servers might be water-cooled."

    The costs of cooling air will be replaced by the costs of obtaining water. This system will not be for "water challenged areas".....Californy, etc.
  • Wow, amazing, they finally produced something like what has been done on my website more than 9 years ago:
    http://www.octools.com/index.cgi?caller=articles/submersion/submersion.html [octools.com]

  • I'm surprised that hot chips don't already include a layer of microfluidics right inside the package. People have been dealing with overheating chips inefficiently for years. There's clearly an opportunity to sell chips with fluid cooling built right into them.

    I think eventually buildings will have fluid cooling systems attached to heat sinks for all kinds of purposes. Geothermal heat pumps [wikipedia.org] already are popular for making heating and cooling up to 4x as powerful as the electricity powering them (instead of t
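    The "4x" figure is the heat pump's coefficient of performance (COP): heat moved per unit of electrical work. A minimal sketch, with illustrative numbers:

    ```python
    def heat_moved_kw(electrical_kw: float, cop: float = 4.0) -> float:
        """Heat transferred for a given electrical input at a given COP."""
        return electrical_kw * cop

    # With a COP of 4, 10 kW of electricity moves ~40 kW of heat.
    print(heat_moved_kw(10.0))  # 40.0
    ```

    A COP above 1 doesn't violate anything: the pump is moving heat, not creating it.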

  • How about just cutting out the AC-to-DC-to-AC-to-DC conversion chain and making a common DC bus, with the big, hot AC-to-DC part away from the servers? Then the servers just need simpler DC-to-DC converters in them.

    Water has a lot that can go wrong with it, and do you want some water to mess up a $1000+ server?

    • Going DC doesn't save as much as you might think

      There was an APC paper (search for DC on http://www.apc.com/prod_docs/results.cfm?DocType=White%20Paper&Query_Type=10 [apc.com] to find it) on this not long ago.

      They considered five systems, three existing and two hypothetical, and looked at the total efficiency including UPS, distribution and PSU in equipment (remember, even with DC distribution you still need a PSU, and generally said PSU needs to be isolating).

      * american AC: 480V/277V three phase from the UPS conver
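      The kind of end-to-end comparison the paper makes can be sketched by chaining stage efficiencies. The percentages below are illustrative placeholders, not APC's measured figures:

      ```python
      from math import prod

      # Hypothetical stage efficiencies (illustrative only).
      ac_chain = {"UPS": 0.94, "distribution": 0.99, "server PSU": 0.90}
      dc_chain = {"rectifier": 0.96, "distribution": 0.99, "server DC-DC": 0.92}

      for name, chain in (("AC", ac_chain), ("DC", dc_chain)):
          print(f"{name} end-to-end: {prod(chain.values()):.1%}")
      ```

      With numbers in this ballpark, the DC path only wins by a few percentage points end-to-end, which is the parent's point: the isolating converter you still need in each server eats most of the theoretical gain.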

  • I had a startup, Nisvara Inc., from 2002 to 2006. We had water cooling and could run whole server rooms with no air conditioning at all! We even had a partnership with NASA Ames.

    Our system used sealed copper tube and something I called a thermal ground: basically a copper or aluminum plate with the tube bonded to it. Then shims that connect the heat sources: the CPU, northbridge, southbridge, CPU power supply and possibly RAM. The power supply and hard drives were also connected to the plate to remove the h
