LIQUID COOLING AT THE CHIP LEVEL
FOCUS on COOLING
Could water piped over the chips as they heat be the next big thing in cooling?
The next horizon for data center cooling is the application of liquid right at the chip, rather than the server or rack, level. Why would this be attractive? As a medium, water is by basic physics far better at carrying away accumulated heat than air: IBM estimates that water as a coolant can capture heat about 4,000 times more efficiently than air, and its heat-transporting properties are also far superior. This is why rack-level liquid cooling (note that water is not always the preferred substance) has been available for some time, in products such as the IBM eServer Rear Door Heat Exchanger (www.tinyurl.com/2ekl24h) and the HP Modular Cooling Systems family (www.tinyurl.com/y9g35u).
APC and Emerson Network Power also offer liquid cooling technologies, including the Liebert XD, which uses liquid refrigerant rather than water. Another example is the Iceotope solution (www.iceotope.co.uk), which works by immersing the server in liquid and then transferring the heat away using water. Here the immersion liquid is an inert synthetic coolant rather than water, acting as a "bath" of coolant within a sealed compartment to create a cooling module. These cooling modules fit into a chassis, several of which can be mounted in an industry-standard 19" data centre rack. The chassis provides each module with electrical power and a connection to a local water circuit that pumps water through channels in the connected modules to remove heat from the closely coupled (but sealed) motherboard compartment. In effect, each module is a tightly integrated coolant-to-water heat exchanger.
Iceotope claims that a typical air-cooled data center running around 1,000 servers costs around $800,000 to cool over three years, a figure it says its approach can slash to as little as $50,000 or so. That is because CRAC and chiller equipment use can be minimised, with the liquid-cooled servers connected to a re-circulating "warm" (rather than chilled) water supply that transfers heat from the servers to the air outside the data centre, an approach it dubs "end to end liquid" cooling. In a data center in which all servers are cooled this way, cooling costs can reportedly be reduced by as much as 93% (see www.tinyurl.com/2ajutyd). There are a number of chip-level product specialists. One is Parker Aerospace, with SprayCool (www.tinyurl.com/29udfzp
), which uses an inert (and therefore nonconductive and noncorrosive) liquid, 3M's Fluorinert, in a closed-loop system. The liquid is applied directly to the hot components of standard x86 servers (such as the Dell PowerEdge or HP ProLiant) from a reservoir at the bottom of the rack, cooling them by essentially evaporating on contact; the loop is closed because the resulting vapour is condensed back to liquid by the building's chilled water supply.
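Returning to the Iceotope cost claim above, a quick back-of-envelope check (mine, not the vendor's) shows the quoted figures are internally consistent with the cited 93% reduction:

```python
# Sanity-checking the claimed cooling-cost reduction for a
# 1,000-server data centre over three years. Both dollar figures
# are the vendor's claims from the article, not measurements.

air_cooling_cost = 800_000    # USD over three years (claimed, air cooled)
liquid_cooling_cost = 50_000  # USD over three years (claimed, liquid cooled)

saving = air_cooling_cost - liquid_cooling_cost
reduction_pct = 100 * saving / air_cooling_cost

print(f"Saving: ${saving:,} ({reduction_pct:.1f}% reduction)")
# Saving: $750,000 (93.8% reduction) -- consistent with the ~93% cited
```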
But in many ways IBM can claim leadership in water cooling at the chip level, an idea it essentially revived in 2005, pioneering an approach in which very small pipes carry water over chips physically stacked on top of each other in a special container. The approach has been applauded for delivering cooling right where it is most needed, while the fact that the heat is captured in the water also means it can be transported away and reused elsewhere in the immediate environment, further saving power and cost. Most notably, IBM has built a closed-circuit, HPC blade-based system called Aquasar (www.tinyurl.com/mj3k42) that reuses excess heat in the building housing it, in this case at a Swiss technical university. It uses two blade servers fitted with specially adapted chips: a mixture of IBM QS22 blades (PowerXCell 8i processors) and HS22 blades (Intel Nehalem Xeons). The water-cooled supercomputer will have a peak performance of about 10 teraflops when it becomes operational this year. In what sense is it a chip-level solution? Liquid cooling channels water at high speeds
through tiny copper pipes, 50 microns across (about a hair's breadth), that run over the processors themselves. The claim is that chip-level cooling with a water inlet temperature of about 60°C will keep the chips at an operating temperature well below 85°C. The high input temperature of the coolant yields an even higher-grade heat output, in this case about 65°C. In this configuration the total node farm needs about 10 litres of water for cooling, and a pump maintains a flow rate of roughly 30 litres per minute, removing an estimated 85% of the heat load of the chips.
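The flow-rate and temperature figures above imply a definite heat-removal capacity via the standard relation Q = ṁ·c·ΔT. The sketch below is my own rough estimate using those article figures; the specific-heat value and the resulting ~10 kW number are assumptions, not IBM's stated specification:

```python
# Rough sanity check of the Aquasar coolant loop: heat carried away
# by the water, from Q = m_dot * c_p * dT.
# Flow rate and inlet/outlet temperatures come from the article;
# c_p for water is a textbook value.

flow_l_per_min = 30.0          # litres per minute (from the article)
t_in_c, t_out_c = 60.0, 65.0   # coolant in/out temperatures (from the article)
c_p = 4186.0                   # specific heat of water, J/(kg*K)

m_dot = flow_l_per_min / 60.0  # kg/s (1 litre of water is roughly 1 kg)
heat_removed_w = m_dot * c_p * (t_out_c - t_in_c)

print(f"Heat carried away: {heat_removed_w / 1000:.1f} kW")
# Heat carried away: 10.5 kW
```

That order of magnitude, roughly 10 kW, is plausible for a two-blade HPC node farm, which supports the claim that the loop absorbs most of the chips' heat load.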
IBM says this approach removes the need for water chillers (which themselves consume power) because the water, heated constantly by the chips, is cooled back to the required inlet temperature as it passes through a passive heat exchanger, delivering the removed heat directly to the building's environmental heating system. Aquasar-style systems promise to reduce overall energy consumption by 40% compared with an equivalent air-cooled system.
But even at Big Blue this is cutting-edge work, and water-cooled data centers of this sort probably won't be mainstream for years. There is also the issue of extra capex – between 10% and 30% more than for conventional data centers – and the copper-pipe-based cooling and heat distribution networks also have to be installed. So: one to watch, but it does seem promising.
[Figure: water-cooled chip setup]