Keeping it cool


As data centres come under increasing economic and environmental pressures, Beth Harlen explores the impact of the latest cooling solutions


Patrick Giangrosso, general manager at Coolcentric, put it best when he said that data centres are at a crossroads. One of the most critical issues being faced, he explained, is that as the average heat density per enclosure increases, it becomes increasingly difficult to provide the cooling necessary to operate the data centre properly. In some cases, the cooling that must be provided is twice what the heat loads require, due to inefficiencies within the cooling system. Giangrosso added that existing data centres need an easily implementable, non-disruptive cooling solution that can accommodate existing infrastructure, completely satisfy today's cooling needs, and grow as technology and user resource requirements grow.

Meeting all those criteria is no small undertaking, as there are several factors at play. 'Increased density is a hugely common system requirement – one that has led manufacturers to introduce a new generation of multi-core processors and densely packed, highly integrated server components like the Dell PowerEdge C8000 and Supermicro FatTwin series,' said Dr Alex Ninaber, technical director at ClusterVision. 'While addressing the performance-per-footprint requirements, these high-density, multi-function components can significantly challenge the effectiveness and power efficiency of standard on-board cooling.

'From an economic perspective, HPC consumers are also increasingly aware of their total cost of ownership. With energy costs and heat dissipation accounting for up to 40 per cent of the total ongoing power budget, people are understandably looking for improvements in both cooling performance and overall power efficiency. Liquid heat-exchange and oil-submersion cooling can be particularly cost-effective for new data centres, where the high inlet temperatures and re-use of the drawn heat can be used to optimise the overall thermal management of the installation,' he added.
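Those two figures hang together. As a rough back-of-the-envelope sketch in Python (every number below is an illustrative assumption, not measured data), over-providing cooling at twice the actual heat load can push the cooling plant's share of total power towards the 40 per cent Ninaber cites:

```python
# Back-of-the-envelope cooling overhead estimate.
# All figures below are illustrative assumptions, not measured data.

it_load_kw = 500.0      # assumed IT (server) heat load for one hall
cooling_factor = 2.0    # cooling delivered vs. heat generated
                        # (the article cites cases of 2x due to
                        # inefficiencies in the cooling system)
cooling_cop = 3.0       # assumed chiller coefficient of performance
                        # (kW of heat moved per kW of electricity)

# Heat the plant actually has to move, and what that costs in power:
heat_to_remove_kw = it_load_kw * cooling_factor
cooling_power_kw = heat_to_remove_kw / cooling_cop

total_kw = it_load_kw + cooling_power_kw
print(f"Cooling power: {cooling_power_kw:.0f} kW "
      f"({cooling_power_kw / total_kw:.0%} of the total draw)")
```

With these hypothetical numbers the cooling plant draws 333 kW, or 40 per cent of the total; halving the over-provisioning factor to match the real heat load drops that share to roughly a quarter.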


Looking to liquid

According to Graham Whitmore, president of Motivair, in the past few years there has been an increased focus on rack-level water cooling for HPC clusters, as water and refrigerant cooling systems replaced air cooling, which became less effective for higher-density loads. 'Rack-mounted coolers typically operate with cooling water supply temperatures from 60 to 75°F – always above the data centre dew point temperature,' he said. 'An outdoor water-cooling source of chillers, cooling towers, or aquifer water is required to reject the heat from rack-mounted coolers, which are non-invasive and easily installed on new racks or retrofitted to existing racks.

'In-server water cooling can be achieved with warmer water because the contact points (cooling pads) on the circuit boards replace the standard OEM heat sinks, where the board temperatures are highest. The cooling source for warmer water can be an outdoor radiator or cooling tower with pumped water, and no refrigeration is needed for the higher water temperature. But installation must be incorporated into the servers and requires direct water connections to all the circuit boards via water capillaries and headers inside the racks and the servers,' said Whitmore.
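The dew-point constraint Whitmore mentions is easy to sanity-check. The sketch below uses the standard Magnus approximation for dew point; the room conditions are assumed for illustration and will vary from site to site:

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Dew point via the Magnus approximation (valid roughly 0-60 C)."""
    a, b = 17.62, 243.12
    gamma = a * temp_c / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return b * gamma / (a - gamma)

# Assumed room conditions for illustration: 24 C (75 F) at 50% RH.
td_c = dew_point_c(24.0, 50.0)
td_f = td_c * 9 / 5 + 32
print(f"Dew point: {td_c:.1f} C ({td_f:.1f} F)")
# -> roughly 13 C (55 F), so 60-75 F supply water stays above it
#    and no condensation forms on the cooler surfaces.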


Stephen Empedocles, director of business development at Asetek, added that liquid cooling is a very different approach to the one being taken by most data centres today: 'Water is approximately 4,000 times more efficient at removing heat than air. And yet, only a tiny fraction of data centres today use liquid cooling.' He stated that, although liquid cooling isn't new, it is not very pervasive and tends to be restricted to a handful of the highest-performance supercomputing clusters. He attributed this to the upfront cost and the complexity of operating and maintaining the highly customised equipment.
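Empedocles' 4,000-times figure presumably refers to volumetric heat capacity, and a quick check with textbook property values (near room temperature and atmospheric pressure) lands in the same ballpark:

```python
# Volumetric heat capacity: heat absorbed per m^3 of fluid per kelvin.
# Textbook property values near 20 C and atmospheric pressure.
water_density = 998.0    # kg/m^3
water_cp = 4186.0        # J/(kg*K)
air_density = 1.204      # kg/m^3
air_cp = 1006.0          # J/(kg*K)

water_vhc = water_density * water_cp   # ~4.2e6 J/(m^3*K)
air_vhc = air_density * air_cp         # ~1.2e3 J/(m^3*K)

print(f"Water absorbs ~{water_vhc / air_vhc:,.0f}x more heat "
      f"per unit volume per kelvin than air")
# -> ~3,450x: the same order of magnitude as the '4,000 times' claim.
```

Most of that advantage comes from density rather than specific heat: a cubic metre of water weighs roughly 800 times more than a cubic metre of air.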


Going under

The term 'liquid cooling' covers a spectrum of solutions using a range of temperatures and methods. Fully submerging servers in racks filled with a dielectric coolant is the approach being taken by Green Revolution Cooling. The company's CEO and founder, Christiaan Best, explained the technology: 'Components are completely submerged in a safe, non-toxic oil-based coolant that circulates at 40°C or higher – depending on whether it is being used for heat recapture. The figures are pretty impressive: the removal of fans and cooler components represents a server power reduction of roughly 20 per cent at the server level, while the cooling system itself uses just one to two per cent of
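Best's percentages compound in an interesting way. A minimal sketch, with an invented baseline and assuming the one-to-two per cent cooling overhead is measured against server power (only the percentages come from the article):

```python
# Rough immersion-cooling savings estimate. The baseline figure is
# invented for illustration; only the percentages come from the article.
air_cooled_server_kw = 100.0   # assumed IT draw with air-cooled servers
fan_saving = 0.20              # ~20% server power saved by removing fans

immersed_server_kw = air_cooled_server_kw * (1 - fan_saving)
# Assumption: the quoted 1-2% cooling overhead is relative to server power.
pump_overhead_kw = immersed_server_kw * 0.02

total_kw = immersed_server_kw + pump_overhead_kw
print(f"Immersed total: {total_kw:.1f} kW vs {air_cooled_server_kw:.0f} kW "
      f"of air-cooled server power alone")
```

Even before counting the chillers and air handlers that the oil bath displaces, this hypothetical rack draws roughly a fifth less power than its air-cooled equivalent.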

