HPC 2012 Infrastructure

Giving liquid cooling proper attention
Peter Hopton, Iceotope CTO, discusses why liquid cooling is gaining ground

Liquid cooling has always attracted interest, but not always for the right reasons. However, the UK IT industry has recently started to take note of liquid cooling’s potential.

It’s no longer a case of vendors and IT operators recoiling at the prospect of putting liquid near electronics; in fact, it’s now seen as a positive alternative to traditional cooling methods, and much of this is down to the great technological advancements being made in this area. Dubbed ‘the next generation of liquid cooling’, modular solutions are leading the way, and it’s easy to see why the IT sector is pursuing these options – most notably the HPC market, which has its own specific demands.

Given that HPC compute facilities are often deployed in existing buildings and in city-centre locations, converting space to an air-cooled data centre is an expensive activity, and achieving efficiency within these space constraints can be difficult. Noise is another problem: the equipment and fans of traditional computer-room air conditioning units can be deafening, not just for the users but for anyone nearby.

Next-generation liquid cooling is better suited to operating within these confined spaces. More IT can fit in the same space, and no air-conditioning or air-handling equipment is required, which removes any disruption to daily operations in terms of noise.

“The UK IT industry has recently started to take note of liquid cooling’s potential”

So what is ‘next-generation liquid cooling’? Where other forms of liquid cooling use oil, dunk servers into pools of liquid or pump liquids around servers at high pressure, next-generation liquid cooling does not. Instead, non-oil-based inert liquids are placed inside sealed, hot-swappable units (or modules), which ensures these systems comply with conventional HPC or data centre environments and that the facilities remain ‘dry’. Within each module, heat is harvested by convection and removed through a sealed water jacket on the outside, so that no excess energy is wasted on powerful pumps.

With some extremely clever engineering designs, liquid cooling is no longer in the HPC and data centre wilderness. Indeed, it’s hard to see why it wouldn’t be at the forefront of cooling technology going into 2013 and beyond. And if liquid cooling does become the default option for high-density compute environments, less attention will be given to the process involved, as liquid cooling systems would gradually operate more and more like their air-cooled counterparts – but with all the benefits of using liquid, which is approximately 4,000 times more effective than air at removing heat, not to mention the potential for heat reuse.
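The ‘approximately 4,000 times’ figure quoted above broadly tracks the volumetric heat capacity of water versus air – how much heat a given volume of coolant can carry away per degree of temperature rise. A quick back-of-envelope check, using textbook room-temperature values (the constants below are standard physical approximations, not figures from the article):

```python
# Rough sanity check of the "~4,000 times" claim, using volumetric heat
# capacity (density x specific heat) as a proxy for heat-removal ability
# per unit volume of coolant. Values are room-temperature approximations.

AIR_DENSITY = 1.2        # kg/m^3
AIR_CP = 1005.0          # J/(kg*K), specific heat of air
WATER_DENSITY = 998.0    # kg/m^3
WATER_CP = 4186.0        # J/(kg*K), specific heat of water

ratio = (WATER_DENSITY * WATER_CP) / (AIR_DENSITY * AIR_CP)
print(f"Water carries ~{ratio:.0f}x more heat per unit volume than air")
```

This simple ratio comes out in the low thousands; in practice, achievable flow rates and heat-transfer coefficients also favour liquids, which is why figures in the region of 4,000 are commonly cited.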

The habit of monitoring
Organisations need to routinely monitor their IT infrastructure, says Chris James, EMEA marketing director at Virtual Instruments

While the general push for improvements and cost control continues, there are still efficiencies to be made in enterprises by managing IT investment and activity with greater precision. IT hardware purchase decisions have traditionally involved an element of guesswork, with over-provisioning to ensure there is enough capacity whatever the anticipated traffic load. However, it is now possible to monitor performance across the whole IT infrastructure in real time to identify any application latency, so that more hardware is introduced only when and where it is actually needed. With complete visibility across the whole storage infrastructure, managers can not only make more accurate investment decisions, but also deliver greater service reliability by adopting the best practices of benchmarking, continuous monitoring and infrastructure optimisation.

“There are still efficiencies to be made in enterprises by managing IT investment and activity”

This trend is particularly borne out in large enterprises, where we are seeing first-hand the importance of benchmarking with monitoring to ensure that mission-critical business application migrations to the private cloud are successful. While early cloud adoption was driven by price and convenience, it often fell short on key reporting aspects, which meant that migration to the cloud was not contemplated for such important applications. For such migrations, as with all business best practice, IT managers need to be able to demonstrate accurately that systems are meeting performance, availability and security requirements. In fact, the ability to benchmark allows them to evaluate existing application and infrastructure performance before migration, so that meaningful service level agreements (SLAs) can be determined and measured against after migration.

Meanwhile, real-time monitoring during migration identifies any potential bottlenecks or failures so that mitigating action can be taken to eradicate downtime. Following migration, continuous monitoring allows measurement of performance against SLAs and can also help identify any infrastructure latency, enabling the data centre manager to take action so that all capacity is used efficiently and costly over-provisioning is avoided.

As Aristotle said: ‘We are what we repeatedly do. Excellence, then, is not an act, but a habit.’ With this in mind, we’ll see plenty more benchmarking, migration monitoring and infrastructure optimisation in 2013.

Further information
Aegis Data
Aspen Systems
Dynatron Corporation
Emerson Network Power
Green Revolution Cooling
Motivair Corporation
Schneider Electric
VA Technologies
Virtual Instruments
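The workflow Chris James describes in ‘The habit of monitoring’ above (measure latency continuously, then compare it against the agreed SLA) can be sketched in a few lines. Everything below, the class name, the rolling window, the p99 choice and the thresholds, is a hypothetical illustration, not Virtual Instruments’ product API:

```python
# Minimal sketch of continuous SLA monitoring: keep a rolling window of
# latency samples, compute the 99th-percentile latency, and flag a breach
# when it exceeds the agreed SLA threshold. All names are illustrative.

from collections import deque

class SlaMonitor:
    """Rolling-window latency monitor checked against an SLA threshold."""

    def __init__(self, sla_ms: float, window: int = 1000):
        self.sla_ms = sla_ms
        self.samples = deque(maxlen=window)  # most recent latency samples

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def p99(self) -> float:
        # 99th-percentile latency over the current window
        ordered = sorted(self.samples)
        idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
        return ordered[idx]

    def breached(self) -> bool:
        # No samples means nothing to report yet
        return bool(self.samples) and self.p99() > self.sla_ms

monitor = SlaMonitor(sla_ms=20.0)            # SLA: p99 latency under 20 ms
for ms in [5.0, 8.0, 7.0, 50.0, 6.0, 9.0]:   # simulated I/O latencies
    monitor.record(ms)
print(monitor.p99(), monitor.breached())     # the 50 ms outlier breaches the SLA
```

A percentile rather than an average is the usual choice here, because over-provisioning decisions are driven by tail latency: a system with a fine mean can still miss its SLA for the slowest one per cent of requests.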

