HPC 2013-14 | Energy efficiency

the US. The project will adopt a whole-system approach in four main areas: dynamic reconfiguring of node resources in response to workload, allowance for massive concurrency, a hierarchical framework for power and fault management, and a 'beacon' mechanism that allows resource managers and optimisers to communicate with and control the platform. The result will be an open-source prototype system that runs on both homogeneous and heterogeneous computer architectures. Funded by the European Commission,

Dell’s 12th generation PowerEdge servers use fresh air cooling

requirements, means that, on average, the server will be better utilised, and therefore more efficient.' But Lyon warns that there is no clear answer that would enable every data centre to maximise its efficiency. That would take a considerable amount of analysis from an IT perspective in terms of server utilisation. 'More effective cooling will, however, always improve the efficiency of the data centre,' he added. The need to focus on the servers when

addressing the issue of power consumption is a view shared by George Vacek, director of Convey Computer's life sciences business unit. 'There are multiple facets of how to do efficient computing,' he commented. 'For instance, you can make software improvements, but at a certain point there's nothing more you can do to tweak the algorithm to make it more efficient. The only option then is to investigate the hardware itself.' Vacek emphasised the importance of this approach by stating that, in terms of power and cooling, it costs more to run a typical x86 server for a year than it does to actually purchase the system. The fact is that the driving consideration in HPC is no longer the initial financial outlay – yet another reason why energy efficiency is so vital a factor. Convey Computer's approach is to use

programmable coprocessors to ensure that resources are not wasted. The company's hybrid-core (HC) technology combines a traditional x86 environment with a reconfigurable coprocessor, reducing application run times. 'We're hitting fundamental physical limits on general-purpose architectures in terms of how many


“The embedded sector is very good at power management, because it has had to focus on it for years. Where it stumbles is the use of classic HPC techniques for parallelising code”

instructions you can cram onto the chip and still be able to keep it cool,' said Vacek. 'That trend physically can't go any further, and so we have to look at reconfigurable, application-specific instructions to fully utilise available resources. If we do that, we'll be able to continue to improve performance for some time.'

Thoughts of exascale

Hardware and cooling technologies have been the predominant focus of efforts to reduce the power consumption of systems, but two key projects launched in the US and Europe in 2013 are looking purely at that final piece of the energy efficiency puzzle: software. The three-year US project, Argo, aims to design and develop a platform-neutral prototype of an exascale operating system and runtime software. Led by Argonne National Laboratory's Peter Beckman, researchers in Argonne's Mathematics and Computer Science Division will collaborate with scientists from Pacific Northwest National Laboratory and Lawrence Livermore National Laboratory, as well as with several universities throughout

the Adept (addressing energy in parallel technologies) project has brought together partners from the HPC and embedded industries to develop a prototype tool that will be able to quantify and predict how individual codes will perform, and what energy efficiency gains can be made, on any type of specified hardware. Dr Michèle Weiland, a project manager at EPCC who put forward the initial project proposal, explained that these two industries have complementary skills to offer: 'The embedded sector is very good at power management, because it has had to focus on it for years. Where it stumbles is the use of classic HPC techniques for parallelising code. This is, of course, where our expertise lies, but we in HPC lack in-depth knowledge of managing energy budgets because we have never really given much consideration to power – until now.' Interestingly, the driving force behind the

Adept project is not the improvement of energy efficiency, but enabling users to recognise the benefits of the trade-off between power and performance. For example, certain modifications to a code may slow it down by five per cent, but could lead to an energy saving of up to 20 per cent. 'There are certainly tools currently on the market that can provide an estimate of how an application will perform on a GPU, for example, but none of them offer details of both performance and energy efficiency,' said Weiland. 'We want to ensure that the software engineering process can be much more guided through the use of our tool.' To find this balance, the researchers
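That trade-off follows from the identity energy = average power × runtime: a slower code can still save energy if it draws sufficiently less power. A minimal sketch of the arithmetic – the 300 W node, 1,000-second runtime and roughly 24 per cent power reduction are illustrative assumptions, not figures from the Adept project:

```python
def energy_joules(power_watts, runtime_s):
    """Energy consumed = average power (W) x runtime (s)."""
    return power_watts * runtime_s

# Baseline: a hypothetical 300 W node running a code for 1,000 seconds.
baseline = energy_joules(300.0, 1000.0)

# Modified code: 5% slower, but drawing only ~76% of the power.
modified = energy_joules(300.0 * 0.762, 1000.0 * 1.05)

saving = 1.0 - modified / baseline
print(f"Energy saving: {saving:.1%}")  # prints "Energy saving: 20.0%"
```

The point of the sketch is that wall-clock time alone is a misleading metric: judged on runtime the modified code looks worse, yet it consumes a fifth less energy.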

will first examine how best to measure power usage at very fine granularity, for a variety of hardware, in a way that is easily understandable and reproducible. They will focus on a range of computing architectures, including CPUs, GPUs, ARM processors and FPGAs, as well as mobile processors. Weiland commented that, while mobile processors aren't as powerful as conventional CPUs or even FPGAs, users
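One illustration of what fine-grained, software-visible power measurement can look like on commodity x86 hardware is the Linux powercap (Intel RAPL) interface, which exposes a cumulative package-energy counter in microjoules. The sketch below is not part of the Adept tool; the sysfs path is an assumption that varies by platform and requires a RAPL-capable CPU and read permission:

```python
import time

# Assumed sysfs path for the package-level RAPL energy counter on Linux
# with an Intel CPU; it may differ or be absent on other systems.
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path=RAPL_ENERGY):
    """Read the cumulative energy counter, in microjoules."""
    with open(path) as f:
        return int(f.read())

def average_power_watts(interval_s=0.1, path=RAPL_ENERGY):
    """Average package power over a short interval, in watts."""
    e0 = read_energy_uj(path)
    time.sleep(interval_s)
    e1 = read_energy_uj(path)
    # The counter wraps around periodically; a production tool would
    # detect and correct for that. Microjoules -> joules -> watts.
    return (e1 - e0) / 1e6 / interval_s
```

Sampling a counter like this around a code region gives the per-run energy figures from which trade-offs such as the five-per-cent-slower, 20-per-cent-cheaper example above can be evaluated.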
