high-performance computing


Software softens energy demands


Software approaches to energy efficiency in HPC may yield unexpected improvements in the hardware of next-generation mobile phone networks,


Tom Wilkie discovers


International efforts to improve energy efficiency in high-performance computing may, somewhat surprisingly, make mobile phone communication cheaper and faster. At the International Supercomputing Conference and Exhibition in Frankfurt in July (ISC High-Performance), Allinea, the software development tools company, is expected to unveil a new addition to its range: an energy profiling tool. The following month, Adept, a European research project addressing the energy-efficient use of parallel technologies, is expected to release a set of benchmarks that it has developed to characterise the energy consumption of programming models on different architectures. Meanwhile, a different EU-funded project, Deep, is taking a holistic approach to energy efficiency, using software in a different way to optimise not just the computer's but the whole building's energy consumption.

These developments underscore not only the rising importance of energy efficiency in the future of high-performance computing, but also the growing interest in exploring ways of reducing power consumption in addition to the frontal approach of developing more efficient cooling systems.


Estimating energy in advance

One of the goals of the Adept project is to address the problem of how a software developer can know how much more (or less) energy their code will consume if it is run on a different architecture – moving from OpenMP to Cuda, for example. The point is to find a way of predicting the energy consumption without having to go through the labour of actually rewriting and porting the code, running it on the alternative architecture, measuring the results, and only then finding out if the whole time-consuming exercise has been worth it.

The Adept approach has been to create a tool that builds a 'model' of computer code to estimate the effect of running elements of the code on GPUs, perhaps, rather than CPUs. For a piece of application software, the tool creates 'a fingerprint of instructions and a model that's independent of hardware, from which we can predict power usage', according to Nick Johnson, Applications Consultant at the Edinburgh Parallel Computing Centre (EPCC) in Scotland, which is coordinating the project.




'Once you have a good model, you can plug in parameters for the target architecture that you're thinking of running it on.'

He stressed that there is a trade-off between performance (in terms of run-time for an application) and energy efficiency. 'For some fingerprints, you might find that performance is worse, but that may be acceptable if there's a saving in power. You might be willing to have code take 50 per cent longer to run, if there is a 50 per cent saving in power.'
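As a rough illustration of the kind of hardware-independent model Johnson describes – the operation classes, energy figures and throughput numbers below are invented for the sketch, not taken from the Adept tool – a fingerprint of instruction counts can be combined with per-architecture parameters to compare candidate targets, and to put numbers on the run-time versus power trade-off he mentions:

# Illustrative sketch only: a hardware-independent 'fingerprint' of an
# application plus hypothetical per-architecture parameters. All numbers
# are invented for illustration, not Adept's measurements.

fingerprint = {"flop": 8e9, "mem_access": 2e9, "branch": 5e8}  # operation counts

architectures = {
    "cpu_openmp": {
        "energy_per_op": {"flop": 0.4e-9, "mem_access": 2.0e-9, "branch": 0.3e-9},  # joules
        "ops_per_sec":   {"flop": 5e10,   "mem_access": 1e10,   "branch": 5e10},
    },
    "gpu_cuda": {
        "energy_per_op": {"flop": 0.1e-9, "mem_access": 1.5e-9, "branch": 0.6e-9},
        "ops_per_sec":   {"flop": 4e11,   "mem_access": 3e10,   "branch": 2e10},
    },
}

def predict(fingerprint, arch):
    # Crude model: total energy is the sum over operation classes, and run-time
    # treats each class as if it executed serially at its peak rate.
    energy = sum(n * arch["energy_per_op"][op] for op, n in fingerprint.items())
    runtime = sum(n / arch["ops_per_sec"][op] for op, n in fingerprint.items())
    return energy, runtime

for name, arch in architectures.items():
    energy, runtime = predict(fingerprint, arch)
    print(f"{name}: ~{energy:.1f} J in ~{runtime:.2f} s "
          f"(average power ~{energy / runtime:.0f} W)")

# Johnson's trade-off in numbers: energy = power x time, so running 50 per cent
# longer at half the power still uses only 1.5 x 0.5 = 75 per cent of the energy.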


However, David Lecomber, CEO of Allinea, differed somewhat in his view: 'You would not want to optimise for energy, if it cost you time as well.' If you slow the run-time, then users will complain, he noted drily. However, there are many applications for which you can slow down and save power without lengthening the run time, he continued. For example, most processors spend a lot of their time idle, waiting for data to arrive from memory: they do not need to be working at full tilt, and so they can work more slowly, decreasing energy consumption, as they match the speed of the data. Storage is an even more extreme example, because it is slower still in terms of data transfer.

In this way, simply tuning an application program for better performance can also lead to lower energy costs. If the job takes less time to complete, then the overall energy consumption will be lower. After all, he remarked, the end-goal of all this is 'science/Watt' – the amount of useful science (or engineering) that can be got out of a supercomputer, rather than the number of computations. Supercomputing sites 'need to look at how their applications are running on their system', he said.
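To put rough numbers on Lecomber's two points – the figures below are invented, not measurements from Allinea's tools – energy is simply average power multiplied by run-time, so a shorter run at the same power costs less energy, and a memory-bound code can be clocked down with very little run-time penalty because the cores are mostly waiting on data anyway:

# Back-of-the-envelope sketch with invented numbers: energy = average power x time.

def energy_joules(avg_power_watts, runtime_seconds):
    return avg_power_watts * runtime_seconds

# Tuning for performance also saves energy: the same 200 W node finishing
# in 80 s instead of 100 s uses a fifth less energy.
print(energy_joules(200, 100))  # baseline run:  20000 J
print(energy_joules(200, 80))   # tuned run:     16000 J

# A memory-bound code can be clocked down with almost no run-time cost,
# because the cores are mostly waiting for data: here the run stretches
# from 100 s to 102 s while average power drops from 200 W to 150 W.
print(energy_joules(200, 100))  # full clock:    20000 J
print(energy_joules(150, 102))  # reduced clock: 15300 J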


Energy-efficient HPC and mobile phones

Over many years in HPC, parallelisation and concurrent computation have been used to decrease the overall time that an application requires to run to completion. It's widely accepted that recent advances in low-power multi-core processors from the mobile phone and embedded markets have widened the choice of hardware architectures available to HPC. What is perhaps less appreciated is that advances in such processors are also forcing programmers of embedded systems to use the sorts of parallel computing techniques that are familiar to HPC programmers.





