HIGH PERFORMANCE COMPUTING
Energy-efficient supercomputing
WITH A MYRIAD OF TECHNOLOGIES AVAILABLE TO HPC USERS, IT IS NOT ALWAYS CLEAR WHICH TECHNOLOGY PROVIDES THE MOST ENERGY-EFFICIENT SOLUTION FOR A GIVEN APPLICATION, FINDS ROBERT ROE
By increasing the energy efficiency of a supercomputer, scientists can save huge amounts of money over the total lifecycle of the system. With ever-increasing core counts and increasingly large supercomputers, the drive for greater computational power comes at a cost. By reducing the amount of energy spent on running these systems, HPC centres and academic institutions can put that investment into other areas that benefit scientific research.

While there has been much excitement about the use of new technologies, both in computing hardware and in cooling, the focus for most users is still on making the most of the more traditional resources available to them. Mischa van Kesteren, pre-sales engineer at OCF, states that when dealing with customers the focus is almost always on the utilisation of a cluster, and on helping them fit the technologies around the type of applications they are using, rather than simply selecting the most efficient technology on paper.

'I think the main step with a new build is to understand what your workload is going to look like. Energy efficiency comes down to the level of utilisation within the cluster,' said van Kesteren. 'There are definitely more energy-efficient architectures. In general, higher-core-count, lower-clock-speed processors tend to provide greater raw compute performance per watt, but you need to have an application that will parallelise. If you look at something like general-purpose GPUs, Nvidia likes to talk about how energy-efficient they are, and that is all well and good if you have an application that can use all those hundreds of cores at once.

'You need to understand your application as somebody that is coming into this from a greenfield perspective. If your application doesn't parallelise well, or if it needs higher-frequency processors, then the best thing you can do is pick the right processor, and the right number of them, so you are not wasting power on CPU cycles that are not being used,' van Kesteren continued.

The drive for energy efficiency in HPC is clear, as it not only reduces huge power costs but also delivers more scientific output for the money spent. However, energy efficiency is not always the primary concern when designing a new cluster, as many academic centres will focus on getting the most equipment they can for a given budget that fits into the power envelope available in their datacentre.

'Ultimately, computing is burning through energy to produce computational results. You cannot get away from the fact that you need to use electricity to produce results, so the best thing you can do is to try to get the most computation out of every watt you use,' said van Kesteren. 'That comes down to using your cluster to its maximum level, but then also making sure you are not wasting power.'

Cooling technology can also play a big role in energy efficiency, but some of these technologies require a specific infrastructure or datacentre design.

4 | Scientific Computing World | December 2019/January 2020
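The trade-off van Kesteren describes, that many slow cores only pay off when the application parallelises, can be sketched with Amdahl's law. The chip figures below are illustrative assumptions for two hypothetical processors with the same power budget, not real product specifications:

```python
# Sketch (not from the article): an Amdahl's-law view of why high-core-count,
# low-clock processors only win on performance per watt when the code parallelises.
# All chip numbers here are hypothetical, chosen to share one power budget.

def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup over a single core for a given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def perf_per_watt(clock_ghz: float, cores: int, tdp_watts: float,
                  parallel_fraction: float) -> float:
    """Relative throughput per watt: single-core speed scaled by Amdahl speedup."""
    return clock_ghz * speedup(parallel_fraction, cores) / tdp_watts

# Two hypothetical CPUs drawing the same 200W:
many_slow = dict(clock_ghz=2.0, cores=64, tdp_watts=200)  # high core count, low clock
few_fast = dict(clock_ghz=4.0, cores=8, tdp_watts=200)    # low core count, high clock

for frac in (0.50, 0.95, 0.99):
    a = perf_per_watt(parallel_fraction=frac, **many_slow)
    b = perf_per_watt(parallel_fraction=frac, **few_fast)
    winner = "64 slow cores" if a > b else "8 fast cores"
    print(f"parallel fraction {frac:.0%}: {winner} give more compute per watt")
```

With these assumed figures the fast, low-core-count part wins at a 50 per cent parallel fraction, while the 64-core part wins once 95 per cent or more of the work parallelises, which is the point about picking the right processor for the workload rather than the most efficient one on paper.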
@scwmagazine | www.scientific-computing.com