Houston, we have a problem

John Barr looks at some of the issues surrounding the future of data centres


The power efficiency of HPC systems is improving, but at a slower pace than computer performance is increasing. A new generation of HPC systems usually draws more power than its predecessor, even if it is more power-efficient. A major problem for the next generation of HPC data centres is how to power and cool the systems – both at the high end, where exascale systems are on the horizon, and at the mid-range, where petascale departmental systems will become a reality. Without significant changes to the technologies used, exascale systems will draw hundreds of megawatts. According to the report Growth in data centre electricity use 2005 to 2010 by Jonathan Koomey of Analytics Press, data centres were responsible for two per cent of all electricity used in the USA in 2010. So the power required to run HPC data centres is already a problem, and one that is only going to get worse.

We are also seeing a rise in the use of HPC in enterprise, e.g. the use of HPC tools and techniques to handle big data, and the processing of data at internet scale. New approaches are being applied to the design of HPC systems in order to improve their energy efficiency (e.g. the use of processors or accelerators that consume less power), but this adds to system complexity, makes the systems more difficult to program and offers only incremental improvements. The bottom line is that as HPC systems become ever more capable, they put increasing demands on the data centres in which they live.

Reducing the power consumed by HPC data centres has three components. The first is to ensure that the data centre itself is efficient, the second is to cool the HPC systems effectively, and the third is to design HPC systems that consume less power.

Efficient data centres

Ten years ago the electricity used to power a supercomputer was a relatively small line item in the facility costs. The cost of powering and cooling an HPC system during its lifetime is now of the same order of magnitude as the purchase cost of the system, so it has to be budgeted for during the procurement. It is not uncommon for an HPC data centre to draw 10 times as much power today as it did 10 years ago.

HPC data centres house supercomputers, but also much ancillary equipment that supports them. Once you have bought your power-efficient supercomputer, you need to ensure that the data centre as a whole is efficient. The Green Grid uses the power usage effectiveness (PUE) metric: the total facility energy divided by the energy used by the IT equipment. In an ideal world the total facility energy would be close to the power used by the IT equipment, giving a PUE of close to unity.
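In code, the metric is simply a ratio; a minimal sketch, where the kilowatt figures are illustrative rather than taken from any real data centre:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    if it_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_kw

# Illustrative figures: 1,000 kW of IT load, plus 500 kW of
# cooling, UPS losses and other facility overhead.
print(pue(1500.0, 1000.0))  # 1.5
```

A perfectly efficient facility would spend nothing beyond the IT load itself, giving `pue(x, x) == 1.0`.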

Flaws in the system

While PUE can be a useful metric, it is flawed. If an HPC system is updated and the revised system draws less power, but nothing else in the data centre changes, the PUE actually gets worse, despite the fact that less power is being used in total. So care must be taken when comparing PUE figures.
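The effect is easy to demonstrate numerically. In this sketch (all figures invented), an upgrade cuts the IT load while the facility overhead stays fixed, and the PUE worsens even though total consumption falls:

```python
def pue(it_kw: float, overhead_kw: float) -> float:
    """PUE = (IT power + facility overhead) / IT power."""
    return (it_kw + overhead_kw) / it_kw

overhead_kw = 500.0                   # cooling, UPS etc., unchanged by the upgrade
before_it, after_it = 1000.0, 800.0   # upgraded system draws less power

print(pue(before_it, overhead_kw))    # 1.5
print(pue(after_it, overhead_kw))     # 1.625 -- "worse", despite lower total draw
```

Total draw has fallen from 1,500 kW to 1,300 kW, yet the headline PUE figure has deteriorated, which is exactly the caveat raised above.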

The total facility energy covers things such as UPS, battery backup, and cooling. A PUE figure of around 2 is common, while some of the largest internet-scale companies and HPC data centres now claim PUE figures of better than 1.2.

The league table of the fastest supercomputers, the TOP500 list, ranks the fastest 500 HPC systems in the world (or at least those that have been made public), providing interesting information about the technology used to build each system, its size and performance, as well as the power it consumes.

An alternative supercomputer league table is the Green500 list, which ranks the most energy-efficient supercomputers in the world. Rather than focusing on peak
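The Green500 orders systems by energy efficiency – performance delivered per watt – rather than by raw speed. A minimal sketch of such a ranking, with invented system names and figures:

```python
# Rank hypothetical systems by energy efficiency (GFLOPS per watt)
# rather than raw performance, in the spirit of the Green500 list.
systems = [
    # (name, sustained performance in TFLOPS, power draw in kW)
    ("SystemA", 1000.0, 2000.0),
    ("SystemB", 400.0, 500.0),
    ("SystemC", 150.0, 300.0),
]

def gflops_per_watt(tflops: float, kw: float) -> float:
    """Convert TFLOPS to GFLOPS and kW to W, then take the ratio."""
    return (tflops * 1000.0) / (kw * 1000.0)

ranked = sorted(systems, key=lambda s: gflops_per_watt(s[1], s[2]), reverse=True)
for name, tflops, kw in ranked:
    print(f"{name}: {gflops_per_watt(tflops, kw):.2f} GFLOPS/W")
```

Note how SystemA, the fastest machine outright, is not the most efficient: SystemB delivers more useful work per watt and so tops this ranking.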

