high-performance computing


many HPC centres can live with occasional unscheduled downtime, so it may be better to provide UPS protection only for management nodes and storage. If a data centre is purpose-built to house a specific machine it can be very focused and efficient. However, many HPC facilities need to support more than one large system, as well as a range of smaller systems exploiting different technologies. A modern high-end HPC system is probably liquid cooled, but older or smaller systems may use a combination of liquid cooling, air cooling and hybrid options (such as air-cooled components with liquid-cooled rear doors). In these mixed environments, it can be difficult to run a data centre at a PUE of better than 1.5.
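For readers less familiar with the metric: PUE (power usage effectiveness) is simply the total power drawn by the facility divided by the power delivered to the IT equipment itself, so a PUE of 1.5 means half a watt of cooling and distribution overhead for every watt of compute. A minimal sketch of the calculation, with purely illustrative figures:

    # Power usage effectiveness = total facility power / IT equipment power.
    # The numbers below are illustrative, not measurements from any real facility.
    it_load_mw = 10.0       # power drawn by the HPC systems themselves
    cooling_mw = 3.8        # chillers, pumps, fans
    distribution_mw = 1.2   # UPS losses, power distribution, lighting

    pue = (it_load_mw + cooling_mw + distribution_mw) / it_load_mw
    print(f"PUE = {pue:.2f}")   # 1.50 in this example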


When a parallel application executes across all of a large HPC cluster, the power draw can be dramatic. An HPC system moving from just ticking over running the operating system to being busy can double the power required, while hammering away on a computationally very intensive parallel application can take a further 50 per cent more power. This behaviour is very unlike the use-pattern in a commercial data centre or cloud system, which tends to even out spikes because the workload is generated by many users running many applications rather than by a single user running a single application. This variable power usage is an issue for two reasons.
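To put that swing in concrete numbers, the short calculation below assumes a system that idles at 5 MW (an arbitrary figure chosen only for illustration) and applies the doubling and the extra 50 per cent described above:

    # Illustrative power swing for a large HPC system.
    # The 5 MW idle figure is an assumption for the example, not a quoted value.
    idle_mw = 5.0                 # ticking over, running only the operating system
    busy_mw = idle_mw * 2.0       # going busy can double the power required
    peak_mw = busy_mw * 1.5       # a very intensive parallel job adds another 50 per cent

    print(f"idle {idle_mw:.0f} MW -> busy {busy_mw:.0f} MW -> peak {peak_mw:.0f} MW")
    # idle 5 MW -> busy 10 MW -> peak 15 MW: a 10 MW swing that the cooling plant
    # and the power contract both have to absorb at short notice.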


Terms used to describe HPC performance

• flop/s: floating point operations per second (i.e. arithmetic calculations per second)
• Megaflop/s: 10⁶ flop/s
• Gigaflop/s: 10⁹ flop/s
• Teraflop/s: 10¹² flop/s
• Petaflop/s: 10¹⁵ flop/s
• Petascale: a machine with performance in excess of one petaflop/s (or a storage system of more than one petabyte)
• Exaflop/s: 10¹⁸ flop/s
• Exascale: a machine with performance in excess of one exaflop/s (or a storage system of more than one exabyte)
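As a quick illustration of how those prefixes translate into raw numbers, the snippet below converts a rated performance figure into plain flop/s (the 1.2 petaflop/s value is an arbitrary example, not a particular machine):

    # Prefix exponents matching the box above.
    PREFIX_EXPONENT = {"mega": 6, "giga": 9, "tera": 12, "peta": 15, "exa": 18}

    def to_flops(value, prefix):
        """Convert a figure such as 1.2 petaflop/s into plain flop/s."""
        return value * 10 ** PREFIX_EXPONENT[prefix]

    print(f"{to_flops(1.2, 'peta'):.3e} flop/s")   # 1.200e+15 flop/s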




The first issue is cooling. In a commercial data centre that allocates part of its capacity to HPC, this variability can cause physical hot spots, while for large HPC data centres the cooling capacity required for the whole facility can change significantly at short notice. The days of running the cooling system all of the time at the rate required to cool the system when it is running at full tilt are at an end, as the cost is too high. More dynamic cooling systems are now required to adapt to the changing workload.

The second issue is the power supply itself. This is very important for large HPC data centres, which must work hand in glove with their power providers to ensure they have the power that they need, when they need it, at a price that is affordable. Designing a contract that allows such flexibility, and remaining within the confines of that contract in order to ensure that you have the required power at the required price, can be a difficult balancing act. High-end HPC data centres today have power supplies in excess of 10 MW, and that requirement is only going to increase – at least in the short- to mid-term.
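What 'more dynamic' cooling looks like in practice can be sketched as a control loop that follows the measured IT load rather than running the plant flat out; the function names, margins and ramp limit below are illustrative assumptions, not a description of any particular facility's building-management system:

    # Sketch of load-following cooling control (all numbers are assumptions).
    def required_cooling_kw(it_load_kw, overhead_factor=1.1):
        """Size cooling to the measured IT load plus a safety margin,
        rather than to the theoretical peak of the machine."""
        return it_load_kw * overhead_factor

    def step(measured_it_load_kw, current_cooling_kw, ramp_limit_kw=2000.0):
        """Move cooling output towards the target, limited to what the
        plant can ramp in one control interval."""
        target = required_cooling_kw(measured_it_load_kw)
        delta = max(-ramp_limit_kw, min(ramp_limit_kw, target - current_cooling_kw))
        return current_cooling_kw + delta

    # Example: the machine ramps from ticking over to a full-system job and back.
    cooling = 5500.0   # kW of cooling currently being delivered
    for it_load in (5000.0, 10000.0, 15000.0, 15000.0, 10000.0):
        cooling = step(it_load, cooling)
        print(f"IT load {it_load:>6.0f} kW -> cooling {cooling:>6.0f} kW")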


Efficient cooling

Water is orders of magnitude more efficient than air at removing heat, and if warm water is used rather than cold water, there is no need to use chillers (at least for most of the year in most parts of the world). So the theoretically ideal way to cool the system is using warm water – but this can make the system infrastructure much more complex and therefore expensive.

For air-cooled systems it is important to separate the warm and cool areas in a data centre. Allowing warm and cool air to mix makes it more difficult to maintain the desired temperature for the supercomputer. While many modern data centres handle this very well, others do not.

Increasing the inlet water temperature can have a significant impact. In many countries, this allows the warmed water to be cooled by external air most of the time without the need for chillers, leading to a drop in PUE of more than 10 per cent.
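To give a rough sense of what a 10 per cent PUE improvement is worth, the example below assumes a 10 MW IT load and a starting PUE of 1.5 (both figures are illustrative only):

    # Illustrative effect of chiller-free cooling on PUE.
    it_load_mw = 10.0                            # assumed IT load
    pue_with_chillers = 1.5
    pue_free_cooling = pue_with_chillers * 0.9   # a drop of roughly 10 per cent

    saving_mw = it_load_mw * (pue_with_chillers - pue_free_cooling)
    print(f"New PUE {pue_free_cooling:.2f}, about {saving_mw:.1f} MW less facility overhead")
    # New PUE 1.35, about 1.5 MW less continuous overhead for the same compute.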


Some HPC systems use air cooling (and whatever cooling technology is used, it is still a good idea to keep the data centre at a temperature that is comfortable for humans to operate in). Alternative options include





