HPC 2013-14 | HPC trends

The market at a glance


John Barr explores the biggest trending topics in HPC through 2013 and 2014


Many complex factors are driving developments in the HPC industry, but two fundamental issues underpin changes to the architecture and use of HPC systems: the need to reduce power consumption, and the emergence of big data, which brings the potential to deliver value with HPC to new market segments.

A few decades ago no-one worried about the power consumption of computers – it was so low it simply did not matter. However, as consumption has risen, driven by device clock speeds moving from kilohertz through megahertz to gigahertz, the cost of powering and cooling a large system for three years now exceeds its initial purchase price. Had the industry continued to be driven by ever-increasing clock speeds, high-end HPC systems would eventually have needed their own nuclear reactor, as the power requirement would have become unsustainable.


HPC and big data

HPC used to be a niche area, of interest only to specialists, but it is now going mainstream. HPC delivers solutions in a wide range of market segments including climate modelling, defence, finance, government (including the secret services), life sciences, and manufacturing. For centuries, scientific research has had two branches: theory and experiment. A third branch, simulation using HPC systems, now carries equal weight.

You might think that big data is a new problem, but it is one the HPC industry has been grappling with for many years. Indeed, Tony Hey and Anne Trefethen equated HPC and the 'data deluge' when writing about the challenges facing the UK's e-Science programme in 2003. HPC has always dealt with very large data sets. Now that much processing happens at internet scale and in the cloud, 'big data' has gone mainstream, and HPC tools, techniques and approaches must also go mainstream if this data is to be collected, analysed, understood and stored. Processing big data often requires the performance or capacity available in HPC systems, and many of the challenges of cloud computing have already been addressed in the HPC world by the related field of Grid computing. There is more discussion of HPC in the cloud in the October/November 2013 issue of Scientific Computing World.


What does this mean for processors?

Reducing power consumption has far-reaching implications. The evolution of HPC systems from one generation to the next has often been more of the same, only bigger and faster, but the problem is that faster means hotter – and hotter is a non-starter. The logical conclusion is that the next generation of HPC systems needs to be slower, which isn't what HPC is all about. The way to square this circle is to use many more multicore processors that operate more efficiently than the previous generation, running at a slower clock speed but with much more parallelism. Mainstream multicore x86 processors have been with us for less than a decade, yet the industry has moved from what we now call a single-core processor, through dual- and quad-core parts, to 10 cores (supporting 20 threads) in the case of Intel, or 16 cores in the case of AMD. Comparing cores from different vendors can, however, be like comparing apples and oranges: the throughput delivered by a top Intel processor is broadly similar to that of a top AMD processor, despite the different core counts.
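The argument for trading clock speed for core count can be made concrete with the standard first-order model of CMOS power consumption (a textbook rule of thumb, not a formula from this article): dynamic power scales linearly with frequency and quadratically with supply voltage, while aggregate throughput scales with the product of core count, frequency and instructions per cycle.

$$P_{\text{dyn}} \approx \alpha C V^{2} f, \qquad \text{throughput} \propto N_{\text{cores}} \cdot f \cdot \text{IPC}$$

Because a lower clock also permits a lower supply voltage, power per core falls faster than linearly as $f$ is reduced, so spending the saved watts on extra cores buys more throughput per watt, provided the application exposes enough parallelism to keep those cores busy.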


The pace at which the core count of mainstream processors is increasing has been insufficient to satisfy many HPC users, so compute accelerators are seeing increased use. The idea of a compute accelerator goes back to the 1980s, when attached processors from the likes of Floating Point Systems (now part of Cray) accelerated mid-range VAX computers; the idea has now been delivered in modern technology packages by Nvidia (GPUs) and Intel (Xeon Phi). Compute accelerators lack many of the capabilities of mainstream processors, but they are exceedingly good at number-crunching, which is achieved through a high degree of parallelism (61 cores and four threads per core in the case of the Xeon Phi).
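To illustrate the kind of parallelism these accelerators reward, here is a minimal CUDA SAXPY sketch (an illustrative example added for this edition, not code from the article): every element of the vectors is handled by its own lightweight GPU thread, launched many thousands at a time.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// SAXPY: y = a*x + y, computed with one GPU thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // guard the final, partially filled block
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;           // one million elements
    const size_t bytes = n * sizeof(float);

    float *x, *y;
    cudaMallocManaged(&x, bytes);    // unified memory keeps the sketch short
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    saxpy<<<blocks, threadsPerBlock>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();         // wait for the kernel before reading y

    printf("y[0] = %.1f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

On a Xeon Phi the same loop would typically be written with OpenMP threads and vectorisation rather than CUDA, but the requirement is identical: the work must decompose into many independent, fine-grained operations before an accelerator can deliver its number-crunching advantage.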



