high-performance computing

Accelerating the development of HPC

Robert Roe investigates how accelerators are driving the design of new architectures in HPC to solve the challenges of exascale computing


Since their introduction to HPC, accelerators have seen increasing success as more people have developed the knowledge and skills to use them, and programmers have adapted their previously serial code to run more efficiently in parallel. Two of the three next-generation US national laboratory supercomputers, funded through the US Department of Energy (DOE), will make use of Nvidia GPUs tightly integrated within the IBM Power architecture. In addition, the DOE’s FastForward2 project, funded in conjunction with the US National Nuclear Security Administration (NNSA), aims to develop extreme-scale supercomputer technology still further. The project awarded $100 million in contracts to five US companies, including Nvidia, AMD, and Intel.

These programmes, and similar projects in Europe, demonstrate that accelerators are a critical tool for HPC as it approaches the exascale era. Parallelising code and running it on power-efficient accelerators has become a necessity if we are to reach the ambitious power-consumption targets set out by the US and European governments.

Roy Kim, group product manager in Nvidia’s Tesla HPC business unit, stated: ‘It’s an interesting time right now for HPC. A few years ago, Nvidia GPUs were the only accelerators on the market and a lot of customers were getting used to how to think about and how to program GPUs. Now if you talk to most industry experts, they pretty much believe that the path to the future of HPC, the path to exascale, will be using accelerator technology.’

The internal processor die of the Intel Xeon Phi coprocessor product family, manufactured using 3D Tri-gate 22nm transistors

An ecosystem, not just a GPU

A large part of the success of GPUs has revolved around Nvidia and its ability to generate a large skill-base around its GPU programming language, Cuda. Not only did it have a head start due to its dominant position in the consumer GPU market, but it also invested heavily in higher education. This enabled the development of academic programmes such as the Cuda centres of excellence, in addition to GPU centres and other activities to increase adoption of the technology.

The end result of all this activity is that Nvidia managed to generate a software ecosystem based around students and computer scientists who understand Cuda and how to make the best use of Nvidia GPUs, which helped to spur technology adoption. Kim said: ‘The higher education community really fuels the rest of the industry. Whether it’s HPC or enterprise computing, the next wave of developers comes from higher education and we recognise that. We have these academic programmes, as well as a lot of resources invested, to ensure that parallel programming is taught on GPUs.’

Kim stressed that a lot of the success that Cuda has had is because it made parallel programming easier, so that developers were able to spend time optimising applications or creating new code rather than having to grapple with the concepts of parallel programming. However, Nvidia is not the only horse in the race today, as companies like AMD and Intel set their sights firmly on the server and HPC markets.


