scaling across multiple nodes,’ Kim concluded. Snell said: ‘For clusters,
everything was really converted over into MPI. That was not the dominant programming model before we went to clusters, and people went through a painful conversion process to get applications over into MPI. Even today, a lot of applications either are not in MPI or they do not scale as well with MPI as they did with other models.’ Snell continued: ‘How does MPI
evolve? Even without accelerated components like Xeon Phi or a GPU, just having a multicore processor on the x86 side there is a question of whether the existing MPI programming model is sufficient to get enough parallelism capabilities out of those chips.’
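The answer most often discussed for that question is a hybrid model: MPI between nodes, with threading inside each node to exploit the cores of a multicore chip. The sketch below is a minimal illustration of that hybrid MPI+OpenMP pattern, not something Snell prescribes; the problem size and the reduction it computes are arbitrary placeholders.

```c++
// Minimal hybrid MPI+OpenMP sketch (illustrative only): MPI ranks communicate
// between nodes, while an OpenMP parallel loop uses the cores within each rank.
#include <mpi.h>
#include <omp.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    // Request thread support so OpenMP regions can coexist with MPI calls.
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank owns a slice of the work; threads share the loop within the node.
    const int n = 1 << 20;                 // illustrative problem size
    std::vector<double> x(n, 1.0);
    double local_sum = 0.0;

    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < n; ++i) {
        local_sum += 2.0 * x[i];
    }

    // Combine the per-rank results across the whole machine.
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        std::printf("ranks=%d threads/rank=%d sum=%f\n",
                    size, omp_get_max_threads(), global_sum);
    }
    MPI_Finalize();
    return 0;
}
```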
Tuning applications
At one of the sessions on the last day of this year’s ISC High Performance conference in Frankfurt, the focus was on tuning applications to run on massively parallel computers. Bronis de Supinski, chief technology officer at the Livermore Computing Center, part of the US Lawrence Livermore National Laboratory, highlighted the LLNL’s strategy for dealing with the programming conundrum it faces with its upcoming supercomputer, Sierra, funded through the US DOE’s Coral programme. Supinski said: ‘The main thing that we are looking at is improving application performance on Sierra over what we are getting on Sequoia and Titan.’
THE MAJOR ROADBLOCK WILL BE THE SCALABILITY OF SOFTWARE, RATHER THAN ENERGY COSTS

Supinski, who also chairs the OpenMP Language Committee, explained that the LLNL was planning to use a combination of Open MPI to provide inter-node parallelism and OpenMP to address node-level performance. While these tools will be used to support concurrency and node-level performance, Supinski remarked that on top of them the LLNL had developed a programming tool called RAJA.

‘The idea of RAJA is to build on top of new features of the C++ standard.’ The RAJA abstraction layer is designed to simplify porting C/C++ code to new architectures by reducing developer disruption, which helps to support the need for interoperability between these new systems and older applications.
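To make the abstraction idea concrete, the sketch below shows the general pattern that RAJA-style layers build on: the loop body is written once as a C++ lambda, and a compile-time execution policy decides how it runs. This is a simplified illustration of the approach, not RAJA’s actual interface; the policy names and `forall` function here are invented for the example.

```c++
// Simplified illustration of a RAJA-style abstraction layer (NOT RAJA's real API):
// the application writes one loop body as a lambda, and an execution policy
// chosen at compile time decides whether it runs serially or with OpenMP threads.
#include <cstdio>
#include <type_traits>
#include <vector>

struct seq_exec {};   // serial back-end
struct omp_exec {};   // OpenMP back-end (the pragma is ignored if OpenMP is disabled)

template <typename Policy, typename Body>
void forall(int begin, int end, Body&& body) {
    if constexpr (std::is_same_v<Policy, omp_exec>) {
        #pragma omp parallel for
        for (int i = begin; i < end; ++i) body(i);
    } else {
        for (int i = begin; i < end; ++i) body(i);
    }
}

int main() {
    const int n = 1000;
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);

    // Porting to a different machine means changing only the policy, not the loop body.
    forall<seq_exec>(0, n, [&](int i) { c[i] = a[i] + b[i]; });

    std::printf("c[0] = %f\n", c[0]);
    return 0;
}
```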
Supinski said that although the LLNL did have a target peak performance figure, it was not the primary objective of developing the new system: ‘That is a very low bar. We will actually pretty well exceed that,’ he said. ‘We asked for an aggregate
memory of 4 PB and what we really care about is that we have at least one GB per MPI process. It turns out, hitting the four petabytes was the most difficult requirement that we had.’ He went on to explain that memory budgets and memory pricing were a hindrance in achieving this requirement. ‘In my
opinion, it is not power or reliability that are the exascale challenges, it’s programmability of complex memory hierarchies,’ Supinski said. He added: ‘We don’t care about FLOPS rate; what we care about is that you are actually getting useful work done.’ This is the kind of view that will become increasingly familiar as computing moves to a more data-centric model. In many cases, future systems will rely more on data than on pure number-crunching power, because of the increasing size of datasets. In turn, this will drive memory size and the development of the complex memory hierarchies required to handle the flow of that data.
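For a sense of the scale those two requirements imply (a back-of-the-envelope figure, not one quoted by Supinski), an aggregate of 4 PB at a minimum of 1 GB per MPI process leaves room for roughly four million ranks:

$$ N_{\text{ranks}} \;\le\; \frac{4\,\mathrm{PB}}{1\,\mathrm{GB\ per\ process}} \;=\; \frac{4\times10^{15}\,\mathrm{B}}{10^{9}\,\mathrm{B}} \;=\; 4\times10^{6}\ \text{MPI processes}. $$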
Is Open Source the solution?
It is far too early to predict accurately which programming models will be most popular for each architecture. The Sierra system will be one of the largest supercomputers in the world and will be based on IBM’s OpenPower platform using Nvidia Volta GPUs. The work done on these IBM