The new realism: software runs slowly on supercomputers


No supercomputer runs real applications faster than five per cent of its design speed. Robert Roe and Tom Wilkie report on recalibrating expectations of exascale and on efforts to tune software to run faster


The speed with which supercomputers process useful applications is more important than rankings on the Top500, but exascale computers are going to deliver only one or two per cent of their theoretical peak performance when they run real applications. Both the people paying for, and the people using, such machines need to be realistic about just how slowly their applications will run.

Tuning applications to run efficiently on massively parallel computers, and the inadequacy of the traditional Linpack benchmark as a measure of how real applications will perform, were persistent themes throughout the ISC High Performance Conference and Exhibition in Frankfurt in July.

Bronis de Supinski, chief technology officer at the Livermore Computing Center, part of the US Lawrence Livermore National Laboratory, told the conference: 'We don't care about the Flops rate, what we care about is that you are actually getting useful work done.'

But according to Jack Dongarra from the University of Tennessee: 'You're not going to get anywhere close to peak performance on exascale machines. Some people are shocked by the low percentage of theoretical peak, but I think we all understand that actual applications will not achieve their theoretical peak.'


Dongarra's judgement was echoed in a workshop session by Professor Michael Resch, director of the HLRS supercomputer centre in Stuttgart, Germany. The HLRS was not targeting exascale per se, he said, because the centre's focus was on the compute power it could actually deliver to its users. According to Resch, simple arithmetic meant that if an exascale machine achieved a sustained performance of only 1 to 3 per cent, then this would deliver 10 to 30 Petaflops. So buying a 100 Petaflop machine that was 30 per cent efficient – which should be achievable, he claimed – would deliver the same compute power, for a much lower capital cost and about one tenth the energy cost of an exascale machine.
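To spell out the arithmetic Resch was relying on, the only extra fact needed is that one Exaflop equals 1,000 Petaflops; the figures below simply restate his percentages:

\[
1{,}000~\mathrm{Pflop/s} \times 0.01 = 10~\mathrm{Pflop/s}, \qquad
1{,}000~\mathrm{Pflop/s} \times 0.03 = 30~\mathrm{Pflop/s}, \qquad
100~\mathrm{Pflop/s} \times 0.30 = 30~\mathrm{Pflop/s}.
\]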


The Jülich Supercomputing Centre (JSC) in Germany is tackling the issues of software scalability and portability by setting up what it calls 'the High-Q Club', Dr Dirk Brömmel, a research scientist at JSC, told the session entitled 'Application extreme-scaling experience of leading supercomputing centres'. 'We want to spark interest in tuning and scaling codes,' said Brömmel. 'The work does not stop there; the ultimate goals of the programme are to encourage our users to try and reach exascale readiness.'


At the Lawrence Livermore National Laboratory, effort is going into developing APIs and tools to create applications and to optimise how code runs on a cluster. De Supinski said that LLNL's plan was to use a programming tool developed at Livermore called Raja. 'The idea of Raja is to build on top of new features of the C++ standard. The main thing that we are looking at is improving application performance over what we are getting on Sierra, Sequoia, and Titan. Application performance requirements are what we really care about.'
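Raja's exact interface is beyond the scope of this article, but the general pattern de Supinski described – a loop body written once as a C++ lambda, with a templated 'execution policy' deciding how it runs – can be sketched roughly as follows. The names forall, seq_policy and omp_policy below are illustrative stand-ins, not taken from the real library:

// Illustrative sketch only -- not the actual Raja API. It shows the pattern
// of building loop abstractions on newer C++ features (lambdas, templates).
#include <cstddef>
#include <iostream>
#include <vector>

struct seq_policy {};   // run the loop serially
struct omp_policy {};   // run the loop with OpenMP threads, if enabled

// Sequential back-end
template <typename LoopBody>
void forall(seq_policy, std::size_t n, LoopBody body) {
    for (std::size_t i = 0; i < n; ++i) body(i);
}

// OpenMP back-end: same loop body, different execution strategy
template <typename LoopBody>
void forall(omp_policy, std::size_t n, LoopBody body) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(n); ++i) body(static_cast<std::size_t>(i));
}

int main() {
    const std::size_t n = 1000000;
    std::vector<double> x(n, 1.0), y(n, 2.0);
    const double a = 3.0;

    // The loop body is written once, as a lambda...
    auto axpy = [&](std::size_t i) { y[i] += a * x[i]; };

    // ...and the choice of policy, not the application code, decides how it runs.
    forall(seq_policy{}, n, axpy);
    // forall(omp_policy{}, n, axpy);   // same body on an OpenMP back-end

    std::cout << "y[0] = " << y[0] << '\n';   // 2.0 + 3.0 * 1.0 = 5
    return 0;
}

The attraction of this style is that tuning for a new machine becomes a matter of adding or selecting a back-end, rather than rewriting every loop in the application.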


The US Coral programme means that applications will be running on systems with a Linpack performance well in excess of 100 petaflops within the next couple of years, and de Supinski highlighted that the US national labs will take account of memory requirements as much as processing speed in their approach to the problem of tuning software applications to run on the new hardware. De Supinski said: 'We did have a peak performance


On the exhibition show floor at ISC High Performance, vendors were eager to display their wares. As part of its international export drive (reported on page 18), Sugon explained its liquid-cooled technology to the Editor




Sugon

