high-performance computing


more closely match a broad set of important applications, and to give computer system designers an incentive to invest in capabilities that will have an impact on the collective performance of these applications. Its creators suggest that the Linpack measure and HPCG might best be seen as 'bookends' of a spectrum: the likely speed of an application lies somewhere between the two, and the closer the two benchmarks are, the more balanced the system.

Although the Europeans may not have the investment to install new systems at the same rate as their American counterparts, they are also looking towards the exascale era of supercomputing. The High-Q Club is one example of the preparation taking place to focus application users on scaling their codes so that they can run across all of the JSC's IBM Blue Gene racks. Brömmel said: 'The trend towards much higher core counts seems inevitable. Users need to adapt their programming strategies.' Brömmel is heavily involved with software optimisation at the JSC, organising the JUQUEEN Porting and Tuning Workshops and also initiating the High-Q Club. He went on to explain that the JSC had


In his conference keynote address, Jürgen Kohler, senior manager, NVH CAE & vehicle concepts at Mercedes-Benz, warned that Europe is failing to persuade enough small and medium-sized companies to take advantage of high-performance computing and engineering simulation, even though HPC and computer-aided engineering are essential tools in modern industry.


figure in there, but that is a very low bar. We will actually pretty well exceed that.' He continued: 'We also asked for an aggregate memory of 4 PB, and what we really care about is that we have at least one GB per MPI process. It turns out hitting the four petabytes was the most difficult requirement that we had.' He went on to explain that memory budgets and memory pricing were a hindrance to achieving this requirement. 'In my opinion, it is not power or reliability that are the exascale challenges: it's programmability of complex memory hierarchies,' Supinski said.
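That kind of memory requirement is easy to sanity-check with back-of-the-envelope arithmetic. The short Python sketch below runs the check for a hypothetical machine; the node count, per-node memory, and MPI ranks per node are made-up placeholders for illustration, not Sierra's published configuration.

```python
# Back-of-the-envelope check of an aggregate-memory target against a
# per-MPI-process minimum. Every machine parameter here is a hypothetical
# placeholder, not Sierra's actual specification.
nodes = 4000                 # hypothetical node count
mem_per_node_gb = 1024       # hypothetical memory per node, in GB
ranks_per_node = 40          # hypothetical MPI ranks per node

aggregate_pb = nodes * mem_per_node_gb / 1e6   # 1 PB = 10^6 GB (decimal units)
gb_per_rank = mem_per_node_gb / ranks_per_node

print(f"aggregate memory: {aggregate_pb:.2f} PB   (target: 4 PB)")
print(f"memory per MPI rank: {gb_per_rank:.1f} GB  (target: at least 1 GB)")
```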


Dongarra issued his warning about how poor the performance of supercomputers actually is, compared to the theoretical performance as measured in the Top500 list, while discussing the results of a different benchmark for measuring the performance of supercomputers: the High Performance Conjugate Gradients (HPCG) Benchmark. As had been widely expected, when the Top500 list was announced on the opening day of the conference, Tianhe-2, the supercomputer at China's National University of Defence Technology, had retained its position as the world's No. 1 system for the fifth consecutive time. Tianhe-2 also took first place in the alternative metric for measuring the speed of supercomputers, the HPCG, announced on the last day of the full conference.
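To illustrate why the two rankings can tell such different stories, here is a rough sketch in Python with NumPy and SciPy (not the official benchmark codes) that times a Linpack-style dense solve against an HPCG-style conjugate-gradient solve on a sparse matrix. The problem sizes and flop counts are illustrative approximations, chosen only to contrast compute-bound and memory-bandwidth-bound work.

```python
# Minimal sketch contrasting the two workloads; not the official benchmarks.
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 4000  # illustrative dense problem size

# Linpack-style workload: solve a dense n-by-n system, roughly (2/3)*n^3 flops.
A = np.random.rand(n, n) + n * np.eye(n)    # diagonally dominant, so well conditioned
b = np.random.rand(n)
t0 = time.perf_counter()
np.linalg.solve(A, b)
t_dense = time.perf_counter() - t0
print(f"dense solve: roughly {(2 / 3) * n**3 / t_dense / 1e9:.1f} GFLOP/s")

# HPCG-style workload: conjugate gradients on a sparse SPD matrix (a 2-D Laplacian),
# dominated by sparse matrix-vector products with irregular memory access.
m = 700                                      # grid dimension; the matrix is m^2 x m^2
lap = (4 * sp.eye(m**2)
       - sp.kron(sp.eye(m), sp.eye(m, k=1) + sp.eye(m, k=-1))
       - sp.eye(m**2, k=m) - sp.eye(m**2, k=-m)).tocsr()
rhs = np.ones(m**2)
iters = 200                                  # fixed iteration budget; CG stops at maxiter here
t0 = time.perf_counter()
x_cg, info = spla.cg(lap, rhs, maxiter=iters)
t_cg = time.perf_counter() - t0
# Count only the matrix-vector products (2 flops per non-zero per iteration).
print(f"sparse CG matvecs: roughly {iters * 2 * lap.nnz / t_cg / 1e9:.1f} GFLOP/s")
```

On most machines the dense solve sustains a much higher fraction of peak than the sparse solver loop, and that gap is what the HPCG ranking is intended to expose.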


The twice-yearly Top500 list uses the widely accepted Linpack benchmark to monitor the performance of the fastest supercomputer systems. Linpack measures how fast a computer will solve a dense n-by-n system of linear equations. But the tasks for which high-performance computers are being used are changing, and so future computations may not be done with floating-point arithmetic alone. Consequently, there has been a growing shift in the practical needs of computing vendors and the supercomputing community, leading to the emergence of other benchmarks, as discussed by Adrian Giordani in 'How do you measure a supercomputer's speed?' (SCW, June/July 2015, page 22).

The HPCG (High Performance Conjugate Gradients) Benchmark project is one effort to create a more relevant metric for ranking HPC systems. HPCG is designed to exercise computational and data access patterns that




initiated the programme to build up a collection of software codes, from several scientific disciplines, that can be successfully and sustainably run on all 28 racks of the Blue Gene/Q system at JSC. This means that the software must scale massively, effectively using all 458,752 cores (28 racks of 1,024 nodes, each with a 16-core processor) with up to 1.8 million hardware threads (four per core). So far, 24 codes have been given membership of the High-Q Club. They range from physics, neuroscience, molecular dynamics, and engineering to climate and earth sciences. Two more have been accepted into the programme, although work has not yet finished on optimising these codes.

Developing understanding and expertise around a series of codes on a particular computing platform enables the researchers at an HPC centre to share in the experience and knowledge gained. This sentiment is shared by LLNL, which has chosen to set up a centre of excellence to aid in the development and porting of applications to Sierra, which is based on the IBM Power architecture and Nvidia GPUs. Supinski explained that the centre of excellence (COE) would be a key tool in exploiting these technologies to make the most of the next generation of HPC applications. 'That's a real key aspect of how we plan to support our applications on this system. This will involve a close working relationship between IBM and Nvidia staff, some of whom will actually be located at Livermore and Oak Ridge working directly with our application teams.'





