

➤ extracting meaning about system and application performance. Performance evaluation experts describe a term called proportionality, which means that if the metric increases by 20 per cent, then the real performance of the system should increase by a similar proportion. This cannot be represented by a single test.

'My attitude hasn't changed: Linpack is useful as a diagnostic, but it's a very limited indicator of supercomputer performance, particularly within the rules of the Top500 list. A metric has to represent complex components for applications; think of a better metric that comprises enough tests to represent all the major algorithms and uses, such as a composite measure. This will benefit the community,' said Kramer.

A composite metric would test 7 to 12 application characteristics, for example, and would be a more accurate and representative measure, according to Kramer. These tests generate representative numbers which would feed into an overall metric of how well the software problems are solved.
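Kramer does not spell out how the individual tests would be combined. One common convention for composite benchmarks is a geometric mean of per-test ratios against a reference system; the Python sketch below, with entirely hypothetical test names and figures, shows the idea and why a geometric mean preserves the proportionality described above.

```python
from math import prod

# Hypothetical per-test results (higher is better), e.g. Gflop/s or problems
# solved per hour, for a candidate system and a reference system. The test
# names and numbers are illustrative only, not part of any real suite.
reference = {"dense_linear_algebra": 120.0, "fft": 45.0, "sparse_solver": 30.0,
             "structured_grid": 60.0, "graph_traversal": 8.0, "io_throughput": 12.0}
candidate = {"dense_linear_algebra": 300.0, "fft": 70.0, "sparse_solver": 40.0,
             "structured_grid": 110.0, "graph_traversal": 9.0, "io_throughput": 20.0}

def composite_score(candidate, reference):
    """Geometric mean of per-test speedups over the reference system.

    A geometric mean keeps the composite proportional: if every test improves
    by 20 per cent, the composite also rises by 20 per cent, and no single
    test can dominate the way Linpack dominates the Top500 ranking.
    """
    ratios = [candidate[name] / reference[name] for name in reference]
    return prod(ratios) ** (1.0 / len(ratios))

print(f"composite speedup vs reference: {composite_score(candidate, reference):.2f}x")
```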


The Standard Performance Evaluation Corporation (SPEC) benchmark is one such metric: it enables computer scientists to understand the in-depth behaviour of a system as a whole when running an application. Kramer recalled a recent presentation in which NASA used SPEC performance figures to get a realistic idea of how its systems perform. However, because of SPEC's fixed-size approach, and because it was originally designed for single-processor workstations, it cannot maintain a single performance database stretching back over two decades from which to draw performance trends.


The Sustained Petascale Performance (SPP) metric is another tool, used on the Cray Blue Waters system by Kramer and his colleagues, that gives them a more detailed understanding of each application's performance and workload, and of the overall sustained performance of the entire system.
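The article does not give the SPP formula. As a rough, hedged illustration of the sustained-performance idea only, the Python sketch below computes a per-application rate from a fixed reference operation count and the measured time to solution, then weights the applications by their share of the expected workload. The application names, counts, times and weights are invented, and the real SPP measurement on Blue Waters uses its own counting and weighting rules.

```python
# Hypothetical workload: (name, reference operation count [flop],
# measured wall-clock time [s], share of the expected workload).
apps = [
    ("turbulence",    4.0e18, 3600.0, 0.40),
    ("cosmology",     2.5e18, 2700.0, 0.35),
    ("molecular_dyn", 1.2e18, 1800.0, 0.25),
]

def sustained_performance(apps):
    """Workload-weighted sustained rate across a set of full applications.

    Each application's rate is the operation count of a fixed reference
    implementation divided by the measured time to solution, so algorithmic
    shortcuts cannot inflate the score; the rates are then weighted by how
    much of the expected workload each application represents.
    """
    total = 0.0
    for name, ref_ops, seconds, weight in apps:
        rate = ref_ops / seconds          # sustained flop/s for this application
        total += weight * rate
    return total

print(f"sustained performance: {sustained_performance(apps) / 1e15:.3f} Pflop/s")
```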


The fact is that if a new benchmark is introduced that is drastically different from today's, it is understandable that many will hesitate to give the current one up and invest in a brand new benchmark. It would take decades to build up the same record of metrics.




'The funds for supercomputers are usually provided by governments who want to show a prominent place in computing, as measured by the Top500 ranking. Even though the procurement people often know better, they have little choice but to pursue a system that is cost-effective for Linpack, but very cost-ineffective for everything else,' said Gustafson.


More radical approaches
Gustafson's approach is to 'boil the ocean' and get people to change the way they think about the purpose of computing, and how they measure results, speed, and the quality of the answers obtained. Resources could instead be focused on increasing the communication speed between servers, or within the server; then the speed of real applications would increase proportionately.

Gustafson has two potential solutions, one of which is the Fastest Fourier Transform (FFT) benchmark, which could be used as part of a larger composite metric of the kind Kramer describes. FFT has historical data going back to the 1960s and is scalable. Unlike Linpack, it is communication-intensive and more representative of real, present-day workloads. The catch is that someone has to do the hard work of mining all the historical FFT results to create a usable database comparable to the Top500.
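To give a flavour of what a single FFT benchmark entry records, the sketch below times a one-node, one-core transform with NumPy and converts the time into a nominal rate using the conventional 5N log2 N operation count. A real, scalable FFT benchmark of the kind Gustafson describes runs distributed transforms whose all-to-all communication stresses the network, which is precisely what a single-node toy like this cannot capture.

```python
import time
import numpy as np

def fft_benchmark(n=2**22, repeats=5):
    """Time a 1-D complex FFT and report a nominal rate in Gflop/s.

    The 5 * n * log2(n) operation count is the conventional figure used to
    compare FFT implementations; the absolute number matters less than being
    able to compare it across machines and across decades of records.
    """
    x = (np.random.standard_normal(n)
         + 1j * np.random.standard_normal(n)).astype(np.complex128)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        np.fft.fft(x)
        best = min(best, time.perf_counter() - start)
    nominal_flops = 5.0 * n * np.log2(n)
    return nominal_flops / best / 1e9

print(f"FFT rate: {fft_benchmark():.2f} Gflop/s (single node, one core)")
```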


A more fundamental issue, according to Gustafson, is that Linpack is a physicist's measure: great for testing numerical calculation capability, which is what physicists originally wanted supercomputers for. But science's shift to a more data-centric approach in its computing needs has also been mirrored by the rise of commercial 'Big Data'.

In his book on the future of numerical computing, The End of Error: Unum Computing, published by CRC Press in February this year, Gustafson advocates a radically different approach to computer arithmetic: the universal number (unum). The unum encompasses all IEEE floating-point formats as well as fixed-point and exact integer arithmetic. Gustafson believes this new number type is a 'game changer': it obtains more accurate answers than floating-point arithmetic yet uses fewer bits in many cases, saving memory, bandwidth, energy, and power.
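Gustafson's unum format is far more involved than anything that fits in a few lines, but one of its ingredients, the 'ubit' that flags whether a stored value is exact or lies in an open interval up to the next representable value, can be shown with a toy Python sketch. Nothing below reflects the actual unum bit layout; it is only an illustration of why tracking exactness lets short representations stay honest about what they know, which is where the claims of fewer bits and more trustworthy answers come from.

```python
from fractions import Fraction

def to_toy_unum(value, max_fraction_bits=8):
    """Toy 'ubit' illustration for non-negative values.

    Store a value with only as many fraction bits as it needs, plus a flag
    saying whether it is exact or lies strictly inside the open interval up
    to the next representable value. Not Gustafson's actual format.
    """
    v = Fraction(value)
    for bits in range(max_fraction_bits + 1):
        scaled = v * 2**bits
        if scaled.denominator == 1:            # exactly representable: no ubit
            lo = Fraction(scaled.numerator, 2**bits)
            return scaled.numerator, bits, True, (lo, lo)
    # Not exactly representable: truncate and set the ubit; the true value
    # lies strictly between lo and lo plus one unit in the last place.
    scaled = int(v * 2**max_fraction_bits)
    lo = Fraction(scaled, 2**max_fraction_bits)
    hi = Fraction(scaled + 1, 2**max_fraction_bits)
    return scaled, max_fraction_bits, False, (lo, hi)

for x in [Fraction(3, 4), Fraction(1, 3)]:
    n, bits, exact, (lo, hi) = to_toy_unum(x)
    tag = "exact" if exact else f"inexact, in ({float(lo):.4f}, {float(hi):.4f})"
    print(f"{x} -> {bits} fraction bits, {tag}")
```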


Efficiency improvements can be anywhere between 1.2 and 8 times, depending on the application, according to Gustafson. A unum library created in C, C++ or Python could provide vendors with access to these benefits.

Such ideas could nudge the benchmarking of supercomputers and their applications on to a very different track. Measuring a supercomputer's speed is going to remain a challenge for some time to come.





