supercomputing software challenges


The compute equivalent of the Mars landing


The ‘super’ in supercomputing also refers to the challenges we face, not only in hardware but in software too.


Paul Schreier explores the issues

‘Moving to exascale is the compute equivalent of the Mars landing and for me it’s just as exciting.’ This comment by Franz Aman, chief marketing officer at SGI, mirrors the general industry attitude that we not only face considerable work in hardware, but need a fundamentally new approach to software. Clearly, today’s software just won’t scale up to run on millions of cores, and it will also have to execute on the innovative hardware that systems will require once we reach exascale.


Taking an interesting approach are Thom H. Dunning, director at the US National Center for Supercomputing Applications (NCSA), and his team. When the Cray 1 was first introduced, computational scientists and engineers had to restructure and rewrite their applications to take advantage of its vector capabilities. These codes performed well, with some tweaking, on future Cray computers, but within a decade and a half computing technology was moving toward parallel systems. We are now at the end of the ‘traditional’ parallel computing era, and computational scientists and engineers are once again faced with the task of rewriting and re-designing their applications for new computing technologies.


What has really changed in the past 30 years is the intent of computer designers. In the 1970s, some of them, like Seymour Cray, focused on improving the performance of operations common in scientific computing, such as the linked triad. With the switch to commodity systems in the 1990s, the connection to scientific computing (a small market) was lost.
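
To make the term concrete, the linked triad (often called the STREAM triad today) adds one vector to a scaled copy of another, an operation the Cray 1 could chain through its vector pipelines so that results streamed out at close to one per clock. The sketch below, in C, is purely illustrative; the array length and the scalar are arbitrary values chosen for the example.

#include <stdio.h>
#include <stdlib.h>

/* Linked (STREAM) triad: a[i] = b[i] + q * c[i].
   Two floating-point operations per element, but three arrays streaming
   through memory, so on commodity processors the loop is usually limited
   by memory bandwidth rather than by arithmetic speed. */
static void triad(double *a, const double *b, const double *c,
                  double q, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        a[i] = b[i] + q * c[i];
}

int main(void)
{
    size_t n = 1u << 20;                /* illustrative array length */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < n; ++i) { b[i] = 1.0; c[i] = 2.0; }
    triad(a, b, c, 3.0, n);             /* every a[i] becomes 7.0 */
    printf("a[0] = %f\n", a[0]);

    free(a); free(b); free(c);
    return 0;
}

On a vector machine the multiply and the add are chained; on a cache-based commodity CPU the same loop spends most of its time waiting on the three memory streams, which is exactly the mismatch described below.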


Researchers had to not only restructure and rewrite their codes for new, ‘massively parallel’ computing technologies, but also deal with the limitations of commodity hardware, such as longer memory latencies and bandwidths that were low relative to the speed of the CPU’s arithmetic units.
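
A rough, roofline-style estimate shows how severe that imbalance is for a loop like the triad. The peak and bandwidth figures in the sketch below are assumptions picked only to illustrate the arithmetic, not measurements of any particular machine.

#include <stdio.h>

/* Roofline-style estimate for the triad a[i] = b[i] + q * c[i]:
   2 flops per element against 24 bytes of memory traffic (read b and c,
   write a), so attainable performance is capped by
   min(peak, arithmetic intensity x bandwidth). */
int main(void)
{
    const double peak_gflops = 1000.0;   /* assumed peak: 1 TFLOP/s     */
    const double bw_gbytes_s = 100.0;    /* assumed bandwidth: 100 GB/s */

    double intensity  = 2.0 / 24.0;               /* flops per byte */
    double attainable = intensity * bw_gbytes_s;  /* GFLOP/s        */
    if (attainable > peak_gflops)
        attainable = peak_gflops;

    printf("attainable: %.1f GFLOP/s (%.1f%% of peak)\n",
           attainable, 100.0 * attainable / peak_gflops);
    return 0;
}

With those assumed numbers the triad cannot exceed roughly eight GFLOP/s, less than one per cent of peak, no matter how fast the arithmetic units become.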


There is a dramatically increasing gap between a computer’s peak performance and what is realised in real science and engineering applications. This is also true of the Linpack benchmark: it is no longer a reasonable proxy for the performance of a computing system on many, if not most, science and engineering applications. The situation is further complicated by the fact that researchers, in an attempt to address the most pressing problems, are developing software that is extremely complex. In fact, program performance is limited by characteristics of the computing system other than CPU speed and the ability of software, such as compilers, to automatically find parallelism and opportunities for reducing latency. For data-intensive applications, I/O performance is paramount. For other applications, such as agent-based modelling, the size of the memory is critical;



