

Widening horizons for high-performance computing


Although the past decade has seen a huge growth in high-performance computing, Tom Wilkie believes there is a lot more still to come


1994–2014


When Scientific Computing World celebrated its tenth anniversary, the pages of the magazine (for it was almost entirely a print-on-paper title in those days) contained very little mention of the emerging field of high-performance computing. How times have changed!

Thomas Sterling and Donald Becker built the first Beowulf cluster at NASA's Goddard Space Flight Center in 1994 – the year in which Scientific Computing World itself was launched. Although the idea was to create a cost-effective alternative to the large supercomputers of those days, the benefits for 'ordinary' scientists and engineers were barely perceptible even a decade ago. But Beowulf heralded a switch to commodity off-the-shelf components that has cut costs over the intervening years to the point where high-performance computing is nowadays within the reach not just of major industrial companies, but of small and medium-sized enterprises as well.

Simulation and optimisation of engineering designs for automotive and aerospace applications are now done almost entirely in silico. Oil exploration companies find that intensive computation to extract as much information as possible from their data cuts the money they have to spend on highly expensive exploration activities out in the field.

As Scientific Computing World celebrates its 20th anniversary, it is now a multimedia publication appearing in print and digital formats. Its website, email newsletters, webcasts, and, yes, magazine pages are filled with stories reflecting the growth in the applicability of high-performance computing outside the narrow confines of the US National Laboratories and defence establishments around the world.

Different groups around the world – usually with national or inter-governmental support – are exploring ways of getting to exascale (10¹⁸ floating point operations per second), which represents a thousand-fold increase over the first petascale computer that came into operation in 2008.
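The 'thousand-fold' figure is simply the ratio of the two SI prefixes (exa = 10¹⁸, peta = 10¹⁵):

\[
\frac{1\,\text{exaflops}}{1\,\text{petaflops}} = \frac{10^{18}\,\text{flops}}{10^{15}\,\text{flops}} = 10^{3}
\]

In peak floating-point terms, then, an exascale machine would match the combined output of a thousand 2008-era petascale systems.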


As Scot Schultz from Mellanox argues in the following pages, there will not be any single dominant microprocessor architecture for next-generation, exascale-class installations; alternative architectures to x86, such as Power and 64-bit ARM, are already in view. This makes the interconnects all the more important, for unless they are fast and efficient, the movement of data will be the limiting factor on the speed of computation.

The variety of architectures may well appear to pose a problem for compiling a program and getting it to run on different machines, but Doug Miles from PGI sees open source as the remedy, enabling proprietary compiler developers to focus on innovating in higher-level optimisations, parallel programming models, and productivity features.

Giampietro Tecchiolli, CTO of Eurotech, argues that the practical case for exascale computing lies less in the building of such a machine itself than in the progress made along the road to exascale, which will enable cheap and cost-effective petascale computing to diffuse yet more widely into science and engineering, with all that this means for improving engineering and design capabilities.

HIGH-PERFORMANCE COMPUTING HAS GROWN BEYOND THE NARROW CONFINES OF NATIONAL LABS AND DEFENCE ESTABLISHMENTS

Mark Seager, chief technology officer for the Technical Computing Ecosystem at Intel, has no doubts. In his view, the single most important truth about high-performance computing over the next decade is that it will have a more profound societal impact with each passing year, in areas such as disease research and medical treatment; climate modelling; energy discovery; nutrition; new product design; and national security.

But computing depends on people, not just technology – talented, creative people, as Altair's Sam Mahalingam stresses. So we conclude this overview of HPC with two contemporary initiatives to attract new people and equip them with the skills they need to drive the next wave of evolution in HPC. Jon Bashor looks at the Student Cluster Challenge initiative, and Paul Messina calls for more initiatives like the Argonne Training Program on Extreme-Scale Computing that he has organised.



