high-performance computing

HPC for the next generation of scientists – but not for beginners

Argonne has pioneered a course to equip researchers with the skills they need for computational science and engineering (CS&E). Its organiser, Paul Messina, calls on more institutions to offer such training.

Today's high-end computers are not easy to use: they have tens of thousands to millions of cores, and the architectures of both the individual processors and the entire system are complex. Achieving top performance on these systems can be quite difficult. Furthermore, every three to five years, HPC architectures change enough that different optimisations or approaches may be needed in order to use the new systems efficiently. But the characteristics of high-end computer

systems are by no means the only source of difficulty. The scientific problems that are tackled on such systems are typically quite complex, and are often at the leading edge of their scientific or engineering domain. Consequently, CS&E projects are usually carried out by teams of several to dozens of researchers with expertise in different aspects of the science, mathematical models, numerical algorithms, performance optimisation, visualisation, and programming models and languages. Developing software with a large team rather than with one or a few collaborators brings its own challenges, and calls for expertise in software engineering, which also needs to be represented in the team. Reflecting on the CS&E landscape described

above, I was motivated to organise the Argonne Training Program on Extreme-Scale Computing (ATPESC) – an intense, two-week programme that covers most of the topics and skills needed to conduct computational science and engineering research on today's and tomorrow's high-end computers. The programme has three goals. First, to provide the participants with in-depth knowledge of several of the topics, especially programming techniques and numerical algorithms that are effective on leading-edge HPC systems. Second, to make them aware of the available software and techniques for all the topics, so that when their research requires a certain skill or software tool, they know where to look for it instead of reinventing the tools or methodologies. And third, through exposure to the trends in HPC architectures and software, to indicate approaches that are likely to provide performance portability over the next decade or more.

SCIENTIFIC COMPUTING WORLD

[Image: The Mira system at Argonne]

Performance portability is important for

applications that often have lifetimes spanning several generations of computer architectures. It is tempting to write software tailored to the platform one is using, exploiting its special


features at all levels of the code. Researchers in the early stages of their careers may not be aware that, by taking that approach, they expose themselves to having to rethink their algorithms and rewrite their software repeatedly. At Argonne National Laboratory, we have

Mira, an IBM Blue Gene/Q system with nearly a million cores; we therefore have first-hand experience of the challenges described above. Mira is the current flagship system of the Argonne Leadership Computing Facility (ALCF), which is funded by the Office of Science of the US Department of Energy. The use of systems like

Mira can enable breakthroughs in science, but using them productively requires significant expertise in computer architectures, parallel programming, algorithms and mathematical software, data management and analysis, debugging and performance-analysis tools and techniques, software engineering, approaches to working in teams on large multi-purpose codes, and so on. Our training programme exposes participants to all of those topics and provides hands-on exercises for experimenting with most of them. The ATPESC was offered in the summer of

2013 for the first time. It was offered again this year, from 3 to 15 August, taking place in suburban Chicago. The 64 participants were selected from 150 applicants; they are doctoral students, postdocs, and computational scientists who have used at least one HPC system for a reasonably complex application and are engaged in, or planning to conduct, computational science and engineering research on large-scale computers. Their research interests span the disciplines that benefit from HPC, such as physics, chemistry, materials science, computational fluid dynamics, climate modelling, and biology. In other words, this is not a programme for

beginners. The strong interest in our programme indicates that we are filling a gap: for example, not many university graduate programmes in the sciences cover software engineering or community codes. Some research laboratories are planning to

offer similar training programmes. We welcome the proliferation, since there is a growing need for knowledgeable computational scientists and engineers as the value of HPC is recognised in many fields.
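The performance-portability point made above can be illustrated with a minimal sketch (illustrative only, not drawn from ATPESC material). A compute kernel written with a portability layer such as OpenMP directives stays correct, plain C that any compiler accepts, while compilers that understand the directive can parallelise and vectorise it for whatever machine is at hand; the same kernel hand-coded with one vendor's SIMD intrinsics would have to be rewritten for each new processor family:

```c
#include <stddef.h>

/* SAXPY kernel (y = a*x + y), written portably. The OpenMP pragma is
   only a hint: a supporting compiler maps the loop onto the current
   hardware's cores and vector units, while a compiler that ignores it
   still produces correct serial code. Hand-written vector intrinsics,
   by contrast, would tie this loop to a single architecture. */
void saxpy(size_t n, float a, const float *x, float *y) {
    #pragma omp parallel for
    for (long i = 0; i < (long)n; ++i)
        y[i] = a * x[i] + y[i];
}
```

The design point is that the platform-specific decisions (thread count, vector width) are left to the compiler and runtime rather than baked into the source, which is one of the approaches to portability the programme covers.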

Paul Messina is the director of science for the Argonne Leadership Computing Facility at Argonne National Laboratory.

@scwmagazine
