towards exascale


Director of the National Center for Supercomputing Applications (NCSA), Thom Dunning is the principal investigator for the Blue Waters 10-petaflop supercomputer


‘Blue Waters, with hundreds of thousands of computer cores, hundreds of Tbytes of memory, tens of thousands of disk drives, and hundreds of Pbytes of archival storage, will be an extraordinary new computing resource for science and engineering research. It’s very clear, however, that we are not going to get to exascale computing using the type of technology that today’s petascale systems are built on. The Chinese and Japanese have been doing some interesting work with GPUs, and NCSA actually has a small CPU-GPU system in production, but if you look at the mainstream petascale systems, they are all built using multicore chips. Blue Waters is built using eight-core Power7 chips from IBM. But we cannot reach exascale computing using multicore technology; we will need to use many-core chips with hundreds to thousands of cores per chip in these computers. So there will be a fundamental change in technology as we proceed to exascale.

‘This change in technology has a number of ramifications. One major concern is the availability of software for these truly massively parallel computers, both the computing system software and the scientific and engineering applications. There is a significant amount of work to be done to develop new software and algorithms – all of which is required in order for us to reach the exascale. And the development of new software always takes longer than expected (some of this effort is research, not development). The target is to have exascale systems deployed and operational before 2020, but, with the software challenge in mind, that has to be regarded as a “soft” target.’


The world’s most powerful supercomputer is currently the Chinese Tianhe-1A system at the National Supercomputer Center in Tianjin, achieving a performance level of 2.57 Pflops (quadrillions of calculations per second).


Stanley C. Ahalt, director of the Renaissance Computing Institute (RENCI) at the University of North Carolina


‘I have been convinced by my colleagues in the vendor community that reaching exascale within the next 10 years is not only feasible, but desirable. There are several technology hurdles that will have to be overcome, and the issue of power is going to be critically important, not only in terms of how much is consumed, but also how it is dispersed across the entire architecture. In some cases the question also arises of how we are going to dissipate that power from very specific locations in the machine in order to keep it sufficiently cooled. A number of vendor groups are working on resolving these types of problems and, among all the possible solutions that are being pursued, one or more will be successful.

‘While I respect the fact that exascale computing is likely to be a very expensive undertaking, I believe the rewards we will reap as a consequence of our efforts will be worth it, not only because we will reach an exascale-class computer, but also because of what we will learn along the way that will spin off other advantages into a vast array of cyber infrastructure technology.’


Mike Bernhardt, founder and senior investigative reporter, The Exascale Report


‘I believe the push for bragging rights with the first exascale system will accelerate the achievement of this capability, bringing us this milestone by 2018 or 2019. Dozens of government organisations and research groups around the globe are focused on that calendar goal. For fear of being outdone, the various countries driving this research will escalate their efforts as needed, but unfortunately I do think this first milestone will be somewhat meaningless. It will be a benchmark – and not much more.

‘The real question is: when will we see a system that delivers exascale-level performance to a critical application? It is far too early to make any credible predictions in this area, but understanding the complexity of the software challenge, sorting through the many different opinions on how to approach it, and also considering how unlikely it is that steady, continued government funding will carry across election years, we are far more likely to see a lag of several years between the first exascale benchmarks and the first critical application to actually take advantage of such a system. This could all change, of course, based on the first and foremost challenge: funding.

‘Scientists will also have to adapt to a new way of thinking – a new way of looking at problems. They will have to learn to think beyond the resource limitations they have become accustomed to. Much larger models and simulations will result in a streamlined process of experimentation and analysis, and scientists will be able to operate on much larger data sets. They will be able to tackle problems that simply were beyond our processing capabilities in the past. Exascale will bring us a much faster time to the point of intelligent decision-making.’

