The long road

A level of debate surrounds the question of how and when we might reach exascale computing. Industry experts share their views on the subject.


Bill Feiereisen, senior scientist and software architect in Intel’s exascale architecture pathfinding group


There are many technologies that we are investigating beyond the current state of the art. In aggregate, they will contribute to improving power efficiency beyond its already lean values. Among them are alternative processor architectures that concentrate on extreme parallelism while stripping out any unnecessary functionality; fine-grained power management across the processors, the memory systems and the system architecture itself; and the system software that directs it. But in the spirit of co-design there is also much work on the applications and algorithms themselves. For example, one of the largest uses of power is simply moving data around the machine. This can be minimised by alternative application and algorithm strategies.
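One way to picture such a strategy is cache blocking (tiling), which reorders a computation so that data already fetched into fast memory is reused before being evicted, cutting off-chip traffic and therefore energy. The sketch below is a generic illustration, not code from Intel's pathfinding work; the block size is an arbitrary assumption.

#include <stddef.h>

/* Naive matrix multiply: elements of B are re-fetched from main
 * memory many times, so data traffic grows with the arithmetic. */
void matmul_naive(size_t n, const double *A, const double *B, double *C)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < n; k++)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
}

/* Blocked (tiled) multiply: work on BLOCK x BLOCK sub-matrices so each
 * tile is reused from cache many times before eviction, reducing trips
 * to main memory without changing the arithmetic performed. */
#define BLOCK 64   /* assumed tile size; tuned to the cache in practice */

void matmul_blocked(size_t n, const double *A, const double *B, double *C)
{
    for (size_t i = 0; i < n * n; i++)
        C[i] = 0.0;

    for (size_t ii = 0; ii < n; ii += BLOCK)
        for (size_t kk = 0; kk < n; kk += BLOCK)
            for (size_t jj = 0; jj < n; jj += BLOCK)
                for (size_t i = ii; i < ii + BLOCK && i < n; i++)
                    for (size_t k = kk; k < kk + BLOCK && k < n; k++) {
                        double a = A[i * n + k];
                        for (size_t j = jj; j < jj + BLOCK && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}

The same trade, made at the scale of a whole machine, is what communication-avoiding algorithms aim for: a little extra arithmetic or memory in exchange for far fewer data transfers.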


EXASCALE WILL REQUIRE PARALLELISM IN THE EXTREME

Exascale computers will require parallelism in the extreme, managing as many as one billion computing threads of execution. It is not clear that current programming methods will be able to manage computations of this size. The Tianhe-2 machine in Guangzhou, China, for example, contains more than three million execution cores, and its largest computations will need to manage data communication and movement across this large distributed space. Individual cores are not expected to become more powerful, so the increased performance must come from increased parallelism; by this arithmetic, an exaflops machine will have about a billion cores. Current top computers provide a number of schemes to manage computation and data movement: message passing, threading, as well as capabilities built into some higher-level languages. These capabilities will be severely challenged when required to scale a further 30 times beyond what is already necessary on Tianhe-2, and that is already a challenge on the largest machine on the planet. There is much work in the community to define new programming models and to make them useful by also providing the tools that allow a programmer not only to write code, but also to debug and tune it.

Each of these barriers can be overcome individually, but the greater challenge will be to overcome them together, because they interact. Mastering just one of the challenges will not solve the overall problem. I cannot emphasise enough how important the concept of co-design is. Computing at this scale is one of the most complex multi-disciplinary problems in the engineering and scientific community.
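As a minimal illustration of the first two schemes named above, the hedged sketch below combines message passing (MPI) between distributed memories with threading (OpenMP) within a node to sum a distributed array. It is a generic example, not code from Tianhe-2 or any particular machine, and the local array size is an arbitrary assumption.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

/* Hybrid sketch: MPI ranks own disjoint pieces of the data (message
 * passing across the distributed space), and each rank uses OpenMP
 * threads for parallelism inside its node. */
int main(int argc, char **argv)
{
    int provided, rank, nranks;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const long n_local = 1 << 20;               /* assumed local size */
    double *x = malloc(n_local * sizeof *x);
    for (long i = 0; i < n_local; i++)
        x[i] = 1.0;                             /* placeholder data   */

    /* Threading within the node: shared-memory reduction. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = 0; i < n_local; i++)
        local_sum += x[i];

    /* Message passing between nodes: combine the per-rank results. */
    double global_sum = 0.0;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d  global sum=%.1f\n", nranks, global_sum);

    free(x);
    MPI_Finalize();
    return 0;
}

Real applications layer further levels of parallelism on top of this (vectorisation within a core, accelerators within a node), which is part of why a billion-way computation is so hard to write, debug and tune with today's tools.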


Dave Turek, vice president of advanced computing at IBM


The efficiency of power usage is an important challenge, but the dominating issue revolves around the cost of the system. If one were to try to keep the memory per core on an exascale system on a par with what people are accustomed to today, the cost of the system would likely be viewed as unaffordable. The infrastructure needed to handle the data associated with a system of this size, as well as other subsystems, will explode in cost unless fundamental compromises are made to the overarching design relative to what users are used to. Finally, reliability of the system, in terms of both hardware and software, will prove daunting, as the sheer scale of the components will present observable failure rates outside the bounds of what most users would expect.

Whenever technology is expected to increase by orders of magnitude in performance or capability, the impact of scaling becomes very pronounced and daunting. One cannot simply deploy more of the current standard in order to get to exascale, because cost and space will render the approach untenable. Consequently, invention and innovation are required across all aspects of the system simultaneously.
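The cost and reliability points above can be made concrete with a back-of-envelope calculation. The sketch below uses entirely assumed figures for core count, memory per core, node count and per-node mean time between failures; it is not based on IBM data, and only shows how the totals scale with component count.

#include <stdio.h>

/* Back-of-envelope scaling sketch with assumed inputs (not vendor data). */
int main(void)
{
    /* Assumptions, chosen only for illustration. */
    const double cores           = 1.0e9;   /* ~a billion cores         */
    const double gb_per_core     = 2.0;     /* assumed memory per core  */
    const double nodes           = 1.0e5;   /* assumed node count       */
    const double node_mtbf_years = 10.0;    /* assumed per-node MTBF    */

    /* Holding memory-per-core constant, total memory scales linearly
     * with core count. */
    double total_pb = cores * gb_per_core / 1.0e6;  /* GB -> PB */

    /* With independent failures, the whole system's mean time between
     * failures shrinks roughly as 1/N. */
    double system_mtbf_hours = node_mtbf_years * 24.0 * 365.0 / nodes;

    printf("Total memory at constant GB/core: %.0f PB\n", total_pb);
    printf("System MTBF at %.0e nodes: about %.1f hours\n",
           nodes, system_mtbf_hours);
    return 0;
}

With these assumed numbers the machine would hold exabytes of memory and, under independent failures, see an interruption roughly every hour, which is the sense in which sheer scale changes the design problem.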


All of these barriers can be overcome, but at what cost? One of the real cost issues associated with exascale has to do with the possibility that changes will be required to the programming models used today. This has the potential to put a tangible burden on users in terms of migrating and optimising current codes, as well as the training required for new programming models. Most of the companies pursuing exascale have, therefore, made a real effort to eliminate or minimise the impact on programming models.


ALL OF THESE BARRIERS CAN BE OVERCOME BUT AT WHAT COST?


Looking to the future, someone could achieve exascale today with enough money, space and power, but that would likely be a system of limited utility and shaky reliability at best. I think a practical system (with capabilities that extend beyond just a Flop-centric design) is likely to be anywhere from five to eight years away. There are unlikely to be radical developments that make exascale a fait accompli, but every day there are incremental changes in all aspects of technology that make the ultimate goal feasible.







