towards exascale


would have a significant impact without changing cost or power consumption? We have done something similar in the past, but this is the first time we have taken a real community-based approach. There are some particular algorithms where a little change to the way the floating-point structures are designed could make a large difference.’

Bill Gropp, who leads the Illinois Parallel Computing Institute at the US National Center for Supercomputing Applications, suggests that developers need to be careful not to think simply in terms of the ‘usual suspects’ when it comes to Exascale applications. ‘There are applications running now where it is too late to rewrite them for Exascale, because of the investment that has already been made in parallelising them.’ But the history of computing is full of examples of something unexpected coming out of left field, he says, and in these application areas – bioinformatics, perhaps – it will be worth starting from scratch. But, he stresses, the programming models that come out of each of these areas have to be able to work together: ‘Exascale is going to require a rethinking of algorithms, and programming-language support for that. People should not lose sight of the fact that Exascale really is different.’


Gropp was one of the main architects of the Message Passing Interface (MPI), the standardised and portable message-passing system that has fostered the development of the parallel software industry, and thus of portable and scalable large-scale parallel applications, over the past decade or so. But, he warns: ‘The situation is not as nice as it was when MPI started. One of the things people often forget is that by the time we met to do the MPI standard, there had been several prior attempts at standards and there had been several vendor-specific message-passing systems. There was a lot of experience with what worked, and what the challenges were. The issue was how to eliminate the differences between them.

‘The challenge now is in a couple of different directions, and in those we are not in as good a shape. At Exascale, you might have an MPI process and then you’d have something else inside that process, which may have 10,000 or 100,000 cores to work with. So one would need a richer programming model there. The route forward is going to be having more things that can be combined with MPI. We will have to move away from the “MPI everywhere” model.’ There may need to be a period of ‘creative chaos’ before the features of an Exascale system become clear. But the challenges are being overcome.
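A minimal sketch makes the contrast concrete. Under ‘MPI everywhere’, every core runs its own MPI rank; in the hybrid model Gropp describes, MPI handles only the coarse inter-process level while a second model expresses the parallelism inside each process. The code below is illustrative only – it is not from the article, and OpenMP is just one possible choice of the inner model:

```c
/* Hybrid "MPI + X" sketch: one MPI rank per node, OpenMP threads
 * inside each rank. Compile with e.g.: mpicc -fopenmp hybrid.c -o hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Ask MPI for thread support so OpenMP threads can coexist
       with message passing inside the same process. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local_sum = 0.0;

    /* The "something else inside the process": on-node parallelism
       expressed with threads rather than with more MPI ranks. */
    #pragma omp parallel reduction(+:local_sum)
    {
        local_sum += omp_get_thread_num() + 1.0; /* stand-in for real work */
    }

    /* MPI handles only the coarse, inter-node level. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d global_sum=%f\n", nranks, global_sum);

    MPI_Finalize();
    return 0;
}
```

Requesting MPI_THREAD_FUNNELED declares that threads exist but that only the main thread will make MPI calls; how MPI and the on-node model might interoperate more richly than this is precisely where Gropp sees the open challenge.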


Since he spoke to Scientific Computing World a year ago (Feb/March 2011, page 26), Beckman believes: ‘We’ve had pretty good progress. From a scientist’s perspective, it’s always slower than you hope for. Part of that is the length of time it takes in the machinery [of Government] to get such a large programme launched.’

The US, Japan, and Europe had put significant investments together for Exascale but ‘we’re still in the early stages. Over the past year, one of the most notable changes here in the United States is that the US Congress has become very supportive of Exascale computing. Last November, 24 Senators wrote to President Obama saying that the US needs to invest in Exascale computing, take this seriously, and create a coherent plan for the nation.’ The US Department of Energy will duly submit its plan to Congress on 10 February.

Exascale will happen eventually. The point of the initiatives being taken around the world is to make it happen earlier – to bring it forward by a decade or so. According to Dave Turek, VP of High-Performance Computing at IBM: ‘The ambition to push the limits of computing is always difficult because it transcends what most of the marketplace thinks it needs at any time. So to build Exascale means taking on a lot of risk, because it turns out that when you deploy technology at the extreme for the first time, it will be roughly five to seven years before it becomes widely adopted.’

For commercial companies such as IBM and Cray, which hope to make money out of making computer systems, how can this gap be bridged? It will not be by direct sales of Exascale computers because, towards the end of this decade, such systems will cost in excess of $100M and only a few customers will have that sort of money. In Turek’s view, the circle can be squared if the technological innovations embedded in Exascale can be made more affordable on a shorter timescale and propagated more broadly.

He uses the issue of energy efficiency itself to illustrate the point. ‘The issue of power efficiency only manifests itself at the real extreme. If you are physically building an Exascale system in 2020, you would be looking at anything from 20 to 40 to 50 MW of power, and at US prices today that is somewhere between $20M and $50M a year. But if that architecture were modular and you could decompose it into something smaller, you would see that it would create a highly efficient computer in the Petaflop domain. If the energy costs of an Exascale machine were $20M, then you divide by 1,000 and you get the cost of a Petascale machine with the same energy profile as a Linux cluster today. You’d say that is really affordable at $20k. That is a dramatic step forward that does not manifest itself as a benefit to the very high-end user, but does manifest itself as a benefit to everybody else in the marketplace.’
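Turek’s figures are easy to check with a back-of-the-envelope calculation. The electricity price below is an assumption (roughly the commonly quoted US industrial rate of about $0.11 per kWh, not a figure from the article):

```latex
% Annual energy bill for a 20 MW system at an assumed $0.11/kWh:
%   20 MW x 8,760 h/year = 175,200 MWh/year
\[
  175{,}200\,\mathrm{MWh} \times \$110/\mathrm{MWh} \approx \$19.3\mathrm{M\ per\ year}
\]
% Scaling the modular architecture down by a factor of 1,000
% gives the Petascale slice Turek describes:
\[
  \frac{\$20\mathrm{M\ per\ year}}{1{,}000} = \$20\mathrm{k\ per\ year}
\]
```

The first line lands at the bottom of the $20M–$50M range Turek quotes, which is consistent with his 20 MW starting point.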


The wider commercial benefits are also one of the motivations for the Mont-Blanc project, even though it is receiving funding from the European Commission. According to Jacques Philouze of Allinea: ‘As a European project, it has always been the aim to rally consortia of European vendors to show we can hold our own against our American cousins. It’s an all-star team, including ARM and Bull.’ For Alex Ramirez, this wider agenda is also part of the reason that the project has opted for an unconventional architecture: ‘When you are coming late to a race, you have to take a different path, because you cannot run faster than the others. The Japanese, Chinese, and Americans have been building their own computer technologies for a long time, and we Europeans have been essentially customers for that technology. So if we now want to develop our own Exascale computing technology, we have to try something different.’


As Dave Turek remarks: ‘You can’t extrapolate the path to Exascale unless you have a vision of what computing is likely to be like in 20 years’ time. The real issue is how different that future will be from today.’

Further information:

Argonne National Laboratory: www.anl.gov
Mont-Blanc Project: www.montblanc-project.eu
University of Illinois’ National Center for Supercomputing Applications (NCSA): www.ncsa.illinois.edu
Gnodal: www.gnodal.com
Allinea: www.allinea.com
Cray: www.cray.com
IBM: www-03.ibm.com/systems/deepcomputing

