

four or 10 processors, but once you get past 100 you start to see some of the problems with a naïve algorithm,' said Holmes. 'Having solved those, you encounter a new set of problems at a thousand processors, because different levels of parallelism show up there and different algorithms work less well there than they did. At 10,000 processors you see another set of problems, and so on.'

Mark Bull gave an example of this with MPI libraries. He explained that most MPI libraries are implemented in a way that requires every process to store at least some information about every other process. 'Today people are running MPI programs with a million processes, and that's OK if you want a few bytes of storage for each one of a million processes – that consumes a few megabytes.'

'If we want to go to a hundred million or a billion processes, which is what we are going to have to do to get beyond exascale, then that is a few gigabytes, and that quickly consumes your entire memory. It just gets out of control – we have to deal with that somehow,' stressed Bull.
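To put rough numbers on that argument, the short C sketch below works through the arithmetic Bull describes. The eight bytes of per-peer state is an illustrative assumption, not a figure for any particular MPI implementation:

/* Illustrative only: per-process memory needed if every MPI process
 * keeps a small, fixed amount of state about every other process.
 * The 8-byte per-peer figure is an assumption for the sake of the
 * arithmetic, not a property of any real MPI library. */
#include <stdio.h>

int main(void)
{
    const double bytes_per_peer = 8.0;            /* assumed per-peer state */
    const double nprocs[] = { 1e6, 1e8, 1e9 };    /* process counts to test */

    for (int i = 0; i < 3; i++) {
        /* memory held by ONE process to keep track of all of its peers */
        double total_bytes = nprocs[i] * bytes_per_peer;
        printf("%.0e processes -> %8.1f MB of peer state per process\n",
               nprocs[i], total_bytes / 1e6);
    }
    return 0;
}

At a million processes this is a few megabytes per process, as Bull says; at a billion it is several gigabytes per process, at which point the bookkeeping alone threatens to exhaust the node's memory.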


European initiatives

The EPCC is running two projects that are aimed at solving some of the challenges of exascale computing. The first project, Epigram, which recently finished after three years of research, was an EC-funded FP7 project on exascale computing with the aim of preparing message-passing and PGAS programming models for exascale systems by fundamentally addressing their main current limitations. The concept is to introduce new, disruptive concepts to fill the technological gap between petascale and exascale programming models in two ways. First, innovative algorithms are used in both message passing (MP) and the partitioned global address space (PGAS) model, specifically to provide fast collective communication in both MP and PGAS, to decrease memory consumption in MP, to enable fast synchronisation in PGAS, to provide fault-tolerance mechanisms in PGAS, and to explore potential strategies for fault tolerance in MP. Second, the project aims to combine the best features of MP and PGAS by developing an MP interface that uses a PGAS library as its communication substrate. The idea is to use PGAS to overcome some of the shortcomings and overheads associated with the use of MPI for HPC message passing.


Bull highlighted that PGAS and MPI are similar in intent but use different ways of addressing memory across a system: 'They have the same sort of flavour, because they support some global address space, which means that any process can read and write memory locations everywhere. This gives you the essential remote read and remote write functionality.'

Bull explained that the difference is that MPI starts with a view of the machine as lots of separate bits of distributed memory: 'Rather than there being partitions of a global address space, there are lots of separate address spaces. You send messages in between those address spaces rather than reading and writing directly to them.'




'If you don't know, when you write the program, where the data that you want is going to be at any one time, then coding both sides of a two-sided message is difficult. You want to know where the data is going to be ahead of time.'
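To make 'coding both sides' concrete, the minimal sketch below shows the standard two-sided MPI pattern: the transfer only happens because the sending rank calls MPI_Send and the receiving rank posts a matching MPI_Recv, so both ends must be written with prior knowledge of where the data lives and when it moves. It is a generic illustration, not code from either EPCC project:

/* Two-sided message passing: both ranks take part in the transfer,
 * so each side of the exchange has to be coded explicitly. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double value = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 3.14;
        /* Sender side: rank 0 has to know that rank 1 wants this value. */
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receiver side: rank 1 has to post a matching receive. */
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %f\n", value);
    }

    MPI_Finalize();
    return 0;
}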


This also relates to the issue of memory usage that Holmes referred to earlier, as each process requires information about every other process. This two-sided message passing could become a hindrance in the future, so the EPCC is exploring other methods of communication, such as PGAS.


'The big advantage of the PGAS, or single-sided, approach is that you can essentially do data-dependent accesses anywhere in the machine,' said Bull. 'There are certain applications that can benefit from that, or at least it makes them more tractable to implement.'
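By way of contrast, the sketch below uses MPI's one-sided (RMA) interface, the closest standard-MPI analogue of the single-sided access Bull describes: rank 0 reads rank 1's memory with MPI_Get, and rank 1 does nothing beyond exposing a memory window. Again, this is a generic sketch rather than code from Epigram, which worked with dedicated PGAS models:

/* One-sided (PGAS-style) access with MPI RMA: rank 0 fetches a value
 * directly from rank 1's exposed memory window; rank 1 does not post
 * any matching call for this particular transfer. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double local = 0.0, remote = 0.0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1)
        local = 2.718;    /* data that other ranks may fetch */

    /* Every rank exposes one double to the rest of the machine. */
    MPI_Win_create(&local, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0) {
        /* Data-dependent remote read: rank 0 decides at runtime to
         * read from rank 1, with no send/receive pair to match up. */
        MPI_Get(&remote, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 0)
        printf("rank 0 fetched %f from rank 1\n", remote);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}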


The second project, which takes the lessons learned from Epigram and other projects, is Programming Model INTERoperability ToWards Exascale (Intertwine). It investigates the use of APIs, with a particular focus on interoperability between the different programming frameworks used in HPC. The project covers MPI, GASPI, OpenMP, OmpSs, StarPU and PaRSEC, each of which has a project partner with extensive experience in API design and implementation.

'The difference is we have expanded that to include not just MPI and PGAS, but to include some of the node-level programming models, such as OpenMP, and also some of the newer runtime programming models, like OmpSs, for example,' said Bull.
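The sort of combination Intertwine is concerned with can be as simple as the long-established hybrid MPI + OpenMP pattern sketched below, where message passing handles communication between nodes while a node-level model handles the threading within each node; the project's focus is on how well such combinations interoperate. The code is a generic illustration, not taken from Intertwine:

/* Hybrid sketch: MPI between nodes, OpenMP threads within each node. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    double local_sum = 0.0, global_sum = 0.0;

    /* Request an MPI threading level compatible with OpenMP regions. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Node-level parallelism: OpenMP threads share this rank's work. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < 1000; i++)
        local_sum += (double)i;

    /* Inter-node parallelism: MPI combines the per-rank results. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}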


For many HPC programmers, it is clear that no single programming framework will be the 'silver bullet' that solves the challenges of exascale computing. While there are some projects investigating this, Bull and Holmes think that even if a silver-bullet programming model could be created, it would take potentially years of work to port applications and to make sure that the framework worked effectively across a wide range of supercomputing platforms. Instead, Intertwine has taken the view of using the current tools but adapting them to better suit exascale software development.

Holmes stated: 'It would be lovely to have a plan-A solution of "let's throw the whole lot of what we currently do in the bin and use this magic silver-bullet programming model, which is easy to use and translates into really efficient code".

'This thing doesn't exist, so we need a plan B until that thing exists, and Epigram and Intertwine are focusing on this plan B,' he added. 'Let's modify the things we have got and take an incremental approach – tweaking these things as we go, so we can use a combination that makes sense for a particular machine and a particular application.'


However, the EPCC and Europe are not the only organisations trying to solve this challenge. The US and Asia are working on their own projects, such as Legion, which is being developed in the US at Stanford with funding from the US Department of Energy's ExaCT Combustion Co-Design Center and the Scientific Data Management, Analysis and Visualization (SDMAV) programme.

Mark Bull admitted that nobody knows quite which model will win out in the end: 'As an industry, it is time to hedge our bets with programming, just as we have seen with recent deployments at the BSC, where supercomputers are adopting various next-gen technologies, as it is unclear exactly what exascale programming might look like.'

However, a solution must be found, as Holmes expects that the level of parallelism will continue to rise in the future – and this will be true even for HPC programmers who are not looking at exascale computing. 'In 10 years' time, every machine is going to need this level of programming. Unless we can find some silver bullet, every programmer is going to have to deal with these issues day to day,' he concluded.



