high-performance computing
lecturer in computational mechanics at the University of Manchester, the ISVs see their market as desktops and workstations, not HPC, let alone exascale. Reporting on the results of a survey of 250 companies conducted late last year for NAFEMS, the international organisation set up to foster finite element analysis, he told the meeting that ISVs were moving towards supporting accelerators such as Nvidia GPUs and the Intel Xeon Phi coprocessor. However, the idea of rewriting their code to port it to FPGAs or ARM processors – or to cater for features important to exascale such as fault tolerance and energy-awareness – was, for many ISVs, ‘not on their roadmap. Some vendors don’t know what it is,’ he said.

In this, the ISVs are simply following the market, according to Margetts: the survey revealed that the standard workstation and laptop are where most engineers do most of their work. The ISVs are not going to deliver exascale software, he continued, and hence exascale engineering software needs to be open source.

In Parsons’ view: ‘Hardware is leaving software behind, which is leaving algorithms behind. Algorithms have changed only incrementally over the past 20 years. Software parallelism is a core challenge. We need to see codes moving forward.’
Insatiable appetite for exascale
Despite these issues, it became very clear that there is an almost insatiable appetite for computing power, and that some commercial users and many not-for-profit researchers are eagerly awaiting the advent of exascale machines. According to Eric Chaput’s presentation to a satellite meeting reviewing European exascale projects, the ultimate goal of Airbus is to simulate an entire aircraft on computer. Chaput is senior manager of flight-physics methods and tools at Airbus, and his remarks prompted one audience member to observe that even exascale would not be enough for Airbus; rather, it needed zettascale or beyond. The requirement for ever more computing
resources in Europe was powerfully reiterated by Sylvie Joussaume in the course of the panel discussion that concluded the conference. This time, the emphasis was on research for the public good rather than commercial benefit. Dr Joussaume, the chair of Prace’s Scientific Steering Committee in 2015, is a senior researcher within the French CNRS and an expert in climate modelling. She stressed that European climate researchers needed access to the next generation of the most powerful machines if they were to maintain their expertise in the subject. The minds of many delegates had been concentrated by the US Department of
[Photo caption: Mateo Valero, director of the Barcelona Supercomputing Centre, discussed energy-efficient architectures]
Energy’s announcement that three next-generation systems are to be built as part of the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (Coral) by two consortia, headed by IBM and Intel respectively, at a cost of more than $425 million. This Coral procurement is intended to develop supercomputers that will leapfrog the international competition and open the way to exascale machines.

However, Europe is a loose collaboration
of individual nation states and the European Commission is not permitted a budget that would allow it to place heavily subsidised contracts with a European supercomputer vendor in the same way as the US Department
of Energy appears to be free to do for US high-tech companies. Nonetheless, Augusto Burgueño Arjona, head of the e-infrastructure unit at the Commission, told the meeting that a high-performance computing strategy was an essential building block in meeting the aims of Europe’s Digital Single Market. There was a need to develop an infrastructure for innovation combining cloud, HPC, and big data and, he assured his audience, Prace was seen as a fundamental part of that.
Prace – the next five years
The European Commission’s policy will face its first significant test later this summer. Technically, Prace, the Partnership for Advanced Computing in Europe, is coming to the end of its first phase. It has allowed
researchers access to some of the most advanced computing resources in the world, even if they happen to live and work in countries that have only very limited national computing facilities. Currently, four hosting countries – Spain, France, Germany, and Italy – grant researchers from other European member countries run time on their premier national facilities. Although everyone agrees that Prace has been
a scientific success, some of the hosting nations feel that the current funding arrangements have been a little unfair. For the next phase, it was expected that the European Commission would provide funds to reimburse the hosting nations for at least some of the operational expenditure for Prace jobs running on the national machines. But it appears that this may not happen, so reimbursement may have to be arranged at the level of the Prace membership rather than the Commission. Funding the next phase of the project will therefore be tricky, especially if the arrangements are to be transparent to all.

Sanzio Bassini, chair of the Prace council,
was asked about the transition to Prace 2.0. He acknowledged that ‘one of the most pressing issues is the participation of all members of the association in the operational costs of the infrastructure, and the sustainability of the model moving forwards.’ He raised several options, such as basing
contributions to the Prace infrastructure on each country’s GDP; another option would be to subsidise these HPC resources by selling a certain portion of the computational cycles to industry. Alternatively, these strategies, or some combination of them, could be employed in tandem, reducing the burden on the participating countries that provide the computational muscle to Prace.
JUNE/JULY 2015 17
[Photo credit: Jason Clarke Photography]