HIGH PERFORMANCE COMPUTING


EPEEC European programming


ROBERT ROE TALKS TO ANTONIO PEÑA, EPEEC PROJECT COORDINATOR AND SENIOR RESEARCHER FOR THE BARCELONA SUPERCOMPUTING CENTER, ABOUT PROGRESS ON A EUROPEAN PROGRAMMING FRAMEWORK FOR HPC


European researchers are working on a new programming environment for exascale computing that aims to make these systems more efficient to use and programmers more productive. The European Programming Environment for Programming Productivity of Heterogeneous Supercomputers (EPEEC) project aims to combine European-made programming models and performance tools to relieve the burden of targeting highly-heterogeneous supercomputers. It is hoped that the project will make researchers’ jobs easier, allowing them to use large-scale HPC systems more effectively.

As the high-performance computing (HPC) community works tirelessly to address the heterogeneity of upcoming exascale computers, users must be able to make effective use of different processing-unit architectures such as GPUs and FPGAs, of heterogeneous memory systems including non-volatile and persistent memories, and of the network.


The EPEEC project aims to overcome this challenge. It has compiled five software components developed by European research institutions and companies, and is fine-tuning and extending them for its programming environment. All of these tools already exist and are known to the HPC community but, by integrating them and developing new features, EPEEC researchers hope to deliver a stable and productive platform for exascale computing.

EPEEC project coordinator and senior researcher for the Barcelona Supercomputing Center (BSC), Antonio Peña, explains that he and his colleagues saw a gap in the market for tools that could support researchers trying to make use of large HPC systems.




‘We found that in Europe there were a lot of pieces of technology that could be used together as a programming environment for exascale but, first, they needed some improvements, and work was needed to make them play better together.’


Optimising exascale

‘All of the programming environment components already existed, so apart from the integration we are adding features in the components themselves. For example, in OmpSs we are adding the capability to combine it with OpenACC and also with OpenMP offloading, so that we get higher programming productivity when using heterogeneous systems like GPUs,’ added Peña.

Directive-based offloading will help domain scientists to better exploit parallelism in their code without the need for significant experience in parallel programming languages. ‘Offloading is needed to use devices like accelerators, so you can offload to those devices the parts of the code that are more suitable to be executed in those architectures, and leave the CPU to do what it does best. This allows you to efficiently use all of the hardware resources that are available,’ added Peña.
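
The sketch below gives a rough sense of what directive-based offloading looks like in practice. It uses standard OpenMP target directives rather than EPEEC- or OmpSs-specific syntax, and the array names and sizes are purely illustrative.

/* A minimal sketch of directive-based offloading, assuming an
 * OpenMP 4.5+ compiler with device offload support. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    /* Offload the loop to an accelerator: a and b are copied to the
     * device, the iterations run there, and c is copied back when
     * the target region completes. */
    #pragma omp target teams distribute parallel for \
            map(to: a, b) map(from: c)
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %f\n", c[0]);
    return 0;
}

If no accelerator is available, such a target region typically falls back to running on the host, so the same source stays portable across systems.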


The project is integrating and improving five components used in HPC today: OmpSs, GASPI, Parallelware, ArgoDSM and existing performance tools from the BSC. While the EPEEC project is building on these software components, each has its own place in the HPC software stack.

OmpSs is a shared-memory programming model based on directives and tasking, and a forerunner of initiatives for the OpenMP standard. In the context of EPEEC, OmpSs is being developed to include accelerator directives and tasking within accelerators, seeking to make an impact on the OpenMP and OpenACC standardisation bodies. If successful, this could lead to improved programming-model composability.
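
To give a flavour of the directive-and-tasking style described above, the sketch below uses standard OpenMP task pragmas with data dependences; OmpSs provides its own closely related directives, and the helper functions and variable names here are invented for the example.

/* A minimal sketch of directive-based tasking with data dependences,
 * written against standard OpenMP; the helper functions are placeholders. */
#include <stdio.h>

static void produce(int *x) { *x = 42; }
static void consume(const int *x) { printf("got %d\n", *x); }

int main(void)
{
    int value = 0;

    #pragma omp parallel
    #pragma omp single
    {
        /* The runtime orders these tasks from their declared dependences:
         * the consumer task runs only after the producer has finished. */
        #pragma omp task depend(out: value)
        produce(&value);

        #pragma omp task depend(in: value)
        consume(&value);

        #pragma omp taskwait
    }
    return 0;
}

The appeal of this style, as with OmpSs, is that the programmer only declares what each task reads and writes; the runtime extracts the parallelism and enforces the ordering.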


Additionally, the development of a distributed shared-memory backend on top of the ArgoDSM middleware, to seamlessly distribute OmpSs workloads across compute nodes, will ease cluster deployments while optimising performance with respect to state-of-the-art solutions.

GASPI is a distributed-memory programming model that is established as the European lead on one-sided inter-node communication.

