HIGH PERFORMANCE COMPUTING


EPEEC project minicluster

…communication. Developments in EPEEC include automatic data compression and eventually-consistent collectives, along with enhanced interoperability with runtime systems implementing intranode programming models. OmpSs+GASPI programming will offer an alternative to the ArgoDSM backend for developers seeking large-scale deployments.

Parallelware is a static analysis tool that helps users get started on porting their serial codes to parallel computing by providing hints for directive annotation in OmpSs, OpenMP and OpenACC. ArgoDSM is a modern page-based distributed shared virtual-memory system that operates in userspace. First released in 2016, it builds on recent advances in cache-coherence protocols and synchronisation algorithms at Uppsala University.

The project is also testing the new programming environment by running existing large-scale HPC applications: AVBP, DIOGENeS, OSIRIS, Quantum ESPRESSO and SMURFF. Coming from different fields and covering compute-intensive, data-intensive and extreme-data scenarios, these applications serve as co-design partners and will be used to provide pre-exascale demonstrations of the programming model in action. ‘We have five big applications that we aim to run and adapt, big applications which have plenty of tradition behind them. It is a challenge to run those old and complex codes in a new environment, but it is also very valuable to us because they help us to better understand their needs,’ notes Peña.
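As a rough illustration of the kind of hint described here (not code from the project), the sketch below shows a serial loop together with the sort of OpenMP annotation a tool such as Parallelware might suggest; the function and variable names are invented for the example.

```c
/* Serial SAXPY-style loop: y = a*x + y. A static analysis tool can
 * detect that the iterations are independent and suggest the
 * directive below to run them in parallel on the CPU cores. */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        y[i] = a * x[i] + y[i];
    }
}
```

The article notes the same tool also proposes OmpSs and OpenACC annotations, which take a similar directive form over the unchanged serial loop.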




‘The main goal is to achieve something that is production-ready. We aim at this because we start from components that are themselves already production-ready. So far we are testing the new features, which are pretty advanced. We are about to publicly release the midterm prototype and, since we are about to release most of the features that we are developing, I am confident that in a year and a half we will achieve our goal of production-ready code,’ said Peña.

‘In Parallelware we will have further support for tasking in OmpSs and for directive-based programming with accelerators. For GASPI we will have automatic compression and also eventually-consistent data types. We are integrating the OmpSs@cluster version on top of ArgoDSM. This will make distributing tasks across a cluster more efficient, and it will also give us better code maintainability,’ stated Peña.
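The tasking model mentioned here can be illustrated with standard OpenMP task directives, whose tasking support grew out of the OmpSs line of work; the sketch below is a hypothetical example, not EPEEC code, and the kernel functions are invented stand-ins.

```c
#include <stdio.h>

/* Toy stand-ins for real application kernels. */
static void assemble(double *block) { block[0] += 1.0; }
static void combine(const double *a, const double *b) { printf("%f\n", a[0] + b[0]); }

void pipeline(double *a, double *b)
{
    #pragma omp parallel
    #pragma omp single
    {
        /* Each task declares what it writes or reads; the runtime
         * derives the execution order from these dependences instead
         * of explicit synchronisation. */
        #pragma omp task depend(out: a[0])
        assemble(a);

        #pragma omp task depend(out: b[0])
        assemble(b);

        /* Runs only after both assemble tasks have completed. */
        #pragma omp task depend(in: a[0], b[0])
        combine(a, b);

        #pragma omp taskwait
    }
}
```

OmpSs expresses the same idea with its own task directives, and OmpSs@cluster reuses that dependence information to decide which node, not just which core, runs each task.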


Lowering the barrier to HPC

Peña stressed that many of the improvements developed in this project are aimed at domain scientists who do not have large amounts of experience with HPC programming frameworks. Parallelware, for example, is aimed at helping users express parallelism in their code, allowing domain scientists to use accelerators efficiently. ‘We use Parallelware as a starting point for application developers to help them parallelise and use accelerators based on directives. It will help users to start annotating their code with directives, both to use CPUs and GPUs/FPGAs. That is an optional starting point, but the entire project is aimed at improving coding productivity and that is the perfect starting point for such a project,’ stated Peña. ‘It is aimed more towards people that do not have experience writing parallel code.’
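To make the directive-based use of accelerators concrete, the following is a minimal, hypothetical OpenACC sketch of the same style of annotation applied to GPU offload; it is illustrative only, and the names and data sizes are invented.

```c
/* Offload a vector update to an attached accelerator such as a GPU.
 * The copyin/copy clauses tell the compiler which data to move to
 * and from device memory. */
void scale_add(int n, float a, const float *restrict x, float *restrict y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i) {
        y[i] = a * x[i] + y[i];
    }
}
```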


‘All of the programming environment already existed so apart from the integration we are adding features in the components themselves’


‘This project is aimed at the majority of scientists that may not have the passion or the time to understand low-level [coding] and who want to get their science onto the systems as fast as possible. They do not care if they lose some percentage of the machine; they want to get their code running on the system as soon as possible with more than reasonable efficiency,’ stated Peña.

The integration of ArgoDSM with the EPEEC framework will work differently from some other components, as it will not be directly exposed to application developers; instead, the software will sit underneath OmpSs. ‘In OmpSs we have two flavours, both of which use more or less the same interface with the user. One is more or less like OpenMP tasking, but we have a version called OmpSs@cluster in which the different tasks the user creates are automatically distributed across the cluster nodes. That is also meant to increase productivity for those developers who do not want to mess with OpenMP or OpenMP + MPI. They can just code “shared memory” and then let the runtime distribute on distributed memory,’ stated Peña.

The ArgoDSM system provides a shared-memory view on top of a distributed-memory system. The EPEEC researchers are developing an OmpSs@cluster version that is currently being used with MPI but will eventually be used directly with ArgoDSM. ‘This is a very efficient component that does smart data movements across compute nodes, and it is much simpler to code than using MPI because it exposes a shared memory system and does the distributing for us,’ added Peña.
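As a hedged sketch of what ‘code shared memory and let the runtime distribute’ means, the example below creates one task per block of a single, logically shared array; it uses standard OpenMP task syntax as a stand-in for the analogous OmpSs@cluster annotations, and the array and block sizes are invented. On one node an OpenMP runtime spreads these tasks over cores; the point of OmpSs@cluster over ArgoDSM is that the same style of code can be spread over cluster nodes without the programmer managing data placement.

```c
#include <stdlib.h>

#define N  (1 << 20)   /* array length (invented for the example) */
#define BS (1 << 14)   /* block size   (invented for the example) */

/* Process one block of the shared array. */
static void process_block(double *block, int len)
{
    for (int i = 0; i < len; ++i)
        block[i] = block[i] * 0.5 + 1.0;
}

int main(void)
{
    double *data = calloc(N, sizeof *data);   /* one logically shared array */

    #pragma omp parallel
    #pragma omp single
    for (int b = 0; b < N; b += BS) {
        /* One task per block; the code never says where a block runs. */
        #pragma omp task firstprivate(b) depend(inout: data[b])
        process_block(&data[b], BS);
    }
    /* The implicit barrier at the end of the parallel region waits
     * for all outstanding tasks before the program continues. */

    free(data);
    return 0;
}
```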


With exascale computing on the horizon, there are a number of challenges, not least the physical challenge of building an exascale system within the given 20 MW power budget. Beyond the creation of exascale supercomputers, however, it is clear that there is still some way to go before these systems can be used efficiently. If this project is successful, it will go a long way towards increasing the productivity of HPC in Europe.



