HIGH PERFORMANCE COMPUTING


that we should form a consortium with some other countries and invest in a Tier-0 system jointly. We had a northern European discussion going on, but that stalled due to political events... but now if you look at EuroHPC, that is actually the model they have followed. The first two pre-exascale systems will both be funded by consortia of countries that have put money in, alongside additional funding from the European Commission. I have been leading a team over the last six months that has been writing a business case, or options analysis, for what we might do next.

If you look at what is happening in Japan, the US and, to a lesser extent, Europe, the development of these huge systems and their implementation as services are coupled with big software programmes. I sit on the Gordon Bell Committee, and what has been fantastic to see with Summit, the 200-petaflop system from the US that came online last year, is the high quality of papers coming from people who have been using the entire system to do science that was impossible before.


But that has only happened because the US has invested in an exascale software programme, while also beginning investments in exascale hardware. If you are going to buy one of these systems, you need to couple it with a big software programme or you will not see the benefits.

The challenge with these very large computers is that the codes we run today on a system like ARCHER do not scale properly on an exascale system. If you consider what an exascale system might look like, you are probably talking about 6,000,000 cores; on ARCHER today we have around 150,000, and on the largest systems today you are probably talking about something between 700,000 and 1,000,000 cores. Getting to these very high core counts is very technically difficult from a computer science perspective. That is what the programmes are focusing on in the US, Japan and Europe.
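To put those core counts in perspective, a standard strong-scaling bound (Amdahl's law, offered here as a general back-of-envelope illustration rather than a figure quoted in the interview) makes the problem concrete: if a fraction $s$ of a code's runtime is serial, the best possible speedup on $P$ cores is

$$S(P) = \frac{1}{\,s + (1-s)/P\,}.$$

Even with only 0.01 per cent of the work serial ($s = 10^{-4}$), the speedup is roughly 9,400 on 150,000 cores and still only about 10,000 on 6,000,000 cores: forty times more hardware buys almost nothing unless the serial and communication bottlenecks in the algorithms themselves are removed.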


What is the UK doing to prepare for these larger systems?

Over the last 20 years we have been living in a golden age of simplicity. It didn’t feel like it at the time, but it was much easier. Many codes today are struggling to get beyond 20,000 cores, and yet we are going to build computers over the next few years that will have five or six million cores. There is an enormous amount to be done, not just to make existing algorithms go faster, but to go back to the drawing board. Algorithms that were designed in the 80s or 90s are no longer fit for purpose; we need to rethink how we are modelling.

“Can we partner with another country, or should the UK think about investing in a large Tier-0 system?”

The Advanced Simulation and Modelling of Virtual Systems project is an example of this. Rolls Royce wants to do the first high-resolution simulation of a jet engine in operation. They want to model the structure, fluid dynamics, combustion, electromagnetics, the whole thing, so that over the next decade they can go to virtual certification. But in order to do that, we need to rethink how we model different parts of the engine.


The models we have today scale to, in some cases, a few thousand cores or less, so we have to rethink the mathematics, the algorithms and what it means to model in the face of very high levels of parallelism. It is a fascinating topic, but it is also extremely hard. But if we get it right, it will be the most critical change to modelling and simulation in the last 30 years.





