earthquake, including non-linear frictional failure on a megathrust-splay fault system. Last fall, the HLRS also helped Ansys scale the Ansys Fluent CFD tool to more than 172,000 computer cores on the HLRS supercomputer Hazel Hen, a Cray XC40 system, making it one of the fastest industrial applications ever run.

Dr Wim Slagter, director of HPC and Cloud Alliances at Ansys, explained: ‘To overcome large-scale simulation challenges, we established a multi-year program with HLRS and worked on improving the performance and scalability of our CFD solvers. Apart from improving the transient LES (Large Eddy Simulation) solver, we focused on a variety of aspects, including the enhancement of our advanced multiphysics load balancing method, the optimisation of file read/write operations and the improvement of fault tolerance for large-scale runs.

‘We started by further optimising the linear CFD solver, predominantly the AMG (Algebraic MultiGrid) portion of it. We optimised the partition and load balance capabilities to enable good balancing at very high core counts. To enhance the simulation throughput, we developed better reordering algorithms for improved memory usage, and we enhanced the transient combustion convergence speed. We also improved the parallel IO capability and developed better data compression strategies. Because of these very high core counts – up to 172,000 CPU cores – parallel solver robustness was obviously crucial here; we wanted to have a robust solver that can be “fired and forgotten”,’ Slagter added.
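To see why partition balance matters so much at this scale, note that a parallel solver’s synchronisation points all wait for the most heavily loaded rank, so the ratio of the maximum to the mean partition load effectively caps parallel efficiency. The short Python sketch below illustrates that metric with invented numbers; it is not Ansys or Fluent code.

```python
# Minimal sketch (not Ansys/Fluent code): a load-imbalance metric for a
# partitioned mesh. All numbers below are invented for illustration.

def imbalance_factor(loads):
    """Return max(load) / mean(load) for per-partition work estimates."""
    mean_load = sum(loads) / len(loads)
    return max(loads) / mean_load

# Hypothetical case: 172,000 partitions with ~1,000 cells each, where the
# heaviest partition carries 15 per cent more cells than the average.
loads = [1000] * 172_000
loads[0] = 1150

f = imbalance_factor(loads)
print(f"imbalance factor: {f:.3f}")          # roughly 1.150
print(f"efficiency ceiling: {1.0 / f:.0%}")  # roughly 87%
```

A production partitioner weighs far more than raw cell counts, but the ceiling argument is the same: at 172,000 cores, even a few per cent of imbalance leaves the equivalent of thousands of cores idle at every synchronisation point.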


Another major research finding out of Jülich in the last year, which came from the Center for Theoretical Chemistry at Ruhr University Bochum, uncovered previously unknown complexities in the relationship between sulphur atoms’ bindings. These bindings link long molecules together to form proteins and rubber. For example, if you stretch a rubber band again and again, the sulphur bridges will break and the rubber becomes brittle.

This rubber band example is familiar to most people, but a correct interpretation of the experimental data was lacking. However, this research found that the splitting of these bonds between two sulphur atoms in a solvent is more complicated than first assumed.

Pictured: The High Performance Computing Center in Stuttgart, part of the Gauss Centre for Supercomputing

‘Depending on how hard one pulls, the sulphur bridge splits with different reaction mechanisms,’ Dr Dominik Marx, professor and director of the Center for Theoretical Chemistry at the Ruhr University Bochum, explained. In essence, the simulations revealed that more force cannot be translated one to one into a faster reaction. Up to a certain force, the reaction rate increases in proportion to the force. If this threshold is exceeded, greater mechanical forces speed up the reaction to a much lesser extent.
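One way to picture this is as a rate law with a kink: below a threshold force the rate climbs in proportion to the pull, while above it the gain is much smaller. The snippet below is a deliberately crude toy model with invented constants, offered only to make the qualitative behaviour concrete; it is not the reaction kinetics computed in the Bochum simulations.

```python
# Toy illustration only: a piecewise force-dependent reaction rate with a
# threshold. The constants are invented and carry no physical meaning.

F_THRESHOLD = 1.0   # arbitrary force units
K_BELOW = 10.0      # strong force dependence below the threshold
K_ABOVE = 1.0       # much weaker dependence above it

def reaction_rate(force):
    """Rate grows in proportion to force, then far more slowly past F_THRESHOLD."""
    if force <= F_THRESHOLD:
        return K_BELOW * force
    return K_BELOW * F_THRESHOLD + K_ABOVE * (force - F_THRESHOLD)

for f in (0.5, 1.0, 1.5, 2.0):
    print(f"force = {f:.1f}  ->  rate = {reaction_rate(f):.1f}")
# Doubling the force from 0.5 to 1.0 doubles the rate (5.0 -> 10.0);
# doubling it again from 1.0 to 2.0 only lifts the rate from 10.0 to 11.0.
```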


Previous simulation and modelling methods drastically simplified the effects of the surrounding solvent in order to reduce the processing power required. Work done at the Cluster of Excellence RESOLV in Bochum had already uncovered the key role the solvent plays in chemical reactions.

But correctly incorporating the role of the surrounding solvent requires immense computing effort. This computational power was made available to Marx and his international team through a special ‘Large Scale Project’ granted by the GCS on the Jülich Blue Gene/Q platform Juqueen. Without it, the detailed simulations needed to interpret the experimental data on sulphur atom bindings would not have been possible.


There has also been some fascinating extreme scaling work on the Juqueen supercomputer (based at the Jülich Supercomputing Centre) in the last 12 months at the High-Q Club. Dr Dirk Brömmel, senior scientist with the High-Q Club, explained: ‘The club was set up to showcase codes that scale to 450,000+ cores or (ideally) 1.8 million threads. This helps to identify where possible bottlenecks are in future systems, and if a solution is found on Juqueen, many of the codes have also found that their scalability has been transferable to other machines.’
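Those headline figures follow directly from Juqueen’s configuration of 28 Blue Gene/Q racks, each holding 1,024 nodes with 16 cores and four-way hardware multithreading; the arithmetic is easy to check.

```python
# Juqueen (Blue Gene/Q at the Jülich Supercomputing Centre): 28 racks of
# 1,024 nodes, 16 cores per node, four hardware threads per core.
nodes = 28 * 1024
cores = nodes * 16
threads = cores * 4
print(nodes, cores, threads)   # 28672 458752 1835008
```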


For example, the Model for Prediction Across Scales (MPAS) is a collaborative project that develops atmosphere, ocean and other earth-system simulation components for use in climate, regional climate and weather studies. The MPAS-Atmosphere (or MPAS-A) component is a member of the High-Q Club. The primary applications of MPAS-A are in global numerical weather prediction, with a special interest in tropical cyclone prediction, convection-permitting hazardous weather forecasting and regional climate modelling.

The extreme-scale performance of MPAS-A has improved greatly in the last 12 months, as Dominikus Heinzeller, a senior scientist at the Karlsruhe Institute of Technology, explained: ‘Our alternative I/O layer is based on the SIONlib library, developed by Dr Wolfgang Frings and colleagues at Research Centre Jülich, and integrated in the MPAS model framework in a completely transparent way: users can choose at runtime whether to write data in SIONlib format or in the netCDF format that has been traditionally used in MPAS.’
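The ‘completely transparent’ integration Heinzeller describes amounts to hiding the two on-disk formats behind a single writer interface chosen at run time. MPAS itself is written in Fortran, so the Python sketch below only illustrates the pattern; the class names and the io_format option are hypothetical and not taken from the MPAS code base.

```python
# Conceptual sketch of a runtime-selectable I/O backend. The classes and the
# "io_format" option are hypothetical, not MPAS or SIONlib interfaces.
import json

class NetCDFStyleWriter:
    """Stand-in for a conventional shared-file writer such as netCDF."""
    def write(self, path, fields):
        with open(path + ".json", "w") as fh:   # placeholder for a netCDF call
            json.dump(fields, fh)

class SIONStyleWriter:
    """Stand-in for a task-local, block-aligned writer in the spirit of SIONlib."""
    def write(self, path, fields):
        with open(path + ".blk", "wb") as fh:   # placeholder for a SIONlib call
            fh.write(json.dumps(fields).encode())

def make_writer(io_format):
    """The only place the format choice appears; the model core never sees it."""
    return SIONStyleWriter() if io_format == "sionlib" else NetCDFStyleWriter()

# The model calls writer.write(...) identically for either backend.
writer = make_writer(io_format="sionlib")
writer.write("restart_0001", {"theta": [300.0, 301.2], "qv": [0.010, 0.012]})
```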


‘In numbers, we diagnosed a speedup of a factor of 10 to 60 when reading data in the SIONlib format, and a factor of four to 10 when writing data. Combined with a simplified bootstrapping phase, the model initialisation time, as one example, could be reduced by 90 per cent in large-scale applications on the FZJ Juqueen and LRZ SuperMUC supercomputers. This strategically important development work was supported by Michael Duda, a software engineer at NCAR (the National Center for Atmospheric Research), and funded by the Bavarian Competence Network for Technical and Scientific High Performance Computing,’ Heinzeller added.


The Hartree Centre

The Hartree Centre in Daresbury is home to some of the most technically advanced high performance computing, data analytics and machine learning technologies and experts in the UK. Alison Kennedy, director at the Hartree Centre, said: ‘Our goal is to make the transition from pursuits in research that are interesting,

