

and approach have far-reaching consequences for all its users, including those from industry. Resch said HLRS users are 'working with codes that do not benefit from the theoretical peak of accelerators but rely heavily on memory bandwidth. For HLRS, the main motivation is the best solution for our users. That means that we widely ignore peak performance and TOP500 ranking.' Many industrial users face similar problems with memory bandwidth, so their applications would benefit from this architecture just as the academic users do.

'HLRS offers a number of systems in addition to Hazel Hen,' said Resch. 'There is another "workhorse" based on an NEC cluster. The second system evolved over time and helped us to cover workloads that do not require densely coupled systems and extremely fast networks.' HLRS also houses several smaller systems, such as a single-cabinet NEC-ACE system. 'Since we are interested in new technologies, we always have some smaller systems on the floor to test and experiment and learn about new approaches in hardware and software,' commented Resch.


USERS ARRIVING AT THE TOP – WHICH IS HLRS – HAVE A LOT OF EXPERTISE BEFORE THEY EVEN SUBMIT A PROPOSAL


This abundance of supercomputing facilities puts Germany in a distinctive position: it has three world-class supercomputers with varied architectures, housed in three separate HPC centres under the single banner of the Gauss Centre for Supercomputing (GCS). Within GCS are three national supercomputing centres: the High Performance Computing Center Stuttgart (HLRS), the Jülich Supercomputing Centre (JSC), and the Leibniz Supercomputing Centre (LRZ) at Garching, near Munich. Resch described the Gauss Centre as an association created 'to coordinate the activities of the three German national supercomputer centres' as well as to speak with a 'single voice towards Europe, our funding agencies and our users.' 'The three centres are targeting three different user communities that require different architectures,' said Resch.


Meeting expectations

HLRS was the first German national supercomputing centre, officially established in 1996. Resch explained that, from its inception, HLRS has had two key areas in which it takes a leading role: 'On the one hand HLRS takes over the role of Engineering HPC centre within the GCS – and, on the other hand, HLRS takes the lead in industrial usage of HPC within GCS.'

However, Resch stressed that first and foremost HLRS is an academic centre with the clear duty to support public research: 'The focus is on solutions for researchers with a focus on engineering, so the hardware and software we choose typically match the needs of industry. When going into procurement, we consider industrial advice that helps to identify issues that researchers – eager for speed and fast solutions – tend to underrate.'


By choosing varying HPC architectures, each supercomputer is designed to support a different set of user applications. Ultimately, this helps Germany to provide a balanced HPC infrastructure to support as wide an array of users as possible.

A big part of providing a balanced ecosystem is training, to support users and prepare them to use the supercomputing facilities available. 'In more than 20 courses (most of them five-day courses) we train 600 to 800 users every year. We have extended this training to our European customers over the last years as part of our PRACE activities,' explained Resch. He said this number would increase as HLRS opens a new centre to train industrial customers: 'We have built a new training centre that will be operational early in 2017.'

For scientific users, Resch confirmed that HLRS is focusing on a concept he refers to as the 'embedded scientist'. 'This concept includes our scientists in the working groups of our key users.' He explained that HLRS users 'undertake common projects, research work and publications', which allows HLRS 'continuously to support the optimisation process for user software.'


Applications at the heart of HLRS

'About two thirds of the user research at HLRS is focused on computational fluid dynamics in the widest sense. One of the key projects is aeroacoustics simulated by colleagues in Aachen, while a second highlight is a regional climate analysis done by colleagues from the University of Hohenheim.'

The aeroacoustic projects, completed in collaboration between HLRS and Aachen University, were based on the acoustics of a ducted axial fan and a jet engine respectively. Both projects were extremely scalable, using all 92,000 cores of Hazel Hen's predecessor, 'Hornet'.

The researchers on the fan acoustics project conducted a large-scale simulation run to increase accuracy when predicting the acoustic field created by a low-pressure axial fan using computational aeroacoustics (CAA). The goal was to achieve a better understanding of the development of vortical flow structures and the turbulence intensity in the tip-gap of a ducted axial fan. The project was completed in approximately 110 machine hours and produced more than 80 TB of data.

The engine research project focused on large eddy simulations of a helicopter jet engine, aimed at analysing the impact of internal perturbations caused by geometric variations on the flow field and the acoustic field of the engine. The simulation required 300 machine hours to compute Cartesian meshes of up to one billion cells.

FOR HLRS, THE MAIN MOTIVATION IS THE BEST SOLUTION FOR OUR USERS. THAT MEANS THAT WE WIDELY IGNORE PEAK PERFORMANCE AND TOP500 RANKING


In addition to the engineering work, climate research is also a heavy user of HPC resources at HLRS. One particular project, carried out in collaboration with the Institute of Physics and Meteorology at the University of Hohenheim, focused on a highly complex climate simulation covering a period long enough to include various extreme weather events in the northern hemisphere. This project increased the resolution of previous simulations, which in turn allows researchers to predict the weather with a higher degree of certainty, particularly for extreme weather events. The simulation was performed using the Weather Research and Forecasting (WRF) model on 84,000 compute cores of Hornet.

HLRS is at its core an HPC centre committed to supporting its user community. Whether through training and education or procurement, every decision is made to increase the amount or speed of research that can be conducted using HLRS' computing resources. The driving force is application performance, not FLOPS or some benchmark unrelated to the kinds of applications that will run on the system. By applying these methods, together with sufficient training and education, HLRS has created not only a powerful supercomputer but also a community of HPC users who know how to use the latest computing technologies.



