high-performance computing


The combined performance of the two installation phases is more than 6.8 petaflop/s – roughly number 11 on the current Top500 list.


Pursuing energy efficiency

In addition to the application focus which characterises the LRZ user community, the centre also focuses heavily on optimising its HPC resources for energy efficiency. Bode stated: 'We have a specialised computer building that uses different types of cooling – including direct warm water cooling – without any need for compressors or other things that help us to moderate our electricity bill.'

SuperMUC uses a form of warm water cooling developed by IBM. Active components like processors and memory are directly cooled with water that can have an inlet temperature of up to 40 degrees Celsius. This 'high-temperature liquid cooling', together with innovative system software, cuts the energy consumption of the system by up to 40 per cent. These energy savings are increased further as the LRZ heats its buildings with the waste heat and creates chilled water through the use of adsorption machines. By reducing the number of cooling components and using free cooling, LRZ expects to save several million euros in cooling costs over the five-year life of the SuperMUC system.
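As a rough illustration of the arithmetic behind such savings, the short sketch below works through a hypothetical example. The system power draw and electricity price are invented for illustration only; the 'up to 40 per cent' reduction is the figure quoted in the article, and the result is in the same 'several million euros over five years' range that LRZ describes.

```python
# Back-of-envelope estimate of energy-cost savings from warm water cooling.
# All inputs are hypothetical; only the 'up to 40 per cent' figure comes from the article.

SYSTEM_POWER_MW = 3.0          # assumed average power draw of the machine (hypothetical)
ELECTRICITY_EUR_PER_MWH = 150  # assumed electricity price (hypothetical)
HOURS_PER_YEAR = 8760
ENERGY_REDUCTION = 0.40        # 'up to 40 per cent' cut quoted in the article

baseline_cost = SYSTEM_POWER_MW * HOURS_PER_YEAR * ELECTRICITY_EUR_PER_MWH
annual_saving = baseline_cost * ENERGY_REDUCTION

print(f"Assumed baseline energy cost: {baseline_cost / 1e6:.1f} M EUR/year")
print(f"Estimated saving at a 40% reduction: {annual_saving / 1e6:.1f} M EUR/year")
print(f"Over a five-year system life: {5 * annual_saving / 1e6:.1f} M EUR")
```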


Bode also commented that the LRZ is focused on continuing research and development to optimise energy usage further in the data centre. He explained that the procurement process for LRZ is conducted under what he refers to as a 'competitive dialogue', which allows LRZ to be part of a co-design process to ensure that compute and cooling infrastructure can be designed and optimised simultaneously.

Perhaps the most crucial point that separates the Gauss centres from other HPC organisations is the focus on a select group of applications. This allows users access to a more specialised support network and enables LRZ to optimise hardware, software and support services for a specific set of users. Of the main application areas mentioned, Bode commented that life sciences was experiencing the most growth. This is due to a rise in bioinformatics, medical and personalised medicine-based applications. 'We see this field growing rapidly in the number of users and the type of applications,' said Bode.


This focus on users and energy efficiency drives the procurement of new hardware at LRZ. An example of this is the addition of nodes with Intel Xeon Phi accelerators, as it allows users to begin to evaluate the technology and prepare applications to be ported across to many-core architectures. Bode said: 'It is quite clear that in the future some percentage of the system will be devoted to accelerators.'

Bode went one step further, explaining that to understand the requirements of LRZ users fully, the centre needs to analyse how many applications will be suited to this new technology: 'To rush out and buy a new supercomputer full of the latest technologies would certainly provide a theoretical performance boost. But, for a centre with hundreds of active users, this would mean months – if not years – where the HPC hardware was not being used efficiently, as applications are optimised for this new HPC architecture.

'We have a history of about 20 years of supercomputing, meaning that we have about 700 to 1,000 application groups, of which about 50 per cent are still active users. Let's say 500 active applications with a huge amount of software, which is nearly impossible to transfer to today's accelerators. We need to find out what real percentage of applications can make good use of accelerators.'


Future procurements

One of the advantages of running SuperMUC as separate phases is that it allows the LRZ to keep a large cluster up and running for the active user community. Bode explained that procurement is based on testing different products using benchmark kernels taken from representative application areas of LRZ users. This is important because it ensures that the amount of time spent porting and fine-tuning applications for new hardware is reduced, keeping the user community working and ensuring that HPC resources are used at maximum efficiency.
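As a rough sketch of what kernel-based evaluation can involve, the example below times two toy kernels – a memory-bound triad and a compute-bound matrix multiply – on whatever machine it runs on. The kernel choices and the harness itself are illustrative assumptions, not LRZ's actual benchmark suite or procurement tooling.

```python
# Hypothetical sketch: time a few representative kernels so candidate systems
# can be compared. Kernels and sizes are illustrative only.
import time

def stream_like_triad(n=1_000_000):
    # Memory-bandwidth-bound kernel (a STREAM-triad-style loop).
    a = [1.0] * n
    b = [2.0] * n
    c = [0.0] * n
    for i in range(n):
        c[i] = a[i] + 3.0 * b[i]
    return c[-1]

def dense_matmul_like(n=150):
    # Compute-bound kernel (naive dense matrix multiply).
    A = [[1.0] * n for _ in range(n)]
    B = [[2.0] * n for _ in range(n)]
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            for j in range(n):
                C[i][j] += aik * B[k][j]
    return C[0][0]

def run_suite():
    # Run each kernel once and record wall-clock time.
    results = {}
    for name, kernel in [("triad", stream_like_triad), ("matmul", dense_matmul_like)]:
        start = time.perf_counter()
        kernel()
        results[name] = time.perf_counter() - start
    return results

if __name__ == "__main__":
    for kernel, seconds in run_suite().items():
        print(f"{kernel}: {seconds:.3f} s")
```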


Another important aspect of the procurement of new systems is that SuperMUC is split into different phases. If the LRZ were to shut down the entire cluster, it would be left with huge numbers of users with no HPC resources to continue their research. Splitting the HPC resource into separate phases allows the LRZ to keep a large cluster up and running while new hardware is installed and optimised.

This will also be true of the next phase of upgrades at the centre, planned for 2018/2019. Bode stated that 'this will be a very large upgrade that will be overlapped with phase 2 of SuperMUC, but phase 1, the Sandy Bridge-based system, will be decommissioned and replaced.' The procurement process will begin in 2018, with installation sometime in 2019.

'We always want to make sure that we have at least one very large system running, which is why we have this overlapping procedure of installation and running of systems,' explained Bode.

This new procurement will also allow the LRZ to strengthen further its commitment to energy-efficient HPC, as Bode explains: 'Today some of our supercomputing hardware cannot be cooled with direct liquid cooling.'




The next phase of procurement will remove the oldest Sandy Bridge processors, but it will also replace some of the infrastructure. Bode said the installation would also include liquid-cooled electrical components that will allow the centre to reduce waste heat even further, so that more than 90 per cent of waste heat can be removed through liquid cooling. The focus on providing energy-efficient HPC drives not only the procurement of new hardware at LRZ, but also the development of software and tools that can facilitate the optimisation of applications to make the most efficient use of resources.



