Energy-efficient supercomputing
The Leibniz Supercomputing Centre (LRZ) is driving the development of new energy-efficient practices for HPC, as Robert Roe discovers
In the second profile of the three HPC centres under the banner of the Gauss Centre for Supercomputing (GCS), the focus is now on the Leibniz Supercomputing Centre (LRZ),
located in Garching, Munich. LRZ serves HPC users with a particular application focus on life sciences, environmental sciences, geophysics and astrophysics, and engineering. However, as Professor Arndt Bode, chairman
of the board of directors at LRZ, explains, its role extends beyond HPC – it also provides IT services to the local academic and research communities. He said: ‘We are not a single computing centre
for one university or research institution. We serve universities, colleges, the Bavarian state library, internet visualisation services; we provide backup and storage and, of course, we are also running many HPC servers.’ Each GCS centre hosts its own multi-petaflop
supercomputer, placing all three institutions among the most powerful computing centres worldwide. With a total of more than 20 petaflops, GCS offers by far the largest and most powerful supercomputing infrastructure in
Europe to serve a broad range of scientific and industrial research activities. The LRZ provides general IT services for
more than 100,000 university customers in Munich and for the Bavarian Academy of Sciences and Humanities (BAdW). It also runs the communications infrastructure of the Munich Scientific Network (Münchner Wissenschaftsnetz, MWN); acts as a competence centre for data communication networks; provides archiving and back-up on extensive disk and automated magnetic tape storage; and serves as a technical and scientific high-performance computing centre for all German universities.
HPC resources
The flagship supercomputer at LRZ is SuperMUC, although it is effectively two separate installations with individual entries in the Top500, the twice-yearly list of the world’s most powerful supercomputers ranked by the Linpack benchmark. As of June 2016, Phase 1 is rated number 27 on the Top500, while Phase 2 sits at number 28. SuperMUC comprises 18 ‘thin node’ islands based on Intel Sandy Bridge processors (Phase 1), six ‘thin node’ islands based on Intel Haswell processors (Phase 2), and one ‘fat node’ island based on Intel Westmere processors, with each island consisting of at least 8,192 cores. All compute nodes within an individual island are connected via a fully non-blocking InfiniBand network: FDR10 for the ‘thin nodes’ of Phase 1, FDR14 for the Haswell nodes of Phase 2, and QDR for the fat node island.
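To put those figures in context, a simple back-of-the-envelope tally of the island counts quoted above (a minimal illustrative sketch, not an official LRZ configuration) gives the minimum core count implied by ‘at least 8,192 cores’ per island:

```c
#include <stdio.h>

/* Back-of-the-envelope tally of SuperMUC's island layout, using only the
 * figures quoted in the article: 18 Sandy Bridge islands, six Haswell
 * islands and one Westmere 'fat node' island, each with at least 8,192
 * cores. The installed system exceeds this minimum considerably. */
int main(void)
{
    const int sandy_bridge_islands = 18;   /* Phase 1 'thin node' islands */
    const int haswell_islands      = 6;    /* Phase 2 'thin node' islands */
    const int fat_node_islands     = 1;    /* Westmere 'fat node' island  */
    const int min_cores_per_island = 8192;

    int islands = sandy_bridge_islands + haswell_islands + fat_node_islands;
    long min_cores = (long)islands * min_cores_per_island;

    printf("Islands: %d\n", islands);                 /* 25 */
    printf("Minimum core count: %ld\n", min_cores);   /* 204,800 */
    return 0;
}
```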
Bode stressed that the different processing technologies within each phase mean that they are effectively used as separate computing resources. He explained that the ‘fat nodes’ are a legacy of the previous supercomputer at LRZ, an SGI Altix shared-memory system: ‘We had many applications that had quite large memory requirements, so we started with a system based on Westmere with a rather large main memory.’ Phase 1 also includes a smaller cluster,
SuperMIC, consisting of 32 Intel Ivy Bridge nodes, each with two Intel Xeon Phi accelerator cards. However, the many-core Xeon Phi nodes are used primarily for application development and optimisation which, Bode explained, is why there are relatively few of them.
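To illustrate the kind of porting work a development partition like SuperMIC supports, the sketch below shows a generic OpenMP 4.0 target-offload loop of the sort that can be built for a Xeon Phi coprocessor with an offload-capable compiler. It is a hedged, illustrative example and not taken from any LRZ application:

```c
#include <stdio.h>

#define N 1000000

/* A minimal, generic example of OpenMP 4.0 'target' offload, the style of
 * directive-based programming commonly used when porting codes to Xeon Phi
 * coprocessor cards. With an offload-capable compiler the annotated loop
 * runs on the coprocessor; otherwise it simply executes on the host. */
int main(void)
{
    static double x[N], y[N];
    const double a = 2.5;

    for (int i = 0; i < N; i++) {
        x[i] = (double)i;
        y[i] = 1.0;
    }

    /* Copy x and y to the device, run the SAXPY-style loop there,
     * and copy y back to the host. */
    #pragma omp target map(to: x) map(tofrom: y)
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        y[i] = a * x[i] + y[i];
    }

    printf("y[42] = %f\n", y[42]);  /* expect 2.5 * 42 + 1 = 106.0 */
    return 0;
}
```

Because the same source falls back cleanly to host execution when no coprocessor is present, this directive style lends itself to the incremental porting and optimisation work that development clusters of this kind are meant for.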
Combining the two phases, SuperMUC totals more than 241,000 cores and a combined peak performance