HPC news


FRAUNHOFER INSTITUTE INSTALLS NEW CLUSTER


The Fraunhofer Institute for Laser Technology (ILT) has installed a new high-performance computing cluster within its Centre for Nanophotonics. The cluster will allow researchers to simulate laser-based production processes across a wide range of time and length scales, including novel techniques in micro- and nanophotonics. Variables for industrial processes are difficult to optimise through direct measurements in the micrometre-scale process zones, owing to the tiny dimensions and very high temperatures involved. Computer simulations therefore offer an attractive alternative for those looking to optimise performance; simulations are easier to automate and often more cost-effective than experiments, while also allowing fluctuations and measurement uncertainties to be excluded or specifically taken into account. Additionally, simulations of laser-based production processes tend to be multi-scale problems in which a large extent of the component has to be calculated at a very high resolution. While this would be difficult to account for in an experimental set-up, it is easy to model in silico.


The required large number of grid points in the simulations exceeded the capacity of conventional workstations in terms of processing time and storage space, so the institute required a purpose-built supercomputer for these applications. Funding provided by the state of North Rhine-Westphalia for the new Centre for Nanophotonics in Aachen allowed the Fraunhofer ILT to build a cluster capable of handling simulations of these multi-scale tasks. The final stage of the high-power computer system was installed and started up in November 2010. The cluster is based on a heterogeneous architecture consisting of both multi-core processors and nodes based on the Nvidia CUDA framework – a system that allows parts of the calculations to be performed on specialised graphics processors (GPUs). This concept is particularly suitable for the massively parallel execution of frequently recurring calculation steps.
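As a rough illustration of why such recurring calculation steps suit GPUs, the sketch below (plain NumPy, not the institute's actual code) applies one explicit heat-diffusion update to every grid point independently; this is exactly the data-parallel pattern that a CUDA kernel would distribute across thousands of GPU threads.

```python
import numpy as np

# Illustrative sketch only -- not Fraunhofer ILT's simulation code.
# An explicit 1-D heat-diffusion step is a typical "frequently
# recurring calculation step": the same small update is applied
# independently at every grid point, so all points can be computed
# in parallel (on a GPU, typically one CUDA thread per point).

def diffusion_step(u, alpha=0.25):
    """One explicit finite-difference step of du/dt = alpha * d2u/dx2."""
    u_new = u.copy()
    # Each interior point depends only on itself and its two neighbours.
    u_new[1:-1] = u[1:-1] + alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u_new

# A hot spot in the middle of a cold rod gradually spreads out.
u = np.zeros(11)
u[5] = 1.0
for _ in range(10):
    u = diffusion_step(u)
```

In a real laser-process simulation the same pattern recurs millions of times per time step over a 3-D grid, which is why offloading it to GPUs pays off.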


The installed cluster system has 376 CPUs and eight graphics processor systems with 1,920 GPUs. The storage capacity amounts to almost two Tbytes of main memory and 67 Tbytes of hard disk storage, of which 20 Tbytes are on redundant interconnected drives. Data is exchanged within the cluster by means of a fast InfiniBand network. The theoretical total computing power approaches 10 Tflops.


Agriculture genomics gains HPC solution


Collaboration between Biogemma, GenomeQuest and SGI has resulted in the first high-performance computing solution deployed for agriculture genomics. Serving three international breeding companies and one French technical institute, Biogemma has installed the GenomeQuest solution for sequence data management and analysis on the SGI Altix UV 1000 at its research headquarters. The GenomeQuest workflows used are RNA-Seq analysis, multi-genome analysis, de novo assembly, and polymorphism discovery, from SNPs to large rearrangements.


Olivier Dugas, head of Upstream Genomics at Biogemma, commented:


18 SCIENTIFIC COMPUTING WORLD


‘With the advent of next generation sequencing (NGS), our data delivery rate for species of interest is approaching hundreds of gigabases per month and we will be dealing in Tbytes by the end of 2011. With this collaboration and the unique technologies from GenomeQuest and SGI, our researchers are equipped to handle these huge data sets and perform the complex analytical processes that will lead to powerful genomic-based discoveries. For example, working at whole-genome resolution, we can now identify SNPs and structural variations between individual plants and across hundreds of samples.’


Cluster aids computational science research


Louisiana State University (LSU) has purchased an IBM Power7 system to advance computational science research at its Center for Computation and Technology (CCT) and Department of Chemical Engineering. The system, dubbed ‘Pandora’, will support a wide range of computational science, such as cyberinfrastructure frameworks, domain-specific application development, and advanced computational modelling in fluid dynamics, biology, chemistry, oceanography, astrophysics, and materials. Pandora contains eight 32-core IBM Power 755 nodes running 3.3GHz Power7 processors, giving the system 6.8 Tflops of peak performance. Each node is organised as four 8-core Power7 processors, so it can operate at higher core processing speeds than LSU’s other HPC systems. The IBM Power 755 platforms are suited to highly-parallel, computationally-intensive workloads such as weather and climate modelling, quantum chemistry simulations, astrophysics models, materials design and petroleum reservoir studies. ‘We envision this resource as a stepping stone to Blue Waters, the Power7-based petascale computer being built and housed at the University of Illinois as NSF’s Track 1 leadership-class system,’ said Honggao Liu, LSU’s HPC director. ‘The proposed system will advance CCT and LSU in supercomputing technology and enable academic departments and multi-organisation collaborations at LSU to operate in well-equipped research environments that provide a direct growth path to national-level systems.’
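The quoted 6.8 Tflops peak can be sanity-checked from the node count and clock speed. The flops-per-cycle figure below is an assumption, not stated in the article (Power7 cores can issue up to eight double-precision floating-point operations per cycle via their FMA units):

```python
# Rough sanity check of Pandora's quoted peak performance.
nodes = 8                # eight IBM Power 755 nodes
cores_per_node = 32      # 32 cores per node
clock_hz = 3.3e9         # 3.3 GHz Power7 processors
flops_per_cycle = 8      # ASSUMED: peak DP flops per core per cycle

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.2f} Tflops")   # prints 6.76 Tflops
```

That works out to about 6.76 Tflops, consistent with the 6.8 Tflops figure reported.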


Russia’s first Pflops supercomputer enters second stage of development


T-Platforms Group will deliver and install the second phase of expansion of the powerful Lomonosov supercomputer at M.V. Lomonosov Moscow State University (MSU), increasing overall performance to 1.3 Pflops (1.3 quadrillion floating-point operations per second). Currently, the company is working on the first part of the Lomonosov expansion project, which will enable the system to reach a peak performance of 510 Tflops before the end of December. The second stage, which will add another 800 Tflops of peak processing capacity and bring the total system peak to 1.3 Pflops, will be based on the latest TB2-TL hybrid blade system from T-Platforms, equipped with Nvidia Tesla X2070 GPUs.


Due to the compute density of the new platform, only eight standard computer cabinets are required to expand Lomonosov, with each one providing peak performance of 100 Tflops in double-precision operations. The new solution has allowed MSU to reduce the price/performance ratio to $31,000 per Tflop. The solution will also include an additional 100 Tbytes of high-reliability storage for user data, and will increase the archive system volume to one Pbyte. The network infrastructure will be expanded; however, since the engineering infrastructure installed during the first stage of the project was originally designed for system expansion up to one Pflop, additional power and cooling infrastructure is not required. The first stage of Lomonosov, currently operating and based on Intel Xeon processors, will be integrated with the new GPU-based hybrid system through a shared InfiniBand interconnect infrastructure, allowing the two systems to operate as a single computer. The commissioning of the complex will be completed at the end of the second quarter of 2011.
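The expansion figures quoted above can be cross-checked with simple arithmetic (using only the numbers reported in the article):

```python
# Cross-check of the Lomonosov expansion figures from the article.
stage1_tflops = 510        # peak after the first expansion stage
stage2_tflops = 800        # peak added by the second stage
cabinets = 8               # cabinets added in stage two
tflops_per_cabinet = 100   # peak per cabinet, double precision

# Eight cabinets at 100 Tflops each account for the stage-two capacity.
assert cabinets * tflops_per_cabinet == stage2_tflops

total_pflops = (stage1_tflops + stage2_tflops) / 1000
print(total_pflops)   # prints 1.31, reported as "1.3 Pflops"
```

The two stages together give 1.31 Pflops, matching the rounded 1.3 Pflops total in the announcement.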


www.scientific-computing.com

