HPC news

Mont-Blanc project sets exascale aims


Energy efficiency is already a primary concern for the design of any computer system and it is recognised that future Exascale systems will be strongly constrained by their power consumption. This is why the Mont-Blanc project, launched with a kick-off meeting in Barcelona on 14 October, has set itself the following objective: to design a new type of computer architecture capable of setting future global HPC standards that will deliver Exascale performance while using 15 to 30 times less energy.


This new project, coordinated by the Barcelona Supercomputing Center (BSC) and with a budget of more than 14 million Euros, including more than 8 million Euros funded by the European Commission, has three objectives:

● To develop a fully functional energy-efficient HPC prototype using low-power, commercially available embedded technology;

● To design a next-generation HPC system, together with a range of embedded technologies, in order to overcome the limitations identified in the prototype system; and

● To develop a portfolio of Exascale applications to be run on this new generation of HPC systems.


With energy efficiency being a key issue, supercomputers are expected to achieve 200 Petaflop/s (PF) in 2017 with a power budget of 10MW, and 1,000 PF (1 Exaflop/s) in 2020 with a power budget of 20MW. That means an increase in energy efficiency of more than 20 times compared with the most efficient supercomputers today.
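These targets translate directly into flops-per-watt figures. The short Python sketch below works out the implied numbers; the roughly 2 GFlops/W baseline for the most efficient systems of late 2011 is an assumption in line with contemporary Green500 leaders, not a figure from the announcement.

# Back-of-the-envelope check of the roadmap figures quoted above.
# Assumption: the most efficient supercomputers in late 2011 deliver
# roughly 2 GFlops per watt (contemporary Green500 leaders).

def gflops_per_watt(peak_pflops, power_mw):
    """Convert peak performance (PFlop/s) and a power budget (MW) into GFlops/W."""
    return (peak_pflops * 1e6) / (power_mw * 1e6)  # GFlops divided by watts

baseline_2011 = 2.0  # assumed GFlops/W of today's most efficient systems

for year, peak, power in [(2017, 200, 10), (2020, 1000, 20)]:
    target = gflops_per_watt(peak, power)
    print(f"{year}: {peak} PF at {power} MW -> {target:.0f} GFlops/W "
          f"({target / baseline_2011:.0f}x the assumed 2011 baseline)")

The 2020 target of 50 GFlops/W is roughly 25 times the assumed baseline, which is where the 'more than 20 times' improvement comes from.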


Oak Ridge awards supercomputing contract


The US Department of Energy’s Oak Ridge National Laboratory has awarded a contract to Cray to increase the Jaguar supercomputer’s science impact and energy efficiency. The upgrade, which will provide advanced capabilities in modelling and simulation, will transform the DOE Office of Science-supported Cray XT5 system, currently capable of 2.3 million billion calculations per second (petaflops), into a Cray XK6 system with a peak speed between 10 and 20 petaflops.


The new system will employ the latest AMD Opteron central processing units as well as Nvidia Tesla graphics processing units – energy-efficient processors that accelerate specific types of calculations in scientific application codes. The last phase of the upgrade is expected to be completed in late 2012. The system, which will be known as Titan, will be ready for users in early 2013. With 299,008 cores and 600 terabytes of memory for solving critical problems, Titan will continue the legacy of its predecessor as one of the nation’s most powerful tools for science.
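As a rough illustration of that scale, the sketch below derives some per-core figures from the specifications quoted above; they are illustrative calculations, not numbers stated in the announcement.

# Figures derived from the Titan specifications quoted above.
cores = 299008            # total cores
memory_tb = 600           # total memory in terabytes
jaguar_pflops = 2.3       # current Jaguar peak
titan_pflops_low, titan_pflops_high = 10, 20  # projected Titan peak range

memory_per_core_gb = memory_tb * 1024 / cores
print(f"Memory per core: {memory_per_core_gb:.1f} GB")
print(f"Peak speed-up over Jaguar: {titan_pflops_low / jaguar_pflops:.1f}x "
      f"to {titan_pflops_high / jaguar_pflops:.1f}x")

That works out to roughly 2 GB of memory per core and a four- to nine-fold increase in peak performance over the existing Jaguar system.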


The computationally intensive numerical experiments that will run on the system will focus on DOE priorities in energy technology and science research. Titan’s pioneering simulation projects will include work towards the commercially viable production of biofuels and biomaterials from cellulosic materials, such as switchgrass and poplar trees.


IN BRIEF

● The US Department of Energy’s Argonne National Laboratory has selected Allinea DDT as the default tool for parallel debugging on its Blue Gene and Linux systems.

● Curie, a French supercomputer open to European scientists, has joined PRACE. Installed at the Très Grand Centre de Calcul (TGCC), operated by CEA near Paris, Curie is the second PRACE Tier-0 system.

● The Portland Group’s products now include support for AMD’s upcoming microprocessors, based on the latter’s Bulldozer architecture.

● Dr Maria Ramalho has been appointed managing director of the PRACE Research Infrastructure and will be based in an office in Brussels, Belgium.

● The OpenMP Architecture Review Board (ARB) has appointed Michael Wong of IBM Canada as CEO of the OpenMP organisation, with Matthijs van Waveren appointed marketing coordinator.



