HPC 2012 HPC facilities




Inside the computer room at CSCS


to a strong team of systems engineers that can put everything together, diagnose any problems and ensure its stability, we need people who can work with users to make their applications run as quickly and efficiently as possible. The final step is enabling scientists to work effectively with these systems. Active programmes are needed that provide training on how to use the systems, how to program a parallel application, the best methods for debugging and optimising code, and how to run performance analyses.
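To give a flavour of what such training covers, the sketch below shows about the simplest possible parallel application, written here in Python with mpi4py purely as an illustration (the choice of language and MPI binding is ours, not CSCS's): each rank sums its own slice of a range and the partial results are combined with a reduction.

# A minimal MPI sketch, assuming mpi4py and an MPI runtime are available.
# Each rank sums its own slice of a range; rank 0 collects the total.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the job
size = comm.Get_size()   # total number of processes

N = 10_000_000
chunk = N // size
start = rank * chunk
end = N if rank == size - 1 else start + chunk   # last rank takes the remainder

local_sum = sum(range(start, end))

# Combine the partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Total computed across {size} ranks: {total}")

Launched with, say, mpirun -n 4 python sum.py, the same script runs unchanged on a laptop or a large system, which is exactly the kind of pattern that training, debugging and performance-analysis programmes build on.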


Again, actively engaging with scientists and principal investigators is a fundamental point. We have 40-50 per cent of our budget available to use on strategic decisions, and in order to ensure we are heading in the right direction we need to interact with users on their level and really understand the scientific problems they are attempting to solve. The key is to have people within the centre who are accepted as bringing value to the research project, without necessarily being part of it, and who can provide assurance to the scientific community that the decisions we make will positively affect their projects.

During the next three years we will be implementing the Swiss National Strategy for High-Performance Computing. Produced in 2009, this is essentially a document that focuses on three key areas. The first was to construct a new building, as the infrastructure of the old one imposed too many limitations. The second was to commence a series of projects with the scientific community that focus on preparing code for upcoming technologies; HP2C, the Swiss Platform for High-Performance and High-Productivity Computing, is one such project that began in 2009 and is due to end in 2013.


Further information


Amazon Web Services www.aws.amazon.com


Argonne Leadership Computing Facility www.alcf.anl.gov


CSCS – the Swiss National Supercomputing Centre www.cscs.ch


“I personally believe that the future of large-scale high-performance computing lies in a pooling of resources and know-how”


The third key area within the strategy is the implementation of larger machines. At the end of 2012, CSCS will receive a first-of-its-kind supercomputer, the Cascade from Cray. This new generation of system is not due to be officially launched until 2013, but we will be installing a 750-teraflop machine within the next few months. In 2013, CSCS will work with Cray to upgrade the system, and we expect to be in the multi-petaflop range by 2015, before beginning our next cycle of investments in 2016.

Looking even further ahead, I personally believe that the future of large-scale HPC lies in a pooling of resources and know-how. Fundamental research is an international enterprise and collaboration is a core part of that – and not just in terms of scientific expertise. As supercomputers become larger and more complex, so do their costs, and much as we have seen in fundamental physics with CERN, I believe that in the next few decades we will see similar developments in HPC.


Matt Wood, technology evangelist and programme manager, data-intensive computing, Amazon Web Services, examines how the cloud has enabled HPC on-demand

The use of cloud is bringing the highest levels of computing performance to researchers’ and engineers’ fingertips worldwide, and the University of Melbourne and the University of Barcelona are among the organisations that have benefited from HPC on-demand. Representatives of Spain’s National Centre for Particle, Astroparticle and Nuclear Physics (CPAN) and Melbourne’s High Energy Physics Research Team have deployed a DIRAC distributed computing framework in the cloud to integrate those resources with grid resources for the Belle Experiment, a high-energy particle physics experiment. The Belle Experiment is taking place at the KEK B-Factory particle collider in Japan through an international collaboration of 400 physicists and engineers with the common aim of studying matter and anti-matter asymmetries. The collaborative nature of this experiment makes it a perfect use case for cloud computing.
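In this kind of deployment, cloud instances typically join the pool as extra worker nodes running pilot jobs alongside the existing grid sites. The sketch below is a hypothetical illustration of provisioning such workers on EC2 with the boto3 SDK; the AMI ID, instance type and bootstrap script are placeholders of ours, not details of the Belle or DIRAC set-up.

# Hedged sketch: launch a handful of EC2 instances to act as extra worker
# nodes for a distributed computing framework such as DIRAC. The AMI ID,
# instance type and bootstrap commands are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# User data runs at first boot; here it would fetch and start a pilot-job agent.
bootstrap = """#!/bin/bash
# (hypothetical) download and launch the site's pilot-job agent
curl -sO https://example.org/pilots/start-pilot.sh
bash start-pilot.sh
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder worker-node image
    InstanceType="c5.xlarge",         # placeholder compute-optimised type
    MinCount=1,
    MaxCount=10,                      # scale out on demand
    UserData=bootstrap,
)

for instance in response["Instances"]:
    print("Launched worker:", instance["InstanceId"])

The same capacity can be released as soon as an analysis campaign quietens down, which is what makes the on-demand model a natural fit for bursty, collaborative workloads.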


As researchers collaborate with experts in specialist functions and form distributed public-private partnerships, cloud brings shared data spaces with easy access to on-demand computing resources. Public access to data sets and associated data analysis tools helps create an ecosystem for data sharing and analysis that could even portend a larger trend in scientific collaboration.


Projects such as the Belle Experiment are also by their nature very compute-intensive, and we’re seeing large-scale modelling and simulation, and especially large-scale data analysis, increasingly challenge existing infrastructure and workflow methodologies. Data-intensive workloads require massively parallel frameworks that are ideally built on top of commodity hardware. Such systems, including Hadoop and non-relational databases, are becoming part of the solution for difficult computing challenges, and are tuned to run in the cloud too. In the next few years, cloud computing will play a key role in providing the technology infrastructure that will fuel the data-driven future of scientific research and multiply researchers’ capabilities to solve problems of massive scale.
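To make the Hadoop-style approach concrete, here is a minimal, assumed Hadoop Streaming pair in Python that counts events per detector run; the tab-separated input format and the idea of a run ID in the first column are our own illustration, not part of the Belle analysis itself.

# mapper.py -- hedged sketch of a Hadoop Streaming mapper.
# Reads tab-separated event records from stdin and emits "run_id<TAB>1".
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if not fields or not fields[0]:
        continue
    run_id = fields[0]   # hypothetical: first column holds the run ID
    print(f"{run_id}\t1")

# reducer.py -- hedged sketch of the matching reducer.
# Hadoop sorts mapper output by key, so all counts for one run arrive together.
import sys

current_run, count = None, 0
for line in sys.stdin:
    run_id, value = line.rstrip("\n").split("\t")
    if run_id != current_run:
        if current_run is not None:
            print(f"{current_run}\t{count}")
        current_run, count = run_id, 0
    count += int(value)
if current_run is not None:
    print(f"{current_run}\t{count}")

Submitted with the standard streaming jar (hadoop jar hadoop-streaming.jar -input events/ -output counts/ -mapper mapper.py -reducer reducer.py), the job fans out across however many commodity nodes, or cloud instances, happen to be available.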





