HPC DIRECTOR: UK ROUND-UP
will work for 90 per cent of the year, contributing to cost reductions. As a central, shared system (not devolved down to individual groups), utilisation of the HPC system is currently high, at approximately 95 per cent. In essence, the more consolidated the system, the higher the utilisation, which in turn means that power usage has a greater effect on overall costs.
The BlueCrystal facility at Bristol’s ACRC. Image credit: Timo Kunkel
Many of the research projects conducted on BlueCrystal can produce TBs of data per week. The ACRC is currently installing a resilient and expandable petascale research data storage facility called BluePeta to provide storage, not only for traditional HPC users such as climate modellers and particle physicists, but for researchers from other disciplines, such as the arts and social sciences, who also have large datasets and are beginning to explore data intensive computation.
HPC is of increasing interest to undergraduates and taught postgraduates, as well as to research students. ACRC runs a range of HPC-related short courses, and the university’s Department of Computer Science has an undergraduate HPC module and will develop an HPC Master’s course for 2011-2012.
The university is committed to continued investment in HPC and storage to reflect rapid advances in technology. The next phase of BlueCrystal in 2011 will explore heterogeneous architectures and general-purpose computing on GPUs.
www.bris.ac.uk/acrc
Tony Weir, service director, Edinburgh Compute and Data Facility (ECDF), home of the Eddie supercomputer
Launched in July 2010, Eddie was designed with power efficiency in mind. The system has a full power draw of just 271.86W per node (versus 312.5W for other proposed solutions). As a result, the university will benefit from energy savings worth £135k over three years compared with the costs incurred by Eddie’s predecessor. Using ‘free Scottish air’ to cool water-filled pipes, this cooling process brings hot air from the system down to approximately room temperature.
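As a rough illustration of how per-node power draws translate into running-cost differences, the sketch below compares the electricity cost of the two quoted figures over three years. The node count and tariff are hypothetical placeholders, not figures from ECDF, and the quoted £135k saving is measured against Eddie’s predecessor and reflects cooling effects not modelled here.

# Back-of-envelope sketch: electricity-cost difference between two per-node
# power draws. NODE_COUNT and TARIFF_GBP_PER_KWH are hypothetical placeholders.
HOURS_PER_YEAR = 8760
YEARS = 3
NODE_COUNT = 100              # hypothetical
TARIFF_GBP_PER_KWH = 0.10     # hypothetical

def three_year_cost(watts_per_node):
    """Electricity cost of running NODE_COUNT nodes flat out for YEARS years."""
    kwh = watts_per_node / 1000.0 * HOURS_PER_YEAR * YEARS * NODE_COUNT
    return kwh * TARIFF_GBP_PER_KWH

eddie_cost = three_year_cost(271.86)   # Eddie's quoted per-node draw
rival_cost = three_year_cost(312.50)   # alternative proposal's per-node draw
print(f"Eddie design: £{eddie_cost:,.0f}")
print(f"Alternative:  £{rival_cost:,.0f}")
print(f"Difference:   £{rival_cost - eddie_cost:,.0f}")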
The HPC system design, by integrator OCF, uses IBM System x iDataPlex servers running Intel Westmere processors. The IBM GPFS policy engine means we can integrate very fast solid-state disk, fast fibre channel and slower SATA storage, ensuring optimum performance of data on the system. Using GPFS, 40TB of data storage on IBM System Storage (a combination of fibre channel and solid-state drives) is fully integrated with 90TB of SATA storage. The system also includes BLADE Network Technologies and QLogic switches. Users’ research is multi-disciplinary and includes bioinformatics, speech processing, particle physics, materials physics, chemistry, cosmology, medical imaging and psychiatry. Access is available from local PCs; researchers working off-site can also gain access. We will soon be launching subject-specific user, computational and data portals.
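For readers unfamiliar with this kind of tiering, the sketch below gives a flavour of the placement decision a policy engine automates. It is written in Python purely for illustration (real GPFS placement and migration rules use their own SQL-like policy language), and the pool names, size and age thresholds are invented for the example rather than taken from the ECDF configuration.

# Illustrative only: hypothetical pool names and thresholds; not GPFS syntax.
from dataclasses import dataclass

@dataclass
class FileInfo:
    path: str
    size_bytes: int
    days_since_access: int

def choose_pool(f: FileInfo) -> str:
    """Pick a storage tier: SSD for small hot files, fibre channel for
    recently used data, SATA for cold, high-capacity storage."""
    if f.size_bytes < 1 << 20 and f.days_since_access < 7:
        return "ssd_pool"
    if f.days_since_access < 30:
        return "fc_pool"
    return "sata_pool"

print(choose_pool(FileInfo("/gpfs/project/results.h5", 5 << 30, 45)))  # sata_pool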
Alongside low power usage, meeting users’ requirements was of key importance during design. We undertook a wide-ranging requirement-gathering exercise: every user wanted more compute power, so Eddie will double, and redouble again, over a three-year period. Currently, some researchers have seen a five-fold speed increase for their application code compared to Eddie’s predecessor. One user has seen an approximately 40-fold application code speed-up using our small farm of GPU processors. Shortly, we will purchase Phase 2 of the project (doubling the compute power again). We may add more GPUs. A future challenge will be how to manage and store large amounts of data. In the last six months, research groups have begun asking for 20-100TB of storage per project! Previously, researchers only required 1-2TB per project.
www.ed.ac.uk/schools-departments/information-services/services/research-support/research-computing/ecdf
Dr Jon Lockley, manager, Oxford Supercomputing Centre (OSC)
OSC is the central HPC resource for the University of Oxford. As such, our user set encompasses many of the academic disciplines within the university. Physical sciences (particularly chemistry and its various sub-disciplines) still dominate local usage, but uptake is becoming more widespread across other areas, including the life sciences, arts and humanities. Unlike many of our academic counterparts, we charge for local usage directly.
OSC operates a range of HPC systems of varying architectures and hardware: clusters, shared memory, Intel, AMD and GPGPUs. This allows us to choose the best cost-performance system for a given project and application set. We generally install one new major piece of infrastructure each year to keep the facility agile and responsive to user requirements. In addition, a number of other services run from our parent department, the Oxford e-Research Centre, including a Microsoft cluster and grid computing (both national and local), and we are prototyping a cloud-based service.
‘We place a strong emphasis on quality of applications software and the knowledge of our users’
Increasingly we see projects that are at least as much driven and constrained by data as by more traditional simulation/number-crunching considerations. We are therefore basing planning more around the storage and management of data than in previous years. We also anticipate a greater demand for visualisation services as part of this effort. Looking to the future, there are a number of technologies of interest (hence our recent investment in GPGPUs). However, we also place a strong emphasis on the quality of applications software and the knowledge of our users – an HPC centre is only as good as the jobs it is running. OSC provides local training and applications support, but our view is that the skill set of HPC users needs constant effort to be maintained and enhanced as part of a national strategy.
www.oerc.ox.ac.uk/computing-resources/osc