UK HPC updates its hardware


Robert Roe interviews experts from the University of Bristol to discover their strategy to advance UK HPC research


For more than a decade the University of Bristol has been contributing to scientific research using high performance computing (HPC). Now it has invested more than £16 million in HPC and research data storage.

To meet the needs of the wide array of researchers using the HPC resources, the university launched its latest system, named BlueCrystal 4 (BC4), earlier this year. Designed, integrated and configured by the HPC, storage and data analytics integrator OCF, BC4 has more than 15,000 cores – the largest UK university system by core count – with a theoretical peak performance of 600 Tflops.

Simon Burbidge, director of advanced computing at the university’s Advanced Computing Research Centre, commented on the importance of developing a strong HPC presence: ‘We went through a long process to determine what to get and to get it for the best value. It is part of our long-term programme to increase the use of computational resources, in terms of the way they can help with modern research.

‘It is the culmination of a year’s work to get it in and get it working but, actually, it is part of a 10-year programme we have to promote HPC and its use within the research community.’


Accelerating research

More than 1,000 researchers will be taking advantage of the new system. BC4 is already aiding research into new medicines and drug absorption by the human body.

‘We have researchers looking at whole-planet modelling, with the aim of trying to understand the earth’s climate, climate change and how that’s going to evolve, as well as others looking at rotary blade design for helicopters, the mutation of genes, the spread of disease and where diseases come from,’ said Dr Christopher Woods, EPSRC research software engineer fellow at the university.

‘Early benchmarking is showing that the new system is three times faster than our previous cluster – research that used to take a month now takes a week, and what took a week now only takes a few hours. That’s a massive improvement.’


Choosing the right technology

However, while most new systems are optimised around a series of the most demanding applications, Burbidge and his colleagues were keen to take the needs of the wider user community into account.

‘The first thing we do is talk to the users and check what it is that they want to do, the size of runs they want and what they hope to accomplish in the next two to three years, so we can size the system to suit,’ stated Burbidge. ‘If we find that people do need that extra memory, then we do take that into account, and that is what we have done with those large memory nodes. We also have a set of benchmark programs, real applications and some realistic-sized problems that we use to demonstrate the performance of the system.

‘We do have a wide workload, from very large-scale simulations of atmospheric physics and climate forecasting or aeronautical simulations, but there are lots of people doing equally important work in medicine, for example, doing epidemiological studies, statistics, population simulations,’ Burbidge concluded.

BC4 uses Lenovo NeXtScale compute nodes, each comprising two 14-core 2.4GHz Intel Broadwell CPUs with 128GiB of RAM. It also includes 32 nodes with two Nvidia Pascal P100 GPUs each, plus one GPU login node. Connecting the cluster are several high-speed networks, the fastest of which is a two-level Intel Omni-Path Architecture network running at 100Gb/s. BC4’s storage is composed of one petabyte of disk storage provided by DDN’s GS7K and IME systems, running IBM’s Spectrum Scale parallel file system.
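Those headline figures hang together: theoretical peak performance is simply cores × clock speed × floating-point operations per cycle. A rough sanity check, assuming Broadwell’s standard AVX2 peak of 16 double-precision FLOPs per core per cycle (an architectural figure, not a number quoted in the article):

# Back-of-the-envelope peak for BC4's CPU partition.
# Assumes 16 double-precision FLOPs per core per cycle
# (2 AVX2 FMA units x 4 doubles x 2 ops) -- a standard
# Broadwell figure, not one quoted in the article.
cores = 15_000        # 'more than 15,000 cores'
clock_ghz = 2.4       # 14-core 2.4GHz Broadwell CPUs
flops_per_cycle = 16  # AVX2 FMA, double precision

peak_tflops = cores * clock_ghz * flops_per_cycle / 1_000
print(f'{peak_tflops:.0f} Tflops')  # ~576 Tflops

At exactly 15,000 cores this gives roughly 576 Tflops; the quoted 600 Tflops corresponds to about 15,625 such cores, consistent with the ‘more than 15,000’ stated above.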


Woods continued: ‘To help with the interactive use of the cluster, BC4 has a visualisation node equipped with Nvidia Grid vGPUs. It helps our scientists to visualise the work they’re doing, so researchers can use the system even if they’ve not used an HPC machine before.’

Housed at Virtus Data Centres’ LONDON4, a UK-based shared data centre for research and education in Slough, BC4 is the first of the university’s supercomputers to be held at an independent facility. This is made possible by the direct connection to the Bristol university campus via JISC’s high-speed Janet network.

Kelly Scott, account director, education at Virtus Data Centres, said: ‘LONDON4 is specifically designed to have the capacity to host

