HPC DIRECTOR: UK ROUND-UP


Clare Gryce, coordinating manager, Research Computing, University College London (UCL)


UCL’s 2,560-core cluster, named Legion, supports a diversity of research from across the university’s faculties. Delivered by Dell and ClusterVision and based on the Intel Woodcrest processor, it features a range of best-in-class compilers, libraries and applications that support work ranging from mature HPC user communities, such as materials science, high-energy physics and nanotechnology, through to newer themes. Examples of the latter include epidemiology (analysis of HIV treatment regimes), economics (labour market modelling) and network research (analysis of next-generation wireless networks). High-performance scratch storage is provided by the Lustre cluster file system on a DDN back end. Legion is complemented by a shared-memory service, based on SGI’s Altix 4700 line, and a large Condor pool implemented using student workroom machines. All services are managed and developed by the central Information Services Division, and are free at the point of access for UCL staff and sponsored postgraduate students. UCL researchers are also heavy users of the national HECToR service and international resources such as the US TeraGrid.

Legion’s workload includes a wide range of job classes, from ensemble-style tasks, each with tens of thousands of jobs, to parallel jobs using up to 500 cores. Optimising overall throughput is a key concern for the management team, and a finely-tuned scheduling environment maintains an average utilisation of more than 90 per cent. Legion is set to be expanded to 6,000 cores early in 2011 using Intel’s Westmere processor. This project will introduce a dedicated partition for high-throughput single-node jobs within the architecture, in response to the growth of the computational biology and computational medicine communities, who are looking to use Legion as the processing platform for a large number of sequencing facilities across UCL.
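To make the mix of job classes concrete, the sketch below shows how an ensemble-style workload might be packaged as a single array job rather than tens of thousands of individual submissions. The article does not name Legion’s batch system, so the Grid Engine-style directives (qsub with -t array syntax and the $SGE_TASK_ID variable), the job name and the run_model executable are illustrative assumptions only, not UCL’s actual configuration.

```python
# Hypothetical sketch: packaging an ensemble of many small tasks as one
# scheduler array job. Grid Engine-style directives are assumed purely
# for illustration; a real site would use its own batch system's syntax.
import subprocess

N_TASKS = 10000  # one array element per ensemble member (placeholder figure)

job_script = """#!/bin/bash
#$ -N ensemble_sweep
#$ -l h_rt=1:00:00
#$ -t 1-%d
# Each array element runs one ensemble member, indexed by $SGE_TASK_ID.
./run_model --member "$SGE_TASK_ID"
""" % N_TASKS

with open("ensemble.sh", "w") as f:
    f.write(job_script)

# A single submission covers the whole ensemble, which keeps the load on the
# scheduler far lower than submitting tens of thousands of separate jobs.
subprocess.check_call(["qsub", "ensemble.sh"])
```

Parallel jobs of up to 500 cores would instead request a core count and launch under MPI, but the scheduling concern is the same: keeping overall throughput, and hence utilisation, high.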




The team is also looking at the use of accelerator technologies such as Nvidia’s Fermi GPGPU platform, and evaluating the potential performance gains they offer. 2011 is also likely to see the establishment of a Windows-based cluster service for emergent user groups such as archaeologists. www.ucl.ac.uk/isd/common/research-computing
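As a purely illustrative sketch of the kind of kernel offloading such an accelerator evaluation involves (not code from the UCL team), the fragment below uses PyCUDA to compile and launch a trivial CUDA kernel from Python; the kernel, array size and launch configuration are arbitrary assumptions.

```python
# Minimal PyCUDA sketch: compile a trivial CUDA kernel and run it on the GPU.
# The kernel, array size and launch configuration are illustrative only.
import numpy as np
import pycuda.autoinit          # creates a CUDA context on the default device
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *a, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] *= factor;
}
""")
scale = mod.get_function("scale")

a = np.random.randn(1024).astype(np.float32)
expected = a * 2.0

# drv.InOut copies the array to the device and back around the kernel launch.
scale(drv.InOut(a), np.float32(2.0), np.int32(a.size),
      block=(256, 1, 1), grid=(4, 1))

assert np.allclose(a, expected)
```

In practice, the question for an evaluation is whether real application kernels, rather than toy examples like this, see enough speed-up to justify the porting effort.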


Dr David Wallom, technical director of the UK National Grid Service (NGS)


The UK NGS is a multi-site distributed computing facility available to all researchers within the UK. It operates a number of different services arranged in a hub-and-spoke model. At the hub are a number of centrally provided services; these include a common authentication and authorisation system run through the UK e-Science Certificate Authority, accounting using an OGF standards-compliant data format, and service description publishing using the GLUE schema. The NGS also has centralised monitoring to ensure that the remote user-facing services are available at a sufficient quality for the user community.

The user-facing services operated by our member institutions consist of a number of different computational and data resources to which the general NGS user community has access. We currently offer a range of different interfaces, architectures and operating systems, allowing researchers to gain access to types of resource that would not otherwise be available to them at all, or not at the scale a single institution could provide. The middleware chosen ranges from the European-developed gLite to the US Globus toolkit and the OMII-UK-developed GridSAM. Each of these has attracted a different type of user community, and as such we would not be willing to force a single choice onto the whole community.
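As a hedged illustration of what a middleware-neutral job description looks like, the sketch below builds a minimal JSDL document of the kind consumed by standards-based submission services such as GridSAM. The namespaces follow the OGF JSDL 1.0 specification; the executable and argument are invented placeholders rather than an NGS-endorsed example.

```python
# Illustrative sketch: build a minimal JSDL 1.0 job description in Python.
# The executable and argument are placeholders, not an NGS example.
import xml.etree.ElementTree as ET

JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"

job = ET.Element("{%s}JobDefinition" % JSDL)
desc = ET.SubElement(job, "{%s}JobDescription" % JSDL)
app = ET.SubElement(desc, "{%s}Application" % JSDL)
posix = ET.SubElement(app, "{%s}POSIXApplication" % POSIX)
ET.SubElement(posix, "{%s}Executable" % POSIX).text = "/bin/echo"
ET.SubElement(posix, "{%s}Argument" % POSIX).text = "hello-grid"

# Serialise to a file that a JSDL-aware submission service could accept.
ET.ElementTree(job).write("job.jsdl", encoding="UTF-8",
                          xml_declaration=True)
```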


Users apply directly to the NGS through our website to be allocated resources. The service is currently free at the point of use, allowing us to support not only the more traditional research communities (those that would otherwise use a traditional HPC facility), but also those that are making their first foray into using distributed computing resources. As such, we currently support users funded by every research funding agency in the UK, at all stages of their careers, from undergraduate students to world-leading professors.


As we move forward as an organisation, our focus is on how we will support the diverse research communities being established through the EU-funded ESFRI projects. As the UK representative in the European Grid Infrastructure (EGI), we will be the local contact for UK-based participants in and users of these projects, many of whom have requirements for resources far beyond what is currently available. At the same time, we must ensure that we continue to support the communities we are already working with, providing resources and support to them and their host institutions. www.ngs.ac.uk


Ian Stewart, numerical analyst and director of advanced computing, Advanced Computing Research Centre (ACRC), University of Bristol Information Services


The ACRC has a mission to establish the University of Bristol as a world-class centre for HPC-based research. The centre’s BlueCrystal facility currently consists of two machines: Phase 1, with 384 AMD Opteron cores, used mainly for serial and GridPP (Large Hadron Collider project) jobs, and the 37 Tflops Phase 2, with 3,328 Intel Harpertown cores, used for large parallel jobs in areas such as climate modelling, drug design and aerospace engineering. BlueCrystal was designed and built by a consortium of IBM and ClusterVision, and on installation (June 2008) Phase 2 was ranked 66th in the Top500. BlueCrystal currently has around 400 active users, mainly researchers and postgraduates undertaking research through 170 distinct projects in 23 departments across the university. The machines are housed in two state-of-the-art machine rooms that employ APC’s highly efficient water-cooled racks throughout.



