HPC DIRECTOR


At the centre of CERN

Helge Meinhard, group leader, platform and engineering services, IT Department, CERN


My responsibilities at CERN involve, among other things, the running of a large batch facility extending to several thousand worker nodes. This provides the environment for our thousands of users to carry out their physics reconstruction and analysis tasks. The largest project at CERN is of course the Large Hadron Collider (LHC), which itself hosts a number of experiments, the major ones being ATLAS, CMS, ALICE and LHCb. These experiments produce very high data rates, and even large computing centres, such as the one we have at CERN, would be unable to cope with the demands these rates place on storage and processing.

As a result, we have joined forces with more than 200 computing centres worldwide that have expressed interest in processing the data generated by the LHC. This network is known as the Worldwide LHC Computing Grid (WLCG), which we break down into three tiers: Tier 0 (of which CERN is the only one), Tier 1 (11 of the larger centres, with lots of mass storage) and Tier 2 (the rest). The tiers are not necessarily a reflection of the size of the centres; some Tier 2 centres are larger than some in Tier 1. Instead, the tier reflects the function of the respective centre within the WLCG; the structure is functional, rather than managerial. All of our Tier 1 and Tier 2 partners operate independently, taking their own decisions on how they run their centres and what specifications they use for their hardware.


At CERN, and therefore at Tier 0, our task is to receive the data produced by the four detectors, store it away locally (first on disk, later on tape), and carry out a ‘first pass’ data quality check, calibration and reconstruction. We then distribute the data to the Tier 1 sites.
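
Purely as an illustration of that sequence, and not CERN's actual software, the Tier 0 workflow could be sketched roughly as follows; every function and name in the sketch is a hypothetical placeholder:

# A purely illustrative sketch of the Tier 0 workflow described above.
# Every function and name here is a hypothetical placeholder; CERN's real
# systems are far more elaborate and are not shown.

def store_locally(block):
    # raw data goes to disk first; migration to tape happens later
    print(f"stored {block} on disk; tape migration scheduled")

def first_pass(block):
    # stand-in for the data quality check, calibration and reconstruction
    print(f"first-pass processing of {block}")
    return f"reco({block})"

def distribute(reco, tier1_sites):
    # reconstructed data is shipped out to the Tier 1 centres
    for site in tier1_sites:
        print(f"transferring {reco} to {site}")

def tier0_workflow(raw_blocks, tier1_sites):
    for block in raw_blocks:
        store_locally(block)
        reco = first_pass(block)
        distribute(reco, tier1_sites)

tier0_workflow(["run42-block0", "run42-block1"], ["tier1-A", "tier1-B"])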


In total, CERN has more than 8,500 servers on the floor, and about half of these are used as worker nodes for the batch facility. We take a heterogeneous approach, which enables a rolling procurement programme: we bring in new servers and retire old ones, increasing capacity when and where it is needed. We run each batch of nodes until the end of the warranty period, before replacing them. Our upgrade programme is constant. We use a range of different suppliers for our servers, from top-level, high-profile manufacturers through to specialist integrators. At all times, of course, we have to match the equipment to the needs of our users.


‘The problems we need to overcome are very similar to those faced by just about any HPC centre in the world, namely: space, power, cooling and budget’


It is not only the servers that are heterogeneous; the user requirements are too. All our users run Scientific Linux, a distribution rooted in the high-energy physics community and derived from a major upstream vendor distribution. Beyond that, though, every user group has slightly different needs in terms of software set-up on the worker nodes. In practical terms, it would simply not be possible to reinstall software on each worker node every time the needs of the user groups changed.


For this reason, we have turned to cloud computing in order to virtualise our services on this batch farm. This allows us to install one and the same operating system on all our worker nodes, which are now turned into hypervisors. On these hypervisors, we install virtual machines with images (or software sets) that correspond to the requirements of a particular task. Once that task is completed, we simply delete the virtual machine, and the hypervisor is immediately available for another task, even one using a different image.
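
The pattern described here is: boot a transient virtual machine from a task-specific image, run the job inside it, then destroy the machine. Purely as a sketch of that pattern, and not the Platform ISF or OpenNebula tooling discussed below, the libvirt Python bindings could express it roughly as follows; the VM name, image path and resource sizes are invented placeholders:

# Illustrative sketch of the per-task VM lifecycle described above, using
# the libvirt Python bindings. This is not CERN's tooling; the domain name,
# disk image path and resource sizes below are invented placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>batch-task-vm</name>
  <memory unit='GiB'>2</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <!-- assumed path to a task-specific image -->
      <source file='/var/lib/images/reco-image.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def run_task_in_vm(conn, domain_xml):
    dom = conn.createXML(domain_xml, 0)  # boot a transient VM from the task image
    try:
        pass  # the batch job would execute inside the guest at this point
    finally:
        dom.destroy()  # once the task is done, the VM is simply deleted,
                       # freeing the hypervisor for another image

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    run_task_in_vm(conn, DOMAIN_XML)
finally:
    conn.close()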


In the long run, we would like to move more and more of our services over to virtual machines. Ultimately, we want to be able to decouple our service layer from our hardware layer. This will make us more flexible in terms of being able to retire and replace physical machines, for example.

We have been working with Platform Computing for the past 15 years, first using its LSF (Load Sharing Facility) product, back when our number of nodes was in double digits. Over time, we have evaluated competing products, but we have always concluded that LSF was the best match for our needs. Around 18 months ago, we were approached by Platform to see if we would consider testing and using its new product, Platform ISF, which gives access to cloud computing functionality. We are now collaborating with Platform, and this has enabled us to conduct tests of worker node virtualisation at a significant scale. We are also looking at an open-source cloud computing solution for worker nodes to work alongside ISF, namely the OpenNebula project.
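
For readers unfamiliar with LSF: work reaches a farm like this as jobs submitted to LSF queues, typically via the bsub command. A minimal sketch of such a submission, wrapped in Python, follows; the queue name, slot count, output file and executable are invented for illustration and assume an LSF installation on the submitting host:

# Minimal sketch of submitting a job to an LSF batch farm. The queue name,
# resource request, output file and executable are invented placeholders,
# not CERN's actual configuration.
import subprocess

cmd = [
    "bsub",
    "-q", "reco_queue",          # hypothetical queue name
    "-n", "8",                   # request eight job slots
    "-o", "reco.%J.out",         # output file named after the LSF job ID
    "./run_reconstruction.sh",   # hypothetical user executable
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)  # LSF acknowledges with the assigned job ID

LSF then schedules the job onto whichever worker node (or, in the virtualised set-up described above, whichever virtual machine) has free slots matching the request.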


There are more than 10,000 registered users at CERN, plus more than 2,300 CERN employees. On any given day, there can be between 8,000 and 9,000 people on site. For the batch facility, there can be between 400 and 500 active user IDs at one time, and each of those can submit multiple jobs.


The problems we need to overcome are very similar to those faced by just about any HPC centre in the world, namely: space, power, cooling and budget. Balancing those requirements is a major challenge.



