HIGH PERFORMANCE COMPUTING
Addressing computing challenges at ISC
ROBERT ROE SPEAKS WITH DR MARIA GIRONE, CHIEF TECHNOLOGY OFFICER AT CERN OPENLAB – WHILE BELOW, KEREN BERGMAN DISCUSSES SILICON PHOTONICS FOR HPC
As CERN prepares for ‘high-luminosity’ experiments in 2026, the organisation faces the significant challenge of supplying a computing infrastructure that can handle the huge amount of data generated. To overcome this, CERN openlab is working on a mixture of novel approaches to data handling and the use of commercial cloud providers that can help expand the capacity available to CERN researchers.
The challenge of creating the world’s largest particle accelerator has been met, but another remains – harnessing all of the data produced through experimentation. This will become even more demanding when the ‘high-luminosity’ LHC experiments begin in 2026. The demands of capturing, storing,
and processing the large volumes of data generated by the LHC experiments may be 50 to 100 times greater than today, with storage needs expected to be in the order of exabytes. CERN is working to tackle many of these challenges together with ICT industry leaders, through a public-private partnership known as ‘CERN openlab’. Maria Girone, chief technology officer at CERN openlab, will discuss the work being done to prepare CERN’s computing infrastructure for future LHC experiments ahead of her keynote at ISC High Performance in Frankfurt in July.
What is the scale of the ICT challenges faced by CERN?
CERN is home to the Large Hadron Collider (LHC), the world’s largest and most powerful particle accelerator. Built in a 27km-long tunnel, about 100m underground at the Franco-Swiss border, the LHC helps scientists to unlock the secrets of the universe. The particles within the LHC are made to
collide at close to the speed of light. This gives the physicists clues about how the particles interact, and provides insights
into the fundamental laws of nature. Within the LHC, there are up to one billion particle collisions per second. Custom hardware ASICs filter this down to approximately 100,000 collision events, which are then sent for digital reconstruction. More detailed algorithms whittle this down to around 100 ‘events of interest’ per second. In 2017, this process resulted in 40
petabytes of data being sent to the main CERN data centre, which is where archiving and processing take place. This data centre – together with its remote extension in Hungary – hosts 230,000 processor cores and 15,000 servers. Today, over 230 petabytes are permanently archived on tape. Researchers at CERN have developed
custom-built disk and tape systems that can scale to huge capacity and are capable of delivering data at a rate of petabytes per day.
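To make the staged reduction described above more concrete, the following is a minimal, hypothetical Python sketch of a two-stage filter – a coarse hardware-style trigger followed by a more detailed software selection. It is not CERN’s trigger software; the event model, thresholds and rates are invented assumptions used only to illustrate the pattern of discarding the vast majority of collisions at each stage.

# Toy sketch of staged event reduction: a hardware-style trigger followed
# by a software selection. All numbers here are illustrative assumptions,
# not real trigger criteria.

import random
from dataclasses import dataclass

@dataclass
class CollisionEvent:
    energy_gev: float   # simulated total deposited energy
    n_tracks: int       # simulated number of charged-particle tracks

def hardware_trigger(event):
    # Stage 1: a coarse, ASIC-style cut that discards most collisions.
    return event.energy_gev > 400.0

def software_selection(event):
    # Stage 2: a more detailed, reconstruction-level cut that keeps only
    # a small number of 'events of interest'.
    return event.energy_gev > 700.0 and event.n_tracks > 40

def simulate(n_collisions):
    # Generate fake collision events with random properties.
    return [
        CollisionEvent(energy_gev=random.expovariate(1 / 100.0),
                       n_tracks=random.randint(1, 50))
        for _ in range(n_collisions)
    ]

if __name__ == "__main__":
    events = simulate(1_000_000)
    after_hw = [e for e in events if hardware_trigger(e)]
    kept = [e for e in after_hw if software_selection(e)]
    print(f"generated: {len(events):,}  "
          f"after hardware trigger: {len(after_hw):,}  "
          f"events of interest: {len(kept):,}")

Running the sketch on a million simulated collisions shows the same qualitative behaviour as the real system: each stage removes the overwhelming majority of events, leaving only a tiny fraction for permanent storage and analysis.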
What is the approach to handling the data produced by the LHC?
The Worldwide LHC Computing Grid (WLCG) is used to store, distribute, and analyse this enormous volume of data.