HIGH PERFORMANCE COMPUTING
published a white paper highlighting many of the key challenge areas that we believe are ripe for tackling together with our industry collaborators. A series of planned upgrades to the LHC will result in the HL-LHC coming online in around 2026. This will crank up the performance of the LHC significantly and will increase the potential for discoveries. The higher the luminosity, the more collisions, and the more data the experiments can gather. An increased rate of collision events means that digital reconstruction of collisions becomes significantly more complex. At the same time, the LHC experiments plan to employ new, more flexible filtering systems that will collect a greater number of events. This will drive a huge increase in
computing needs. Using current software, hardware, and analysis techniques, the estimated computing capacity required
would be around 50 to 100 times higher than today. Data storage needs are expected to be on the order of exabytes by this time. Technology advances over the next seven to ten years will likely yield an improvement of approximately a factor of 10 in both the amount of processing and storage available at the same cost, but will still leave a significant resource gap. The solution to this is likely to be found in a series of hard-fought improvements and technology migrations – each contributing small advances – rather than a single revolutionary change. In order to achieve the required scale,
we will have to make use of a mix of new hardware architectures and new computing techniques. We do not have existing proofs of concept, but we do have examples from industry of improvements in data analytics, and of techniques like machine learning, that we think can help. High-performance
hardware architectures like GPUs, FPGAs, and the next generation of CPUs also offer the potential to dramatically improve some applications. Tying this heterogeneous system
together and equipping it with the latest software techniques is going to be a huge R&D challenge over the next few years. Innovation is therefore vital; we believe that working together with leading ICT companies through CERN openlab can play a very important role in helping us to overcome the challenges we face.
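To make the scale of that gap concrete, the figures quoted above can be combined in a simple back-of-the-envelope calculation. The short Python sketch below does only that arithmetic: the 50 to 100 times capacity estimate and the roughly tenfold flat-budget technology gain are the numbers from the text, while the variable names and output wording are purely illustrative.

# Back-of-the-envelope estimate of the HL-LHC computing resource gap,
# using only the figures quoted in the text above (names are illustrative).

demand_vs_today = (50, 100)   # capacity needed relative to today (low, high estimate)
tech_gain = 10                # roughly 10x more processing/storage at the same cost,
                              # expected from technology advances over 7-10 years

gap_low, gap_high = (d / tech_gain for d in demand_vs_today)
print(f"Remaining shortfall: roughly {gap_low:.0f}x to {gap_high:.0f}x "
      f"beyond what a flat budget is expected to provide.")
# Prints: Remaining shortfall: roughly 5x to 10x beyond what a flat budget is expected to provide.

In other words, even if the expected hardware gains materialise, a further factor of five to ten still has to come from the software improvements and new computing techniques described above.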
Optical supercomputing
Dr Keren Bergman, Professor of Electrical Engineering at the School of Engineering and Applied Science, Columbia University, discusses her ISC keynote on the development of silicon photonics for HPC
How did you get started with this research?
My career at Columbia began around 16 years ago and, throughout that time, my main focus has been optical networks and optical interconnects for various scales of computing systems: everything from on-chip interconnects to datacentres and long-haul metro-area networks. But my interest and focus on HPC goes back at least a decade. There was a DARPA report looking at the projections for building an exascale system; that effort started back in the early 2000s and the report came out in 2008. All the way back then I was very keen on, and involved in, making use of photonics in these systems.
Back then was really the first time that we hit on how power consumption was going to become such a huge factor in scaling. Right around that time, the mid-2000s, a corner was also being turned in the integrated photonics world. Silicon is not a very good photonic material, simply because it does not have a direct band gap like some other semiconductors, which means it is not a very good material for making lasers and detectors. On the other hand, silicon is extremely prevalent, and we have enormous capabilities in CMOS fabrication and so forth. And so we had the idea of moving photonics into the integrated circuit world, but the barrier of not being able to do anything very useful in silicon itself remained a challenge.
What happened in the mid-2000s coincided with the 90 nm CMOS fabrication node. That was a turning point because, when you are able to fabricate with tens of nanometres of precision, the optical devices you can make in that material have less loss. The optical wavelength is very large, so integrated photonic devices are much larger than transistors, but the technology had been developed for CMOS to lithographically make billions of transistors. Where you were able to achieve tens-of-nanometre or hundred-nanometre precision, we could create optical waveguides, the equivalent of optical wires, with enough precision that the walls are smooth enough for the losses to be small. All of a sudden, because of a technology that was originally developed for CMOS and the semiconductor industry, we are now able to make optical devices in the CMOS process with pretty good characteristics such as low loss, and this was one of the things that really propelled the field, allowing us to take integrated optics right into the silicon platform. Fast-forward to now and there has been really tremendous progress: today we are able to make all the components of photonic links, and this includes 50 GHz data modulators, 50
“We can send optical light over long distances without having to spend energy, without having to re-amplify”