HIGH PERFORMANCE COMPUTING
Supporting science
ROBERT ROE LOOKS AT HPC TECHNOLOGIES THAT COULD ENABLE THE NEXT GENERATION OF SCIENTIFIC BREAKTHROUGHS
HPC technology enables some of the most notable scientific breakthroughs, such as the image of a black hole released in April this year. However, it is not just in HPC that this technology is driving innovation, as it also permeates artificial intelligence and machine learning.

The first image of a black hole’s event horizon involved an international partnership of eight radio telescopes, with major data processing at MIT and the Max Planck Institute (MPI) in Germany. Processing more than four petabytes of data generated in 2017, the project demonstrates the scope of science enabled by HPC systems.

Many of the servers involved in the project were provided by Supermicro. Michael Scriber, senior director of server solution management at the company, highlights the development work that spans multiple generations of Supermicro technology: ‘Our involvement really spans a number of years; as you can tell from the white paper, there are a number of generations of products that were involved. If it was a nice, uniform type of thing, it would be much more predictable, to know exactly when things are going to be done.’

‘One of the helpful things with using Supermicro is that our management tools go across all of those generations. Where some people have become rather proprietary about their management tools, our focus has been on open-source applications such as Redfish, for example. With open-source tools you are not stuck with something that is proprietary,’ noted Scriber.
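Scriber’s point about avoiding lock-in is easier to see with a concrete example. Redfish is the DMTF’s open, REST-based management standard, so the same inventory query can be run against any compliant baseboard management controller, whatever the server generation. The sketch below, in Python using the requests library, is purely illustrative and not Supermicro’s own tooling; the BMC address and credentials are placeholders.

# Minimal sketch: inventorying servers over the DMTF Redfish REST API.
# Illustrative only, not Supermicro's tooling; the BMC address and
# credentials are placeholders, and verify=False is for the sketch only.
import requests

BMC = "https://bmc.example.com"   # hypothetical BMC address
AUTH = ("admin", "password")      # placeholder credentials

def list_systems():
    # Every Redfish service exposes its systems collection at /redfish/v1/Systems.
    systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
    for member in systems.get("Members", []):
        system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
        # Model, power state and health are read the same way on any
        # Redfish-compliant BMC, regardless of hardware generation.
        print(system.get("Model"),
              system.get("PowerState"),
              system.get("Status", {}).get("Health"))

if __name__ == "__main__":
    list_systems()

Because the endpoints and schema are standardised, a script like this does not need to be rewritten as new generations of hardware are added to a cluster, which is essentially the point Scriber is making.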
Imaging a black hole
Using an array of eight international radio telescopes in 2017, astrophysicists
used sophisticated signal processing algorithms and global very long baseline interferometry (VLBI) to turn four petabytes of data, obtained from observations of a black hole in a neighbouring galaxy, into the first image of a black hole event horizon; the cross-correlation step at the heart of this processing is sketched at the end of this section.

However, Scriber notes that the project uses multiple generations of server technology. If the team were to use the latest technologies, they could accelerate
data processing and reduce the time taken to produce these images: ‘We have got some great systems that would have made this task so much easier, so much faster. One of the great things about working at Supermicro is that they are on the very cutting edge.’

Scriber noted that the newly released EDSFF all-flash NVMe system could have enabled the scientists to process the data much faster, and in less rack space. ‘EDSFF is just coming out and it allows you to have much higher capacities. You are talking about half a petabyte of data in a 1U box. By the end of the year this will be a petabyte of flash,’ said Scriber. ‘Sometimes storage is part of that bottleneck, because you need to get that data out to the compute nodes so it can be processed,’ added Scriber.

Many academic clusters are built in stages over their lifespan, so having multiple generations of servers in a single installation is not a new idea. In fact, it is something that Supermicro expects when it designs its systems. ‘What we have done is to continue generations using that same architecture. So the TwinPro architecture has not gone away, but what we have done is modify that architecture to take advantage of the latest and greatest technologies,’ added Scriber.

‘You didn’t necessarily think years ago, “I am building this system so I can run a black hole project”. You bought it for other reasons, but it is so versatile that you can pull these things together and make this happen,’ Scriber concluded.
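As a back-of-the-envelope illustration of the correlation step mentioned above, the sketch below cross-correlates two synthetic telescope data streams to recover the relative delay between them, which is the basic quantity a VLBI correlator extracts for every pair of stations. It is a toy, FX-style example in Python using NumPy with made-up data, not the pipeline used on the real observations.

# Toy sketch of the cross-correlation at the heart of VLBI.
# Synthetic data only; not the pipeline used for the real black hole image.
import numpy as np

rng = np.random.default_rng(0)

# A common 'sky' signal plus independent receiver noise at two stations,
# with a relative delay of a few samples between the two recordings.
n, true_delay = 4096, 7
sky = rng.standard_normal(n + true_delay)
station1 = sky[:n] + 0.5 * rng.standard_normal(n)
station2 = sky[true_delay:n + true_delay] + 0.5 * rng.standard_normal(n)

# FX-style correlation: FFT both streams, multiply one spectrum by the
# conjugate of the other, then transform back. The peak of the result
# marks the geometric delay between the stations.
spec1 = np.fft.fft(station1)
spec2 = np.fft.fft(station2)
xcorr = np.fft.ifft(spec1 * np.conj(spec2))

lag = int(np.argmax(np.abs(xcorr)))
lag = lag - n if lag > n // 2 else lag   # unwrap the circular lag
print(f"estimated relative delay: {lag} samples (true: {true_delay})")

Scaling that idea up to eight stations, petabytes of recorded voltages and full imaging is exactly where the HPC systems described above come in.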
Specialised computing resources
HPC users have many options available if they want to go beyond the traditional model of x86 server-based clusters. Heterogeneous systems using GPUs have become popular in recent years, even taking many of the top 10 positions on the Top500.
Another option for more niche tasks is the field-programmable gate array (FPGA), which can be configured specifically for a single application, such as processing signal data from radio telescopes, or image processing for ML and AI applications such as autonomous vehicles.
As John Shalf, department head for computer science at Lawrence Berkeley National Laboratory, described in the ISC coverage in the last issue of Scientific Computing World, architectural specialisation with FPGAs, or even application-specific integrated circuits (ASICs), could be important in overcoming the bottleneck introduced by the slowdown of Moore’s Law.

Accelerating certain applications, pre-processing data or providing hardware offload are some of the options for FPGA technology. Gidel, for example, an FPGA specialist based in Israel, has been working on compression algorithms and on image processing applications for autonomous vehicles.

Ofer Pravda, COO and VP of sales and marketing at Gidel, noted that the company has been working on FPGAs