Predictions in HPC for 2020

Laurence Horrocks-Barlow, technical director of OCF, predicts that containerisation, cloud and GPU-based workloads are going to dominate the HPC environment in 2020

Scientific Computing World Spring 2020

There have been some interesting new developments in the high performance computing (HPC) market coming to light that will become more prominent in the coming months. In particular, containerisation, cloud and GPU-based workloads are all going to dominate the HPC environment in 2020.

Processor evolution

AMD's second-generation EPYC Rome processor has shown in benchmark testing to perform better as a single-socket configuration than any competitor's dual-socket. This new AMD CPU is proving to be very powerful and able to support GPU computing, with the ability to leverage new memory technologies, support PCIe Gen 4.0 and significantly increase bandwidth to 64GB/s.
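As a rough sanity check on that 64GB/s figure, a back-of-envelope calculation is sketched below. It assumes a full x16 PCIe Gen 4.0 link (the lane count is not stated in the article); PCIe 4.0 signals at 16 GT/s per lane with 128b/130b encoding.

```python
# Sketch: back-of-envelope check of the ~64GB/s figure for PCIe Gen 4.0.
# Assumes a full x16 link, which the article does not state explicitly.
GT_PER_S = 16          # transfers per second per lane, in GT/s (PCIe 4.0)
LANES = 16             # x16 slot
ENCODING = 128 / 130   # 128b/130b line encoding overhead

per_direction_gbps = GT_PER_S * LANES * ENCODING / 8  # GB/s, one direction
bidirectional_gbps = per_direction_gbps * 2           # GB/s, both directions

print(f"{per_direction_gbps:.1f} GB/s per direction")   # 31.5 GB/s
print(f"{bidirectional_gbps:.1f} GB/s bidirectional")   # 63.0 GB/s
```

The ~64GB/s headline figure is therefore the bidirectional number for a x16 link, rounded up.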

AMD has agreements with cloud providers AWS and Azure to put its CPUs on their cloud platforms and promote the use of AMD CPUs for HPC. This interesting move reflects how a lot of our customers are now planning their next HPC cluster or supercomputer to include AMD in their infrastructure design to better support AI workflows. AMD had previously been out of the HPC market for a period of time, focusing primarily on consumer-based chips, but this has dramatically shifted in recent months. With Intel opting not to support PCIe Gen 4.0 at present, AMD has gained a competitive advantage in the processor market. Lenovo has come a little later to the game, but we will see new developments with AMD later in 2020.

Another new development worth noting is Mellanox's ConnectX-6, the first adapter to deliver 200Gb/s throughput, providing high-performance InfiniBand to support larger bandwidth capability. Also, the recently launched Samsung Non-Volatile Memory Express (NVMe)

“Many research institutions are working towards a ‘cloud-first’ policy, looking for cost savings in using the cloud rather than expanding their data centres with overheads”

