software to applications,’ stated Rao. ‘In an AI-empowered world, we will need to adapt hardware solutions into a combination of processors tailored to specific use cases – like inference processing at the edge – and customer needs that best deliver breakthrough insights. This means looking at specific application needs and reducing latency by delivering the best results as close to the data as possible,’ added Rao.

Spring Hill can be added to any modern server that supports M.2 slots; according to Intel, the device communicates using the M.2 standard like a PCIe-based card, rather than via NVMe. Intel’s goal with NNP-I is to provide a dedicated inference accelerator that is easy to program, offers short latencies and fast code porting, and includes support for all major deep learning frameworks.

These are some of the first steps taken by Intel to compete in the AI and ML markets. The company announced its Intel Xe line of graphics cards earlier this year but, beyond that, we have not seen anything from Intel that could rival Nvidia’s hold on the AI market. The development of these processors, specifically designed to process neural networks, could help to build a foundation for Intel, allowing the company to gain a foothold in this market.


David Yip, OCF’s HPC and storage business development manager, thinks the rise of GPU technology means that HPC and AI development go hand in hand, and that the growth of AI provides added benefit to the HPC ecosystem. ‘There is a lot of co-development; AI and HPC are not mutually exclusive. They both need high-speed interconnects and very fast storage. It just so happens that AI functions better on GPUs. HPC has GPUs in abundance, so they mix very well.’

He also noted that AI is bringing new users to HPC systems who would not typically be using HPC technology. ‘Some of our customers in universities are seeing take-up by departments that were previously non-HPC orientated, because of that interest in AI. English, economics and humanities – they want to use the facilities that our customers have. We see non-traditional HPC users, so in some ways the development of AI has done HPC a service,’ added Yip.


‘What is important is not so much the FLOPS. We all know that for real HPC applications it is the bandwidth that counts’





Yip noted that the fastest system on the Top500 in an academic setting is based in Cambridge. ‘It is just over 2.2 petaflops, and you have to go back to about 2010 to get that kind of performance at the top of the Top500. That is almost a decade ago, so there is a difference in these very large systems, but we do eventually see this kind of performance come down.’
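Yip’s point about bandwidth can be made concrete with a back-of-the-envelope ‘machine balance’ calculation: the bytes of memory traffic a system can supply per floating-point operation. The Python sketch below uses purely illustrative round numbers, not figures from any system mentioned in this article, to show why a bandwidth-hungry kernel cannot come close to a CPU’s peak FLOPS.

```python
# Machine balance: bytes of memory traffic available per floating-point
# operation. Bandwidth-bound HPC codes need far more bytes/FLOP than
# modern CPUs can supply, which is why memory bandwidth, not peak FLOPS,
# often limits real applications.
# All figures below are illustrative round numbers, not article data.

peak_tflops = 2.5      # hypothetical peak FP64 throughput per node
mem_bw_gbs = 200.0     # hypothetical memory bandwidth, GB/s

balance = mem_bw_gbs / (peak_tflops * 1000)   # bytes per FLOP
print(f"Machine balance: {balance:.3f} bytes/FLOP")

# A simple triad kernel, a[i] = b[i] + s*c[i], moves 24 bytes per
# 2 FLOPs, i.e. it needs 12 bytes/FLOP: roughly two orders of
# magnitude more than the hardware supplies, so it runs bandwidth-bound.
triad_need = 24 / 2
print(f"Triad requirement: {triad_need:.1f} bytes/FLOP")
```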


Leaving a mark


AMD has had a strong year, with notable contract wins in the US for the Department of Energy national laboratories. It will deliver CPUs for the ‘Perlmutter’ system at Lawrence Berkeley National Laboratory (Berkeley Lab), and CPU and GPU components for the ‘Frontier’ system at Oak Ridge National Laboratory. But AMD is not just striving for these large-scale deals with the US government labs, as it has released a number of processors in its EPYC line that are delivering very competitive performance. Following on from the ‘Naples’ EPYC 7001 line of CPUs, the company has now released a new HPC-focused CPU in the ‘Rome’ EPYC 7002 generation. This new processor, known as the EPYC 7H12, has set records for performance in recent benchmark tests carried out by Atos. The tests focused on SPECrate benchmarks, as well as High Performance Linpack (HPL), the latter being used in the Top500 to measure the fastest supercomputers. With a clock speed of 2.6GHz, a 15 per cent increase over the EPYC 7742 processor, and a power draw of around 280 watts, it is more similar to a high-end datacenter GPU than to a CPU typically used in HPC. The chip also requires liquid cooling.

The Atos benchmarks made use of the BullSequana’s Enhanced Direct Liquid Cooling system combined with the AMD EPYC 7H12 processor. The results of the Atos measurements currently top the best published results for two-socket nodes on four SPECrate benchmarks. Additionally, Atos has set a new record for the HPL Linpack benchmark on an AMD EPYC CPU, with an 11 per cent increase in performance. These benchmarks aim to measure how hardware systems perform under compute-intensive workloads based on real-world applications.

‘We’re extremely proud that our BullSequana has achieved these world-record results. Our unique Enhanced Direct Liquid Cooling system provided the most efficient environment for achieving such performance from the AMD EPYC processor,’ said Agnès Boudot, senior vice president and head of HPC and quantum at Atos. ‘Our BullSequana, equipped with the latest AMD chip, provides our customers with the highest available performance for HPC and AI workloads, with an optimised TCO, to support them in going beyond the limits of traditional simulation.’

‘Taking on the processing challenges of the world’s highest-performing systems meant creating a solution up to the task, which AMD achieved with the 2nd Generation AMD EPYC processor,’ said Scott Aylor, corporate vice president and general manager, AMD datacenter solutions. ‘When paired with Atos’ BullSequana and their own impressive capabilities and customer relationships, we can deliver a whole new range of possibilities to address the processing needs of the modern datacenter.’
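For context on what the HPL benchmark is measuring, the sketch below estimates the theoretical peak double-precision throughput of a two-socket EPYC 7H12 node, the ceiling against which HPL efficiency is judged. The 64-core count and the 16 FLOPs-per-cycle figure for the Zen 2 core are our assumptions, not numbers taken from the Atos results.

```python
# Rough estimate of theoretical peak FP64 performance for a two-socket
# AMD EPYC 7H12 node, the quantity HPL efficiency is measured against.
# Core count (64) and FLOPs/cycle (16, i.e. two 256-bit FMA units per
# Zen 2 core) are assumptions, not figures from the article.

cores_per_socket = 64    # assumed for EPYC 7H12
clock_ghz = 2.6          # base clock quoted in the article
flops_per_cycle = 16     # assumed: 2 x 256-bit FMA per cycle, FP64
sockets = 2              # two-socket node, as in the SPECrate results

peak_tflops = sockets * cores_per_socket * clock_ghz * flops_per_cycle / 1000
print(f"Theoretical peak: {peak_tflops:.2f} TFLOPS FP64")
# A well-tuned HPL run typically sustains only a fraction of this peak.
```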

