HIGH PERFORMANCE COMPUTING


‘…portfolio of GPU systems and Nvidia HGX-2 system technology, Supermicro is introducing a new 2U system implementing the new Nvidia HGX A100 4-GPU board (formerly codenamed Redstone), and a new 4U system based on the new Nvidia HGX A100 8-GPU board (formerly codenamed Delta), delivering five petaflops of AI performance,’ said Charles Liang, CEO and president of Supermicro. ‘As GPU-accelerated computing evolves and continues to transform data centres, Supermicro will provide customers the very latest system advancements to help them achieve maximum acceleration at every scale, while optimising GPU utilisation. These new systems will significantly boost performance on all accelerated workloads for HPC, data analytics, deep learning training and deep learning inference,’ added Liang.


As a balanced data centre platform for HPC and AI applications, Supermicro’s new 2U system makes use of the Nvidia HGX A100 4-GPU board, with four direct-attached Nvidia A100 Tensor Core GPUs using PCIe 4.0 for maximum performance, and Nvidia NVLink for high-speed GPU-to-GPU interconnects. This advanced GPU system accelerates compute, networking and storage performance, with support for one PCIe 4.0 x8 and up to four PCIe 4.0 x16 expansion slots for GPUDirect RDMA high-speed network cards and storage such as InfiniBand HDR, which supports up to 200Gb per second of bandwidth.
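
For readers who want to confirm that a network card and a GPU actually share a direct PCIe path before relying on GPUDirect RDMA, the link matrix reported by nvidia-smi is a quick check. A minimal sketch in Python, assuming a Linux host with the Nvidia driver installed (the wrapper is illustrative; the underlying command is standard nvidia-smi):

```python
# Minimal sketch: print the GPU/NIC link matrix. In the output, PIX/PXB
# indicate direct PCIe paths (no CPU hop), NV# indicates NVLink, and
# SYS indicates traffic that must cross the inter-socket interconnect.
import subprocess

result = subprocess.run(["nvidia-smi", "topo", "-m"],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```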


‘AI models are exploding in complexity as they take on next-level challenges such as accurate conversational AI, deep recommender systems and personalised medicine,’ said Ian Buck, general manager and VP of accelerated computing at Nvidia. ‘By implementing the Nvidia HGX A100 platform into their new servers, Supermicro provides customers the powerful performance and massive scalability that enable researchers to train the most complex AI networks at unprecedented speed.’




Optimised for AI and machine learning, Supermicro’s new 4U system supports eight A100 Tensor Core GPUs. The 4U form factor with eight GPUs is ideal for customers that want to scale their deployment as their processing requirements expand.


The new 4U system will have one Nvidia HGX A100 8-GPU board, with eight A100 GPUs all-to-all connected via Nvidia NVSwitch for up to 600GB per second of GPU-to-GPU bandwidth, and eight expansion slots for GPUDirect RDMA high-speed network cards. Ideal for deep learning training, data centres can use this scale-up platform to create next-generation AI and maximise data scientists’ productivity, with support for ten x16 expansion slots.
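
The practical effect of NVSwitch’s all-to-all topology is that collective operations such as all-reduce run at full NVLink bandwidth between any pair of GPUs. A minimal sketch of timing an all-reduce across the eight GPUs, assuming PyTorch with the NCCL backend (the script name and launch command are illustrative):

```python
# Minimal sketch: time a 1 GiB all-reduce across all GPUs in one node.
# Launch with: torchrun --nproc_per_node=8 allreduce_sketch.py
# (or python -m torch.distributed.launch --use_env on older PyTorch).
import os
import torch
import torch.distributed as dist

def main():
    rank = int(os.environ["LOCAL_RANK"])   # set by the launcher
    torch.cuda.set_device(rank)
    dist.init_process_group(backend="nccl")

    # 256M FP32 elements = 1 GiB per GPU; NCCL routes the reduction
    # over NVLink/NVSwitch between the A100s.
    x = torch.ones(256 * 1024 * 1024, device="cuda")

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    dist.all_reduce(x)
    end.record()
    torch.cuda.synchronize()

    if rank == 0:
        print(f"all-reduce of 1 GiB took {start.elapsed_time(end):.1f} ms")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```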


Customers that require large-scale HPC simulations, or who are investing in AI and deep learning technologies, may want to explore the latest server offerings to ensure they have the best available performance to accelerate their simulations. For example, scientists who want maximum acceleration can look to Supermicro’s new A+ GPU system, which supports up to eight full-height double-wide (or single-wide) GPUs via direct-attach PCIe 4.0 x16 CPU-to-GPU lanes, without any PCIe switch, for the lowest latency and highest bandwidth.

The system also supports up to three additional high-performance PCIe 4.0 expansion slots for a variety of uses, including high-performance networking connectivity at up to 100G. An additional AIOM slot supports a Supermicro AIOM card or an OCP 3.0 mezzanine card.


The A100 GPU includes a new multi-instance GPU (MIG) virtualisation and GPU partitioning capability that is particularly beneficial to cloud service providers (CSPs). When configured for MIG operation, the A100 permits CSPs to improve the utilisation rates of their GPU servers, delivering up to 7x more GPU instances for no additional cost. Robust fault isolation allows them to partition a single A100 GPU safely and securely.
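
In practice, MIG partitioning is driven through nvidia-smi. A minimal sketch of slicing one A100 into seven independent instances, wrapped in Python for illustration (profile ID 19 is the 1g.5gb slice on a 40GB A100; enabling MIG mode requires root privileges and may require a GPU reset):

```python
# Minimal sketch: carve GPU 0 into seven 1g.5gb MIG instances.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Enable MIG mode on GPU 0.
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# Create seven 1g.5gb GPU instances, each with its own
# compute instance (-C). Profile 19 = 1g.5gb on a 40GB A100.
run(["nvidia-smi", "mig", "-i", "0",
     "-cgi", ",".join(["19"] * 7), "-C"])

# List the resulting GPU instances.
run(["nvidia-smi", "mig", "-lgi"])
```

Each instance then appears to CUDA applications as its own device, with dedicated memory and compute, which is what provides the fault isolation described above.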


The A100 adds a new third-generation Tensor Core that boosts throughput over the V100, while adding comprehensive support for DL and HPC data types, together with a new sparsity feature that delivers a further doubling of throughput.


New TensorFloat-32 (TF32) Tensor Core operations in the A100 provide an easy path to accelerating FP32 input/output data in DL frameworks and HPC, running 10x faster than V100 FP32 FMA operations, or 20x faster with sparsity. For FP16/FP32 mixed-precision DL, the A100 Tensor Core delivers 2.5x the performance of the V100, increasing to 5x with sparsity.
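
From a framework’s point of view, both paths are opt-in switches rather than code rewrites. A minimal sketch using PyTorch (the TF32 flags landed in PyTorch 1.7 alongside A100 support; the model and shapes here are illustrative):

```python
# Minimal sketch: enabling A100 Tensor Core paths from PyTorch.
import torch

# Route ordinary FP32 matmuls and cuDNN convolutions through
# TF32 Tensor Cores (no model changes required).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

model = torch.nn.Linear(4096, 4096).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()   # loss scaling for FP16 stability

x = torch.randn(1024, 4096, device="cuda")
target = torch.randn(1024, 4096, device="cuda")

for _ in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():    # FP16/FP32 mixed precision
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```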


The GPU enables Bfloat16 (BF16)/FP32 mixed-precision Tensor Core operations running at the same rate as FP16/FP32 mixed precision. Tensor Core acceleration of INT8, INT4 and binary data types rounds out support for DL inferencing, with A100 sparse INT8 running 20x faster than V100 INT8. For HPC, the A100 Tensor Core includes new IEEE-compliant FP64 processing that delivers 2.5x the FP64 performance of the V100.
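
Because cuBLAS can route large double-precision GEMMs through the FP64 Tensor Cores automatically, existing HPC code sees the speed-up without modification. A minimal sketch that times an FP64 matrix multiply from PyTorch (the matrix size is illustrative):

```python
# Minimal sketch: measure FP64 GEMM throughput on the current GPU.
import torch

n = 8192
a = torch.randn(n, n, dtype=torch.float64, device="cuda")
b = torch.randn(n, n, dtype=torch.float64, device="cuda")

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
c = a @ b                       # dispatched to cuBLAS DGEMM
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000.0
tflops = 2 * n**3 / seconds / 1e12   # 2*n^3 FLOPs for an n x n GEMM
print(f"FP64 GEMM: {tflops:.1f} TFLOPS")
```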


The Nvidia A100 GPU is architected not only to accelerate large, complex workloads, but also to efficiently accelerate many smaller workloads. The A100 enables server solutions that can accommodate unpredictable workload demand, while providing fine-grained workload provisioning, higher GPU utilisation and improved TCO.


Whether looking for initial test systems to explore AI research, or to replace large computing systems at their organisation, scientists can accelerate the performance of their research by taking a look at the latest server technologies.



