HIGH PERFORMANCE COMPUTING | SPONSORED CONTENT


Case study: the European Organisation for Nuclear Research delves into particle physics with Gigabyte servers


The European Organisation for Nuclear Research (CERN) uses Gigabyte's high-density GPU servers with 2nd Gen AMD EPYC processors. Their purpose: to crunch the massive amount of data produced by subatomic particle experiments conducted with the Large Hadron Collider (LHC). The impressive processing power of the GPU servers' multi-core design has propelled the study of high-energy physics to new heights.

The biggest and most energy-intensive CERN project is its particle accelerator, the LHC. To detect the subatomic particle known as the beauty (or bottom) quark more quickly, CERN decided to invest in additional computing equipment to analyse the massive quantities of raw data produced by the LHC. When CERN looked for ways to expand its data-processing equipment, the top priority was to acquire HPC capability: servers equipped with 2nd Gen AMD EPYC processors and multiple graphics accelerators, with support for PCIe Gen 4.0 computing cards. Gigabyte was the only company with a solution to match this demand. CERN selected the Gigabyte G482-S51, a model that supports up to eight PCIe Gen 4.0 GPGPU cards in a 4U chassis.

Gigabyte has plenty of experience in HPC applications. When AMD began offering PCIe Gen 4.0 technology on its x86 platforms, Gigabyte immediately responded by leveraging its technological know-how and expertise to design the first GPU servers capable of supporting PCIe Gen 4.0. Gigabyte optimised the server's integrated hardware design, from the electronic components and PCB all the way down to the high-performance power delivery system.


Signal integrity was maximised by minimising signal loss in high-speed transmissions between CPU and GPU, and between GPUs. The result is a GPU server that features lower latency, higher bandwidth and unsurpassed reliability.

To effectively process huge quantities of data with HPC technology, more than the combined computing power of the CPU and GPU comes into play. High-speed transmission is crucial, whether for the computing and storage of data between multiple server clusters, or the accelerated processing and communication of data between devices linked through the internet. The Gigabyte G482-S51 overcomes this challenge with its PCIe Gen 4.0 interface, which supports high-performance network cards. The increased bandwidth enables high-speed data transmission, which in turn enhances the performance of the entire HPC system, making it possible to process the 40 terabytes of raw data that the particle accelerator generates every second.


Custom-designed servers provide CERN with cutting-edge computing power

To quickly analyse the massive quantities of data generated by experiments conducted with the LHC, CERN independently developed its own powerful computer cards to handle the calculations. It paired these cards with graphics cards designed for image processing. The combined might of these specialised tools became the last word in cutting-edge computing. Gigabyte customised the G482-S51 to meet the client's specific requirements, including specially designed expansion slots, minute adjustments to the BIOS, and an advanced heat-dissipation solution.


Specially designed expansion slots and minute adjustments to the BIOS: To accommodate CERN's self-developed computer cards, Gigabyte customised the expansion slots of the G482-S51. Gigabyte also ran data simulations and made minute adjustments to the BIOS to better link all the computer cards to the motherboard, so that each card could achieve the maximum PCIe Gen 4.0 speed.


Advanced heat-dissipation solution: CERN had special specifications for just about everything, from the power supply to the network interface cards to the arrangement of the eight GPGPU cards. The heat output of these densely interconnected I/O devices is not uniform. Gigabyte leveraged its expertise in heat-dissipation design and integration techniques to channel airflow inside the servers, so that excess heat would not be a problem.

Gigabyte worked closely with AMD to expand the horizons of HPC applications. One of the most noteworthy advantages of the AMD CPU is its multi-core design, and in the push to solidify the AMD EPYC processor's position in the server market, Gigabyte has an important part to play. By creating an AMD EPYC server that delivers top performance, system stability and steadfast quality, Gigabyte was able to satisfy CERN's need for a solution capable of analysing large amounts of data and completing HPC workloads. Gigabyte's responsive customisation services and in-depth experience in research and development were just what the client needed to meet its specific requirements. By pushing computing power to the limit with state-of-the-art technological prowess, Gigabyte has taken an impressive stride forward in the application of HPC solutions to academic research and scientific discovery.


To learn more about Gigabyte server solutions, please visit https://www.Gigabyte.com/Enterprise. For direct contact, email server.grp@Gigabyte.com.


Spring 2021 Scientific Computing World 15

