THERMAL MANAGEMENT AND EMC


How to prepare for liquid cooling


By Simon Brady, product manager, liquid and high density cooling at Vertiv


The landscape of data centre cooling is undergoing rapid evolution, driven by the surge in high-performance computing (HPC) for artificial intelligence (AI) and machine learning (ML) workloads. As chip, server and rack densities continue to rise, traditional air cooling methods are proving insufficient to manage the escalating heat levels effectively. In response, enterprises are increasingly turning to hybrid solutions that integrate both air and liquid cooling technologies.


The adoption of liquid cooling heralds a new era in data centre thermal management. With nearly one in five data centres already leveraging liquid cooling and an additional 61 per cent considering its adoption, it is clear that this technology is becoming mainstream. Liquid cooling offers unparalleled efficiency in dissipating heat, particularly in high-density environments where traditional air cooling methods fall short.


Deploying liquid cooling

Deploying liquid cooling is a significant initiative that requires careful planning and consideration of the existing facility’s footprint, current thermal management strategy, workloads and budgets. Here is a roadmap for getting started:


Determine current and future workload requirements


In modern data centres, rack densities are increasing as the latest x86 and AI-capable graphics processing unit (GPU) chipsets push thermal design power (TDP) from 300 W to 800 W per chip, and in some cases towards 1,000 W or more. These chipsets power applications such as deep learning and AI chat generation, producing AI servers with TDPs of 6 kW to 10 kW per server. IT and facility teams must plan space allocation for these AI/HPC workloads to meet current demand and future growth, considering options such as rack conversions or dedicated rooms with liquid cooling systems.
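As a rough illustration of the sizing arithmetic above, the sketch below estimates per-rack IT load from server count and per-server TDP, and flags when that load exceeds a typical air-cooling ceiling. The four-server configuration and the 25 kW air-cooling limit are illustrative assumptions, not figures from the article or vendor guidance.

```python
def rack_power_kw(servers_per_rack: int, server_tdp_kw: float) -> float:
    """Estimate IT load per rack from server count and per-server TDP (kW)."""
    return servers_per_rack * server_tdp_kw


# Example: four AI servers at 8 kW each, mid-range of the 6-10 kW
# per-server figures cited above (server count is an assumption).
load = rack_power_kw(4, 8.0)

# A commonly cited practical ceiling for air cooling alone is roughly
# 20-30 kW per rack; 25 kW here is an assumed midpoint, not a standard.
AIR_COOLING_LIMIT_KW = 25.0
needs_liquid = load > AIR_COOLING_LIMIT_KW

print(load, needs_liquid)  # 32.0 True
```

Even a modest rack of such servers lands well above what air cooling alone typically handles, which is the planning point the audit steps below are meant to quantify precisely.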


42 JULY/AUGUST 2024 | ELECTRONICS FOR ENGINEERS


Conduct a site audit


Teams must assess the feasibility of retrofitting a facility with liquid cooling systems both technically and economically. This involves conducting a thorough site audit with partners, including computational fluid dynamics (CFD) studies to analyse existing airflow and evaluate the capacity of current air-cooling equipment for integration into the new hybrid cooling infrastructure. Flow network modelling (FNM) is also essential for selecting coolant distribution units (CDUs), sizing piping and assessing the system’s ability to support server liquid cooling requirements. Water and power usage effectiveness (WUE and PUE) analysis, as well as total cost of ownership (TCO) studies, help optimise operations and identify areas for improvement.
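The WUE and PUE metrics mentioned above have simple standard definitions (PUE is total facility energy over IT energy; WUE is litres of water consumed per kWh of IT energy). The sketch below computes both; the annual figures used in the example are made up for illustration, not audit data.

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy.

    An ideal facility (all energy reaching IT equipment) scores 1.0.
    """
    return total_facility_kwh / it_kwh


def wue(water_litres: float, it_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water consumed per kWh of IT energy."""
    return water_litres / it_kwh


# Illustrative annual figures (assumptions): 1.5 GWh facility draw,
# 1.0 GWh delivered to IT, 1.8 million litres of water consumed.
print(round(pue(1_500_000, 1_000_000), 2))  # 1.5
print(round(wue(1_800_000, 1_000_000), 2))  # 1.8
```

Tracking these two ratios before and after a liquid-cooling retrofit is one concrete way the site audit can show whether the new hybrid infrastructure is actually improving operations.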


Additionally, the power team should analyse infrastructure adaptability for power-intensive workloads like AI, while the joint IT, facility and power team

