THERMAL MANAGEMENT
How AI is impacting power and cooling in the data centre
Sam Bainborough, Sales Director EMEA-Strategic Segment Colocation & Hyperscale at Vertiv
With the arrival of ChatGPT and other AI-driven apps, it is clear that artificial intelligence will continue to permeate our daily lives. Statista predicts that the AI market will reach US$305.90bn in 2024, and grow to US$738.80bn by 2030. However, AI will only be able to succeed if the technology infrastructure evolves to accommodate the compute power required to process the data and run the IT systems that enable this growing phenomenon.
As AI workloads continue to soar and the demand for processing power intensifies, data centre operators find themselves at a crossroads: they must rethink their design strategies, making significant changes to network architecture, power systems and thermal management if they are to cope with the requirements of today and tomorrow. This adaptive approach is crucial for keeping pace with the rising impact of AI.
For organisations to succeed in this dynamic landscape, a holistic approach is required, one that emphasises adaptability, energy efficiency and reliability, not least because data centre owners are confronted with multiple challenges such as carbon-reduction regulations, surging power requirements and heightened heat generation. Prioritising sustainability and efficiency, and continuing to innovate in the data centre, have never been more important.
Power = Performance
High Performance Computing (HPC) is evolving even further with the rise of AI, causing a significant increase in power demands, fuelled by the adoption of the specialised processors that are essential for managing complex AI tasks. Typical data centre rack densities are expected to rise from 5-7 kW today, roughly the output of a small residential backup generator, to 50 kW or more per rack in the not-too-distant future, according to Omdia’s 2022 Data Center Thermal Management Market Analysis report. This means that prioritising energy-efficient hardware and leveraging advancements in processor technology is becoming paramount; effectively addressing this challenge involves actively seeking and implementing innovative solutions that are capable of delivering increasing power whilst employing energy-efficient methods.
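To put those densities in perspective, the short calculation below is a minimal sketch of how a data hall's IT heat load scales as racks move from 7 kW to 50 kW. Only the per-rack figures come from the Omdia numbers above; the 200-rack hall size and 85% utilisation are illustrative assumptions.

# Illustrative estimate of data-hall IT heat load as rack densities rise.
# Only the 7 kW and 50 kW per-rack figures come from the Omdia report cited
# above; hall size and utilisation are assumptions made for illustration.

RACKS_PER_HALL = 200   # assumed hall size
UTILISATION = 0.85     # assumed average draw versus nameplate rating

def hall_it_load_kw(kw_per_rack: float) -> float:
    """Total IT heat load (kW) that the power and cooling plant must support."""
    return RACKS_PER_HALL * kw_per_rack * UTILISATION

for density in (7, 50):
    load = hall_it_load_kw(density)
    print(f"{density:>2} kW racks -> {load:,.0f} kW of IT load "
          f"(~{load * 8760 / 1000:,.0f} MWh/year if run continuously)")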
Critical infrastructure providers are taking a leading role in delivering innovative solutions for power management and optimisation. Navigating the challenge of rising power demands entails proactively exploring and implementing such cutting-edge solutions. There is a growing emphasis on integrating energy-efficient hardware within data centres, which involves not only adopting state-of-the-art hardware designs, but also staying abreast of advancements in processor technology. This strategic focus on power efficiency also aligns with the broader goal of promoting sustainability amid escalating energy consumption. By prioritising energy efficiency and deploying dynamic power solutions and grid-support features, data centres can not only meet the demands posed by growing AI workloads and HPC, but also contribute to a more environmentally conscious and sustainable future.
More Computing Power = More Thermal Management
Another implication of AI stems from the heightened heat generated by advanced processor technology such as high-performance Central Processing Units (CPUs) and Graphics Processing Units (GPUs), which are essential for handling intricate workloads. As supercomputers continue to shrink and become more power-dense, there is a need for a closer examination of thermal management within data centres. Data centre thermal management designs now include chilled water systems and indirect adiabatic systems. There are currently two typical approaches (a rough sizing comparison follows the list):
• Air cooling: using rear-door heat exchangers in conjunction with air cooling. This solution allows hot air to be carried away from the servers.
• Immersion cooling: submerging servers and other components in a thermally conductive dielectric liquid or fluid.
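As a rough illustration of why liquid cooling becomes attractive at these densities, the sketch below compares the air and water flow rates needed to carry away 50 kW of rack heat using the basic relation Q = m_dot * cp * delta_T. The 50 kW figure reflects the rack densities cited earlier; the 10°C coolant temperature rise and the fluid properties are illustrative assumptions.

# Rough comparison of the coolant flow needed to remove 50 kW of rack heat
# with air versus water, from Q = m_dot * cp * delta_T.
# The 10 C temperature rise and fluid properties are illustrative assumptions.

HEAT_KW = 50.0        # rack heat load, from the densities cited above
DELTA_T = 10.0        # assumed coolant temperature rise across the rack, K

# Approximate fluid properties near typical data centre operating conditions
AIR_CP, AIR_RHO = 1005.0, 1.2          # J/(kg*K), kg/m^3
WATER_CP, WATER_RHO = 4186.0, 997.0    # J/(kg*K), kg/m^3

def volume_flow_m3s(heat_kw: float, cp: float, rho: float, dt: float) -> float:
    """Volumetric flow (m^3/s) needed to carry heat_kw away at temperature rise dt."""
    mass_flow = heat_kw * 1000.0 / (cp * dt)   # kg/s
    return mass_flow / rho

air = volume_flow_m3s(HEAT_KW, AIR_CP, AIR_RHO, DELTA_T)
water = volume_flow_m3s(HEAT_KW, WATER_CP, WATER_RHO, DELTA_T)

print(f"Air:   {air:.2f} m^3/s (~{air * 2118.88:,.0f} CFM)")
print(f"Water: {water * 1000:.2f} L/s (~{water * 60000:.0f} L/min)")

The roughly three-orders-of-magnitude gap in volumetric flow is one reason rear-door heat exchangers, direct-to-chip liquid cooling and immersion approaches become compelling as rack densities climb.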
Today there is a resurgence of interest in chilled water systems with two distinct options for liquid cooling at the rack level. The first option involves directing liquid to the server itself, using a room-based heat exchanger to reject heat back into the air. This modular system allows seamless integration without substantial changes to existing infrastructure. The second option introduces a Coolant