AI
AI TURNS UP THE HEAT ON DATA CENTRE DESIGN AND SPARKS A COOLING REVOLUTION
The recent announcement that OpenAI will collaborate with Broadcom to design and deploy 10 GW of custom AI accelerators across global data centres represents a turning point for the sector.
By Kevin Roof, Director of Offer & Capture Management, LiquidStack
It doesn’t just reshape computing; it rewrites the engineering rulebook for how we design, power and cool future digital infrastructure. As AI accelerators evolve beyond general-purpose GPUs, each new silicon generation brings distinct thermal and electrical demands. Instead of conventional, uniform data halls designed for consistent rack loads, facilities are now divided into zones that support chips with varying power profiles and cooling demands. The data centre is becoming an ecosystem of micro-environments, and thermal engineering now sits squarely at the intersection of power and performance.

From predictability to performance diversity

For years, data centre design thrived on predictability. Racks were standardised, loads were stable and air-cooling systems could be optimised around consistency. That world has changed. The rise of AI has introduced a new era of performance diversity, where every chip, rack and workload behaves differently. Partnerships like OpenAI and Broadcom’s illustrate the shift: accelerators are now being tailored for specific machine-learning tasks, each drawing and dissipating energy in its own way. Some push beyond 1.4kW per chip; others focus on efficiency and thermal balance. The result is an environment where power and cooling are now part of the same conversation: one continuous system that must adapt in real time to the demands of modern hardware.

Air-based systems are nearing their thermal limits, typically topping out at around 40–60kW per rack even with advanced containment. To keep pace, liquid cooling has moved from niche to necessity. Next-generation direct-to-chip systems now channel heat directly from silicon to coolant, unlocking densities once thought impractical and reshaping how data-centre designers think about energy, efficiency and scale.
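To give a feel for what those densities mean in practice, the short sketch below estimates the coolant flow a direct-to-chip loop needs to carry a given rack heat load, using the basic relation Q = ṁ·cp·ΔT. The rack loads and temperature rise are illustrative assumptions, not figures from the article or any specific product.

```python
# Rough sizing sketch: coolant flow needed to carry a rack's heat load.
# All figures are illustrative assumptions, not product or article data.

CP_WATER = 4.186   # specific heat of water, kJ/(kg*K)
RHO_WATER = 0.998  # density of water, kg/L (roughly 20-40 degrees C)


def required_flow_lpm(heat_load_kw: float, delta_t_k: float) -> float:
    """Litres per minute needed to absorb heat_load_kw with a
    supply-to-return temperature rise of delta_t_k (Q = m_dot * cp * dT)."""
    mass_flow_kg_s = heat_load_kw / (CP_WATER * delta_t_k)
    return mass_flow_kg_s / RHO_WATER * 60.0


# Compare a rack near the limit of air cooling with a denser liquid-cooled
# AI rack, both sized for an assumed 10 K coolant temperature rise.
for load_kw in (50, 120):
    print(f"{load_kw} kW rack -> ~{required_flow_lpm(load_kw, 10.0):.0f} L/min")
```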
Engineering for adaptability

The next generation of AI data centres demands modularity, not just in compute but in cooling. The era of fixed infrastructure is over. Just as electrical systems evolved toward modular UPS and busway topologies, cooling is undergoing the same transformation. Modular coolant distribution units (CDUs) allow capacity to scale with compute, expanding or being serviced in place to match performance demand without disruption. For engineers, this marks a convergence of disciplines. Power and cooling are no longer independent systems but interdependent components of one intelligent platform. Integrating variable-speed IE5 pump motors, sensors and smart control logic into the building management system creates a feedback loop of data and performance: the foundation of both efficiency and resilience.
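To make that feedback loop concrete, here is a minimal control sketch, assuming hypothetical setpoints, gains and signal names rather than any specific CDU or BMS product: a proportional controller that trims variable-speed pump output to hold a target coolant return temperature, with the result reported back to the building management system.

```python
# Minimal sketch of a CDU-style feedback loop: a proportional controller
# trims pump speed to hold a target coolant return temperature.
# All names, setpoints and gains here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class PumpController:
    setpoint_c: float = 45.0   # target return-coolant temperature, C (assumed)
    gain: float = 4.0          # % pump speed added per degree of error (assumed)
    min_speed: float = 20.0    # never stop the pump: keep a minimum flow
    max_speed: float = 100.0

    def update(self, return_temp_c: float, current_speed_pct: float) -> float:
        """Return a new pump speed (%) based on the temperature error."""
        error = return_temp_c - self.setpoint_c          # positive = running hot
        new_speed = current_speed_pct + self.gain * error
        return max(self.min_speed, min(self.max_speed, new_speed))


# Coolant returning 2 degrees above setpoint nudges the pump faster; the speed
# and temperature readings would then be published to the BMS.
ctrl = PumpController()
print(ctrl.update(return_temp_c=47.0, current_speed_pct=60.0))  # 68.0
```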
Liquid cooling now sits at the centre of this evolution. The CDU is more than a thermal asset: it is a powered, data-generating node within the electrical ecosystem. Designing around it means ensuring load management, electrical protection and continuity of coolant flow, even during transitions. This is engineering pragmatism at scale: efficient, safe and built for continuous operation.
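One way to reason about continuity of coolant flow during transitions is a simple N+1 check on a modular pump group: can the remaining pumps still meet the zone’s flow demand if the largest pump, or its power feed, is lost? The sketch below uses assumed pump ratings and flow demand purely for illustration.

```python
# Sketch of an N+1 continuity check for a modular CDU pump group.
# Pump ratings and flow demand are assumed values for illustration only.

def n_plus_one_ok(pump_flows_lpm: list[float], required_lpm: float) -> bool:
    """True if flow demand is still met with the largest pump out of service."""
    if not pump_flows_lpm:
        return False
    surviving_flow = sum(pump_flows_lpm) - max(pump_flows_lpm)
    return surviving_flow >= required_lpm


# Three 300 L/min pumps serving a zone that needs 550 L/min:
print(n_plus_one_ok([300.0, 300.0, 300.0], 550.0))  # True: continuity holds
```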
Flexibility is the new resilience

As companies like OpenAI and Broadcom deploy fleets of custom AI chips worldwide, no two sites are the same. Climate conditions, grid limitations and chip generations vary dramatically, and so must the infrastructure that supports them.
Modular liquid cooling enables that adaptability. It allows capacity to be added incrementally, commissioned rapidly and upgraded over time without major redesign. In a world where density, heat and performance are accelerating faster than ever, flexibility isn’t just smart design; it’s resilience engineered into the heart of the data centre.
The engineer’s challenge for 2026
The OpenAI–Broadcom collaboration marks more than a hardware milestone; it signals a new phase in how compute infrastructure is conceived. Power and cooling are no longer supporting acts; they have become active participants in system performance. As hyperscalers push toward 10-gigawatt deployments of custom accelerators, the boundaries between chip, rack and facility are dissolving. Every watt of energy now carries a thermal consequence that must be managed in real time, with intelligence and adaptability built in from the start.

The future won’t be built on static infrastructure. As AI accelerators become increasingly tailored, the systems around them must keep pace. Cooling will shift from a passive service to an active layer of intelligence, capable of learning, responding and expanding with each wave of innovation. That’s why modular liquid cooling is more than an operational advantage: it’s an architectural necessity. In the era of AI-defined silicon, adaptability is the new resilience.

For electrical and mechanical engineers alike, the challenge is clear: design cooling not as an afterthought, but as an intelligent, active part of the performance equation. Because the next great leap in AI won’t be defined by processing speed, but by how intelligently we manage heat.
https://liquidstack.com