ARTIFICIAL INTELLIGENCE
Power distribution at high density
Powering modern AI hardware is not just a question of supplying more energy. The distribution network has to be smarter, more resilient and more flexible.
Redundant distribution paths are critical to avoid downtime during maintenance or failure. Modern facilities are adopting advanced uninterruptible power supply (UPS) systems capable of smoothing out fluctuations and protecting sensitive electronics from transient faults. Intelligent power distribution units at the rack level allow operators to allocate and monitor supply more precisely, optimising usage across thousands of GPUs.
Engineers also face the challenge of scaling beyond the traditional 30 MW facility. In Europe, new campuses are being designed in the 200–500 MW range. In the US, some developers are working on sites that may eventually exceed 1 GW. These designs require close coordination with grid operators and, increasingly, on-site generation or storage to ensure power stability.
Structural and mechanical design
The physical properties of AI racks also alter the requirements of a building. Fully populated racks can weigh several tonnes, especially when liquid cooling equipment is included. Floors need to be reinforced to manage this loading, and layouts must account for wider aisles, containment structures and pipework.
Airflow management still matters in hybrid facilities. Hot-aisle and cold-aisle containment remains important, particularly where liquid cooling is supplemented by traditional air handling. Engineers designing mechanical systems now need to balance two modes of cooling without compromising efficiency.
Many operators are settling on hybrid systems that combine liquid cooling for the most demanding racks with air handling for the rest. For engineers, the message is clear: successful AI facilities will need liquid infrastructure, from pipework and manifolds to sensors and monitoring, designed in as a core element.
DECEMBER/JANUARY 2026 | ELECTRONICS FOR ENGINEERS 17
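A simple energy balance shows why the most demanding racks push operators towards liquid cooling. The rack power and coolant temperature rise below are assumed figures for illustration, not values from the article:

```python
# Back-of-envelope coolant flow for a liquid-cooled rack. Energy
# balance: P = m_dot * c_p * dT, rearranged for mass flow. The 100 kW
# rack and 10 K temperature rise are assumed example figures.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def coolant_flow_l_per_min(rack_kw: float, delta_t_k: float) -> float:
    """Water flow needed to remove rack_kw of heat at a delta_t_k rise."""
    kg_per_s = (rack_kw * 1000.0) / (CP_WATER * delta_t_k)
    return kg_per_s * 60.0  # ~1 kg of water per litre

# e.g. a 100 kW rack with a 10 K coolant temperature rise
print(f"{coolant_flow_l_per_min(100, 10):.0f} L/min")  # ~143 L/min
```

Roughly 140 litres per minute of water for a single rack is pipework on a scale air handling simply cannot match, which is why manifolds and flow monitoring become core building services.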
Data proximity and latency
The effectiveness of an AI model is not determined only by raw compute power. Data proximity plays a significant role. Training and inference workloads are highly sensitive to latency, especially in applications such as high-frequency trading, real-time translation or autonomous driving. If compute resources are located far from the data source, even milliseconds of additional latency can degrade accuracy or user experience.
This is why site selection is changing. Enterprises are increasingly choosing data centre locations based on proximity to datasets and end users, rather than simply cost or available space.
For engineers, this trend is driving demand for distributed and edge facilities. Smaller, high-density nodes close to population centres or industry clusters complement larger campuses, creating a multi-layered infrastructure designed for both scale and speed.
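The millisecond sensitivity above follows directly from physics: light in optical fibre travels at roughly two-thirds of c, about 200,000 km/s, so distance alone sets a hard floor on latency. A minimal sketch (propagation delay only; real networks add switching, queuing and serialisation delay on top):

```python
# Best-case round-trip latency from fibre distance alone.
# Light in fibre covers roughly 200,000 km/s, i.e. 200 km per ms.

FIBRE_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip propagation delay over fibre, in ms."""
    return 2 * distance_km / FIBRE_KM_PER_MS

for km in (50, 300, 1000):
    print(f"{km:>5} km -> {round_trip_ms(km):.1f} ms round trip")
# ->    50 km -> 0.5 ms, 300 km -> 3.0 ms, 1000 km -> 10.0 ms
```

A data centre 300 km from its users can never respond in under 3 ms, no matter how fast the hardware, which is the case for edge nodes near population centres.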
Flexibility as a design principle
Ultimately, AI is not static. Models are retrained, datasets expand and regulations evolve. Facilities must be designed to adapt without the need for wholesale rebuilds. That means supporting a path from 20 kilowatts per rack to more than 100 kW without redesign. It means modular capacity that can be added in phases without disruption. It also means portability, allowing workloads to move between sites to meet compliance requirements or to improve latency. Designing for flexibility enables infrastructure to remain viable over a long lifecycle, even as AI workloads continue to evolve.
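The 20 kW to 100 kW path can be made concrete with a phased capacity check. All figures below, the 400-rack hall and the phased feed sizes, are hypothetical, chosen purely to illustrate the planning arithmetic:

```python
# Sketch of a phased capacity check: can an electrical feed built out
# in modular phases keep up as per-rack density grows from 20 kW to
# 100 kW? All capacities here are assumed example figures.

def facility_load_mw(racks: int, kw_per_rack: float) -> float:
    """Total IT load in MW for a given rack count and density."""
    return racks * kw_per_rack / 1000.0

feed_mw_by_phase = [10.0, 25.0, 50.0]  # assumed modular build-out

for kw in (20, 50, 100):
    load = facility_load_mw(400, kw)  # assumed 400-rack hall
    phase = next((i + 1 for i, cap in enumerate(feed_mw_by_phase)
                  if cap >= load), None)
    note = f"phase {phase} feed" if phase else "exceeds planned feeds"
    print(f"{kw:>3} kW/rack -> {load:.0f} MW ({note})")
```

The same hall jumps from 8 MW to 40 MW as density rises, which is why the electrical plant, not the floor plan, is usually what must be designed for growth from day one.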
The engineer’s role in the AI era
For electronics and mechanical engineers, the rise of AI represents both a challenge and an opportunity. Facilities that once handled predictable, low-density workloads must now be reimagined for hardware operating at the limits of power and thermal design. The demands of GPUs, the need for liquid cooling, the scale of electrical distribution and the structural implications of rack density are all pushing data centres into new territory.
At the same time, engineers are central to solving these challenges. AI will not advance on algorithms alone; it will advance because physical systems, from power electronics to liquid cooling manifolds, are designed and implemented with precision. The work of engineers is what turns AI from a digital ambition into a physical reality.
We are at an inflection point for engineers. The future success of AI will be determined as much by the design of pipes, pumps, conductors and circuits as by lines of code. Building facilities that can sustain AI workloads is not just an infrastructure challenge; it is one of the defining engineering challenges of our time.