COMMENT
the outset could make their lives easier, for years into the future.
Just consider the amount of kit involved. Whether you do some sums on the back of a napkin or drop a few prompts into ChatGPT, Meta’s planned data centres are likely to house between one and two million GPUs spread across ten to twenty thousand racks, along with associated networking and storage.
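A minimal back-of-napkin sketch in Python makes the scale concrete. The GPU and rack counts are the ranges just quoted; the per-GPU power and the overhead factor are purely illustrative assumptions, not Meta's figures:

```python
# Back-of-napkin estimate of the heat a deployment on this scale must shift.
# GPU and rack counts come from the ranges above; the power figures are
# illustrative assumptions only, not Meta's actual numbers.

gpus_low, gpus_high = 1_000_000, 2_000_000   # one to two million GPUs
racks_low, racks_high = 10_000, 20_000       # ten to twenty thousand racks
watts_per_gpu = 1_000                        # ~1 kW per modern training GPU (assumed)
overhead_factor = 1.3                        # CPUs, networking, storage, losses (assumed)

for gpus, racks in [(gpus_low, racks_low), (gpus_high, racks_high)]:
    total_mw = gpus * watts_per_gpu * overhead_factor / 1e6
    print(f"{gpus:,} GPUs in {racks:,} racks ~ {gpus // racks} GPUs per rack, "
          f"~ {total_mw:,.0f} MW of heat to remove")
```

On those assumptions, even the low end works out to well over a gigawatt of heat that has to be moved out of the racks, continuously.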
That is an incredible amount of equipment to deploy and manage, and a correspondingly massive amount of cooling infrastructure to deploy and maintain alongside it. There will also inevitably be failures and upgrades that require components to be replaced over time.
So, when designing this cooling infrastructure, Meta will need to consider not just the amount of heat it must shift today, or next year, but generations of silicon into the future.
This means Meta will need a cooling solution, or solutions, that offer more than raw cooling capability: it needs a liquid cooling approach that can evolve alongside its plans.
Even someone with the vast resources of Meta won’t want to tie up capital or delay deploying racks while it waits for cooling components to be delivered. But neither will it want cooling equipment sitting in storage, waiting to be deployed, or installed but underutilised.
It needs partners who can supply at scale, continuously, and who have an engineering ecosystem capable of deploying and installing cooling at the same pace it rolls off the production line. Even better, Meta should choose a liquid cooling architecture that can be deployed incrementally. Why wait until the whole of Manhattan is covered in racks before the equivalent of Central Park is ready to go online?
“We know that data centre operators already face a massive challenge when it comes to managing the heat their facilities produce – and calming the public’s concern over their potential environmental impact.”

A liquid vision?

That’s just the start of the data centre lifecycle, though. From that point on, manageability is key. The racks of compute and storage in a data centre are designed with redundancy in mind: components are standardised and hot-swappable as far as possible. Zuckerberg’s engineers should ensure the same applies across the infrastructure keeping it all cool. A decoupled system – where the control console is in a separate unit, for example – will give Meta more options when it comes to scaling up over time. If there is a fault in one component, it won’t need to replace the entire unit. So, cooling units and individual components, such as pumps or sensors, should be specced in the same way. And of course, this will all be for nothing if those components are not easily accessible – and spares readily available. Front access will be critical, and will also allow more flexibility when designing the aisles.
Presumably, Meta will have the brains and compute power to develop cutting-edge predictive analytics to manage maintenance. But it will help immensely if the cooling architecture it chooses can serve up the right real-time information to feed those systems.
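As a rough illustration of the kind of telemetry such a system might consume, the sketch below is a hypothetical Python example; the sensor, thresholds and simulated readings are all assumptions for illustration, not details of any real cooling product. It watches coolant-pump flow and flags readings that drift well outside a rolling baseline, so maintenance can be scheduled before a pump actually fails:

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical coolant-pump telemetry monitor: flag readings that fall well
# outside a rolling baseline. Sensor names and thresholds are illustrative only.

class PumpMonitor:
    def __init__(self, window: int = 60, sigma: float = 3.0):
        self.readings = deque(maxlen=window)  # recent flow readings (L/min)
        self.sigma = sigma                    # how many std-devs counts as anomalous

    def check(self, flow_lpm: float) -> bool:
        """Record a reading and return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= 10:          # need a minimal baseline first
            mu, sd = mean(self.readings), stdev(self.readings)
            if sd > 0 and abs(flow_lpm - mu) > self.sigma * sd:
                anomalous = True
        self.readings.append(flow_lpm)
        return anomalous

if __name__ == "__main__":
    monitor = PumpMonitor()
    # Simulated telemetry: steady flow with small jitter, then a sudden drop.
    for reading in [119.5, 120.5] * 25 + [95.0]:
        if monitor.check(reading):
            print(f"Anomalous pump flow: {reading} L/min - schedule inspection")
```

The point is less the specific statistics than the plumbing: if cooling units expose clean, component-level telemetry, Meta's own analytics can decide what to do with it.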
At a broader level, the heat the cooling system extracts must still go somewhere. Municipal authorities may be grateful for the jobs a data centre will bring, but may not be happy if it creates microclimates in the immediate area. Environmental groups and regulators will be on the watch for any adverse environmental impact. Meta should be thinking from the outset about how this energy can be repurposed, whether for district heating systems or other uses such as industrial processes or heating greenhouses. A cooling partner with a broader vision of how heat can be reused, and the engineering skills to make this happen, will be a great help here too.

Meta’s data centre ambitions are more than just a massive test of its resolve to dominate AI. They represent an engineering bet of truly titanic proportions. And that means they also present the biggest test yet for the liquid cooling sector. It’s in all our interests that it passes.