80% of the world's data centres have been built in places either too hot or too cold for the hardware inside

There are close to 9,000 data centres currently in operation across the globe, and a sizable chunk of them has been built in climates far too hot for all of that hardware to run efficiently. That's definitely not great news when some estimates suggest the number of data centres will triple by 2030.

According to a report from Rest of World, the optimal temperature range for a data centre lies between 18 °C and 27 °C (or 64.4 °F and 80.6 °F). However, of the 8,808 data centres in operation, 7,000 (roughly 80%) have been built in places that typically experience a climate outside of this range. Furthermore, 600 (roughly 7% of all data centres) have been built in places that regularly breach that temperate 27 °C ceiling (via Tom's Hardware).
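
For a rough sense of those proportions, here's a minimal back-of-envelope sketch; the counts are the ones Rest of World reports, and the percentages are simply derived from them:

```python
# Back-of-envelope check on the Rest of World figures quoted above.
TOTAL_DATA_CENTRES = 8_808      # data centres in operation worldwide
OUTSIDE_OPTIMAL_RANGE = 7_000   # built outside the 18-27 °C band
REGULARLY_ABOVE_27C = 600       # regularly exceed the 27 °C ceiling

print(f"Outside optimal range: {OUTSIDE_OPTIMAL_RANGE / TOTAL_DATA_CENTRES:.0%}")  # ~79%
print(f"Regularly above 27 °C: {REGULARLY_ABOVE_27C / TOTAL_DATA_CENTRES:.0%}")    # ~7%
```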

Data centres are already power hungry, collectively hoovering up 1.5% of the world's total electricity consumption (or 415 terawatt hours) in 2024, according to the International Energy Agency. Add to that a cooling system that has to fight the surrounding climate every single day, and you're potentially looking at an even greater strain on the local grid.
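
Purely for scale, here's a minimal sketch of what those IEA numbers imply; the derived figures below are back-of-envelope arithmetic, not figures from the report itself:

```python
# Quick arithmetic on the 2024 IEA figures quoted above.
DATA_CENTRE_TWH = 415      # TWh consumed by data centres in 2024
SHARE_OF_WORLD = 0.015     # stated as 1.5% of global electricity consumption
HOURS_PER_YEAR = 8_760

implied_world_twh = DATA_CENTRE_TWH / SHARE_OF_WORLD              # ~27,700 TWh globally
average_draw_gw = DATA_CENTRE_TWH * 1e12 / (HOURS_PER_YEAR * 1e9) # ~47 GW continuous draw

print(f"Implied global electricity consumption: {implied_world_twh:,.0f} TWh")
print(f"Average continuous data centre draw: {average_draw_gw:.0f} GW")
```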

As one example, Rest of World highlights India, where around a third of its 213 data centres have been built in areas far too hot. Some of these data centres then have to contend with a local power grid that is already unstable, with the added strain of cooling demands potentially raising the risk of outages.

Many data centres currently rely on air cooling, though alternative methods are being explored. PS Lee, who bills himself as an 'expert in Sustainable AI Data Center Cooling', is overseeing the Sustainable Tropical Data Centre Testbed in Singapore, a country with 1.4 gigawatts of data centre capacity and average temperatures of 33 °C (or 91 °F). Lee told Rest of World that "the old model of unconstrained, air-cooled growth is simply unsustainable," and that his team is currently exploring direct-to-chip cooling. The hope is that this, alongside immersion cooling, could reduce energy use by up to 40%.

Lee predicts direct-to-chip and immersion cooling "will become standard features rather than exotic add-ons" in the next five years. He also predicts that commercial deployment of large-scale seawater cooling, not unlike the 'AI Atlantis' off the coast of China's Hainan Island, will become both feasible and more widespread over the next decade. However, even if all of this comes to pass, newer data centres will be first in line, leaving plenty of older builds to either wait for a costly retrofit or face obsolescence. Bottom line: cooling will still be a concern.

With an earlier IEA report claiming AI power demands could quadruple over the next few years, it's hard not to feel a bit of time pressure in solving an equation that doesn't always add up. Alongside alternative cooling methods, alternative sources of power could also be pursued, with AI company Lambda already operating a data centre powered by a hydrogen fuel cell. It's worth noting that this example may be difficult to scale, as the infrastructure for delivering hydrogen is nowhere near as built out as it is for, say, natural gas.

Renewable energy has its own drawbacks for powering data centres specifically (namely, you'd need massive, expensive batteries to make it at all viable). As such, there's been a lot of interest in nuclear power, which comes with baggage that would take more paragraphs to explain than I have to spare. Long story short, when it comes to meeting data centre power demands, the math is simply not math-ing.