Data centre infographic

Did you know that global emissions from cloud computing account for an estimated 2.5%-3.7% of all global greenhouse gas emissions? As of December 2023, there were almost 10,970 data centre locations worldwide. Researchers have suggested various data centre cooling methods, such as evaporative cooling, geothermal cooling, and liquid cooling.

Of these, liquid cooling has become extremely popular for managing the thermal load of AI accelerators and supporting HPC (high-performance computing) clusters, and the technology is becoming the backbone of modern data centres. The use of liquid cooling in data centres is expected to nearly double, from 21% in early 2024 to 39% by 2026. Additionally, it has been estimated that data centres using liquid cooling systems cut their total power consumption by almost 10%.

Market dynamics

The demand for liquid cooling has shot up as edge computing, AI workloads, and high-frequency trading systems push power densities through the roof. In traditional data centres, power density is typically around 5kW-10kW per rack; today, with racks housing AI processors or running complex simulations, 40kW-125kW per rack is more typical, and in some cases densities exceed 200kW. At these densities, conventional air cooling becomes not just impractical but a significant bottleneck.
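
As a rough illustration of why air struggles at these densities, the airflow needed to carry a rack's heat away follows from Q = rho * V_dot * cp * dT. The rack powers and the 10 K air temperature rise below are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope airflow estimate for air-cooled racks.
# Assumed figures for illustration only: air at ~20 degC and a 10 K temperature rise.

RHO_AIR = 1.2    # kg/m^3, air density
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def airflow_m3_per_s(rack_power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to remove rack_power_w with a delta_t_k air temperature rise."""
    return rack_power_w / (RHO_AIR * CP_AIR * delta_t_k)

for power_kw in (10, 40, 125):
    flow = airflow_m3_per_s(power_kw * 1000, delta_t_k=10)
    print(f"{power_kw:>3} kW rack -> {flow:4.1f} m^3/s (~{flow * 2119:,.0f} CFM) of air")
```

At 125kW per rack the sketch gives more than 10 cubic metres of air per second for a single rack, which is why liquid, with its far higher heat capacity per unit volume, takes over.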

In addition, hyperscalers and co-location providers are scrambling to address stringent regulations on energy consumption and sustainability. For example, the European Union’s push toward data centre sustainability standards by 2025 is forcing operators to consider cooling designs that are 30%-40% more energy efficient than current standards, as part of a drive to cut data centre energy consumption by 11.7% between 2020 and 2030. Liquid cooling holds the promise of reducing power usage effectiveness (PUE) to sub-1.1 levels and can cut cooling-related energy consumption by up to 90%.
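
For context, PUE is simply total facility energy divided by IT energy, so trimming the cooling overhead pushes it towards 1.0. The energy figures in this sketch are assumptions for illustration, not reported data:

```python
# Illustrative PUE comparison; the energy figures are assumed, not taken from the article.

def pue(it_kwh: float, cooling_kwh: float, other_overhead_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT equipment energy."""
    return (it_kwh + cooling_kwh + other_overhead_kwh) / it_kwh

it_load = 1_000_000  # kWh of IT load (illustrative)
print("Air-cooled example:    PUE =", round(pue(it_load, cooling_kwh=450_000, other_overhead_kwh=100_000), 2))
print("Liquid-cooled example: PUE =", round(pue(it_load, cooling_kwh=50_000, other_overhead_kwh=50_000), 2))
```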

Immersion vs direct-to-chip liquid cooling

There are two dominant architectures in liquid cooling: immersion cooling and direct-to-chip cooling. Each has its niche, and the choice between them depends on application demands.

Direct-to-chip cooling involves pumping a coolant – typically a dielectric fluid, or even water depending on the setup – through microchannels directly integrated into cold plates mounted on CPUs and GPUs. The heat is absorbed right at the source and then expelled to an external heat exchanger. It is effective for high-power processors but has challenges like fluid distribution uniformity and potential leakage points, which are critical in environments like high-frequency trading where uptime is sacrosanct.
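
A minimal sizing sketch, assuming a water-based coolant, a 700W device, and a 10 K temperature rise across the cold plate (illustrative numbers, not from the article), shows how the required flow follows from Q = m_dot * cp * dT:

```python
# Rough cold-plate loop sizing from Q = m_dot * cp * dT (illustrative assumptions).

CP_WATER = 4182.0  # J/(kg*K), water / water-glycol approximation
RHO_WATER = 998.0  # kg/m^3

def coolant_flow_lpm(chip_power_w: float, delta_t_k: float) -> float:
    """Litres per minute of water-based coolant needed to absorb chip_power_w with a delta_t_k rise."""
    mass_flow_kg_s = chip_power_w / (CP_WATER * delta_t_k)
    return mass_flow_kg_s / RHO_WATER * 1000 * 60

print(f"~{coolant_flow_lpm(700, 10):.1f} L/min per 700 W device")  # roughly 1 L/min
```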

It is estimated that, within the next two years, around 40% of IT professionals expect some of their equipment to depend on immersion or direct-to-chip cooling.

Immersion cooling

Alternatively, immersion cooling submerges entire servers in a non-conductive fluid. This method excels in ultra-high-density applications (e.g., AI training farms). The server’s entire thermal footprint is managed uniformly, but achieving consistency across racks and ensuring that maintenance is quick and safe can be a challenge.

Multi-phase immersion, where the fluid changes phase and transfers heat away via evaporation and condensation, offers high thermal efficiency but complicates fluid management and requires specialised condensers. Moreover, IT equipment accounts for between 40% and 50% of the energy used in data centres, with a further 30%-50% going toward cooling systems, a split that is driving demand for more efficient cooling.

Where is the innovation happening?

Coolant formulations: OEMs are working closely with chemical manufacturers to engineer coolants with improved thermal capacity and lower viscosity. Fluorocarbon-based coolants are common in immersion systems, and there is a shift towards non-flammable, environmentally friendly alternatives, such as Novec engineered fluids, which are tailored for better thermal stability. [Novec is 3M’s fire protection fluid, which is stored as a liquid but discharges as a gas.]
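
One way to see why formulation matters is to compare volumetric heat capacity (rho * cp), which sets how much heat each litre of coolant can carry per degree of temperature rise. The property values below are approximate, representative figures rather than vendor data:

```python
# Approximate, representative fluid properties for comparison only.
fluids = {
    "water":                    {"rho": 998,  "cp": 4182},  # kg/m^3, J/(kg*K)
    "fluorinated dielectric":   {"rho": 1510, "cp": 1180},
    "mineral-oil-type coolant": {"rho": 850,  "cp": 1900},
}

for name, props in fluids.items():
    vhc = props["rho"] * props["cp"] / 1e6  # volumetric heat capacity, MJ/(m^3*K)
    print(f"{name:26s} ~{vhc:.1f} MJ/(m^3*K)")
```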

Cold plate microchannel design: A big focus is on optimising the geometry of the microchannels within cold plates. CFD (computational fluid dynamics) models are used extensively to minimise pressure drops and turbulence, ensuring maximum contact time with the heated surface. This is critical when dealing with 7nm or 5nm processors that have heat flux densities pushing past 1kW/cm².
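
For a first-order feel for the pressure-drop side of that trade-off, fully developed laminar flow in a single circular channel follows the Hagen-Poiseuille relation; the channel dimensions, flow rate, and viscosity below are illustrative assumptions, not design figures:

```python
import math

def laminar_dp_pa(mu_pa_s: float, length_m: float, flow_m3_s: float, diameter_m: float) -> float:
    """Hagen-Poiseuille pressure drop for fully developed laminar flow in a circular channel."""
    return 128 * mu_pa_s * length_m * flow_m3_s / (math.pi * diameter_m ** 4)

# e.g. a water-like coolant (1 mPa*s) in a 40 mm long, 0.5 mm diameter channel at 0.02 L/min
dp = laminar_dp_pa(mu_pa_s=1e-3, length_m=0.04, flow_m3_s=0.02 / 1000 / 60, diameter_m=0.5e-3)
print(f"~{dp / 1000:.1f} kPa per channel")  # real designs trade this against heat transfer area
```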

Manifold and quick-connect designs: Quick-connect couplings have evolved to minimise the potential for leaks and reduce insertion force. Modern systems use zero-leak seals that are crucial when designing modular racks. Innovations in manifold designs are also making fluid distribution more consistent across multi-rack setups, ensuring that flow rate and temperature remain uniform.

Data centre demands

The data centre liquid cooling market is witnessing rapid growth driven by the rising power densities of modern servers, increased deployment of AI workloads, and the inefficiency of traditional air-cooling systems at managing heat in high-density environments. With global data centres pushing for sustainability and energy efficiency, liquid cooling technologies – such as direct-to-chip and immersion cooling – are gaining traction to reduce PUE and enhance cooling performance.

Moreover, according to forecasts, the data centre liquid cooling industry is expected to grow from its estimated valuation of $2.64bn in 2023 to $37.84bn by the end of 2036, with a compound annual growth rate of around 25.1% between 2024 and 2036.
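
As a quick sanity check of that forecast arithmetic (the market figures are from the article; the 12-year horizon is one reading of the 2024-2036 range):

```python
start_value_bn = 2.64   # 2023 valuation, $bn
end_value_bn = 37.84    # 2036 projection, $bn
years = 12              # 2024 through 2036

implied_cagr = (end_value_bn / start_value_bn) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~24.8%, broadly consistent with the quoted ~25.1%
```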

The industry is growing as a result of the growth of data centres, particularly hyperscale data centres. As of December 2023, there were 10,978 data centre locations worldwide. Additionally, as of March 2024, there were a reported 5,381 data centres in the United States, the most of any country. A further 521 were located in Germany, while 514 were located in the United Kingdom.

The liquid cooling market is currently led by specialised players like Asetek, CoolIT Systems, and Submer Technologies, who focus on both direct-to-chip and immersion cooling solutions. Hyperscalers like Google and Microsoft are aggressively piloting liquid cooling in select locations, focusing on AI and HPC clusters.

In July 2024, Microsoft embraced liquid cooling directly onto chips and is investigating microfluidics. The company used more than a million cubic metres more water last year than the year before. By 2030, Microsoft wants to be water-positive (i.e., to return more water to freshwater resources than it withdraws). In its 2023 ESG report, the company stated that it used 6.4m cubic metres of water in 2022, mostly for its cloud data centres; in its 2024 reporting, that figure had risen to 7.8m cubic metres for 2023.

Meanwhile, traditional players like Schneider Electric and Vertiv are integrating liquid cooling into their portfolios, either through acquisitions or partnerships.

As a case in point, Schneider Electric and liquid cooling technology specialist Chilldyne have formed a partnership to improve the sustainability and efficiency of data centres. Schneider Electric will offer the Chilldyne Negative Pressure Liquid Cooling Solution, complementing its line of Uniflair cooling and chiller systems.

In Japan, firms like NEC and Fujitsu are taking a conservative but steady approach, primarily focusing on R&D. Their innovations centre around minimising maintenance costs and integrating liquid cooling with existing air-cooled infrastructures to create hybrid systems. It’s a smart play, given the risk-averse nature of many Japanese data centre operators.

In November 2023, Fujitsu and SoftBank announced the completion of a nationwide all-optical network in Japan using a disaggregated architecture optical transmission system. They aim to create greener networks with reduced environmental impact.

Opportunities and obstacles ahead

There’s no doubt that the liquid cooling market is in a state of rapid evolution, but it’s not all smooth sailing. There are real barriers to adoption, especially around standardisation: each vendor has its own proprietary interfaces, making interoperability a challenge. Furthermore, cost remains a sticking point; the high upfront capex for plumbing retrofits and the logistics of fluid handling deter small-scale adopters.

As rack densities continue to rise, however, air cooling’s inefficiencies will outweigh the costs of transitioning to liquid. And let’s not overlook sustainability – liquid cooling can significantly cut water usage when integrated with heat-reuse systems, making it a long-term solution for eco-conscious operators.

Ultimately, the focus should be on optimising designs that balance thermal efficiency with cost-effectiveness and scalability. Whether direct-to-chip or immersion, the goal is to keep thermal resistance as low as possible while ensuring easy maintenance and minimising potential failure points. After all, data centres are the core infrastructure that keeps mission-critical applications running 24/7 – every decision matters. Liquid cooling is poised to become the new standard in advanced computing environments.

FACT FILE

Quick stats

The data centre liquid cooling market was valued at $2.64bn in 2023 and is projected to grow to $37.84bn by 2036.

The projected CAGR between 2024 and 2036 is 25.1%.

As of December 2023, there were over 10,970 data centre locations worldwide.

The US had the most data centres of any country – more than 5,350 as of March 2024.

Research Nester reports that North America has 35% global market share of the data centre liquid cooling market

(Data: Research Nester)

About the Author

Aashi Mishra is a former electronics engineer and content developer at analysis firm, Research Nester.

Related: https://www.electronicsweekly.com/news/optimising-the-power-pathway-from-grid-to-chip-2025-05/