As AI reshapes the digital infrastructure landscape, the physics of cooling is becoming one of the most urgent challenges in data center design. Traditional air-cooling techniques, long the backbone of hyperscale and enterprise environments, are no longer sufficient to manage the thermal output of today’s high-density workloads.
With CPUs now drawing 300 to 500 watts and GPUs pushing beyond 1.5 kilowatts, direct-to-chip (DTC) liquid cooling is emerging not just as an alternative but as a necessity. Yet implementing this technology at scale means far more than swapping out heat sinks; it requires rethinking how data centers are designed, built, and operated.
Building for AI in the golden triangle
Consider, for instance, a hyperscaler planning a 20MW campus just outside London, strategically located near major network interconnects and cloud regions. With AI workloads projected to account for nearly half the site’s capacity, the design team anticipates racks supporting densities of 50 to 80 kilowatts, powered by a mix of Nvidia and AMD GPUs.
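To give a sense of how those densities arise, here is a minimal back-of-the-envelope sketch; the GPU counts, TDP values, and overhead factor are illustrative assumptions rather than figures from the project.

```python
# Rough rack heat-load estimate for an AI training rack.
# All component counts and TDP values are illustrative assumptions,
# not specifications from the project described above.

GPU_TDP_KW = 1.0          # assumed per-GPU power draw (kW)
CPU_TDP_KW = 0.4          # assumed per-CPU power draw (kW)
GPUS_PER_SERVER = 8
CPUS_PER_SERVER = 2
SERVERS_PER_RACK = 8
OVERHEAD = 1.15           # assumed uplift for memory, NICs, fans, PSU losses

server_kw = GPUS_PER_SERVER * GPU_TDP_KW + CPUS_PER_SERVER * CPU_TDP_KW
rack_kw = SERVERS_PER_RACK * server_kw * OVERHEAD

print(f"Per-server IT load: {server_kw:.1f} kW")
print(f"Per-rack IT load:   {rack_kw:.1f} kW")
# With these assumptions: 8.8 kW per server and roughly 81 kW per rack --
# well beyond what conventional air cooling comfortably handles.
```

Under those assumptions a single rack lands at the top of the 50 to 80 kilowatt band, which is exactly the territory where air-cooling designs start to break down.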
The initial design, based on conventional air-cooling assumptions, quickly ran into limitations: CRAC units and raised floors could not manage the projected heat load, floor space requirements ballooned by nearly 40 percent, and energy forecasts exceeded regional emissions thresholds, raising concerns with local planning authorities.
By rethinking the approach and modeling DTC liquid cooling, the team demonstrated how rack densities above 80 kilowatts could be achieved while reducing PUE to a range of 1.05-1.15. The design also showed potential for waste-heat reuse in nearby commercial buildings – an outcome that would deliver both ESG benefits and stronger community alignment.
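To put that PUE shift in context, the sketch below compares facility energy at different PUE values. It assumes a 20MW IT load running year-round, a baseline air-cooled PUE of roughly 1.5, and an illustrative electricity price of £0.20/kWh; none of these are figures from the project.

```python
# Annual facility energy and cooling overhead at different PUE values.
# IT load, baseline PUE, and electricity price are illustrative assumptions.

IT_LOAD_MW = 20.0
HOURS_PER_YEAR = 8760
PRICE_GBP_PER_MWH = 200.0   # assumed ~£0.20/kWh

def annual_cost_gbp(pue: float) -> float:
    """Total facility energy cost for the year at a given PUE."""
    facility_mwh = IT_LOAD_MW * HOURS_PER_YEAR * pue
    return facility_mwh * PRICE_GBP_PER_MWH

for pue in (1.5, 1.15, 1.05):
    cost = annual_cost_gbp(pue)
    overhead_mwh = IT_LOAD_MW * HOURS_PER_YEAR * (pue - 1.0)
    print(f"PUE {pue:.2f}: facility cost £{cost/1e6:.1f}m, "
          f"non-IT overhead {overhead_mwh:,.0f} MWh/yr")
```

Even with these rough assumptions, dropping from a PUE of 1.5 to 1.05-1.15 cuts tens of thousands of megawatt-hours of non-IT overhead per year, which is the energy that waste-heat reuse schemes can then put to work.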
Adopting DTC liquid cooling required a shift not only in technology but also in delivery. Raised floors were eliminated, trimming construction costs. But the integration of coolant distribution units (CDUs) and new piping infrastructure added complexity, demanding close coordination across mechanical, electrical, and IT trades.
Airflow systems were redesigned, eliminating cold aisle containment and freeing up ceiling height. At the rack level, quick-disconnect manifolds enabled faster GPU swaps, though this required bespoke rack engineering and early-stage supplier collaboration.
The steepest challenge was skills. Most local contractors had never worked with CDUs or liquid cooling systems. Hands-on training during commissioning proved essential to getting the system live.
Counting the cost and the return
While liquid cooling carries a higher upfront cost, the long-term savings are compelling. For a 1MW deployment, the capital expenditure for cooling infrastructure was approximately £130,000, more than double the £60,000 required for air-based cooling. However, annual power costs were dramatically lower: £870,000 compared to £1.25 million.
Over five years, the total cost of ownership reached £4.45 million for DTC cooling versus £6.5 million for air cooling. That’s a 31.5 percent reduction, or £2.05 million in net savings, with a payback period of just 2.2 months.
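The payback figure follows directly from those differentials. Here is a minimal check using only the numbers quoted above, on a per-megawatt basis:

```python
# Payback and savings check using the figures quoted above (per 1 MW).
capex_dtc, capex_air = 130_000, 60_000          # cooling capex (£)
power_dtc, power_air = 870_000, 1_250_000       # annual power cost (£)
tco_dtc, tco_air = 4_450_000, 6_500_000         # quoted five-year TCO (£)

extra_capex = capex_dtc - capex_air              # £70,000 premium
annual_saving = power_air - power_dtc            # £380,000 per year
payback_months = extra_capex / annual_saving * 12

print(f"Payback period: {payback_months:.1f} months")        # ~2.2 months
print(f"Five-year saving: £{(tco_air - tco_dtc)/1e6:.2f}m "  # £2.05m
      f"({(tco_air - tco_dtc)/tco_air:.1%} reduction)")      # ~31.5%
```

In other words, the £70,000 capex premium is recovered by the £380,000 annual power saving in a little over two months, after which the savings compound for the life of the system.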
These modelled figures are consistent with findings from independent studies, such as a 2024 California Energy Commission report, which highlighted similarly rapid ROI for liquid cooling deployments at scale.
In a market like the UK, where energy costs remain high and regulatory pressures are mounting, this shift from capex-heavy to opex-efficient infrastructure is gaining ground fast.
Performance, reliability, and ESG impact
The hyperscaler didn’t just see financial benefits. Lower operating temperatures helped boost sustained AI training performance by 12 percent, reducing thermal throttling and accelerating model development.
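One way to see how cooler silicon translates into sustained throughput is a simple duty-cycle model of thermal throttling. The fractions and clock ratios below are illustrative assumptions chosen to show how a gain of that order can arise, not measurements from the deployment.

```python
# Illustrative thermal-throttling model: if GPUs under air cooling spend a
# fraction of training time at reduced clocks, removing that throttling
# lifts sustained throughput. All parameters below are assumptions.

throttled_fraction = 0.25   # assumed share of time spent thermally throttled
throttled_clock = 0.57      # assumed relative clock speed while throttled

air_throughput = (1 - throttled_fraction) + throttled_fraction * throttled_clock
liquid_throughput = 1.0     # assume throttling is eliminated by DTC cooling

gain = liquid_throughput / air_throughput - 1
print(f"Sustained throughput gain: {gain:.0%}")  # ~12% with these assumptions
```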
Component reliability improved, with a 50 percent drop in heat-related failures, cutting downtime and reinforcing SLA performance. On the environmental side, more than 3,000 metric tonnes of CO₂ were avoided annually per megawatt, supporting SECR compliance and broader ESG goals.
The new design also unlocked greater long-term scalability. By streamlining mechanical systems and incorporating modular expansion zones, it created an adaptable framework where capacity can be upgraded seamlessly and future growth accommodated without disruptive or costly retrofits.
A scalable model for hyperscale AI infrastructure
This deployment was more than a cooling upgrade. It marked a fundamental shift in how infrastructure is conceived and delivered to support AI-driven demand. By engaging suppliers early, adapting construction methods, and overcoming on-the-ground skill gaps, the project became a blueprint for liquid-cooled, AI-ready infrastructure.
What emerged wasn’t just a more efficient facility. It was a more agile one, ready to support the next wave of AI demands with speed, scalability, and sustainability built in from the start.
Watch the DCD>Talks session: Liquid cooling at scale, where we explore how operators across Europe are navigating liquid versus hybrid cooling decisions, managing cross-trade coordination, and adapting delivery models to keep up with hyperscale AI workloads.
Read the original article: https://www.datacenterdynamics.com/en/opinions/how-hyperscalers-are-scaling-direct-to-chip-liquid-cooling-in-europe/



