
Singapore has brought a new kind of data centre online, one built from the ground up for liquid-cooled AI workloads rather than retrofitted around them. Nxera, Singtel’s regional data centre arm, has opened DC Tuas, a 58MW facility that combines Singapore’s largest direct-to-chip liquid cooling deployment with high‑density power and strict efficiency targets to support the next wave of GPU-heavy AI infrastructure.
A Facility Built Around Liquid Cooling
DC Tuas is designed specifically for the heat and power profile of modern AI hardware, where racks can push well beyond what traditional air cooling can handle efficiently. The eight-storey, carrier‑neutral site hosts Singapore’s largest direct‑to‑chip liquid cooling system in a multi‑tenanted facility, enabling customers to run dense GPU clusters without sacrificing performance or overspending on cooling energy.
Instead of relying primarily on chilled air, coolant is circulated directly to cold plates mounted on processors, extracting heat at the source and allowing much higher rack densities. This approach underpins DC Tuas’ reported power usage effectiveness (PUE) of 1.25 in Singapore’s tropical climate, well below the 1.4–1.6 range that many regional data centres struggle to improve on.
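To put that figure in context, here is a minimal illustrative sketch of what PUE implies for cooling and facility overhead. The 1.25 and 1.5 values reflect the article’s reported figure and the regional range it cites; the 10MW IT load is an assumed example, not a DC Tuas number.

```python
# Illustrative sketch only: how a reported PUE translates into cooling-and-overhead
# energy per unit of IT load. PUE = total facility power / IT power.

def facility_overhead_mw(it_load_mw: float, pue: float) -> float:
    """Overhead (cooling, power conversion, etc.) = IT load * (PUE - 1)."""
    return it_load_mw * (pue - 1.0)

it_load = 10.0  # assumed example IT load in MW, not a DC Tuas figure
for pue in (1.25, 1.5):
    overhead = facility_overhead_mw(it_load, pue)
    print(f"PUE {pue}: {overhead:.1f} MW of overhead per {it_load:.0f} MW of IT load")
```

At a PUE of 1.25, every 10MW of IT load carries roughly 2.5MW of overhead, versus about 5MW at a PUE of 1.5, which is where the cooling-energy savings for dense GPU clusters come from.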
Meeting Surging AI Infrastructure Demand in Singapore
The opening lands in a market where AI capacity is scarce and demand is accelerating. The 58MW at DC Tuas brings Nxera’s total data centre capacity in Singapore to around 120MW, and more than 90% of the new site’s capacity was pre‑committed before launch, a reflection of vacancy rates reportedly below 2% in the city‑state.
Bill Chang, CEO of Nxera and Singtel’s Digital InfraCo unit, framed the project as a response to these constraints rather than a speculative bet. He noted that the ability to deploy higher‑density, compute‑intensive AI workloads sustainably is now increasingly critical in a market where new data centre capacity is heavily regulated and physically limited.
For AI developers and cloud customers, the draw is higher rack density without thermal throttling, lower cooling overheads, and more predictable performance for training and large‑scale inference workloads. In practice, operators can consolidate more GPUs into fewer racks, shrink their physical footprint, and still stay within the power and sustainability limits that regulators will accept.
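A rough sketch of that consolidation effect is below. The per‑rack densities are assumed purely for illustration; they are not figures quoted for DC Tuas or by Nxera.

```python
import math

# Illustrative sketch only: how higher per-rack power density shrinks the number
# of racks (and hence floor space) needed for the same GPU fleet.

def racks_needed(total_it_load_kw: float, kw_per_rack: float) -> int:
    """Round up to the number of racks required to house a given IT load."""
    return math.ceil(total_it_load_kw / kw_per_rack)

gpu_fleet_kw = 2_000.0        # assumed example: a 2 MW GPU cluster
air_cooled_density = 15.0     # assumed kW per air-cooled rack
liquid_cooled_density = 80.0  # assumed kW per direct-to-chip liquid-cooled rack

print("Air-cooled racks:   ", racks_needed(gpu_fleet_kw, air_cooled_density))
print("Liquid-cooled racks:", racks_needed(gpu_fleet_kw, liquid_cooled_density))
```

Under these assumed densities, the same 2MW cluster drops from well over a hundred air‑cooled racks to a few dozen liquid‑cooled ones.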
Additionally, DC Tuas was designed for a hot and tightly regulated market. Singapore’s climate and policy environment make it a useful test case for how AI‑ready infrastructure can evolve in dense urban markets, especially as high ambient temperatures drive up traditional cooling costs. Since a 2019 pause on new data centre approvals, for instance, Singapore has pushed operators toward more energy‑efficient designs that draw less power from the national grid.
Why This Matters for AI and Tech Giants Building Data Centres
For AI companies, cloud providers, and enterprises in the region, DC Tuas is an indication of where infrastructure design is heading. Liquid cooling, once seen as an advanced or niche option, is rapidly becoming standard for facilities that want to host the latest GPU generations at scale, as those chips are now designed with liquid‑cooled operation in mind.
In Singapore, the model is especially significant as it suggests that land‑scarce, power-limited cities can still host meaningful AI infrastructure if they embrace higher rack densities, direct‑to‑chip cooling, and strict efficiency targets.
For customers, that translates into access to modern AI compute closer to users and data sources, rather than relying solely on remote, lower‑cost locations that may not match the latency or regulatory needs of Southeast Asian workloads.
