Aerial view of Microsoft’s new AI datacenter campus in Mt Pleasant, Wisconsin. Image Credit: Microsoft

Microsoft has recently unveiled its latest and largest AI datacenter, named Fairwater, in Mount Pleasant, Wisconsin, claiming that it is the “world’s most powerful AI datacenter.”

Backed by a staggering $7 billion investment, Fairwater represents a monumental move in AI infrastructure and cloud computing.

Expected to be operational by early 2026, this massive facility will transform how frontier AI models are trained and deployed, while also boosting local economic development and advancing sustainable datacenter design.

Fairwater’s unprecedented scale and design

Occupying a massive 315-acre site, Fairwater includes three expansive buildings spanning a total of 1.2 million square feet. 

According to Microsoft, this is no ordinary datacenter built for web or business applications. Instead, Fairwater is purpose-built to operate as a single, giant AI supercomputer, interconnected by a flat network of hundreds of thousands of cutting-edge Nvidia GPUs.

“Unlike typical cloud datacenters, which are optimized to run many smaller, independent workloads such as hosting websites, email or business applications, this datacenter is built to work as one massive AI supercomputer using a single flat network interconnecting hundreds of thousands of the latest NVIDIA GPUs,” the tech giant said in a blog post.

“In fact, it will deliver 10X the performance of the world’s fastest supercomputer today, enabling AI training and inference workloads at a level never before seen.”

In other words, unlike traditional datacenters that run thousands of smaller independent workloads, Fairwater is optimized for continuous AI training workloads, with specialized racks connected vertically and horizontally in a two-story layout. 

This innovative arrangement minimizes latency and bandwidth bottlenecks, allowing GPUs across the facility to communicate effectively. Each rack reportedly contains 72 Nvidia Blackwell GB200 GPUs, connected via NVLink into a single accelerator delivering 1.8 terabytes per second of GPU-to-GPU bandwidth.

Each of these racks can reportedly process an astonishing 865,000 tokens per second. And because of this sheer scale, Microsoft says the datacenter can “help AI researchers and engineers build the world’s most advanced models.”
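To put the quoted 865,000 tokens-per-second figure in perspective, here is a rough back-of-envelope sketch. The constants come straight from the figures above; the assumption of perfectly sustained, overhead-free throughput is purely illustrative, not a claim about real-world utilization.

```python
# Back-of-envelope scale check using the throughput figure quoted above.
# Assumes (unrealistically) perfectly sustained throughput with no overhead.

TOKENS_PER_SECOND = 865_000        # quoted tokens-per-second figure
SECONDS_PER_DAY = 24 * 60 * 60     # 86,400 seconds in a day

tokens_per_day = TOKENS_PER_SECOND * SECONDS_PER_DAY
print(f"{tokens_per_day:,} tokens per day")  # 74,736,000,000 tokens per day
```

Under those idealized assumptions, a single source of that throughput would emit on the order of 75 billion tokens per day, which gives a feel for why Microsoft frames the facility's aggregate capacity in supercomputer terms.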

Fairwater: Advanced cooling for efficiency and sustainability

Modern AI hardware generates tremendous heat that traditional air-cooling systems struggle to dissipate. To address this, Microsoft implemented an advanced closed-loop liquid cooling system, housing the second-largest water-cooled chiller plant on Earth within Fairwater.

According to Microsoft, Fairwater was designed to circulate cold liquid directly into servers, where it absorbs heat efficiently; the warm water is then piped to cooling fins outside, cooled, and recirculated in a closed loop, an approach that keeps water consumption low.

Perhaps the name “Fairwater” was coined against the backdrop of this environmentally conscious decision. As a result of this design, the datacenter has “enough fiber cable to circle the Earth four times, yet its annual water use is modest, requiring roughly the amount of water a typical restaurant uses annually or what an 18-hole golf course consumes weekly in peak summer,” Brad Smith, Vice Chair and President at Microsoft, said in another blog post.

Aligning with Microsoft’s commitment to sustainability in the face of AI’s rising energy demands, this cooling system will significantly reduce greenhouse gas (GHG) emissions and water consumption compared to air-cooled datacenters.

A pillar of the global AI infrastructure

Fairwater also anchors Microsoft’s vision of a new global network of AI datacenters. Partner facilities in Norway and the UK, designed with similar architectural principles, will integrate with Fairwater through Microsoft’s AI Wide Area Network.

“Our AI WAN is built with growth capabilities in AI-native bandwidth scales to enable large-scale distributed training across multiple, geographically diverse Azure regions, thus allowing customers to harness the power of a giant AI supercomputer,” Microsoft said.

This network allows geographically dispersed datacenters to operate cohesively as one supercomputer, enabling distributed AI training at unprecedented scales.

Microsoft further affirms that they’re “building a distributed system where compute, storage and networking resources are seamlessly pooled and orchestrated across datacenter regions,” as opposed to being limited to the “walls of a single facility.”

With Fairwater’s vast scale, network sophistication, and sustainable engineering, it is set to be the powerhouse behind the next generation of AI breakthroughs.


I’m Precious Amusat, Phronews’ Content Writer. I conduct in-depth research and write on the latest developments in the tech industry, including trends in big tech, startups, cybersecurity, artificial intelligence and their global impacts. When I’m off the clock, you’ll find me cheering on women’s footy, curled up with a romance novel, or binge-watching crime thrillers.
