
Data centers are power-hungry by design, but Nvidia wants them to work in the opposite direction.
At CERAWeek 2026, Nvidia and Emerald AI announced they are working with AES, Constellation, Invenergy, NextEra Energy, Nscale Energy & Power, and Vistra to develop a new class of AI factories that connect to the grid faster and operate as flexible energy assets. The goal is to get AI infrastructure online faster and make it capable of supporting the grid rather than only drawing from it.
The Problem Driving the Push
AI’s power demands have outpaced the grid’s ability to keep up. Data centers have historically accounted for less than 5% of U.S. electricity demand, but multiple industry forecasts now project they could approach 25% of the American power supply within a decade. That growth has created a bottleneck: getting new AI facilities connected to the grid can take years, as interconnection studies and regulatory reviews slow the process.
In many parts of the world, including major technology hubs in the U.S., there is a years-long wait for AI factories to come online, pending the buildout of new energy infrastructure.
How Emerald AI’s Platform Works
Emerald AI is developing software to control power use during times of peak grid demand while still meeting the performance requirements of data center AI workloads. The platform, called Emerald Conductor, coordinates computing workloads with on-site energy resources, including batteries and behind-the-meter systems, allowing operators to adjust power consumption in real time without compromising performance.
The practical mechanism involves workload triage. Some jobs can be paused or slowed, like the training or fine-tuning of a large language model (LLM) for academic research. Others, like inference queries for an AI service used by thousands or millions of people, can be redirected to another data center where the local power grid is less stressed. The result is a facility that can dial its consumption up or down depending on grid conditions without dropping critical operations.
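The triage logic described above can be sketched in a few lines. This is a hypothetical illustration only: the job names, the stress threshold, and the pause/redirect policy are assumptions for the sketch, not Emerald Conductor's actual API or behavior.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    kind: str        # "training" (deferrable) or "inference" (latency-critical)
    power_kw: float  # approximate draw of the job

def triage(jobs, grid_stress, stress_threshold=0.8):
    """Split a site's jobs into (paused, redirected, running) lists.

    grid_stress is a normalized 0-1 signal; above the threshold the
    site sheds load by pausing deferrable work and redirecting
    latency-critical work to a less-stressed site.
    """
    if grid_stress < stress_threshold:
        return [], [], list(jobs)  # normal operation: run everything locally
    paused = [j for j in jobs if j.kind == "training"]       # pause or slow deferrable work
    redirected = [j for j in jobs if j.kind == "inference"]  # shift to another data center
    running = [j for j in jobs if j not in paused + redirected]
    return paused, redirected, running

jobs = [Job("llm-finetune", "training", 400.0),
        Job("chat-serving", "inference", 250.0)]
paused, redirected, running = triage(jobs, grid_stress=0.9)
print([j.name for j in paused])      # deferrable jobs held back during the grid event
print([j.name for j in redirected])  # latency-critical jobs moved to a less-stressed grid
```

The key design point is the split by job class: training tolerates delay, inference tolerates relocation, and together they let total site draw fall without dropping critical operations.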
What Nvidia Brings to the Table
These next-generation AI factories will use the new Nvidia Vera Rubin DSX AI Factory reference design, which includes the DSX Flex software library for connecting AI factories to power-grid services. This is not a minor software add-on: power flexibility is baked into the facility architecture from the ground up.
DSX Flex is expected to be deployed at commercial scale later this year at the Nvidia AI Factory Research Center in Virginia, planned as one of the world’s first power-flexible AI factories with Nvidia Vera Rubin infrastructure. That facility will serve as the proving ground for whether this model can work at full industrial scale.
What This Means for AI Infrastructure
The longer-term goal is for power-flexible AI factories to unlock up to 100 gigawatts of additional capacity from the existing U.S. power grid. For context, 100 gigawatts can power roughly 75 million homes.
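The 100-gigawatt figure is easy to sanity-check against the homes claim: spread across 75 million homes, it works out to an average draw of about 1.3 kilowatts per home, which is consistent with typical U.S. household electricity use of roughly 10,000-11,000 kWh per year.

```python
# Sanity check on the scale claim: 100 GW across ~75 million homes.
capacity_w = 100e9   # 100 gigawatts, in watts
homes = 75e6         # 75 million homes
avg_w_per_home = capacity_w / homes
print(round(avg_w_per_home))  # → 1333 (watts of average draw per home)
```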
For developers, the more immediate benefit is speed. A grid interconnection study can involve years of regulatory review, but if a facility can offer power flexibility during peak demand, developers may qualify for near-immediate grid hookups. That could significantly cut the time between when a data center is built and when it can actually go live.
