
Nvidia and Marvell Technology announced a strategic partnership on March 31, 2026. As part of the deal, Nvidia invested $2 billion in Marvell. The two companies will work together on custom AI chips and high-speed networking.
In addition, Marvell will connect to Nvidia’s AI factory through a platform called NVLink Fusion. The companies will also collaborate on silicon photonics, a technology that uses light instead of electricity to move data at high speeds.
This is more than a chip supplier agreement, however. Nvidia wants to control the entire AI infrastructure stack, from chips to networking to software.
By investing in Marvell, Nvidia signals that solving AI data center bottlenecks requires investment across the full infrastructure, not just faster GPUs.
Why Nvidia Put $2 Billion Into Marvell
To train AI models, data centers must connect thousands of GPUs together.
However, the connections between these chips create a stubborn problem. Copper wiring struggles to handle the bandwidth demands of modern AI systems.
Marvell brings two critical pieces to solving this problem. First, the company designs custom XPUs, which are specialized AI accelerator chips.
Second, Marvell owns advanced silicon photonics technology through its acquisition of Celestial AI in February 2026. Under the agreement, Marvell will provide custom XPUs and NVLink Fusion-compatible networking.
Nvidia, in turn, will provide the supporting technologies, including its Vera CPU and Spectrum-X switches.
How the Marvell Partnership Strengthens the AI Empire
Nvidia CEO Jensen Huang does not want Nvidia to be just a chip company. “Most people forget that Nvidia’s business is much, much more diversified than a chip company,” Huang said. “We’re full-stack and we can help people build AI factories anywhere.”
The Marvell partnership fits directly into this strategy. NVLink Fusion allows customers to build semi-custom AI infrastructure using a mix of Nvidia and partner components.
As a result, Marvell’s custom chips can now sit alongside Nvidia GPUs inside the same system.
Matt Kimball, a vice president and principal analyst at Moor Insights & Strategy, said the standout element is that Marvell is adding NVLink support to its XPUs. “This is Nvidia extending control by anchoring itself as the connective tissue across what will be increasingly heterogeneous AI environments,” Kimball said.
What the Deal Means for AI Data Centers
AI infrastructure is about to become more flexible. The Marvell partnership allows companies to deploy non-Nvidia accelerators inside an Nvidia-connected environment, so they are not locked into a single type of chip for every workload.
Marvell's entry into the NVLink ecosystem gives enterprise customers more flexibility while still keeping Nvidia at the center.
Marvell CEO Matt Murphy said that connecting Marvell's custom silicon and photonics to Nvidia's ecosystem enables customers to build scalable, efficient AI infrastructure.
Why This Signals a Shift in the AI Race
This investment does not guarantee Nvidia complete dominance. Competitors like AMD, Intel and Broadcom back the Ultra Accelerator Link consortium, an open alternative to Nvidia's NVLink. However, Nvidia is working to shift NVLink from a proprietary technology to a ubiquitous one.
Jensen Huang framed the deal around the surge in AI inference demand. “The inference inflection has arrived. Token generation demand is surging, and the world is racing to build AI factories,” Huang said. “Together with Marvell, we are enabling customers to leverage Nvidia’s AI infrastructure ecosystem.”
Ultimately, the $2 billion investment shows that Nvidia is positioning itself as the connective tissue of the AI industry. For data centers, this means more choices and better performance, but Nvidia will remain at the center of everything.
