
Nvidia recently announced a multiyear, multigenerational partnership that will see Meta deploy millions of Nvidia’s current and next-generation AI chips across its data centers.
The deal covers Nvidia’s Blackwell and Rubin GPUs, Grace and Vera CPUs, and networking infrastructure, making it one of the most comprehensive hardware commitments Meta has made to a single vendor. Neither company disclosed a financial figure.
The Details of the Meta-Nvidia Hardware Deal
The scope of the agreement goes beyond chips alone. Meta will deploy millions of Nvidia Blackwell GPUs as well as the next-generation Rubin GPUs, which recently entered production.
On the CPU side, Meta becomes the first hyperscaler to deploy Nvidia's Grace central processing units as standalone chips in its data centers, rather than bundled with GPUs in a server configuration. Nvidia confirmed this is the first large-scale standalone deployment of Grace CPUs. The two companies are also collaborating on the next-generation Vera CPUs, targeting large-scale deployment in 2027.
Beyond compute hardware, Meta will integrate Nvidia Spectrum-X Ethernet switches into its Facebook Open Switching System platform to support its AI infrastructure. The company will also adopt Nvidia's Confidential Computing technology to enable AI-powered features on WhatsApp while keeping user data protected.
Why Meta Is Doing This
Meta CEO Mark Zuckerberg framed the deal around the company's stated goal of delivering what he called "personal superintelligence to everyone in the world," a vision he first articulated in 2025. The infrastructure built under this deal is meant to support both the training of large AI models and the running of AI at scale across Meta's platforms, which serve billions of users.
Zuckerberg has said Meta plans to spend up to $135 billion on AI in 2026 alone, making this partnership a direct outgrowth of that commitment. Engineering teams from both companies will also work together on what they described as "deep codesign" to optimize Meta's AI models for Nvidia's hardware, covering recommendation systems, personalization engines, and emerging AI features.
A First for Nvidia's CPU Business
The standalone Grace CPU deployment is a notable component of the deal for Nvidia. Grace CPUs are designed for inference and agentic AI workloads and are built on Arm architecture, putting them in direct competition with processors from Intel and AMD. Nvidia says the chips can run common tasks at roughly half the power consumption of traditional hardware.
“No one deploys AI at Meta’s scale – integrating frontier research with industrial-scale infrastructure to power the world’s largest personalization and recommendation systems for billions of users,” said Jensen Huang, founder and CEO of Nvidia. “Through deep codesign across CPUs, GPUs, networking and software, we are bringing the full Nvidia platform to Meta’s researchers and engineers as they build the foundation for the next AI frontier.”
Markets responded quickly to the announcement. Meta shares rose 0.89% and Nvidia shares gained 1.25% in after-hours trading. AMD, which had recently secured a chip deal with OpenAI, fell 4% on the news.
Meta’s Other Chip Relationships
Meta also develops its own in-house silicon under its MTIA program and continues to use chips from AMD. The company was also reportedly in discussions about using Google's TPUs. However, Meta's custom-silicon efforts have faced setbacks, with reports pointing to technical problems and rollout delays, which may have contributed to the decision to deepen its reliance on Nvidia's full-stack platform.
The scale and scope of the Nvidia deal, covering CPUs, GPUs, networking, and software codesign, make Nvidia the central pillar of Meta's AI hardware build-out for the foreseeable future.
