
Google’s talks with Marvell Technology over two new AI chips signal that the tech giant is still pushing hard on its custom silicon strategy, even after extending a TPU and networking supply agreement with Broadcom through 2031.
According to multiple reports, the Marvell discussions involve two chips: a memory processing unit that would work alongside Google’s tensor processing unit, and a new TPU designed specifically to run AI workloads more efficiently.
The reported split between memory support and compute support matters because AI systems need both fast data movement and strong processing to handle large model workloads efficiently.
Why Marvell Matters
Marvell is already a major name in custom silicon, especially in data center networking and chip design services, so the talks fit its existing business model.
The company was responsible for the physical design of startup Groq’s first inference chip, whose Language Processing Unit technology Nvidia later licensed for $20 billion in December 2025. On the cloud side, Marvell runs a custom silicon business with a $1.5 billion annual run rate in fiscal 2026, building chips for Amazon, Microsoft, and Meta, in addition to its existing work with Google on the Axion ARM CPU.
Marvell shares rose after news of the Google talks, showing how closely investors tie the company to the broader custom-chip buildout in AI infrastructure.
Google’s TPU Strategy
Google has spent years building its TPUs as an alternative to Nvidia’s dominant GPUs, and that effort has become more visible as AI demand has grown. TPU sales are also an important driver of Google Cloud revenue, making the chip program part of the company’s business case for AI spending rather than a mere technical side project.
The reported talks with Marvell suggest Google is still widening its supply base and refining the hardware behind its AI services rather than relying on a single partner.
The Broader Chip Race
The timing is important. Broadcom and Google recently entered a long-term TPU and networking supply agreement through 2031. Taken together, the Broadcom agreement and the Marvell talks show that Google is keeping multiple chip relationships active at once, which helps it reduce supply risk and keep pressure on costs.
It also shows that the AI arms race is now less about GPUs and more about who can build custom chips that are faster, cheaper, and better matched to specific workloads.
The main question now is whether the discussions will turn into a formal contract, as the Broadcom deal did.