Photo Credit: Silas Stein/picture alliance via Getty Images

DeepSeek is weeks away from launching V4, its most powerful AI model yet, and the chip choice at the center of that release may be more consequential than the model itself.

According to a report by The Information, cited by Reuters, DeepSeek’s upcoming V4 model will run on Huawei chips rather than the Nvidia hardware that powers most large AI systems today. 

DeepSeek has spent recent months working closely with Huawei and domestic chipmaker Cambricon Technologies to adapt parts of the model’s underlying code and conduct testing, and is also developing two additional V4 variants, each designed for different capabilities and built to run on Chinese-made chips.

A Deliberate Turn Away from Nvidia

DeepSeek has broken away from long-standing industry convention by denying Nvidia and AMD pre-release access to V4 for performance optimization. Instead, it granted pre-release access to domestic chipmakers, including Huawei Technologies.

For context, it is standard practice in the AI industry to give chip manufacturers early access to an upcoming model so they can tune their hardware and software stacks for optimal performance. DeepSeek skipped that step entirely for its U.S. suppliers.

Unsurprisingly, DeepSeek is rewriting parts of V4’s code so the model can run on Huawei’s chips, an effort that Wei Sun, principal AI analyst at Counterpoint Research, described as requiring “substantial re-engineering.” Sun noted that the transition can slow development cycles and introduce performance trade-offs, especially for a model expected to be state-of-the-art.

What DeepSeek V4 Is Actually Built to Do

Beyond chip politics, V4 is being positioned as a multimodal AI model and a major leap in coding and reasoning capability. With context windows exceeding one million tokens, DeepSeek V4 can process entire codebases in a single pass, enabling true multi-file reasoning where the model understands relationships between components, traces dependencies, and maintains consistency across large-scale refactoring operations.

DeepSeek introduced Engram on January 13, 2026, a conditional memory system that separates static pattern retrieval from dynamic reasoning, and industry analysts immediately connected it to V4’s architecture. The model also continues DeepSeek’s use of a Mixture of Experts (MoE) architecture, which means that while V4 has one trillion total parameters, only around 37 billion activate per response, allowing it to run more efficiently while still drawing on a much larger knowledge base.
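The efficiency claim above comes from how MoE routing works: a small router scores all experts per token, but only the top few actually run, so compute per response tracks the active parameter count rather than the trillion-parameter total. The sketch below illustrates that idea in plain Python with toy linear "experts"; it is an illustrative simplification, not DeepSeek's actual routing code, and all names (`moe_forward`, `gate_w`, `experts`) are hypothetical.

```python
import math
import random

def moe_forward(x, gate_w, experts, k=2):
    """Illustrative top-k Mixture-of-Experts forward pass for one token.

    x:       token embedding (list of d floats)
    gate_w:  router weights, one length-d column per expert
    experts: list of callables, each mapping a length-d vector to a length-d vector
    k:       number of experts activated per token
    """
    # Router: one logit per expert for this token.
    logits = [sum(xi * wi for xi, wi in zip(x, col)) for col in gate_w]
    # Select the k highest-scoring experts.
    topk = sorted(range(len(logits)), key=logits.__getitem__)[-k:]
    # Softmax over only the selected experts' logits.
    exps = [math.exp(logits[i]) for i in topk]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Only the chosen experts run; the rest stay idle, which is why
    # compute scales with active (not total) parameters.
    outputs = [experts[i](x) for i in topk]
    return [sum(w * o[j] for w, o in zip(weights, outputs)) for j in range(len(x))]

# Toy setup: 8 experts, 4-dimensional embeddings, linear layers as stand-in experts.
random.seed(0)
d, n_experts = 4, 8
gate_w = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
expert_mats = [[[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
               for _ in range(n_experts)]
experts = [lambda x, M=M: [sum(xi * mij for xi, mij in zip(x, row)) for row in M]
           for M in expert_mats]

x = [random.gauss(0, 1) for _ in range(d)]
out = moe_forward(x, gate_w, experts, k=2)
print(len(out))  # a length-d vector, built from just 2 of the 8 experts
```

In a real MoE transformer the same principle applies per layer at vastly larger scale, which is how V4 can hold a trillion parameters while activating only tens of billions per response.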

Why This Release Matters

Every other leading AI model, including GPT-5, Claude, and Gemini, runs on Nvidia GPUs. DeepSeek V4 is positioned to be the first frontier AI model that does not need Nvidia, and that carries real weight for the broader chip sanctions debate. 

U.S. export controls on advanced semiconductors were built on the assumption that China could not build competitive frontier models without American hardware. A capable, open-source V4 running on Huawei chips would directly test that assumption.

“If they have successfully trained V4 entirely on Huawei silicon, it signals a material shift in the geopolitical tech landscape,” Wei Sun said.


I’m Precious Amusat, Phronews’ Content Writer. I conduct in-depth research and write on the latest developments in the tech industry, including trends in big tech, startups, cybersecurity, artificial intelligence and their global impacts. When I’m off the clock, you’ll find me cheering on women’s footy, curled up with a romance novel, or binge-watching crime thrillers.

