
Two of China’s biggest tech companies just put 10,000 AI chips to work in the same week, and the implications of these deployments go beyond the country’s borders.
Alibaba announced the deployment of a 10,000-card intelligent computing cluster, powered by its Zhenwu AI chips and built in collaboration with China Telecom, at the Shaoguan data center in Guangdong province.
The launch came shortly after Shenzhen, a city often dubbed “China’s Silicon Valley,” activated the country’s first 10,000-card cluster built entirely on Huawei’s Ascend 910C chips, a system that provides 11,000 petaflops of computing capacity. Together, these two deployments represent a concrete step in China’s push to build AI infrastructure that does not depend on American chips.
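As a quick back-of-the-envelope check on the Shenzhen figures (assuming the quoted 11,000 petaflops describes aggregate cluster throughput, which the announcement does not break down), the numbers imply roughly 1.1 petaflops per card:

```python
# Sanity check on the reported Shenzhen cluster figures.
# Assumption: "11,000 petaflops" is aggregate throughput across the
# cluster; the numeric precision (FP16/BF16) is not stated publicly.
total_petaflops = 11_000
num_cards = 10_000

per_card = total_petaflops / num_cards
print(f"Implied throughput per Ascend 910C: {per_card:.1f} petaflops")
# → Implied throughput per Ascend 910C: 1.1 petaflops
```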
What These Clusters Are Built to Do
Alibaba’s data center features Zhenwu semiconductors designed for both AI training and inference, with the capacity to support models containing hundreds of billions of parameters. This scale puts it in the same category as the infrastructure used to train today’s most capable large language models (LLMs).
The system delivers four-microsecond network latency and a roughly 30% improvement in both training speed and inference performance, with per-card throughput nearly ten times that of the previous generation of hardware.
Alibaba and China Telecom have stated that the facility is expected to scale to 100,000 chips, which would make it one of the largest domestically built AI computing clusters in the world.
The Chip Gap Question
A reasonable question is whether Chinese chips can actually produce competitive models. The most likely answer is that they already do.
Each Huawei Ascend 910C chip delivers roughly 60% of the raw training efficiency of an Nvidia H100. Beijing’s strategy doesn’t focus on matching Nvidia chip-for-chip, but on leveraging large-scale cluster design and efficient networking to narrow the performance gap.
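The scale-over-efficiency strategy can be sketched with simple arithmetic, using the 60% figure above (a rough, workload-dependent estimate, not an official benchmark):

```python
import math

# If one Ascend 910C delivers ~60% of an H100's training efficiency
# (a rough estimate; real ratios vary by workload and precision),
# matching an H100 cluster's aggregate compute is a matter of adding
# chips, before accounting for networking and software overheads.
ascend_vs_h100 = 0.60   # assumed per-chip efficiency ratio
h100_count = 10_000     # hypothetical reference cluster size

ascend_needed = math.ceil(h100_count / ascend_vs_h100)
print(f"Ascend 910C chips to match {h100_count} H100s: {ascend_needed}")
# → Ascend 910C chips to match 10000 H100s: 16667
```

The catch, of course, is that larger clusters put more pressure on interconnects and scheduling, which is why the announcements emphasize cluster design and efficient networking rather than raw chip counts alone.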
Demand for the Shenzhen cluster supports this: nearly 50 organizations have signed computing power agreements for it, bringing the combined booking rate across both phases of the facility to around 92%.
Why U.S. Export Controls Accelerated This Move
The timing of China’s domestic chip push is directly tied to American trade policy. U.S. restrictions on AI chip exports over the past three years, including restrictions on Nvidia products, accelerated China’s development of homegrown alternatives. Companies like Alibaba and Huawei were effectively forced to build what they could no longer buy.
What This Means for AI Models in 2026 and Beyond
While more compute doesn’t automatically mean better models, it does remove one of the primary bottlenecks to training them. The Alibaba cluster, already deployed in healthcare and enterprise services, represents a shift from experimentation to large-scale deployment of AI systems in fields that matter.
Alibaba’s Qwen3-Max model, launched in September 2025, was touted as outperforming Anthropic’s Claude and DeepSeek-V3.1 on certain benchmarks, and it was trained before this new infrastructure went fully operational. With the 10,000-card cluster now live and a 100,000-chip expansion planned, the next generation of Chinese models will have significantly more training capacity behind them.
This move is also backed by heavy investment from major Chinese tech firms. In 2025, Alibaba committed over $50 billion to cloud computing and AI hardware over three years, while ByteDance plans to spend around $20 billion on GPUs and data centers.
The result of these investments is that more capable Chinese models will enter the market, many of them open-source, as DeepSeek has already demonstrated. That raises the competitive floor in the contest between U.S. tech giants and China’s.
