
OpenAI is moving beyond software to control a critical piece of its AI future by building its own custom-made artificial intelligence chip. Partnering with semiconductor giant Broadcom, OpenAI plans to mass-produce its custom chip starting in 2026, marking a bold step to reduce reliance on existing GPU suppliers like Nvidia and cut costs while optimizing performance for its AI workloads.
The partnership, backed by a $10 billion commitment as reported by the Financial Times, underscores the intensifying arms race among tech giants to develop specialized hardware tailored to their unique AI needs.
The ChatGPT maker depends on massive computing power to develop its AI products, traditionally supplied by GPUs from Nvidia and specialized processors from AMD. However, booming AI demand has strained supply chains and driven up costs, prompting OpenAI to look inward.
By designing custom silicon together with Broadcom, OpenAI aims to gain control over how its AI models are trained and run, improve efficiency, and secure a steady supply of the chips essential to supporting next-generation AI systems such as GPT-5.
Broadcom, a heavyweight in the semiconductor industry, brings technical expertise to the table. The custom chips, which will be developed in close collaboration with Broadcom, are expected to incorporate advanced AI accelerator architectures optimized specifically for OpenAI’s requirements.
This is a major win for Broadcom's custom AI accelerator business; the chipmaker already provides AI infrastructure and connectivity services to tech giants such as Google, Meta, and Apple.
Financial markets reacted strongly to news of the partnership. Broadcom's shares jumped 16%, adding over $200 billion to the company's market cap, while Nvidia's stock fell 4.3% on concerns over increased competition in AI chip production.
This shake-up highlights the growing fragmentation of the AI hardware ecosystem as new players continue to challenge established GPU dominance.
OpenAI's move into custom chips signals the company's shift toward hardware independence. It follows precedents set by other tech giants, notably Google's TPUs and Meta's AI accelerators, which illustrate how vertically integrating hardware and software can yield competitive advantages.
For OpenAI, controlling its chip design means it can tailor system capabilities and better manage the costs of operating vast AI data centers. It also supports the company's broader plan to rapidly expand its computational infrastructure and double its compute fleet to meet growing demand for powerful AI models and services.
The new chips are expected to be deployed internally to power OpenAI's data centers, not sold commercially.