OpenAI In-House Chip Could Be Ready for Testing This Year

OpenAI is getting close to finalizing its first custom chip design, according to an exclusive report from Reuters that emphasizes the Microsoft-backed AI giant’s goal of reducing its dependency on Nvidia chips. The blueprint for the first-generation OpenAI chip could be finalized as soon as the next few months and sent to Taiwan’s TSMC for fabrication, which will take about six months — “unless OpenAI pays substantially more for expedited manufacturing” — according to the report. Even by industry standards, the training-focused chip is on a fast track to deployment.

“Big tech companies such as Microsoft and Meta have struggled to produce satisfactory chips despite years of effort,” Reuters reports, noting the startup “has made speedy progress on its first design” and could have it ready for testing later this year.

Ars Technica writes that mass production would likely follow in 2026. “OpenAI’s first chip will focus primarily on running AI models (often called ‘inference’) rather than training them, with limited deployment across the company.” Reuters, on the other hand, says the in-house chip is “training-focused.”

SiliconANGLE references earlier reports that OpenAI was working with Broadcom in addition to TSMC. “Exactly what the OpenAI AI chip design will look like is unknown,” SiliconANGLE notes, adding that it has been suggested “OpenAI is not looking to replace GPUs such as those provided by Nvidia Corp. but instead is looking to design a specialized chip that will undertake inference, the process of applying trained models to make predictions or decisions on new data in real-time applications.”

A move to proprietary chips, along with the question raised by Chinese AI firm DeepSeek as to whether artificial intelligence systems can run capably on fewer chips than are currently used, could trigger an overall AI shake-up — one the markets are anticipating.

The chip is being designed by an in-house team of about 40 engineers led by Richard Ho, a former Google TPU lead who is now head of hardware at OpenAI.

“Ho’s team is smaller than the large-scale efforts at tech giants such as Google or Amazon,” Reuters writes, explaining that “a new chip design for an ambitious, large-scale program could cost $500 million for a single version of a chip” and “could double to build the necessary software and peripherals around it.”
