OpenAI considers in-house chip manufacturing amid global shortage

OpenAI, the company behind ChatGPT, is reportedly exploring making its own AI chips amid a worldwide shortage of the in-demand components.

Sources familiar with the matter disclosed to Reuters that OpenAI is actively exploring options, including evaluating an undisclosed company for potential acquisition to bolster its AI chip-making ambitions.

The shortage of chips, a fundamental building block of AI systems, has prompted OpenAI to consider several strategies: producing chips internally, forging closer ties with its primary chip supplier NVIDIA, and diversifying its chip providers.

Earlier this year, OpenAI CEO Sam Altman voiced his concerns about the chip scarcity, which has resulted in delays to the company’s projects.

In a since-deleted blog post, Humanloop CEO Raza Habib recounted his experience of sitting down with Altman:

“A common theme that came up throughout the discussion was that currently OpenAI is extremely GPU-limited and this is delaying a lot of their short-term plans. The biggest customer complaint was about the reliability and speed of the API.

Sam acknowledged their concern and explained that most of the issue was a result of GPU shortages. The longer 32k context can’t yet be rolled out to more people. OpenAI haven’t overcome the O(n^2) scaling of attention and so whilst it seemed plausible they would have 100k – 1M token context windows soon (this year), anything bigger would require a research breakthrough.

The finetuning API is also currently bottlenecked by GPU availability. They don’t yet use efficient finetuning methods like Adapters or LoRA and so finetuning is very compute-intensive to run and manage.

Better support for finetuning will come in the future. They may even host a marketplace of community contributed models. Dedicated capacity offering is limited by GPU availability.”
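
For readers unfamiliar with the O(n^2) remark in the quote above: standard self-attention compares every token with every other token, so the cost of an attention layer grows with the square of the context length. A rough back-of-the-envelope sketch follows (the model width of 4,096 is a hypothetical figure, not an OpenAI specification):

```python
# Rough sketch of why self-attention cost grows quadratically with context
# length: the score matrix compares all n tokens against all n tokens.

def attention_flops(n_tokens: int, d_model: int = 4096) -> int:
    # d_model is a hypothetical model width, chosen only for illustration.
    # Q @ K^T produces an n x n score matrix (~n^2 * d multiply-adds),
    # and softmax(scores) @ V costs roughly the same again.
    return 2 * n_tokens * n_tokens * d_model

for n in (32_000, 100_000, 1_000_000):
    print(f"{n:>9,} tokens -> ~{attention_flops(n):.2e} FLOPs per attention layer")
```

Going from 100k to 1M tokens multiplies the attention cost by roughly a hundred, which is why the quote treats anything much beyond that range as requiring a research breakthrough rather than simply more GPUs.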
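
The efficient finetuning methods Habib mentions, Adapters and LoRA, cut costs by training only a small number of extra parameters instead of every weight in the model. A minimal sketch of the arithmetic behind LoRA’s saving (the layer size and rank are illustrative values, not details of OpenAI’s finetuning API):

```python
# Illustrative parameter count for LoRA-style finetuning: instead of updating
# a full d x k weight matrix, train a low-rank pair B (d x r) and A (r x k).

d, k, r = 4096, 4096, 8            # hypothetical layer size and LoRA rank

full_params = d * k                # full finetuning touches every weight
lora_params = d * r + r * k        # LoRA trains only the low-rank factors

print(f"full finetune  : {full_params:,} trainable parameters per layer")
print(f"LoRA (rank {r})  : {lora_params:,} trainable parameters per layer")
print(f"reduction      : ~{full_params / lora_params:.0f}x")
```

Because each finetune is stored as a small low-rank delta on top of shared base weights, such methods let far more customisation run on the same hardware, which is why they matter when GPUs are scarce.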

If OpenAI proceeds with its plan to manufacture its own chips, it will join the ranks of industry giants like Google and Amazon, which already design chips in-house. The move could reduce OpenAI’s dependency on external suppliers and help the company meet the escalating demand for specialised AI chips.

Since the public launch of ChatGPT in November last year, demand for specialised AI chips has skyrocketed, driving a surge in NVIDIA’s share price as companies rush to procure the sought-after hardware.

OpenAI has not made a final decision regarding the acquisition or in-house chip production, and discussions are ongoing to address the pressing chip shortage and sustain the company’s AI initiatives.

(Photo by Andrew Neel on Unsplash)
