OpenAI says it has no plans to use Google's in-house chip

OpenAI said it has no active plans to use Google's in-house chip in its products, two days after Reuters and other news outlets reported that the AI lab was considering its rival's artificial intelligence chips to meet growing demand.

An OpenAI spokesperson said on Sunday that while the lab is running early tests with some of Google's tensor processing units (TPUs), it has no plans to deploy them at scale for now.

Google declined to comment.

While it is common for AI labs to test different chips, adopting new hardware at scale can take much longer and requires different architecture and software support. OpenAI currently uses Nvidia's graphics processing units (GPUs) and AMD's AI chips to meet its growing demand. It is also developing its own chip, which is on track to reach the "tape-out" milestone this year, the point at which a chip's design is finalised and sent for manufacturing.

OpenAI has signed up for Google Cloud services to meet its growing need for computing capacity, Reuters exclusively reported earlier this month, marking a surprising collaboration between two prominent rivals in the AI sector. Most of OpenAI's computing power comes from GPU servers operated by the neocloud company CoreWeave (CRWV.O).

Google has been expanding external access to its in-house AI chips, the TPUs, which were historically reserved for internal use. That push has helped Google win customers including big tech player Apple (AAPL.O), as well as startups such as Anthropic and Safe Superintelligence, two ChatGPT competitors founded by former OpenAI executives.
