Nvidia Corp. NVDA strengthened its partnership with OpenAI after the ChatGPT maker confirmed it has no plans to deploy Alphabet Inc.'s GOOGL GOOG in-house artificial intelligence chips at scale, despite recent cloud collaboration between the rivals.
What Happened: “We’re proud to partner with OpenAI and continue powering the foundation of their work,” Nvidia wrote on X Monday, citing a Reuters report that OpenAI rejected Google’s tensor processing units (TPUs) for large-scale deployment.
An OpenAI spokesperson confirmed to Reuters on Sunday that the AI lab is conducting early testing with some Google TPUs but has no active plans to use them at scale. The company continues to rely on Nvidia's graphics processing units (GPUs) and Advanced Micro Devices Inc. AMD AI chips to meet growing computational demands.
The statement comes two days after Reuters reported OpenAI signed a cloud services deal with Google Cloud in May, marking a rare partnership between direct AI competitors.
Why It Matters: The agreement provides OpenAI access to Google’s infrastructure to support training and deployment of models like ChatGPT, reducing reliance on Microsoft Corp. MSFT Azure.
Google has expanded external availability of its TPUs, which were historically reserved for internal use, winning customers including Apple Inc. AAPL and AI startups Anthropic and Safe Superintelligence. The custom chips give Google a cost advantage and are second only to Nvidia's in performance, according to D.A. Davidson analyst Gil Luria.
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
Photo courtesy: Jack Hong / Shutterstock.com