ANALYSIS: DeepSeek's AI Innovations Should Help Nvidia, Not Hurt It

Zinger Key Points
  • Although semiconductor stocks fell on news of a new AI model from China, the development actually proves there is huge demand for AI chips.
  • Experts say DeepSeek's AI costs are much higher than $5.6 million, contrary to mainstream news coverage.

The U.S. markets have been in a frenzy over the past few days thanks to a new AI model from Chinese company DeepSeek.

The market reaction has been challenging for chipmakers and semiconductor companies like Nvidia Corp (NVDA), Advanced Micro Devices Inc (AMD), Arm Holdings (ARM), and others.

Nvidia experienced a historic drop in its stock price, plummeting nearly 17% and erasing over $589 billion in market capitalization—the largest single-day market cap loss for any company in U.S. history. Broadcom also saw a significant decline, with shares falling approximately 17%, while AMD's stock price dropped by 4.89%. ARM faced a decline of about 8% on Monday, contributing to a broader selloff across the semiconductor industry.

Despite the buzz around DeepSeek's latest model, R1, the news is largely positive for chipmakers like Nvidia. The market sell-off appears unwarranted and presents a potential long-term buying opportunity for investors.

DeepSeek is a secretive AI company based in China, backed by its parent quant fund, High-Flyer. The company's whole thesis is to build Artificial General Intelligence (AGI) and open-source it for general use. While it has been building models and AI technology for a few years, it only recently captured the interest of the American press.

The company made headlines for two reasons: 1) the press misreported that DeepSeek trained a reasoning AI model for roughly $5.6 million, and 2) investors took DeepSeek's ability to build a competitive AI model so cheaply to mean that long-run demand for the semiconductor chips used for AI, like the ones Nvidia sells, would be lower.

In fact, most of the news coming out of DeepSeek suggests that semiconductors and chips will only get more valuable over time. Chinese companies are already having difficulty keeping up with chip demand after U.S. sanctions prevented advanced AI chips from being sold to China. DeepSeek founder Liang Wenfeng said that the biggest limitation for his company is the availability of high-end AI chips. "Money has never been the problem for us; bans on shipments of advanced chips are the problem," he said, according to ChinaTalk.

Also Read: DeepSeek Selloff Is A Correction, Not Start Of ‘Sustained Bear Market’: Goldman Sachs

AI chips were traditionally used to train large language models and give them knowledge of the vast amount of information on the internet. But that was a while ago. Today, AI companies mainly use chips for inference—the computing required to generate answers or outputs for users in real time.

For instance, when you use tools like ChatGPT or Claude, the responses you receive are powered by inference compute. If the answers are slow, it's often because the system is handling a high volume of requests, and there's limited compute available for your specific query.

Even if demand for chips used to train models were to fade, every increase in AI usage drives more demand for inference compute.
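To make that scaling concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are purely hypothetical assumptions for illustration (the daily query volume and GPU time per query are not figures from any company); the point is only that inference demand grows in direct proportion to usage.

```python
# Illustrative sketch only: the query volume and per-query GPU time below are
# hypothetical assumptions, not figures reported by any AI company.

queries_per_day = 100_000_000      # assumed daily user queries across an AI service
gpu_seconds_per_query = 0.5        # assumed GPU time needed to answer one query

# Total inference compute needed per day, expressed in GPU hours.
gpu_hours_per_day = queries_per_day * gpu_seconds_per_query / 3600
print(f"Inference demand: ~{gpu_hours_per_day:,.0f} GPU hours per day")

# Doubling usage doubles the GPU hours required, which is why growing AI
# adoption translates directly into more demand for inference chips.
```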

And while most of the media says that DeepSeek was able to build a competitive AI model for only $5.6 million, that appears to be incorrect based on expert opinions and DeepSeek's own disclosures, according to Ray Wang, a Washington-based analyst specializing in U.S.-China economic and technology competition.

"I think DeepSeek is actually quite explicit when it comes to the cost of training their model, as laid out in their Technical Paper of V3 and R1. The claim of 5.6 million for DeepSeek for their model is wrong as it was written like this in their technician paper," Wang told Benzinga.

The highly technical paper explained that training DeepSeek's model took only 2.788 million GPU hours. At $2 per GPU hour for Nvidia's H800 chips (widely used for AI training), that works out to around $5.6 million. The figure doesn't account for various associated costs—from R&D and talent to long-term operational costs, failed experiments, and general infrastructure costs like electricity.
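For readers who want to check the arithmetic, here is a minimal sketch in Python using the figures cited above from DeepSeek's technical paper (2.788 million GPU hours at an assumed $2 per H800 GPU hour); the variable names and rounding are illustrative.

```python
# Back-of-the-envelope check of the headline training-cost figure.
# The GPU-hour count and $2/hour rate come from DeepSeek's technical paper
# as cited above; everything else here is illustrative.

gpu_hours = 2_788_000          # reported H800 GPU hours for the final training run
cost_per_gpu_hour = 2.00       # assumed rental price per H800 GPU hour, in USD

training_cost = gpu_hours * cost_per_gpu_hour
print(f"Estimated compute cost: ${training_cost:,.0f}")  # ~$5,576,000, i.e. roughly $5.6 million

# Note: this covers only the GPU time for the final training run. It excludes
# R&D, salaries, failed experiments, electricity, and other infrastructure costs.
```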

This is all to say that the fear that DeepSeek's innovations will crush demand for chips in the upcoming AI revolution is vastly overstated. DeepSeek proves you can get much more juice out of high-end AI chips than previously thought, making them more efficient and valuable. If anything, demand for AI chips from companies like Nvidia, Broadcom, and ARM is only going to increase as AI companies figure out new and innovative uses for them.


Image created using artificial intelligence via Midjourney.

