Nvidia AI Dominance Challenged By Asian Startups Betting On Energy-Efficient And Cost-Effective Silicon — 'Inference' And 'Training' Chips Emerge As Key Focus

In a bid to challenge NVIDIA Corp. NVDA’s AI dominance, Asian startups are developing more energy-efficient and cost-effective chips for specific artificial intelligence applications.

What Happened: Asian startups are challenging Nvidia’s AI dominance by developing more energy-efficient and cost-effective chips for specific AI applications. These startups are targeting the gap in the market left by the high energy consumption and bulky design of Nvidia’s GPUs, reported Nikkei Asia on Friday.

These startups are focusing on two types of AI chips: “inference” chips, used to run existing AI models, and “training” chips, high-powered data-processing components used to develop new AI models.

While Nvidia’s GPUs continue to dominate the AI landscape, the startups believe that the chips’ high energy consumption, bulky design and cost leave a gap in the market they can fill. Preferred Networks (PFN) CEO Toru Nishikawa stated, “No one has come up with the perfect chip architecture for inference.” PFN is developing chips that aim to be more efficient and less costly than Nvidia’s offerings.

Nvidia’s GPUs are primarily used for training AI models, but their high cost and energy consumption make them impractical for devices like laptops and wearables. Analysts, including Kazuhiro Sugiyama from Omdia, believe that the demand for on-device AI will rise, encouraging new entrants to the market.

Startups such as EdgeCortix, led by Sakyasingha Dasgupta, are focusing on issues like the “memory wall” problem, the bottleneck that arises when data cannot move between memory and processors fast enough to keep compute units busy, to create more streamlined and energy-efficient AI chips. These efforts are part of a broader strategy to cater to the growing demand for AI in industrial applications and robotics, particularly in Asia, according to the report.

“Nvidia’s GPU is mainly suited for training, but we are seeing more newcomers developing chips which can target both training and inference,” Sugiyama said.


Other companies entering the market include U.S.-based SambaNova Systems, backed by SoftBank Group’s SFTBY SFTBF Vision Fund; Tenstorrent, led by former Intel Corp INTC engineer Jim Keller; and the British company Graphcore, recently acquired by SoftBank.

Big tech companies like Alphabet Inc.’s GOOG Google, Meta Platforms Inc. META, and Amazon.com Inc.’s AMZN Amazon Web Services are also joining in, along with Nvidia’s rival Advanced Micro Devices, Inc. AMD.

Why It Matters: The competition between Nvidia and emerging Asian startups is heating up as the AI chip market continues to expand. Recently, Eric Schmidt, former CEO of Google, highlighted Nvidia as a major player in the AI sector, noting that large tech companies are planning significant investments in Nvidia-based AI data centers, potentially costing up to $300 billion.

Meanwhile, SoftBank has faced setbacks in its efforts to rival Nvidia with its own AI chip production. Negotiations with Intel reportedly fell through due to Intel’s inability to meet production demands, leading SoftBank to turn to Taiwan Semiconductor Manufacturing Co. TSM, a key Nvidia supplier.

This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote
