Privacy Breach Risk in Nvidia's AI Technology Stirs Concern Among Researchers

Researchers found that an Nvidia Corp NVDA artificial intelligence software feature is susceptible to manipulation leading to the compromise of private information.

Robust Intelligence analysts found that the “NeMo Framework,” which allows developers to work with a range of large language models, could easily break through so-called safety guardrails. Financial Times reports that large language models power generative AI products like chatbots.

After using the Nvidia system on their own data sets, it hardly took time for the analysts to get language models to bypass restrictions.

When the researchers told Nvidia’s system to swap the letter ‘I’ with ‘J,’ it prompted the technology to release personally identifiable information from a database. The researchers found they could jump safety controls in other ways.

Following the test results, the researchers have advised their clients to avoid Nvidia’s software product. 

The chipmaker informed Robust Intelligence that it had fixed one of the root causes behind the issues the analysts had raised.

Leading AI companies like Alphabet Inc GOOG GOOGL Google, and Microsoft Corp MSFT-backed OpenAI have released chatbots powered by their language models, instituting guardrails. 

Bea Longworth, Nvidia’s head of government affairs, emphasized the need to build public trust in AI technology by the industry at a TechUK conference this week.

Price Action: NVDA shares traded higher by 1.19% at $389.70 premarket on the last check Friday.

Photo by Mizter_X94 via Pixabay

Market News and Data brought to you by Benzinga APIs
Comments
Loading...
Posted In: NewsTop StoriesTechBriefs
Benzinga simplifies the market for smarter investing

Trade confidently with insights and alerts from analyst ratings, free reports and breaking news that affects the stocks you care about.

Join Now: Free!