A Norwegian man has taken legal action against Sam Altman-led OpenAI alleging that ChatGPT falsely claimed he murdered his children.
What Happened: Arve Hjalmar Holmen discovered the alleged fabrication after asking ChatGPT, "Who is Arve Hjalmar Holmen?"
The AI chatbot responded with an invented story that he had murdered two sons, attempted to kill a third, and was sentenced to 21 years in prison.
"Some think that there is no smoke without fire—the fact that someone could read this output and believe it is true is what scares me the most," Holmen said, adding that the hallucination was damaging to his reputation, reported BBC.
The digital rights organization Noyb, which is representing Holmen in the complaint, argues that ChatGPT's response is defamatory and violates European data protection laws regarding the accuracy of personal information.
"You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true," said Noyb lawyer Joakim Söderberg.
Microsoft Corp. MSFT-backed OpenAI responded by saying the issue stemmed from an older version of ChatGPT and that newer models, including those with real-time web search, offer improved accuracy, the report noted.
"We continue to research new ways to improve the accuracy of our models and reduce hallucinations," the company said in a statement.
Why It's Important: The case highlights growing concerns over AI hallucinations—when generative models produce false yet convincing content.
Previously, Yann LeCun, Meta Platforms Inc.’s META chief AI scientist and “Godfather of AI,” said that AI hallucinations stem from the models’ autoregressive prediction process: each time the AI generates a word or token, there is a chance it deviates from a logical response, gradually leading the conversation astray.
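LeCun’s point about compounding deviations can be illustrated with a back-of-the-envelope sketch. The figures below are purely illustrative assumptions, not measurements of any actual model: if each generated token independently stays “on track” with probability p, the chance that an n-token answer contains no deviation at all is p raised to the power n, which shrinks quickly as answers grow longer.

```python
# Illustrative sketch of the compounding-error intuition (assumed numbers only).
# If each token independently stays "on track" with probability p, the chance
# that an n-token answer contains no deviation at all is p ** n.

def prob_no_deviation(p_per_token: float, num_tokens: int) -> float:
    """Probability that every one of num_tokens tokens stays on track."""
    return p_per_token ** num_tokens

for n in (10, 50, 200):
    print(f"p=0.99 per token, {n} tokens -> {prob_no_deviation(0.99, n):.2%}")
# Roughly 90% at 10 tokens, 60% at 50 tokens, and 13% at 200 tokens.
```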
In 2023, Sundar Pichai, CEO of Alphabet Inc. GOOG GOOGL, also acknowledged that AI technology, in general, is still grappling with hallucination problems.
Earlier this year, Apple Inc. AAPL temporarily halted its Apple Intelligence news summary feature in the U.K. after it generated inaccurate headlines and presented them as factual news alerts.
Google’s Gemini AI has also struggled with hallucinations. Last year, it bizarrely advised using glue to keep cheese on pizza and claimed that geologists suggest people consume one rock per day.