Amid the surging tide of AI-generated content, a research report has turned the spotlight on industry players like Meta Platforms Inc. META, OpenAI, Anthropic, and Cohere, revealing which of them is the worst offender.
What Happened: With fears of AI hallucination and the misinformation it breeds on the rise, especially in the lead-up to the 2024 U.S. Presidential Election, a study conducted by Arthur AI, a machine learning monitoring platform, has provided a critical evaluation.
The study took a novel approach, focusing on rates of hallucination: instances in which AI models inadvertently fabricate information and present it as factual.
Such hallucinations can have wide-ranging consequences, as demonstrated when ChatGPT cited “bogus” cases in a New York federal court filing.
See Also: Biden’s Urgent Push To Regulate AI Technology: ‘We’ve Got to Move Fast Here’
The researchers designed experiments that challenged the AI models in various categories, including combinatorial mathematics, U.S. Presidents, and Moroccan political leaders.
OpenAI’s GPT-4 emerged as the frontrunner in these trials, outperforming its predecessor, GPT-3.5. The study noted that GPT-4 hallucinated significantly less, up to 50% less on mathematical questions.
In stark contrast, Meta’s Llama 2 exhibited a higher overall tendency for fabrication, hallucinating more than Anthropic’s Claude 2.
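For illustration, here is a minimal sketch of how such a head-to-head comparison might be scored. The question set, the crude string-match grading, and the dummy model are hypothetical stand-ins, not Arthur AI’s actual benchmark or methodology:

```python
# Minimal sketch of scoring hallucination rates across models.
# QUESTIONS and the string-match grading are hypothetical stand-ins,
# not Arthur AI's actual benchmark or methodology.
from typing import Callable

QUESTIONS = [  # (prompt, ground-truth answer) pairs
    ("Who was the 16th U.S. President?", "Abraham Lincoln"),
    ("How many ways can you choose 2 items from 5?", "10"),
]

def hallucination_rate(ask: Callable[[str], str]) -> float:
    """Fraction of answers that fail a crude ground-truth check."""
    wrong = sum(
        truth.lower() not in ask(prompt).lower()
        for prompt, truth in QUESTIONS
    )
    return wrong / len(QUESTIONS)

# Usage with a dummy "model" that answers every prompt the same way:
dummy = lambda prompt: "Abraham Lincoln"
print(f"dummy hallucination rate: {hallucination_rate(dummy):.0%}")  # 50%
```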
The researchers also conducted a secondary experiment, assessing the AI models’ tendency to hedge their responses with cautious phrases to mitigate risks of misinformation.
GPT-4 exhibited a 50% increase in hedging compared to GPT-3.5, while Cohere’s AI model bucked the trend, opting not to hedge at all. Meanwhile, Claude 2 showcased self-awareness, answering only questions it had the proper training data to support.
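As a rough illustration of how hedging could be measured, the sketch below counts responses that contain cautionary phrases. The phrase list and sample responses are assumptions for demonstration, not the study’s actual criteria:

```python
# Rough sketch of measuring a model's hedge rate. The phrase list is an
# assumption for illustration; the study's actual criteria are not shown here.
HEDGE_PHRASES = (
    "as an ai model",
    "i cannot provide",
    "i'm not certain",
)

def hedge_rate(responses: list[str]) -> float:
    """Fraction of responses containing at least one hedging phrase."""
    if not responses:
        return 0.0
    hedged = sum(
        any(phrase in r.lower() for phrase in HEDGE_PHRASES)
        for r in responses
    )
    return hedged / len(responses)

print(hedge_rate([
    "As an AI model, I cannot provide opinions on elections.",
    "The answer is 42.",
]))  # -> 0.5
```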
While the findings offer valuable insights, a spokesperson for Cohere pushed back on the results, asserting the effectiveness of the company’s retrieval-augmented generation technology for providing verifiable citations, CNBC reported.
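Cohere’s actual system is not described in the report, but the general retrieval-augmented generation pattern it refers to, fetching supporting documents first and then answering with citations to them, can be sketched roughly as follows. The toy document store and keyword retrieval are hypothetical:

```python
# Toy sketch of the general retrieval-augmented generation pattern:
# retrieve supporting documents, then ground the answer in them with
# citations. Illustrative only; this is not Cohere's implementation.
DOCS = {
    "doc1": "GPT-4 was released by OpenAI in March 2023.",
    "doc2": "Llama 2 was released by Meta in July 2023.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword-overlap retrieval over the document store."""
    words = set(query.lower().split())
    return [doc_id for doc_id, text in DOCS.items()
            if words & set(text.lower().split())]

def answer_with_citations(query: str) -> str:
    """Build a grounded answer; a real system would call an LLM here."""
    sources = retrieve(query)
    context = " ".join(DOCS[s] for s in sources)
    return f"{context} [sources: {', '.join(sources) or 'none'}]"

print(answer_with_citations("When was Llama 2 released"))
```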
Previously, Alphabet Inc.’s GOOG GOOGL CEO Sundar Pichai acknowledged that AI platforms broadly struggle with “hallucination problems” that no one has yet been able to explain.
Why It’s Important: Meanwhile, the Associated Press has released guidelines for its reporters, encouraging journalists to experiment with AI tools like ChatGPT while exercising caution before using AI-generated material in published stories.
Image Credits – Shutterstock
Check out more of Benzinga’s Consumer Tech coverage by following this link.
Read Next: ChatGPT Is Fabricating Information: Why That’s a Problem For Every Industry, Not Just Journalism
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.