The Cambridge Dictionary has named ‘hallucinate’ its Word of the Year for 2023, a choice that reflects how profoundly the artificial intelligence (AI) industry is reshaping everyday language.
What Happened: ‘Hallucinate’ has taken on a new meaning in the AI field, Business Insider reported. Beyond its traditional sense of perceiving things that do not exist, the term now describes instances in which an AI presents false information as though it were accurate, sometimes with harmful consequences.
Well-known outlets such as Gizmodo, CNET, and Microsoft have drawn criticism for inaccuracies in their AI-generated articles. One lawyer even lost his job after ChatGPT, OpenAI’s chatbot, fabricated non-existent court cases that he then submitted as legal citations.
Earlier this year, Morgan Stanley analysts highlighted ChatGPT’s propensity to fabricate facts, a problem they predict will persist for some years. The issue has raised concerns among business leaders and misinformation experts that AI could amplify online misinformation.
Wendalyn Nichols, the publishing manager of the Cambridge Dictionary, underscored the importance of human critical thinking when using AI tools, stating, “The fact that AIs can ‘hallucinate’ reminds us that humans still need to bring their critical thinking skills to the use of these tools.”
Why It Matters: The term ‘hallucinate’ has become a focal point for tech experts and internet users alike since OpenAI released its revolutionary chatbot, ChatGPT, in November 2022. Alphabet Inc. CEO Sundar Pichai has even acknowledged that the AI industry is grappling with “hallucination problems” that have no clear solution.
In a move toward self-regulation and responsible AI, large language model (LLM) builder Vectara released its open-source Hallucination Evaluation Model in November 2023. The model aims to quantify how far an LLM’s output strays from the facts in its source material, a crucial step toward removing obstacles to enterprise adoption and mitigating risks such as misinformation.
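To make the idea concrete, here is a minimal sketch of how such a model might be called from Python. It assumes the model is published on Hugging Face under the ID vectara/hallucination_evaluation_model, that it can be loaded as a cross-encoder via the sentence-transformers library, and that it returns a per-pair consistency score between 0 and 1, as its model card described at release; the model ID, loading method, and score semantics are assumptions and may have changed since.

```python
# Hypothetical sketch: scoring LLM output for factual consistency with
# its source text using Vectara's open-source Hallucination Evaluation
# Model. Assumes the model is available on Hugging Face as
# "vectara/hallucination_evaluation_model" and loads as a cross-encoder.
from sentence_transformers import CrossEncoder

model = CrossEncoder("vectara/hallucination_evaluation_model")

# Each pair is (source text, model-generated claim). Scores near 1.0
# suggest the claim is supported by the source; scores near 0.0 suggest
# a hallucination.
pairs = [
    ("The company reported revenue of $2.1 billion in Q3.",
     "The company reported revenue of $2.1 billion in the third quarter."),
    ("The company reported revenue of $2.1 billion in Q3.",
     "The company reported a record $5 billion profit in Q3."),
]

scores = model.predict(pairs)
for (source, claim), score in zip(pairs, scores):
    flag = "likely consistent" if score >= 0.5 else "possible hallucination"
    print(f"{score:.2f}  {flag}: {claim}")
```

The 0.5 threshold here is purely illustrative; in practice, a cutoff would need to be calibrated against a labeled benchmark before being trusted to gate production output.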
Engineered by Benzinga Neuro, Edited by Pooja Rajkumari
The GPT-4-based Benzinga Neuro content generation system draws on the extensive Benzinga Ecosystem, including native data and APIs, to create comprehensive and timely stories.