Alphabet Inc. (NASDAQ: GOOG) (NASDAQ: GOOGL) CEO Sundar Pichai, OpenAI co-founder and CEO Sam Altman and other tech experts might be worried about ChatGPT’s hallucinations, but a new study suggests the chatbot is far more empathetic than actual doctors.
What Happened: Researchers from the University of California, San Diego, Johns Hopkins and other universities collaborated on a study in which OpenAI’s ChatGPT was posed 195 questions extracted from the AskDocs subreddit.
The research team, consisting of healthcare professionals specializing in oncology, infectious disease, pediatrics and internal medicine, scored the responses of both the chatbot and verified physicians on Reddit on a five-point scale, evaluating the “quality of information” and “empathy or bedside manner” provided.
See Also: Say Hi To ChatGPT’s ‘Incognito Mode’: OpenAI Strives To Give Users More Data Control
According to the study’s results, the clinicians favored the chatbot’s responses in 78.6% of the 585 evaluations (each of the 195 questions was judged in triplicate), with the chatbot’s responses rated 3.6 times higher in quality and 9.8 times higher in empathy than those of human physicians.
The study considered only physicians’ responses because the researchers expected those answers to be generally superior to responses from other healthcare professionals or laypersons.
“We do not know how chatbots will perform responding to patient questions in a clinical setting, yet the present study should motivate research into the adoption of AI assistants for messaging, despite being previously overlooked,” the study stated.
Why It’s Important: Despite widespread awareness of the limitations of AI chatbots, including their propensity for hallucinations and jailbreaks, many individuals remain overconfident in ChatGPT’s abilities and overlook the risks of seeking advice or information from such chatbots.
While the study’s results are optimistic, users need to understand that even GPT-4, the latest ChatGPT model, can make mistakes and misdiagnose, which is the primary reason doctors are cautious about letting the chatbot operate on its own.
For the unversed, hallucination in the AI ecosystem refers to a chatbot giving confident answers that are not justified by its training data.
Read Next: Samsung Reportedly Bans OpenAI’s ChatGPT, Google Bard, Bing AI: What Are They Afraid Of?