Study: ChatGPT Generates False Information About Medications

By Lucía Tedesco via El Planteo

The launch of ChatGPT transformed many sectors, yet its effectiveness, utility and reliability remain under scrutiny. Researchers have begun running studies and tests on the OpenAI chatbot to gauge how well it actually performs.

Two studies, including clinical demonstrations, were presented to the American Society of Health-System Pharmacists (ASHP). Both examined the use of this artificial intelligence to obtain medication information.

What Long Island University Says About ChatGPT

This study examined the chatbot's ability to answer medication-related questions. Researchers analyzed 45 questions collected between January 2022 and April 2023, comparing the chatbot's responses with those found in professional literature, and developed a checklist to compare the two sources of information.

The results revealed that ChatGPT provided satisfactory answers to only 10 of 39 evaluated questions, showing deficiencies in both accuracy and completeness. For the remaining 29 questions, it offered only general information or no response at all. Furthermore, the references cited in its answers turned out not to exist.

How The Research At Iwate Medical University Was Conducted

This observational, cross-sectional study compared information on drug side effects provided by ChatGPT with that from Lexicomp. Thirty FDA-approved drugs were selected and queried in ChatGPT between April and June 2023, and the responses were classified as “accurate,” “partially accurate,” or “inaccurate.”

Of ChatGPT's 30 responses, 26 were inaccurate, 2 were partially accurate, and only 2 were accurate. The chatbot omitted some common side effects listed in Lexicomp. On the other hand, it recommended consulting a healthcare provider for more information, and its responses were easy to understand.

The Iwate study relied on a single pharmacist's evaluation, which could limit the generalizability of its conclusions. Nevertheless, according to Drug Topics, the researchers cautioned that “Healthcare professionals and consumers should be cautious of using ChatGPT to obtain medication-related information.” They also noted that a larger, more controlled study could yield more conclusive results.

Both studies underscore the need for caution when using ChatGPT to obtain medical and pharmaceutical information. The Long Island University researchers concluded that ChatGPT would need to be trained on a more precise and controlled dataset to produce more accurate results and become a viable medical education and medication information tool for patients and healthcare professionals.
