
AI Could Harm Medical Students' 'Critical Thinking Capacities,' Experts Warn. 'What Happens if the Servers or AI Services Go Down?'

Artificial intelligence could harm medical students' "critical thinking capacities" if they're not taught how to use the technology properly, a group of medical experts said in a recent op-ed.

"The goal of using AI to augment education, rather than letting it erode independent reasoning, is a worthy pursuit," the authors, who are professors at the University of Missouri's School of Medicine, said in the Dec. 1 op-ed in medical journal BMJ Evidence-Based Medicine. "As AI is disrupting traditional learning and evaluation methods, adjustments to medical school and training curricula are necessary."


Medical schools have "largely insufficient institutional policies and guidance" on the use of AI in students' homework and training, according to the op-ed. Unchecked AI usage could cause medical professionals to become overly reliant on the technology and lose key skills, it added.

"What happens if the servers or AI services go down?" the BMJ Evidence-Based Medicine op-ed said. "The impact of this is particularly ominous for learners who are working on developing the skill in the first place, as they are denied the opportunity to do so in the process."

Students should learn how to use AI tools effectively and how to verify their output, the essay said.

"Medical training should include practice in rejecting poor AI advice and in explaining why it is unsafe to follow," it said. 


AI in healthcare 

Artificial intelligence has quickly become mainstream in many medical offices and hospitals. Two-thirds of physicians used AI in their practices in 2024, up from 38% the prior year, according to the American Medical Association. 

At the same time, AI adoption in healthcare lags behind other industries, the World Economic Forum said in a report earlier this year. One reason for the slow rate of adoption is "increased distrust" in AI's abilities and effectiveness, according to the report.

That distrust is warranted because AI sometimes fabricates sources, the BMJ op-ed authors said.

"Hallucinating confident falsehoods and sources remains a frequent failure mode for AI models," they said.

Large language models are "highly susceptible" to generating false and potentially dangerous information when used in a clinical setting, according to a study published in Communications Medicine earlier this year. 

Such instances were pushed into the spotlight earlier this year when a report released by U.S. Health Secretary Robert F. Kennedy Jr. cited nonexistent studies.


A plan for AI in healthcare 

Medical students should be assessed on how they use AI in clinical settings and not just the end result, the op-ed authors said.

"This can be done by asking the students to ‘show their work', provide a paper trail and even submit the LLM prompts they used along with written rationales for accepting or rejecting the AI's output," they said. 

Students should also be assessed in an AI-free setting to ensure they hone fundamental skills, the BMJ Evidence-Based Medicine op-ed added.

"This may be feasible, and especially important, for bedside communication, physical examination, teamwork and professional judgement," it said.

Additionally, AI literacy should be included in their coursework, according to the essay.

"Medical trainees may not need to be fully emerged into the technical data engineering details and training pipelines for AI models," it said, "but they should understand that process in principle and grasp the concepts underpinning its strengths and weaknesses."


