ChatGPT Causes Legal Chaos After Making Up Cases To Ease Lawyer's Workload

In a startling turn of events, a lawyer’s use of OpenAI’s ChatGPT for legal assistance has backfired, as the AI-powered chatbot fabricated nonexistent cases.

What Happened: In Mata v. Avianca, a case in which a customer sued the airline over a knee injury caused by a serving cart, the use of ChatGPT took a bizarre turn.

Mata’s lawyers objected to Avianca’s attempt to dismiss the case and submitted a brief containing numerous purported court decisions generated by the AI-powered chatbot. 

See Also: How To Use ChatGPT On Mobile Like A Pro

At first glance, this may seem like a funny incident, an honest mistake, or both, but what transpired next was nothing short of outlandish.

When pressed to produce the actual cases in question, the plaintiff’s lawyer once again sought assistance from ChatGPT, leading the AI to invent elaborate details of those made-up cases, which were then captured as screenshots and incorporated into the legal filings.

Adding to the astonishing sequence, the lawyer went so far as to ask ChatGPT to verify the authenticity of one of the cases. The AI responded affirmatively, and screenshots of that confirmation were included in yet another filing.

In response to this extraordinary situation, the judge has scheduled a hearing for next month to consider imposing sanctions on the lawyers involved.

It is pertinent to note that last month, it was reported that an Indian judge turned to ChatGPT for assistance while deciding whether to grant bail to a murder suspect.

Why It’s Important: What makes the entire case so bizarre is that it is well known that OpenAI’s ChatGPT and other generative AI models, such as Microsoft Corp’s MSFT Bing AI and Alphabet Inc.’s GOOG GOOGL Google Bard, tend to hallucinate, presenting made-up facts with utmost conviction.

Previously, a Reddit user highlighted similar issues, noting that problems arise when people become excessively self-assured about the abilities of AI tools like ChatGPT and disregard the potential dangers of seeking advice from a chatbot in sensitive fields like medicine or law.

The worst part is that even tech-savvy individuals can fall prey to the well-documented hallucinations that ChatGPT is notorious for.

Check out more of Benzinga’s Consumer Tech coverage by following this link

Read Next: 697K Downloads In 8 Days! OpenAI’s ChatGPT App Skyrockets To Success In US
