US Lawyers Fined for Using Citations Faked by ChatGPT

The lawyers cited six chatbot-generated legal cases in an aviation injury claim, all of which turned out to be wholly fabricated.

A Manhattan district judge has ordered two lawyers – Steven Schwartz and Peter LoDuca – and their law firm Levidow, Levidow & Oberman to pay a $5,000 fine for submitting fictitious legal research created by ChatGPT.

Even when the lawyers asked the chatbot to verify the cases, ChatGPT claimed the fabricated citations were legitimate.

The fake citations were submitted in an aviation injury claim, and the fine comes hot on the heels of a wider conversation about AI plagiarism and chatbots’ potential to spread fake news and disinformation.

ChatGPT Doubled Down on its Fake Stories

Schwartz admitted he’d used ChatGPT to assist with the legal brief in passenger Roberto Mata’s case against Colombian airline Avianca. Mata’s claim of injury from a refreshment trolley was originally dismissed because the statute of limitations had expired. His lawyers, however, argued that the lawsuit should continue, and referenced several previous court cases that supported their argument.

These court cases, however, didn’t exist. They had been suggested by ChatGPT and either involved airlines that were entirely made up or misidentified the judges involved. Schwartz wasn’t able to confirm the sources of the cases using his law firm’s usual methods, but included them as citations regardless.

In a written opinion, Judge P. Kevin Castel saw no wrongdoing in the use of artificial intelligence for legal assistance, but highlighted lawyers’ duty to ensure their filings are accurate.

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance, but existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” – Judge P. Kevin Castel

According to the law firm’s statement, the lawyers “respectfully” disagreed with the ruling: “We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”

Schwartz – who has practised law in New York for over 30 years – had apparently never used ChatGPT before and was unaware that its answers could be fake. He had asked the chatbot to verify that the cases were real, and it falsely claimed they were.

LoDuca’s lawyer said they were reviewing the decision. In a separate written opinion, the judge threw out the Mata vs Avianca lawsuit altogether, confirming that the statute of limitations had indeed expired.

A Stark Warning to ChatGPT Users

Despite a few red faces, the low stakes of this particular lawsuit and the modest fine imposed are perhaps a blessing. Had the case been more serious, the repercussions could have been far worse. Instead, it can serve as a warning to the rest of the legal industry.

And whatever the industry, this story is a useful reminder of the inaccuracies AI chatbots are prone to, and the dangers of relying on them blindly.

Chatbots are trained on a wealth of internet data, and their sources aren’t always available. Pair this with the fact that they work more like a predictive text tool, estimating the most likely word to come after a user’s prompt, and you can see how vulnerable they are to factual errors.
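
For the technically curious, here is a minimal sketch of that next-word prediction principle, using the small, openly available GPT-2 model via Hugging Face’s transformers library as a stand-in for larger chatbots. The prompt and model choice are purely illustrative.

# Minimal sketch: how a language model picks the "most likely next word".
# GPT-2 and the prompt below are illustrative stand-ins for larger chatbots.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The court ruled that the airline was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores for every possible next token, given the prompt so far
    logits = model(**inputs).logits[0, -1]

# Convert scores to probabilities and show the five likeliest next tokens.
# The ranking reflects statistical likelihood, not factual accuracy.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.1%}")

The model simply ranks plausible continuations; nothing in that process checks whether the resulting sentence, or court citation, is actually true.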

What Does the Future of AI in the Workplace Look Like?

Back in April, ChatGPT fabricated a sexual harassment claim and named a real American law professor as the accused. Similarly, Chinese authorities arrested a man for allegedly using the chatbot to create fake news articles, including one reporting a made-up fatal train crash.

Concerns about AI’s ability to create accurate content, and about how difficult AI-generated fakes are to detect, continue to grow, and calls for government intervention have been made.

Apple and Samsung have already blocked their employees from using AI platforms at work. Italy has banned ChatGPT outright over data security concerns, and Germany is in talks to do the same. Earlier this year, the Biden administration said it was looking into AI’s impact on national security and education.


Written by:

Ellis Di Cataldo (MA) has over nine years’ experience writing about, and for, some of the world’s biggest tech companies. She’s been the lead writer across digital campaigns, always-on content and worldwide product launches, for global brands including Sony, Electrolux, Byrd, The Open University and Barclaycard. Her particular areas of interest are business trends, startup stories and product news.
