
Lawyers blame ChatGPT for tricking them into citing bogus case law

A judge is deciding whether to sanction two lawyers who blamed ChatGPT for tricking them into including fictitious legal research in a court filing

Larry Neumeister
Friday 09 June 2023 04:25 BST


Two apologetic lawyers responding to an angry judge in Manhattan federal court blamed ChatGPT Thursday for tricking them into including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases that Schwartz thought were real, but were actually invented by the artificial intelligence-powered chatbot.

Schwartz explained that he used the groundbreaking program as he hunted for legal precedents supporting a client's case against the Colombian airline Avianca for an injury incurred on a 2019 flight.

The chatbot, which has fascinated the world with its production of essay-like answers to prompts from users, suggested several cases involving aviation mishaps that Schwartz hadn't been able to find through usual methods used at his law firm.

The problem was that several of those cases weren't real or involved airlines that didn't exist.

Schwartz told U.S. District Judge P. Kevin Castel he was “operating under a misconception ... that this website was obtaining these cases from some source I did not have access to.”

He said he “failed miserably” at doing follow-up research to ensure the citations were correct.

“I did not comprehend that ChatGPT could fabricate cases,” Schwartz said.

Microsoft has invested some $1 billion in OpenAI, the company behind ChatGPT.

Its success, demonstrating how artificial intelligence could change the way humans work and learn, has generated fears among some. Hundreds of industry leaders signed a letter in May warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Judge Castel seemed both baffled and disturbed at the unusual occurrence and disappointed the lawyers did not act quickly to correct the bogus legal citations when they were first alerted to the problem by Avianca’s lawyers and the court. Avianca pointed out the bogus case law in a March filing.

The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a wrongful death case brought by a woman against an airline, only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses.

“Can we agree that's legal gibberish?” Castel asked.

Schwartz said he erroneously thought that the confusing presentation resulted from excerpts being drawn from different parts of the case.

When Castel finished his questioning, he asked Schwartz if he had anything else to say.

“I would like to sincerely apologize,” Schwartz said.

He added that he had suffered personally and professionally as a result of the blunder and felt “embarrassed, humiliated and extremely remorseful.”

He said that he and the firm where he worked — Levidow, Levidow & Oberman — had put safeguards in place to ensure nothing similar happens again.

LoDuca, the other lawyer who worked on the case, said he trusted Schwartz and didn't adequately review what he had compiled.

After the judge read aloud portions of one cited case to show how easy it was to discern that it was "gibberish," LoDuca said: "It never dawned on me that this was a bogus case."

He said the outcome “pains me to no end.”

Ronald Minkoff, an attorney for the law firm, told the judge that the submission “resulted from carelessness, not bad faith” and should not result in sanctions.

He said lawyers have historically had a hard time with technology, particularly new technology, “and it's not getting easier.”

“Mr. Schwartz, someone who barely does federal research, chose to use this new technology. He thought he was dealing with a standard search engine," Minkoff said. "What he was doing was playing with live ammo.”

Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he introduced the Avianca case during a conference last week that attracted dozens of participants in person and online from state and federal courts in the U.S., including Manhattan federal court.

He said the subject drew shock and befuddlement at the conference.

“We’re talking about the Southern District of New York, the federal district that handles big cases, 9/11 to all the big financial crimes,” Shin said. “This was the first documented instance of potential professional misconduct by an attorney using generative AI.”

He said the case demonstrated that the lawyers might not have understood how ChatGPT works, because it tends to hallucinate, describing fictional things in a manner that sounds realistic but is not.

“It highlights the dangers of using promising AI technologies without knowing the risks,” Shin said.

The judge said he'll rule on sanctions at a later date.
