Judge admits using ‘jolly useful’ ChatGPT to write court ruling
Appeal court judgment included words generated by the controversial chatbot, which is known for accuracy problems
An appeal court judge has admitted using artificial intelligence chatbot ChatGPT to help him write a court ruling.
Lord Justice Birss said the language-processing tool was “jolly useful” and the technology had “real potential”.
Scientists, writers and other professionals have previously found ChatGPT’s accuracy unreliable since it was launched last year, and it has become known for having a “hallucination problem” in which false information is generated.
Earlier this year, ChatGPT falsely accused an American law professor of sexual harassment by including him in a generated list of legal scholars who had harassed someone, citing a non-existent Washington Post report.
PCGuide.com says: “ChatGPT is not a truly reliable source. There’s no denying that it is one of the best artificial intelligence content-generator tools out there, but the accuracy on many topics is still not as good as you would want it to be.”
According to the Law Society Gazette, Lord Justice Birss spoke about AI, ChatGPT and generative large language models at a conference, saying: “I think what is of most interest is that you can ask these large language models to summarise information.
“It is useful and it will be used and I can tell you, I have used it.
“I thought I would try it. I asked ChatGPT can you give me a summary of this area of law, and it gave me a paragraph. I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment.
“It’s there and it’s jolly useful. I’m taking full personal responsibility for what I put in my judgment – I am not trying to give the responsibility to somebody else.
“All it did was a task which I was about to do and which I knew the answer and could recognise an answer as being acceptable.”
Three months ago, a New York lawyer who used ChatGPT to write a legal brief, and ended up citing bogus cases, apologised profusely in court.
Steven Schwartz became emotional as he explained being “duped” by the artificial intelligence chatbot.
“I deeply regret my actions in this manner that led to this hearing today,” Mr Schwartz said. “I suffered both professionally and personally [because of] the widespread publicity this issue has generated. I am both embarrassed, humiliated and extremely remorseful.”
He, a colleague and their law firm were fined $5,000 (£3,935).
UK law firm Mishcon de Reya has banned lawyers from using ChatGPT because of fears they risk compromising data.
In July, two new US studies, from Stanford and UC Berkeley universities, concluded that ChatGPT appeared to be getting less accurate over time.