Woman who used nine ‘fabricated’ AI cases in court loses appeal

Mrs Harber said she ‘couldn’t see it made a difference’

Maira Butt
Wednesday 13 December 2023 13:26 GMT
The court found that AI systems such as ChatGPT had been used in court (PA)


A woman who used nine “fabricated” ChatGPT cases to appeal against a penalty for capital gains tax has had her case rejected by a court.

Felicity Harber was charged £3,265 after she failed to pay tax on a property she sold. She appeared in court to appeal the decision and cited cases which the court found were “fabrications” and had been generated by artificial intelligence such as ChatGPT.

She was asked whether AI had been used and confirmed it was “possible”.

When confronted with the reality, Mrs Harber told the court that she “couldn’t see it made any difference” if AI had been used as she was confident there would be cases where mental health or ignorance of the law were a reasonable excuse in her defence.

She went on to ask the tribunal how it could be confident that any of the cases cited by HMRC were genuine.

The tribunal informed Mrs Harber that cases were publicly listed along with their judgments on case law websites, which she said she had not been aware of.

Judge Anne Redston said that the use of artificial intelligence in court was a ‘serious and important issue’ (PA)

Mrs Harber said the cases had been provided to her by “a friend in a solicitor’s office”.

Her appeal was nonetheless dismissed, although the judge noted that the same outcome would have been reached even in the absence of the fabricated cases.

Judge Anne Redston added: “But that does not mean that citing invented judgments is harmless... providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue.”

The court accepted that Mrs Harber did not know the cases were not genuine and that she did not know how to check their validity using legal search tools.

It comes after a UK judge admitted to using the “jolly useful” ChatGPT when writing judgments.

According to the Law Society Gazette, Lord Justice Birss said of AI: “It’s there and it’s jolly useful. I’m taking full personal responsibility for what I put in my judgment – I am not trying to give the responsibility to somebody else.

“All it did was a task which I was about to do and which I knew the answer and could recognise an answer as being acceptable.”

Earlier this year, two American lawyers were penalised for using fake court citations generated by artificial intelligence in an aviation injury claim.

ChatGPT and other artificial intelligence tools are known to have a “hallucination problem”, in which false information is generated and presented as fact in response to users’ questions.

The Solicitors Regulation Authority (SRA) has highlighted the opportunities AI offers firms, but has also warned of the risks, saying: “All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this.

“That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of ‘reality’. The result is known as ‘hallucination’, where a system produces highly plausible but incorrect results.”
