ChatGPT cooks up fake sexual harassment scandal and names real law professor as accused

‘When first contacted, I found the accusation comical. After some reflection, it took on a more menacing meaning’

Vishwam Sankaran
Thursday 06 April 2023 07:15 BST


OpenAI’s chatbot ChatGPT falsely accused an American law professor of sexual harassment, including him in a generated list of legal scholars who had allegedly harassed someone and citing a non-existent Washington Post report as its source.

In an opinion piece published in USA Today, professor Jonathan Turley from George Washington University wrote that he was falsely accused by ChatGPT of assaulting students on a trip he “never took” while working at a school he “never taught at”.

“It is only the latest cautionary tale on how artificial ‘artificial intelligence’ can be,” he said on Monday, highlighting some of the accuracy and reliability issues with AI chatbots like ChatGPT.

As part of a study, a lawyer had reportedly asked ChatGPT to generate a list of legal scholars who had committed sexual harassment.

The AI chatbot returned a list that included Mr Turley’s name, falsely accusing him of making sexually suggestive comments and attempting to touch a student during a class trip to Alaska, citing a fabricated article in the Post that it said was from 2018.

The George Washington University professor noted that no such article exists, a point the newspaper has also confirmed.

“What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed,” Mr Turley tweeted.

“When first contacted, I found the accusation comical. After some reflection, it took on a more menacing meaning,” he said.

In another instance, ChatGPT falsely claimed a mayor in Australia had been imprisoned for bribery.

Brian Hood, the mayor of Hepburn Shire, has also threatened to sue ChatGPT creator OpenAI over the false accusations.

The chatbot falsely named him as a guilty party in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.

He did, however, work for the subsidiary, Reuters reported, citing lawyers representing Mr Hood.

The Australian mayor’s lawyers have reportedly sent a letter of concern to OpenAI, giving the company 28 days to fix the errors about Mr Hood or face a possible defamation lawsuit.

A spokesperson for Microsoft, which has reportedly invested $10bn in OpenAI and integrated the chatbot’s technology into its Bing search engine, was not immediately available for comment, Reuters reported.

In recent months, several scholars have warned that the chatbot could disrupt academia, primarily because of doubts about the accuracy of the content it generates.

Kate Crawford, an AI expert at the University of Southern California, calls such fabricated stories and sources produced by AI chatbots “hallucitations”.

ChatGPT gained prominence in December last year for its ability to respond to a wide range of queries with human-like output.

Some experts have speculated that it could revolutionise entire industries and even replace tools like Google’s search engine.

Researchers, including teams at Harvard Medical School and the University of Pennsylvania’s Wharton School, have found that the chatbot could pass qualifying exams designed for students.

Others, however, have urged caution.

The New York City education department said it was worried about the negative impacts of the chatbot on student learning, citing “concerns regarding the safety and accuracy of content”.

Recently, when Google began rolling out its ChatGPT rival Bard to some adults in the UK and US, it warned that the chatbot may share misinformation and could display biases.

The tech giant noted that the chatbot is “not yet fully capable of distinguishing between what is accurate and inaccurate information” because it generates answers by predicting likely responses based on the material it has learned from.

OpenAI spokesperson Niko Felix told The Washington Post in a statement that improving factual accuracy is a “significant focus” for the company, adding that it is “making progress”.

“When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers,” he said.

The Independent has reached out to OpenAI for a comment.
