Artificial intelligence warning over human extinction labelled ‘publicity stunt’

Professor Sandra Wachter said the risk raised in the letter that AI could wipe out humanity is ‘science fiction fantasy’.

Jordan Reynolds
Thursday 01 June 2023 09:50 BST
Professor Sandra Wachter (Sandra Wachter/PA)


The probability of a “Terminator scenario” caused by artificial intelligence is “close to zero”, a University of Oxford professor has said.

Sandra Wachter, professor of technology and regulation, called a letter released by the San Francisco-based Centre for AI Safety – which warned that the technology could wipe out humanity – a “publicity stunt”.

The letter, which warns that the risks should be treated with the same urgency as pandemics or nuclear war, was signed by dozens of experts including artificial intelligence (AI) pioneers.

Prime Minister Rishi Sunak retweeted the Centre for AI Safety’s statement on Wednesday, saying the Government is “looking very carefully” at it.

Professor Wachter said the risk raised in the letter is “science fiction fantasy”, comparing it to the film The Terminator.

She added: “There are risks, there are serious risks, but it’s not the risks that are getting all of the attention at the moment.

“What we see with this new open letter is a science fiction fantasy that distracts from the issue right here right now. The issues around bias, discrimination and the environmental impact.

“The whole discourse is being put on something that may or may not happen in a couple of hundred years. You can’t do something meaningful about it as it’s so far in the future.

“But bias and discrimination I can measure, I can measure the environmental impact. It takes 360,000 gallons of water daily to cool a medium-sized data centre, that’s the price that we have to pay.

“It’s a publicity stunt. It will attract funding.

“Let’s focus on people’s jobs being replaced. These things are being completely sidelined by the Terminator scenario.

“What we know about technology now, the probability [of human extinction due to AI] is close to zero. People should worry about other things.”

AI apps have gone viral online, with users posting fake images of celebrities and politicians, and students using ChatGPT and other large language models to generate university-grade essays.

But AI can also perform life-saving tasks: algorithms that analyse medical images such as X-rays, scans and ultrasounds help doctors identify and diagnose diseases including cancer and heart conditions more accurately and quickly.

The statement was organised by the Centre for AI Safety, a non-profit which aims “to reduce societal-scale risks from AI”.

It says: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Senior bosses at companies such as Google DeepMind and Anthropic signed the letter, along with AI pioneer Geoffrey Hinton, who resigned from his job at Google last month, warning that in the wrong hands AI could be used to harm people and spell the end of humanity.
