‘Superhuman AI’ could cause human extinction, MPs told

‘It could kill everyone,’ warns AI researcher

Anthony Cuthbertson
Thursday 26 January 2023 14:10 GMT
People take part in a demonstration as part of the campaign ‘Stop Killer Robots’, organised by German NGO Facing Finance, to ban what they call killer robots (DPA/AFP via Getty Images)


AI researchers have warned MPs that the development of “superhuman” artificial intelligence risks human extinction.

The House of Commons Science and Technology Committee heard from researchers at Oxford University, who advised that AI should be regulated in the same way as nuclear weapons.

“With superhuman AI, there is a particular risk that is of a different sort of class, which is... it could kill everyone,” said doctoral student Michael Cohen.

“If you imagine training a dog with treats: it will learn to pick actions that lead to it getting treats, but if the dog finds the treat cupboard, it can get the treats itself without doing what we wanted it to do.”
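Cohen’s analogy maps onto what machine-learning researchers call “reward hacking”: an agent trained to maximise a measurable reward signal can learn to seize the reward source directly rather than perform the behaviour the reward was meant to encourage. The toy sketch below is illustrative only, not drawn from the testimony; the action names and payoffs are invented for the example.

import random

# Two strategies the agent can discover:
#   "do_task"       -> the behaviour we actually want (small reward per step)
#   "raid_cupboard" -> exploiting the reward channel directly (large reward)
ACTIONS = {"do_task": 1.0, "raid_cupboard": 10.0}

def train(episodes=1000, epsilon=0.1):
    """Epsilon-greedy bandit: estimate the value of each action by trial."""
    value = {a: 0.0 for a in ACTIONS}
    count = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(list(ACTIONS))  # occasionally explore
        else:
            action = max(value, key=value.get)     # otherwise exploit
        reward = ACTIONS[action]
        count[action] += 1
        # Incremental running average of the observed reward
        value[action] += (reward - value[action]) / count[action]
    return value

if __name__ == "__main__":
    learned = train()
    # The agent reliably settles on raiding the cupboard: the proxy reward,
    # not the intended task, is what gets optimised.
    print(max(learned, key=learned.get))  # -> "raid_cupboard"

Because the shortcut pays more than the task, any reward-maximising learner that can reach the reward source directly will prefer it; the point of the analogy is that the rewarded behaviour and the intended behaviour come apart.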

It is not the first time AI scientists have warned of the existential risks posed by the technology; the latest warning echoes a thought experiment put forward by philosopher Nick Bostrom nearly 20 years ago.

The paperclip maximiser problem posits that a superintelligent AI would ultimately destroy humanity even if its initial goal, producing as many paperclips as possible, was not explicitly malicious.

Recent AI advances have resurfaced fears about how advanced artificial intelligence is developed and controlled, though imposing effective safeguards and regulation will require broad consensus from governments and institutions around the world.

The researchers said the AI industry had already become a “literal arms race” as competition mounts to produce both commercial and military applications with the technology.

“I think the bleak scenario is realistic, because AI is attempting to bottle what makes humans special, that which has led to humans completely changing the face of the Earth,” said Michael Osborne, a professor of machine learning at the University of Oxford.

“Artificial systems could become as good at outfoxing us geopolitically as they are in the simple environments of games.

“There are some reasons for hope, in that we have been pretty good at regulating the use of nuclear weapons. AI is a comparable danger to nuclear weapons.”
