DeepMind boss says human-level AI is just a few years away

Demis Hassabis urges caution amid ‘pretty incredible’ progress towards artificial general intelligence

Anthony Cuthbertson
Thursday 04 May 2023 12:54 BST
DeepMind founder Demis Hassabis claims we are just years away from human-level artificial general intelligence (iStock/Getty Images)

The head of Google’s artificial intelligence division DeepMind has predicted that human-level AI may be just a few years away.

The forecast from Demis Hassabis puts the date for the arrival of artificial general intelligence (AGI) – systems that can think in similar but superior ways to humans – much earlier than previous predictions. Many have speculated that the technology may still be decades away.

“The progress in the last few years has been pretty incredible,” Mr Hassabis said at the Future of Everything Festival this week.

“I don’t see any reason why that progress is going to slow down. I think it may even accelerate. So I think we could be just a few years, maybe within a decade away.”

Mr Hassabis is among several leading figures within the AI industry who are aiming to develop a form of AGI, while also creating safeguards to prevent the technology from harming humanity.

“I would advocate developing these types of AGI technologies in a cautious manner using the scientific method, where you try and do very careful controlled experiments to understand what the underlying system does,” he said.

DeepMind’s Gato AI, described as a “generalist agent”, is already close to rivalling human intelligence, according to the firm’s research director Nando de Freitas.

It is capable of completing a range of complex tasks, from stacking blocks to writing poetry, as well as engaging in dialogue in a similar way to OpenAI’s ChatGPT chatbot.

“It’s all about scale now,” Dr de Freitas said last year.

“It’s all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline... Solving these challenges is what will deliver AGI.”

He added: “Safety is of paramount importance.”

DeepMind CEO Demis Hassabis arrives at the “Princesa de Asturias” Awards 2022 at Teatro Campoamor on 28 October 2022 (Getty Images)

DeepMind researchers have spoken of the existential risks posed by artificial intelligence if it reaches and surpasses the level of humans, and have proposed a solution to prevent advanced AI from going rogue.

In a 2016 paper titled ‘Safely Interruptible Agents’, DeepMind suggested a “big red button” could serve as an off-switch in such a scenario.

“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences,” the paper stated.

“If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions – harmful either for the agent or for the environment – and lead the agent into a safer situation.”
