‘Existential catastrophe’ caused by AI is likely unavoidable, DeepMind researcher warns
‘We should be progressing slowly – if at all – toward the goal of more powerful AI,’ new paper warns
Researchers from the University of Oxford and Google’s artificial intelligence division DeepMind have claimed that there is a high probability of advanced forms of AI becoming “existentially dangerous to life on Earth”.
In a recent article in the peer-reviewed journal AI Magazine, the researchers warned that there would be “catastrophic consequences” if the development of certain AI agents continues.
Leading philosphers like Oxford University’s Nick Bostrom have previously spoken of the threat posed by advanced forms of artificial intelligence, though one of authors of the new paper claimed such warnings did not go far enough.
“Bostrom, [computer scientist Stuart] Russell, and others have argued that advanced AI poses a threat to humanity,” Michael Cohen wrote in a Twitter thread accompanying the article.
“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication – an existential catastrophe is not just possible, but likely.”
The paper proposes a scenario whereby an AI agent figures out a strategy to cheat in order to receive a reward that it is pre-programmed to seek.
In order to maximize its reward potential, it requires as much energy as is possible to obtain. The thought experiment sees humanity ultimately competing against the AI for energy resources.
“Winning the competition of ‘getting to use the last bit of available energy’ while playing against something much smarter than us would probably be very hard,” Mr Cohen wrote. “Losing would be fatal.
“These possibilities, however theoretical, mean we should be progressing slowly – if at all – toward the goal of more powerful AI.”
DeepMind has already proposed a safeguard against such an eventuality, dubbing it “the big red button”. In a 2016 paper titled ‘Safely Interruptible Agents’, the AI firm outlined a framework for preventing advanced machines from ignoring turn-off commands and becoming an out-of-control rogue agent.
Professor Bostrom previously described DeepMind – whose AI accomplishments include beating human champions at the boardgame Go and manipulating nuclear fusion – as the closest to creating human-level artificial intelligence.
The Sweidish philospher also said it would be a “great tragedy” if AI development did not continue, as it holds the potential to cure diseases and advance civilisation at an otherwise impossible rate.
Join our commenting forum
Join thought-provoking conversations, follow other Independent readers and see their replies
Comments