The world still urgently needs to address the danger of AI, experts warn

Not enough is being done to protect the world from risks, say leading scientists

Andrew Griffin
Monday 20 May 2024 20:09 BST

The world urgently needs to address the danger of AI – and not enough is being done, experts have warned.

A new paper – published six months after the first AI Safety Summit was held in the UK and ahead of the second in Seoul this week – says that progress is still lagging behind the technology and that the world could be at risk as a result. It brings together 25 of the world’s leading experts, including a so-called “godfather of AI” and the world’s most-cited economist.

The researchers note that many governments have taken steps towards discussing the dangers of AI and have introduced guidelines that could help address some of the technology’s risks. But those efforts do not match the scale of the risk that many experts believe the technology could pose, they warn.

That has left the world without the research necessary to understand the threats posed by artificial intelligence, and without the mechanisms or institutions in place to stop those dangers, the paper warns.

Those dangers could be catastrophic, the experts say in a lengthy paper that aims to bring together the state of the art in AI safety research. AI systems could gain the trust of humans and influence their decisions, helping to enable large-scale cybercrime and new kinds of threats in warfare.

That could leave us with a large-scale loss of life – or the total extinction of humanity, they say.

“Technologies like spaceflight, nuclear weapons and the Internet moved from science fiction to reality in a matter of years. AI is no different,” said Jeff Clune, a professor of AI at the University of British Columbia and one of the paper’s authors.

“We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off.”

The experts noted that there are vast opportunities to be found in the technology, but warned that those opportunities can be seized only if its risks are handled responsibly.

“Explosive AI advancement is the biggest opportunity and at the same time the biggest risk for mankind,” said Dawn Song, a professor of AI at UC Berkeley and the most-cited researcher on AI security and privacy.

“It is important to unite and reorient towards advancing AI responsibly, with dedicated resources and priority to ensure that the development of AI safety and risk mitigation capabilities can keep up with the pace of the development of AI capabilities and avoid any catastrophe.”

However, some other experts have suggested that panic about the dangers of artificial intelligence could be premature. Yann LeCun – who, like signatory Geoffrey Hinton, is one of the three “godfathers of AI” – has said that artificial intelligence is not yet developed enough to pose a major threat.

“It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat,” he wrote in a recent post that responded to similar urging from a former employee of ChatGPT creator OpenAI. “Such a sense of urgency reveals an extremely distorted view of reality.”

The paper, ‘Managing extreme AI risks amid rapid progress’, is published in Science.
