Halt development of new AI to protect humanity: Chilling call by Elon Musk and tech titans

Governments should step in and halt work on new AI systems if their creators will not, experts say

Andrew Griffin
Wednesday 29 March 2023 19:28 BST


Humanity is in danger from “AI experiments” and they must be paused to ensure that we are not at risk, according to more than 1,000 experts.

Researchers need to stop working on the development of new artificial intelligence systems for the next six months – and if they will not, then governments need to step in, they warned.

That is the grave conclusion of a new open letter signed by experts including academics in the field and technology leaders including Elon Musk and Apple co-founder Steve Wozniak.

The letter notes that the positive possibilities of AI are significant. It says that humanity “can enjoy a flourishing future” with the technology, and that we can now enjoy an “AI summer” in which we adapt to what has already been created.

But if scientists continue to train new models, then the world could be faced with a much more difficult situation. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the authors of the letter write.

The most advanced publicly available AI system at the moment is GPT-4, developed by OpenAI, which was released earlier this month. The letter says that AI labs should pause work on any system more powerful than that, for at least the next six months.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the authors write.

During those six months, both AI labs and experts should work together to create new shared safety principles for the design of AI systems, they say. Any system built under those principles should be “safe beyond a reasonable doubt”.

It would not mean pausing AI work in general, but stopping the development of new models and capabilities. Instead, that research “should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal”.

The same pause could also give policymakers time to create new governance systems for overseeing AI. That would involve establishing authorities that can track the development of such systems and ensure they are not being put to dangerous ends.

At the moment, the letter includes signatures from founders and chief executives of Pinterest, Skype, Apple and Tesla. It also includes experts in the field from universities including Berkeley, Princeton and others.

Some researchers from within companies working on their own AI systems – such as DeepMind, the UK artificial intelligence company owned by Google parent Alphabet – have also signed the letter.

Elon Musk was one of the founders of OpenAI, contributing funding when it launched at the end of 2015. But in recent months he has seemingly become more opposed to its work, arguing that it is becoming fixated on new systems and is wrongly developing them for profit.
