
Musk, scientists call for halt to AI race sparked by ChatGPT

Are tech companies moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans?

Matt O'Brien
Wednesday 29 March 2023 16:44 BST
Pausing Artificial Intelligence Petition (Copyright 2023 The Associated Press. All rights reserved)


Are tech companies moving too fast in rolling out powerful artificial intelligence technology that could one day outsmart humans?

That's the conclusion of a group of prominent computer scientists and other tech industry notables, such as Elon Musk and Apple co-founder Steve Wozniak, who are calling for a six-month pause to consider the risks.

Their petition, published Wednesday, is a response to San Francisco startup OpenAI's recent release of GPT-4, a more advanced successor to its widely used AI chatbot ChatGPT that helped spark a race among tech giants, including Microsoft and Google, to unveil similar applications.

WHAT DO THEY SAY?

The letter warns that AI systems with “human-competitive intelligence can pose profound risks to society and humanity” — from flooding the internet with disinformation and automating away jobs to more catastrophic future risks out of the realms of science fiction.

It says “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter says. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

A number of governments are already working to regulate high-risk AI tools. The United Kingdom released a paper Wednesday outlining its approach, which it said “will avoid heavy-handed legislation which could stifle innovation.” Lawmakers in the 27-nation European Union have been negotiating passage of sweeping AI rules.

WHO SIGNED IT?

The petition was organized by the nonprofit Future of Life Institute, which says confirmed signatories include the Turing Award-winning AI pioneer Yoshua Bengio and other leading AI researchers such as Stuart Russell and Gary Marcus. Others who joined include Wozniak, former U.S. presidential candidate Andrew Yang and Rachel Bronson, president of the Bulletin of the Atomic Scientists, a science-oriented advocacy group known for its warnings against humanity-ending nuclear war.

Musk, who runs Tesla, Twitter and SpaceX and was an OpenAI co-founder and early investor, has long expressed concerns about AI's existential risks. A more surprising inclusion is Emad Mostaque, CEO of Stability AI, maker of the AI image generator Stable Diffusion that partners with Amazon and competes with OpenAI's similar generator known as DALL-E.

WHAT'S THE RESPONSE?

OpenAI, Microsoft and Google didn't immediately respond to requests for comment Wednesday, but the letter already has plenty of skeptics.

“A pause is a good idea, but the letter is vague and doesn’t take the regulatory problems seriously,” says James Grimmelmann, a Cornell University professor of digital and information law. “It is also deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against accountability for the defective AI in its self-driving cars.”
