The world is not ready for the next huge development in AI, says departing OpenAI researcher

‘Artificial general intelligence’ could bring computers that are more capable than human brains

Andrew Griffin
Friday 25 October 2024 13:07 BST


The world is not ready for AGI, or the point at which artificial intelligence becomes as good as human brains, according to a senior OpenAI researcher.

For years, researchers have been speculating about the arrival of artificial general intelligence, or AGI, when artificial systems will be as good as we are at a broad variety of tasks. Many have suggested that its arrival could be an existential risk, since it could allow computers to behave in ways we can’t expect.

Now the person tasked with ensuring that ChatGPT developer OpenAI is ready for its arrival has said that both the world and the company itself are “not ready”. Miles Brundage had previously served as OpenAI’s “senior adviser for the readiness of AGI”, but announced his departure this week as the company said it would wind down the team.

That lack of preparedness, Mr Brundage said, “is not a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time”. Being ready when AGI arrives will depend on regulation and on how the culture around AI safety changes, he suggested.

OpenAI has faced questions in recent months over its plans for artificial intelligence and how highly it values safety. While it was created as a non-profit with a view to researching how to safely build artificial intelligence, the success of ChatGPT has brought major investment and some pressure to use its new technology to make a profit.

Mr Brundage said that he was leaving the company for a variety of reasons, including that he did not have time to work on some projects and that he had largely accomplished what he set out to do. He also said it would be easier to work on these issues from the outside, since he would be free of bias and conflicts of interest.
