The world is not ready for the next huge development in AI, says departing OpenAI researcher
‘Artificial general intelligence’ could bring computers that are more capable than human brains
The world is not ready for AGI, or the point at which artificial intelligence becomes as good as human brains, according to a senior OpenAI researcher.
For years, researchers have been speculating about the arrival of artificial general intelligence, or AGI, when artificial systems will be as good as we are at a broad variety of tasks. Many have suggested that its arrival could be an existential risk, since it could allow computers to behave in ways we can’t expect.
Now the person tasked with ensuring that ChatGPT developer OpenAI is ready for its arrival has said that both the world and the company itself are “not ready”. Miles Brundage had previously served as OpenAI’s “senior adviser for the readiness of AGI”, but announced his departure this week as the company said it would wind down the team.
The lack of preparedness is “not a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time”, he said. Being ready when AGI arrives will depend on regulation and on how the culture around the safety of AI changes, he suggested.
OpenAI has faced questions in recent months over its plans for artificial intelligence and how highly it values safety. While it was created as a non-profit with a view to researching how to safely build artificial intelligence, the success of ChatGPT has brought major investment and some pressure to use its new technology to make a profit.
Mr Brundage said that he was leaving the company for a variety of reasons, including the fact that he does not have time to work on some projects and that he had largely done what he intended. He also said that it would be easier to work from the outside since he would be free of bias and conflicts of interest.