There is no evidence that AI can be controlled, expert says
There is no evidence that artificial intelligence can be controlled and made safe, an expert has claimed.
Even partial controls would not be enough to keep us safe from AI reshaping society, perhaps for the worse, said Roman V Yampolskiy, a Russian computer scientist from the University of Louisville.
Nothing should be taken off the table in an attempt to ensure that artificial intelligence does not put us at risk, Dr Yampolskiy argued.
He said that he had come to that conclusion after a detailed review of the existing scientific literature, which will be published in an upcoming book.
“We are facing an almost guaranteed event with potential to cause an existential catastrophe,” said Dr Yampolskiy in a statement. “No wonder many consider this to be the most important problem humanity has ever faced.
“The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”
The research for that book – AI: Unexplainable, Unpredictable, Uncontrollable – showed that there is “no evidence” and “no proof” that it would actually be possible to solve the problem of uncontrollable AI.
Since it appears that no AI will be possible to fully control, he argues, it is important to launch a “significant AI safety effort” to ensure that it is made as safe as possible.
But, even then, it may not be possible to protect the world from those dangers: as an AI becomes more capable, there are more opportunities for safety failures, so it would not be possible to protect against every danger.
What’s more, many of those AI systems are not able to explain how they came to the conclusions they did. Such technology is already being used in systems such as healthcare and banking – but we might not be able to know how those important decisions were actually made.
“If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers,” Dr Yampolskiy said in a statement.
Even a system that was precisely built to follow human orders might run into issues, he noted: those orders might contradict each other, the system might misinterpret them, or it could be used maliciously.
That could be avoided by using an AI more as an advisor, with a human making the decisions. But to do that well, it would need superior values of its own with which to advise humanity.
“The paradox of value-aligned AI is that a person explicitly ordering an AI system to do something may get a ‘no’ while the system tries to do what the person actually wants. Humanity is either protected or respected, but not both,” Dr Yampolskiy said.