Robots will 'become lethal' and leave us 'absolutely defenceless', leading professor warns
Scientific community must decide whether it supports the creation of robotic killing machines, Professor Stuart Russell has warned
The US government is developing highly advanced killer robots, and we must decide whether we support or oppose them, a leading computer scientist has said.
Lethal autonomous weapons systems, or LAWS, are being developed that could eventually become super-powerful and will not be able to ethically choose who should live or die, Stuart Russell, a professor of computer science at the University of California, Berkeley, has warned. Writing in the journal Nature, Russell likened the technology to nuclear weapons — and said that just as physicists eventually had to take a position on the use of their science to kill, so should AI specialists and others.
"Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans," he writes. "LAWS might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions."
Russell said that all of the artificial intelligence and robots needed to create such killing machines are already in place. “They just need to be combined,” Russell notes, arguing that the same technology used in self-driving cars could easily be adapted for “urban search-and-destroy missions”.
If those robots are autonomous, they could kill humans without the normal checks and balances, Russell warns. “LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting 'threatening behaviour',” he writes.
Eventually, those robots will likely become so small that we can do little to defend against them. Flying drones could carry “a one-gram shaped charge to puncture the human cranium”, constrained only by the limits of physics, Russell warns.
“Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.”
The only way to stop such developments is for scientists to take a position, Russell warns. “Doing nothing is a vote in favour of continued development and deployment,” he says.