Artificial intelligence researchers boycott South Korean university amid fears it is developing killer robots

Researchers warn of a ‘Skynet scenario’ involving weaponized robots fighting wars against each other

Anthony Cuthbertson
Thursday 05 April 2018 13:07 BST


Leading artificial intelligence researchers have boycotted South Korea’s top university after it teamed up with a defence company to develop “killer robots” for military use.

An open letter sent to the Korea Advanced Institute of Science and Technology (KAIST) stated that its 57 signatories, from nearly 30 countries, would neither visit nor collaborate with the university until it stopped developing autonomous weapons.

“It is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons,” the letter states.

“They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora’s box will be hard to close if it is opened.”

The extent of this threat has been likened by some security experts to that of Skynet, a fictional artificial intelligence system that first appeared in the 1984 film The Terminator. After becoming self-aware, Skynet set out to wipe out humanity using militarized robots, drones and war machines.

“If we combine powerful burgeoning AI technology with insecure robots, the Skynet scenario of the famous Terminator films all of a sudden seems not nearly as far-fetched as it once did,” Lucas Apa, a senior security consultant from the cybersecurity firm IOActive, told The Independent.

Apa said robots were vulnerable to hacking and malfunction, citing an incident at a US factory in 2016 that resulted in the death of a worker.

“Similar to other technologies, we’ve found robot technology to be insecure in a number of ways,” Apa said. “It is concerning that we are already moving towards offensive military capabilities when the security of these systems are shaky at best. If robot ecosystems continue to be vulnerable to hacking, robots could soon end up hurting instead of helping us.”

Artificial intelligence academics fear weaponized robots pose an existential threat to humanity. (Stephen Bowler/Wikimedia Commons)

KAIST president Sung-Chul Shin responded to the open letter, claiming that the university had “no intention” of developing lethal autonomous weapons.

“I reaffirm once again that KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control,” he said.

It is not the first time AI academics have warned of the dangers posed by weaponized robots; a similar letter was sent to Canadian Prime Minister Justin Trudeau last year.

Other notable scientific figures, including the physicist Stephen Hawking, have gone so far as to say that AI has the potential to destroy civilization.

“Computers can, in theory, emulate human intelligence, and exceed it,” Hawking said last year. “AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.”
