
Elon Musk, DeepMind and AI researchers promise not to develop robot killing machines

The race to develop 'lethal autonomous weapons' could spiral out of control, experts warn

Andrew Griffin
Wednesday 18 July 2018 17:48 BST
The world's first operational police robot stands at attention as a military cannon is prepared to fire, marking sunset and the end of the fasting day for Muslims observing Ramadan, in Downtown Dubai on 31 May 2017 (GIUSEPPE CACACE/AFP/Getty Images)

Elon Musk and many of the world's most respected artificial intelligence researchers have committed not to build autonomous killer robots.

The public pledge not to make any "lethal autonomous weapons" comes amid increasing concern about how machine learning and AI will be used on the battlefields of the future.

The signatories to the new pledge – who include the founders of DeepMind, a co-founder of Skype, and leading academics from across the industry – promise that they will not allow the technology they create to be used to help build killing machines.

They also call on governments to do more to regulate and restrict the use of such autonomous killing machines, amid fears that countries will embark on an arms race that could spiral out of control and threaten global stability.

The central argument of the pledge is that "the decision to take a human life should never be delegated to a machine". While a great deal of technology is already used on the battlefield, numerous experts are concerned that such systems could eventually become entirely autonomous, allowing them to kill humans without any human intervention at all.

The letter says that such a situation would cause a variety of problems.

"There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable," the pledge reads. "There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual."

They also note that the killing power of such machines would only be enhanced by other technologies developed alongside artificial intelligence, such as surveillance and data systems.

And they warn that autonomous killing machines could be even more threatening than "nuclear, chemical and biological weapons", since an arms race could easily spiral out of control and leave international organisations unable to manage it.

They note that governmental regulation is not yet developed enough to be able to deal with such high-level threats.

"These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons," the pledge reads. "We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

In all, the letter has been signed by 170 organisations and 2,464 individuals. The full letter and list of signatories can be seen on the website of the Future of Life Institute, where experts can also volunteer to sign it.
