Elon Musk, DeepMind and AI researchers promise not to develop robot killing machines
The race to develop 'lethal autonomous weapons' could spiral out of control, experts warn
Elon Musk and many of the world's most respected artificial intelligence researchers have committed not to build autonomous killer robots.
The public pledge not to make any "lethal autonomous weapons" comes amid increasing concern about how machine learning and AI will be used on the battlefields of the future.
The signatories to the new pledge – who include the founders of DeepMind, a founder of Skype, and leading academics from across the industry – promise that they will not allow the technology they create to be used to build killing machines.
They also call on governments around the world to do more to regulate and restrict the use of such autonomous killing machines, amid fears that countries will embark on an arms race that could spiral out of control and threaten global stability.
The central argument of the pledge is that "the decision to take a human life should never be delegated to a machine". While a great deal of technology is already used on the battlefield, numerous experts are concerned that future systems could be made entirely autonomous, able to kill humans without any human intervention at all.
The letter says that such a situation would cause a variety of problems.
"There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable," the pledge reads. "There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual."
They also note that the killing power of such machines would only be amplified by other technologies being developed alongside artificial intelligence, such as surveillance and data systems.
And they warn that autonomous killing machines could be even more threatening than "nuclear, chemical and biological weapons", since an arms race could easily spiral out of control and leave international organisations unable to manage it.
They note that governmental regulation is not yet developed enough to be able to deal with such high-level threats.
"These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons," the pledge reads. "We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.
In all, the letter has been signed by 170 organisations and 2,464 individuals. The full letter and list of signatories can be seen on the website of the Future of Life Institute, where experts can also volunteer to sign it.