Google pledges to not work on weapons after Project Maven backlash
But company's CEO says it will continue working with the US military
Google has pledged to never work on artificial intelligence weapons projects, laying down the principle after a collaboration with the US military fomented an employee revolt.
The technology giant recently announced it would discontinue work with the Department of Defense on Project Maven, an artificial intelligence project that analyses imagery and could be used to enhance the efficiency of drone strikes.
Thousands of employees had signed a letter warning that Google’s participation contravened the company’s ethical tenets. Stating that “Google should not be in the business of war”, the letter warned that the company’s involvement would compromise its image and drive away potential employees.
A blog post by CEO Sundar Pichai addressed the underlying debate, establishing guidelines for future artificial intelligence (AI) projects that pledged to ensure the work benefits society and eschew “technologies that cause or are likely to cause overall harm”.
“We recognize that such powerful technology raises equally powerful questions about its use,” Mr Pichai wrote. “How AI is developed and used will have a significant impact on society for many years to come.”
Among the AI applications that will be banned, Mr Pichai wrote, are “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” in addition to tools for “surveillance violating internationally accepted norms” and innovations that violate “widely accepted principles of international law and human rights”.
But Mr Pichai stopped short of sundering the company’s relationship with the Pentagon, saying those collaborations remained important.
“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue”, Mr Pichai wrote.
The proposals drew a mixed response from civil liberties advocates. American Civil Liberties Union technology expert Jake Snow called the pledge a “good start” that nevertheless would not preclude abuse.
“These principles do not prohibit others from using Google's AI and compute infrastructure to build weapons of war, surveillance, or discrimination”, Mr Snow said on Twitter.