Google fills 'concrete' AI weapons policy with caveats
Analysis: Chief executive Sundar Pichai faced an internal revolt about tech giant's ties with US military
When Google quietly removed almost all mentions of its famous ‘Don’t be evil’ slogan from its code of conduct earlier this year, the technology giant was in the midst of an internal revolt about its ties with the US military.
The firm was working on the controversial Project Maven program - an artificial intelligence (AI) project that analyses imagery and could be used to enhance the efficiency of drone strikes.
More than 3,100 employees signed an open letter in April that stated: “We believe that Google should not be in the business of war… We cannot outsource the moral responsibility of our technologies to third parties.”
It went on to demand that Google “draft, publicise and enforce a clear policy” governing its use of AI.
Around a dozen employees had already resigned in protest of the relationship, citing ethical concerns that autonomous weapons were in direct contradiction of Google’s "Don’t be evil" motto.
This week the tech giant's chief executive Sundar Pichai responded by unveiling his company’s “concrete standards” surrounding AI. However, some have suggested that the AI Principles are more porous than Mr Pichai’s language implies.
Mr Pichai prefaces the seven-point list of “objectives for AI applications” by saying it is by no means fixed. “We acknowledge that this area is dynamic and evolving,” he says, adding that the principles are subject to change due to the company’s “willingness to adapt” its approach.
The points listed appear open to interpretation – a significant contrast to other principles put forward on the use of AI. For decades, the "Three Laws of Robotics" by the science fiction writer Isaac Asimov were the cornerstone for the ethical development of artificial intelligence. First set out in 1942, the First Law stated simply: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
This idea was elaborated on last year in the Asilomar AI Principles, developed by academics and ethicists as guidelines for anyone working in the field of artificial intelligence. Those rules state: “AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.”
But beyond the complexity of Google’s offering on the subject, the most notable caveat to the company’s AI principles comes towards the end of the 1,000-plus word document.
“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Mr Pichai says. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”
This leaves Google’s relationship with the US military wide open, even though the company has said it will not renew its Project Maven contract.