Pentagon prepares to accelerate use of AI in war by adopting ‘ethical principles’

New principles lay foundation for 'deployment and the use of AI by the Department of Defense'

Anthony Cuthbertson
Tuesday 25 February 2020 12:43 GMT
Killer robots may seem new, but they've been around for a long time (Getty/iStock)


The US Department of Defense has announced plans to adopt ethical principles as it pushes forward with the use of artificial intelligence in warfare.

The Pentagon's Chief Information Officer, Dana Deasy, said the five AI principles would lay the foundation for "the ethical design, development, deployment, and the use of AI by the Department of Defense".

The new principles call for people to "exercise appropriate levels of judgment and care" when deploying and using AI systems, such as those that scan aerial imagery to look for targets.

They also say decisions made by automated systems should be "traceable" and "governable," which means "there has to be a way to disengage or deactivate" them if they are demonstrating unintended behaviour, said Air Force Lt. Gen. Jack Shanahan, director of the Pentagon's Joint Artificial Intelligence Center.

The Pentagon's push to speed up its AI capabilities has fueled a fight between tech companies over a $10 billion cloud computing contract known as the Joint Enterprise Defense Infrastructure, or JEDI. Microsoft won the contract in October but hasn't been able to get started on the 10-year project because Amazon sued the Pentagon, arguing that President Donald Trump's antipathy toward Amazon and its chief executive Jeff Bezos hurt the company's chances at winning the bid.

An existing 2012 military directive requires humans to be in control of automated weapons, but doesn't address broader uses of AI. The new US principles are meant to guide both combat and non-combat applications, from intelligence-gathering and surveillance operations to predicting maintenance problems in planes or ships.

The approach outlined Monday follows recommendations made last year by the Defense Innovation Board, a group led by former Google CEO Eric Schmidt.

While the Pentagon acknowledged that AI "raises new ethical ambiguities and risks," the new principles fall short of stronger restrictions favoured by arms control advocates.

US President Donald Trump and Microsoft CEO Satya Nadella listen to Amazon CEO Jeff Bezos during an American Technology Council roundtable at the White House in Washington, DC, on 19 June 2017 (AFP/Getty Images)

"I worry that the principles are a bit of an ethics-washing project," said Lucy Suchman, an anthropologist who studies the role of AI in warfare. "The word 'appropriate' is open to a lot of interpretations."

Shanahan said the principles are intentionally broad to avoid handcuffing the US military with specific restrictions that could become outdated.

"Tech adapts. Tech evolves," he said.

The Pentagon hit a roadblock in its AI efforts in 2018 after internal protests at Google led the tech company to drop out of the military's Project Maven, which uses algorithms to interpret aerial images from conflict zones. Other companies have since filled the vacuum. Shanahan said the new principles are helping to regain support from the tech industry, where "there was a thirst for having this discussion."

"Sometimes I think the angst is a little hyped, but we do have people who have serious concerns about working with the Department of Defense," he said.

Shanahan said the guidance also helps secure American technological advantage as China and Russia pursue military AI with little attention paid to ethical concerns.

University of Richmond law professor Rebecca Crootof said adopting principles is a good first step, but the military will need to show it can critically evaluate the huge data troves used by AI systems, as well as their cyber security risks.

Crootof said she also hopes the US action helps establish international norms around the military use of AI.

"If the US is seen to be taking AI ethical norms seriously, by default they become a more serious topic," she said.

Once deployed, the JEDI "war cloud" project would provide the computing power to enable AI-based war planning.

The US military says it will significantly boost ground operations by giving troops access to ultra-powerful computers to assist with battlefield strategy.

Additional reporting by agencies.
