AI is coming to help national security – but could bring major risks, official report warns

Decision makers may not be aware of the dangers of relying on information from artificial intelligence, experts say

Andrew Griffin
Tuesday 23 April 2024 12:43 BST

AI could have profound implications for national security – including posing a host of risks, a new government-commissioned report warns.

Artificial intelligence is a valuable tool to help senior officials in government and intelligence make decisions, it says. But it could also lead to inaccuracies, confusion and other dangers, it warns.

Senior officials must be trained to spot those problems, and AI systems need careful design and continuous monitoring to ensure they do not introduce further bias and errors, it warns.

Problems may arise, for instance, because some officials believe that AI is far more capable and certain than it actually is. In fact, artificial intelligence often works on probabilities – and can be wildly wrong, it warns.

Choosing not to use AI comes with its own risks, including missing patterns across data that could be central to keeping people safe, the report says.

But using it carries significant risks of its own, including greater bias and uncertainty. “There is a critical need for careful design, continuous monitoring, and regular adjustment of AI systems to mitigate the risk of amplifying human biases and errors in intelligence assessment,” the report says.

Those are the conclusions of the new report from the Alan Turing Institute, the UK’s national research organisation for AI. It was commissioned by British intelligence bodies, the Joint Intelligence Organisation (JIO) and Government Communications Headquarters (GCHQ).

The official report did not give any information on how much AI is currently used by intelligence agencies, or how mature that technology is. But it urged that work to counteract the potentially major dangers should begin immediately, to ensure that any future introduction of AI is done safely.

The government said that it would consider the recommendations of the report and that it was already working on combating the potential dangers that the technology could bring.

“We are already taking decisive action to ensure we harness AI safely and effectively, including hosting the inaugural AI Safety Summit and the recent signing of our AI Compact at the Summit for Democracy in South Korea,” said Oliver Dowden, the deputy prime minister.

“We will carefully consider the findings of this report to inform national security decision makers to make the best use of AI in their work protecting the country.”

The report was written by the Centre for Emerging Technology and Security (CETaS), which is based within the Alan Turing Institute. Officials there stressed the importance of decision makers understanding the nature of insights derived from artificial intelligence.

“Our research has found that AI is a critical tool for the intelligence analysis and assessment community. But it also introduces new dimensions of uncertainty, which must be effectively communicated to those making high-stakes decisions based on AI-enriched insights,” said Alexander Babuta, director of The Alan Turing Institute’s Centre for Emerging Technology and Security.

“As the national institute for AI, we will continue to support the UK intelligence community with independent, evidence-based research, to maximise the many opportunities that AI offers to help keep the country safe.”

GCHQ, which jointly commissioned the report, said that it saw great potential in AI – but that it was also important to ensure the technology is used safely.

“AI is not new to GCHQ or the intelligence assessment community, but the accelerating pace of change is,” said Anne Keast-Butler, director of GCHQ. “In an increasingly contested and volatile world, we need to continue to exploit AI to identify threats and emerging risks, alongside our important contribution to ensuring AI safety and security.”
