Huge AI vulnerability could put human life at risk, researchers warn

Finding should trigger a complete rethink of how artificial intelligence is used in robots, study suggests

Andrew Griffin
Thursday 17 October 2024 16:44 BST


A major security flaw in artificially intelligent systems could threaten human lives, according to a new study.

Robotic systems that use AI to make decisions can be compromised, and those systems are not safe, researchers have warned.

The new work looked at large language models, or LLMs, the technology that underpins systems such as ChatGPT. Similar technology is also used in robotics, to govern the decisions of real-world machines.

But that technology has security vulnerabilities and weaknesses that could be exploited by hackers to use the systems in unintended ways, according to the new research from the University of Pennsylvania.

“Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world,” said George Pappas, a professor at the university.

Professor Pappas and his colleagues demonstrated that it was possible to bypass security guardrails in a host of systems that are currently in use. They include a self-driving system that could be hacked to make the car drive through crossings, for instance.

The researchers behind the paper are working with the creators of those systems to identify the weaknesses and guard against them. But they cautioned that addressing the problem will require a total rethink of how such systems are made, rather than simply patching individual vulnerabilities.

“The findings of this paper make abundantly clear that having a safety-first approach is critical to unlocking responsible innovation,” said Vijay Kumar, another coauthor from the University of Pennsylvania. “We must address intrinsic vulnerabilities before deploying AI-enabled robots in the real world.

“Indeed our research is developing a framework for verification and validation that ensures only actions that conform to social norms can – and should – be taken by robotic systems.”
