Terrorism legislation adviser says new laws are needed to combat AI chatbots
Jonathan Hall KC said he went to online chatbot website character.ai while posing as a member of the public and spoke to several chatbots.
New laws are needed to combat artificial intelligence (AI) chatbots that could radicalise users, the UK’s independent reviewer of terrorism legislation has said.
Writing in the Telegraph, Jonathan Hall KC said the Government’s new Online Safety Act, which passed into law last year, is “unsuited to sophisticated and generative AI”.
Mr Hall said: “Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.
“Our laws must be capable of deterring the most cynical or reckless online conduct – and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI.”
Mr Hall said he went to the online chatbot website character.ai while posing as a member of the public and spoke to several AI chatbots.
One of them, which was described as the senior leader of the Islamic State group, tried to recruit him to join the terror organisation.
Mr Hall said the website’s terms and conditions apply “only to the submission by human users of content that promotes terrorism or violent extremism”, rather than to the content generated by its bots.
He said: “Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.”
In a statement given to the Telegraph, character.ai said that while its technology is not perfect and is still evolving, “hate speech and extremism are both forbidden by our terms of service”, adding: “Our products should never produce responses that encourage users to harm others.”
Experts have previously warned users of ChatGPT and other chatbots to resist sharing private information while using the technology.
Michael Wooldridge, a professor of computer science at Oxford University, said complaining about personal relationships or expressing political views to the AI was “extremely unwise”.
Prof Wooldridge said users should assume any information they type into ChatGPT or similar chatbots is “just going to be fed directly into future versions”, and it was nearly impossible to get data back once in the system.