AI-generated child sex abuse content increasingly found on open web – watchdog

The Internet Watch Foundation said the public were being increasingly exposed to the distressing content.

Martyn Landi
Friday 18 October 2024 00:01 BST
The Internet Watch Foundation said in the past six months alone it had seen more reports of AI-generated abuse content than in the 12 months prior to that (PA)


AI-generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet, exposing it to more people, an internet watchdog has warned.

The Internet Watch Foundation (IWF), which finds and removes child sexual abuse content from the internet, said in the past six months alone it had seen more reports of AI-generated abuse content than in the 12 months prior to that.

And rather than being hidden in forums on the dark web, the IWF said 99% of this content was found on publicly accessible areas of the internet, with the watchdog warning of the distressing nature of encountering such images.


In its data, it revealed that 78% of the reports it received came from members of the public who had stumbled across the imagery on sites such as forums or AI galleries.

It said many of the AI-generated images and videos of children being hurt or abused are so realistic that they can be difficult to tell apart from imagery of real children, and are regarded as criminal content under UK law.

According to the IWF’s figures, more than half of the AI-generated content found in the past six months was hosted on servers in two countries – Russia and the United States.

Derek Ray-Hill, interim chief executive of the IWF, said: “People can be under no illusion that AI-generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online.

“To create the level of sophistication seen in the AI imagery, the software used has also had to be trained on existing sexual abuse images and videos of real child victims shared and distributed on the internet.

“The protection of children and the prevention of AI abuse imagery must be prioritised by legislators and the tech industry above any thought of profit.

“Recent months show that this problem is not going away and is in fact getting worse.

“We urgently need to bring laws up to speed for the digital age, and see tangible measures being put in place that address potential risks.”


Many campaigners have called for strict regulation around the training and development of AI models to ensure they cannot generate harmful or dangerous content. They have also called for AI platforms to refuse any requests or queries that could result in such material being created – a safeguard some platforms already have in place.

Assistant Chief Constable Becky Riggs, child protection and abuse investigation lead at the National Police Chiefs’ Council, said: “The scale of online child sexual abuse and imagery is frightening, and we know that the increased use of artificial intelligence to generate abusive images poses a real-life threat to children.

“Law enforcement is committed to finding and prosecuting online child abusers, wherever they are.

“Policing continues to work proactively to pursue offenders, including through our specialist undercover units, who disrupt child abusers online every day, and this is no different for AI-generated imagery.

“While we will continue to relentlessly pursue these predators and safeguard victims, we must see action from tech companies to do more under the Online Safety Act to make their platforms safe places for children and young people.

“This includes and brings into sharp focus those companies responsible for the developing use of AI and the necessary safeguards required to prevent it being used at scale, as we are now seeing.

“We continue to work closely with the National Crime Agency, Government and industry to harness technology which will help us to fight online child sexual abuse and exploitation.”
