Tech giants agree to child safety principles around generative AI
Amazon, Google, Meta, Microsoft and OpenAI have signed up to the safety commitments, which are being led by child online safety organisations.
Some of the world’s biggest tech and AI firms have agreed to follow new online safety principles designed to combat the creation and spread of AI-generated child sexual abuse material.
Amazon, Google, Meta, Microsoft and ChatGPT creator OpenAI are among the companies to have signed up to the principles, called Safety By Design.
The commitments, drawn up by child online safety group Thorn and fellow nonprofit All Tech is Human, see the firms pledge to develop, deploy and maintain generative AI models with child safety at the centre, in an effort to prevent the misuse of the technology for child exploitation.
Under the principles, firms commit to developing, building and training AI models that proactively address child safety risks, for example by ensuring training data does not include child sexual abuse material, and to maintaining safety after release by staying alert and responding to child safety risks as they emerge.
Generative AI tools such as ChatGPT have become the key area of development within the technology sector over the last 18 months, with an array of AI models and content generation tools being developed and launched by the major firms.
The rapid rise has seen social media and other platforms flooded with AI-generated words, images and videos, with many online safety groups warning of the implications of more fake and misleading content being seen and spread online.
Earlier this year, children’s charity the NSPCC warned that young people were already contacting Childline about AI-generated child sexual abuse material.
Speaking about the new agreed principles, Dr Rebecca Portnoff, vice president of data science at Thorn, said: “We’re at a crossroads with generative AI, which holds both promise and risk in our work to defend children from sexual abuse.
“I’ve seen first-hand how machine learning and AI accelerates victim identification and child sexual abuse material detection. But these same technologies are already, today, being misused to harm children.
“That this diverse group of leading AI companies has committed to child safety principles should be a rallying cry for the rest of the tech community to prioritise child safety through Safety by Design.
“This is our opportunity to adopt standards that prevent and mitigate downstream misuse of these technologies to further sexual harm against children. The more companies that join these commitments, the better that we can ensure this powerful technology is rooted in safety while the window of opportunity is still open for action.”