
YouTube creators will soon have to disclose use of gen AI in videos or risk suspension

YouTube is rolling out new rules for AI content, including requiring creators to reveal whether they’ve used generative artificial intelligence to make realistic-looking videos

Via AP news wire
Tuesday 14 November 2023 15:32 GMT


YouTube is rolling out new rules for AI content, including a requirement that creators reveal whether they've used generative artificial intelligence to make realistic-looking videos.

In a blog post Tuesday outlining a number of AI-related policy updates, YouTube said creators who don't disclose whether they've used AI tools to make “altered or synthetic” videos face penalties, including having their content removed or being suspended from the platform's revenue-sharing program.

“Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators on our platform,” Jennifer Flannery O’Connor and Emily Moxley, vice presidents for product management, wrote in the blog post. “But just as important, these opportunities must be balanced with our responsibility to protect the YouTube community.”

The restrictions expand on rules that YouTube's parent company, Google, unveiled in September requiring that political ads on YouTube and other Google platforms using artificial intelligence come with a prominent warning label.

Under the latest changes, which will take effect by next year, YouTubers will get new options to indicate whether they're posting AI-generated videos that, for example, realistically depict an event that never happened or show someone saying or doing something they didn’t actually do.

“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” O'Connor and Moxley said.

Viewers will be alerted to altered videos with labels, including prominent ones on the YouTube video player for sensitive topics.

The platform is also deploying AI to root out content that breaks its rules, and the company said the technology has helped detect “novel forms of abuse” more quickly.

YouTube’s privacy complaint process will be updated to allow requests for the removal of an AI-generated video that simulates an identifiable person, including their face or voice.

YouTube music partners such as record labels or distributors will be able to request the takedown of AI-generated music content “that mimics an artist’s unique singing or rapping voice.”
