YouTube now tells you if videos were ‘captured with a camera’ – because so many are not
Labels have been proposed as a way to limit the danger of AI videos
YouTube will now tell its users when videos were “captured with a camera”, because so many of them aren’t.
The labels will show beneath videos under a heading reading “how this content was made”, with the message: “This content was captured using a camera or other recording device.”
The labels are a response to the rise of artificial intelligence and the threat that videos made with it might pose. Numerous experts have suggested that labelling videos as real or not will help people avoid being misled by fake footage.
The labels rely on a standard called C2PA, which various platforms can use to make clear whether a video has been edited or whether it is authentic. Some camera companies have integrated the technology into their cameras, for instance, so that YouTube is able to see automatically whether a video was actually recorded on a device.
Videos can be edited and still receive the tag, but not in a way that involves “significant alterations” to their “core nature or content”, or that makes it impossible to trace the video back to where it came from.
Google had already allowed YouTube users to press a button identifying their videos as including “altered or synthetic content”, so that they could flag their own AI creations. The new system works in the opposite way, allowing users to label real videos, and it relies on technical checks that should make it harder to lie.