Meta and OpenAI scramble to stop AI-generated images posing as real ones
Watermarks and labels attempt to identify false images
Meta and OpenAI are both scrambling to develop ways to label images that have been generated by artificial intelligence.
The two companies are adding labels – hidden or otherwise – that should allow people to track the source of an image or other piece of content.
OpenAI will add features to ChatGPT and DALL-E 3 that place a tag in an image's metadata making clear it was created with AI. The tag will use the C2PA standard, which aims to let images carry more information about how they were created, and which is also being adopted by camera manufacturers and others who make image-generation tools.
However, OpenAI noted that it “can easily be removed either accidentally or intentionally”, and so is not a guarantee. Most social media sites remove that metadata, for instance, and it can be removed simply by taking a screenshot.
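The weakness described above follows from where the tag lives: provenance metadata sits alongside the pixel data rather than inside it, so any process that copies only the pixels (a screenshot, a re-encode) silently drops it. The following is a minimal illustrative sketch of that failure mode, not real C2PA handling; the `Image` class, the `"c2pa.provenance"` key, and both helper functions are hypothetical stand-ins:

```python
from dataclasses import dataclass, field


@dataclass
class Image:
    # Illustrative stand-in: real formats (PNG, JPEG) carry metadata
    # in chunks/segments stored alongside, not inside, the pixel data.
    pixels: bytes
    metadata: dict = field(default_factory=dict)


def generate_ai_image() -> Image:
    # An AI generator embeds a provenance tag in the metadata
    # (hypothetical key, standing in for a C2PA-style manifest).
    return Image(pixels=b"\x00" * 64,
                 metadata={"c2pa.provenance": "generated-by-ai"})


def screenshot(img: Image) -> Image:
    # A screenshot re-renders only the visible pixels;
    # the metadata is never copied across.
    return Image(pixels=img.pixels)


original = generate_ai_image()
copy = screenshot(original)
print(original.metadata)  # the tag is present on the original
print(copy.metadata)      # the copy has no metadata at all
```

The pixels of `copy` are identical to `original`, yet the provenance tag is gone, which is why detection schemes that rely on metadata alone are, as OpenAI says, not a guarantee.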
Meta’s tools will attempt to detect content on Facebook, Instagram and Threads that has been generated by AI and label it.
The social media giant said it was currently building the capability and will roll it out across its social platforms in the “coming months” and ahead of a number of major global elections this year.
Meta already places a label on images created using its own AI. It said the new capability will enable it to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, as part of an industry-wide effort to adopt “best practice” and place “invisible markers” in images and their metadata to help identify them as AI-generated.
Former deputy prime minister Sir Nick Clegg, now Meta’s president of global affairs, cited the potential for bad actors to use AI-generated imagery to spread disinformation as a key reason for introducing the feature.
“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,” he said.
“People and organisations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it.
“Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead.
“In the meantime, it’s important people consider several things when determining if content has been created by AI, like checking whether the account sharing the content is trustworthy or looking for details that might look or sound unnatural.”
Additional reporting by agencies