Meta responds after Instagram users complain feed is being filled with ‘made with AI’ images
Label is intended to alert users to pictures that might be misleading – but has been applied to a vast number of posts
Meta says it is “evaluating” its approach to labelling AI posts after users complained their feeds were being filled with warnings.
Like many other tech companies, Meta has largely allowed AI-generated posts on its platforms, such as Instagram, provided they carry a label. The company’s executives have said that approach lets people use AI in their work while ensuring their audience is aware that what they see is not necessarily real.
That has led to a warning that can appear alongside posts on Instagram and Meta’s other platforms, indicating that the image was “made with AI”. The label is applied automatically, though creators can also flag their own posts as having used artificial intelligence.
In recent days, however, photographers and others have said that their posts are being flagged as having been made with AI even when the use of such technology is minimal: changing a single pixel with an AI-powered tool can be enough to trigger the flag.
That has led to complaints that users might mistrust a picture that is actually legitimate, simply because an invisible part of it was edited using an AI tool in Photoshop, for instance.
Meta now says it is aware of the feedback and has suggested that it could change those requirements in future.
“Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image,” a Meta spokesperson said.
“We rely on industry standard indicators that other companies include in content from their tools, so we’re actively working with these companies to improve the process so our labeling approach matches our intent.”
Like many platforms, Meta relies in part on fingerprinting technologies that allow editing apps such as Adobe’s Photoshop to embed information in a file indicating that it has been edited using artificial intelligence. That data is not visible to the person viewing the image, but it is included in the file’s metadata so that others can trace its provenance.
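Meta has not published the exact signals it checks, but the general idea can be illustrated with a short sketch. The hypothetical Python script below, using only the standard library, scans an image file’s embedded XMP metadata packet for provenance markers that editing tools may write, such as the IPTC digital-source-type value “trainedAlgorithmicMedia” or a reference to a C2PA Content Credentials manifest. The specific marker strings and the substring-matching approach are assumptions for illustration; real systems parse the metadata structurally.

```python
# Illustrative sketch only: Meta's actual detection pipeline is not public.
# This scans an image file's raw bytes for an embedded XMP metadata packet
# and checks it for provenance markers that editing tools may write. The
# specific markers below are assumptions drawn from public provenance
# standards (IPTC DigitalSourceType, C2PA), not Meta's confirmed criteria.

import sys

# Substrings that, if present in the XMP packet, hint at AI involvement.
AI_MARKERS = [
    b"trainedAlgorithmicMedia",               # IPTC value for AI-generated output
    b"compositeWithTrainedAlgorithmicMedia",  # IPTC value for AI-assisted edits
    b"c2pa",                                  # C2PA / Content Credentials manifest
]


def extract_xmp(data: bytes) -> bytes | None:
    """Return the raw XMP packet embedded in the file, if any.

    Simplified: looks for the standard xpacket delimiters rather than
    walking the file format's segment structure.
    """
    start = data.find(b"<?xpacket begin=")
    if start == -1:
        return None
    end = data.find(b"<?xpacket end=", start)
    if end == -1:
        return None
    return data[start:end]


def looks_ai_edited(path: str) -> bool:
    """Report whether the file's XMP metadata contains any AI marker."""
    with open(path, "rb") as f:
        data = f.read()
    xmp = extract_xmp(data)
    if xmp is None:
        return False
    return any(marker in xmp for marker in AI_MARKERS)


if __name__ == "__main__":
    for path in sys.argv[1:]:
        flagged = looks_ai_edited(path)
        print(f"{path}: {'AI indicators found' if flagged else 'no AI indicators'}")
```

Under this model, a photographer’s complaint is easy to see: a single AI-assisted retouch can cause the editor to write one of these markers into the metadata, and a platform reading only for their presence cannot tell a wholly generated image from a lightly edited photograph.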