Meta and OpenAI ‘disrupt’ Israeli firm’s covert operation to influence views on Gaza war
STOIC used AI models to create accounts posing as Jewish students and African Americans to influence online audiences in US and Canada
Meta and OpenAI claim to have disrupted covert online influence operations run by an Israeli company amid the intensifying war in Gaza.
The tech giants said STOIC, a political marketing and business intelligence firm based in Tel Aviv, had used their products and tools to manipulate political conversations online.
OpenAI, the maker of ChatGPT, said in a report on Thursday that it banned a network of accounts linked to STOIC, which it accused of posting anti-Hamas and pro-Israel content and acting as a “for-hire Israeli threat actor”.
The accounts used OpenAI models to spread disinformation about the war in Gaza and, to a lesser extent, about the ongoing Indian election.
Specifically, they used the AI models to generate “articles and comments that were then posted across multiple platforms, notably Instagram, Facebook, X, and websites associated with this operation”, OpenAI said in a blog post. The content focused on specific themes, chiefly the Gaza war.
The influence operation also faked engagement, OpenAI alleged.
“Some of the campaigns we disrupted used our models to create the appearance of engagement across social media, for example, by generating replies to their own posts to create false online engagement,” it said.
But the network’s activity, according to OpenAI, “appears to have attracted little if any engagement, other than from its own inauthentic accounts”.
STOIC describes itself as an AI content creation system that helps users “automatically create targeted content and organically distribute it quickly to the relevant platforms”.
Meta confirmed in a quarterly security report on Wednesday that it removed over 500 Facebook accounts, one group and 11 pages, along with more than 30 Instagram accounts, tied to the same influence operation.
It said accounts posing as Jewish students, African Americans and other concerned citizens targeted audiences in the US and Canada as part of the covert campaign linked to STOIC.
“There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn’t really impacted our ability to detect them,” Meta’s head of threat investigations Mike Dvilyanski told Reuters.
The Facebook parent said it had banned STOIC and issued a letter “demanding that they immediately stop activity that violates Meta’s policies”.
STOIC did not immediately respond to a request for comment from The Independent.