ChatGPT is being used to disrupt elections around the world, OpenAI warns
Hacking groups affiliated with regimes in China, Iran and Russia named as suspects in a 54-page report
OpenAI has warned that foreign hacking groups are using its AI tools ChatGPT and Dall-E in an attempt to interfere with elections.
A 54-page report revealed that the company has already detected 20 campaigns around the world since the start of the year, with more expected in the build-up to the US presidential election next month.
Manipulative activities involving ChatGPT ranged from writing articles for websites to generating fake personas and posting content on social media. They included “multi-stage efforts to analyse and reply to social media posts”.
This year has been touted as the biggest ever demonstration of democracy, with more than 50 countries heading to the polls. The recent emergence of generative artificial intelligence has raised concerns about potential misuse of the technology to influence elections, prompting several leading firms to take special measures in an effort to prevent interference.
Last year, OpenAI chief executive Sam Altman said he was “nervous” about the threat generative AI poses to election integrity, testifying before Congress that it could be used to spread disinformation in ways never before possible.
“In this year of global elections, we know it is particularly important to build robust, multi-layered defences against state-linked cyber actors and covert influence operations that may attempt to use our models in furtherance of deceptive campaigns on social media and other internet platforms,” OpenAI’s latest report stated.
“Since the beginning of the year, we’ve disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models.”
OpenAI named hacking groups affiliated with regimes in China, Iran and Russia as suspects in some of the interference operations.
Several case studies were detailed in the report, with examples including a “Russia-origin threat actor” generating English- and French-language content targeting West Africa and the UK.
“This operation used our models to generate short comments, long-form articles and images. The long-form articles in English and French were then posted on a cluster of websites that posed as news outlets in Africa and the UK,” the report stated.
“This operation represented an unusual combination of efforts to build an audience... The UK-focused ‘news’ brands appear to have established ‘information partnerships’ with a number of local organisations, including a church in Yorkshire, a school in Wales, and an association of chambers of commerce in California.”
The specific organisations were not named in the report – The Independent has reached out to OpenAI for further information.