Deepfakes a major concern for general election, say IT professionals
A survey of IT staff revealed concerns about the use of AI-generated content to undermine democratic processes.
More than half of IT professionals have said they fear deepfakes generated by artificial intelligence (AI) could affect the result of the general election, according to new research.
A survey of workers in the sector by BCS, The Chartered Institute for IT, found 65% said they are concerned an election result could be affected by misleading AI-generated content.
The study found that 92% believe political parties should agree to be transparent and declare how and when they use AI in their campaigns, and that further technical and policy solutions are needed to address the issue.
Last year, Technology Secretary Michelle Donelan told MPs the Government is working with social media platforms on measures to combat deepfakes, saying “robust mechanisms” will be in place by the time of the general election, which is due by January 2025.
According to the poll of 1,200 IT professionals, public education and technical tools such as watermarking and labelling of AI content are seen as the most effective measures for limiting the impact of deepfakes.
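The article does not describe how watermarking or labelling would work in practice. As a rough illustration only, the sketch below shows the general idea behind provenance labelling schemes (in the spirit of standards such as C2PA, though much simplified): a publisher attaches a signed manifest declaring whether AI was used, and anyone can later check that the label is genuine and still matches the content. The function names and the shared-key signing are hypothetical; real schemes use public-key signatures and richer manifests.

```python
import hashlib
import hmac
import json

# Illustrative only: real labelling standards use public-key signatures,
# not a shared secret like this.
SECRET_KEY = b"publisher-signing-key"

def make_label(content: bytes, ai_generated: bool) -> dict:
    """Create a signed provenance label declaring whether AI was used."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": ai_generated,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(content: bytes, label: dict) -> bool:
    """Check the label's signature and that it still matches the content."""
    claimed = {k: v for k, v in label.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, label["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

image = b"...raw image bytes..."
label = make_label(image, ai_generated=True)
print(verify_label(image, label))        # label is intact and matches
print(verify_label(b"tampered", label))  # altered content fails the check
```

The binding between label and content is what matters here: a label survives only as long as the content it describes is unchanged, which is why such schemes are proposed for flagging AI-generated campaign material.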
A number of senior politicians, including Prime Minister Rishi Sunak, Labour leader Sir Keir Starmer and London Mayor Sadiq Khan, have been the subjects of deepfakes in the past.
BCS chief executive Rashik Parmar said: “Technologists are seriously worried about the impact of deepfakes on the integrity of the general election – but there are things politicians can do to help the public and themselves.
“Parties should agree between them to clearly state when and how they are using AI in their campaigns.
“Official sources are just one part of the problem. Bad actors outside the UK and independent activists inside can do even more to destabilise things.
“We need to increase public awareness of how to spot deepfakes, double-check sources and think critically about what we’re seeing.
“We can support that with technical solutions, and the most popular in the poll was a clear labelling consensus where possible – and it would be ideal if this could be done globally with the US election coming too.”
A spokesperson for the Department for Science, Innovation and Technology said: “We are working extensively across Government to ensure we are ready to rapidly respond to misinformation.
“Alongside our Defending Democracy Taskforce, the Digital Imprints Regime requires certain political campaigning digital material to have a digital imprint making clear to voters who is promoting the content.
“Once implemented the Online Safety Act will also require social media platforms to swiftly remove illegal misinformation and disinformation – including where it is AI-generated – as soon as they become aware of it.”