The misinformation crisis is upon us – and ‘fake’ Reform candidates are the least of our worries
Seeing a parliamentary candidate forced to deny he is a computer-generated robot wasn’t on anyone’s bingo card for this election, but this case represents yet another dire warning about the dangers of AI, writes Marc Burrows
In a quintessentially 2024 twist, Reform UK has been forced to drag one of its election candidates from his sickbed to prove that he does, in fact, exist.
Mark Matlock, the 30-year-old candidate for Clapham and Brixton Hill, missed election night (coming fifth with around 1,700 votes) due to pneumonia. His absence, combined with ChatGPT-like pledges and an AI-assisted campaign photo that looks like someone has tried to recreate Elon Musk in The Sims, led some to believe the candidate was an AI creation planted by Nigel Farage’s party to increase vote share.
This cast doubt on other Reform candidates with no online footprint. Max Nelson, the Islington South hopeful, is represented by a featureless white face and lists no opinions, which would admittedly tick a lot of boxes for some Reform voters. Similar situations exist for Cardiff West’s Peter Hopkins and Croydon South’s Robert Bromley. Most of the party’s London candidates seem to be opinionless blank-faced blobs.
Matlock, it turns out, is real, and his would-be south London constituents could easily have tracked him down at his home in the Cotswolds, 90 miles away. He was just poorly on election night and had spent the previous month campaigning hard for south Londoners by knocking on doors for Farage, 70 miles away in Clacton.
Reform insists all 609 candidates are real, even if, like South Bristol candidate Richard Visick, they live in Gibraltar – 1,700 miles away from the constituency he hoped to represent.
We must take Reform at their word. Electoral fraud is, after all, a serious offence. The story does, however, highlight a broader issue: in 2024, it’s increasingly hard to believe what we see.
AI-generated video has improved dramatically. Once, it was easy to spot the problems with the uncanny CGI Barack Obama, or to notice the mangled hands and haunted faces that characterised images created in picture generators like Midjourney or DALL-E. Those days are gone. TikTok’s Symphony lets advertisers create lifelike "digital influencers" – fictional attractive young people to endorse your product. A deepfake porn industry is growing rapidly.
AI voice cloning is cheap and accessible. Upload Harry Potter audiobook snippets, and Stephen Fry can promote your brand. Feed in Keir Starmer’s interview on the Today programme, and suddenly the PM is insulting Scousers. OpenAI’s Sora lets users upload videos and edit the time of day, the weather, or background elements. Suddenly an explosion from a Marvel movie is happening today, in central London, and spreading panic online. Remember January’s fake burning Eiffel Tower? That’s just the beginning.
We have all learned to trust our senses. But what happens when what we see and hear can be so easily manipulated and so easily shared on social media? While fielding non-existent candidates is illegal, fake campaign material isn’t. The Electoral Commission even warned about online misinformation at the start of the election campaign, particularly AI-created content, stating starkly that it "does not have legal powers to regulate the content of campaign material. No UK organisation does."
Reform would face serious consequences for inventing candidates, but nothing legally prevents a party from using AI to create campaign photos, write material, or produce video ads featuring nonexistent voters in fictional towns. This technology is already here – Toys ‘R’ Us used OpenAI’s Sora for a recent TV ad.
We can no longer trust our eyes and ears. Remember Farage’s 2015 billboard "Breaking Point: The EU has failed us all", showing asylum seekers who were actually crossing the Slovenia/Croatia border? Now imagine a video that controls every aspect: fictional asylum seekers designed to conform to every racist stereotype, confirming reactionary fears, trudging into Dover to steal jobs and soak up benefits.
The Electoral Commission advises: "[Voters should] think critically about the information they see, before deciding whether to let it influence their vote. Voters should look for an imprint, showing who has paid for material to be created and promoted. This is a legal requirement for all election material."
We need new instincts for spotting red flags and questioning digital material. We must ask who is posting content and what their agenda is. We need to become cynical. The misinformation crisis is here, and Mark Matlock’s AI-assisted campaign photo could soon be the least of our concerns.