Donald Trump is invoking AI in the most dangerous possible way
Analysis: The real danger of artificial intelligence might not be that it makes us believe in fake things – but the opposite
Donald Trump’s latest controversial post made use of a word that we have not heard much in political debate so far, but one that is likely to become more common. “She ‘A.I.’d’ it,” he wrote on Truth Social.
It was part of a long post in which he accused Kamala Harris and her campaign of “cheating”. He falsely claimed – despite the evidence – that she had used artificial intelligence to create a “fake crowd picture” that suggested there was a large turnout when in fact “there was nobody there”.
Mr Trump even pointed to what he suggested was evidence. The cheating was “later confirmed by the reflection of the mirror like finish of the Vice Presidential plane”, he posted.
The claims are false: videos and bystander reports indicate that Ms Harris really did bring a huge crowd to the rally. But Mr Trump’s post points to a very real concern about the growing use of artificial intelligence.
While much time has been spent worrying about – and attempting to prevent – the use of fake images in everything from political campaigns to advertising, far less attention has been paid to what that growing anxiety might do to our trust in real images. Perhaps the biggest and largely overlooked danger of artificial intelligence is that it will stop us from believing anything at all.
It is not clear what Mr Trump intended to achieve with his Truth Social post, which accused Ms Harris of cheating more broadly as well as of creating fake images. But he clearly meant his accusation to apply to more than just those pictures of Ms Harris and her plane.
He suggested that she had faked other crowds, and that anyone who would create a fake image “will cheat at ANYTHING”.
That may well be the eventual effect of such accusations. Suggesting that real images are fake could mean that people simply refuse to believe things they see – and the consequence could be not that people are duped by fake images but that they do not trust real ones.
It may not matter that such claims can be easily discredited. Mr Trump’s assertions about the images of Ms Harris were quickly shot down by experts who pointed to video footage and further evidence – but those who wanted to doubt the size of the vice president’s crowds were probably already convinced.
What’s more, not every image will be so easy to prove authentic. It is often easy to point to the parts of an image that show it has been deepfaked with AI technology – questionable hands, for instance – but much harder to conclusively demonstrate that a picture is real.
It is notable, too, that Mr Trump’s post makes use of some of the techniques that can be used to spot genuine deepfakes. His reference to the supposedly fake image being confirmed as such by the “reflection of the mirror like finish” of the plane is exactly the kind of detailed analysis that is often required nowadays to spot a fake image.
AI-generated images do indeed often fail to include important details such as accurate reflections. Image generators have no sense of what ought to appear inside a reflection – they do not work in 3D space – and so researchers often point to reflections as a way of testing whether a picture is legitimate.
But by using those same techniques to analyse a fake image, Mr Trump is also undermining them, whether intentionally or not. Knowing whether political and other news imagery is real is likely to require such forensic scrutiny in years to come – and we will need to trust those techniques if we are to trust the pictures we see.
Technology companies are working on a variety of solutions to this problem: fingerprinting images so that it is possible to check whether they came from an image generator or a real camera, for instance. But none of those solutions is yet foolproof, and all rely on both creators and audiences being trustworthy and conscientious in the way they make and share images.
Yann LeCun, the researcher regarded as one of the “godfathers of AI”, noted that Mr Trump’s post marks a change in the way AI figures in political debate.
“Tired: political propaganda by using AI. Wired: political propaganda by claiming your opponent uses AI,” he wrote.
But the really damaging part of Mr Trump’s claim that Ms Harris “AI’d it” might not be the accusation that she is using artificial intelligence. The really damaging part might be that all anyone needs to do is make us think that she might have done.