Donald Trump is invoking AI in the most dangerous possible way

Analysis: The real danger of artificial intelligence might not be that it makes us believe in fake things – but the opposite

Andrew Griffin
Monday 12 August 2024 18:31 BST


Donald Trump’s latest controversial post made use of a word that we have not yet heard much in political debate, but one that is likely to become more common. “She ‘A.I.’d’ it,” he wrote on Truth Social.

It was part of a long post in which he accused Kamala Harris and her campaign of “cheating”. He falsely claimed that she had used artificial intelligence to create a “fake crowd picture” suggesting a large turnout when in fact “there was nobody there”.

Mr Trump even pointed to what he suggested was evidence. The cheating was “later confirmed by the reflection of the mirror like finish of the Vice Presidential plane”, he posted.

The claims are false: videos and bystander reports indicate that Ms Harris really did bring a huge crowd to the rally. But Mr Trump’s post points to a very real concern about the growing use of artificial intelligence.

While people have spent much time worrying about – and attempting to guard against – the use of fake images in everything from political campaigns to advertising, there has been less attention paid to what that anxiety might do to our trust in real images. Perhaps the biggest and largely overlooked danger of artificial intelligence is that it will stop us from believing anything at all.

It is not clear exactly what Mr Trump intended with his Truth Social post, which accused Ms Harris of cheating more broadly as well as of creating fake images. But it was clear that he meant the accusation to apply to more than just those pictures of Ms Harris and her plane.

He suggested that she had faked other crowds. And he suggested that anyone who would create a fake image “will cheat at ANYTHING”.

That may well be the eventual effect of such accusations. Suggesting that real images are fake could mean that people simply refuse to believe what they see – and the consequence could be not that people are duped by fake images but that they do not trust real ones.

It may not matter that such claims can be easily discredited. Mr Trump’s claims about the images of Ms Harris were quickly shot down by experts who pointed to other video footage and further evidence – but those who wished to doubt the size of the vice president’s crowds had probably already made up their minds.

What’s more, not all images will be quite so easy to prove authentic. It is often easy to point to the parts of an image that show it has been deepfaked with AI – questionable hands, for instance – but much harder to conclusively demonstrate that a picture is real.

It is notable, too, that Mr Trump’s post makes use of some of the techniques that can be used to spot actual deepfaked images. His claim that the supposedly fake image was confirmed as such by the “reflection of the mirror like finish” of the plane is exactly the kind of detailed analysis that is often required these days to spot a fake image.

AI-generated images do indeed often fail to include important details such as accurate reflections. Image generators do not work in 3D space and so have no sense of what should appear inside a reflection – which is why researchers have often pointed to reflections as a way of testing whether a picture is legitimate.

But by using those same techniques to analyse a fake image, Mr Trump is also undermining them, whether intentionally or not. Knowing whether political and other news imagery is real is likely to require such forensic scrutiny in years to come – and we will need to trust those techniques if we are to trust the pictures we see.

Technology companies are working on a variety of solutions to this problem: fingerprinting images so that it is possible to check whether they came from an image generator or a real camera, for instance. But none of those solutions is yet foolproof, and all rely on both creators and audiences being trustworthy and conscientious in the way they make and share images.

Yann LeCun, the researcher regarded as one of the “godfathers of AI”, noted that Mr Trump’s post marks a change in the way that AI is used in political debate.

“Tired: political propaganda by using AI. Wired: political propaganda by claiming your opponent uses AI,” he wrote.

But the really damaging part of Mr Trump’s claim that Ms Harris “AI’d it” might not be the accusation that she is using artificial intelligence. The really damaging part might be that all anyone needs to do is make us think that she might have done.
