Worried about AI? How to easily spot a deepfake image

There are some telltale signs

Max Freeman-Mills
Monday 30 September 2024 16:24 BST
AI-generated images are becoming increasingly sophisticated (iStock)


AI is well and truly here – and with it deepfake images.

Most people know you can’t necessarily trust every image you see online, as AI editing tools can produce uncannily realistic results.

These are often known as ‘deepfakes’, particularly when talking about the manipulation of someone’s likeness. Some AI features are built right into smartphones, like Google’s Add Me feature on the Pixel 9, which lets the person taking the photo be inserted into the shot.

Platforms like Midjourney make AI image generation simple, and the results can be seriously realistic – just think of the picture of the Pope wearing a white Balenciaga puffer jacket that went viral last year, fooling many a person.

If you want to get better at spotting images that aren’t quite right, then you’re in the right place. We’ve gathered some top tips for spotting deepfake images, so you can avoid being fooled by too-good-to-be-true photos.

Zoom in on details

Whether an image is completely AI-generated or simply heavily edited, there are some telltale signs that most deepfakes still exhibit. Zooming in on details such as people’s eyes and the edges of their faces can often reveal inconsistencies or blurriness that should be a red flag.

For AI-generated imagery, hands and fingers are still frequently the site of issues, so any weirdness in these areas can be obvious. Plus, if a deepfake is made with a face replacement, you’ll often see slight blurriness around the edge of the whole face. In videos, lips might not sync properly with the words they’re supposedly saying, which can also clue you in.

Think emotionally

One thing that many face-swap apps or deepfake programmes can struggle with is particularly complex emotions – expressions on real faces are unbelievably precise and complicated, after all.

So, if you’re looking at a beaming smile that seems a little too rigid, or a face that’s unbelievably neutral, that could be another clue that something’s up. This is easier to spot in videos, too, where an expression that doesn’t seem quite in tune with what someone’s saying can stand out more.

Look at the overall picture

This might seem a little hard to define, but most AI generators still default to creating images that don’t quite look real, since they’re almost too perfect. This might mean that a group photo has everyone lit almost exactly the same way, without any obscuring shadows or differences, or it might just make for a plasticky, wax-like aesthetic that feels a little ‘off’.

While it might not always be easy to pin down a single pixel that concerns you, if you think a photo looks a little unreal then you should probably take the time to do more research into it.

A deepfake image on a computer screen

Don’t ignore the background

Especially in photos of people, it can be tempting to focus on things like someone’s face or hair to try to figure out if they’re real, but the background of an image can often be just as obviously wrong. AI-generated backgrounds will sometimes have physical contradictions or architecture that doesn’t actually make any sense, for instance.

The same goes for items and devices that are out of focus or simply not in the centre of a shot: checking these is a great way to detect that an image has been AI-generated. This won’t necessarily work as well on a deepfake where only the face has been swapped out, though.

Research real-world context

This tactic takes things outside the realm of technical expertise and microscopic analysis and simply stands as a reminder that you can always do a web search to see if the image you’re looking at lines up with context. This is particularly useful if it’s a purported photo or video of a public figure like a politician, as it can be quite easy to establish where they were at a given date or time, and if there’s any reporting about what the image shows.

If it’s a social media image of someone less notable, that might be more difficult, but it still pays to be a little careful and think things through. After all, leaping to conclusions based on an image that doesn’t end up being accurate is potentially a little embarrassing.
