Deepfakes are the most dangerous crime of the future, researchers say

Deepfakes are hard to detect and could be used for a range of crimes, making them incredibly dangerous

Adam Smith
Wednesday 05 August 2020 17:38 BST
A woman in Washington, DC, views a manipulated video on January 24, 2019, that changes what is said by President Donald Trump and former president Barack Obama, illustrating how deepfake technology can deceive viewers. “Deepfake” videos that manipulate reality are becoming more sophisticated and realistic as a result of advances in artificial intelligence, creating the potential for new kinds of misinformation with devastating consequences (ROB LEVER/AFP via Getty Images)

Deepfakes are the most dangerous form of crime enabled by artificial intelligence, according to a new report from University College London.

The term “deepfake” refers to a video in which artificial intelligence and deep learning – a technique for training computers on large sets of examples – have been used to make a person appear to say something they have not.
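Many face-swap deepfakes rest on a simple encoder-decoder idea: one shared encoder learns a compact representation of faces, and a separate decoder is trained for each identity, so a face captured from person A can be rendered with person B’s appearance. The sketch below (in PyTorch) is a minimal, assumption-laden illustration of that architecture only – the layer sizes and the make_decoder helper are invented for the example, and real systems add face alignment, adversarial training and blending.

# Illustrative sketch only: the shared-encoder / per-identity-decoder idea
# behind many face-swap deepfakes. Layer sizes are invented for the example.
import torch
import torch.nn as nn

encoder = nn.Sequential(                                    # shared across identities
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
    nn.Flatten(), nn.Linear(64 * 16 * 16, 256),             # latent face code
)

def make_decoder():                                         # one decoder per person
    return nn.Sequential(
        nn.Linear(256, 64 * 16 * 16), nn.Unflatten(1, (64, 16, 16)),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

faces_a = torch.rand(8, 3, 64, 64)   # stand-in for aligned face crops of person A
# Training: each person is reconstructed through their own decoder ...
loss_a = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
# ... the swap: person A's expression rendered with person B's appearance.
fake_b = decoder_b(encoder(faces_a))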

Notable examples include a manipulated video of Richard Nixon delivering the contingency address prepared in case the Apollo 11 moon landing failed, and a deepfake of Barack Obama appearing to insult Donald Trump.

The authors said that deepfake content is dangerous for a number of reasons, a prominent one being that it is difficult to detect: detection algorithms must be trained on hundreds of videos and have to succeed every single time, while a malicious actor only has to slip past them once.
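That asymmetry compounds quickly. As a rough back-of-the-envelope illustration (the figures below are assumptions chosen for the arithmetic, not numbers from the UCL report), even a detector that catches 99 per cent of fakes will almost certainly be beaten by an attacker who simply keeps trying:

catch_rate = 0.99   # hypothetical chance the detector flags any one fake
attempts = 300      # hypothetical number of fakes an attacker releases

# Probability that at least one fake slips through undetected
p_evade = 1 - catch_rate ** attempts
print(f"{p_evade:.1%}")   # ~95.1%: near-certain that something gets through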

A second reason is the variety of crimes deepfakes could be used for, such as discrediting a public figure by impersonating a family member. The long-term effect could be a general distrust of audio and video evidence, which the researchers say would be a harm to society in itself.

“People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity”, said Dr Matthew Caldwell, who authored the research.

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”

Currently, the predominant use of deepfakes is pornography. Research published in June 2020 indicated that 96 per cent of all deepfakes online are pornographic, and that almost all of those depict women.

However, many people consider the use of doctored video in online disinformation to be a strong precursor to the rise of deepfakes. In 2019, the Conservative Party doctored a video of now-Labour leader Keir Starmer to suggest he had performed worse in an interview than he actually had.

Republican supporters in the US took a similar approach, circulating an edited video of Democratic House Speaker Nancy Pelosi that was slowed down to make her appear intoxicated – a video Donald Trump then shared.

While not deepfakes themselves, these clips show there is a market for manipulated video designed to make political opponents look bad.

As well as deepfake content, the researchers identified five other crimes utilising artificial intelligence that could be of high concern in the future.

These include using driverless cars as weapons, using machine learning to tailor phishing messages, disrupting machine-controlled systems, writing fake news, and harvesting online information for blackmail.
