Deepfakes are the most dangerous crime of the future, researchers say
Deepfakes are hard to detect and could be used for a range of crimes, making them incredibly dangerous
Deepfakes are the most dangerous form of crime committed using artificial intelligence, according to a new report from University College London.
The term “deepfake” refers to a video in which artificial intelligence and deep learning – an algorithmic learning method used to train computers – have been used to make a person appear to say something they have not.
Notable examples include a manipulated video of Richard Nixon delivering an alternative presidential address about the Apollo 11 mission, and one of Barack Obama insulting Donald Trump.
The authors said that deepfake content is dangerous for a number of reasons, a prominent one being that it is difficult to detect: deepfake detectors must be trained on hundreds of videos and must succeed in every instance, while malicious individuals only have to be successful once.
A second reason is the variety of crimes deepfakes could be used for, such as discrediting a public figure or impersonating a family member. In the long term, this could lead to a distrust of audio and video evidence in general, which the researchers say would be an inherent societal harm.
“People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity,” said Dr Matthew Caldwell, who authored the research.
“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”
Currently, the predominant use of deepfakes is in pornography. In June 2020, research indicated that 96 per cent of all deepfakes online were pornographic, and that nearly 100 per cent of those depicted women.
However, many people consider online disinformation to be a strong precursor to the malicious use of deepfakes. In 2019, the Conservative Party doctored a video of now-Labour leader Keir Starmer to suggest that he had performed worse in an interview than he actually had.
A similar tactic was used by Republican supporters in America, who circulated an edited video of Democratic House Speaker Nancy Pelosi that made her appear intoxicated – a video that was shared by Donald Trump.
While these videos were not deepfakes themselves, they show that there is a market for video content that can damage the standing of political opponents.
As well as deepfake content, the researchers identified five other crimes utilising artificial intelligence that could be of high concern in the future. These include using driverless cars as weapons, using machine learning to tailor phishing messages, disrupting machine-controlled systems, writing fake news, and harvesting online information for blackmail.