In Focus

How robot voice impersonators are coming for your bank accounts – and already scamming thousands every day

Voice-activated ID was meant to be a foolproof way of accessing bank accounts and other private information. Not any more, says Mark Hollingsworth, who reveals how AI is duping people into handing over thousands by using new voice tech which can take on anyone’s persona

Monday 08 January 2024 16:00 GMT
As more customers use voice-activated ID, AI is increasingly being used to hack the systems holding their private information (Getty Images/iStockphoto)

When the CEO of a UK-based energy company took the phone call, the voice sounded familiar, if unusually impatient. It was his boss in Germany, and he had an urgent request: he wanted the CEO to send $243,000 (£191,000) to a Hungarian supplier as soon as possible by wire transfer. “It is urgent,” he said. The CEO was assured he would be reimbursed.

After the money was transferred, it was forwarded to an account in Mexico and then on to other locations. But the money was never reimbursed, so when the CEO received two more calls asking for further payments, he grew suspicious and the extra transfers were never made.

In fact, the caller was not his boss but AI voice-generation software programmed to mimic his voice and so facilitate the bank transfer. It was a scam, and the money was never traced or recovered.

Such deepfake audio fraud is a favoured new form of cyberattack. But while the dangers AI poses to jobs, national security and the information space have been much discussed, its potential for fraud has been largely overlooked. Yet the danger of misused investor funds and manipulated markets is huge. “A few lines of code can act like Miracle-Gro on crime and the global cost of fraud is already estimated to be in the trillions,” said the security minister Tom Tugendhat. As AI grows more sophisticated, so does the threat of fraud for all of us – realistic fake documents, fraudulent financial statements, cloned voice recordings and synthetic identity theft (combining AI-generated false data with stolen genuine personal information).

Cybersecurity experts say AI tools such as ChatGPT are helping criminals create more convincing and sophisticated scams. The most disturbing is payment diversion, or invoice fraud, where criminals use fake documents to deceive recipients into making payments. The same tools are deployed to impersonate banks and government agencies. AI now lets fraudsters produce credible voices on phone calls, so victims believe they are talking to a familiar, friendly accountant or their local bank manager – even to colleagues and family members. Legal experts warn that accountants who fail to detect AI-generated forgeries may unwittingly become involved in tax fraud or money laundering.

Your bank may have elaborate security, but what if you receive a call from a familiar voice? Surely you can trust that? (PA)

For investors tempted by the promise of high returns, the risks are great. Later this year, the trial in the US of Michael Brackett, accused of falsifying documents to attract investors to his AI start-up, will highlight this growing threat. Brackett raised $2.5m for his company Centricity, which promised to forecast consumer demand in real time. But Centricity collapsed, and its founder allegedly defrauded one investor of $500,000 by providing false data about the company’s revenue and client base. Brackett resigned after failing to attract further investors, but the fraud then unravelled, according to prosecutors. “Although the industry is cutting edge, Brackett fabricated documents and revenue numbers to persuade victims to invest in his start-up company,” said Damian Williams, US attorney for the Southern District of New York.

AI was at the heart of Brackett’s business model and of the alleged fraud. Centricity’s focus was leveraging AI to predict consumer demand and market trends, but prosecutors claim Brackett manipulated bank statements and lied about his client numbers and revenue to create a false impression of financial viability.

“Brackett misled investors about Centricity’s financial condition by sending an investor a falsified customer list that included multiple companies who were not paying customers and that included grossly inflated revenue numbers,” said the indictment. “Brackett sent emails to short-term lenders seeking funding for Centricity. In some emails, Brackett attached a purported bank statement for Centricity … Although the [actual] balance in Centricity’s account was $94,420.06, Brackett altered the bank document so that Centricity’s ending balance falsely appeared to be $594,420.06. At least one prospective lender noted the bank statements Brackett sent appeared to have been manipulated.”

In 2021, Brackett was charged with securities fraud, a charge he strongly denies. Last month the AI entrepreneur, a US citizen resident in Switzerland, sought permission to return to Switzerland, citing work obligations. His request was denied, and the trial will proceed later this year. If found guilty, he faces up to 20 years in prison.

Today the global generative AI market is worth $40bn and is expected to reach $1.3 trillion by 2032, according to Bloomberg Intelligence. Driven by the increasing availability of data, more powerful computing hardware and growing demand for AI solutions, AI has the potential to automate an estimated 65 per cent of tasks currently done by humans. But companies and individuals alike are at risk from AI fraudsters: last year, 32 per cent of UK businesses experienced cyberattacks, mainly through phishing operations.

Governments are not immune from the AI fraud threat either. During the pandemic, organised criminals across Europe used AI techniques to fraudulently claim funds from a number of national governments. Yet, cybersecurity experts say, the UK government is complacent about the need for strong regulation and for slowing the pace of deployment. Those experts argue that AI is inevitable, but that its risks need to be assessed and monitored. “It’s very difficult to stop the development of AI because we have this arms race mentality,” says the historian and author Yuval Noah Harari. “People are aware – some of them – of the dangers, but they don’t want to be left behind.”

The ‘arms race mentality’ means countries’ eagerness to get ahead in AI might make them ignore the dangers (AP)

Rishi Sunak acknowledges the danger. “Criminals could exploit AI for cyberattacks, fraud or even child sexual abuse,” he said recently. “There is even the risk humanity could lose control of AI completely through the kind of AI referred to as super-intelligence.” But there was no AI bill in last month’s King’s Speech and the prime minister told the recent AI safety summit that the UK “will not rush to regulate the sector”.

The UK recently launched an online fraud charter with tech companies to combat theft, fake advertisements and other scams. But criminals are using AI to make financial fraud ever more convincing and intricate.

Disinformation is, of course, not new. It has been deployed for centuries, and doctored voice recordings were used during the Cold War. In 1983, the Soviet Union was desperate for Margaret Thatcher to lose the UK general election, so the KGB compiled a tape of a fake telephone conversation between Thatcher and US president Ronald Reagan and leaked it throughout Nato. It was a crudely edited recording in which Thatcher promised to punish Argentina for the loss of HMS Sheffield during the Falklands War while Reagan tried to calm her down. The KGB had spliced together real recordings of their voices from interviews to manufacture a conversation that never took place.

And with another general election approaching, Keir Starmer’s voice has been used to deliver a dramatic but fake message on Facebook promoting a new investment platform for UK residents. The video showed a montage of different clips while the audio played the Labour leader’s voice. Underneath, the caption read: “A special initiative by the UK government for all British citizens! Act fast! Earn up to £40,000 monthly! Start today with just £250 and make £1,000 on your very first day!”

The video was created by cloning Starmer’s voice from his real 2023 new year address, and was intended to portray the opposition leader as promoting financial gambling. A spokesperson for Starmer confirmed that the speech was a fabrication and that the “investment scheme” does not exist.

Today the threat from such techniques is far more insidious and operates on a far greater scale, as AI tools become more sophisticated and harder to detect. “While we are learning to use AI, it is learning to use us,” said Harari. “It is possible for the first time in history to create billions of fake people. If you can’t know who a real human is and who is a fake human, trust could collapse and so could a free society. If this is allowed to happen, it will do to society what fake money threatens to do to the financial system.”

Additional reporting by Effie Webb
