Security experts alarmed at ‘incredibly dangerous’ new Google feature

Tool aims to keep people safe by listening for scams – but could actually put them at risk, some warn

Andrew Griffin
Friday 17 May 2024 17:01 BST


A new Google feature aimed at alerting people to scams has prompted fears among privacy campaigners.

The tool uses artificial intelligence to listen in on people’s phone calls and tries to spot whether they sound like a scam. If a call does, a pop-up will appear alerting the user to a “likely scam”.

The feature was announced at Google’s I/O event this week, during which it announced a host of new AI tools. Like many of those features, Google did not say when it would actually arrive.

It also gave little information on how the feature would actually work, such as what kind of conversation would prompt the AI to suggest that a call could be a scam. But it said the feature relied on Gemini Nano, a recently released, much smaller version of its AI that is built to run on phones.

Google stressed that all the listening and analysis of phone calls would happen on the phone itself, so that private conversations would not be sent to its servers. “This protection all happens on-device so your conversation stays private to you,” it said in its announcement.

Nonetheless, security experts suggested that listening to phone calls in this way at all was “incredibly dangerous” and “terrifying”. They noted that even if the calls stay on the device, allowing AI to listen in on them could lead to other problems.

“The phone calls we make on our devices can be one of the most private things we do,” Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, told NBC News. “It’s very easy for advertisers to scrape every search we make, every URL we click, but what we actually say on our devices, into the microphone, historically hasn’t been monitored.”

“This is incredibly dangerous,” said Meredith Whittaker, president of messaging app Signal. “It lays the path for centralized, device-level client-side scanning.”

Ms Whittaker, who worked at Google for 13 years and helped organise internal protests against its policies, said that the use of the technology could quickly expand.

“From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w[ith] seeking reproductive care’ or ‘commonly associated w[ith] providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing’,” she said.
