
Apple employees are reportedly raising concerns about the company’s on-device image scanning to curb child abuse

Employees express worries of repressive governments possibly exploiting the feature

Vishwam Sankaran
Friday 13 August 2021 11:19 BST
File: Apple says the feature will use the phone’s on-device machine learning to assess the content of children’s messages for photos that may be sexually explicit (AFP via Getty Images)


Apple employees are reportedly raising concerns internally about the tech giant’s plans to roll out a feature that would allow its devices to scan through people’s photos and messages to check for signs of child abuse.

Employees at the company have flooded an internal Slack channel with more than 800 messages about the plan, which was announced a week ago, news agency Reuters reported.

Many Apple workers reportedly expressed worries in a thread of messages on Slack that repressive governments could exploit the feature to find materials for censorship or arrests.

Apple announced a week ago that new features under the plan to be rolled out “later this year” would use the phone’s on-device machine learning to assess the content of children’s messages for photos that may be sexually explicit.

“When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources and reassured it is okay if they do not want to view this photo,” Apple noted in a blog post.

It said the features would come as updates across its platforms, in iOS 15, iPadOS 15, watchOS 8 and macOS Monterey.

“We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM),” Apple noted.

The tech company said children would be warned before they send sexually explicit photos and parents could set up notifications when their child sends a photo which triggers the new system.

In one of the features, Apple said it would use a database of known CSAM images provided by child safety organisations and apply on-device machine learning to look for matches in the photos stored on the device.

“Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices,” the company noted.
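The matching step Apple describes can be illustrated with a simplified sketch. Note that this is an illustration only: Apple’s actual system uses a perceptual hash (NeuralHash) that matches visually similar images and cryptographic protections around the database, whereas this toy version uses a plain cryptographic hash and byte-identical matching. All names and data below are hypothetical.

```python
import hashlib

def hash_image(image_bytes: bytes) -> str:
    # Simplified stand-in: a cryptographic hash of the raw bytes.
    # Apple's real system uses a perceptual hash (NeuralHash), which
    # also matches visually similar images, not just identical files.
    return hashlib.sha256(image_bytes).hexdigest()

def find_matches(device_photos: dict, known_hashes: set) -> list:
    """Return names of on-device photos whose hash appears in the known set."""
    return [name for name, data in device_photos.items()
            if hash_image(data) in known_hashes]

# Hypothetical database of known hashes and on-device photo library
known = {hash_image(b"flagged-image-bytes")}
photos = {
    "IMG_001": b"holiday-photo-bytes",
    "IMG_002": b"flagged-image-bytes",
}
print(find_matches(photos, known))  # ['IMG_002']
```

The key property reported in the article is captured here: the comparison happens entirely against a local set of opaque hashes, so no photo content leaves the device during matching.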

While Apple said the feature is designed so the company does not get access to the messages, it has drawn concern from privacy advocates, given the tech giant’s long-standing public commitment to securing the privacy of its users.

Employees in core security roles were reportedly not among the complainants, and some reportedly said they thought the company’s solution was a reasonable response for cracking down on illegal content.
