Apple has acknowledged that its announcement of tools to scan for illegal images on iPhones and iPads was “jumbled pretty badly”.
Following criticism from privacy campaigners, the company has now given more details on the system, saying device-level scanning would allow independent experts to verify how Apple was using the system and what was being scanned for.
On 5 August Apple announced it would scan images uploaded from iPhones and iPads to its iCloud storage, looking for matches against a database of known child sexual abuse material (CSAM) maintained by the US National Center for Missing and Exploited Children (NCMEC).
Companies that operate cloud-based services, including Facebook, Google and Microsoft, commonly scan for CSAM, but do so remotely.
Apple said it plans to add hashes of images from the CSAM database directly to iPhones and iPads in an operating system update later this year, so that devices scan images before they reach iCloud.
An image is scanned only when a user uploads it to iCloud, and the system detects only exact matches against the database.
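As described, the matching step amounts to hashing each outgoing photo on the device and checking the result against the list of known hashes shipped with the operating system. The sketch below illustrates that idea; the names are hypothetical, and the plain SHA-256 digest is a simplifying assumption, since Apple's actual system derives its hashes with the NeuralHash algorithm and wraps the comparison in cryptographic protections rather than a simple set lookup.

```python
import hashlib

# Hashes of known CSAM images, shipped with the OS update (hypothetical name).
KNOWN_CSAM_HASHES: set[str] = set()

def image_hash(image_bytes: bytes) -> str:
    """Hash a photo's bytes; SHA-256 is a stand-in for the real hashing step."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_database(image_bytes: bytes) -> bool:
    """Run only when the photo is about to be uploaded to iCloud:
    report whether it exactly matches an entry in the on-device list."""
    return image_hash(image_bytes) in KNOWN_CSAM_HASHES
```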
The system would not flag images of a person’s children in the bath, or search for pornography, Apple’s head of software, Craig Federighi, told The Wall Street Journal.
He said the announcement was “misunderstood” and that people had become concerned that Apple was scanning iPhones for images.
“That is not what is happening,” Federighi said.
“We feel very positively and strongly about what we’re doing and we can see that it’s been widely misunderstood.”
If a user tries to upload multiple matching images, the account will be flagged for review by Apple staff.
Federighi said this would happen only if the user tried to upload in the region of 30 matching images.
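Counting matches per account before anything is escalated might look like the minimal sketch below. The names and the server-side tally are illustrative assumptions; in Apple's actual design the threshold is enforced cryptographically, so the company cannot read individual matches until roughly 30 have accumulated.

```python
from collections import defaultdict

MATCH_THRESHOLD = 30  # "in the region of 30", per Federighi

# Hypothetical tally of matching uploads per account.
match_counts: defaultdict[str, int] = defaultdict(int)

def record_match(account_id: str) -> bool:
    """Record one matching upload and report whether the account
    has now crossed the threshold for human review."""
    match_counts[account_id] += 1
    return match_counts[account_id] >= MATCH_THRESHOLD
```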
Apple said it plans to add the same database to all versions of iOS and iPadOS, but that it would only be used for scanning in the US initially, with rollouts in other countries to be considered on a case-by-case basis.
Apple said putting the database on the device would add accountability and that an independent auditor would be able to verify what was being scanned for.
The company is also rolling out a separate parental-control feature that involves image scanning, and Federighi said there had been “confusion” between the two.
If activated by a parent, the second feature scans messages sent or received by a child using the iMessage app. If nudity is detected, the tool obscures the photo and warns the child.
Parents can also choose to receive an alert if the child chooses to view the photo.
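The flow of that parental-control feature, as described, reduces to a simple conditional chain. The sketch below is purely illustrative: every helper name is hypothetical, and the nudity check stands in for Apple's on-device classifier.

```python
from dataclasses import dataclass

@dataclass
class ChildAccount:
    name: str
    parent_alerts_enabled: bool  # opt-in setting chosen by the parent

def detect_nudity(photo: bytes) -> bool:
    """Placeholder for Apple's on-device image classifier."""
    return False

def obscure_and_warn(account: ChildAccount) -> None:
    print(f"photo blurred; warning shown to {account.name}")

def child_chooses_to_view() -> bool:
    """Whether the child taps through the warning to see the photo."""
    return False

def alert_parent(account: ChildAccount) -> None:
    print(f"alert sent to the parent of {account.name}")

def handle_photo(photo: bytes, account: ChildAccount) -> None:
    """The described flow: detect, obscure and warn, optionally alert."""
    if detect_nudity(photo):
        obscure_and_warn(account)
        if account.parent_alerts_enabled and child_chooses_to_view():
            alert_parent(account)
```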
Privacy groups said the tool could be expanded and used by authoritarian governments to spy on citizens.
Will Cathcart, head of WhatsApp, said Apple’s tools were “very concerning”, and whistleblower Edward Snowden called the iPhone a “spyPhone”.