Apple Torches Its Pro-Privacy Reputation to Build a Massive iPhone Surveillance Infrastructure

Posted on August 10, 2021

According to the EFF, Apple has announced that it will build a content-monitoring system into every iPhone, obliterating its reputation as a privacy champion practically overnight. The smartphone giant explained that every iPhone will soon ship with a background operating-system process that scans every image before it is uploaded to the company’s iCloud servers for backup. Parents will also have the option to enable a second background process that scans their children’s iMessage messages.

Both of these monitoring processes were designed with the aim of catching child sexual abuse material (CSAM). Noble as Apple’s motivations may be, they have nonetheless resulted in the company constructing a surveillance apparatus which subverts the privacy protections it has long boasted of. What’s worse, this apparatus sets a precedent which governments will expect other tech companies to follow, and can easily be modified to flag content that falls outside of its original targets.

To understand why Apple’s planned monitoring software poses such a grave threat to privacy, it’s important to grasp the basics of how it operates. A crucial element underpinning Apple’s scanning system is that, although iPhones (and any other device featuring full-disk encryption) are encrypted while they are off or locked, their contents are accessible in decrypted form while the device is on. So any software process able to run on the device with the right level of access can review all of an iPhone’s data. Apple’s monitoring works because Apple controls what gets bundled into its operating systems (in this case iOS).

The other key piece of the puzzle is cryptographic hashing. A hash is like encryption in that it transforms readable data (plaintext) into unreadable gibberish. Unlike encryption, though, hashing uses no key, and any given piece of plaintext will always be transformed into the same output (its hash, or digest). Hashing is also strictly one-way: you can’t take a hash and reverse it to recover the plaintext that produced it.
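To make that distinction concrete, here is a minimal sketch using Python’s standard hashlib. SHA-256 is only a stand-in for illustration; Apple’s announced system relies on its own perceptual hash rather than a general-purpose algorithm like this one.

```python
# A minimal illustration of cryptographic hashing, using Python's standard
# hashlib. SHA-256 stands in for whatever function Apple actually uses.
import hashlib

def digest(data: bytes) -> str:
    """Return the hex digest of data; no key is involved."""
    return hashlib.sha256(data).hexdigest()

photo = b"raw bytes of some image"

# The same input always produces the same digest...
assert digest(photo) == digest(photo)

# ...while even a one-byte change produces a completely different digest.
assert digest(photo) != digest(photo + b"\x00")

# And the transformation is one-way: nothing about the digest lets you
# reconstruct the bytes that produced it.
print(digest(photo))
```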

Here’s how these two pieces come together: under its scanning system, Apple would store hashed versions of the target images it wants to flag on a user’s device. This way, even if the user could access that data, it would be impossible to determine which images are being targeted, since only the irreversible hashes are present. Then, whenever the user initiates an image backup to iCloud, the scanner hashes the image and compares it against the pre-hashed target data. Put more simply, if the image is the same as a target image, then hashing each one with the same algorithm will produce identical output. On such matches or near-matches, the software notifies Apple, and a human reviewer then performs a final manual verification before alerting the authorities.
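In rough terms, the on-device check might look something like the sketch below. Everything in it is hypothetical: the hash function, the digest database, and the reporting hook are stand-ins invented for illustration, not Apple’s actual implementation, which as announced uses a perceptual hash and a more elaborate cryptographic matching protocol than a plain set lookup.

```python
# Hypothetical sketch of the on-device matching step described above.
import hashlib

# Digests of known target images, shipped to the device in advance.
# The device holds only these digests, never the target images themselves.
TARGET_DIGESTS = {
    hashlib.sha256(b"known target image #1").hexdigest(),
    hashlib.sha256(b"known target image #2").hexdigest(),
}

def flag_for_human_review(digest: str) -> None:
    # Stand-in for notifying Apple so a human reviewer can verify the match.
    print(f"match on {digest[:12]}..., queued for human review")

def scan_before_upload(image_bytes: bytes) -> None:
    """Runs whenever the user initiates an iCloud backup of an image."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in TARGET_DIGESTS:
        flag_for_human_review(digest)
    # The backup itself then proceeds as usual, encrypted in transit to iCloud.

scan_before_upload(b"an ordinary holiday photo")   # no match, nothing flagged
scan_before_upload(b"known target image #1")       # match, flagged for review
```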

A similar process would govern iMessage scanning. Parents with a family account would opt in, and any image sent or received by their child’s iPhone would then be scanned via the same hash-comparison method. On a match, both the underage user and the parent would be alerted.

Notably, at no point did Apple degrade the iPhone’s encryption itself to facilitate monitoring. Images approved to proceed to iCloud are still encrypted in transit to, and upon reaching, the iCloud server. This lets Apple truthfully state that it hasn’t compromised its encryption while scanning every iPhone just the same.

There are a number of ways in which this surveillance implementation would harm iPhone users. First, this is yet another case in which misidentification by an AI can lead to needless harassment of innocent users. Apple’s image-monitoring software leverages machine learning to catch attempts to obfuscate illicit material and thereby evade detection. In the case of images, that could mean cropping, resizing, rotating, or compressing an image so that it strays further and further from the target profile. But in trying to catch manipulations of an image, the AI risks making the wrong call and flagging content that is perfectly innocuous. Apple has assured its users that human reviewers will have the final say in whether to forward matches to law enforcement. But how long until Apple’s staff is overwhelmed by the sheer volume of cases to adjudicate? There’s a reason why every major social media platform uses AI to moderate content, with humans reviewing content only on appeal. How many people will Apple’s spy system land in legal hot water because its AI got tripped by accident?
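To see why tolerance for manipulation and false positives go hand in hand, consider a toy sketch of perceptual-style matching. This is not Apple’s actual algorithm, whose details are proprietary, but a crude average-brightness hash invented for illustration: a uniformly brightened copy of a target image still matches, and the distance threshold becomes the dial that trades missed manipulations against false alarms.

```python
# A toy "perceptual" hash illustrating why near-match detection is needed and
# why it can misfire. Real systems use far more sophisticated, ML-derived
# features; this average-brightness sketch only shows the underlying trade-off.
from statistics import mean

def average_hash(pixels: list[int]) -> list[int]:
    """One bit per pixel: is the pixel brighter than the image's average?"""
    avg = mean(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming(a: list[int], b: list[int]) -> int:
    """Number of positions where two bit strings disagree."""
    return sum(x != y for x, y in zip(a, b))

# Tiny 4x4 grayscale "images", flattened to 16 brightness values each.
target       = [200, 190, 180, 170, 40, 35, 30, 25, 210, 205, 60, 55, 50, 45, 220, 215]
recompressed = [p + 5 for p in target]   # a uniformly brightened copy of the target
unrelated    = [130, 20, 140, 25, 150, 30, 160, 35, 170, 40, 180, 45, 190, 50, 200, 55]

THRESHOLD = 3  # maximum Hamming distance still treated as a "match"

for name, img in [("brightened copy", recompressed), ("unrelated image", unrelated)]:
    distance = hamming(average_hash(target), average_hash(img))
    verdict = "flagged" if distance <= THRESHOLD else "ignored"
    print(f"{name}: distance {distance} -> {verdict}")

# Tighten the threshold and manipulated copies slip through; loosen it and
# innocuous images start getting flagged. No setting eliminates both errors.
```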

Second, even if Apple’s human reviewers forestall false positives without a single error, the system still undermines the encryption of iCloud backups, subjecting users to the risk of breached data. Previously, iPhone users could rest assured that once an image was sent to an iCloud server, it was encrypted all the way. Now, any image that is scanned and triggers human review is sent to Apple, where it resides unencrypted while a human analyzes it. Not only could a malicious or inept Apple employee mishandle the image, but should Apple suffer a data breach, all the images queued for analysis could be stolen.

Of course, Apple can easily counter that users are free to disable iCloud backup if they don’t want their images to be scanned. But what iPhone user is seriously going to decline backing up their photos just to evade scanning? If they irreparably damage their device and the photos aren’t backed up, they’re lost forever. A small subset of users might back up their images to an alternative service (such as Google Photos) instead, but not every user will be aware of their options. And, of course, there’s the distinct possibility that even privacy-minded users will not know how, or go to the lengths required, to disable iCloud backups. As privacy advocates have rightly insisted for years, defaults matter. Full-disk encryption and encrypted messaging that just work without manual intervention confer protection on all users, regardless of technical proficiency.

What’s more alarming is that Apple’s software sets the stage for potentially staggering abuses in the future. Although Apple’s monitoring software may only hunt for CSAM for now, the infrastructure required to track down attorney-client correspondence, journalist-source exchanges, and political dissident content is exactly the same. Repurposing it would be as simple as pushing new hashes to iPhones in a firmware update. And while Apple may have no intention of expanding the scope of its surveillance, the US government may push it to do so. Just as it did when badgering Apple to unlock the San Bernardino shooter’s iPhone, the FBI could invoke the All Writs Act to compel Apple to assist it in monitoring unlawful content. And we know, of course, that the FBI has snooped on all kinds of content, including content that is unambiguously protected under the First Amendment.

But why would the government stop there? With Apple having voluntarily blazed the trail for surveillance of smartphone users, the government may well pressure Google and Microsoft to follow suit for all of their devices. Foreign governments, especially those with a far looser commitment to human rights than the US, will line up with their own requests for content to flag soon after. Would Apple really stand firm in the face of Chinese government pressure to help it suss out every dissident in the country, under threat of expulsion from the Chinese market?

Civil liberties defenders are understandably livid that Apple would betray them after so vocally touting its respect for privacy. They are not alone: a number of influential tech industry veterans, among them information security professionals and social media executives, have added their voices to the condemnation of this pernicious development. It’s hard to say whether this emphatic censure of Apple’s forthcoming surveillance software will induce the company to reverse course. Nonetheless, negative press, if sustained long enough, may at least dissuade Apple’s competitors from following in its footsteps.

You can read the full piece from the EFF here. CCDBR is a proud ally of the EFF as a member of the Electronic Frontier Alliance.
