Leading Contact Tracing Initiative Offers Insufficient Assurance Against Abuse

Posted on June 1, 2020

Not long after the first wave of states imposed shelter-in-place orders to combat the COVID-19 pandemic, public discourse coalesced around initiatives which proposed harnessing technology to stem the spread of the virus. 

In particular, public health experts in the United States began entertaining the implementation of contact tracing using mobile applications. The idea behind such applications is that, because mobile devices are equipped with various wireless communication interfaces, and are ubiquitous among the populace, these devices could be used to warn individuals who passed within close proximity of other individuals who tested positive for COVID-19. The ability to quickly issue warnings in this way, the thinking goes, would allow the potentially exposed to immediately self-quarantine and preclude further transmission via continued interpersonal contact.

There are a number of proposals for how to facilitate this technologically assisted epidemiological control in practice, with a range of software architectures and, by extension, privacy implications. The most mature, broadly supported, and logistically viable among these is the joint project by Apple and Google. Therefore, it is critical to consider the civil liberties ramifications of this approach. 

First, a brief overview of the structure of the Apple-Google contact tracing app. The primary component of the “app” would be installed invisibly, deep inside users’ mobile operating system (OS), whether iOS or Android. The rationale behind the joint undertaking is to ensure interoperability between devices of these otherwise completely distinct mobile ecosystems. Installing components of the app at such a low level allows devices to constantly transmit and receive via Bluetooth in order to determine when the device’s bearer has come close enough to another device (and, by assumption, its bearer) to risk transmission of COVID-19 between the two.

The secondary component involves voluntary installation of the app proper. When installed and initialized, the app instructs the device to constantly log every Bluetooth-equipped device that passes close enough to it to risk COVID-19 transmission; it infers distance by measuring the signal strength of nearby Bluetooth devices.
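To make that mechanism concrete, here is a minimal sketch, in Python, of how an app might infer “close contact” from Bluetooth signal strength. The threshold, duration, and field names here are our own illustrative assumptions, not the actual parameters of the Apple-Google project.

    from dataclasses import dataclass
    from typing import Dict, List, Set

    # Assumed, illustrative values -- not the project's real parameters.
    RSSI_CLOSE_CONTACT_DBM = -65   # a signal at least this strong suggests close range
    MIN_CONTACT_SECONDS = 10 * 60  # sustained proximity for roughly ten minutes

    @dataclass
    class Sighting:
        token: str        # identifier broadcast by the other device
        rssi_dbm: int     # measured signal strength of that broadcast
        timestamp: float  # when the broadcast was heard, in seconds

    def close_contacts(sightings: List[Sighting]) -> Set[str]:
        """Flag tokens that were heard at close range for a sustained period."""
        by_token: Dict[str, List[float]] = {}
        for s in sightings:
            if s.rssi_dbm >= RSSI_CLOSE_CONTACT_DBM:
                by_token.setdefault(s.token, []).append(s.timestamp)
        return {
            token for token, times in by_token.items()
            if max(times) - min(times) >= MIN_CONTACT_SECONDS
        }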

If the app’s user later tests positive for COVID-19, the user is directed to send an alert through the app, which then warns all the devices that passed within transmissible range that their users may now be infected.
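As a rough illustration of that alert step (the function and variable names below are hypothetical, not the real API), the matching could work like this: the diagnosed user’s recently broadcast tokens are published, and each device compares them against the tokens it has logged locally.

    from typing import Iterable, Set

    def check_exposure(published_tokens: Iterable[str],
                       locally_heard_tokens: Set[str]) -> bool:
        """Return True if any token published by a diagnosed user matches one
        this device heard nearby, i.e. this device's user may have been exposed."""
        return any(token in locally_heard_tokens for token in published_tokens)

In a privacy-preserving design, this comparison would happen on the device itself, so the log of everyone a user has passed would never need to leave the phone; whether the Apple-Google implementation actually works this way is, again, something only the source code could confirm.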

Deploying this technical public health measure raises a number of concerns about its potential harm to Americans’ privacy. For one thing, it is hard to say for certain whether participation in this program would be worth the potential privacy tradeoff, as there is little evidence to suggest that contact tracing of any sort would produce tangible public health benefits in the US.

There are also inherent issues with the way Apple and Google have elected to go forward with their contact tracing app. Chief among them, there does not appear to be any way to fully opt out, since the base software mechanism operates at such a fundamental level of the device. If the OS portion of the app is released as an update to the device OS (as writeups of the project suggest would be the case), the software would be practically impossible to refuse. The developers have insisted that the program is entirely opt-in, but if a portion of it is included in an OS update, and OS updates are functionally mandatory, then this is not entirely true. It is unlikely that Apple and Google will each maintain two separate tracks of iOS and Android, one for those who consent to the contact tracing software and another for those who don’t; maintaining parallel versions is so labor-intensive (and, thus, costly) that it is seldom done at the scale these companies’ install bases would require.

Those with an iOS or Android device who withhold consent from this software face two possibilities, both of which are so onerous as to be infeasible. Such a person could keep their device perpetually in airplane mode, disconnecting it from any and all networks, but this defeats the purpose of owning a mobile device in the first place. Alternatively, they could refrain from installing the update, but regular security patches are so essential that forgoing them would leave the device dangerously exposed to attack. In effect, bundling part of the contact tracing software inextricably with OS updates holds the device’s security hostage. And because there are practically no commercially available mobile OSes not made by Apple or Google, there is nowhere to turn if you don’t accept these terms.

Additionally, the assurances the two tech giants have given to date are not substantive enough to make a convincing case that user data won’t or can’t be abused. The main reason their assurances fall short is that they have declined to make their work open source. Like our partner organization, the Electronic Frontier Foundation (EFF), CCDBR urges this crucial step because it is the only way to see what the code truly does, and thereby to verify the privacy and security protections the developers claim. Without the open availability of the software’s source code, we can only take Apple and Google at their word.

Without seeing the software’s code, there is also no way for independent auditors to thoroughly review it. Third-party security testing is a mainstay of information security best practices, and experts in the field widely agree that providing source code access would add another layer of assurance to the project. Publishing the source code for this kind of review would enable critical oversight of code that, ostensibly, would run on an unprecedented number of devices, making it one of the most widely used consumer software programs of all time.

In short, as these private companies are extending this software as a public service, they owe it to the public they claim to serve to gain our trust by releasing their source code. 

Finally, while it is possible to engineer the software’s use of Bluetooth in a way that preserves privacy, this must be done with extreme care. The catch with wireless communication is that each transmission carries a unique identifier tied to the sending device’s hardware; with Bluetooth, this is the device’s Bluetooth ID. Any software system that relays this Bluetooth ID, or any value partly or wholly derived from it, back to a central repository would therefore open the door to uniquely identifying devices and correlating them with real-world identities.
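To see why even a value merely derived from the hardware ID is dangerous, consider the following sketch (the hashing scheme here is our own hypothetical, used only for illustration): hashing a fixed Bluetooth ID yields a fixed pseudonym, so a central log of such values can still stitch together every place and time a given device was observed.

    import hashlib

    def derived_id(bluetooth_hw_id: str) -> str:
        """A pseudonym derived from a fixed hardware identifier."""
        return hashlib.sha256(bluetooth_hw_id.encode()).hexdigest()[:16]

    # The same device always maps to the same pseudonym, day after day,
    # so anyone collecting these values centrally can track that device.
    assert derived_id("AA:BB:CC:DD:EE:FF") == derived_id("AA:BB:CC:DD:EE:FF")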

It is possible instead to use completely random one-time numbers called “tokens” to represent a user’s device. The idea behind these tokens is that every participating device issues constantly rotating tokens and never issues the same token twice (Mozilla, the developer of the Firefox browser and a vociferous advocate of privacy on the web, succinctly outlines on its blog how tokens could be applied to contact tracing). The integration of tokens into any contact tracing software is another of the EFF’s criteria for a privacy-respecting public health software tool, and one CCDBR would also endorse, as this architecture seems to be the most promising avenue for obscuring user identity.
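A minimal sketch of such a token scheme, with a rotation interval we have assumed purely for illustration, might look like this: each token is fresh randomness, with no mathematical link either to the hardware ID or to any previous token, so an observer who records tokens cannot stitch sightings of the same device together.

    import secrets

    ROTATION_SECONDS = 15 * 60  # assumed rotation interval, for illustration only

    def new_token() -> str:
        """Generate a fresh 128-bit random token, unlinkable to the device's
        hardware ID and to every token that came before it."""
        return secrets.token_hex(16)

    # A participating device would broadcast new_token() and replace it every
    # ROTATION_SECONDS, keeping a local record of the tokens it has used in
    # case it later needs to report a positive test.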

It appears that the Apple-Google development team is implementing some kind of one-time token system, but few media outlets go into enough detail to confirm whether this is the case. And, again, without the ability to review the source code, there is no way to confirm whether, or how effectively, a token system is implemented.

In the final accounting, civil liberties defenders have no guarantee that contact tracing, as Apple and Google appear to be implementing it, would not pave the way for large-scale tracking of Americans. Even if we assume these companies have the best intentions and sincerely have no desire to surveil their users, users cannot be sure the software is engineered soundly enough that it (A) will not inadvertently retain data that a government could request via a legal order or (B) will not handle data in a way that allows malicious attackers to exploit the system and extract information from which the locations of individual persons can be inferred.

The EFF has praised Illinois’s leadership on biometric privacy, citing such statutes as potential bulwarks against abuse. But the chance that such regulatory requirements will bind how Apple and Google develop this software, which they are offering globally, seems remote, especially since the appetite for pursuing legal action against a program meant to promote public health would likely be small.

Regardless of the degree to which this particular initiative invades user privacy, it is by no means the only monitoring program being weighed as a tool to address the pandemic: there is also talk of deploying thermal imaging cameras for fever detection, to cite just one example. Even in the best case, with all of the necessary technical controls for ensuring privacy properly engineered, a contact tracing application of this sort could still condition Americans to view surveillance as innocuous, making them more docile when confronted with subsequent, more egregious abuses. Thus, it is paramount that we structure not only the software but also our discourse around it with great care, lest we normalize an abnormally intrusive approach to an extraordinary challenge facing our society.
