On privacy and contact tracing

Contact tracing, the process by which health authorities identify those at risk of infection by virtue of their proximity to a confirmed case, has long been a pillar of pandemic response. With radio-enabled mobile devices now globally ubiquitous, it comes as little surprise that they are playing a role in the evolution of contact tracing methods.

Contact tracing is only as effective as its scale. A system that can evaluate only a small proportion of an at-risk population will ultimately be undermined, whether manual or digital. As early attempts to develop contact tracing apps launched around the globe, technical limitations, privacy concerns and poor assumptions about adoption rates undermined almost all efforts. Into this mess stepped Apple and Google, together responsible for the software, and to a lesser extent the hardware, which power the majority of our mobile devices.

In April, Apple and Google announced a contact tracing framework built into the operating systems of their respective mobile ecosystems. This system is now live. The jointly developed protocol and associated APIs were intended to standardise the fragmented contact tracing app landscape and to help achieve widespread adoption. These are noble ambitions, but coming from companies with at least some vested interest in collecting personal data, the privacy implications of the technology cannot be overlooked.

How does it work?

The system is decentralised. It utilises random keys, generated and regularly regenerated on-device, which are exchanged anonymously with other nearby devices over Bluetooth. Users who test positive can alert the framework and, with their consent, up to 14 days' worth of the keys their own device has generated and broadcast are uploaded to a server, where they are retained for 14 days. Other devices periodically download this list of diagnosis keys and check it, on-device, against the keys they have collected from their surroundings; if there is a match, the user is alerted to a possible exposure. At no point does the underlying framework link a user's identity to the randomly generated keys. However, the system does nothing on its own: it must be driven by an app leveraging the framework, developed by (and only by) an official public health authority. The app developer (the health authority) may of course ask individuals to identify themselves to aid in the appropriate provision of care, but this information is not shared with the operating system.

Source: Apple COVID Contact Tracing FAQ. Accessed 11/Aug/2020
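
To make the flow concrete, here is a minimal sketch of the decentralised exchange-and-match idea in Python. It is illustrative only: the published specification derives rotating identifiers from daily keys using HKDF and AES, broadcasts them over Bluetooth LE, and weighs contact duration and signal strength when assessing exposure. The HMAC-based derivation, class names and interval lengths below are my own simplifications, not the real implementation.

```python
import hashlib
import hmac
import os

KEY_LEN = 16            # 128-bit keys, as in the published specification
ROTATION_SECONDS = 900  # broadcast identifiers rotate every ~15 minutes
SECONDS_PER_DAY = 86400
RETENTION_DAYS = 14     # keys older than this are never shared

def rolling_identifier(daily_key: bytes, interval: int) -> bytes:
    """Derive the anonymous identifier broadcast during one interval.

    Simplified stand-in for the spec's HKDF/AES derivation: anyone who
    holds the daily key can re-derive every identifier it produced, but
    the identifiers themselves cannot be linked to one another.
    """
    msg = interval.to_bytes(8, "big")
    return hmac.new(daily_key, msg, hashlib.sha256).digest()[:KEY_LEN]

class Device:
    def __init__(self) -> None:
        self.daily_keys = {}   # day number -> our own random daily key
        self.observed = set()  # identifiers heard from nearby devices

    def current_identifier(self, now: float) -> bytes:
        day = int(now // SECONDS_PER_DAY)
        # A fresh random key is generated once per day, on-device.
        key = self.daily_keys.setdefault(day, os.urandom(KEY_LEN))
        return rolling_identifier(key, int(now // ROTATION_SECONDS))

    def hear(self, identifier: bytes) -> None:
        """Record an identifier received anonymously over Bluetooth."""
        self.observed.add(identifier)

    def diagnosis_keys(self, now: float) -> list:
        """On a positive test, and only with consent, the user's OWN
        daily keys from the last 14 days are shared -- never the
        identifiers collected from others."""
        today = int(now // SECONDS_PER_DAY)
        return [(day, key) for day, key in self.daily_keys.items()
                if today - day < RETENTION_DAYS]

    def check_exposure(self, published: list) -> bool:
        """On-device matching: re-derive every identifier each published
        key could have broadcast and compare against what we heard."""
        per_day = SECONDS_PER_DAY // ROTATION_SECONDS
        for day, key in published:
            start = day * per_day
            for interval in range(start, start + per_day):
                if rolling_identifier(key, interval) in self.observed:
                    return True
        return False

# Two devices pass in the street; Alice later tests positive.
alice, bob = Device(), Device()
bob.hear(alice.current_identifier(now=1_600_000_000))
server_list = alice.diagnosis_keys(now=1_600_000_000)  # uploaded with consent
assert bob.check_exposure(server_list)                 # Bob is alerted
```

Note the asymmetry at the heart of the design: what a positive user publishes is their own daily keys, from which anyone can re-derive the identifiers that user broadcast; the identifiers a device has heard never leave that device.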

Some of the design decisions made in developing this solution are interesting to observe. Centralised processing, whereby every contact between devices is uploaded continuously, has not been adopted. Centralised processing might have improved efficiency and allowed machine learning algorithms to reduce the frequency of false positives, but it would have led to a significant increase in privacy concerns. Centralised, or partially centralised, processing would also have given scientists access to the collected data for research purposes beyond contact tracing.
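
To see why this matters, consider what a server needs to hold under the decentralised model. The sketch below is hypothetical, describing neither Apple's nor Google's actual infrastructure, but it captures the essential property: the server stores only anonymous keys volunteered by users who tested positive.

```python
import time

RETENTION_SECONDS = 14 * 86400  # published keys expire after 14 days

class DiagnosisKeyServer:
    """A hypothetical server for the decentralised model.

    It stores no identities, no locations and no contact graph --
    only anonymous daily keys volunteered by users who tested
    positive, each discarded 14 days after upload. All matching
    happens on the devices themselves.
    """

    def __init__(self) -> None:
        self._keys = []  # list of (uploaded_at, day, key) tuples

    def upload(self, diagnosis_keys) -> None:
        now = time.time()
        self._keys.extend((now, day, key) for day, key in diagnosis_keys)

    def download(self):
        """Every client fetches the same undifferentiated list."""
        self._prune()
        return [(day, key) for _, day, key in self._keys]

    def _prune(self) -> None:
        cutoff = time.time() - RETENTION_SECONDS
        self._keys = [entry for entry in self._keys if entry[0] >= cutoff]
```

A centralised alternative would instead require every device to upload its full contact log, handing the operator a social graph of the population as a side effect; that is the trade-off being declined here.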

The ability to combine contact matches with geographic location data from device GPS would also have been very powerful. Every additional benefit, however, brings additional risk, and from a privacy perspective it is encouraging to see that Apple and Google's design decisions have focused on doing one thing well, sacrificing some potential benefits in the pursuit of privacy.

The engagement of device manufacturers has three key advantages:

  1. The manufacturers are best placed to overcome the technical limitations of the Bluetooth technology which underpins the solution.
  2. Manufacturers “baking in” the solution provides a clear path to ubiquity, or at least broad adoption, which is a prerequisite for effective digital contact tracing.
  3. The manufacturers are best placed to provide the foundational framework for standardisation of the contact tracing solution across health authorities, ensuring interoperability.

The counterpoint is to ask why another, third-party framework could not have been adopted, introducing a party external to the implicit conflict of interest as an “accountability anchor” for the big tech firms.

Can we trust big tech?

The involvement of organisations which, particularly in the case of Google, have a vested interest in gathering and exploiting “behavioural surplus” to target advertising ever more effectively is, of course, a cause for concern. There is little trust that these companies will act in the public interest without a clear accountability mechanism in place, and handing them the keys to public health outcomes therefore seems foolish.

In reality, the framework and APIs proposed and implemented by Apple and Google seem to strike a good balance between effective public health outcomes and privacy. The use of frequently rotating keys, 14-day retention policies on the central storage of keys, on-device key matching and anonymity-until-tested-positive are all strong measures against privacy violations.

The concern should perhaps be focused less on what has been proposed and implemented now, and more on what this first step might lead to once the technology has been normalised. This process of normalising micro-infringements of our privacy as a stepping stone to more invasive methods is a well-documented strategy, employed systematically at Google and Facebook (and others) to break down barriers to the growth of their businesses. For more on this, I recommend reading, with a critical eye, The Age of Surveillance Capitalism by Shoshana Zuboff. Those sceptical of broad conspiracy may struggle with it, but one does not need to buy into the overarching theory to understand that there are significant risks associated with the systemic effects of the activities of the big tech advertisers.

The technical nature of developments on this journey may, as before, forestall an appropriate regulatory response to the activities of big tech. However, knowing what they do about the nature of these companies' businesses, regulators simply must develop a rapid response to the conflict of interest which arises when companies dependent almost entirely on advertising for revenue venture into new markets. Alphabet (parent company of Google) and Facebook in particular have an extraordinary lack of diversity in their revenue models, in contrast to Apple, Microsoft and, to some extent, Amazon (see graphic below). These companies should be scrutinised for conflicts which go against the public interest, and be split up in order to protect society from the further normalisation and growth of the antidemocratic developments of recent years.

Big tech revenue streams. Source: Business Insider. Accessed: 11/Aug/2020.

Ultimately, governments, public health bodies and individuals must decide on the adoption of digital contact tracing based on a balance of risks. On the one hand, few institutions, public or private, have the platform, access, technical know-how or means to achieve the level of adoption and effectiveness in contact tracing that Apple and Google can. On the other hand, Google in particular has a huge conflict of interest in upholding the privacy rights of the individual, and a track record of abusing the trust of its customers. The risks of the current system being abused are limited by the technical measures in place to protect privacy, but a precautionary evaluation of future risks to privacy, and an appropriate, timely regulatory response, are necessary.

Would I personally use an app based on the Apple/Google framework? Yes, and I do. For me, the benefits outweigh the risks at this point in time. I will, however, remain vigilant in following the direction in which this technology moves, and in re-evaluating the balance of risks.


Lead photo by engin akyurt on Unsplash
