Mobile data sharing in emergencies – consent, care and control

The Centre for Internet and Society recently released a groundbreaking paper on the practical, legal and ethical implications of using mobile phone data (CDRs, or Call Detail Records) in emergencies, with Liberia’s experience of the recent Ebola epidemic as the case study. Written by Sean Martin McDonald, the paper is brilliant, insightful and well researched, and is creating a much-needed debate in the humanitarian and responsible data communities. This post is a contribution to that debate. It reflects on just one of the ethical issues surfaced by the analysis, that of consent to data-sharing in the context of humanitarian emergency.

CDRs, as McDonald writes, are among the most sensitive kinds of digital data available. They are intensely revealing of personal characteristics, containing information that not only makes people identifiable but also makes it possible to track, locate, monitor and influence their behaviour. Consent is key to data protection law, as is understanding the purpose for which data will be shared. The logic of the paper is that for mobile operators to pass such data on to humanitarian organisations (most likely with the national government as intermediary compelling the sharing of the data), individual-level consent would be the most important factor in making sharing possible.
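To make the sensitivity concrete, here is a minimal sketch of what a CDR can look like and why even a "pseudonymised" record is identifying. The field names and sample data are illustrative assumptions, not any real operator's schema: a subscriber's two most frequent cell towers typically correspond to home and workplace, which research has shown is often enough to re-identify an individual.

```python
from collections import Counter
from dataclasses import dataclass


# Hypothetical, simplified CDR schema. Real operator schemas vary,
# but typically include at least fields like these.
@dataclass
class CallDetailRecord:
    caller_id: str   # subscriber identifier (e.g. a hashed IMSI)
    callee_id: str   # who was called
    timestamp: str   # when the call was made (ISO 8601)
    cell_tower: str  # tower serving the call, i.e. coarse location


def top_locations(records, subscriber, n=2):
    """Return the subscriber's n most frequent towers.

    The top two are usually home and workplace: a location pair
    that on its own can single out most individuals.
    """
    towers = [r.cell_tower for r in records if r.caller_id == subscriber]
    return [tower for tower, _ in Counter(towers).most_common(n)]


# Toy data: even a "hashed" caller ID leaks a recognisable routine.
records = [
    CallDetailRecord("subscriber_A", "B", "2014-09-01T08:00", "tower_home"),
    CallDetailRecord("subscriber_A", "C", "2014-09-01T12:30", "tower_work"),
    CallDetailRecord("subscriber_A", "B", "2014-09-01T19:00", "tower_home"),
]
print(top_locations(records, "subscriber_A"))  # ['tower_home', 'tower_work']
```

The point of the sketch is that removing names changes little: the location pattern itself is the identifier.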

McDonald argues thus:

‘Obtaining consent at point of collection is both a legal requirement in the Liberian context and a commercial practice that has significant precedent for less altruistic means. There is no question that building emergency data use clauses into commercial and public service contracts is both the most straightforward and the most legal way to facilitate the sharing of CDRs, and minimizes virtually every other question that the law compels.’

This argument, although in line with all existing data protection rules and norms, is problematic in a practical context. Consent without purpose limitation – knowing what one is consenting to – is widely judged to be legally (and practically) meaningless. Furthermore, the kinds of context in which mobile data may be shared under ‘emergency data use clauses’ are exactly those where purpose limitation is unlikely and a chain of sharing and reuse may be established under the same premises of urgency that made data shareable in the first place.

Consider, for example, a hypothetical case in which a large-scale attack, such as an instance of bio-terrorism, affects a large portion of a country's population. The country's government may authorise data sharing for public safety reasons, allowing international authorities access to CDRs to track who is in need of help or may infect others. National security challenges often, by their nature, lead to national political and governmental instability, so that data released for purposes of care may soon also be seen as necessary for purposes of control – in fact, control often becomes defined as care in emergency situations. Crowd control and disease quarantine are just two obvious examples of this.

In a situation such as this, mobile data may initially be shared in order to track and help people. But in a context of heightened control such as military or emergency rule, the same data may also become invaluable for tracking and preventing unauthorised population movements, flows of resources and financial transactions, or protest and activism. In these cases, the data would gradually be repurposed in a process that the surveillance field terms 'function creep', making people's consent meaningless if they consented only to the data's use for purposes of care.
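Function creep is easy to illustrate in miniature. The sketch below uses made-up zone names and subscriber IDs purely as assumptions: the same raw location trace answers a care question (who was near a clinic and may need follow-up) and, unchanged, a control question (who moved through a restricted border zone). Nothing in the data itself limits which question gets asked.

```python
# Hypothetical sketch of 'function creep': identical raw CDR-derived
# location data serves a care purpose first and a control purpose later.
# Zone names and subscriber IDs are illustrative, not from any real system.

# Each entry: (subscriber, zone the call was made from)
cdrs = [
    ("A", "clinic_zone"),
    ("A", "border_zone"),
    ("B", "clinic_zone"),
]

# Purpose of care: find people near a clinic who may need follow-up.
care = {subscriber for subscriber, zone in cdrs if zone == "clinic_zone"}

# Purpose of control: the same data, reused to flag border movement.
control = {subscriber for subscriber, zone in cdrs if zone == "border_zone"}

print(sorted(care))     # ['A', 'B']
print(sorted(control))  # ['A']
```

Once the raw data is shared, the only difference between the two uses is the intent of whoever holds it – which is exactly why consent given for the first use cannot meaningfully constrain the second.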

To tackle this, the absence of consent may in fact be the better strategy. On its face, data sharing without consent is illegitimate. But the argument here is precisely that some emergency contexts make it necessary to share data without being able to obtain people's consent. So instead of stretching an effectively meaningless consent to cover this kind of data-sharing, perhaps it is more ethical to acknowledge that it is happening without explicit individual consent.

Removing consent from the picture has two implications. First, it places full accountability on the authority sharing the data and removes the ability to claim that individuals consented to the process – because in cases such as those hypothesised here, they have not. Second, it underlines the enormous potential for violating people's fundamental rights, and (most likely) turns the sharing of such data into a high-profile event that attracts political consequences and is discussed at the international level.

It will be argued that in this scenario mobile operators will not share data because their liability is too great. However, if a government has requisitioned the data, the operator is no longer liable and cannot be held accountable for any misuse of the data down the line.

Such a strategy would make it sensible to nominate data protection authorities, possibly on the international level, that can act as ethical intermediaries in cases where national data governance has broken down, as McDonald posits happened in Liberia’s case. They should have access to advice from country officials with knowledge of the local context and concerns, and should be responsible for obliging national authorities to let people know how their data is being used, for example through local radio.

Sharing CDRs in their raw form is the data protection equivalent of suspending constitutional rights. When such rights are suspended it is usually on a temporary basis, but in contrast when data is let out of the box, it is out for good – it will only replicate and be re-shared. Furthermore, the kind of context where there is an urgent case for sharing such sensitive data is also the kind of context most likely to give rise to repurposing and further sharing. A principle of non-consent and acknowledgement that privacy is being violated may, ironically, be the most appropriate and ethical way to approach such a situation.



  2. Linnet – really well-said (and thank you for the kind words). I wholeheartedly agree with the core of your argument – namely, that without enforceable limitations, blanket consent is pointless and may actually allow for more abuse than it prevents.

    I would, however, contribute a few small points of nuance/additional thoughts to add detail to my suggestions around consent:

    1. I think that consent, when it articulates strong limitations on use, actually adds a layer of protection for the individual. Without consent, someone whose privacy has been violated is forced to rely on data protection law or commercial regulation – whereas a person who can prove a violation of contractually granted consent can also pursue a civil claim for breach of contract, which is typically much easier to litigate. Admittedly, this hinges on limited, defined consent – which is rare – but it’s very important to remember HOW people litigate to protect their rights, as well as why/what they’re litigating.

    2. The examples you provide – specifically around the exercise of government emergency powers – are some of my biggest concerns (and why I talk about the ambiguity of definition and due process rights in the digital extension of emergency powers). They are also, however, almost entirely impossible to challenge at present. The kinds of data repurposing that you describe are already happening – and without any kind of due process, consent contracts, or limited licenses, there are no real legal mechanisms to prevent this. Even the normal due process requirements attached to things like wire tapping or legislative renewal of emergency powers or review of eminent domain seizures don’t apply to CDRs or data yet. This type of data sharing/repurposing becomes even harder to manage when raw data is shared between organizations in high volatility situations (like emergencies). And while we often think of this in unstable settings, France is currently under a perpetual state of emergency, under which its parliament is passing some very concerning data access and use laws – who knows how that access will go on to be used.

    3. Lastly, while we agree about the importance of enforceable purpose limitations – I think culturally ingraining the consent requirement in the way that humanitarian and international development assistance is delivered is a positive step. Organizations being able to trace consent through to use would be a substantial leap forward in traceability, process definition (and documentation), and engaging with stakeholders. While not always practical – and certainly with substantial opportunity for abuse – building consent requirements of any form into processes implicitly raises the question of value and input of stakeholders in a way that current practice doesn’t.

    In these situations, I can’t help but think about organ donor programs – what information do most of us have about how they’re administered? Who our organs go to (and why)? There is, however, the pretty huge caveat that you’re already dead, so there isn’t much room for negative impact. That said, in thinking through these issues, I think we’ll find a good set of frames for necessity, proportionality, legality, and practicality – though they’ll take a while. In the meantime, this kind of engagement and dialogue goes a long way to fleshing the issues out and I hope we’ll get the chance to explore and promote them more in the future.

  3. […] a sort of cure-all for these complications. But as Danna pointed out (as well as Linnet Taylor in this recent post responding to the same paper we discussed), consent without the ability to enforce is not enough to prevent abuse. It might be enough to […]
