This is a post about our new book, Group Privacy: New Challenges of Data Technologies (click the link for the full text).
In 2014 I met a new faculty member at the Oxford Internet Institute at the photocopier, right at the end of my last day there, and his first. Discussing what we were doing there, we got talking about how privacy didn't seem to work very well if you looked at it from different disciplinary perspectives, particularly from the angle of development studies. When data are collected and used in places prone to conflict, political instability or otherwise limited statehood, the conditions for collection and use are often not those envisaged by data protection laws in wealthier countries. For one thing, visualising conflict data may put whole communities at risk, and the new data technologies (using secondary data collected indirectly from people's use of devices or services) often leave people unaware that their data are being collected in the first place.
The new faculty member was Luciano Floridi, a leading philosopher of information, and he agreed this seemed like a problem. As we both thought further, it became clear that the issues that come up around collectives when you think about broad data collection and use are not particular to places with limited statehood, but face us all as we communicate using mobile phones and the internet, exist in smart environments that can sense our activities, and interact with the many data technologies that are becoming normal, such as drones, micro-work platforms and e-commerce.
Although data protection focuses on individuals and personal data, we are actually analysed and influenced at the group level as much as at the individual one. AI and algorithmic methods create ad hoc collectives for purposes such as predictive policing, medicine, psychological experiments and urban planning, but in most cases where people are addressed collectively, no protection applies until it can be demonstrated that an individual is impacted – and usually until that individual realises they are. Does it matter if individuals don't realise they are being impacted through their data at the group level? In the case of becoming a guinea pig for an experiment, being subject to a policy intervention, or being discriminated against collectively, absolutely.
We got together with Bart van der Sloot, then at the University of Amsterdam, who added legal insights to the philosophical ones, and organised a workshop that brought together scholars from law, philosophy, development, computer science, sociology and media studies. Almost everyone disagreed with everyone else on almost everything, especially on whether group privacy was possible, or even desirable, to protect. The only thing we agreed on was the importance of understanding it better, as it seemed likely to become a chief concern of our technological age.
This was the genesis of Group Privacy: New Challenges of Data Technologies, which has just been published. I'm enormously grateful to my co-editors, Luciano and Bart, for their work on the book, and to all the authors who contributed to it. We hope the discussion continues to develop, and that others will be as enlightened and challenged by it as we have been.