This year’s IGF was, unsurprisingly, permeated by the issue of surveillance. Almost every discussion landed on it sooner or later, driven partly by the Forum’s headlining of cybersecurity and online freedom. As a result, the term ‘big data’ was used almost exclusively to mean personal data, and to denote a particular kind of contemporary insecurity about data mining and the change in our perceived visibility as individuals. The IGF is not a governmental space: its function is to host a ‘multistakeholder’ dialogue in which the human rights community meets the security community, and where technical experts can enter the debate on the internet’s social functions. The gathering was mostly collegial, largely because everyone shared a heightened insecurity over their own online visibility, which forged connections between communities that might otherwise have remained separate (LGBT, open data and cybersecurity advocates, to name one such grouping).
Overall, the absence of the business lobby meant that the corporate ‘3 Vs’ interpretation of big data was almost entirely missing, and that the term, unmoored from its corporate dock, drifted towards denoting personal data in the context of data mining. It was not to be heard in the technical discussions on the Internet of Things, and barely at all in those on cybersecurity. Instead it came to characterise the debate at the intersection of rights, privacy and development. A workshop on ‘big data, social good and privacy’ brought together representatives from the UN Global Pulse initiative, the OECD, Privacy International and LirneAsia – a diverse set of interests ranging from research to development policy – and offered the opportunity to lay out the privacy agenda around the use of personal data from the developing world as an ‘observatory on development’. This was one of only two workshops (the first was described in my previous post) where big data was discussed as a social good.
There is a certain irony here: perhaps 90% of the discussion at the Forum treated big data as a tool for surveillance, whereas the one thread of debate focused on developing countries treated it as a way to ‘observe’ the poor in order to remedy poverty. The prevailing view was voiced by Pat Walsh of the GSMA: ‘Is it a right to have access to clean water, clean air, health care? Data can help provide these.’ In contrast, Alexandrine Pirlot of Privacy International pointed out that data was data, and that people in low-income countries had as much reason to be worried about surveillance as anyone else: ‘Big data is challenging and putting at risk the human rights of individuals that we’re supposed to be helping. It’s discriminatory and exclusionary in nature – data collected is from people who are active online, buy online, who have a mobile, who are on Facebook, but excludes those who don’t take part in these activities.’ She also pointed out that using people’s personal data to observe economic trends and responses to shocks can lead to surveillance just as easily as any other application of data science.
This debate exposed the disjuncture at the IGF between lower- and higher-income countries. The panels had almost no speakers from low-income countries, because those invited to speak at the IGF tend to come from countries with high levels of internet use. Yet the big data discussion could have benefited immeasurably from African and Latin American voices in particular. Their absence is reflected in internet (and data) governance discussions more broadly, and it leaves no one to contest the view that surveillance is benevolent where its subjects are poor.