Big Data & Privacy

Elise Antoine

In 2018 it emerged that Cambridge Analytica had harvested millions of Facebook profiles, including those of some 64,000 New Zealanders, for political campaign purposes, using the information without consent.

The scandal illustrated how big data can put the right to privacy at risk. Attention at the time focused on national elections, but the problem it reflects is a global one, and its ethical implications include the potential for privacy breaches.

So, what is the global community doing about this? What about New Zealand?

Aggregated data conveys insights into the lives of individuals and groups, helping to better inform policy-makers, and analysing data collected by governments for policy-making is common practice. The notion of privacy is complex – essentially, being free from intrusion into personal life. It involves many elements, but in the digital context it centres on the capacity to control personal information and to remain anonymous.

Global – The UN

The UN has adopted 17 Sustainable Development Goals to achieve a more sustainable global future. Because big data helps to inform policy-makers, it can contribute to each of these Goals.

Yet human rights, another major UN goal, must be protected if the opportunities that big data presents are to be realised. The right to privacy is upheld as a fundamental human right in the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR):

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks” (UDHR, Article 12).

The problem human society faces is that digital technology is integrated into every sphere of life, so the space for being free from ‘interference’ is shrinking. Big data enables the collection and analysis of massive amounts of personal information, often without individual consent. Research and policy-making purposes ‘legitimise’ the collection and storage of personal information on the supposition that individual identities are protected. But because so much information is now available, it is harder to remain anonymous.

These issues are global: they can affect anyone, in any country. Global issues require global responses – solutions that go beyond national boundaries. The UN General Assembly adopted its first resolution on the right to privacy in the digital age in 2013, affirming that “the same rights that people have offline must also be protected online” (UNGA, 2013).

Yet there’s currently no framework regulating digital technology and protecting privacy on a global scale.  So let’s consider how privacy is safeguarded at the regional and national levels.

Regional – EU

In recent years, privacy laws around the world have been reformed. The most influential of those reforms has been the EU’s General Data Protection Regulation (GDPR), which took effect in May 2018. It regulates the processing of EU residents’ personal data and applies to both the private and public sectors. Although the GDPR has affected many countries outside Europe, it is not sufficient to impose worldwide rules on privacy.

National – New Zealand

In New Zealand, personal information is protected through the Privacy Act 1993, whose principles are designed to prevent privacy (data) breaches. A data breach is the loss, unauthorised use or disclosure of personal data. Breaches can result in financial loss or emotional distress – for example, for patients whose diagnosis has been publicly exposed. A privacy breach jeopardises human dignity.

The Act seeks to protect individual privacy and, as such, is based on the capacity of the individual to manage their own data (e.g. the right to access information and to correct it). Yet there is no specific framework for data from and about Māori: the Act recognises individual privacy, but not collective privacy. And the number of privacy breaches has recently increased – the Privacy Commissioner reported that the Ministry of Social Development (MSD) was collecting beneficiaries’ data, including text messages, police records and banking records.

So in 2018 a new Bill was introduced to replace the Privacy Act. Currently, when a privacy breach occurs, the affected individual is responsible for making a complaint to the Privacy Commissioner. The Bill shifts that responsibility: the agency collecting the data must notify both the individual and the Commissioner when a breach that has caused harm (or risk of harm) occurs. This is essential for increasing transparency and accountability.

This reform will bring New Zealand closer to European regulation, as it acknowledges data subjects’ rights and requires the reporting of privacy breaches. But the Bill still falls behind the EU on significant issues.

  • The GDPR defines personal data more broadly: it covers data that can be linked to a person, even if only in combination with other data (e.g. location data from mobile phones).
  • The GDPR also recognises a ‘right to be forgotten’, which extends the capacity to control data: individuals can ask for the erasure of their data, which in turn strengthens their capacity to withdraw consent.
  • The NZ Bill does allow personal data to be used for research when safeguards preventing the re-identification of individuals are in place – a provision comparable to Article 89 of the GDPR.

Big data is massive; so are the ethical questions

These national and international frameworks delineate the privacy issues raised by big data – issues exemplified in New Zealand by the Integrated Data Infrastructure (IDI). The IDI operates under a clear purpose: improving the quality of public services by enabling research based on linked data. Researchers (from government departments or universities) must demonstrate how their project contributes to that purpose. But there is no independent ethics committee to review the projects; Statistics NZ alone is responsible for accepting or refusing proposals.

Privacy, moreover, is considered at several stages. Potential risks to individual privacy are assessed before data is added to the infrastructure. Researchers can only access the data in a secure environment, without internet or USB access, and only after attending privacy and confidentiality training. Finally, the data is de-identified so that individuals cannot be recognised. The IDI is an example of how researchers and policy-makers can use aggregated data ethically. New Zealanders could be further involved in deciding how data should be used – particularly Māori, who recognise collective privacy.
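To make ‘de-identification’ concrete, here is a minimal sketch in Python of the kind of step involved. The record, field names and secret salt are invented for illustration and do not describe Statistics NZ’s actual process: direct identifiers are dropped, and the unique identifier is replaced with an irreversible pseudonym so records can still be linked without revealing who they belong to.

import hashlib

# Illustrative only: a toy record with made-up fields, not real IDI data.
record = {
    "name": "Jane Example",
    "ird_number": "123-456-789",
    "region": "Wellington",
    "year_of_birth": 1987,
}

SECRET_SALT = "kept-separate-from-researchers"  # hypothetical secret value

def de_identify(rec):
    """Drop direct identifiers and replace the unique ID with a salted hash."""
    pseudonym = hashlib.sha256(
        (SECRET_SALT + rec["ird_number"]).encode()
    ).hexdigest()[:12]
    return {
        "pseudo_id": pseudonym,              # still allows linking across datasets
        "region": rec["region"],             # retained fields remain quasi-identifiers
        "year_of_birth": rec["year_of_birth"],
    }

print(de_identify(record))

Note that de-identification of this kind removes names and numbers but keeps attributes that researchers need – and, as the next paragraph explains, those remaining attributes are exactly what makes re-identification possible.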

Above all, anonymity is potentially at risk. The IDI is based on big data, and researchers may recognise an individual simply because they know a large number of different features about that person. The more data is aggregated, the more easily individuals can be identified. New Zealand has fewer than five million residents, which makes re-identification easier still. There is currently a gap in the legislation, as in most privacy laws, since re-identification is not taken into account.
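A small hypothetical example, again in Python with invented attributes, illustrates the point: once names are removed, a handful of remaining attributes taken together can still single out one person.

from collections import Counter

# Hypothetical de-identified records: names removed, a few attributes kept per person.
records = [
    ("Wellington", 1987, "teacher"),
    ("Auckland", 1990, "nurse"),
    ("Wellington", 1987, "teacher"),
    ("Dunedin", 1975, "engineer"),   # a unique combination: one identifiable person
]

# Count how many people share each combination of quasi-identifiers.
counts = Counter(records)
unique = [combo for combo, n in counts.items() if n == 1]

print(f"{len(unique)} of {len(counts)} attribute combinations match exactly one person")
# Adding more attributes (income, health events, location history) quickly makes
# most combinations unique - this is the re-identification gap described above.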

The fact that individuals are not identifiable does not mean that harm cannot occur.

Groups of people (social, ethnic, religious) can be identified and flagged, which may lead to discriminatory practices. Further research could, therefore, focus on the challenges of an ‘IDI-based policy’ – how data is interpreted and used in policy-making. What are the implications?

The IDI operates under secure principles, but there remains room for improvement in both data governance and anonymity. Although the new Bill doesn’t address the re-identification issue, it significantly improves transparency and accountability in data use. New Zealand, like the EU, is seeking to counter data breaches and to protect the right to privacy online as well as offline.

Privacy is essential for people to be themselves – to develop a unique individuality. As a fundamental human right, it requires constant debate, and constant effort to keep it safe. The problem only gets harder as it moves from the national to the regional to the global level.

The digital revolution – both the internet and social media – is shaping, for better or worse, the nature of global citizenship in the 21st century. We need to ensure it’s for the better.

Perhaps there is scope for NZCGS to explore the implications of internet governance for the global community.  As a member of the Centre’s Young Global Scholars Group, I think this would be ‘added value’, for New Zealand and beyond.

Elise Antoine is a member of the NZ Centre’s Young Global Scholars Group. She graduated in Political Science from Panthéon-Sorbonne University (Paris), and in 2019 was an intern at UNANZ (Wellington). She is currently at King’s College London, where her doctoral research focuses on the politicisation of internet governance and its implications for the global community.

By Libby Giles, Director

Libby Giles is the Director of NZCGS. She specialises in global citizenship education, which she sees as a key tool in responding to global challenges and one that sits at the heart of all the Centre’s kaupapa.

December 19, 2019
