Dissertation, University of Stellenbosch (2022)
Abstract
All societies have to balance privacy claims with other moral concerns. However, while some concern for privacy appears to be a common feature of social life, the definition, extent and moral justifications for privacy differ widely. Are there better and worse ways of conceptualising, justifying, and managing privacy? These are the questions that lie in the background of this thesis.
My particular concern is with the ethical issues around privacy that are tied to the rise of new information and communication technologies (henceforth “ICT”). The focus here is on technologies involved in processing personal data in its broadest sense, including “any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.” The issue of moral concern is commonly designated as that of 'data privacy'.
The first step is to consider why privacy matters, or ought to matter, in the first place. And here we run into our first difficulty, since both the conceptualisation of privacy and the justification for the right to privacy are matters of contention. On the one hand, we are animated by a moral concern for privacy in some sense; on the other, we lack the conceptual and moral resources for making sense of this concern or for promoting particular social policies and criticising others. This often-cited tension and its ensuing confusion have been the central motivation for most of the philosophical literature on privacy – including this thesis.
In what follows, I will suggest a way out of the confusion that pervades the current debate on data privacy. I will essentially argue two claims: one, that the concept of (data) privacy is uniquely characterised by a particular context within which we exchange information; two, that because (data) privacy constitutes an intrinsic value, its moral concerns and justifications ought to be deliberated as such.
The first chapter sketches the current data privacy landscape by introducing a number of concepts and arguing for the unique nature of that landscape. I argue that the traditional distinction between four kinds of privacy (physical, mental, decisional and informational) is no longer accurate, given the technological context that reduces, unifies and processes all aspects of people’s lives by means of digital data. In fact, to hold on to the distinction leads to a fundamental misconception of data privacy. I further argue that today’s distinction between data protection and data privacy runs the risk of ignoring the fact that the former is an inherent aspect of the latter. These arguments allow me to settle on a (provisional) definition of data privacy: controlling how data about identifiable people is processed by ICT.
Next, I draw a line between descriptive and normative accounts of data privacy, and explain why this thesis belongs to the latter category. After a brief discussion of the coherence and distinctiveness theses, and of the difference between instrumental and intrinsic theories of privacy, I introduce the problem of data privacy as a distinct, coherent, and intrinsic moral concern. Three arguments support this position: one, data privacy is a coherent moral concern because it is motivated by a singular moral concept; two, data privacy is a distinct moral concern because the privacy challenges we face today are of a fundamentally new nature – distinct from any moral concerns we’ve faced in the past; and three, data privacy is an intrinsic moral concern that is predominantly justified, not by what it instrumentally protects us from, but by what it intrinsically constitutes.
Chapter Two makes the case against a harm-based, consequentialist approach to data privacy. After a brief introduction to the basic tenets of consequentialism, I discuss harm-based arguments in favour of, as well as against, privacy. I explain how privacy may protect us against psychological distress, but at the same time constitute the source of that distress; how privacy may be necessary for the proper functioning of democratic societies, but at the same time provide a cloak for anti-social and immoral behaviour; and how privacy protects us against the improper use of our personal data by others, but at the same time enables us to conceal information or spread false information about ourselves in ways that are harmful to others. Next, I argue that consequentialism essentially relies on two flawed presumptions – that we know what harm is, and that we know how to predict it – and discuss the controversies surrounding definitions and predictions of harm. This argument becomes especially pertinent in today’s rapidly changing technological context: we may guess, make assumptions, perhaps even find correlations, but we do not really know or understand (yet) with any degree of certainty the harmful consequences today’s data privacy violations are causing or will eventually cause – if any.
I illustrate this with examples of recent data privacy violations, and with a brief overview of consequentialist arguments for and against data privacy. Those who invoke consequentialist arguments against data privacy claim that personal data is nothing more than a raw material that can be turned into something useful and valuable without harming anyone. Choosing to move fast and break things – a popular mantra that promotes the benefits of technological “permissionless innovation” over and above its risks and costs – they draw our attention to the many benefits free data flows generate. Those who rely on consequentialist arguments in favour of data privacy, on the other hand, point to the actual and potential misuse of personal data. Better safe than sorry, they argue, drawing our attention to the urgent need to take back control and protect ourselves against data privacy violations – precisely because we don’t know what harm they may cause in the future. Others go one step further and argue that these violations are part of a broader ideology of power.
What Chapter Two demonstrates is that vague definitions, unreliable predictions and inconclusive allegations of harm from both sides of the debate perpetuate a stalemate that fails to produce morally defensible answers to our growing data privacy concerns. Deliberating about data privacy from a consequentialist perspective is a dead end.
Chapter Three proposes an alternative to the harm-based approach, namely that privacy is an intrinsic value that deserves moral protection for its own sake. After a brief account of what I take values to be, and why we have a need to protect them, I introduce the foundational value of respect for persons, and lay out my main argument in two steps. The conceptual component of my argument returns to the definition of data privacy I provisionally settled on in Chapter One, but now with the added consideration of the particular context, and the particular circumstances and attitudes, within which information is exchanged. This addition is crucial, as it reveals the fundamental value that is at stake in our data privacy concerns, namely the inherent value of each and every person, which we protect by upholding an ethical principle commonly known as the principle of respect for persons. It is for this reason that the violation of privacy cannot (only) be expressed in terms of harm. I further argue that the principle of respect for persons is consistent with, and in fact unthinkable without, some degree of privacy.
I reply to potential challenges to a value-based approach to data privacy by showing that (i) values and value trade-offs are fundamentally different from consequential benefits and cost-benefit analyses; (ii) the suspension of one value in favour of another is not inherently contradictory – what matters is morally defensible deliberation about values and valid individual consent; and (iii) data privacy concerns cannot be reduced to other moral concerns and therefore deserve a specific, distinct protection.
Finally, because this is a thesis in applied ethics, I attempt to answer the question of how we ought to apply a value-based approach to practical data privacy concerns. I argue that a revised, three-pronged version of John Rawls’ reflective equilibrium, founded on the non-negotiable principle of respect for persons, is the best possible procedure for deliberating data privacy concerns in relation to other moral values.