At the beginning of the COVID-19 pandemic, high hopes were placed on digital contact tracing. Digital contact tracing apps can now be downloaded in many countries, but as further waves of COVID-19 tear through much of the northern hemisphere, these apps are playing a less important role in interrupting chains of infection than anticipated. We argue that one of the reasons for this is that most countries have opted for decentralised apps, which cannot provide a means of rapidly informing users of likely infections while avoiding too many false positive reports. Centralised apps, in contrast, have the potential to do this. But policy making was influenced by public debates about the right app configuration, which have tended to focus heavily on privacy, and are driven by the assumption that decentralised apps are “privacy preserving by design”. We show that both types of apps are in fact vulnerable to privacy breaches, and, drawing on principles from safety engineering and risk analysis, compare the risks of centralised and decentralised systems along two dimensions, namely the probability of possible breaches and their severity. We conclude that a centralised app may in fact minimise overall ethical risk, and contend that we must reassess our approach to digital contact tracing, and should, more generally, be cautious about a myopic focus on privacy when conducting ethical assessments of data technologies.
Traditional arguments for privacy in public suggest that intentionally public activities, such as political speech, do not deserve privacy protection. In this article, I develop a new argument for the view that surveillance of intentionally public activities should be limited to protect the specific good that this context provides, namely democratic legitimacy. Combining insights from Helen Nissenbaum’s contextualism and Jürgen Habermas’s theory of the public sphere, I argue that strategic surveillance of the public sphere can undermine the capacity of citizens to freely deliberate in public and therefore conflicts with democratic self-determination.
The first few years of the 21st century were characterised by a progressive loss of privacy. Two phenomena converged to give rise to the data economy: the realisation that data trails from users interacting with technology could be used to develop personalised advertising, and a concern for security that led authorities to use such personal data for the purposes of intelligence and policing. In contrast to the early days of the data economy and internet surveillance, the last few years have witnessed a rising concern for privacy. As bad data practices have come to light, citizens are starting to understand the real cost of using online digital technologies. Two events marked 2018 as a landmark year for privacy: the Cambridge Analytica scandal, and the implementation of the European Union’s General Data Protection Regulation (GDPR). The former showed the extent to which personal data has been shared without data subjects’ knowledge and consent, and often for unacceptable purposes, such as swaying elections. The latter inaugurated the beginning of robust data protection regulation in the digital age. Getting privacy right is one of the biggest challenges of this new decade of the 21st century. The past year has shown that there is still much work to be done on privacy to tame the darkest aspects of the data economy. As data scandals continue to emerge, questions abound as to how to interpret and enforce regulation, how to design new and better laws, how to complement regulation with better ethics, and how to find technical solutions to data problems. The aim of the research project Data, Privacy, and the Individual is to contribute to a better understanding of the ethics of privacy and of differential privacy. The outcomes of the project are seven research papers on privacy, a survey, and this final report, which summarises each research paper, and goes on to offer a set of reflections and recommendations to implement best practices regarding privacy.
Technological advances are bringing new light to privacy issues and changing the reasons why privacy is important. These advances have changed not only the kind of personal data that is available to be collected, but also how that personal data can be used by those who have access to it. We are particularly concerned with how information about personal attributes inferred from collected data (such as online behaviour), can be used to tailor messages and services to specific individuals or groups. This kind of ‘personalised targeting’ has the potential to influence individuals’ perceptions, attitudes, and choices in unprecedented ways. In this paper, we argue that because it is becoming easier for companies to use collected data for influence, threats to privacy are increasingly also threats to personal autonomy—an individual’s ability to reflect on and decide freely about their values, actions, and behaviour, and to act on those choices. While increasing attention is directed to the ethics of how personal data is collected, we make the case that a new ethics of privacy needs to also think more rigorously about how personal data may be used, and its potential impact on personal autonomy.
In this chapter I identify three problems affecting the plausibility of group privacy and argue in favour of their resolution. The first problem concerns the nature of the groups in question. I shall argue that groups are neither discovered nor invented, but designed by the level of abstraction (LoA) at which a specific analysis of a social system is developed. Their design is therefore justified insofar as the purpose, guiding the choice of the LoA, is justified. This should remove the objection that groups cannot have a right to privacy because groups are mere artefacts (there are no groups, only individuals) or that, even if there are groups, it is too difficult to deal with them. The second problem concerns the possibility of attributing rights to groups. I shall argue that the same logic of attribution of a right to individuals may be used to attribute a right to a group, provided one modifies the LoA and now treats the whole group itself as an individual. This should remove the objection that, even if groups exist and are manageable, they cannot be treated as holders of rights. The third problem concerns the possibility of attributing a right to privacy to groups. I shall argue that sometimes it is the group and only the group, not its members, that is correctly identified as the correct holder of a right to privacy. This should remove the objection that privacy, as a group right, is a right held not by a group as a group but rather by the group’s members severally. The solutions to the three problems support the thesis that an interpretation of privacy in terms of a protection of the information that constitutes an individual—both in terms of a single person and in terms of a group—is better suited than other interpretations to make sense of group privacy.
What motivated an absolutist Erastian who rejected religious freedom, defended uniform public worship, and deemed the public expression of disagreement a catalyst for war to endorse a movement known to history as the champion of toleration, no coercion in religion, and separation of church and state? At least three factors motivated Hobbes’s 1651 endorsement of Independency: the Erastianism of Cromwellian Independency, the influence of the politique tradition, and, paradoxically, the contribution of early-modern practices of toleration to maintaining the public sphere’s religious uniformity. The third factor illustrates how a key function of the emerging private sphere in the early-modern period was to protect uniformity, rather than diversity; it also shows that what was novel was not so much the public/private distinction itself, but the separation of two previously conflated dimensions of publicity – visibility and representativeness – that enabled early-modern Europeans to envisage modes of worship out in the open, yet still private.
Philosophers have focused on why privacy is of value to innocent people with nothing to hide. I argue that for people who do have something to hide, such as a past crime, or bad behavior in a public place, informational privacy can be important for avoiding undeserved or disproportionate non-legal punishment. Against the objection that one cannot expect privacy in public facts, I argue that I might have a legitimate privacy interest in public facts that are not readily accessible, or in details of a public fact that implicate my dignity, or in not having a public fact memorialized and spread to more people than I willingly exposed myself to.
The dominant approach in privacy theory defines information privacy as some form of control over personal information. In this essay, I argue that the control approach is mistaken, but for different reasons than those offered by its other critics. I claim that information privacy involves the drawing of epistemic boundaries—boundaries between what others should and shouldn’t know about us. While controlling what information others have about us is one strategy we use to draw such boundaries, it is not the only one. We conceal information about ourselves and we reveal it. And since the meaning of information is not self-evident, we also work to shape how others contextualize and interpret the information about us that they have. Information privacy is thus about more than controlling information; it involves the constant work of producing and managing public identities, what I call “social self-authorship.” In the second part of the essay, I argue that thinking about information privacy in terms of social self-authorship helps us see ways that information technology threatens privacy, which the control approach misses. Namely, information technology makes social self-authorship invisible and unnecessary, by making it difficult for us to know when others are forming impressions about us, and by providing them with tools for making assumptions about who we are which obviate the need for our involvement in the process.
The human mind is constituted by inner, subjective, private, first-person conscious experiences that cannot be measured with physical devices or observed from an external, objective, public, third-person perspective. The qualitative, phenomenal nature of conscious experiences also cannot be communicated to others in the form of a message composed of classical bits of information. Because in a classical world everything physical is observable and communicable, it is a daunting task to explain how an empirically unobservable, incommunicable consciousness could have any physical substrates such as neurons composed of biochemical molecules, water, and electrolytes. The challenges encountered by classical physics are exemplified by a number of thought experiments including the inverted qualia argument, the private language argument, the beetle in the box argument and the knowledge argument. These thought experiments, however, do not imply that our consciousness is nonphysical and our introspective conscious testimonies are untrustworthy. The principles of classical physics have been superseded by modern quantum physics, which contains two fundamentally different kinds of physical objects: unobservable quantum state vectors, which define what physically exists, and quantum operators (observables), which define what can physically be observed. Identifying consciousness with the unobservable quantum information contained by quantum physical brain states allows for application of quantum information theorems to resolve possible paradoxes created by the inner privacy of conscious experiences, and explains how the observable brain is constructed by accessible bits of classical information that are bound by Holevo's theorem and extracted from the physically existing quantum brain upon measurement with physical devices.
Does the rejection of pure proceduralism show that we should adopt Brettschneider’s value theory of democracy? The answer, this paper suggests, is ‘no’. There are a potentially infinite number of incompatible ways to understand democracy, of which the value theory is, at best, only one. The paper illustrates and substantiates its claims by looking at what the secret ballot shows us about the importance of privacy and democracy. Drawing on the reasons to reject Mill’s arguments for open voting, in a previous paper by A. Lever, it argues that people’s claims to privacy have a constitutive, as well as an instrumental, importance to democratic government, which is best seen by attending to democracy as a practice, and not merely as a distinctive set of values.
In recent years, educational institutions have started using the tools of commercial data analytics in higher education. By gathering information about students as they navigate campus information systems, learning analytics “uses analytic techniques to help target instructional, curricular, and support resources” to examine student learning behaviors and change students’ learning environments. As a result, the information educators and educational institutions have at their disposal is no longer demarcated by course content and assessments, and old boundaries between information used for assessment and information about how students live and work are blurring. Our goal in this paper is to provide a systematic discussion of the ways in which privacy and learning analytics conflict and to provide a framework for understanding those conflicts.

We argue that there are five crucial issues about student privacy that we must address in order to ensure that whatever the laudable goals and gains of learning analytics, they are commensurate with respecting students’ privacy and associated rights, including (but not limited to) autonomy interests. First, we argue that we must distinguish among different entities with respect to whom students have, or lack, privacy. Second, we argue that we need clear criteria for what information may justifiably be collected in the name of learning analytics. Third, we need to address whether purported consequences of learning analytics (e.g., better learning outcomes) are justified and what the distributions of those consequences are. Fourth, we argue that regardless of how robust the benefits of learning analytics turn out to be, students have important autonomy interests in how information about them is collected. Finally, we argue that it is an open question whether the goods that justify higher education are advanced by learning analytics, or whether collection of information actually runs counter to those goods.
In light of technology that may reveal the content of a person’s innermost thoughts, I address the question of whether there is a right to ‘brain privacy’—a right not to have one’s inner thoughts revealed to others–even if exposing these thoughts might be beneficial to society. I draw on a conception of privacy as the ability to control who has access to information about oneself and to an account that connects one’s interest in privacy to one’s interests in autonomy and associated reputational interests, and preserving one’s dignity. Focusing on the controversial case of Gilberto Valle, known as the ‘cannibal cop’, who faced legal punishment and moral reproach for deeply disturbing thoughts of doing violence to women, thoughts he claimed were mere fantasies, I argue that Valle has a right to brain privacy if he has a legitimate privacy interest that is not outweighed by competing societal interests in avoiding harm. Our weighing of competing privacy and societal interests will depend on the magnitude of the privacy interests in autonomy and dignity, and on how reliable the technology used to expose inner thoughts is in predicting future harmful behavior and identifying true threats.
The increasingly prominent role of digital technologies during the coronavirus pandemic has been accompanied by concerning trends in privacy and digital ethics. But more robust protection of our rights in the digital realm is possible in the future.

After surveying some of the challenges we face, I argue for the importance of diplomacy. Democratic countries must try to come together and reach agreements on minimum standards and rules regarding cybersecurity, privacy and the governance of AI.
Does privacy--the condition of being invisible to public scrutiny--in so emphasizing individual rights, undermine community? One objection to privacy is that it is a license to engage in antisocial activity that undermines social norms. Another objection is that privacy encourages isolation and anonymity, also undermining community. Drawing on the political theory of Hegel, I argue that privacy can promote community. Some invasions of privacy can undermine a sort of autonomy essential for maintaining a community. I also discuss what we need to know before establishing whether privacy empirically promotes or undermines community.
Privacy is valued by many. But what it means to have privacy remains less than clear. In this paper, I argue that the notion of privacy should be understood in epistemic terms. What it means to have (some degree of) privacy is that other persons do not stand in significant epistemic relations to those truths one wishes to keep private.
This article is part of a symposium on property-owning democracy. In A Theory of Justice John Rawls argued that people in a just society would have rights to some forms of personal property, whatever the best way to organise the economy. Without being explicit about it, he also seems to have believed that protection for at least some forms of privacy is included in the Basic Liberties, to which all are entitled. Thus, Rawls assumes that people are entitled to form families, as well as personal associations which reflect their tastes as well as their beliefs and interests. He seems also to have assumed that people are entitled to seclude themselves, as well as to associate with others, and to keep some of their beliefs, knowledge, feelings and ideas to themselves, rather than having to share them with others. So, thinking of privacy as an amalgam of claims to seclusion, solitude, anonymity and intimate association, we can say that Rawls appears to include at least some forms of privacy in his account of the liberties protected by the first principle of justice.

However, Rawls did not say very much about how he understands people’s claims to privacy, or how those claims relate to his ideas about property-ownership. This is unfortunate, because two familiar objections to privacy seem particularly pertinent to his conception of the basic liberties. The first was articulated with customary panache by Judith Thomson, in a famous article on the moral right to privacy, in which she argued that talk of a moral right to privacy is confused and confusing, because privacy rights are really just property rights in disguise. The second objection has long been a staple of leftist politics, and is that the association of privacy with private property means that privacy rights are just a mask for coercive and exploitative relationships, and therefore at odds with democratic freedom, equality and solidarity.
If the first objection implies that Rawls is wrong to think that protection for privacy can be distinguished from protection of personal property, the second objection implies that Rawls cannot hope to protect privacy without thereby committing himself to the grossest forms of capitalist inequality.

In this paper I will not discuss Rawls’ views of property-owning democracy. However, by clarifying the relationship between claims to privacy and claims to property-ownership, I hope to illuminate some of the conceptual, moral and political issues raised by Rawls’ ideas, and by work on the concept of a property-owning democracy, which he inspired. As we will see, privacy-based justifications of private ownership are not always unappealing, and privacy is sometimes promoted, rather than threatened, by collective ownership. The conclusion draws out the significance of these claims for the idea of a property-owning democracy.
This essay critically examines some classic philosophical and legal theories of privacy, organized into four categories: the nonintrusion, seclusion, limitation, and control theories of privacy. Although each theory includes one or more important insights regarding the concept of privacy, I argue that each falls short of providing an adequate account of privacy. I then examine and defend a theory of privacy that incorporates elements of the classic theories into one unified theory: the Restricted Access/Limited Control (RALC) theory of privacy. Using an example involving data-mining technology on the Internet, I show how RALC can help us to frame an online privacy policy that is sufficiently comprehensive in scope to address a wide range of privacy concerns that arise in connection with computers and information technology.
Current technology and surveillance practices make behaviors traceable to persons in unprecedented ways. This causes a loss of anonymity and of many privacy measures relied on in the past. These de facto privacy losses are seen by many as problematic for individual psychology, intimate relations and democratic practices such as free speech and free assembly. I share most of these concerns but propose that an even more fundamental problem might be that our very ability to act as autonomous and purposive agents relies on some degree of privacy, perhaps particularly as we act in public and semi-public spaces. I suggest that basic issues concerning action choices have been left largely unexplored, due to a series of problematic theoretical assumptions at the heart of privacy debates. One such assumption has to do with the influential conceptualization of privacy as pertaining to personal intimate facts belonging to a private sphere as opposed to a public sphere of public facts. As Helen Nissenbaum has pointed out, the notion of privacy in public sounds almost like an oxymoron given this traditional private-public dichotomy. I discuss her important attempt to defend privacy in public through her concept of ‘contextual integrity.’ Context is crucial, but Nissenbaum’s descriptive notion of existing norms seems to fall short of a solution. I here agree with Joel Reidenberg’s recent worries regarding any approach that relies on ‘reasonable expectations’. The problem is that in many current contexts we have no such expectations. Our contexts have already lost their integrity, so to speak. By way of a functional and more biologically inspired account, I analyze the relational and contextual dynamics of both privacy needs and harms. Through an understanding of action choice as situated and options and capabilities as relational, a more consequence-oriented notion of privacy begins to appear. I suggest that privacy needs, harms and protections are relational.
Privacy might have less to do with seclusion and absolute transactional control than hitherto thought. It might instead hinge on capacities to limit the social consequences of our actions through knowing and shaping our perceptible agency and social contexts of action. To act with intent we generally need the ability to conceal during exposure. If this analysis is correct then relational privacy is an important condition for autonomous, purposive and responsible agency—particularly in public space. Overall, this chapter offers a first stab at a reconceptualization of our privacy needs as relational to contexts of action. In terms of ‘rights to privacy’ this means that we should expand our view from the regulation and protection of the information of individuals to questions of the kind of contexts we are creating. I am here particularly interested in what I call ‘unbounded contexts’, i.e. cases of context collapses, hidden audiences and even unknowable future agents.
Even though the idea that privacy is some kind of control is often presented as the standard view on privacy, there are powerful objections against it. The aim of this paper is to defend the control account of privacy against some particularly pressing challenges by proposing a new way to understand the relevant kind of control. The main thesis is that privacy should be analyzed in terms of source control, a notion that is adopted from discussions about moral responsibility.
In this chapter I give a brief explanation of what privacy is, argue that protecting privacy is important because violations of the right to privacy can harm us individually and collectively, and offer some advice as to how to protect our privacy online.
Are rights to privacy consistent with sexual equality? In a brief, but influential, article Catherine MacKinnon trenchantly laid out feminist criticisms of the right to privacy. In “Privacy v. Equality: Beyond Roe v. Wade” she linked familiar objections to the right to privacy and connected them to the fate of abortion rights in the U.S.A. (MacKinnon, 1983, 93-102). For many feminists, the Supreme Court’s decision in Roe v. Wade (1973) had suggested that, notwithstanding a dubious past, legal rights to privacy might serve feminist objectives, and prove consistent with sexual equality. By arguing that Roe’s privacy justification of abortion rights was directly responsible for the weakness and vulnerability of abortion rights in America, MacKinnon took aim at feminist hopes for the right to privacy at their strongest point. Maintaining that Roe’s privacy justification of abortion is intimately, and not contingently, related to the Supreme Court’s subsequent decision in Harris v. McRae, (1980) MacKinnon concluded that privacy rights cannot be reconciled with the freedom and equality of women, and so can have no place in a democracy. In Harris, the Supreme Court held that the State need not provide Medicaid coverage for abortions that are necessary to preserve the health, but not the life, of a pregnant woman, effectively depriving poor women of almost all state aid for abortions. Moreover, the Court’s subsequent decision in Bowers v. Hardwick (1986) appeared to confirm the truth of MacKinnon’s observation – though this case concerned gay rights, rather than abortion rights, and occurred several years after MacKinnon’s condemnation of Harris.

This paper examines MacKinnon’s claims about the relationship of rights to privacy and equality in light of the reasoning in Harris and Bowers.
Contrasting the Majority and Minority decisions in these cases, I show, allows us to distinguish interpretations of the right to privacy that are consistent with sexual equality from those that are not. This is not simply because the two differ in their consequences – though they do – but because the former, unlike the latter, rely on empirical and normative assumptions that would justify sexual inequality whatever right they were used to interpret. So while I agree with MacKinnon that the Majority’s interpretation of the right to privacy in Harris is inconsistent with the equality of men and women, I show that there is no inherent inconsistency in valuing both privacy and equality, and no reason why we must choose to protect the one, rather than the other. Indeed, an examination of MacKinnon’s article, I suggest, can help us to see why rights to privacy can be part of a scheme of democratic rights, and how we might go about democratising the right to privacy in future. To avoid confusion I should emphasise that my arguments are of a philosophical, not a legal, nature. Thus, I will be ignoring the specifically legal and constitutional aspects of MacKinnon’s article, and of the Supreme Court decisions, in order to bring their philosophical significance into focus.
Many attempts to define privacy have been made over the last century. Early definitions and theories of privacy had little to do with the concept of information and, when they did, only in an informal sense. With the advent of information technology, the question of a precise and universally acceptable definition of privacy in this new domain became an urgent issue as legal and business problems regarding privacy started to accrue. In this paper, I propose a definition of informational privacy that is simple, yet strongly tied with the concepts of information and property. Privacy thus defined is similar to intellectual property and should receive commensurate legal protection.
This article addresses the question of whether an expectation of privacy is reasonable in the face of new technologies of surveillance, by developing a principle that best fits our intuitions. A "no sense enhancement" principle which would rule out searches using technologically sophisticated devices is rejected. The paper instead argues for the "mischance principle," which proscribes uses of technology that reveal what could not plausibly be discovered accidentally without the technology, subject to the proviso that searches that serve a great public good that clearly outweighs minimal intrusions upon privacy are permissible. Justifications of the principle are discussed, including reasons why we should use the principle and not rely solely on a utilitarian balancing test. The principle is applied to uses of aerial photography and heat-detection devices.
The purpose of this survey was to gather individuals’ attitudes and feelings towards privacy and the selling of data. A total (N) of 1,107 people responded to the survey.

Across continents, age, gender, and levels of education, people overwhelmingly think privacy is important. An impressive 82% of respondents deem privacy extremely or very important, and only 1% deem privacy unimportant. Similarly, 88% of participants either agree or strongly agree with the statement that ‘violations to the right to privacy are one of the most important dangers that citizens face in the digital age.’ The great majority of respondents (92%) report having experienced at least one privacy breach.

People’s first concern when losing privacy is the possibility that their personal data might be used to steal money from them. Interestingly, in second place in the ranking of concerns, people report being concerned about privacy because ‘Privacy is a good in itself, above and beyond the consequences it may have.’

People tend to feel that they cannot trust companies and institutions to protect their privacy and use their personal data in responsible ways. The majority of people believe that governments should not be allowed to collect everyone’s personal data. Privacy is thought to be a right that should not have to be paid for.
This paper seeks to identify the distinctive moral wrong of stalking and argues that this wrong is serious enough to criminalize. We draw on psychological literature about stalking, distinguishing types of stalkers, their pathologies, and victims. The victimology is the basis for claims about what is wrong with stalking. Close attention to the experiences of victims often reveals an obsessive preoccupation with the stalker and what he will do next. The kind of harm this does is best understood in relation to the value of privacy and conventionally protected zones of privacy. We compare anti-stalking laws in different jurisdictions, claiming that they all fail in some way to capture the distinctive privacy violation that stalking involves. Further reflection on the seriousness of the invasion of privacy it represents suggests that it is a deeply personal wrong. Indeed, it is usually more serious than obtrusive surveillance by states, precisely because it is more personal. Where state surveillance genuinely is as intrusive as stalking, it tends to adopt the tactics of the stalker, imposing its presence on the activist victim at every turn. Power dynamics, whether rooted in the power of the state or the violence of a stalker, may exacerbate violations of privacy, but the wrong is distinct from violence, threats of violence and other aggression. Nor is stalking a simple expression of a difference in power between stalker and victim, such as a difference due to gender.
Unjustifiable assumptions about sex and gender roles, the untamable potency of maleness, and gynophobic notions about women's bodies inform and influence a broad range of policy-making institutions in this society. In December 2004, the U.S. Court of Appeals for the Sixth Circuit continued this ignoble cultural pastime when it decided Everson v. Michigan Department of Corrections. In this decision, the Everson Court accepted the Michigan Department of Corrections’ claim that “the very manhood” of male prison guards both threatens the safety of female inmates and violates the women's “special sense of privacy in their genitals,” and declared that sex-specific employment policies for prison guards are not impermissibly discriminatory. I believe that the Court's decision relies on unacceptable stereotypes about sex, gender, and sexuality, and that it significantly undermines Title VII's power to end discriminatory employment practices.
Privacy involves a zone of inaccessibility in a particular context. In social discourse it pertains to activities that are not public, the latter being by definition knowable by outsiders. The so-called public domain is the opposite of secrecy and, somewhat less so, of confidentiality. The private sphere is respected in law and morality, now in terms of a right to privacy. In law, some violations of privacy are torts. Philosophers tend to associate privacy with personhood. Professional relationships are prima facie private but may be investigated for cause. Privacy may entail both intimacy and caring, but neither necessarily so. Finally, various technology-based means of intrusion are rendering privacy ever more difficult to sustain.
Most people are completely oblivious to the dangers their medical data faces as soon as it goes out into the burgeoning world of big data. Medical data is financially valuable, and your sensitive data may be shared or sold by doctors, hospitals, clinical laboratories, and pharmacies, without your knowledge or consent. Medical data can also be found in your browsing history, the smartphone applications you use, data from wearables, your shopping list, and more. At best, data about your health might end up in the hands of researchers on whose goodwill we depend to avoid abuses of power. Most likely, it will end up with data brokers who might sell it to a future employer, or an insurance company, or the government. At worst, your medical data may end up in the hands of criminals eager to commit extortion or identity theft. In addition to data harms related to exposure and discrimination, the collection of sensitive data by powerful corporations risks the creation of data monopolies that can dominate and condition access to health care.

This chapter aims to explore the challenge that big data brings to medical privacy. Section I offers a brief overview of the role of privacy in medical settings. I define privacy as having one’s personal information and one’s personal sensorial space (what I call autotopos) unaccessed. Section II discusses how the challenge of big data differs from other risks to medical privacy. Section III is about what can be done to minimise those risks. I argue that the most effective way of protecting people from suffering unfair medical consequences is by having a public universal healthcare system in which coverage is not influenced by personal data (e.g., genetic predisposition, exercise habits, eating habits, etc.).
Despite widespread agreement that privacy in the context of education is important, it can be difficult to pin down precisely why and to what extent it is important, and it is challenging to determine how privacy is related to other important values. But that task is crucial. Absent a clear sense of what privacy is, it will be difficult to understand the scope of privacy protections in codes of ethics. Moreover, privacy will inevitably conflict with other values, and understanding the values that underwrite privacy protections is crucial for addressing conflicts between privacy and institutional efficiency, advising efficacy, vendor benefits, and student autonomy.

My task in this paper is to seek a better understanding of the concept of privacy in institutional research, canvass a number of important moral values underlying privacy generally (including several that are explicit in the AIR Statement), and examine how those moral values should bear upon institutional research by considering several recent cases.
It is especially hard, at present, to read the newspapers without emitting a howl of anguish and outrage. Philosophy can heal some wounds but, in this case, political action may prove a better remedy than philosophy. It can therefore feel odd trying to think philosophically about surveillance at a time like this, rather than joining with like-minded people to protest the erosion of our civil liberties, the duplicity of our governments, and the failings in our political institutions, including our political parties, revealed by the succession of leaks dripping away this summer. Still, philosophy can help us to think about what we should do, not merely what we should believe. Thus, in what follows I draw on my previous work on privacy, democracy, and security in order to highlight aspects of recent events which, or so I hope, may prove useful both for political thought and action.
This paper discusses the idea that the concept of privacy should be understood in terms of control. Three different attempts to spell out this idea will be critically discussed. The conclusion will be that the so-called Source Control View on privacy is the most promising version of the idea that privacy is to be understood in terms of control.
Disputes at the intersection of national security, surveillance, civil liberties, and transparency are nothing new, but they have become a particularly prominent part of public discourse in the years since the attacks on the World Trade Center in September 2001. This is in part due to the dramatic nature of those attacks, in part based on significant legal developments after the attacks (classifying persons as “enemy combatants” outside the scope of traditional Geneva protections, legal memos by White House counsel providing a rationale for torture, the USA Patriot Act), and in part because of the rapid development of communications and computing technologies that enable both greater connectivity among people and the greater ability to collect information about those connections.

One important way in which these questions intersect is in the controversy surrounding bulk collection of telephone metadata by the U.S. National Security Agency. The bulk metadata program (the “metadata program” or “program”) involved court orders under section 215 of the USA Patriot Act requiring telecommunications companies to provide records about all calls the companies handled, and the creation of a database that the NSA could search. The program was revealed to the general public in June 2013 as part of the large document leak by Edward Snowden, a former contractor for the NSA.

A fair amount has been written about section 215 and the bulk metadata program. Much of the commentary has focused on three discrete issues. First is whether the program is legal; that is, does the program comport with the language of the statute, and is it consistent with Fourth Amendment protections against unreasonable searches and seizures? Second is whether the program infringes privacy rights; that is, does bulk metadata collection diminish individual privacy in a way that rises to the level that it infringes persons’ rights to privacy?
Third is whether the secrecy of the program is inconsistent with democratic accountability. After all, people in the general public only became aware of the metadata program via the Snowden leaks; absent those leaks, there would likely not have been the sort of political backlash and investigation necessary to provide some kind of accountability.

In this chapter I argue that we need to look at these not as discrete questions, but as intersecting ones. The metadata program is not simply a legal problem (though it is one); it is not simply a privacy problem (though it is one); and it is not simply a secrecy problem (though it is one). Instead, the importance of the metadata program is the way in which these problems intersect and reinforce one another. Specifically, I will argue that the intersection of the questions undermines the value of rights, and that this is a deeper and more far-reaching moral problem than each of the component questions.
New technologies of surveillance such as Global Positioning Systems (GPS) are increasingly used as convenient substitutes for conventional means of observation. Recent court decisions hold that the government may, without a warrant, use a GPS to track a vehicle’s movements in public places without violating the 4th Amendment, as the vehicle is in plain view and no reasonable expectation of privacy is violated. This emerging consensus of opinions fails to distinguish the unreasonable expectation that we not be seen in public from the reasonable expectation that we not be followed. Drawing on a critical discussion of the plain view doctrine, an analysis of privacy interests in public places, and a distinction between privacy and property interests, the article contends that government use of GPS to track our movements should require a warrant.
Civil liberty and privacy advocates have criticized the USA PATRIOT Act (Act) on numerous grounds since it was passed in the wake of the World Trade Center attacks in 2001. Two of the primary targets of those criticisms are the Act’s sneak-and-peek search provision, which allows law enforcement agents to conduct searches without informing the search’s subjects, and the business records provision, which allows agents to secretly subpoena a variety of information, most notoriously, library borrowing records. Without attending to all of the ways that critics claim the Act burdens privacy, I examine whether those two controversial parts of the Act, the section 213 sneak-and-peek search and the section 215 business records gag-rule provisions, burden privacy as critics charge. I begin by describing the two provisions. Next, I explain why those provisions don’t burden privacy on standard philosophical accounts. Moreover, I argue that they need not conflict with the justifications for people’s claims to privacy, nor do they undermine the value of privacy on the standard accounts. However, rather than simply concluding that the sections don’t burden privacy, I argue that those provisions are problematic on the grounds that they undermine the value of whatever rights to privacy people have. Specifically, I argue that it is important to distinguish rights themselves from the value that those rights have to the rights-holders, and that an essential element of privacy rights having value is that privacy right-holders be able to tell the extent to which they actually have privacy. This element, which is justified by the right-holders’ autonomy interests, is harmed by the two provisions.
This article forms part of a tribute to Anita L. Allen by the APA Newsletter on Philosophy and Law. It celebrates Allen's work but also explains why her conception of privacy is philosophically inadequate. It then uses basic democratic principles and the example of the secret ballot to suggest how we might develop a more philosophically persuasive version of Allen's ideas.
Today’s ethics of privacy is largely dedicated to defending personal information from big data technologies. This essay goes in the other direction; it considers the struggle to be lost, and explores two strategies for living after privacy is gone. First, total exposure embraces privacy’s decline, and then contributes to the process with transparency. All personal information is shared without reservation. The resulting ethics is explored through a big data version of Robert Nozick’s Experience Machine thought experiment. Second, transient existence responds to privacy’s loss by ceaselessly generating new personal identities, which translates into constantly producing temporarily unviolated private information. The ethics is explored through Gilles Deleuze’s metaphysics of difference applied in linguistic terms to the formation of the self. Comparing the exposure and transience alternatives leads to the conclusion that today’s big data reality splits the traditional ethical link between authenticity and freedom. Exposure provides authenticity, but negates human freedom. Transience provides freedom, but disdains authenticity.
This paper weaves together a number of separate strands, each relating to an aspect of Wittgenstein’s Philosophical Investigations. The first strand introduces his radical and incoherent idea of a private object. Wittgenstein in § 258 and related passages is not investigating a perfectly ordinary notion of first-person privacy; but his critics have treated his question, whether a private language is possible, solely in terms of their quite separate question of how our ordinary sensation terms can be understood, in a philosophical context, to acquire meaning. Yet it is no part of his intention to demonstrate logically that ordinary sensations are not intrinsically meaningful. This is a tempting yet misleading picture, the picture also expressed through the idea of Augustine’s child, who is conceptually articulate prior to learning how to talk. This picture lies behind the born Crusoe, an idea at the centre of the dichotomy between language as essentially shared and essentially shareable, a dichotomy considered here to result from a misconception of two quite separate but related aspects of Wittgenstein’s treatment of following a rule. The notion of a misleading picture, in both its pre-theoretical and philosophical aspects, also plays a crucial role in a treatment of Saul Kripke’s well-known “Postscript: Wittgenstein and Other Minds”.
This paper examines whether American parents legally violate their children’s privacy rights when they share embarrassing images of their children on social media without their children’s consent. My inquiry is motivated by recent reports that French authorities have warned French parents that they could face fines and imprisonment for such conduct if their children sue them once they turn 18. Where French privacy law is grounded in respect for dignity, thereby explaining the French concerns about parental “over-sharing,” I show that there are three major legal roadblocks for such a case to succeed in US law. First, US privacy tort law largely only protects a person’s image where the person has a commercial interest in his or her image. Second, privacy tort laws are subject to constitutional constraints respecting the freedom of speech and press. Third, American courts are reluctant to erode parental authority, except in cases where extraordinary threats to children’s welfare exist. I argue that while existing privacy law in the US is inadequate to offer children legal remedy if their parents share embarrassing images of them without their consent, the dignity-based concerns of the French should not be neglected. I consider a recent proposal to protect children’s privacy by extending to them the “right to be forgotten” online, but I identify problems in this proposal and argue that it is not a panacea to the over-sharing problem. I conclude by emphasizing our shared social responsibilities to protect children by teaching them about the importance of respecting one another’s privacy and dignity in the online context, and by setting examples as responsible users of internet technologies.
One of the most prominent ethical concerns regarding emerging neurotechnologies is mental privacy. This is the idea that we should have control over access to our neural data and to the information about our mental processes and states that can be obtained by analyzing it. A key issue is whether this information needs more stringent protection than other kinds of personal information. I will articulate and support the view, underlying recent regulatory frameworks, that mental privacy requires special treatment because of its relation to relevant aspects of personal identity. It has been suggested that this approach could be supported by the idea that mental privacy constitutes a fundamental psychological dimension of privacy. The connection between this psychological view of privacy and identity can be traced back to Irwin Altman’s idea that privacy is an interpersonal boundary regulation process. However, it is not clear whether this notion of privacy can be associated with a conception of identity that is relevant in contemporary neuroethics. I will suggest that the narrative and relational approach to identity, a prominent view in recent ethical discussions of neurotechnology, lines up with key aspects of Altman’s proposal. I suggest that if mental privacy is an essential component of identity, the latter could be affected by technological mind-reading.
In this paper I argue for two things. First, many concerns we have regarding privacy, both regarding what things we do and do not want to protect in its name, can be explained through an account of our moral (legal and ethical) rights. Second, to understand a further set of moral (ethical and legal) concerns regarding privacy, especially the temptation to want to intrude on and disrespect others’ privacy and the gravity of such breaches and denials of privacy, we must appreciate the way in which protecting freedom requires us to take into account the sociality of human nature. I draw on Kant’s practical philosophy, his moral accounts of freedom (of virtue and of right) as well as of human nature and evil, to make these arguments.
If one lives in a city and wants to be by oneself or have a private conversation with someone else, there are two ways to set about it: either one finds a place of solitude, such as one’s bedroom, or one finds a place crowded enough, public enough, that attention to each person is diluted so much that it resembles a deserted refuge. Often, one can get more privacy in public places than in the most private of spaces. The home is not always the ideal place to find privacy. Neighbours snoop, children ask questions, and family members judge. When the home suffocates privacy, the only escape is to go out, to the coffee shop, the public square. For centuries, city streets have been the true refuges of the solitary, the overwhelmed, and the underprivileged.

Yet time and again we hear people arguing that we do not have any claim to privacy while on the streets because they are part of the so-called public sphere. The main objective of this chapter is to argue that privacy belongs as much in the streets as it does in the home.
In 2016, the European Parliament approved the General Data Protection Regulation (GDPR), whose core aim is the safeguarding of information privacy and, by corollary, human dignity. Drawing on the field of philosophical anthropology, this paper analyses various interpretations of human dignity and human exceptionalism. It concludes that privacy is essential for humans to flourish and for individuals to build a sense of self and of the world.
The article provides an outline of the ontological interpretation of informational privacy based on information ethics. It is part of a larger project of research in which I have developed the foundations of the ideas presented here and their consequences. As an outline, it is meant to be self-sufficient and to provide enough information to enable the reader to assess how the approach fares with respect to other alternatives. However, those interested in a more detailed analysis, and especially in the reasons offered in its favour, may wish to consult the other articles as well.
The paper outlines a new interpretation of informational privacy and of its moral value. The main theses defended are: (a) informational privacy is a function of the ontological friction in the infosphere, that is, of the forces that oppose the information flow within the space of information; (b) digital ICTs (information and communication technologies) affect the ontological friction by changing the nature of the infosphere (re-ontologization); (c) digital ICTs can therefore both decrease and protect informational privacy but, most importantly, they can also alter its nature and hence our understanding and appreciation of it; (d) a change in our ontological perspective, brought about by digital ICTs, suggests considering each person as being constituted by his or her information, and hence regarding a breach of one’s informational privacy as a form of aggression towards one’s personal identity.
The idea that there are features of or in our conscious experience that are, in some important sense, private has both a long history in philosophy and a large measure of intuitive attraction. Once this idea is in place, it will be very natural to assume that one can think and judge about one’s own private features. And it is then only a small step to the idea that we might communicate such thoughts and judgements about our respective private features with each other.
This paper addresses the always-online behavior of the data owner in proxy re-encryption schemes for the issuing of re-encryption keys. We extend and adapt multi-authority ciphertext-policy attribute-based encryption techniques to type-based proxy re-encryption to build our solution. As a result, user authentication and user authorization are moved to the cloud server, which does not require further interaction with the data owner; the identities of the data owner and data users are hidden from the cloud server; and re-encryption keys are only issued to legitimate users. An in-depth analysis shows that our scheme is secure, flexible, and efficient for mobile cloud computing.
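The paper's multi-authority CP-ABE construction is too involved for a short sketch, but the core idea of proxy re-encryption, a proxy (here, the cloud server) transforming a ciphertext encrypted for the data owner into one a legitimate user can decrypt, without ever seeing the plaintext, can be illustrated with a toy version of the classic ElGamal-based scheme of Blaze, Bleumer, and Strauss. Everything below (the tiny group parameters, the function names, the owner computing the re-encryption key from both secrets) is my own insecure illustration, not the paper's scheme:

```python
# Toy ElGamal-based proxy re-encryption (Blaze-Bleumer-Strauss style).
# NOT secure and NOT the paper's CP-ABE construction: the group is tiny,
# the scheme is bidirectional, and there is no padding or hashing.
# It only illustrates the flow: owner -> proxy (re-encrypt) -> user.
import random

P, Q, G = 23, 11, 4  # G generates the order-Q subgroup of Z_P* (toy parameters)

def keygen():
    """Return a (secret, public) key pair: sk in [1, Q), pk = G^sk mod P."""
    sk = random.randrange(1, Q)
    return sk, pow(G, sk, P)

def encrypt(pk, m):
    """Encrypt m (an element of Z_P*) under pk as (m * G^r, pk^r)."""
    r = random.randrange(1, Q)
    return (m * pow(G, r, P) % P, pow(pk, r, P))

def rekey(sk_owner, sk_user):
    """Re-encryption key sk_user / sk_owner mod Q. In this toy version the
    owner must know both secrets; real schemes avoid that."""
    return sk_user * pow(sk_owner, -1, Q) % Q

def reencrypt(rk, ct):
    """Proxy step: turns G^(a*r) into G^(b*r) without seeing the plaintext."""
    c1, c2 = ct
    return (c1, pow(c2, rk, P))

def decrypt(sk, ct):
    """Recover m by stripping the blinding factor G^r."""
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, Q), P)   # (G^(sk*r))^(1/sk) = G^r
    return c1 * pow(g_r, -1, P) % P

owner_sk, owner_pk = keygen()
user_sk, user_pk = keygen()
ct = encrypt(owner_pk, 9)                            # stored for the owner
ct_user = reencrypt(rekey(owner_sk, user_sk), ct)    # cloud transforms it
assert decrypt(owner_sk, ct) == 9
assert decrypt(user_sk, ct_user) == 9
```

The key property the paper builds on is visible in `reencrypt`: the proxy holds only `rk` and the ciphertext, so it can delegate decryption rights without learning the data, which is what lets authentication and authorization move to the cloud server.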