Despite recent growth in surveillance capabilities, there has been little discussion regarding the ethics of surveillance. Much of the research that has been carried out has tended to lack a coherent structure or fails to address key concerns. I argue that the just war tradition should be used as an ethical framework which is applicable to surveillance, providing the questions which should be asked of any surveillance operation. In this manner, when considering whether to employ surveillance, one should take into account the reason for the surveillance, the authority of the surveillant, whether or not there has been a declaration of intent, whether surveillance is an act of last resort, the likelihood of success of the operation, and whether surveillance is a proportionate response. Once underway, the methods of surveillance should be proportionate to the occasion and seek to target appropriate people while limiting surveillance of those deemed inappropriate. By drawing on the just war tradition, ethical questions regarding surveillance can draw on a long and considered discourse while gaining a framework which, I argue, raises all the key concerns and misses none.
Some of the world's most powerful corporations practise what Shoshana Zuboff (2015; 2019) calls ‘surveillance capitalism’. The core of their business is harvesting, analysing and selling data about the people who use their products. In Zuboff's view, the first corporation to engage in surveillance capitalism was Google, followed by Facebook; recently, firms such as Microsoft and Amazon have pivoted towards such a model. In this paper, I suggest that Karl Marx's analysis of the relations between industrial capitalists and workers is closely analogous to the relations between surveillance capitalists and users. Furthermore, three problematic aspects of industrial capitalism that Marx describes – alienation, exploitation and accumulation – are also aspects, in new forms, of surveillance capitalism. I draw heavily on Zuboff's work to make these parallels. However, my Marx-inspired account of surveillance capitalism differs from hers over the nature of the exchange between users and surveillance capitalists. For Zuboff, this is akin either to robbery or the gathering of raw materials; on the Marx-inspired account it is a voluntary sale. This difference has important implications for the question of how to resist surveillance capitalism. Joint winner of the 2020 Philosophy essay prize.
There is a long-running debate as to whether privacy is a matter of control or access. This has become more important following revelations made by Edward Snowden in 2013 regarding the collection of vast swathes of data from the Internet by signals intelligence agencies such as the NSA and GCHQ. The nature of this collection is such that if the control account is correct then there has been a significant invasion of people's privacy. If, though, the access account is correct then there has not been an invasion of privacy on the scale suggested by the control account. I argue that the control account of privacy is mistaken. However, the consequence of this is not that seizing control of personal information is unproblematic. I argue that the control account, while mistaken, seems plausible for two reasons. The first is that a loss of control over my information entails harm to the rights and interests that privacy protects. The second is that a loss of control over my information increases the risk that my information will be accessed and that my privacy will be violated. Seizing control of another's information is therefore harmful, even though it may not entail a violation of privacy. Indeed, seizing control of another's information may be more harmful than actually violating their privacy.
Urban centers are being transformed into consumer tourist playgrounds made possible by dense networks of surveillance. The safety and entertainment, however, come at an unseen price. One of the historical roots of surveillance can be connected to the modern information base of tracking individuals for economic and political reasons. Though its antecedents can be traced via Foucault's account of panoptic discipline, which walled in society's outcasts for rehabilitation, the following essay explores the shift to the urban panopticism of today, where society's outcasts are subtly filtered out of "public" view. Juxtaposing a sociological account of the concentration camp with urban Disneyization fosters a greater understanding of how surveillance creates certain categories of citizenship. In particular, it shows how urban surveillance intensifies Walter Benjamin's description of the flâneur, who often experiences the brunt of the new urban panopticon's filtering power.
Recent disclosures suggest that many governments apply indiscriminate mass surveillance technologies that allow them to capture and store a massive amount of communications data belonging to citizens and non-citizens alike. This article argues that traditional liberal critiques of government surveillance that center on an individual right to privacy cannot completely capture the harm that is caused by such surveillance because they ignore its distinctive political dimension. As a complement to standard liberal approaches to privacy, the article develops a critique of surveillance that focuses on the question of political power in the public sphere.
In her BBC Reith Lectures on Trust, Onora O’Neill offers a short, but biting, criticism of transparency. People think that trust and transparency go together, but in reality, says O'Neill, they are deeply opposed. Transparency forces people to conceal their actual reasons for action and invent different ones for public consumption. Transparency forces deception. I work out the details of her argument and worsen her conclusion. I focus on public transparency – that is, transparency to the public over expert domains. I offer two versions of the criticism. First, the epistemic intrusion argument: The drive to transparency forces experts to explain their reasoning to non-experts. But expert reasons are, by their nature, often inaccessible to non-experts. So the demand for transparency can pressure experts to act only in those ways for which they can offer public justification. Second, the intimate reasons argument: In many cases of practical deliberation, the relevant reasons are intimate to a community and not easily explicable to those who lack a particular shared background. The demand for transparency, then, pressures community members to abandon the special understanding and sensitivity that arises from their particular experiences. Transparency, it turns out, is a form of surveillance. By forcing reasoning into the explicit and public sphere, transparency roots out corruption — but it also inhibits the full application of expert skill, sensitivity, and subtle shared understandings. The difficulty here arises from the basic fact that human knowledge vastly outstrips any individual’s capacities. We all depend on experts, which makes us vulnerable to their biases and corruption. But if we try to wholly secure our trust — if we leash groups of experts to pursuing only the goals and taking only the actions that can be justified to the non-expert public — then we will undermine their expertise. We need both trust and transparency, but they are in essential tension. This is a deep practical dilemma; it admits of no neat resolution, but only painful compromise.
The concept of surveillance has recently been complemented by the concept of sousveillance. Neither term, however, has been rigorously defined, and it is particularly unclear how to understand and delimit sousveillance. This article sketches a generic definition of surveillance and proceeds to explore various ways in which we might define sousveillance, including power differentials, surreptitiousness, control, reciprocity, and moral valence. It argues that on each of these ways of defining it, sousveillance either fails to be distinct from surveillance or fails to provide a generally useful concept. As such, the article concludes that academics should avoid the neologism and simply clarify what sense of surveillance is at stake when necessary.
Foucault’s disciplinary society and his notion of panopticism are often invoked in discussions regarding electronic surveillance. Against this use of Foucault, I argue that contemporary trends in surveillance technology abstract human bodies from their territorial settings, separating them into a series of discrete flows through what Deleuze terms the surveillant assemblage. The surveillant assemblage and its product, the socially sorted body, aim less at molding, punishing and controlling the body and more at triggering events of inclusion and exclusion from life opportunities. The meaning of the body as monitored by latest-generation vision technologies formed from machine-only surveillance has been transformed. Such a body is no longer disciplinary in the Foucauldian sense. It is a virtual/flesh interface broken into discrete data flows whose comparison and breakage generate bodies as both legible and eligible (or illegible).
Surveillance plays a crucial role in public health, and for obvious reasons conflicts with individual privacy. This paper argues that the predominant approach to the conflict is problematic, and then offers an alternative. It outlines a Basic Interests Approach to public health measures, and the Unreasonable Exercise Argument, which sets forth conditions under which individuals may justifiably exercise individual privacy claims that conflict with public health goals. The view articulated is compatible with a broad range of conceptions of the value of health.
In this paper, we argue that there is at least a pro tanto reason to favor the control account of the right to privacy over the access account of the right to privacy. This conclusion is of interest due to its relevance for contemporary discussions related to surveillance policies. We discuss several ways in which the two accounts of the right to privacy can be improved significantly by making minor adjustments to their respective definitions. We then test the improved versions of the two accounts on a test case, to see which account best explains the violation that occurs in the case. The test turns out in favor of the control account.
There is an ongoing increase in the use of mobile health technologies that patients can use to monitor health-related outcomes and behaviours. While the dominant narrative around mHealth focuses on patient empowerment, there is potential for mHealth to fit into a growing push for patients to take personal responsibility for their health. I call the first of these uses ‘medical monitoring’, and the second ‘personal health surveillance’. After outlining two problems which the use of mHealth might seem to enable us to overcome—fairness of burdens and reliance on self-reporting—I note that these problems would only really be solved by unacceptably comprehensive forms of personal health surveillance which apply to all of us at all times. A more plausible model is to use personal health surveillance as a last resort for patients who would otherwise independently qualify for responsibility-based penalties. However, I note that there are still a number of ethical and practical problems that such a policy would need to overcome. The prospects of mHealth enabling a fair, genuinely cost-saving policy of patient responsibility are slim.
This paper addresses issues regarding perceptions of surveillance technologies in Europe. It analyses existing studies in order to explore how perceptions of surveillance affect and are affected by the negative effects of surveillance, and how perceptions and effectiveness of surveillance technologies relate to each other. The paper identifies 12 negative effects of surveillance, including, among others, privacy intrusion, the chilling effect and social exclusion, and classifies them into three groups. It further illustrates the different ways in which perceptions and effectiveness of surveillance interact with each other, distinguishing between perceived security and perceived effectiveness. Finally, the paper advances a methodology to take perception issues into account when designing new surveillance technologies. In doing so, it rejects manipulative measures that aim only at improving perceptions and suggests measures that address the background conditions affecting perceptions.
In today's advanced society, there is rising concern for data privacy and the diminution thereof on the internet. I argue from the position that for one to enjoy privacy, one must be able to effectively exercise autonomous action. I offer in this paper a survey of the many ways in which persons' autonomy is severely limited due to a variety of privacy invasions that come not only through the use of modern technological apparatuses, but also simply by existing in an advanced technological society. I conclude that, regarding the majority of persons whose privacy is violated, such violations are actually initiated and upheld by the users of modern technology themselves, and that ultimately, most disruptions of privacy that occur are self-levied.
This article analyses proportionality as a potential element of a theory of morally justified surveillance, and sets out a teleological account. It draws on conceptions in criminal justice ethics and just war theory, defines teleological proportionality in the context of surveillance, and sketches some of the central values likely to go into the consideration. It then explores some of the ways in which deontologists might want to modify the account and illustrates the difficulties of doing so. Having set out the account, however, it considers whether the proportionality condition is necessary to a theory of morally justified surveillance. The article concludes that we need and should apply only a necessity condition, but notes that proportionality considerations may retain some use in practice, as a form of coarse-grained filter applied before assessing necessity when deliberating the permissibility of potential forms of surveillance.
This article examines the domestic use of drones by law enforcement to gather information. Although the use of drones for surveillance will undoubtedly provide law enforcement agencies with new means of gathering intelligence, these unmanned aircraft bring with them a host of legal and epistemic complications. Part I considers the Fourth Amendment and the different legal standards of proof that might apply to law enforcement drone use. Part II explores philosopher Wittgenstein’s notion of actional certainty as a means to interpret citizens' expectations of privacy with regard to their patterns of movement over time. Part III discusses how the theory of actional certainty can apply to the epistemic challenge of determining what is a “reasonable” expectation of privacy under the law. This Part also investigates the Mosaic Theory as a possible reading of the Fourth Amendment.
It is often claimed that surveillance should be proportionate, but it is rarely made clear exactly what proportionate surveillance would look like beyond an intuitive sense of an act being excessive. I argue that surveillance should indeed be proportionate and draw on Thomas Hurka’s work on proportionality in war to inform the debate on surveillance. After distinguishing between the proportionality of surveillance per se and surveillance as a particular act, I deal with objections to using proportionality as a legitimate ethical measure. From there I argue that only certain benefits and harms should be counted in any determination of proportionality. Finally, I look at how context can affect the proportionality of a particular method of surveillance. In conclusion, I hold that proportionality is not only a morally relevant criterion by which to assess surveillance, but that it is a necessary criterion. Furthermore, while granting that it is difficult to assess, that difficulty should not prevent our trying to do so.
In this paper I critique the ethical implications of automating CCTV surveillance. I consider three modes of CCTV with respect to automation: manual, fully automated, and partially automated. In each of these I examine concerns posed by processing capacity, prejudice towards and profiling of surveilled subjects, and false positives and false negatives. While it might seem as if fully automated surveillance is an improvement over the manual alternative in these areas, I demonstrate that this is not necessarily the case. In preference to the extremes, I argue in favour of partial automation, in which the system integrates a human CCTV operator with some level of automation. To assess the degree to which such a system should be automated I draw on the further issues of privacy and distance. Here I argue that the privacy of the surveilled subject can benefit from automation, while the distance between the surveilled subject and the CCTV operator introduced by automation can have both positive and negative effects. I conclude that, in at least the majority of cases, more automation is preferable to less within a partially automated system, where this does not impinge on efficacy.
How much surveillance is morally permissible in the pursuit of a socially desirable goal? The proportionality question has received renewed attention during the 2020 Coronavirus pandemic, because governments in many countries have responded to the pandemic by implementing, redirecting or expanding state surveillance, most controversially in the shape of collection and use of cell-phone location data to support a strategy of contact tracing, testing and containment. Behind the proportionality question lies a further question: in what way does a state of emergency affect the proportionality of morally permissible surveillance? On the qualitative difference view, a state of emergency has the effect of suspending or altering at least some of the constraints on morally permissible action that apply under ordinary circumstances. On the quantitative difference view, the only difference between states of emergency and ordinary circumstances is that the stakes are greater in a state of emergency. If the qualitative difference view is true, then there are situations, perhaps such as the current Coronavirus pandemic, during which the proportionality condition employs a much less demanding ratio between social goods achieved and the badness of the surveillance performed. The overall objective of this article is to argue against the qualitative and for the quantitative difference view. I proceed by first setting out in somewhat greater detail how we must understand the qualitative difference view (section two). I then present a series of problematic implications of adopting the qualitative difference view and argue that jointly these give us sufficient reason to reject it (section three). This entails that our account of morally permissible surveillance should be unexceptional, i.e. the quantitative difference view: there is no morally significant difference between proportionality in ordinary circumstances and proportionality in emergencies, simply a spectrum of smaller to greater potential goods and bads of surveillance. In order to flesh out the implications of the quantitative view, I briefly sketch an unexceptional theory of proportional surveillance in exceptional circumstances (section four). The last section (five) summarises and concludes.
There is increasing concern about “surveillance capitalism,” whereby for-profit companies generate value from data, while individuals are unable to resist (Zuboff 2019). Non-profits using data-enabled surveillance receive less attention. Higher education institutions (HEIs) have embraced data analytics, but the wide latitude that private, profit-oriented enterprises have to collect data is inappropriate. HEIs have a fiduciary relationship to students, not a narrowly transactional one (see Jones et al., forthcoming). They are responsible for facets of student life beyond education. In addition to classrooms, learning management systems, and libraries, HEIs manage dormitories, gyms, dining halls, health facilities, career advising, police departments, and student employment. HEIs collect and use student data in all of these domains, ostensibly to understand learner behaviors and contexts, improve learning outcomes, and increase institutional efficiency through “learning analytics” (LA). ID card swipes and Wi-Fi log-ins can track student location, class attendance, use of campus facilities, eating habits, and friend groups. Course management systems capture how students interact with readings, video lectures, and discussion boards. Application materials provide demographic information. These data are used to identify students needing support, predict enrollment demands, and target recruiting efforts. These are laudable aims. However, current LA practices may be inconsistent with HEIs’ fiduciary responsibilities. HEIs often justify LA as advancing student interests, but some projects advance primarily organizational welfare and institutional interests. Moreover, LA advances a narrow conception of student interests while discounting privacy and autonomy. Students are generally unaware of the information collected, do not provide meaningful consent, and express discomfort and resigned acceptance about HEI data practices, especially for non-academic data (see Jones et al., forthcoming). The breadth and depth of student information available, combined with their fiduciary responsibility, create a duty that HEIs exercise substantial restraint and rigorous evaluation in data collection and use.
This article addresses the question of whether an expectation of privacy is reasonable in the face of new technologies of surveillance, by developing a principle that best fits our intuitions. A "no sense enhancement" principle which would rule out searches using technologically sophisticated devices is rejected. The paper instead argues for the "mischance principle," which proscribes uses of technology that reveal what could not plausibly be discovered accidentally without the technology, subject to the proviso that searches serving a great public good that clearly outweighs minimal intrusions upon privacy are permissible. Justifications of the principle are discussed, including reasons why we should use the principle and not rely solely on a utilitarian balancing test. The principle is applied to uses of aerial photography and heat-detection devices.
New technologies of surveillance such as Global Positioning Systems (GPS) are increasingly used as convenient substitutes for conventional means of observation. Recent court decisions hold that the government may, without a warrant, use a GPS to track a vehicle’s movements in public places without violating the Fourth Amendment, as the vehicle is in plain view and no reasonable expectation of privacy is violated. This emerging consensus of opinions fails to distinguish the unreasonable expectation that we not be seen in public from the reasonable expectation that we not be followed. Drawing on a critical discussion of the plain view doctrine, analysis of privacy interests in public places, and distinguishing privacy from property interests, the article contends that government use of GPS to track our movements should require a warrant.
This paper focuses on Precision Public Health (PPH), described in the scientific literature as an effort to broaden the scope of precision medicine by extrapolating it towards public health. By means of the “All of Us” (AoU) research program, launched by the National Institutes of Health in the U.S., PPH is being developed based on health data shared through a broad range of digital tools. PPH is an emerging idea to harness the data collected for precision medicine to tailor preventive interventions for at-risk groups. The character of these data concerns genetic identity, lifestyle and overall health, and therefore affects the ‘intimacy’ of personhood. Through the concept of biological citizenship, we elucidate how AoU and its recruitment tactics, by resonating ‘diversity’, at the same time appeal to and constitute identity, defining individuals as ‘data sharing subjects’. Although PPH is called for, the type of bio-citizenship enacted here has a particular definition: while participant recruitment foregrounds ‘citizenship’ in terms of empowerment, it is the ‘bio’ prefix that has become the main focus in terms of research, i.e. biosubjectivities vs biocapital. This raises the question whether the societal challenges that often underlie public health issues can be sufficiently dealt with based on the way ‘diversity’ is accounted for in the program. We suggest that the AoU still risks harming underrepresented groups based on the preconditions and the design of the program.
The Centers for Disease Control and Prevention’s Active Bacterial Core Surveillance (CDC ABCs) Program is a collaborative effort between the CDC, state health departments, laboratories, and universities to track invasive bacterial pathogens of particular importance to public health [1]. The year-end surveillance reports produced by this program help to shape public policy and coordinate responses to emerging infectious diseases over time. The ABCs case report form (CRF) data represents an excellent opportunity for data reuse beyond the original surveillance purposes.
Purpose: The purpose of this paper is to defend the notion of “trust in technology” against the philosophical view that this concept is misguided and unsuitable for ethical evaluation. In contrast, it is shown that “trustworthy technology” addresses a critical societal need in the digital age, as it is inclusive of IT-security risks not only from a technical but also from a public layperson perspective. Design/methodology/approach: From an interdisciplinary perspective between philosophy and IT-security, the authors discuss a potential instantiation of a “trustworthy information and communication technology”: a solution for privacy-respecting video surveillance. Here, strong data protection measures address grave concerns such as the threat of bulk biometric tracking of citizens. In a logical argument, however, the authors show that this technical notion of “trust” needs to be complemented by interlocking trust relations to justify public trust. Findings: Based on this argument, the authors demonstrate that the philosophical position considering “trust in technology” to denote either “reliability” or “interpersonal trust” is too limited, as it fails to address critical aspects of IT-security. In a broader, socio-technical sense, however, it is shown that several distinct accounts of trust – technical, interpersonal and institutional – should meaningfully interlock to address concerns with ICTs. Originality/value: This conceptual study demonstrates the potential of “trust in technology” for a more comprehensive evaluation of ICTs within the context of operation. Furthermore, it adds to the discussion of trust in IT-security by highlighting the layperson’s challenge of judging a technology’s trustworthiness. Vice versa, it contributes to the Ethics of Technology by highlighting crucial IT-security needs.
This paper questions the use of new technologies as tools of modern surveillance in order to: (a) advance the research done by Michel Foucault on panoptic techniques of surveillance and dominance; and (b) give new insights on the way we use these new surveillance technologies in violation of democratic principles and legal norms. Furthermore, it questions Foucault’s statements on the expansion of Bentham’s Panopticon scheme as a universal model of modern-day democratic institutions. Therefore the purpose of this paper is to (c) shed new light on the various ways the deployment of new technologies reinforces the Panopticon model, and (d) conduct an analysis of the effects produced by the emerging modes of surveillance that empower various new mechanisms of domination and control of individuals. This research paper seeks to (e) examine to what extent technology influences the course of our social, political and behavioral changes; and to propose devices for (f) evaluation and transformation of democratic institutions and practices that rely on the use of modern communication tools and technologies. Our cities have become a new kind of technologically driven Panopticon, and this model has achieved perfection as an increasingly fragmented, disseminated and ubiquitous device of power and dominance.
This chapter addresses the morality of two types of national security electronic surveillance (SIGINT) programs: the analysis of communication “metadata” and dragnet searches for keywords in electronic communication. The chapter develops a standard for assessing coercive government action based on respect for the autonomy of inhabitants of liberal states and argues that both types of SIGINT can potentially meet this standard. That said, the collection of metadata creates opportunities for abuse of power, and so judgments about the trustworthiness and competence of the government engaging in the collection must be made in order to decide whether metadata collection is justified in a particular state. Further, the moral standard proposed has a reflexive element justifying adversary states’ intelligence collection against one another. Therefore, high-tech forms of SIGINT can only be justified at the cost of justifying cruder versions of signals intelligence collection practiced by a state’s less-advanced adversaries.
Surveillance studies scholars and privacy scholars have each developed sophisticated, important critiques of the existing data-driven order. But too few scholars in either tradition have put forward alternative substantive conceptions of a good digital society. This, I argue, is a crucial omission. Unless we construct new “sociotechnical imaginaries,” new understandings of the goals and aspirations digital technologies should aim to achieve, the most surveillance studies and privacy scholars can hope to accomplish is a less unjust version of the technology industry’s own vision for the future.
Privacy and surveillance scholars increasingly worry that data collectors can use the information they gather about our behaviors, preferences, interests, incomes, and so on to manipulate us. Yet what it means, exactly, to manipulate someone, and how we might systematically distinguish cases of manipulation from other forms of influence—such as persuasion and coercion—has not been thoroughly enough explored in light of the unprecedented capacities that information technologies and digital media enable. In this paper, we develop a definition of manipulation that addresses these enhanced capacities, investigate how information technologies facilitate manipulative practices, and describe the harms—to individuals and to social institutions—that flow from such practices. We use the term “online manipulation” to highlight the particular class of manipulative practices enabled by a broad range of information technologies. We argue that at its core, manipulation is hidden influence—the covert subversion of another person’s decision-making power. We argue that information technology, for a number of reasons, makes engaging in manipulative practices significantly easier, and it makes the effects of such practices potentially more deeply debilitating. And we argue that by subverting another person’s decision-making power, manipulation undermines his or her autonomy. Given that respect for individual autonomy is a bedrock principle of liberal democracy, the threat of online manipulation is a cause for grave concern.
Traditional arguments for privacy in public suggest that intentionally public activities, such as political speech, do not deserve privacy protection. In this article, I develop a new argument for the view that surveillance of intentionally public activities should be limited to protect the specific good that this context provides, namely democratic legitimacy. Combining insights from Helen Nissenbaum’s contextualism and Jürgen Habermas’s theory of the public sphere, I argue that strategic surveillance of the public sphere can undermine the capacity of citizens to freely deliberate in public and therefore conflicts with democratic self-determination.
In this article, we explore some of the roles of cameras in policing in the United States. We outline the trajectory of key new media technologies, arguing that cameras and social media together generate the ambient surveillance through which graphic violence is now routinely captured and circulated. Drawing on Michel Foucault, we suggest that there are important intersections between this video footage and police subjectivity, and propose to look at two: recruit training at the Washington state Basic Law Enforcement Academy and the Seattle Police Department’s body-worn camera project. We analyze these cases in relation to the major arguments for and against initiatives to increase police use of cameras, outlining what we see as techno-optimistic and techno-pessimistic positions. Drawing on the pragmatism of John Dewey, we argue for a third position that calls for field-based inquiry into the specific co-production of socio-techno subjectivities.
Questions of privacy have become particularly salient in recent years due, in part, to information-gathering initiatives precipitated by the 2001 World Trade Center attacks, increasing power of surveillance and computing technologies, and massive data collection about individuals for commercial purposes. While privacy is not new to the philosophical and legal literature, there is much to say about the nature and value of privacy. My focus here is on the nature of informational privacy. I argue that the predominant accounts of privacy are unsatisfactory and offer an alternative: for a person to have informational privacy is for there to be limits on the particularized judgments that others are able to reasonably make about that person.
Current technology and surveillance practices make behaviors traceable to persons in unprecedented ways. This causes a loss of anonymity and of many privacy measures relied on in the past. These de facto privacy losses are seen by many as problematic for individual psychology, intimate relations and democratic practices such as free speech and free assembly. I share most of these concerns but propose that an even more fundamental problem might be that our very ability to act as autonomous and purposive agents relies on some degree of privacy, perhaps particularly as we act in public and semi-public spaces. I suggest that basic issues concerning action choices have been left largely unexplored, due to a series of problematic theoretical assumptions at the heart of privacy debates. One such assumption has to do with the influential conceptualization of privacy as pertaining to personal intimate facts belonging to a private sphere as opposed to a public sphere of public facts. As Helen Nissenbaum has pointed out, the notion of privacy in public sounds almost like an oxymoron given this traditional private-public dichotomy. I discuss her important attempt to defend privacy in public through her concept of ‘contextual integrity.’ Context is crucial, but Nissenbaum’s descriptive notion of existing norms seems to fall short of a solution. I here agree with Joel Reidenberg’s recent worries regarding any approach that relies on ‘reasonable expectations’. The problem is that in many current contexts we have no such expectations. Our contexts have already lost their integrity, so to speak. By way of a functional and more biologically inspired account, I analyze the relational and contextual dynamics of both privacy needs and harms. Through an understanding of action choice as situated and options and capabilities as relational, a more consequence-oriented notion of privacy begins to appear. I suggest that privacy needs, harms and protections are relational. Privacy might have less to do with seclusion and absolute transactional control than hitherto thought. It might instead hinge on capacities to limit the social consequences of our actions through knowing and shaping our perceptible agency and social contexts of action. To act with intent we generally need the ability to conceal during exposure. If this analysis is correct then relational privacy is an important condition for autonomous, purposive and responsible agency—particularly in public space. Overall, this chapter offers a first stab at a reconceptualization of our privacy needs as relational to contexts of action. In terms of ‘rights to privacy’ this means that we should expand our view from the regulation and protection of the information of individuals to questions of the kind of contexts we are creating. I am here particularly interested in what I call ‘unbounded contexts’, i.e. cases of context collapses, hidden audiences and even unknowable future agents.
It is especially hard, at present, to read the newspapers without emitting a howl of anguish and outrage. Philosophy can heal some wounds but, in this case, political action may prove a better remedy than philosophy. It can therefore feel odd trying to think philosophically about surveillance at a time like this, rather than joining with like-minded people to protest the erosion of our civil liberties, the duplicity of our governments, and the failings in our political institutions – including our political parties – revealed by the succession of leaks dripping away this summer. Still, philosophy can help us to think about what we should do, not merely what we should believe. Thus, in what follows I draw on my previous work on privacy, democracy and security, in order to highlight aspects of recent events which – or so I hope – may prove useful both for political thought and action.
The article introduces the concept of "surveillance-capitalist biopolitics" to problematize the recent expansion of "data extractivism" in health care and health research. As we show, this trend has accelerated during the ongoing Covid pandemic and points to a normalization and institutionalization of self-tracking practices which, drawing on the "quantified self", point to the emergence of a "quantified collective". Referring to Foucault and Zuboff, and by analyzing key examples of the leading "Big Tech" companies (e.g., Alphabet and Apple), we argue that contemporary forms of digital biopolitics are privatized, opaque, flexible, and not limited to the state. Instead, especially through the integration of wearable technologies, the biopolitical regulation of bodies is increasingly mediated by private tech companies. These companies rely on a questionable narrative of participation, responsibility, and care despite owning, and ultimately controlling, access to intimate health data and the proprietary algorithms mediating this data. The article shows that the proliferation of "surveillance-capitalist biopolitics" ultimately strengthens not only market power but also the epistemic and infrastructural power of the data-owning and gadget-producing firms. Finally, against an exclusively repressive and negative reading of biopolitics, and to effectively counter the forms of power emerging from surveillance-capitalist biopolitics, we propose four dimensions that are central to its democratization, namely privacy/individual sovereignty, democratic deliberation, pluralism, and epistemic equality. DOI: 10.1007/s41358-021-00309-9.
The first few years of the 21st century were characterised by a progressive loss of privacy. Two phenomena converged to give rise to the data economy: the realisation that data trails from users interacting with technology could be used to develop personalised advertising, and a concern for security that led authorities to use such personal data for the purposes of intelligence and policing. In contrast to the early days of the data economy and internet surveillance, the last few years have witnessed a rising concern for privacy. As bad data practices have come to light, citizens are starting to understand the real cost of using online digital technologies. Two events stamped 2018 as a landmark year for privacy: the Cambridge Analytica scandal, and the implementation of the European Union’s General Data Protection Regulation (GDPR). The former showed the extent to which personal data has been shared without data subjects’ knowledge and consent and many times for unacceptable purposes, such as swaying elections. The latter inaugurated the beginning of robust data protection regulation in the digital age. Getting privacy right is one of the biggest challenges of this new decade of the 21st century. The past year has shown that there is still much work to be done on privacy to tame the darkest aspects of the data economy. As data scandals continue to emerge, questions abound as to how to interpret and enforce regulation, how to design new and better laws, how to complement regulation with better ethics, and how to find technical solutions to data problems. The aim of the research project Data, Privacy, and the Individual is to contribute to a better understanding of the ethics of privacy and of differential privacy. The outcomes of the project are seven research papers on privacy, a survey, and this final report, which summarises each research paper, and goes on to offer a set of reflections and recommendations to implement best practices regarding privacy.
In discussions of state surveillance, the values of privacy and security are often set against one another, and people often ask whether privacy is more important than national security. I will argue that in one sense privacy is more important than national security. Just what ‘more important’ means is its own question, though, so I will be more precise. I will argue that national security rationales cannot by themselves justify some kinds of encroachments on individual privacy (including some kinds that the United States has conducted). Specifically, I turn my attention to a recent, well publicized, and recently amended statute (section 215 of the USA Patriot Act), a surveillance program based on that statute (the National Security Agency’s bulk metadata collection program), and a recent change to that statute that addresses some of the public controversy surrounding the surveillance program (the USA Freedom Act). That process (a statute enabling surveillance, a program abiding by that statute, a public controversy, and a change in the law) looks like a paradigm case of law working as it should; but I am not so sure. While the program was plausibly legal, I will argue that it was morally and legally unjustifiable. Specifically, I will argue that the interpretations of section 215 that supported the program violate what Jeremy Waldron calls “legal archetypes,” and that changes to the law illustrate one of the central features of legal archetypes and violation of legal archetypes. The paper proceeds as follows: I begin in Part 1 by setting out what I call the “basic argument” in favor of surveillance programs. This is strictly a moral argument about the conditions under which surveillance in the service of national security can be justified. In Part 2, I turn to section 215 and the bulk metadata surveillance program based on that section. I will argue that the program was plausibly legal, though based on an aggressive, envelope-pushing interpretation of the statute. I conclude Part 2 by describing the USA Freedom Act, which amends section 215 in important ways. In Part 3, I change tack. Rather than offering an argument for the conditions under which surveillance is justified (as in Part 1), I use the discussion of the legal interpretations underlying the metadata program to describe a key ambiguity in the basic argument, and to explain a distinct concern with the program: specifically, that it undermines a legal archetype. Moreover, while the USA Freedom Act does not violate legal archetypes, and hence meets a condition for justifiability, it helps illustrate why the bulk metadata program did violate archetypes.
Disputes at the intersection of national security, surveillance, civil liberties, and transparency are nothing new, but they have become a particularly prominent part of public discourse in the years since the attacks on the World Trade Center in September 2001. This is in part due to the dramatic nature of those attacks, in part based on significant legal developments after the attacks (classifying persons as “enemy combatants” outside the scope of traditional Geneva protections, legal memos by White House counsel providing rationale for torture, the USA Patriot Act), and in part because of the rapid development of communications and computing technologies that enable both greater connectivity among people and the greater ability to collect information about those connections. One important way in which these questions intersect is in the controversy surrounding bulk collection of telephone metadata by the U.S. National Security Agency. The bulk metadata program (the “metadata program” or “program”) involved court orders under section 215 of the USA Patriot Act requiring telecommunications companies to provide records about all calls the companies handled and the creation of a database that the NSA could search. The program was revealed to the general public in June 2013 as part of the large document leak by Edward Snowden, a former contractor for the NSA. A fair amount has been written about section 215 and the bulk metadata program. Much of the commentary has focused on three discrete issues. First is whether the program is legal; that is, does the program comport with the language of the statute and is it consistent with Fourth Amendment protections against unreasonable searches and seizures? Second is whether the program infringes privacy rights; that is, does bulk metadata collection diminish individual privacy in a way that rises to the level that it infringes persons’ rights to privacy? Third is whether the secrecy of the program is inconsistent with democratic accountability. After all, people in the general public only became aware of the metadata program via the Snowden leaks; absent those leaks, there would likely not have been the sort of political backlash and investigation necessary to provide some kind of accountability. In this chapter I argue that we need to look at these not as discrete questions, but as intersecting ones. The metadata program is not simply a legal problem (though it is one); it is not simply a privacy problem (though it is one); and it is not simply a secrecy problem (though it is one). Instead, the importance of the metadata program is the way in which these problems intersect and reinforce one another. Specifically, I will argue that the intersection of the questions undermines the value of rights, and that this is a deeper and more far-reaching moral problem than each of the component questions.
In a recent article in Res Publica, Jesper Ryberg argues that CCTV can be compared to a little old lady gazing out onto the street below. This article takes issue with the claim that government surveillance can be justified in this manner. Governments have powers and responsibilities that little old ladies lack. Even if CCTV is effective at preventing crime, there may be less intrusive ways of doing so. People have a variety of legitimate interests in privacy, and protection for these is important to their status as free and equal citizens. Consequently, though necessary, effectiveness is insufficient to justify CCTV in a democracy.
English-language Wikipedia is constantly being plagued by vandalistic contributions on a massive scale. In order to fight them its volunteer contributors deploy an array of software tools and autonomous bots. After an analysis of their functioning and the ‘coactivity’ in use between humans and bots, this research ‘discloses’ the moral issues that emerge from the combined patrolling by humans and bots. Administrators provide the stronger tools only to trusted users, thereby creating a new hierarchical layer. Further, surveillance exhibits several troubling features: questionable profiling practices, the use of the controversial measure of reputation, ‘oversurveillance’ where quantity trumps quality, and a prospective loss of the required moral skills whenever bots take over from humans. The most troubling aspect, though, is that Wikipedia has become a Janus-faced institution. One face is the basic platform of MediaWiki software, transparent to all. Its other face is the anti-vandalism system, which, in contrast, is opaque to the average user, in particular as a result of the algorithms and neural networks in use. Finally it is argued that this secrecy impedes a much-needed discussion from unfolding; a discussion that should focus on a ‘rebalancing’ of the anti-vandalism system and the development of more ethical information practices towards contributors.
The ability of videos to serve as evidence of racial injustice is complex and contested. This essay argues that scrutiny of the Black body has come to play a key role in how videos of police violence are mined for evidence, following a long history of racialized surveillance and attributions of threat and superhuman powers to Black bodies. Using videos to combat injustice requires incorporating humanizing narratives and cultivating resistant modes of looking.
This study contributes to the micro-credit literature by addressing the lack of philosophical dialogue concerning the issue of trust between micro-credit NGOs and rural poor women. The study demonstrates that one of the root causes of NGOs’ contested roles in Bangladesh is the norm that they use (i.e., trust) to rationalize their micro-credit activities. I argue that Bangladeshi micro-credit NGOs’ trust in poor village women is not genuine because they resort to group responsibility sustained through aggressive surveillance. I support this claim by drawing on a trust-based theoretical framework that uses various philosophical insights. Drawing on the same conceptual framework, I also contend, somewhat softening the previous claim, that if micro-credit trust is trust at all, it is at most strategic, not generalized. Being strategic, it has many undermining effects on local social solidarity norms, rendering Bangladeshi micro-credit NGOs and strategic trust an odd couple with no moral compass. To bring forth the moral impetus in micro-credit activities, I lay out some recommendations intended for organizations, managers, and policymakers, consistent with normative corporate social responsibility initiatives. Further studies can build on this paper, suggesting its importance for future research.
There is a growing sense that many liberal states are in the midst of a shift in legal and political norms—a shift that is happening slowly and for a variety of reasons relating to security. The internet and tech booms—paving the way for new forms of electronic surveillance—predated the 9/11 attacks by several years, while the police’s vast use of secret informants and deceptive operations began well before that. On the other hand, the recent uptick in reactionary movements—movements in which the rule of law seems expendable—began many years after 9/11 and continues to this day. One way to describe this book is as an examination of the moral limits on modern police practices that flow from the basic legal and political tenets of the liberal tradition. The central argument is that policing in liberal states is constrained by a liberal conception of persons coupled with particular rule of law principles. Part I consists of three chapters that constitute the book’s theoretical foundation, including an overview of the police’s law enforcement role in the liberal polity and a methodology for evaluating that role. Part II consists of three chapters that address applications of the theory, including the police’s use of informants, deceptive operations, and surveillance. The upshot is that policing in liberal societies has become illiberal in light of its response to both internal and external threats to security. The book provides an account of what it might mean to retrieve policing that is consistent with the basic tenets of liberalism and the limits imposed by those tenets. [This is an uncorrected draft of the book's preface and introduction, forthcoming from Oxford University Press.]
Big data and predictive analytics applied to economic life is forcing individuals to choose between authenticity and freedom. The fact of the choice cuts philosophy away from the traditional understanding of the two values as entwined. This essay describes why the split is happening, how new conceptions of authenticity and freedom are rising, and the human experience of the dilemma between them. This essay also participates in recent philosophical intersections with Shoshana Zuboff’s work on surveillance capitalism, but the investigation connects at the individual, ethical level as opposed to the more prevalent social and political one.
Frantz Fanon offers a lucid account of his entrance into the white world where the weightiness of the ‘white gaze’ nearly crushed him. In chapter five of Black Skins, White Masks, he develops his historico-racial and epidermal racial schemata as correctives to Merleau-Ponty’s overly inclusive corporeal schema. Experientially aware of the reality of socially constructed (racialized) subjectivities, Fanon uses his schemata to explain the creation, maintenance, and eventual rigidification of white-scripted ‘blackness’. Through a re-telling of his own experiences of racism, Fanon is able to show how a black person in a racialized context eventually internalizes the ‘white gaze’. In this essay I bring Fanon’s insights into conversation with Foucault’s discussion of panoptic surveillance. Although the internalization of the white narrative creates a situation in which external constraints are no longer needed, Fanon highlights both the historical contingency of ‘blackness’ and the ways in which the oppressed can re-narrate their subjectivities. Lastly, I discuss Fanon’s historically attuned ‘new humanism’, once again engaging Fanon and Foucault as dialogue partners.
In the midst of a pandemic, what does it mean to see the Other as Other and not as a carrier of the virus? I argue that in seeking a Levinasian response to the pandemic, we must be mindful of the implications of the mechanisms of surveillance and control that, presented as ways to protect the Other, operate by controlling the Other and rendering our relation to the Other increasingly impersonal. Subjected to these mechanisms, the Other becomes a dangerous entity that must be controlled, and the state that deploys them comes increasingly to mediate the relation between self and Other. The more we rely on such mechanisms for protection, the easier it becomes to regard the Other not as one who summons me to an infinite responsibility but as a vector of disease. Despite all the differences between Levinas’s and Foucault’s approaches, reading them in conversation shows that the control and surveillance of the population functions within a discourse that medicalizes and objectifies the Other in favor of the centralizing power that uses those technologies. In defiance of Levinas’s warning against imposing a narrative on the Other’s suffering, this discourse coopts that suffering as a justification for biopower.
Abstract: As there are currently no obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control. We explore here ways (...) to create the safest and simplest form of AI which may work as an AI Nanny, that is, a global surveillance state powered by a Narrow AI, or AI Police. A similar but more limited system has already been implemented in China for the prevention of ordinary crime. AI police will be able to predict the actions of, and stop, potential terrorists and bad actors in advance. Implementation of such AI police will probably consist of two steps: first, a strategic decisive advantage via Narrow AI created by the intelligence services of a nuclear superpower, and then ubiquitous control over potentially dangerous agents which could create an unauthorized artificial general intelligence that could evolve into superintelligence. (shrink)
The first thing we must keep in mind is that when saying that China says this or China does that, we are not speaking of the Chinese people, but of the Sociopaths who control the CCP -- the Chinese Communist Party, i.e., the Seven Senile Sociopathic Serial Killers (SSSSK) of the Standing Committee of the CCP, or the 25 members of the Politburo, etc. -/- The CCP’s plans for WW3 and total domination are laid out quite clearly in Chinese govt publications (...) and speeches, and this is Xi Jinping’s “China Dream”. It is a dream only for the tiny minority (perhaps a few dozen to a few hundred) who rule China and a nightmare for everyone else (including 1.4 billion Chinese). The 10 billion dollars yearly enables them or their puppets to own or control newspapers, magazines, TV and radio channels and place fake news in most major media everywhere every day. In addition, they have an army (maybe millions of people) who troll all the media, placing more propaganda and drowning out legitimate commentary (the 50 cent army). -/- In addition to stripping the 3rd world of resources, a major thrust of the multi-trillion dollar Belt and Road Initiative is building military bases worldwide. They are forcing the free world into a massive high-tech arms race that makes the cold war with the Soviet Union look like a picnic. -/- Though the SSSSK, and the rest of the world’s military, are spending huge sums on advanced hardware, it is highly likely that WW3 (or the smaller engagements leading up to it) will be software dominated. It is not out of the question that the SSSSK, with probably more hackers (coders) working for them than all the rest of the world combined, will win future wars with minimal physical conflict, just by paralyzing their enemies via the net. No satellites, no phones, no communications, no financial transactions, no power grid, no internet, no advanced weapons, no vehicles, trains, ships or planes. -/- There are only two main paths to removing the CCP, freeing 1.4 billion Chinese prisoners, and ending the lunatic march to WW3. The peaceful one is to launch an all-out trade war to devastate the Chinese economy until the military gets fed up and boots out the CCP. -/- An alternative to shutting down China’s economy is a limited war, such as a targeted strike by say 50 thermobaric drones on the 20th Congress of the CCP, when all the top members are in one place, but that won’t take place until 2022, so one could hit the annual plenary meeting. The Chinese would be informed, as the attack happened, that they must lay down their arms and prepare to hold a democratic election or be nuked into the stone age. The other alternative is an all-out nuclear attack. Military confrontation is unavoidable given the CCP’s present course. It will likely happen over the islands in the South China Sea or Taiwan within a few decades, but as they establish military bases worldwide it could happen anywhere (see Crouching Tiger etc.). Future conflicts will have hardkill and softkill aspects, with the stated objectives of the CCP to emphasize cyberwar by hacking and paralyzing control systems of all military and industrial communications, equipment, power plants, satellites, internet, banks, and any device or vehicle connected to the net. The SSSSK are slowly fielding a worldwide array of manned and autonomous surface and underwater subs or drones capable of launching conventional or nuclear weapons that may lie dormant awaiting a signal from China or even looking for the signature of US ships or planes.
While destroying our satellites, thus eliminating communication between the USA and our forces worldwide, they will use theirs, in conjunction with drones, to target and destroy our currently superior naval forces. Of course, all of this is increasingly done automatically by AI. -/- By far the biggest ally of the CCP is the Democratic party of the USA. -/- The choice is to stop the CCP now or watch as they extend the Chinese prison over the whole world. -/- Of course, universal surveillance and the digitizing of our lives is inevitable everywhere. Anyone who does not think so is profoundly out of touch. -/- Of course, it is the optimists who expect the Chinese sociopaths to rule the world, while the pessimists (who view themselves as realists) expect AI sociopathy (or AS as I call it – i.e., Artificial Stupidity or Artificial Sociopathy) to take over, perhaps by 2030. -/- Those interested in further details on the lunatic path of modern society may consult my other works, such as Suicide by Democracy – an Obituary for America and the World, 4th Edition (2019), and Suicidal Utopian Delusions in the 21st Century: Philosophy, Human Nature and the Collapse of Civilization, 5th ed. (2019). (shrink)