When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making. Three of these involve failures to treat individual agents with the respect they deserve. The fourth involves distancing oneself from a morally suspect action by attributing one’s decision to take that action to an algorithm, thereby laundering one’s agency.
New media (highly interactive digital technology for creating, sharing, and consuming information) affords users a great deal of control over their informational diets. As a result, many users of new media unwittingly encapsulate themselves in epistemic bubbles (epistemic structures, such as highly personalized news feeds, that leave relevant sources of information out (Nguyen forthcoming)). Epistemically paternalistic alterations to new media technologies could be made to pop at least some epistemic bubbles. We examine one such alteration that Facebook has made in an effort to fight fake news and conclude that it is morally permissible. We further argue that many epistemically paternalistic policies can (and should) be a perennial part of the internet information environment.
In recent years, educational institutions have started using the tools of commercial data analytics in higher education. By gathering information about students as they navigate campus information systems, learning analytics “uses analytic techniques to help target instructional, curricular, and support resources” to examine student learning behaviors and change students’ learning environments. As a result, the information educators and educational institutions have at their disposal is no longer demarcated by course content and assessments, and old boundaries between information used for assessment and information about how students live and work are blurring. Our goal in this paper is to provide a systematic discussion of the ways in which privacy and learning analytics conflict and to provide a framework for understanding those conflicts.

We argue that there are five crucial issues about student privacy that we must address in order to ensure that whatever the laudable goals and gains of learning analytics, they are commensurate with respecting students’ privacy and associated rights, including (but not limited to) autonomy interests. First, we argue that we must distinguish among different entities with respect to whom students have, or lack, privacy. Second, we argue that we need clear criteria for what information may justifiably be collected in the name of learning analytics. Third, we need to address whether purported consequences of learning analytics (e.g., better learning outcomes) are justified and what the distributions of those consequences are. Fourth, we argue that regardless of how robust the benefits of learning analytics turn out to be, students have important autonomy interests in how information about them is collected. Finally, we argue that it is an open question whether the goods that justify higher education are advanced by learning analytics, or whether collection of information actually runs counter to those goods.
Questions of privacy have become particularly salient in recent years due, in part, to information-gathering initiatives precipitated by the 2001 World Trade Center attacks, increasing power of surveillance and computing technologies, and massive data collection about individuals for commercial purposes. While privacy is not new to the philosophical and legal literature, there is much to say about the nature and value of privacy. My focus here is on the nature of informational privacy. I argue that the predominant accounts of privacy are unsatisfactory and offer an alternative: for a person to have informational privacy is for there to be limits on the particularized judgments that others are able to reasonably make about that person.
ABSTRACT: So far in this book, we have examined algorithmic decision systems from three autonomy-based perspectives: in terms of what we owe autonomous agents (chapters 3 and 4), in terms of the conditions required for people to act autonomously (chapters 5 and 6), and in terms of the responsibilities of agents (chapter 7).

In this chapter we turn to the ways in which autonomy underwrites democratic governance. Political authority, which is to say the ability of a government to exercise power, may be justifiable or not. Whether it is justified and how it can come to be justified is a question of political legitimacy. Political legitimacy is another way in which autonomy and responsibility are linked. This relationship is the basis of the current chapter, and it is important in understanding the moral salience of algorithmic systems. We will draw the connection as follows. We begin, in section 8.1, by describing two uses of technology: crime predicting technology used to drive policing practices and social media technology used to influence elections (including by Cambridge Analytica and by the Internet Research Agency). In section 8.2 we consider several views of legitimacy and argue for a hybrid version of normative legitimacy based on one recently offered by Fabienne Peter. In section 8.3 we will explain that the connection between political legitimacy and autonomy is that legitimacy is grounded in legitimating processes, which are in turn based on autonomy. Algorithmic systems—among them PredPol and the Cambridge Analytica-Facebook-Internet Research Agency amalgam—can hinder that legitimation process and conflict with democratic legitimacy, as we argue in section 8.4. We will conclude by returning to several cases that serve as through-lines to the book: Loomis, Wagner, and Houston Schools.

The link below is to an open-access copy of the chapter.
“Big Data” and data analytics affect all of us. Data collection, analysis, and use on a large scale is an important and growing part of commerce, governance, communication, law enforcement, security, finance, medicine, and research. And the theme of this symposium, “Individual and Informational Privacy in the Age of Big Data,” is expansive; we could have long and fruitful discussions about practices, laws, and concerns in any of these domains. But a big part of the audience for this symposium is students and faculty in higher education institutions (HEIs), and the subject of this paper is data analytics in our own backyards. Higher education learning analytics (LA) is something that most of us involved in this symposium are familiar with. Students have encountered LA in their courses and in their interactions with their law school or undergraduate institutions; instructors use systems that collect information about their students; and administrators use information to help understand and steer their institutions. More importantly, though, data analytics in higher education is something that those of us participating in the symposium can actually control. Students can put pressure on administrators, and faculty often participate in university governance. Moreover, the systems in place in HEIs are more easily comprehensible to many of us because we work with them on a day-to-day basis. Students use systems as part of their course work, in their residences, in their libraries, and elsewhere. Faculty deploy course management systems (CMS) such as Desire2Learn, Moodle, Blackboard, and Canvas to structure their courses, and administrators use information gleaned from analytics systems to make operational decisions. If we (the participants in the symposium) indeed care about Individual and Informational Privacy in the Age of Big Data, the topic of this paper is a pretty good place to hone our thinking and put into practice our ideas.
Civil liberty and privacy advocates have criticized the USA PATRIOT Act (Act) on numerous grounds since it was passed in the wake of the World Trade Center attacks in 2001. Two of the primary targets of those criticisms are the Act’s sneak-and-peek search provision, which allows law enforcement agents to conduct searches without informing the search’s subjects, and the business records provision, which allows agents to secretly subpoena a variety of information – most notoriously, library borrowing records. Without attending to all of the ways that critics claim the Act burdens privacy, I examine whether those two controversial parts of the Act, the section 213 sneak-and-peek search and the section 215 business records gag-rule provisions, burden privacy as critics charge. I begin by describing the two provisions. Next, I explain why those provisions don’t burden privacy on standard philosophical accounts. Moreover, I argue that they need not conflict with the justifications for people’s claims to privacy, nor do they undermine the value of privacy on the standard accounts. However, rather than simply concluding that the sections don’t burden privacy, I argue that those provisions are problematic on the grounds that they undermine the value of whatever rights to privacy people have. Specifically, I argue that it is important to distinguish rights themselves from the value that those rights have to the rights-holders, and that an essential element of privacy rights having value is that privacy right-holders be able to tell the extent to which they actually have privacy. This element, which is justified by the right-holders’ autonomy interests, is harmed by the two provisions.
This article develops a framework for analyzing and comparing privacy and privacy protections across (inter alia) time, place, and polity and for examining factors that affect privacy and privacy protection. This framework provides a method to describe precisely aspects of privacy and context and a flexible vocabulary and notation for such descriptions and comparisons. Moreover, it links philosophical and conceptual work on privacy to social science and policy work and accommodates different conceptions of the nature and value of privacy. The article begins with an outline of the framework. It then refines the view by describing a hypothetical application. Finally, it applies the framework to a real‐world privacy issue—campaign finance disclosure laws in the United States and France. The article concludes with an argument that the framework offers important advantages to privacy scholarship and for privacy policy makers.
Surveillance plays a crucial role in public health, and for obvious reasons conflicts with individual privacy. This paper argues that the predominant approach to the conflict is problematic, and then offers an alternative. It outlines a Basic Interests Approach to public health measures, and the Unreasonable Exercise Argument, which sets forth conditions under which individuals may justifiably exercise individual privacy claims that conflict with public health goals. The view articulated is compatible with a broad range of conceptions of the value of health.
Higher education institutions are mining and analyzing student data to effect educational, political, and managerial outcomes. Done under the banner of “learning analytics,” this work can—and often does—surface sensitive data and information about, inter alia, a student’s demographics, academic performance, offline and online movements, physical fitness, mental wellbeing, and social network. With these data, institutions and third parties are able to describe student life, predict future behaviors, and intervene to address academic or other barriers to student success (however defined). Learning analytics, consequently, raise serious issues concerning student privacy, autonomy, and the appropriate flow of student data. We argue that issues around privacy lead to valid questions about the degree to which students should trust their institution to use learning analytics data and other artifacts (algorithms, predictive scores) with their interests in mind. We argue that higher education institutions are paradigms of information fiduciaries. As such, colleges and universities have a special responsibility to their students. In this article, we use the information fiduciary concept to analyze cases when learning analytics violate an institution’s responsibility to its students.
ABSTRACT: One important criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure. In the EU, there is a debate about whether the General Data Protection Regulation (GDPR) contains a “right to explanation,” and if so what such a right entails. Our task in this chapter is to address this informational component of algorithmic systems. We argue that information access is integral for respecting autonomy, and transparency policies should be tailored to advance autonomy.

To make this argument we distinguish two facets of agency (i.e., capacity to act). The first is practical agency, or the ability to act effectively according to one’s values. The second is what we call cognitive agency, which is the ability to exercise what Pamela Hieronymi calls “evaluative control” (i.e., the ability to control our affective states, such as beliefs, desires, and attitudes). We argue that respecting autonomy requires providing persons sufficient information to exercise evaluative control and properly interpret the world and one’s place in it. We draw this distinction out by considering algorithmic systems used in background checks, and we apply the view to key cases involving risk assessment in criminal justice decisions and K-12 teacher evaluation.

The link below is to an open access version of the chapter.
There is increasing concern about “surveillance capitalism,” whereby for-profit companies generate value from data, while individuals are unable to resist (Zuboff 2019). Non-profits using data-enabled surveillance receive less attention. Higher education institutions (HEIs) have embraced data analytics, but the wide latitude that private, profit-oriented enterprises have to collect data is inappropriate for them. HEIs have a fiduciary relationship to students, not a narrowly transactional one (see Jones et al., forthcoming). They are responsible for facets of student life beyond education. In addition to classrooms, learning management systems, and libraries, HEIs manage dormitories, gyms, dining halls, health facilities, career advising, police departments, and student employment. HEIs collect and use student data in all of these domains, ostensibly to understand learner behaviors and contexts, improve learning outcomes, and increase institutional efficiency through “learning analytics” (LA). ID card swipes and Wi-Fi log-ins can track student location, class attendance, use of campus facilities, eating habits, and friend groups. Course management systems capture how students interact with readings, video lectures, and discussion boards. Application materials provide demographic information. These data are used to identify students needing support, predict enrollment demands, and target recruiting efforts. These are laudable aims. However, current LA practices may be inconsistent with HEIs’ fiduciary responsibilities. HEIs often justify LA as advancing student interests, but some projects advance primarily organizational welfare and institutional interests. Moreover, LA advances a narrow conception of student interests while discounting privacy and autonomy. Students are generally unaware of the information collected, do not provide meaningful consent, and express discomfort and resigned acceptance about HEI data practices, especially for non-academic data (see Jones et al., forthcoming). The breadth and depth of student information available, combined with their fiduciary responsibility, create a duty that HEIs exercise substantial restraint and rigorous evaluation in data collection and use.
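To make concrete how readily routine operational records support the inferences this abstract describes (attendance and friend groups from ID card swipes), the sketch below is a minimal, hypothetical illustration. The data fields, student IDs, locations, and the five-minute co-location threshold are all assumptions for illustration only; this is not any institution's actual analytics pipeline or method.

# Illustrative sketch only: hypothetical swipe-log data and thresholds,
# not an actual institutional schema or learning-analytics system.
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical swipe log: (student_id, location, timestamp in minutes since midnight)
swipes = [
    ("s1", "LectureHall-A", 540), ("s2", "LectureHall-A", 541),
    ("s1", "Dining-North", 720),  ("s2", "Dining-North", 722),
    ("s3", "Gym", 900),           ("s1", "Dining-North", 1160),
]

# Attendance inference: count swipes per student at teaching locations.
attendance = Counter(sid for sid, loc, _ in swipes if loc.startswith("LectureHall"))

# Co-location inference: students swiping the same location within 5 minutes
# are treated as "together"; repeated co-location suggests a social tie.
together = Counter()
by_location = defaultdict(list)
for sid, loc, t in swipes:
    by_location[loc].append((sid, t))
for loc, events in by_location.items():
    for (a, ta), (b, tb) in combinations(events, 2):
        if a != b and abs(ta - tb) <= 5:
            together[tuple(sorted((a, b)))] += 1

print("Inferred lecture attendance:", dict(attendance))
print("Inferred co-location counts:", dict(together))

Even this toy example surfaces attendance counts and a candidate social tie from a handful of records, which is the kind of inference that motivates the abstract's privacy and fiduciary concerns.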
Disputes at the intersection of national security, surveillance, civil liberties, and transparency are nothing new, but they have become a particularly prominent part of public discourse in the years since the attacks on the World Trade Center in September 2001. This is in part due to the dramatic nature of those attacks, in part based on significant legal developments after the attacks (classifying persons as “enemy combatants” outside the scope of traditional Geneva protections, legal memos by White House counsel providing rationale for torture, the USA Patriot Act), and in part because of the rapid development of communications and computing technologies that enable both greater connectivity among people and the greater ability to collect information about those connections.

One important way in which these questions intersect is in the controversy surrounding bulk collection of telephone metadata by the U.S. National Security Agency. The bulk metadata program (the “metadata program” or “program”) involved court orders under section 215 of the USA Patriot Act requiring telecommunications companies to provide records about all calls the companies handled and the creation of a database that the NSA could search. The program was revealed to the general public in June 2013 as part of the large document leak by Edward Snowden, a former contractor for the NSA.

A fair amount has been written about section 215 and the bulk metadata program. Much of the commentary has focused on three discrete issues. First is whether the program is legal; that is, does the program comport with the language of the statute and is it consistent with Fourth Amendment protections against unreasonable searches and seizures? Second is whether the program infringes privacy rights; that is, does bulk metadata collection diminish individual privacy in a way that rises to the level that it infringes persons’ rights to privacy? Third is whether the secrecy of the program is inconsistent with democratic accountability. After all, people in the general public only became aware of the metadata program via the Snowden leaks; absent those leaks, there would likely not have been the sort of political backlash and investigation necessary to provide some kind of accountability.

In this chapter I argue that we need to look at these not as discrete questions, but as intersecting ones. The metadata program is not simply a legal problem (though it is one); it is not simply a privacy problem (though it is one); and it is not simply a secrecy problem (though it is one). Instead, the importance of the metadata program is the way in which these problems intersect and reinforce one another. Specifically, I will argue that the intersection of the questions undermines the value of rights, and that this is a deeper and more far-reaching moral problem than each of the component questions.
Public and research libraries have long provided resources in electronic formats, and the tension between providing electronic resources and patron privacy is widely recognized. But assessing trade-offs between privacy and access to electronic resources remains difficult. One reason is a conceptual problem regarding intellectual freedom. Traditionally, the LIS literature has plausibly understood privacy as a facet of intellectual freedom. However, while certain types of electronic resource use may diminish patron privacy, thereby diminishing intellectual freedom, the opportunities created by such resources also appear liberty-enhancing. Adjudicating between privacy loss and enhanced opportunities on intellectual freedom grounds must therefore provide an account of intellectual freedom capable of addressing both privacy and opportunity. I will argue that intellectual freedom is a form of positive freedom, where a person’s freedom is a function of the quality of her agency. Using this view as the lodestar, I articulate several principles for assessing adoption of electronic resources and privacy protections.
Despite widespread agreement that privacy in the context of education is important, it can be difficult to pin down precisely why and to what extent it is important, and it is challenging to determine how privacy is related to other important values. But that task is crucial. Absent a clear sense of what privacy is, it will be difficult to understand the scope of privacy protections in codes of ethics. Moreover, privacy will inevitably conflict with other values, and understanding the values that underwrite privacy protections is crucial for addressing conflicts between privacy and institutional efficiency, advising efficacy, vendor benefits, and student autonomy.

My task in this paper is to seek a better understanding of the concept of privacy in institutional research, canvass a number of important moral values underlying privacy generally (including several that are explicit in the AIR Statement), and examine how those moral values should bear upon institutional research by considering several recent cases.
This is a study of the treatment of library patron privacy in licenses for electronic journals in academic libraries. We begin by distinguishing four facets of privacy and intellectual freedom based on the LIS and philosophical literature. Next, we perform a content analysis of 42 license agreements for electronic journals, focusing on terms for enforcing authorized use and collection and sharing of user data. We compare our findings to model licenses, to recommendations proposed in a recent treatise on licenses, and to our account of the four facets of intellectual freedom. We find important conflicts with each.
In discussions of state surveillance, the values of privacy and security are often set against one another, and people often ask whether privacy is more important than national security. I will argue that in one sense privacy is more important than national security. Just what “more important” means is its own question, though, so I will be more precise. I will argue that national security rationales cannot by themselves justify some kinds of encroachments on individual privacy (including some kinds that the United States has conducted). Specifically, I turn my attention to a recent, well-publicized, and recently amended statute (section 215 of the USA Patriot Act), a surveillance program based on that statute (the National Security Agency’s bulk metadata collection program), and a recent change to that statute that addresses some of the public controversy surrounding the surveillance program (the USA Freedom Act). That process (a statute enabling surveillance, a program abiding by that statute, a public controversy, and a change in the law) looks like a paradigm case of law working as it should; but I am not so sure. While the program was plausibly legal, I will argue that it was morally and legally unjustifiable. Specifically, I will argue that the interpretations of section 215 that supported the program violate what Jeremy Waldron calls “legal archetypes,” and that changes to the law illustrate one of the central features of legal archetypes and of the violation of legal archetypes. The paper proceeds as follows: I begin in Part 1 by setting out what I call the “basic argument” in favor of surveillance programs. This is strictly a moral argument about the conditions under which surveillance in the service of national security can be justified. In Part 2, I turn to section 215 and the bulk metadata surveillance program based on that section. I will argue that the program was plausibly legal, though based on an aggressive, envelope-pushing interpretation of the statute. I conclude Part 2 by describing the USA Freedom Act, which amends section 215 in important ways. In Part 3, I change tack. Rather than offering an argument for the conditions under which surveillance is justified (as in Part 1), I use the discussion of the legal interpretations underlying the metadata program to describe a key ambiguity in the basic argument, and to explain a distinct concern about the program: specifically, that it undermines a legal archetype. Moreover, while the USA Freedom Act does not violate legal archetypes, and hence meets a condition for justifiability, it helps illustrate why the bulk metadata program did violate archetypes.