  • Principles of Biomedical Ethics. Tom L. Beauchamp - 1994 - New York: Oxford University Press. Edited by James F. Childress.
    Over the course of its first seven editions, Principles of Biomedical Ethics has proved to be, globally, the most widely used, authored work in biomedical ethics. It is unique in being a book in bioethics used in numerous disciplines for purposes of instruction in bioethics. Its framework of moral principles is authoritative for many professional associations and biomedical institutions-for instruction in both clinical ethics and research ethics. It has been widely used in several disciplines for purposes of teaching in the (...)
  • Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually are (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1).
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Managing Algorithmic Accountability: Balancing Reputational Concerns, Engagement Strategies, and the Potential of Rational Discourse. Alexander Buhmann, Johannes Paßmann & Christian Fieseler - 2020 - Journal of Business Ethics 163 (2):265-280.
    While organizations today make extensive use of complex algorithms, the notion of algorithmic accountability remains an elusive ideal due to the opacity and fluidity of algorithms. In this article, we develop a framework for managing algorithmic accountability that highlights three interrelated dimensions: reputational concerns, engagement strategies, and discourse principles. The framework clarifies that accountability processes for algorithms are driven by reputational concerns about the epistemic setup, opacity, and outcomes of algorithms; that the way in which organizations practically engage with emergent (...)
  • Bias in algorithmic filtering and personalization. Engin Bozdag - 2013 - Ethics and Information Technology 15 (3):209-227.
    Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the (...)
  • In AI we trust? Perceptions about automated decision-making by artificial intelligence. Theo Araujo, Natali Helberger, Sanne Kruikemeier & Claes H. de Vreese - 2020 - AI and Society 35 (3):611-623.
    Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Trust diffusion: The effect of interpersonal trust on structure, function, and organizational transparency. Cynthia Clark Williams - 2005 - Business and Society 44 (3):357-368.
  • Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value of Privacy. Peter-Paul Verbeek & Olya Kudina - 2019 - Science, Technology, and Human Values 44 (2):291-314.
    Following the “control dilemma” of Collingridge, influencing technological developments is easy when their implications are not yet manifest, yet once we know these implications, they are difficult to change. This article revisits the Collingridge dilemma in the context of contemporary ethics of technology, when technologies affect both society and the value frameworks we use to evaluate them. Early in its development, we do not know how a technology will affect the value frameworks from which it will be evaluated, while later, (...)
  • Designing Robots for Care: Care Centered Value-Sensitive Design. Aimee van Wynsberghe - 2013 - Science and Engineering Ethics 19 (2):407-433.
    The prospective robots in healthcare intended to be included within the conclave of the nurse-patient relationship—what I refer to as care robots—require rigorous ethical reflection to ensure their design and introduction do not impede the promotion of values and the dignity of patients at such a vulnerable and sensitive time in their lives. The ethical evaluation of care robots requires insight into the values at stake in the healthcare tradition. What’s more, given the stage of their development and lack of (...)
  • Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Aurelia Tamò-Larrieux, Christoph Lutz, Eduard Fosch Villaronga & Heike Felzmann - 2019 - Big Data and Society 6 (1).
    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect (...)
  • Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Nick Seaver - 2017 - Big Data and Society 4 (2).
    This article responds to recent debates in critical algorithm studies about the significance of the term “algorithm.” Where some have suggested that critical scholars should align their use of the term with its common definition in professional computer science, I argue that we should instead approach algorithms as “multiples”—unstable objects that are enacted through the varied practices that people use to engage with them, including the practices of “outsider” researchers. This approach builds on the work of Laura Devendorf, Elizabeth Goodman, (...)
  • Folk theories of algorithmic recommendations on Spotify: Enacting data assemblages in the global South. Mónica Sancho, Ricardo Solís, Andrés Segura-Castillo & Ignacio Siles - 2020 - Big Data and Society 7 (1).
    This paper examines folk theories of algorithmic recommendations on Spotify in order to make visible the cultural specificities of data assemblages in the global South. The study was conducted in Costa Rica and draws on triangulated data from 30 interviews, 4 focus groups with 22 users, and the study of “rich pictures” made by individuals to graphically represent their understanding of algorithmic recommendations. We found two main folk theories: one that personifies Spotify and another one that envisions it as a (...)
  • Accountability in a computerized society. Helen Nissenbaum - 1996 - Science and Engineering Ethics 2 (1):25-42.
    This essay warns of eroding accountability in computerized societies. It argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, four barriers are identified: 1) the problem of many hands, 2) the problem of bugs, 3) blaming the computer, and 4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.
  • Bearing Account-able Witness to the Ethical Algorithmic System. Daniel Neyland - 2016 - Science, Technology, and Human Values 41 (1):50-76.
    This paper explores how accountability might make otherwise obscure and inaccessible algorithms available for governance. The potential import and difficulty of accountability is made clear in the compelling narrative reproduced across recent popular and academic reports. Through this narrative we are told that algorithms trap us and control our lives, undermine our privacy, have power and an independent agential impact, at the same time as being inaccessible, reducing our opportunities for critical engagement. The paper suggests that STS sensibilities can provide (...)
  • Explanation in artificial intelligence: Insights from the social sciences. Tim Miller - 2019 - Artificial Intelligence 267 (C):1-38.
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we (...)
  • Trust as an Affective Attitude. Karen Jones - 1996 - Ethics 107 (1):4-25.
  • Organizational Transparency: Conceptualizations, Conditions, and Consequences. Mikkel Flyverbom & Oana Brindusa Albu - 2019 - Business and Society 58 (2):268-297.
    Transparency is an increasingly prominent area of research that offers valuable insights for organizational studies. However, conceptualizations of transparency are rarely subject to critical scrutiny and thus their relevance remains unclear. In most accounts, transparency is associated with the sharing of information and the perceived quality of the information shared. This narrow focus on information and quality, however, overlooks the dynamics of organizational transparency. To provide a more structured conceptualization of organizational transparency, this article unpacks the assumptions that shape the (...)
  • Principles of Biomedical Ethics. Ezekiel J. Emanuel, Tom L. Beauchamp & James F. Childress - 1995 - Hastings Center Report 25 (4):37.
    Book reviewed in this article: Principles of Biomedical Ethics. By Tom L. Beauchamp and James F. Childress.
  • Transparency rights, technology, and trust. John Elia - 2009 - Ethics and Information Technology 11 (2):145-153.
    Information theorists often construe new Information and Communication Technologies (ICTs) as leveling mechanisms, regulating power relations at a distance by arming stakeholders with information and enhanced agency. Management theorists have claimed that transparency cultivates stakeholder trust, distinguishes a business from its competition, and attracts new clients, investors, and employees, making it key to future growth and prosperity. Synthesizing these claims, we encounter an increasingly common view: If corporations voluntarily adopted new ICTs in order to foster transparency, trust, and growth, while (...)
  • Algorithms and their others: Algorithmic culture in context. Paul Dourish - 2016 - Big Data and Society 3 (2).
    Algorithms, once obscure objects of technical art, have lately been subject to considerable popular and scholarly scrutiny. What does it mean to adopt the algorithm as an object of analytic attention? What is in view, and out of view, when we focus on the algorithm? Using Niklaus Wirth's 1975 formulation that “algorithms + data structures = programs” as a launching-off point, this paper examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture.
  • A Question of Trust: The BBC Reith Lectures 2002. Onora O'Neill - 2002 - Cambridge University Press.
    We say we can no longer trust our public services, institutions or the people who run them. The professionals we have to rely on - politicians, doctors, scientists, businessmen and many others - are treated with suspicion. Their word is doubted, their motives questioned. Whether real or perceived, this crisis of trust has a debilitating impact on society and democracy. Can trust be restored by making people and institutions more accountable? Or do complex systems of accountability and control themselves damage (...)
  • Transparency: The Key to Better Governance? Christopher Hood & David Heald - 2006 - Proceedings of the British Academy 135.
    Christopher Hood: Transparency in Historical Perspective; David Heald: Varieties of Transparency; Patrick Birkinshaw: Transparency as a Human Right; David Heald: Transparency as an Instrumental Value; Onora O'Neill: Transparency and the Ethics of Communication; Andrea Prat: The More Closely We Are Watched, the Better We Behave?; Alasdair Roberts: Dashed Expectations: Governmental Adaptation to Transparency Rules; Andrew McDonald: What Hope for Freedom of Information in the UK?; James Savage: Member State Budgetary Transparency in the Economic and Monetary Union; David Stasavage: Does Transparency Make (...)
  • The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
  • Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise. Bryan Casey, Ashkon Farhangi & Roland Vogl - forthcoming - Berkeley Technology Law Journal.
    The public debate surrounding the General Data Protection Regulation's “right to explanation” has sparked a global conversation of profound social and (...)
  • Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - International Data Privacy Law 7 (2):76-99.
    Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the (...)