  • Ethics without principles. Jonathan Dancy - 2004 - New York: Oxford University Press.
    In this much-anticipated book, Jonathan Dancy offers the only available full-scale treatment of particularism in ethics, a view with which he has been associated for twenty years. Dancy now presents particularism as the view that the possibility of moral thought and judgement does not in any way depend on an adequate supply of principles. He grounds this claim on a form of reasons-holism, holding that what is a reason in one case need not be any reason in another, and maintaining (...)
  • On statistical criteria of algorithmic fairness. Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false (...)
  • (1 other version) Responsibility and Control: A Theory of Moral Responsibility. John Fischer & Mark Ravizza - 1998 - Philosophical Quarterly 49 (197):543-545.
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299-309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Killer robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Algorithmic Fairness and Base Rate Tracking. Benjamin Eva - 2022 - Philosophy and Public Affairs 50 (2):239-266.
  • The epistemic condition for moral responsibility. Fernando Rudy-Hiller - 2018 - Stanford Encyclopedia of Philosophy.
    An encyclopedia article on the epistemic or knowledge condition for moral responsibility, written for the SEP.
  • Robots and Respect: Assessing the Case Against Autonomous Weapon Systems. Robert Sparrow - 2016 - Ethics and International Affairs 30 (1):93-116.
    There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that (...)
  • Who Should Bear the Risk When Self-Driving Vehicles Crash? Antti Kauppinen - 2020 - Journal of Applied Philosophy 38 (4):630-645.
    The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible (...)
  • First Steps Towards an Ethics of Robots and Artificial Intelligence. John Tasioulas - 2019 - Journal of Practical Ethics 7 (1):61-95.
    This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to (...)
  • Who Is Responsible for Killer Robots? Autonomous Weapons, Group Agency, and the Military-Industrial Complex. Isaac Taylor - 2021 - Journal of Applied Philosophy 38 (2):320-334.
    There has recently been increasing interest in the possibility and ethics of lethal autonomous weapons systems (LAWS), which would combine sophisticated AI with machinery capable of deadly force. One objection to LAWS is that their use will create a troubling responsibility gap, where no human agent can properly be held accountable for the outcomes that they create. While some authors have attempted to show that individual agents can, in fact, be responsible for the behaviour of LAWS in various circumstances, this (...)
  • Instrumental Robots. Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  • Saying 'No!' to Lethal Autonomous Targeting. Noel Sharkey - 2010 - Journal of Military Ethics 9 (4):369-383.
    Plans to automate killing by using robots armed with lethal weapons have been a prominent feature of most US military forces' roadmaps since 2004. The idea is to have a staged move from 'man-in-the-loop' to 'man-on-the-loop' to full autonomy. While this may result in considerable military advantages, the policy raises ethical concerns with regard to potential breaches of International Humanitarian Law, including the Principle of Distinction and the Principle of Proportionality. Current applications of remote piloted robot planes or drones offer (...)
  • Autonomous Military Systems: collective responsibility and distributed burdens. Niël Henk Conradie - 2023 - Ethics and Information Technology 25 (1):1-14.
    The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility. I argue that concerns over the use of AMS can be assuaged by recognising the richly interrelated context in which these systems will most (...)
  • Punishing Robots: Way Out of Sparrow's Responsibility Attribution Problem. Maciek Zając - 2020 - Journal of Military Ethics 19 (4):285-291.
    The Laws of Armed Conflict require that war crimes be attributed to individuals who can be held responsible and be punished. Yet assigning responsibility for the actions of Lethal Autonomous Weapon...
  • Comparative and non-comparative desert. David Miller - 2003 - In Serena Olsaretti (ed.), Desert and Justice. New York: Oxford University Press. pp. 25-44.
    Serena Olsaretti brings together new essays by leading moral and political philosophers on the nature of desert and justice, their relations with each other and with other values.
  • Governing lethal behavior in autonomous robots. Ronald C. Arkin - 2009.
  • (1 other version) Defeaters and Practical Knowledge. Carla Bagnoli - 2018 - Synthese 195 (7):2855-2875. DOI: 10.1007/s11229-016-1095-z.
    This paper situates the problem of defeaters in a larger debate about the source of normative authority. It argues in favour of a constructivist account of defeasibility, which appeals to the justificatory role of moral principles. The argument builds upon the critique of two recent attempts to deal with defeasibility: first, a particularist account, which disposes of moral principles on the ground that reasons are holistic; and second, a proceduralist view, which addresses the problem of defeaters by distinguishing between provisional (...)