  • Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI).Alexis Fritz, Wiebke Brandt, Henner Gimpel & Sarah Bayer - 2020 - De Ethica 6 (1):3-22.
    Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of ‘(moral) (...)
  • Moral Uncertainty.William MacAskill, Krister Bykvist & Toby Ord - 2020 - Oxford University Press.
    How should we make decisions when we're uncertain about what we ought, morally, to do? Decision-making in the face of fundamental moral uncertainty is underexplored terrain: MacAskill, Bykvist, and Ord argue that there are distinctive norms by which it is governed, and which depend on the nature of one's moral beliefs.
  • Safety requirements vs. crashing ethically: what matters most for policies on autonomous vehicles.Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article will present a (...)
  • Artificial Moral Agents: A Survey of the Current Status. [REVIEW]José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  • The Immoral Machine.John Harris - 2020 - Cambridge Quarterly of Healthcare Ethics 29 (1):71-79.
    In a recent paper in Nature entitled "The Moral Machine Experiment," Edmond Awad et al. make a number of breathtakingly reckless assumptions, both about the decision-making capacities of current so-called “autonomous vehicles” and about the nature of morality and the law. Accepting their bizarre premise that the holy grail is to find out how to obtain cognizance of public morality and then program driverless vehicles accordingly, the following are the four steps of the Moral Machinists' argument: 1) Find out what “public morality” (...)
  • (1 other version)Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations.Johannes Himmelreich - 2018 - Ethical Theory and Moral Practice 21 (3):669-684.
    Trolley cases are widely considered central to the ethics of autonomous vehicles. We caution against this by identifying four problems. Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. Furthermore, trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. Furthermore, trolley cases seem to demand a moral answer when a political answer is called for. Finally, trolley cases might be epistemically problematic in several ways. (...)
  • Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for Autonomous Driving Vehicles.Anja K. Faulhaber, Anke Dittmer, Felix Blind, Maximilian A. Wächter, Silja Timm, Leon R. Sütfeld, Achim Stephan, Gordon Pipa & Peter König - 2019 - Science and Engineering Ethics 25 (2):399-418.
    Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans act in utilitarian ways, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past few years, especially due to the necessity of implementing moral decisions in autonomous driving vehicles. We conducted a set of experiments in which participants experienced modified trolley dilemmas as drivers in virtual reality environments. Participants had to make decisions between (...)
  • Normative Uncertainty.William MacAskill - 2014 - Dissertation, University of Oxford
    We are often unsure about what we ought to do. This can be because we lack empirical knowledge, such as the extent to which future generations will be harmed by climate change. It can also be because we lack normative knowledge, such as the relative moral importance of the interests of present people and the interests of future people. However, though the question of how one ought to act under empirical uncertainty has been addressed extensively by both economists and philosophers---with (...)
  • Normative Uncertainty as a Voting Problem.William MacAskill - 2016 - Mind 125 (500):967-1004.
    Some philosophers have recently argued that decision-makers ought to take normative uncertainty into account in their decision-making. These philosophers argue that, just as it is plausible that we should maximize expected value under empirical uncertainty, it is plausible that we should maximize expected choice-worthiness under normative uncertainty. However, such an approach faces two serious problems: how to deal with merely ordinal theories, which do not give sense to the idea of magnitudes of choice-worthiness; and how, even when theories do give (...)
  • Moral Machines: Teaching Robots Right From Wrong.Wendell Wallach & Colin Allen - 2008 - New York, US: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue (...)
  • (1 other version)Ethical Theory: An Anthology.Russ Shafer-Landau (ed.) - 2007 - Malden, MA: Wiley-Blackwell.
    _Ethical Theory: An Anthology_ is an authoritative collection of key essays by top scholars in the field, addressing core issues including consequentialism, deontology, and virtue ethics, as well as traditionally underrepresented topics such as moral knowledge and moral responsibility. Brings together seventy-six classic and contemporary pieces by renowned philosophers, from classic writing by Hume and Kant to contemporary writing by Derek Parfit, Susan Wolf, and Judith Jarvis Thomson Guides students through key areas in the field, among them consequentialism, deontology, contractarianism, (...)
  • The structure of random utility models.Charles F. Manski - 1977 - Theory and Decision 8 (3):229-254.
  • (1 other version)The Problem of Abortion and the Doctrine of the Double Effect.Philippa Foot - 1967 - Oxford Review 5:5-15.
    One of the reasons why most of us feel puzzled about the problem of abortion is that we want, and do not want, to allow to the unborn child the rights that belong to adults and children. When we think of a baby about to be born it seems absurd to think that the next few minutes or even hours could make so radical a difference to its status; yet as we go back in the life of the fetus we (...)
  • Machine Ethics.Michael Anderson & Susan Leigh Anderson (eds.) - 2011 - Cambridge Univ. Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  • Prolegomena to any future artificial moral agent.Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251--261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from (...)
  • A challenge for machine ethics.Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Moral uncertainty and its consequences.Ted Lockhart - 2000 - New York: Oxford University Press.
    We are often uncertain how to behave morally in complex situations. In this controversial study, Ted Lockhart contends that moral philosophy has failed to address how we make such moral decisions. Adapting decision theory to the task of decision-making under moral uncertainly, he proposes that we should not always act how we feel we ought to act, and that sometimes we should act against what we feel to be morally right. Lockhart also discusses abortion extensively and proposes new ways to (...)
  • (1 other version)Rule-consequentialism.Brad Hooker - 1990 - Mind 99 (393):67-77.
    The theory of morality we can call full rule-consequentialism selects rules solely in terms of the goodness of their consequences and then claims that these rules determine which kinds of acts are morally wrong. George Berkeley was arguably the first rule-consequentialist. He wrote, “In framing the general laws of nature, it is granted we must be entirely guided by the public good of mankind, but not in the ordinary moral actions of our lives. … The rule is (...)
  • On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Embedding Values in Artificial Intelligence (AI) Systems.Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • Why Trolley Problems Matter for the Ethics of Automated Vehicles.Geoff Keeling - 2020 - Science and Engineering Ethics 26 (1):293-307.
    This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles. Four arguments for this view are outlined and rejected: the Not Going to Happen Argument, the Moral Difference Argument, the Impossible Deliberation Argument and the Wrong Question Argument. In making clear where these arguments go wrong, a positive account is developed of how trolley cases can inform the ethics of automated vehicles.
  • What has the Trolley Dilemma ever done for us (and what will it do in the future)? On some recent debates about the ethics of self-driving cars.Andreas Wolkenstein - 2018 - Ethics and Information Technology 20 (3):163-173.
    Self-driving cars currently face a lot of technological problems that need to be solved before the cars can be widely used. However, they also face ethical problems, among which the question of crash-optimization algorithms is most prominently discussed. Reviewing current debates about whether we should use the ethics of the Trolley Dilemma as a guide towards designing self-driving cars will provide us with insights about what exactly ethical research does. It will result in the view that although we need the (...)
  • (1 other version)Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
  • Implementation of Moral Uncertainty in Intelligent Machines.Kyle Bogosian - 2017 - Minds and Machines 27 (4):591-608.
    The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common (...)
  • Meta-Reasoning in Making Moral Decisions Under Normative Uncertainty.Tomasz Żuradzki - 2016 - In Dima Mohammed & Marcin Lewiński (eds.), Argumentation and Reasoned Action. College Publications. pp. 1093-1104.
    I analyze recent discussions about making moral decisions under normative uncertainty. I discuss whether this kind of uncertainty should have practical consequences for decisions and whether there are reliable methods of reasoning that deal with the possibility that we are wrong about some moral issues. I defend a limited use of the decision theory model of reasoning in cases of normative uncertainty.
  • A Prima Facie Duty Approach to Machine Ethics: Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles through a Dialogue with Ethicists.Susan Leigh Anderson & Michael Anderson - 2011 - In Michael Anderson & Susan Leigh Anderson (eds.), Machine Ethics. Cambridge Univ. Press.
  • Machine morality: bottom-up and top-down approaches for modelling human moral faculties. [REVIEW]Wendell Wallach, Colin Allen & Iva Smit - 2008 - AI and Society 22 (4):565-582.
    The implementation of moral decision making abilities in artificial intelligence (AI) is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of (...)
  • The Moral Machine experiment.Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon & Iyad Rahwan - 2018 - Nature 563 (7729):59-64.
  • (1 other version)Rule-consequentialism.Brad Hooker - 2007 - In Russ Shafer-Landau (ed.), Ethical Theory: An Anthology. Malden, MA: Wiley-Blackwell. pp. 482-492.
  • How Do Technological Artefacts Embody Moral Values?Michael Klenk - 2020 - Philosophy and Technology 34 (3):525-544.
    According to some philosophers of technology, technology embodies moral values in virtue of its functional properties and the intentions of its designers. But this paper shows that such an account makes the values supposedly embedded in technology epistemically opaque and that it does not allow for values to change. Therefore, to overcome these shortcomings, the paper introduces the novel Affordance Account of Value Embedding as a superior alternative. Accordingly, artefacts bear affordances, that is, artefacts make certain actions likelier given the (...)
  • Act‐Consequentialism.Brad Hooker - 2000 - In Ideal Code, Real World: A Rule-Consequentialist Theory of Morality. Oxford, GB: Oxford University Press UK.
    Act‐consequentialism is best construed as a criterion of rightness, not a decision procedure. Act‐consequentialism recommends that our procedure for making moral decisions employs rules very like the ones endorsed by rule‐consequentialism. However, the chapter highlights the remaining significant differences between act‐consequentialism and rule‐consequentialism over prohibitions, and discusses the extreme demandingness of act‐consequentialist duties to aid.
  • Prospects for a Kantian machine.Thomas M. Powers - 2006 - IEEE Intelligent Systems 21 (4):46-51.
    This paper is reprinted in the book Machine Ethics, eds. M. Anderson and S. Anderson, Cambridge University Press, 2011.
  • Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.Cynthia Rudin - 2019 - Nature Machine Intelligence 1.
  • Against Moral Hedging.Ittay Nissan-Rozen - 2015 - Economics and Philosophy (3):1-21.
    It has been argued by several philosophers that a morally motivated rational agent who has to make decisions under conditions of moral uncertainty ought to maximize expected moral value in his choices, where the expectation is calculated relative to the agent's moral uncertainty. I present a counter-example to this thesis and to a larger family of decision rules for choice under conditions of moral uncertainty. Based on this counter-example, I argue against the thesis and suggest a reason for its failure (...)