Citations of:

A challenge for machine ethics

Minds and Machines 19 (3):421-438 (2009)

  • Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model.Jeffrey White - 2013 - In Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents comes (...)
  • Safety Engineering for Artificial General Intelligence.Roman Yampolskiy & Joshua Fox - 2013 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • Rightful Machines.Ava Thomas Wright - 2022 - In Hyeongjoo Kim & Dieter Schönecker (eds.), Kant and Artificial Intelligence. De Gruyter. pp. 223-238.
    In this paper, I set out a new Kantian approach to resolving conflicts between moral obligations for highly autonomous machine agents. First, I argue that efforts to build explicitly moral autonomous machine agents should focus on what Kant refers to as duties of right, which are duties that everyone could accept, rather than on duties of virtue (or “ethics”), which are subject to dispute in particular cases. “Moral” machines must first be rightful machines, I argue. I then show how this (...)
  • Autonomous reboot: Aristotle, autonomy and the ends of machine ethics.Jeffrey White - 2022 - AI and Society 37 (2):647-659.
    Tonkens has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian-inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. (...)
  • Autonomous Reboot: Kant, the categorical imperative, and contemporary challenges for machine ethicists.Jeffrey White - 2022 - AI and Society 37 (2):661-673.
    Ryan Tonkens has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian-inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. This series of papers meets this challenge by landscaping traditional moral theory in resolution of a comprehensive account of moral agency. The first paper established the challenge and set out autonomy in Aristotelian terms. (...)
  • Ethical Issues for Autonomous Trading Agents.Michael P. Wellman & Uday Rajan - 2017 - Minds and Machines 27 (4):609-624.
    The rapid advancement of algorithmic trading has demonstrated the success of AI automation, as well as gaps in our understanding of the implications of this technology proliferation. We explore ethical issues in the context of autonomous trading agents, both to address problems in this domain and as a case study for regulating autonomous agents more generally. We argue that increasingly competent trading agents will be capable of initiative at wider levels, necessitating clarification of ethical and legal boundaries, and corresponding development (...)
  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks.Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • Ethical aspects of AI robots for agri-food; a relational approach based on four case studies.Simone van der Burg, Else Giesbers, Marc-Jeroen Bogaardt, Wijbrand Ouweltjes & Kees Lokhorst - forthcoming - AI and Society:1-15.
    In recent years, the development of AI robots for agriculture, livestock farming and food processing industries has been increasing rapidly. These robots are expected to help produce and deliver food more efficiently for a growing human population, but they also raise societal and ethical questions. As the types of questions raised by these AI robots in society have rarely been empirically explored, we engaged in four case studies focussing on four types of AI robots for agri-food ‘in the making’: manure collectors, (...)
  • Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • The case against robotic warfare: A response to Arkin.Ryan Tonkens - 2012 - Journal of Military Ethics 11 (2):149-168.
    Semi-autonomous robotic weapons are already carving out a role for themselves in modern warfare. Recently, Ronald Arkin has argued that autonomous lethal robotic systems could be more ethical than humans on the battlefield, and that this marks a significant reason in favour of their development and use. Here I offer a critical response to the position advanced by Arkin. Although I am sympathetic to the spirit of the motivation behind Arkin's project and agree that if we decide to develop (...)
  • Should autonomous robots be pacifists?Ryan Tonkens - 2013 - Ethics and Information Technology 15 (2):109-123.
    Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory?; and (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether (...)
  • Out of character: on the creation of virtuous machines. [REVIEW]Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2):137-149.
    The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines.Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
  • Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • A neo-aristotelian perspective on the need for artificial moral agents (AMAs).Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine van Wynsberghe and Robbins's (Science and Engineering Ethics 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (AI and Society, 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither van Wynsberghe and Robbins's (2019) essay nor Formosa and Ryan's (2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of "both empirical and intuitive support" (van Wynsberghe and Robbins 2019, p. 721) for (...)
  • Synthetic Deliberation: Can Emulated Imagination Enhance Machine Ethics?Robert Pinka - 2020 - Minds and Machines 31 (1):121-136.
    Artificial intelligence is becoming increasingly entwined with our daily lives: AIs work as assistants through our phones, control our vehicles, and navigate our vacuums. As these objects become more complex and work within our societies in ways that affect our well-being, there is a growing demand for machine ethics—we want a guarantee that the various automata in our lives will behave in a way that minimizes the amount of harm they create. Though many technologies exist as moral artifacts, the development (...)
  • The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis.Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
  • Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence.Andreia Martinho, Maarten Kroesen & Caspar Chorus - 2021 - Minds and Machines 31 (2):215-237.
    As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea being that moral heterogeneity in society can be codified in terms of a small number of (...)
  • Could a robot flirt? 4E cognition, reactive attitudes, and robot autonomy.Charles Lassiter - 2022 - AI and Society 37 (2):675-686.
    In this paper, I develop a view about machine autonomy grounded in the theoretical frameworks of 4E cognition and PF Strawson’s reactive attitudes. I begin with critical discussion of White, and conclude that his view is strongly committed to functionalism as it has developed in mainstream analytic philosophy since the 1950s. After suggesting that there is good reason to resist this view by appeal to developments in 4E cognition, I propose an alternative view of machine autonomy. Namely, machines count as (...)
  • Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use.Christian Herzog - 2021 - Science and Engineering Ethics 27 (1):1-15.
    In the present article, I will advocate caution against developing artificial moral agents based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical (...)
  • Autonomous technologies in human ecologies: enlanguaged cognition, practices and technology.Rasmus Gahrn-Andersen & Stephen J. Cowley - 2022 - AI and Society 37 (2):687-699.
    Advanced technologies such as drones, intelligent algorithms and androids have grave implications for human existence. With the purpose of exploring their basis for doing so, the paper proposes a framework for investigating the complex relationship between such devices and human practices and language-mediated cognition. Specifically, it centers on the importance of the typically neglected intermediate layer of culture which not only drives both technophobia and philia but also, more fundamentally, connects pre-reflective experience and socio-material practices by placing advanced technologies in (...)
  • Towards the Epistemology of the Internet of Things: Techno-Epistemology and Ethical Considerations Through the Prism of Trust.Ori Freiman - 2014 - International Review of Information Ethics 22:6-22.
    This paper discusses the epistemology of the Internet of Things [IoT] by focusing on the topic of trust. It presents various frameworks of trust, and argues that the ethical framework of trust is what constitutes our responsibility to reveal desired norms and standards and embed them in other frameworks of trust. The first section briefly presents the IoT and scrutinizes the scarce philosophical work that has been done on this subject so far. The second section suggests that the field of (...)
  • Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • Categorization and challenges of utilitarianisms in the context of artificial intelligence.Štěpán Cvik - 2022 - AI and Society 37 (1):291-297.
    The debates about ethics in the context of artificial intelligence have been recently focusing primarily on various types of utilitarianisms. This article suggests a categorization of the various presented utilitarianisms into static utilitarianisms and dynamic utilitarianisms. It explains the main features of both. Then, it presents the challenges the utilitarianisms in each group need to be able to deal with. Since it appears that those cannot be overcome in the context of each group alone, the article suggests a possibility of (...)
  • Drones, robots and perceived autonomy: implications for living human beings.Stephen J. Cowley & Rasmus Gahrn-Andersen - 2022 - AI and Society 37 (2):591-594.
  • Should Moral Machines be Banned? A Commentary on van Wynsberghe and Robbins “Critiquing the Reasons for Making Artificial Moral Agents”.Bartek Chomanski - 2020 - Science and Engineering Ethics 26 (6):3469-3481.
    In a stimulating recent article for this journal (van Wynsberghe and Robbins in Sci Eng Ethics 25(3):719–735, 2019), Aimee van Wynsberghe and Scott Robbins mount a serious critique of a number of reasons advanced in favor of building artificial moral agents (AMAs). In light of their critique, vW&R make two recommendations: they advocate a moratorium on the commercialization of AMAs and suggest that the argumentative burden is now shifted onto the proponents of AMAs to come up with new reasons for (...)
  • Patiency is not a virtue: the design of intelligent systems and systems of ethics.Joanna J. Bryson - 2018 - Ethics and Information Technology 20 (1):15-26.
    The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the (...)
  • Can AI Weapons Make Ethical Decisions?Ross W. Bellaby - 2021 - Criminal Justice Ethics 40 (2):86-107.
    The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued (...)
  • A Normative Approach to Artificial Moral Agency.Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • Mental time-travel, semantic flexibility, and A.I. ethics.Marcus Arvan - 2023 - AI and Society 38 (6):2577-2596.
    This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma, generating interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. This paper then illustrates the trilemma utilizing a recently proposed ‘general ethical dilemma analyzer,’ GenEth. Finally, it uses empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I (...)
  • Characteristics and challenges in the industries towards responsible AI: a systematic literature review.Marianna Anagnostou, Olga Karvounidou, Chrysovalantou Katritzidaki, Christina Kechagia, Kyriaki Melidou, Eleni Mpeza, Ioannis Konstantinidis, Eleni Kapantai, Christos Berberidis, Ioannis Magnisalis & Vassilios Peristeras - 2022 - Ethics and Information Technology 24 (3):1-18.
    Today humanity is in the midst of the massive expansion of new and fundamental technology, represented by advanced artificial intelligence (AI) systems. The ongoing revolution of these technologies and their profound impact across various sectors, has triggered discussions about the characteristics and values that should guide their use and development in a responsible manner. In this paper, we conduct a systematic literature review with the aim of pointing out existing challenges and required principles in AI-based systems in different industries. We (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems: the Case of the Autonomous Vehicle.Katherine Evans - 2021 - Dissertation, Sorbonne Université
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • Leakproofing the Singularity.Roman V. Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
    This paper attempts to formalize and to address the ‘leakproofing’ of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelligence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay potential negative effects from the technological singularity while allowing humanity to benefit from the superintelligence.