  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although (...)
  • A qualified defense of top-down approaches in machine ethics. Tyler Cook - forthcoming - AI and Society:1-15.
    This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • AI Ethics and Value Alignment for Nonhuman Animals. Soenke Ziesche - 2021 - Philosophies 6 (2):31.
    This article is about a specific, but so far neglected, peril of AI: AI systems may pose existential as well as suffering risks for nonhuman animals. The AI value alignment problem has now been acknowledged as critical for AI safety as well as very hard. However, so far attempts have been made only to align the values of AI systems with human values. It is argued here that this ought to be extended to the values of nonhuman (...)
  • Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2013 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational (...)
  • Artificial intelligence and African conceptions of personhood. C. S. Wareham - 2021 - Ethics and Information Technology 23 (2):127-136.
    Under what circumstances, if ever, ought we to grant that Artificial Intelligences (AI) are persons? The question of whether AI could have the high degree of moral status that is attributed to human persons has received little attention. What little work there is employs western conceptions of personhood, while non-western approaches are neglected. In this article, I discuss African conceptions of personhood and their implications for the possibility of AI persons. I focus on an African account of personhood that is (...)
  • Implementing moral decision making faculties in computers and robots. Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or (...)
  • Consciousness and ethics: Artificially conscious moral agents. Wendell Wallach, Colin Allen & Stan Franklin - 2011 - International Journal of Machine Consciousness 3 (1):177-192.
  • Moral distance, AI, and the ethics of care. Carolina Villegas-Galaviz & Kirsten Martin - forthcoming - AI and Society:1-12.
    This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted (...)
  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks. Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Ethical protocols design. Matteo Turilli - 2007 - Ethics and Information Technology 9 (1):49-62.
    The paper offers a solution to the problem of specifying computational systems that behave in accordance with a given set of ethical principles. The proposed solution is based on the concepts of ethical requirements and ethical protocols. A new conceptual tool, called the Control Closure of an operation, is defined and used to translate ethical principles into ethical requirements and protocols. The concept of Generalised Informational Privacy (GIP) is used as a paradigmatic example of an ethical principle. GIP is defined (...)
  • Artificial wisdom: a philosophical framework. Cheng-Hung Tsai - 2020 - AI and Society:937-944.
    Human excellences such as intelligence, morality, and consciousness are investigated by philosophers as well as artificial intelligence researchers. One excellence that has not been widely discussed by AI researchers is practical wisdom, the highest human excellence, or the highest, seventh, stage in Dreyfus’s model of skill acquisition. In this paper, I explain why artificial wisdom matters and how artificial wisdom is possible (in principle and in practice) by responding to two philosophical challenges to building artificial wisdom systems. The result is (...)
  • From Pluralistic Normative Principles to Autonomous-Agent Rules. Beverley Townsend, Colin Paterson, T. T. Arvind, Gabriel Nemirovsky, Radu Calinescu, Ana Cavalcanti, Ibrahim Habli & Alan Thomas - 2022 - Minds and Machines 32 (4):683-715.
    With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications (...)
  • A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • What overarching ethical principle should a superintelligent AI follow? Atle Ottesen Søvik - 2022 - AI and Society 37 (4):1505-1518.
    What is the best overarching ethical principle to give a possible future superintelligent machine, given that we do not know what the best ethics are today or in the future? Eliezer Yudkowsky has suggested that a superintelligent AI should have as its goal to carry out the coherent extrapolated volition of humanity (CEV), the most coherent way of combining human goals. The article discusses some problems with this proposal and some alternatives suggested by Nick Bostrom. A slightly different proposal is (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines. Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
  • Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Bernd Carsten Stahl - 2006 - Ethics and Information Technology 8 (4):205-213.
    There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves (...)
  • From machine ethics to computational ethics. Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
  • Our responsibility to manage evaluative diversity. Christopher Santos-Lang - 2014 - ACM SIGCAS Computers and Society 44 (2):16-19.
    The ecosystem approach to computer system development is similar to management of biodiversity. Instead of modeling machines after a successful individual, it models machines after successful teams. It includes measuring the evaluative diversity of human teams (i.e. the disparity in ways members conduct the evaluative aspect of decision-making), adding similarly diverse machines to those teams, and monitoring the impact on evaluative balance. This article reviews new research relevant to this approach, especially the validation of a survey instrument for measuring computational (...)
  • A democratic way of controlling artificial general intelligence. Jussi Salmi - forthcoming - AI and Society:1-7.
    The problem of controlling an artificial general intelligence has fascinated both scientists and science-fiction writers for centuries. Today that problem is becoming more important because the time when we may have a superhuman intelligence among us is within the foreseeable future. Current average estimates place that moment before 2060, and some estimates place it as early as 2040, which is quite soon. The arrival of the first AGI might lead to a series of events that we have not seen before: (...)
  • Instilling moral value alignment by means of multi-objective reinforcement learning. Juan Antonio Rodriguez-Aguilar, Maite Lopez-Sanchez, Marc Serramia & Manel Rodriguez-Soto - 2022 - Ethics and Information Technology 24 (1).
    AI research is being challenged with ensuring that autonomous agents learn to behave ethically, namely in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists in formalising moral values and value-aligned behaviour based on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, to ease the handling of an agent’s individual and ethical objectives. The second step consists in designing an (...)
  • Inteligencia artificial sostenible y evaluación ética constructiva [Sustainable artificial intelligence and constructive ethical evaluation]. Antonio Luis Terrones Rodríguez - 2022 - Isegoría 67:10-10.
    The considerable increase in the capacity of artificial intelligence (AI) entails a high consumption of energy resources. The current environmental situation, characterized by the pressing degradation of ecosystems and the breakdown of their equilibrium, demands that measures be taken in various domains. AI cannot remain on the sidelines, and although it is employed for sustainability goals, it must be conceived as sustainable in integral terms. The proposal of a sustainable artificial intelligence is argued for on the basis of a constructive ethical evaluation, where the inclusion (...)
  • Interdependence as the key for an ethical artificial autonomy. Filippo Pianca & Vieri Giuliano Santucci - forthcoming - AI and Society:1-15.
    Currently, the autonomy of artificial systems, robotic systems in particular, is certainly one of the most debated issues, both from the perspective of technological development and its social impact and ethical repercussions. While theoretical considerations often focus on scenarios far beyond what can be concretely hypothesized from the current state of the art, the term autonomy is still used in a vague or too general way. This reduces the possibilities of a punctual analysis of such an important issue, thus leading (...)
  • Nonhuman Value: A Survey of the Intrinsic Valuation of Natural and Artificial Nonhuman Entities. Andrea Owe, Seth D. Baum & Mark Coeckelbergh - 2022 - Science and Engineering Ethics 28 (5):1-29.
    To be intrinsically valuable means to be valuable for its own sake. Moral philosophy is often ethically anthropocentric, meaning that it locates intrinsic value within humans. This paper rejects ethical anthropocentrism and asks, in what ways might nonhumans be intrinsically valuable? The paper answers this question with a wide-ranging survey of theories of nonhuman intrinsic value. The survey includes both moral subjects and moral objects, and both natural and artificial nonhumans. Literatures from environmental ethics, philosophy of technology, philosophy of art, (...)
  • Smart soldiers: towards a more ethical warfare. Femi Richard Omotoyinbo - 2023 - AI and Society 38 (4):1485-1491.
    It is a truism that, due to human weaknesses, human soldiers have yet to achieve sufficiently ethical warfare. It is arguable that the likelihood of human soldiers breaching the Principle of Non-Combatant Immunity, for example, is higher in contrast to smart soldiers, who are emotionally inept. Hence, this paper examines the possibility that the integration of ethics into smart soldiers will help address moral challenges in modern warfare. The approach is to develop and employ smart soldiers that are enhanced with ethical (...)
  • The Ethical Implications of Using Artificial Intelligence in Auditing. Ivy Munoko, Helen L. Brown-Liburd & Miklos Vasarhelyi - 2020 - Journal of Business Ethics 167 (2):209-234.
    Accounting firms are reporting the use of Artificial Intelligence in their auditing and advisory functions, citing benefits such as time savings, faster data analysis, increased levels of accuracy, more in-depth insight into business processes, and enhanced client service. AI, an emerging technology that aims to mimic the cognitive skills and judgment of humans, promises competitive advantages to the adopter. As a result, all the Big 4 firms are reporting its use and their plans to continue with this innovation in areas (...)
  • The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis. Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
  • Computationally rational agents can be moral agents. Bongani Andy Mabaso - 2020 - Ethics and Information Technology 23 (2):137-145.
    In this article, a concise argument for computational rationality as a basis for artificial moral agency is advanced. Some ethicists have long argued that rational agents can become artificial moral agents. However, most of their views have come from purely philosophical perspectives, thus making it difficult to transfer their arguments to a scientific and analytical frame of reference. The result has been a disintegrated approach to the conceptualisation and design of artificial moral agents. In this article, I make the argument (...)
  • Moral dilemmas in self-driving cars. Chiara Lucifora, Giorgio Mario Grasso, Pietro Perconti & Alessio Plebe - 2020 - Rivista Internazionale di Filosofia e Psicologia 11 (2):238-250.
    Autonomous driving systems promise important changes for the future of transport, primarily through the reduction of road accidents. However, ethical concerns, in particular two central issues, will be key to their successful development. First, situations of risk that involve inevitable harm to passengers and/or bystanders, in which some individuals must be sacrificed for the benefit of others. Second, the identification of responsible parties and liabilities in the event of an accident. Our work addresses the first of these ethical problems. We are (...)
  • Assessment of Cognitive Behavioral Characteristics in Intelligent Systems with Predictive Ability and Computing Power. Oleg V. Kubryak, Sergey V. Kovalchuk & Nadezhda G. Bagdasaryan - 2023 - Philosophies 8 (5):75.
    The article proposes a universal dual-axis intelligent systems assessment scale. The scale considers the properties of intelligent systems within the environmental context, which develops over time. In contrast to the frequent consideration of the “mind” of artificial intelligent systems on a scale from “weak” to “strong”, we highlight the modulating influences of anticipatory ability on their “brute force”. In addition, the complexity, the “weight” of the cognitive task and the ability to critically assess it beforehand determine the actual set of (...)
  • Why Trolley Problems Matter for the Ethics of Automated Vehicles. Geoff Keeling - 2020 - Science and Engineering Ethics 26 (1):293-307.
    This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles. Four arguments for this view are outlined and rejected: the Not Going to Happen Argument, the Moral Difference Argument, the Impossible Deliberation Argument and the Wrong Question Argument. In making clear where these arguments go wrong, a positive account is developed of how trolley cases can inform the ethics of automated vehicles.
  • Technology with No Human Responsibility? Deborah G. Johnson - 2015 - Journal of Business Ethics 127 (4):707-715.
  • Separating facts and evaluation: motivation, account, and learnings from a novel approach to evaluating the human impacts of machine learning. Ryan Jenkins, Kristian Hammond, Sarah Spurlock & Leilani Gilpin - forthcoming - AI and Society:1-14.
    In this paper, we outline a new method for evaluating the human impact of machine-learning applications. In partnership with Underwriters Laboratories Inc., we have developed a framework to evaluate the impacts of a particular use of machine learning that is based on the goals and values of the domain in which that application is deployed. By examining the use of artificial intelligence in particular domains, such as journalism, criminal justice, or law, we can develop more nuanced and practically relevant understandings (...)
  • Preserving a combat commander’s moral agency: The Vincennes Incident as a Chinese Room. Patrick Chisan Hew - 2016 - Ethics and Information Technology 18 (3):227-235.
    We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew (...)
  • Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • Moral Gridworlds: A Theoretical Proposal for Modeling Artificial Moral Cognition. Julia Haas - 2020 - Minds and Machines 30 (2):219-246.
    I describe a suite of reinforcement learning environments in which artificial agents learn to value and respond to moral content and contexts. I illustrate the core principles of the framework by characterizing one such environment, or “gridworld,” in which an agent learns to trade-off between monetary profit and fair dealing, as applied in a standard behavioral economic paradigm. I then highlight the core technical and philosophical advantages of the learning approach for modeling moral cognition, and for addressing the so-called value (...)
  • Machine Ethics: Do Androids Dream of Being Good People? Gonzalo Génova, Valentín Moreno & M. Rosario González - 2023 - Science and Engineering Ethics 29 (2):1-17.
    Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set (...)
  • Artificial Intelligence, Values, and Alignment. Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements (...)
  • Adapting a kidney exchange algorithm to align with human values. Rachel Freedman, Jana Schaich Borg, Walter Sinnott-Armstrong, John P. Dickerson & Vincent Conitzer - 2020 - Artificial Intelligence 283 (C):103261.
  • Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  • Implementation of Moral Uncertainty in Intelligent Machines. Kyle Bogosian - 2017 - Minds and Machines 27 (4):591-608.
    The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common (...)
  • Knowledge representation and acquisition for ethical AI: challenges and opportunities. Vaishak Belle - 2023 - Ethics and Information Technology 25 (1):1-12.
    Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the (...)