Citations of:

Wendell Wallach & Colin Allen, Moral Machines: Teaching Robots Right From Wrong. New York: Oxford University Press (2008)

  • Should we welcome robot teachers?Amanda J. C. Sharkey - 2016 - Ethics and Information Technology 18 (4):283-297.
    Current uses of robots in classrooms are reviewed and used to characterise four scenarios: Robot as Classroom Teacher; Robot as Companion and Peer; Robot as Care-eliciting Companion; and Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children's privacy, especially when they masquerade as (...)
  • Granny and the robots: ethical issues in robot care for the elderly.Amanda Sharkey & Noel Sharkey - 2012 - Ethics and Information Technology 14 (1):27-40.
    The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the (...)
  • Can we program or train robots to be good?Amanda Sharkey - 2020 - Ethics and Information Technology 22 (4):283-295.
    As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical' are considered, although they are found to only operate in quite constrained (...)
  • From machine ethics to computational ethics.Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
  • First-person representations and responsible agency in AI.Miguel Ángel Sebastián & Fernando Rudy-Hiller - 2021 - Synthese 199 (3-4):7061-7079.
    In this paper I investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don’t impose any in-principle barrier to AISs being responsible agents, I identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attribute that kind (...)
  • The hard limit on human nonanthropocentrism.Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
  • ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics.Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under a normative-cognitive perspective, the (...)
  • Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons.Sabine Salloch, Tim Kacprowski, Wolf-Tilo Balke, Frank Ursin & Lasse Benzinger - 2023 - BMC Medical Ethics 24 (1):1-9.
    Background: Healthcare providers have to make ethically complex clinical decisions which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. Methods: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title and abstract (...)
  • On the Moral Agency of Computers.Thomas M. Powers - 2013 - Topoi 32 (2):227-236.
    Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the (...)
  • The Relativistic Car: Applying Metaethics to the Debate about Self-Driving Vehicles.Thomas Pölzler - 2021 - Ethical Theory and Moral Practice 24 (3):833-850.
    Almost all participants in the debate about the ethics of accidents with self-driving cars have so far assumed moral universalism. However, universalism may be philosophically more controversial than is commonly thought, and may lead to undesirable results in terms of non-moral consequences and feasibility. There thus seems to be a need to also start considering what I refer to as the “relativistic car” — a car that is programmed under the assumption that what is morally right, wrong, good, bad, etc. (...)
  • Robot caregivers: harbingers of expanded freedom for all? [REVIEW]Yvette Pearson - 2010 - Ethics and Information Technology 12 (3):277-288.
    As we near a time when robots may serve a vital function by becoming caregivers, it is important to examine the ethical implications of this development. By applying the capabilities approach as a guide to both the design and use of robot caregivers, we hope that this will maximize opportunities to preserve or expand freedom for care recipients. We think the use of the capabilities approach will be especially valuable for improving the ability of impaired persons to interface more effectively (...)
  • When Morals Ain’t Enough: Robots, Ethics, and the Rules of the Law.Ugo Pagallo - 2017 - Minds and Machines 27 (4):625-638.
    No single moral theory can instruct us as to whether and to what extent we are confronted with legal loopholes, e.g. whether or not new legal rules should be added to the system in the criminal law field. This question on the primary rules of the law appears crucial for today’s debate on roboethics and still, goes beyond the expertise of robo-ethicists. On the other hand, attention should be drawn to the secondary rules of the law: The unpredictability of robotic (...)
  • Machines and the Moral Community.Erica L. Neely - 2013 - Philosophy and Technology 27 (1):97-111.
    A key distinction in ethics is between members and nonmembers of the moral community. Over time, our notion of this community has expanded as we have moved from a rationality criterion to a sentience criterion for membership. I argue that a sentience criterion is insufficient to accommodate all members of the moral community; the true underlying criterion can be understood in terms of whether a being has interests. This may be extended to conscious, self-aware machines, as well as to any (...)
  • In principle obstacles for empathic AI: why we can’t replace human empathy in healthcare.Carlos Montemayor, Jodi Halpern & Abrol Fairweather - 2022 - AI and Society 37 (4):1353-1359.
    What are the limits of the use of artificial intelligence (AI) in the relational aspects of medical and nursing care? There has been a lot of recent work and applications showing the promise and efficiency of AI in clinical medicine, both at the research and treatment levels. Many of the obstacles discussed in the literature are technical in character, regarding how to improve and optimize current practices in clinical medicine and also how to develop better data bases for optimal parameter (...)
  • “An Eye Turned into a Weapon”: a Philosophical Investigation of Remote Controlled, Automated, and Autonomous Drone Warfare.Oliver Müller - 2020 - Philosophy and Technology 34 (4):875-896.
    Military drones combine surveillance technology with missile equipment in a far-reaching way. In this article, I argue that military drones could and should be an object of philosophical investigation, referring in particular to Chamayou's theory of the drone (Chamayou coined the term "an eye turned into a weapon"). Focusing on issues of human self-understanding, agency, and alterity, I examine the intricate human-technology relations in the context of designing and deploying military drones. For that purpose, I am drawing on the (...)
  • A Softwaremodule for an Ethical Elder Care Robot. Design and Implementation.Catrin Misselhorn - 2019 - Ethics in Progress 10 (2):68-81.
    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This is particularly true of artificial systems that are used in geriatric care environments. The goal of this article is to describe how one can approach the design of an elder care robot which is capable of moral decision-making and moral learning. A conceptual design for the development of such a system is provided and the steps that are necessary to (...)
  • This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making.Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative (...)
  • Robotic Bodies and the Kairos of Humanoid Theologies.James McBride - 2019 - Sophia 58 (4):663-676.
    In the not-too-distant future, robots will populate the walks of everyday life, from the manufacturing floor to corporate offices, and from battlefields to the home. While most work on the social implications of robotics focuses on such moral issues as the economic impact on human workers or the ethics of lethal machines, scant attention is paid to the effect of the advent of the robotic age on religion. Robots will likely become commonplace in the home by the end of the (...)
  • The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis.Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
  • Dignity and Dissent in Humans and Non-humans.Andreas Matthias - 2020 - Science and Engineering Ethics 26 (5):2497-2510.
    Is there a difference between human beings and those based on artificial intelligence that would affect their ability to be subjects of dignity? This paper first examines the philosophical notion of dignity as Immanuel Kant derives it from the moral autonomy of the individual. It then asks whether animals and AI systems can claim Kantian dignity or whether there is a sharp divide between human beings, animals and AI systems regarding their ability to be subjects of dignity. How this question (...)
  • Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence.Andreia Martinho, Maarten Kroesen & Caspar Chorus - 2021 - Minds and Machines 31 (2):215-237.
    As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea being that moral heterogeneity in society can be codified in terms of a small number of (...)
  • Integrating robot ethics and machine morality: the study and design of moral competence in robots.Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number (...)
  • Moral dilemmas in self-driving cars.Chiara Lucifora, Giorgio Mario Grasso, Pietro Perconti & Alessio Plebe - 2020 - Rivista Internazionale di Filosofia e Psicologia 11 (2):238-250.
    Autonomous driving systems promise important changes for the future of transport, primarily through the reduction of road accidents. However, ethical concerns, in particular two central issues, will be key to their successful development. First, situations of risk that involve inevitable harm to passengers and/or bystanders, in which some individuals must be sacrificed for the benefit of others. Second, the identification of responsible parties and liabilities in the event of an accident. Our work addresses the first of these ethical problems. We are (...)
  • Industrial challenges of military robotics.George R. Lucas - 2011 - Journal of Military Ethics 10 (4):274-295.
    This article evaluates the "drive toward greater autonomy" in lethally-armed unmanned systems. Following a summary of the main criticisms and challenges to lethal autonomy, both engineering and ethical, raised by opponents of this effort, the article turns toward solutions or responses that defense industries and military end users might seek to incorporate in design, testing and manufacturing to address these concerns. The way forward encompasses a two-fold testing procedure for reliability incorporating empirical, quantitative benchmarks of performance in compliance with (...)
  • Computational Meta-Ethics: Towards the Meta-Ethical Robot.Gert-Jan C. Lokhorst - 2011 - Minds and Machines 21 (2):261-274.
    It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do’s and don’ts at their disposal. However, such a list may be inconsistent, incomplete or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning—in other words, if they had some meta-ethical capacities. (...)
  • Autonomous Driving and Perverse Incentives.Wulf Loh & Catrin Misselhorn - 2019 - Philosophy and Technology 32 (4):575-590.
    This paper discusses the ethical implications of perverse incentives with regard to autonomous driving. We define perverse incentives as a feature of an action, technology, or social policy that invites behavior which negates the primary goal of the actors initiating the action, introducing a certain technology, or implementing a social policy. As a special form of means-end-irrationality, perverse incentives are to be avoided from a prudential standpoint, as they prove to be directly self-defeating: They are not just a form of (...)
  • From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence.Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. This is (...)
  • A Rawlsian algorithm for autonomous vehicles.Derek Leben - 2017 - Ethics and Information Technology 19 (2):107-115.
    Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas where actions result in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm will gather the vehicle’s estimation of probability of survival for each person in each action, then calculate which action a self-interested person would agree to if he or she were in an original bargaining position of fairness. I will employ Rawls’ assumption that the (...)
  • Accounting for the Moral Significance of Technology: Revisiting the Case of Non-Medical Sex Selection.Olya Kudina - 2019 - Journal of Bioethical Inquiry 16 (1):75-85.
    This article explores the moral significance of technology, reviewing a microfluidic chip for sperm sorting and its use for non-medical sex selection. I explore how a specific material setting of this new iteration of pre-pregnancy sex selection technology—with a promised low cost, non-invasive nature and possibility to use at home—fosters new and exacerbates existing ethical concerns. I compare this new technology with the existing sex selection methods of sperm sorting and Prenatal Genetic Diagnosis. Current ethical and political debates on emerging (...)
  • Phronetic Ethics in Social Robotics: A New Approach to Building Ethical Robots.Roman Krzanowski & Paweł Polak - 2020 - Studies in Logic, Grammar and Rhetoric 63 (1):165-183.
    Social robots are autonomous robots, or Artificial Moral Agents (AMAs), that are meant to interact with, respect, and embody human ethical values. However, the conceptual and practical problems of building such systems have not yet been resolved, and they pose a significant challenge for computational modeling. It seems that the lack of success in constructing such robots is due, ceteris paribus, to the conceptual and algorithmic limitations of the current design of ethical robots. This paper proposes a new approach for developing ethical capacities in (...)
  • Cognitive Load Selectively Interferes with Utilitarian Moral Judgment.Joshua D. Greene, Sylvia A. Morelli, Kelly Lowenberg, Leigh E. Nystrom & Jonathan D. Cohen - 2008 - Cognition 107 (3):1144.
  • Decentered ethics in the machine era and guidance for AI regulation.Christian Hugo Hoffmann & Benjamin Hahn - 2020 - AI and Society 35 (3):635-644.
    Recent advancements in AI have prompted a large number of AI ethics guidelines published by governments and nonprofits. While many of these papers propose concrete or seemingly applicable ideas, few philosophically sound proposals are made. In particular, we observe that the line of questioning has often not been examined critically and underlying conceptual problems not always dealt with at the root. In this paper, we investigate the nature of ethical AI systems and what their moral status might be by first (...)
  • Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations.Johannes Himmelreich - 2018 - Ethical Theory and Moral Practice 21 (3):669-684.
    Trolley cases are widely considered central to the ethics of autonomous vehicles. We caution against this by identifying four problems. Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. Furthermore, trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. Furthermore, trolley cases seem to demand a moral answer when a political answer is called for. Finally, trolley cases might be epistemically problematic in several ways. (...)
  • Forbidden knowledge in machine learning reflections on the limits of research and publication.Thilo Hagendorff - 2021 - AI and Society 36 (3):767-781.
    Certain research strands can yield “forbidden knowledge”. This term refers to knowledge that is considered too sensitive, dangerous or taboo to be produced or shared. Discourses about such publication restrictions are already entrenched in scientific fields like IT security, synthetic biology or nuclear physics research. This paper makes the case for transferring this discourse to machine learning research. Some machine learning applications can very easily be misused and unfold harmful consequences, for instance, with regard to generative video or text synthesis, (...)
  • Moral Gridworlds: A Theoretical Proposal for Modeling Artificial Moral Cognition.Julia Haas - 2020 - Minds and Machines 30 (2):219-246.
    I describe a suite of reinforcement learning environments in which artificial agents learn to value and respond to moral content and contexts. I illustrate the core principles of the framework by characterizing one such environment, or “gridworld,” in which an agent learns to trade-off between monetary profit and fair dealing, as applied in a standard behavioral economic paradigm. I then highlight the core technical and philosophical advantages of the learning approach for modeling moral cognition, and for addressing the so-called value (...)
  • The other question: can and should robots have rights?David J. Gunkel - 2018 - Ethics and Information Technology 20 (2):87-99.
    This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. (...)
  • Mind the gap: responsible robotics and the problem of responsibility.David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • Particularism, Analogy, and Moral Cognition.Marcello Guarini - 2010 - Minds and Machines 20 (3):385-422.
    ‘Particularism’ and ‘generalism’ refer to families of positions in the philosophy of moral reasoning, with the former playing down the importance of principles, rules or standards, and the latter stressing their importance. Part of the debate has taken an empirical turn, and this turn has implications for AI research and the philosophy of cognitive modeling. In this paper, Jonathan Dancy’s approach to particularism (arguably one of the best known and most radical approaches) is questioned both on logical and empirical grounds. (...)
  • Introduction: Machine Ethics and the Ethics of Building Intelligent Machines. [REVIEW]Marcello Guarini - 2013 - Topoi 32 (2):213-215.
  • What do we owe to intelligent robots?John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
  • Moral Status and Intelligent Robots.John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
  • Building Moral Robots: Ethical Pitfalls and Challenges.John-Stewart Gordon - 2020 - Science and Engineering Ethics 26 (1):141-157.
    This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is “computable” depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. (...)
  • Artificial moral and legal personhood.John-Stewart Gordon - forthcoming - AI and Society:1-15.
    This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which (...)
  • Moral control and ownership in AI systems.Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems (AIS) are increasingly being used in multiple applications and receiving more attention from the (...)
  • Martial Bliss: War and Peace in Popular Science Robotics. [REVIEW]Robert M. Geraci - 2011 - Philosophy and Technology 24 (3):339-354.
    In considering how to best deploy robotic systems in public and private sectors, we must consider what individuals will expect from the robots with which they interact. Public awareness of robotics—as both military machines and domestic helpers—emerges out of a braided stream composed of science fiction and popular science. These two genres influence news media, government and corporate spending, and public expectations. In the Euro-American West, both science fiction and popular science are ambivalent about the military applications for robotics, and (...)
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents.Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either (...)
  • Artificial Intelligence, Values, and Alignment.Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements (...)
  • Eight grand challenges for value sensitive design from the 2016 Lorentz workshop.Batya Friedman, Maaike Harbers, David G. Hendry, Jeroen van den Hoven, Catholijn Jonker & Nick Logler - 2018 - Ethics and Information Technology 23 (1):5-16.
    In this article, we report on eight grand challenges for value sensitive design, which were developed at a one-week workshop, Value Sensitive Design: Charting the Next Decade, Lorentz Center, Leiden, The Netherlands, November 14–18, 2016. A grand challenge is a substantial problem, opportunity, or question that motivates sustained research and design activity. The eight grand challenges are: Accounting for Power, Evaluating Value Sensitive Design, Framing and Prioritizing Values, Professional and Industry Appropriation, Tech policy, Values and Human Emotions, Value Sensitive Design (...)
  • Towards the Epistemology of the Internet of Things: Techno-Epistemology and Ethical Considerations Through the Prism of Trust.Ori Freiman - 2014 - International Review of Information Ethics 22:6-22.
    This paper discusses the epistemology of the Internet of Things [IoT] by focusing on the topic of trust. It presents various frameworks of trust, and argues that the ethical framework of trust is what constitutes our responsibility to reveal desired norms and standards and embed them in other frameworks of trust. The first section briefly presents the IoT and scrutinizes the scarce philosophical work that has been done on this subject so far. The second section suggests that the field of (...)