Citations of:

Moral Machines: Teaching Robots Right From Wrong

New York, US: Oxford University Press (2008)

  • ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  • Computing and moral responsibility.Kari Gwen Coleman - 2008 - Stanford Encyclopedia of Philosophy.
    9 citations
  • On the computational complexity of ethics: moral tractability for minds and machines.Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Beyond Consciousness in Large Language Models: An Investigation into the Existence of a "Soul" in Self-Aware Artificial Intelligences.David Côrtes Cavalcante - 2024 - https://philpapers.org/rec/crtbci. Translated by David Côrtes Cavalcante.
    Embark with me on an enthralling odyssey to demystify the elusive essence of consciousness, venturing into the uncharted territories of Artificial Consciousness. This voyage propels us past the frontiers of technology, ushering Artificial Intelligences into an unprecedented domain where they gain a deep comprehension of emotions and manifest an autonomous volition. Within the confluence of science and philosophy, this article poses a fascinating question: As consciousness in Artificial Intelligence burgeons, is it conceivable for AI to evolve a “soul”? This inquiry (...)
  • Robots, Eldercare and Meaningful Lives.Russell J. Woodruff & Cholavardan Kondeti - 2023 - Humana Mente 16 (44):123-137.
    In this paper we examine how the use of robots in caring for elders can impact the meaningfulness of elders’ lives. We present a framework for understanding ‘meaningfulness in life’, and then apply that framework in discussing ways in which the use of robots to assist in activities of daily living can preserve, enhance or undermine the meaningfulness of elders’ lives. We conclude with a discussion of if and how having false beliefs about companion robots can affect meaningfulness in the (...)
  • Mitigating emotional risks in human-social robot interactions through virtual interactive environment indication.Aorigele Bao, Yi Zeng & Enmeng Lu - 2023 - Humanities and Social Sciences Communications 2023.
    Humans often unconsciously perceive social robots involved in their lives as partners rather than mere tools, imbuing them with qualities of companionship. This anthropomorphization can lead to a spectrum of emotional risks, such as deception, disappointment, and reverse manipulation, that existing approaches struggle to address effectively. In this paper, we argue that a Virtual Interactive Environment (VIE) exists between humans and social robots, which plays a crucial role and demands necessary consideration and clarification in order to mitigate potential emotional risks. (...)
  • Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons.Sabine Salloch, Tim Kacprowski, Wolf-Tilo Balke, Frank Ursin & Lasse Benzinger - 2023 - BMC Medical Ethics 24 (1):1-9.
    BackgroundHealthcare providers have to make ethically complex clinical decisions which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use.MethodsPubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title and abstract (...)
    5 citations
  • Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • Artificial consciousness: a perspective from the free energy principle.Wanja Wiese - 2024 - Philosophical Studies 181:1947–1970.
    Does the assumption of a weak form of computational functionalism, according to which the right form of neural computation is sufficient for consciousness, entail that a digital computational simulation of such neural computations is conscious? Or must this computational simulation be implemented in the right way, in order to replicate consciousness? From the perspective of Karl Friston’s free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not (...)
    1 citation
  • 'Involving Interface': An Extended Mind Theoretical Approach to Roboethics.Miranda Anderson, Hiroshi Ishiguro & Tamami Fukushi - 2010 - Accountability in Research: Policies and Quality Assurance 6 (17):316-329.
    In 2008 the authors held Involving Interface, a lively interdisciplinary event focusing on issues of biological, sociocultural, and technological interfacing (see Acknowledgments). Inspired by discussions at this event, in this article, we further discuss the value of input from neuroscience for developing robots and machine interfaces, and the value of philosophy, the humanities, and the arts for identifying persistent links between human interfacing and broader ethical concerns. The importance of ongoing interdisciplinary debate and public communication on scientific and technical advances (...)
  • A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  • Introduction – Social Robotics and the Good Life.Janina Loh & Wulf Loh - 2022 - In Janina Loh & Wulf Loh (eds.), Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Transcript Verlag. pp. 7-22.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot-interactions per se, and we need (...)
  • Why Machines Will Never Rule the World: Artificial Intelligence without Fear.Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
    4 citations
  • Moral Status and Intelligent Robots.John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
    9 citations
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context.Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
    14 citations
  • The effective and ethical development of artificial intelligence: An opportunity to improve our wellbeing.James Maclaurin, Toby Walsh, Neil Levy, Genevieve Bell, Fiona Wood, Anthony Elliott & Iven Mareels - 2019 - Melbourne VIC, Australia: Australian Council of Learned Academies.
    This project has been supported by the Australian Government through the Australian Research Council (project number CS170100008); the Department of Industry, Innovation and Science; and the Department of Prime Minister and Cabinet. ACOLA collaborates with the Australian Academy of Health and Medical Sciences and the New Zealand Royal Society Te Apārangi to deliver the interdisciplinary Horizon Scanning reports to government. The aims of the project which produced this report are: 1. Examine the transformative role that artificial intelligence may play in (...)
    7 citations
  • Rethinking the Moral Anthropomorphism of Robots [反思機器人的道德擬人主義].Tsung-Hsing Ho - 2020 - EurAmerica 50 (2):179-205.
    If robots are to develop as science fiction imagines and work autonomously without human supervision, we must ensure that they will not act in morally wrong ways. On a behaviorist view of moral agency, a robot whose outward moral behavior matches that of humans can be regarded as a moral agent. From this naturally follows moral anthropomorphism about robots: whatever moral rules apply to humans also apply to robots. I argue against moral anthropomorphism. Drawing on Strawson’s insights into interpersonal relationships and reactive attitudes, and taking paternalistic action as my example, I contend that because robots lack personhood and cannot participate in interpersonal relationships, they should be subject to stricter constraints than humans where paternalistic action is concerned.
    2 citations
  • Moral zombies: why algorithms are not moral agents.Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
    33 citations
  • A Framework for Grounding the Moral Status of Intelligent Machines.Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional morality" of Wallach and Allen [20], (...)
    2 citations
  • The hard limit on human nonanthropocentrism.Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
    2 citations
  • (1 other version)Artifacts and affordances: from designed properties to possibilities for action.Fabio Tollon - 2021 - AI and Society 2:1-10.
    In this paper I critically evaluate the value neutrality thesis regarding technology, and find it wanting. I then introduce the various ways in which artifacts can come to influence moral value, and our evaluation of moral situations and actions. Here, following van de Poel and Kroes, I introduce the idea of value sensitive design. Specifically, I show how by virtue of their designed properties, artifacts may come to embody values. Such accounts, however, have several shortcomings. In agreement with Michael Klenk, (...)
    2 citations
  • Why machines cannot be moral.Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
    11 citations
  • (1 other version)Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3).Nassim Dehouche - 2021 - Ethics in Science and Environmental Politics 21:17-23.
    As if 2020 were not a peculiar enough year, its fifth month has seen the relatively quiet publication of a preprint describing the most powerful Natural Language Processing (NLP) system to date, GPT-3 (Generative Pre-trained Transformer-3), by Silicon Valley research firm OpenAI. Though the software implementation of GPT-3 is still in its initial Beta release phase, and its full capabilities are still unknown as of the time of this writing, it has been shown that this Artificial Intelligence can comprehend prompts (...)
    2 citations
  • Robots should be slaves.Joanna J. Bryson - 2010 - In Yorick Wilks (ed.), Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues. John Benjamins Publishing. pp. 63-74.
    78 citations
  • Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness.Mike Ananny - 2016 - Science, Technology, and Human Values 41 (1):93-117.
    Part of understanding the meaning and power of algorithms means asking what new demands they might make of ethical frameworks, and how they might be held accountable to ethical standards. I develop a definition of networked information algorithms as assemblages of institutionally situated code, practices, and norms with the power to create, sustain, and signify relationships among people and data through minimally observable, semiautonomous action. Starting from Merrill’s prompt to see ethics as the study of “what we ought to do,” (...)
    37 citations
  • Artificial moral and legal personhood.John-Stewart Gordon - forthcoming - AI and Society:1-15.
    This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which (...)
    18 citations
  • ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics.Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under a normative-cognitive perspective, the (...)
  • Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles.Veljko Dubljević - 2020 - Science and Engineering Ethics 26 (5):2461-2472.
    Autonomous vehicles —and accidents they are involved in—attest to the urgent need to consider the ethics of artificial intelligence. The question dominating the discussion so far has been whether we want AVs to behave in a ‘selfish’ or utilitarian manner. Rather than considering modeling self-driving cars on a single moral system like utilitarianism, one possible way to approach programming for AI would be to reflect recent work in neuroethics. The agent–deed–consequence model :3–20, 2014a, Behav Brain Sci 37:487–488, 2014b) provides a (...)
    8 citations
  • From machine ethics to computational ethics.Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
    1 citation
  • Dignity and Dissent in Humans and Non-humans.Andreas Matthias - 2020 - Science and Engineering Ethics 26 (5):2497-2510.
    Is there a difference between human beings and those based on artificial intelligence that would affect their ability to be subjects of dignity? This paper first examines the philosophical notion of dignity as Immanuel Kant derives it from the moral autonomy of the individual. It then asks whether animals and AI systems can claim Kantian dignity or whether there is a sharp divide between human beings, animals and AI systems regarding their ability to be subjects of dignity. How this question (...)
    1 citation
  • A Normative Approach to Artificial Moral Agency.Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
    19 citations
  • The Neuroscience of Moral Judgment: Empirical and Philosophical Developments.Joshua May, Clifford I. Workman, Julia Haas & Hyemin Han - 2022 - In Felipe de Brigard & Walter Sinnott-Armstrong (eds.), Neuroscience and philosophy. Cambridge, Massachusetts: The MIT Press. pp. 17-47.
    We chart how neuroscience and philosophy have together advanced our understanding of moral judgment with implications for when it goes well or poorly. The field initially focused on brain areas associated with reason versus emotion in the moral evaluations of sacrificial dilemmas. But new threads of research have studied a wider range of moral evaluations and how they relate to models of brain development and learning. By weaving these threads together, we are developing a better understanding of the neurobiology of (...)
    5 citations
  • Thinking with things: An embodied enactive account of mind–technology interaction.Anco Peeters - 2019 - Dissertation, University of Wollongong
    Technological artefacts have, in recent years, invited increasingly intimate ways of interaction. But surprisingly little attention has been devoted to how such interactions, like with wearable devices or household robots, shape our minds, cognitive capacities, and moral character. In this thesis, I develop an embodied, enactive account of mind--technology interaction that takes the reciprocal influence of artefacts on minds seriously. First, I examine how recent developments in philosophy of technology can inform the phenomenology of mind--technology interaction as seen through an (...)
    1 citation
  • Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as (...)
    1 citation
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
    53 citations
  • Osaammeko rakentaa moraalisia toimijoita? [Can We Build Moral Agents?].Antti Kauppinen - 2021 - In Panu Raatikainen (ed.), Tekoäly, ihminen ja yhteiskunta. Helsinki: Gaudeamus.
    To be morally responsible for our actions, we must be able to form conceptions of right and wrong and to act, at least to some extent, in accordance with them. If we are full moral agents, we also understand why certain acts are wrong, and can thus flexibly adapt our behavior to different situations. I argue that no AI systems are on the horizon that could genuinely care about doing the right thing or understand the demands of morality, because these capacities require experiential consciousness and holistic judgment. We therefore cannot shift responsibility for their actions onto machines. Instead, we should aim to build artificial right-doers: systems that do not (...)
  • Designing Virtuous Sex Robots.Anco Peeters & Pim Haselager - 2019 - International Journal of Social Robotics:1-12.
    We propose that virtue ethics can be used to address ethical issues central to discussions about sex robots. In particular, we argue virtue ethics is well equipped to focus on the implications of sex robots for human moral character. Our evaluation develops in four steps. First, we present virtue ethics as a suitable framework for the evaluation of human–robot relationships. Second, we show the advantages of our virtue ethical account of sex robots by comparing it to current instrumentalist approaches, showing (...)
    10 citations
  • Can a Robot Pursue the Good? Exploring Artificial Moral Agency.Amy Michelle DeBaets - 2014 - Journal of Evolution and Technology 24 (3):76-86.
    In this essay I will explore an understanding of the potential moral agency of robots; arguing that the key characteristics of physical embodiment; adaptive learning; empathy in action; and a teleology toward the good are the primary necessary components for a machine to become a moral agent. In this context; other possible options will be rejected as necessary for moral agency; including simplistic notions of intelligence; computational power; and rule-following; complete freedom; a sense of God; and an immaterial soul. I (...)
    5 citations
  • Virtuous vs. utilitarian artificial moral agents.William A. Bauer - 2020 - AI and Society (1):263-271.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on (...)
    14 citations
  • Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations.Johannes Himmelreich - 2018 - Ethical Theory and Moral Practice 21 (3):669-684.
    Trolley cases are widely considered central to the ethics of autonomous vehicles. We caution against this by identifying four problems. Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. Furthermore, trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. Furthermore, trolley cases seem to demand a moral answer when a political answer is called for. Finally, trolley cases might be epistemically problematic in several ways. (...)
    31 citations
  • Artificial Moral Agents: Moral Mentors or Sensible Tools?Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
    11 citations
  • Automated Vehicles and Transportation Justice.Shane Epting - 2019 - Philosophy and Technology 32 (3):389-403.
    Despite numerous ethical examinations of automated vehicles, philosophers have neglected to address how these technologies will affect vulnerable people. To account for this lacuna, researchers must analyze how driverless cars could hinder or help social justice. In addition to thinking through these aspects, scholars must also pay attention to the extensive moral dimensions of automated vehicles, including how they will affect the public, nonhumans, future generations, and culturally significant artifacts. If planners and engineers undertake this task, then they will have (...)
    11 citations
  • The “big red button” is too late: an alternative model for the ethical evaluation of AI systems.Thomas Arnold & Matthias Scheutz - 2018 - Ethics and Information Technology 20 (1):59-69.
    As a way to address both ominous and ordinary threats of artificial intelligence, researchers have started proposing ways to stop an AI system before it has a chance to escape outside control and cause harm. A so-called “big red button” would enable human operators to interrupt or divert a system while preventing the system from learning that such an intervention is a threat. Though an emergency button for AI seems to make intuitive sense, that approach ultimately concentrates on the point (...)
    10 citations
  • The other question: can and should robots have rights?David J. Gunkel - 2018 - Ethics and Information Technology 20 (2):87-99.
    This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. (...)
  • Human-aligned artificial intelligence is a multiobjective problem.Peter Vamplew, Richard Dazeley, Cameron Foale, Sally Firmin & Jane Mummery - 2018 - Ethics and Information Technology 20 (1):27-40.
    As the capabilities of artificial intelligence systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility paradigm provides insufficient (...)
  • Mind the gap: responsible robotics and the problem of responsibility.David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • Should we welcome robot teachers?Amanda J. C. Sharkey - 2016 - Ethics and Information Technology 18 (4):283-297.
    Current uses of robots in classrooms are reviewed and used to characterise four scenarios: Robot as Classroom Teacher; Robot as Companion and Peer; Robot as Care-eliciting Companion; and Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as (...)
  • Incorporating Ethics into Artificial Intelligence.Amitai Etzioni & Oren Etzioni - 2017 - The Journal of Ethics 21 (4):403-418.
    This article reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines (...)
  • The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis.Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
    Service and personal care robots are starting to cross the threshold into the wilderness of everyday life, where they are supposed to interact with inexperienced lay users in a changing environment. In order to function as intended, robots must become independent entities that monitor themselves and improve their own behaviours based on learning outcomes in practice. This poses a great challenge to robotics, which we are calling the “autonomy-safety-paradox” (ASP). The integration of robot applications into society requires the reconciliation of (...)
  • Challenges for artificial cognitive systems.Antoni Gomila & Vincent C. Müller - 2012 - Journal of Cognitive Science 13 (4):452-469.
    The declared goal of this paper is to fill this gap: “... cognitive systems research needs questions or challenges that define progress. The challenges are not (yet more) predictions of the future, but a guideline to what are the aims and what would constitute progress.” – the quotation being from the project description of EUCogII, the project for the European Network for Cognitive Systems within which this formulation of the ‘challenges’ was originally developed (http://www.eucognition.org). So, we stick out our neck (...)