  • Problems with “Friendly AI”.Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AIs in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI, taking Fröding and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
  • Human Brain Organoids: Why There Can Be Moral Concerns If They Grow Up in the Lab and Are Transplanted or Destroyed.Andrea Lavazza & Massimo Reichlin - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):582-596.
    Human brain organoids (HBOs) are three-dimensional biological entities grown in the laboratory in order to recapitulate the structure and functions of the adult human brain. They can be taken to be novel living entities for their specific features and uses. As a contribution to the ongoing discussion on the use of HBOs, the authors identify three sets of reasons for moral concern. The first set of reasons regards the potential emergence of sentience/consciousness in HBOs that would endow them with a (...)
  • Human achievement and artificial intelligence.Brett Karlan - 2023 - Ethics and Information Technology 25 (3):1-12.
    In domains as disparate as playing Go and predicting the structure of proteins, artificial intelligence (AI) technologies have begun to perform at levels beyond what any human can achieve. Does this fact represent something lamentable? Does superhuman AI performance somehow undermine the value of human achievements in these areas? Go grandmaster Lee Sedol suggested as much when he announced his retirement from professional Go, blaming the advances of Go-playing programs like AlphaGo for sapping his will to play the game at (...)
  • Can we wrong a robot?Nancy S. Jecker - 2023 - AI and Society 38 (1):259-268.
    With the development of increasingly sophisticated sociable robots, robot-human relationships are being transformed. Not only can sociable robots furnish emotional support and companionship for humans, humans can also form relationships with robots that they value highly. It is natural to ask: do robots that stand in close relationships with us have any moral standing over and above their purely instrumental value as means to human ends? We might ask our question this way, ‘Are there ways we can act towards robots (...)
  • The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns.Elisabeth Hildt - 2023 - American Journal of Bioethics Neuroscience 14 (2):58-71.
    Can machines be conscious, and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of future machine consciousness, it outlines the role of consciousness for ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions (...)
  • The Moral Consideration of Artificial Entities: A Literature Review.Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • Shifting Perspectives.David J. Gunkel - 2020 - Science and Engineering Ethics 26 (5):2527-2532.
  • Debate: What is Personhood in the Age of AI?David J. Gunkel & Jordan Joseph Wales - 2021 - AI and Society 36:473–486.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
  • When does “no” mean no? Insights from sex robots.Anastasiia D. Grigoreva, Joshua Rottman & Arber Tasimi - 2024 - Cognition 244 (C):105687.
  • Moral Status and Intelligent Robots.John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
  • In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2021 - AI and Society 1:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what conditions should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
  • What Makes Work “Good” in the Age of Artificial Intelligence (AI)? Islamic Perspectives on AI-Mediated Work Ethics.Mohammed Ghaly - forthcoming - The Journal of Ethics:1-25.
    Artificial intelligence (AI) technologies are increasingly creeping into the work sphere, thereby gradually questioning and/or disturbing the long-established moral concepts and norms communities have been using to define what makes work good. Each community, and Muslims are no exception in this regard, has to revisit its moral world to provide well-thought-out frameworks that can engage with the challenging ethical questions raised by the new phenomenon of AI-mediated work. For a systematic analysis of the broad topic of AI-mediated work ethics from (...)
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents.Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either (...)
  • A Friendly Critique of Levinasian Machine Ethics.Patrick Gamez - 2022 - Southern Journal of Philosophy 60 (1):118-149.
  • Argumentation-Based Logic for Ethical Decision Making.Panayiotis Frangos, Petros Stefaneas & Sofia Almpani - 2022 - Studia Humana 11 (3-4):46-52.
    As automation in artificial intelligence is increasing, we will need to automate a growing amount of ethical decision making. However, ethical decision-making raises novel challenges for engineers, ethicists and policymakers, who will have to explore new ways to realize this task. The presented work focuses on the development and formalization of models that aim at ensuring a correct ethical behaviour of artificial intelligent agents, in a provable way, extending and implementing a logic-based proving calculus that is based on argumentation (...)
  • Towards Establishing Criteria for the Ethical Analysis of Artificial Intelligence.Michele Farisco, Kathinka Evers & Arleen Salles - 2020 - Science and Engineering Ethics 26 (5):2413-2425.
    Ethical reflection on Artificial Intelligence has become a priority. In this article, we propose a methodological model for a comprehensive ethical analysis of some uses of AI, notably as a replacement of human actors in specific activities. We emphasize the need for conceptual clarification of relevant key terms in order to undertake such reflection. Against that background, we distinguish two levels of ethical analysis, one practical and one theoretical. Focusing on the state of AI at present, we suggest that regardless (...)
  • Why the Epistemic Objection Against Using Sentience as Criterion of Moral Status is Flawed.Leonard Dung - 2022 - Science and Engineering Ethics 28 (6):1-15.
    According to a common view, sentience is necessary and sufficient for moral status. In other words, whether a being has intrinsic moral relevance is determined by its capacity for conscious experience. The _epistemic objection_ derives from our profound uncertainty about sentience. According to this objection, we cannot use sentience as a _criterion_ to ascribe moral status in practice because we won’t know in the foreseeable future which animals and AI systems are sentient while ethical questions regarding the possession of moral (...)
  • Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more (...)
  • Preserving the Normative Significance of Sentience.Leonard Dung - 2024 - Journal of Consciousness Studies 31 (1):8-30.
    According to an orthodox view, the capacity for conscious experience (sentience) is relevant to the distribution of moral status and value. However, physicalism about consciousness might threaten the normative relevance of sentience. According to the indeterminacy argument, sentience is metaphysically indeterminate while indeterminacy of sentience is incompatible with its normative relevance. According to the introspective argument (by François Kammerer), the unreliability of our conscious introspection undercuts the justification for belief in the normative relevance of consciousness. I defend the normative relevance (...)
  • How to deal with risks of AI suffering.Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...