  • Preserving the Normative Significance of Sentience.Leonard Dung - 2024 - Journal of Consciousness Studies 31 (1):8-30.
    According to an orthodox view, the capacity for conscious experience (sentience) is relevant to the distribution of moral status and value. However, physicalism about consciousness might threaten the normative relevance of sentience. According to the indeterminacy argument, sentience is metaphysically indeterminate while indeterminacy of sentience is incompatible with its normative relevance. According to the introspective argument (by François Kammerer), the unreliability of our conscious introspection undercuts the justification for belief in the normative relevance of consciousness. I defend the normative relevance (...)
  • Is moral status done with words?Miriam Gorr - 2024 - Ethics and Information Technology 26 (1).
    This paper critically examines Coeckelbergh’s (2023) performative view of moral status. Drawing parallels to Searle’s social ontology, two key claims of the performative view are identified: (1) Making a moral status claim is equivalent to making a moral status declaration. (2) A successful declaration establishes the institutional fact that the entity has moral status. Closer examination, however, reveals flaws in both claims. The second claim faces a dilemma: individual instances of moral status declaration are likely to fail because they do (...)
  • Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup.Simon N. Balle - 2022 - AI and Society 37 (2):535-548.
    Empirical research on human–robot interaction has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients since this rests on anthropomorphism and ‘over-identification’—or correct since spontaneous moral intuition and behavior toward nonhumans is indicative of moral patienthood, such that social robots become our ‘Others’? In this research paper, I weave extant HRI studies that demonstrate empathic (...)
  • On the moral status of social robots: considering the consciousness criterion.Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
  • Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning, and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more (...)
  • Moral zombies: why algorithms are not moral agents.Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  • Ethical aspects of AI robots for agri-food; a relational approach based on four case studies.Simone van der Burg, Else Giesbers, Marc-Jeroen Bogaardt, Wijbrand Ouweltjes & Kees Lokhorst - forthcoming - AI and Society:1-15.
    In recent years, the development of AI robots for agriculture, livestock farming and food processing industries has been rapidly increasing. These robots are expected to help produce and deliver food more efficiently for a growing human population, but they also raise societal and ethical questions. As the types of questions these AI robots raise in society have rarely been empirically explored, we engaged in four case studies focussing on four types of AI robots for agri-food ‘in the making’: manure collectors, (...)
  • Why Indirect Harms do not Support Social Robot Rights.Paula Sweeney - 2022 - Minds and Machines 32 (4):735-749.
    There is growing evidence to support the claim that we react differently to robots than we do to other objects. In particular, we react differently to robots with which we have some form of social interaction. In this paper I critically assess the claim that, due to our tendency to become emotionally attached to social robots, permitting their harm may be damaging for society and as such we should consider introducing legislation to grant social robots rights and protect them from (...)
  • To Each Technology Its Own Ethics: The Problem of Ethical Proliferation.Henrik Skaug Sætra & John Danaher - 2022 - Philosophy and Technology 35 (4):1-26.
    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and (...)
  • Technology and moral change: the transformation of truth and trust.Henrik Skaug Sætra & John Danaher - 2022 - Ethics and Information Technology 24 (3):1-16.
    Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing and can change our perspective on those values. In brief, the paper argues that technology is transforming these values by (...)
  • Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • Robots in the Workplace: a Threat to—or Opportunity for—Meaningful Work?Jilles Smids, Sven Nyholm & Hannah Berkers - 2020 - Philosophy and Technology 33 (3):503-522.
    The concept of meaningful work has recently received increased attention in philosophy and other disciplines. However, the impact of the increasing robotization of the workplace on meaningful work has received very little attention so far. Doing work that is meaningful leads to higher job satisfaction and increased worker well-being, and some argue for a right to access to meaningful work. In this paper, we therefore address the impact of robotization on meaningful work. We do so by identifying five key aspects (...)
  • Danaher’s Ethical Behaviourism: An Adequate Guide to Assessing the Moral Status of a Robot?Jilles Smids - 2020 - Science and Engineering Ethics 26 (5):2849-2866.
    This paper critically assesses John Danaher’s ‘ethical behaviourism’, a theory on how the moral status of robots should be determined. The basic idea of this theory is that a robot’s moral status is determined decisively on the basis of its observable behaviour. If it behaves sufficiently similar to some entity that has moral status, such as a human or an animal, then we should ascribe the same moral status to the robot as we do to this human or animal. The (...)
  • The Weirdness of the World.Eric Schwitzgebel - 2024 - Princeton University Press.
  • Engineering responsibility.Nicholas Sars - 2022 - Ethics and Information Technology 24 (3):1-10.
    Many optimistic responses have been proposed to bridge the threat of responsibility gaps which artificial systems create. This paper identifies a question which arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advance such that AI or robot agents become fitting participants within responsibility exchanges, then responsibility itself might be engineered. If we have good (...)
  • Could you hate a robot? And does it matter if you could?Helen Ryland - 2021 - AI and Society 36 (2):637-649.
    This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in (...)
  • Criticizing Danaher’s Approach to Superficial State Deception.Maciej Musiał - 2023 - Science and Engineering Ethics 29 (5):1-15.
    If existing or future robots appear to have some capacity, state or property, how can we determine whether they truly have it or whether we are deceived into believing so? John Danaher addresses this question by formulating his approach to what he refers to as superficial state deception (SSD) from the perspective of his theory termed ethical behaviourism (EB), which was initially designed to determine the moral status of robots. In summary, Danaher believes that focusing on behaviour is sufficient to (...)
  • Will intelligent machines become moral patients?Parisa Moosavi - forthcoming - Philosophy and Phenomenological Research.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are (...)
  • Is it time for robot rights? Moral status in artificial entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  • Investigating user perceptions of commercial virtual assistants: A qualitative study.Leilasadat Mirghaderi, Monika Sziron & Elisabeth Hildt - 2022 - Frontiers in Psychology 13.
    As commercial virtual assistants become an integrated part of almost every smart device that we use on a daily basis, including but not limited to smartphones, speakers, personal computers, watches, TVs, and TV sticks, there are pressing questions that call for the study of how participants perceive commercial virtual assistants and what relational roles they assign to them. Furthermore, it is crucial to study which characteristics of commercial virtual assistants are perceived as important for establishing affective interaction with commercial virtual (...)
  • The Moral Significance of the Phenomenology of Phenomenal Consciousness in Case of Artificial Agents.Kamil Mamak - 2023 - American Journal of Bioethics Neuroscience 14 (2):160-162.
    In a recent article, Joshua Shepherd identifies six problems with attributing moral status to nonhumans on the basis of consciousness (Shepherd 2023). In this commentary, I want to draw out yet ano...
  • Military robots should not look like a humans.Kamil Mamak & Kaja Kowalczewska - 2023 - Ethics and Information Technology 25 (3):1-10.
    Using robots in military contexts is problematic at many levels. There are social, legal, and ethical issues that should be discussed first before their wider deployment. In this paper, we focus on an additional problem: their human likeness. We claim that military robots should not look like humans. That design choice may bring additional risks that endanger human lives and thereby contradicts the very justification for deploying robots at war, which is decreasing human deaths and injuries. We discuss (...)
  • Humans, Neanderthals, robots and rights.Kamil Mamak - 2022 - Ethics and Information Technology 24 (3):1-9.
    Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will (...)
  • Contesting the Consciousness Criterion: A More Radical Approach to the Moral Status of Non-Humans.Joan Llorca-Albareda & Gonzalo Díaz-Cobacho - 2023 - American Journal of Bioethics Neuroscience 14 (2):158-160.
    Numerous and diverse discussions about moral status have taken place over the years. However, this concept was not born until the moral weight of non-human entities was raised. Animal ethics, for i...
  • Problems with “Friendly AI”.Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AI in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI with Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
  • Human Brain Organoids: Why There Can Be Moral Concerns If They Grow Up in the Lab and Are Transplanted or Destroyed.Andrea Lavazza & Massimo Reichlin - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):582-596.
    Human brain organoids (HBOs) are three-dimensional biological entities grown in the laboratory in order to recapitulate the structure and functions of the adult human brain. They can be taken to be novel living entities for their specific features and uses. As a contribution to the ongoing discussion on the use of HBOs, the authors identify three sets of reasons for moral concern. The first set of reasons regards the potential emergence of sentience/consciousness in HBOs that would endow them with a (...)
  • Human achievement and artificial intelligence.Brett Karlan - 2023 - Ethics and Information Technology 25 (3):1-12.
    In domains as disparate as playing Go and predicting the structure of proteins, artificial intelligence (AI) technologies have begun to perform at levels beyond which any humans can achieve. Does this fact represent something lamentable? Does superhuman AI performance somehow undermine the value of human achievements in these areas? Go grandmaster Lee Sedol suggested as much when he announced his retirement from professional Go, blaming the advances of Go-playing programs like AlphaGo for sapping his will to play the game at (...)
  • Can we wrong a robot?Nancy S. Jecker - 2023 - AI and Society 38 (1):259-268.
    With the development of increasingly sophisticated sociable robots, robot-human relationships are being transformed. Not only can sociable robots furnish emotional support and companionship for humans, humans can also form relationships with robots that they value highly. It is natural to ask, do robots that stand in close relationships with us have any moral standing over and above their purely instrumental value as means to human ends? We might ask our question this way, ‘Are there ways we can act towards robots (...)
  • The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns.Elisabeth Hildt - 2023 - American Journal of Bioethics Neuroscience 14 (2):58-71.
    Can machines be conscious and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of potential future machine consciousness, it outlines the role of consciousness for ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions (...)
  • The Moral Consideration of Artificial Entities: A Literature Review.Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • Shifting Perspectives.David J. Gunkel - 2020 - Science and Engineering Ethics 26 (5):2527-2532.
  • Debate: What is Personhood in the Age of AI?David J. Gunkel & Jordan Joseph Wales - 2021 - AI and Society 36:473–486.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
  • When does “no” mean no? Insights from sex robots.Anastasiia D. Grigoreva, Joshua Rottman & Arber Tasimi - 2024 - Cognition 244 (C):105687.
  • Moral Status and Intelligent Robots.John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
    The Southern Journal of Philosophy, Volume 60, Issue 1, Page 88-117, March 2022.
  • Synthesizing Methuselah: The Question of Artificial Agelessness.Richard B. Gibson - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):60-75.
    As biological organisms, we age and, eventually, die. However, age’s deteriorating effects may not be universal. Some theoretical entities, due to their synthetic composition, could exist independently from aging—artificial general intelligence (AGI). With adequate resource access, an AGI could theoretically be ageless and would be, in some sense, immortal. Yet, this need not be inevitable. Designers could imbue AGIs with artificial mortality via an internal shut-off point. The question, though, is, should they? Should researchers curtail an AGI’s potentially endless lifespan (...)
  • In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2022 - AI and Society 37 (1):319-330.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence (AI) system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with (...)
  • What Makes Work “Good” in the Age of Artificial Intelligence (AI)? Islamic Perspectives on AI-Mediated Work Ethics.Mohammed Ghaly - forthcoming - The Journal of Ethics:1-25.
    Artificial intelligence (AI) technologies are increasingly creeping into the work sphere, thereby gradually questioning and/or disturbing the long-established moral concepts and norms communities have been using to define what makes work good. Each community, and Muslims make no exception in this regard, has to revisit their moral world to provide well-thought frameworks that can engage with the challenging ethical questions raised by the new phenomenon of AI-mediated work. For a systematic analysis of the broad topic of AI-mediated work ethics from (...)
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents.Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either (...)
  • A Friendly Critique of Levinasian Machine Ethics.Patrick Gamez - 2022 - Southern Journal of Philosophy 60 (1):118-149.
    The Southern Journal of Philosophy, Volume 60, Issue 1, Page 118-149, March 2022.
  • Argumentation-Based Logic for Ethical Decision Making.Panayiotis Frangos, Petros Stefaneas & Sofia Almpani - 2022 - Studia Humana 11 (3-4):46-52.
    As automation in artificial intelligence is increasing, we will need to automate a growing amount of ethical decision making. However, ethical decision-making raises novel challenges for engineers, ethicists and policymakers, who will have to explore new ways to realize this task. The presented work focuses on the development and formalization of models that aim at ensuring a correct ethical behaviour of artificial intelligent agents, in a provable way, extending and implementing a logic-based proving calculus that is based on argumentation (...)
  • Towards Establishing Criteria for the Ethical Analysis of Artificial Intelligence.Michele Farisco, Kathinka Evers & Arleen Salles - 2020 - Science and Engineering Ethics 26 (5):2413-2425.
    Ethical reflection on Artificial Intelligence has become a priority. In this article, we propose a methodological model for a comprehensive ethical analysis of some uses of AI, notably as a replacement of human actors in specific activities. We emphasize the need for conceptual clarification of relevant key terms in order to undertake such reflection. Against that background, we distinguish two levels of ethical analysis, one practical and one theoretical. Focusing on the state of AI at present, we suggest that regardless (...)
  • Why the Epistemic Objection Against Using Sentience as Criterion of Moral Status is Flawed.Leonard Dung - 2022 - Science and Engineering Ethics 28 (6):1-15.
    According to a common view, sentience is necessary and sufficient for moral status. In other words, whether a being has intrinsic moral relevance is determined by its capacity for conscious experience. The _epistemic objection_ derives from our profound uncertainty about sentience. According to this objection, we cannot use sentience as a _criterion_ to ascribe moral status in practice because we won’t know in the foreseeable future which animals and AI systems are sentient while ethical questions regarding the possession of moral (...)
  • How to deal with risks of AI suffering.Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
  • What Matters for Moral Status: Behavioral or Cognitive Equivalence?John Danaher - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):472-478.
    Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. (...)
  • Should We Use Technology to Merge Minds?John Danaher & Sven Nyholm - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (4):585-603.
  • Robot Betrayal: a guide to the ethics of robotic deception.John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in (...)
  • Moral Uncertainty and Our Relationships with Unknown Minds.John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral (...)
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context.Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)