References
  • The Problem of Evil in Virtual Worlds. Brendan Shea - 2017 - In Mark Silcox (ed.), Experience Machines: The Philosophy of Virtual Worlds. London: Rowman & Littlefield. pp. 137-155.
    In its original form, Nozick’s experience machine serves as a potent counterexample to a simplistic form of hedonism. The pleasurable life offered by the experience machine, it seems safe to say, lacks the requisite depth that many of us find necessary to lead a genuinely worthwhile life. Among other things, the experience machine offers no opportunities to establish meaningful relationships, or to engage in long-term artistic, intellectual, or political projects that survive one’s death. This intuitive objection finds some support in (...)
  • The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights. Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  • Moral Uncertainty and Our Relationships with Unknown Minds. John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral (...)
  • ’How could you even ask that?’ Moral considerability, uncertainty and vulnerability in social robotics. Alexis Elder - 2020 - Journal of Sociotechnical Critique 1 (1):1-23.
    When it comes to social robotics (robots that engage human social responses via “eyes” and other facial features, voice-based natural-language interactions, and even evocative movements), ethicists, particularly in European and North American traditions, are divided over whether and why they might be morally considerable. Some argue that moral considerability is based on internal psychological states like consciousness and sentience, and debate about thresholds of such features sufficient for ethical consideration, a move sometimes criticized for being overly dualistic in its framing (...)
  • The Moral Standing of Social Robots: Untapped Insights from Africa. Nancy S. Jecker, Caesar A. Atiure & Martin Odei Ajei - 2022 - Philosophy and Technology 35 (2):1-22.
    This paper presents an African relational view of social robots’ moral standing which draws on the philosophy of ubuntu. The introduction places the question of moral standing in historical and cultural contexts. Section 2 demonstrates an ubuntu framework by applying it to the fictional case of a social robot named Klara, taken from Ishiguro’s novel, Klara and the Sun. We argue that an ubuntu ethic assigns moral standing to Klara, based on her relational qualities and pro-social virtues. Section 3 introduces (...)
  • In search of the moral status of AI: why sentience is a strong argument. Martin Gibert & Dominic Martin - 2022 - AI and Society 37 (1):319-330.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence (AI) system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with (...)
  • First-person representations and responsible agency in AI. Miguel Ángel Sebastián & Fernando Rudy-Hiller - 2021 - Synthese 199 (3-4):7061-7079.
    In this paper I investigate which of the main conditions proposed in the moral responsibility literature are the ones that spell trouble for the idea that Artificial Intelligence Systems could ever be full-fledged responsible agents. After arguing that the standard construals of the control and epistemic conditions don’t impose any in-principle barrier to AISs being responsible agents, I identify the requirement that responsible agents must be aware of their own actions as the main locus of resistance to attribute that kind (...)
  • A Theodicy for Artificial Universes: Moral Considerations on Simulation Hypotheses. Stefano Gualeni - 2021 - International Journal of Technoethics 12 (1):21-31.
    ‘Simulation Hypotheses’ are imaginative scenarios that are typically employed in philosophy to speculate on how likely it is that we are currently living within a simulated universe as well as on our possibility for ever discerning whether we do in fact inhabit one. These philosophical questions in particular overshadowed other aspects and potential uses of simulation hypotheses, some of which are foregrounded in this article. More specifically, “A Theodicy for Artificial Universes” focuses on the moral implications of simulation hypotheses with (...)
  • If robots are people, can they be made for profit? Commercial implications of robot personhood. Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
  • On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
  • The Future of Value Sensitive Design. Batya Friedman, David Hendry, Steven Umbrello, Jeroen Van Den Hoven & Daisy Yoo - 2020 - Paradigm Shifts in ICT Ethics: Proceedings of the 18th International Conference ETHICOMP 2020.
    In this panel, we explore the future of value sensitive design (VSD). The stakes are high. Many in public and private sectors and in civil society are gradually realizing that taking our values seriously implies that we have to ensure that values effectively inform the design of technology which, in turn, shapes people’s lives. Value sensitive design offers a highly developed set of theory, tools, and methods to systematically do so.
  • Artificial Beings Worthy of Moral Consideration in Virtual Environments: An Analysis of Ethical Viability. Stefano Gualeni - 2020 - Journal of Virtual Worlds Research 13 (1).
    This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the article tackles the (...)
  • Robot Betrayal: a guide to the ethics of robotic deception. John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in (...)
  • Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism. John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need (...)
  • Taking Stock of Extension Theory of Technology. Steffen Steinert - 2016 - Philosophy and Technology 29 (1):61-78.
    In this paper, I will focus on the extension theories of technology. I will identify four influential positions that have been put forward: (1) technology as an extension of the human organism, (2) technology as an extension of the lived body and the senses, (3) technology as an extension of our intentions and desires, and (4) technology as an extension of our faculties and capabilities. I will describe and critically assess these positions one by one and highlight their advantages and (...)
  • The Moral Status of Social Robots: A Pragmatic Approach. Paul Showler - 2024 - Philosophy and Technology 37 (2):1-22.
    Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a pragmatic approach which aims to reconcile (...)
  • Foundations of an Ethical Framework for AI Entities: the Ethics of Systems. Andrej Dameski - 2020 - Dissertation, University of Luxembourg
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  • Anthropological Crisis or Crisis in Moral Status: a Philosophy of Technology Approach to the Moral Consideration of Artificial Intelligence. Joan Llorca Albareda - 2024 - Philosophy and Technology 37 (1):1-26.
    The inquiry into the moral status of artificial intelligence (AI) is leading to prolific theoretical discussions. A new entity that does not share the material substrate of human beings begins to show signs of a number of properties that are nuclear to the understanding of moral agency. It makes us wonder whether the properties we associate with moral status need to be revised or whether the new artificial entities deserve to enter within the circle of moral consideration. This raises the (...)
  • Could you hate a robot? And does it matter if you could? Helen Ryland - 2021 - AI and Society 36 (2):637-649.
    This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in (...)
  • An Alternative Approach to Assessing the Moral Status of Artificial Entities. Dominic McGuire - 2023 - American Journal of Bioethics Neuroscience 14 (2):76-79.
    Hildt (2023) considers the topic of machine consciousness and the ethical implications of artificial entities, such as robots, possessing different forms of consciousness. Her paper recommends that...
  • Can we design artificial persons without being manipulative? Maciej Musiał - 2024 - AI and Society 39 (3):1251-1260.
    If we could build artificial persons (APs) with a moral status comparable to that of a typical human being, how should we design those APs in the right way? This question has been addressed mainly in terms of designing APs devoted to being servants (AP servants) and debated in reference to their autonomy and the harm they might experience. Recently, it has been argued that even if developing AP servants would neither deprive them of autonomy nor cause any net harm, (...)
  • How to Treat Machines that Might Have Minds. Nicholas Agar - 2020 - Philosophy and Technology 33 (2):269-282.
    This paper offers practical advice about how to interact with machines that we have reason to believe could have minds. I argue that we should approach these interactions by assigning credences to judgements about whether the machines in question can think. We should treat the premises of philosophical arguments about whether these machines can think as offering evidence that may increase or reduce these credences. I describe two cases in which you should refrain from doing as your favored philosophical view (...)
  • The Specter of Automation. Zachary Biondi - 2023 - Philosophia 51 (3):1093-1110.
    Karl Marx took technological development to be the heart of capitalism’s drive and, ultimately, its undoing. Machines are initially engineered to perform functions that otherwise would be performed by human workers. The economic logic pushed to its limits leads to the prospect of full automation: a world in which all labor required to meet human needs is superseded and performed by machines. To explore the future of automation, the paper considers a specific point of resemblance between human beings and machines: (...)
  • On the margins: personhood and moral status in marginal cases of human rights. Helen Ryland - 2020 - Dissertation, University of Birmingham
    Most philosophical accounts of human rights accept that all persons have human rights. Typically, ‘personhood’ is understood as unitary and binary. It is unitary because there is generally supposed to be a single threshold property required for personhood. It is binary because it is all-or-nothing: you are either a person or you are not. A difficulty with binary views is that there will typically be subjects, like children and those with dementia, who do not meet the threshold, and so who (...)
  • The wizard and I: How transparent teleoperation and self-description (do not) affect children’s robot perceptions and child-robot relationship formation. Caroline L. van Straten, Jochen Peter, Rinaldo Kühne & Alex Barco - 2022 - AI and Society 37 (1):383-399.
    It has been well documented that children perceive robots as social, mental, and moral others. Studies on child-robot interaction may encourage this perception of robots, first, by using a Wizard of Oz set-up and, second, by having robots engage in self-description. However, much remains unknown about the effects of transparent teleoperation and self-description on children’s perception of, and relationship formation with a robot. To address this research gap initially, we conducted an experimental study with a 2 × 2 between-subject design (...)
  • Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. Simon N. Balle - 2022 - AI and Society 37 (2):535-548.
    Empirical research on human–robot interaction has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients, since this rests on anthropomorphism and ‘over-identification’, or correct, since spontaneous moral intuition and behavior toward nonhumans is indicative of moral patienthood, such that social robots become our ‘Others’? In this research paper, I weave extant HRI studies that demonstrate empathic (...)
  • The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • Danaher’s Ethical Behaviourism: An Adequate Guide to Assessing the Moral Status of a Robot? Jilles Smids - 2020 - Science and Engineering Ethics 26 (5):2849-2866.
    This paper critically assesses John Danaher’s ‘ethical behaviourism’, a theory on how the moral status of robots should be determined. The basic idea of this theory is that a robot’s moral status is determined decisively on the basis of its observable behaviour. If it behaves sufficiently similar to some entity that has moral status, such as a human or an animal, then we should ascribe the same moral status to the robot as we do to this human or animal. The (...)
  • Moral Status for Malware! The Difficulty of Defining Advanced Artificial Intelligence. Miranda Mowbray - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):517-528.
    The suggestion has been made that future advanced artificial intelligence (AI) that passes some consciousness-related criteria should be treated as having moral status, and therefore, humans would have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness; and argues against the moral status for AI on the grounds that human malware authors may design malware to fake consciousness. In fact, the article warns that (...)
  • Moral Mechanisms. David Davenport - 2014 - Philosophy and Technology 27 (1):47-60.
    As highly intelligent autonomous robots are gradually introduced into the home and workplace, ensuring public safety becomes extremely important. Given that such machines will learn from interactions with their environment, standard safety engineering methodologies may not be applicable. Instead, we need to ensure that the machines themselves know right from wrong; we need moral mechanisms. Morality, however, has traditionally been considered a defining characteristic, indeed the sole realm of human beings; that which separates us from animals. But if only humans (...)