Citations of:

Humans and Robots: Ethics, Agency, and Anthropomorphism
Rowman & Littlefield International (2020)

  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg.
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • (1 other version) Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - manuscript.
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study, conducted with participants from the US, Japan and Germany, based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime. We find that (i) people manifest a considerable willingness (...)
  • ‘How could you even ask that?’ Moral considerability, uncertainty and vulnerability in social robotics. Alexis Elder - 2020 - Journal of Sociotechnical Critique 1 (1):1-23.
    When it comes to social robotics (robots that engage human social responses via “eyes” and other facial features, voice-based natural-language interactions, and even evocative movements), ethicists, particularly in European and North American traditions, are divided over whether and why they might be morally considerable. Some argue that moral considerability is based on internal psychological states like consciousness and sentience, and debate about thresholds of such features sufficient for ethical consideration, a move sometimes criticized for being overly dualistic in its framing (...)
  • Risk and Responsibility in Context. Adriana Placani & Stearns Broadhead (eds.) - 2023 - New York: Routledge.
    This volume bridges contemporary philosophical conceptions of risk and responsibility and offers an extensive examination of the topic. It shows that risk and responsibility combine in ways that give rise to new philosophical questions and problems. Philosophical interest in the relationship between risk and responsibility continues to rise, due in no small part to environmental crises, emerging technologies, legal developments, and new medical advances. Despite such interest, scholars are just now working out how to conceive of the links between (...)
  • Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence of ignoring (...)
  • Granny and the Sexbots. Karen Lancaster - 2022 - In Janina Loh & Wulf Loh (eds.), Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Transcript Verlag. pp. 181-208.
    Although sexual activity among elderly people remains taboo, residents in eldercare institutions often still have sexual desires, and catering for these desires could improve the quality of life for some of society’s most vulnerable – and most depressed – people. I argue that sexbots are apt to provide such a sexual service. I consider the potential benefits and pitfalls of other sexual possibilities, such as having sex with other residents, nurses, or sex workers, or using sexual aids for masturbation, and (...)
  • Social Robotics and the Good Life: The Normative Side of Forming Emotional Bonds with Robots. Janina Loh & Wulf Loh (eds.) - 2022 - Transcript Verlag.
    Robots as social companions in close proximity to humans have a strong potential of becoming more and more prevalent in the coming years, especially in the realms of elder day care, child rearing, and education. As human beings, we have the fascinating ability to emotionally bond with various counterparts, not exclusively with other human beings, but also with animals, plants, and sometimes even objects. Therefore, we need to answer the fundamental ethical questions that concern human-robot-interactions per se, and we need (...)
  • Robot rights in joint action. Guido Löhr - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer.
    The claim I want to explore in this paper is simple. In social ontology, Margaret Gilbert, Abe Roth, Michael Bratman, Antonie Meijers, Facundo Alonso and others talk about rights or entitlements against other participants in joint action. I employ several intuition pumps to argue that we have reason to assume that such entitlements or rights can be ascribed even to non-sentient robots that we collaborate with. Importantly, such entitlements are primarily identified in terms of our normative discourse. Justified criticism, for (...)
  • Humans, Neanderthals, robots and rights. Kamil Mamak - 2022 - Ethics and Information Technology 24 (3):1-9.
    Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will (...)
  • The Philosophy of Online Manipulation. Michael Klenk & Fleur Jongepier (eds.) - 2022 - Routledge.
    Are we being manipulated online? If so, is being manipulated by online technologies and algorithmic systems notably different from human forms of manipulation? And what is under threat exactly when people are manipulated online? This volume provides philosophical and conceptual depth to debates in digital ethics about online manipulation. The contributions explore the ramifications of our increasingly consequential interactions with online technologies such as online recommender systems, social media, user-friendly design, micro-targeting, default-settings, gamification, and real-time profiling. The authors in this (...)
  • Basic issues in AI policy. Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  • Thinking unwise: a relational u-turn. Nicholas Barrow - 2022 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. IOS Press.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach of ascribing artificial moral status fails on two accounts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is, itself, anthropocentric in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly I, in light of interpreting Gunkel's Relationism as (...)
  • Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  • Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. Simon N. Balle - 2022 - AI and Society 37 (2):535-548.
    Empirical research on human–robot interaction has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients, since this rests on anthropomorphism and ‘over-identification’, or correct, since spontaneous moral intuition and behavior toward nonhumans is indicative of moral patienthood, such that social robots become our ‘Others’? In this research paper, I weave extant HRI studies that demonstrate empathic (...)
  • It’s Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships. Helen Ryland - 2021 - Minds and Machines 31 (3):377-393.
    This article argues in defence of human–robot friendship. I begin by outlining the standard Aristotelian view of friendship, according to which there are certain necessary conditions which x must meet in order to ‘be a friend’. I explain how the current literature typically uses this Aristotelian view to object to human–robot friendships on theoretical and ethical grounds. Theoretically, a robot cannot be our friend because it cannot meet the requisite necessary conditions for friendship. Ethically, human–robot friendships are wrong because they (...)
  • Automation, Work and the Achievement Gap. John Danaher & Sven Nyholm - 2021 - AI and Ethics 1 (3):227–237.
    Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability (...)
  • Danaher’s Ethical Behaviourism: An Adequate Guide to Assessing the Moral Status of a Robot? Jilles Smids - 2020 - Science and Engineering Ethics 26 (5):2849-2866.
    This paper critically assesses John Danaher’s ‘ethical behaviourism’, a theory on how the moral status of robots should be determined. The basic idea of this theory is that a robot’s moral status is determined decisively on the basis of its observable behaviour. If it behaves sufficiently similar to some entity that has moral status, such as a human or an animal, then we should ascribe the same moral status to the robot as we do to this human or animal. The (...)
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • Moralization and Mismoralization in Public Health. Steven R. Kraaijeveld & Euzebiusz Jamrozik - 2022 - Medicine, Health Care and Philosophy 25 (4):655-669.
    Moralization is a social-psychological process through which morally neutral issues take on moral significance. Often linked to health and disease, moralization may sometimes lead to good outcomes; yet moralization is often detrimental to individuals and to society as a whole. It is therefore important to be able to identify when moralization is inappropriate. In this paper, we offer a systematic normative approach to the evaluation of moralization. We introduce and develop the concept of ‘mismoralization’, which is when moralization is metaethically (...)
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the (...)
  • Technological Change and Human Obsolescence. John Danaher - 2022 - Techné: Research in Philosophy and Technology 26 (1):31-56.
    Can human life have value in a world in which humans are rendered obsolete by technological advances? This article answers this question by developing an extended analysis of the axiological impact of human obsolescence. In doing so, it makes four main arguments. First, it argues that human obsolescence is a complex phenomenon that can take on at least four distinct forms. Second, it argues that one of these forms of obsolescence is not a coherent concept and hence not a plausible (...)
  • On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
  • Joint Interaction and Mutual Understanding in Social Robotics. Sebastian Schleidgen & Orsolya Friedrich - 2022 - Science and Engineering Ethics 28 (6):1-20.
    Social robotics aims at designing robots capable of joint interaction with humans. On a conceptual level, sufficient mutual understanding is usually said to be a necessary condition for joint interaction. Against this background, the following questions remain open: in which sense is it legitimate to speak of human–robot joint interaction? What exactly does it mean to speak of humans and robots sufficiently understanding each other to account for human–robot joint interaction? Is such joint interaction effectively possible by reference, e.g., to (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Can We Bridge AI’s Responsibility Gap at Will? Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of (...)
  • A fictional dualism model of social robots. Paula Sweeney - 2021 - Ethics and Information Technology 23 (3):465-472.
    In this paper I propose a Fictional Dualism model of social robots. The model helps us to understand the human emotional reaction to social robots and also acts as a guide for us in determining the significance of that emotional reaction, enabling us to better define the moral and legislative rights of social robots within our society. I propose a distinctive position that allows us to accept that robots are tools, that our emotional reaction to them can be important to (...)
  • Liability for Robots: Sidestepping the Gaps. Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
  • Engineering responsibility. Nicholas Sars - 2022 - Ethics and Information Technology 24 (3):1-10.
    Many optimistic responses have been proposed to bridge the threat of responsibility gaps which artificial systems create. This paper identifies a question which arises if this optimistic project proves successful. On a response-dependent understanding of responsibility, our responsibility practices themselves at least partially determine who counts as a responsible agent. On this basis, if AI or robot technology advance such that AI or robot agents become fitting participants within responsibility exchanges, then responsibility itself might be engineered. If we have good (...)
  • Should criminal law protect love relation with robots? Kamil Mamak - 2024 - AI and Society 39 (2):573-582.
    Whether or not we call a love-like relationship with robots true love, some people may feel and claim that, for them, it is a sufficient substitute for a love relationship. The love relationship between humans has a special place in our social life. On the grounds of both morality and law, our significant other can expect special treatment. It is understandable that, precisely because of this kind of relationship, we save our significant other instead of others or will not testify against (...)
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers. Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • (1 other version) Introduction to the Topical Collection on AI and Responsibility. Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Philosophy and Technology 35 (4):1-6.
  • Can a Robot Be a Good Colleague? Sven Nyholm & Jilles Smids - 2020 - Science and Engineering Ethics 26 (4):2169-2188.
    This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make (...)