References
  • Is moral status done with words? Miriam Gorr - 2024 - Ethics and Information Technology 26 (1):1-11.
    This paper critically examines Coeckelbergh’s (2023) performative view of moral status. Drawing parallels to Searle’s social ontology, two key claims of the performative view are identified: (1) Making a moral status claim is equivalent to making a moral status declaration. (2) A successful declaration establishes the institutional fact that the entity has moral status. Closer examination, however, reveals flaws in both claims. The second claim faces a dilemma: individual instances of moral status declaration are likely to fail because they do (...)
  • The Ethics of Terminology: Can We Use Human Terms to Describe AI? Ophelia Deroy - 2023 - Topoi 42 (3):881-889.
    Despite facing significant criticism for assigning human-like characteristics to artificial intelligence, phrases like “trustworthy AI” are still commonly used in official documents and ethical guidelines. It is essential to consider why institutions continue to use these phrases, even though they are controversial. This article critically evaluates various reasons for using these terms, including ontological, legal, communicative, and psychological arguments. All these justifications share the common feature of trying to justify the official use of terms like “trustworthy AI” by appealing to (...)
  • Can robots be trustworthy? Ines Schröder, Oliver Müller, Helena Scholl, Shelly Levy-Tzedek & Philipp Kellmeyer - 2023 - Ethik in der Medizin 35 (2):221-246.
    Definition of the problem: This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concepts of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of the High-Level (...)
  • Phenomenology: What’s AI got to do with it? Alessandra Buccella & Alison A. Springle - 2023 - Phenomenology and the Cognitive Sciences 22 (3):621-636.
    Nowadays, philosophers and scientists tend to agree that, even though human and artificial intelligence work quite differently, they can still illuminate aspects of each other, and knowledge in one domain can inspire progress in the other. For instance, the notion of “artificial” or “synthetic” phenomenology has been gaining some traction in recent AI research. In this paper, we ask the question: what (if anything) is the use of thinking about phenomenology in the context of AI, and in particular machine learning? (...)
  • Kuinka ihmismieli vääristää keskustelua tekoälyn riskeistä ja etiikasta. Kognitiotieteellisiä näkökulmia keskusteluun. [How the human mind distorts the debate on the risks and ethics of AI: cognitive-science perspectives on the discussion.] Michael Laakasuo, Aku Visala & Jussi Palomäki - 2020 - Ajatus 77 (1):131-168.
    The debate about the ethical and political questions raised by applying artificial intelligence is currently running hot. In this contribution we do not wish to join that debate by taking up any particular ethical problem. Instead, we aim to say something about the debate itself and its difficulty. We want to draw attention to how various thinking dispositions and fallacies of the human mind can, without our noticing, shape the way we perceive and understand artificial intelligence and the ethical questions it raises. When we better understand how difficult it really is to grasp these questions with the categories of our everyday minds, and when we recognize the resulting fallacies and distortions of thought, we become capable of an even higher-level (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems: the Case of the Autonomous Vehicle. Katherine Evans - 2021 - Dissertation, Sorbonne Université
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. Many of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with whom they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • Levels of explainable artificial intelligence for human-aligned conversational explanations. Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal & Francisco Cruz - 2021 - Artificial Intelligence 299 (C):103525.
  • An emerging AI mainstream: deepening our comparisons of AI frameworks through rhetorical analysis. Epifanio Torres & Will Penman - 2021 - AI and Society 36 (2):597-608.
    Comparing frameworks for AI development allows us to see trends and reflect on how we are conceptualizing, interacting with, and imagining futures for AI. Recent scholarship comparing a range of AI frameworks has often focused methodologically on consensus, which has led to problems in evaluating potentially ambiguous values. We contribute to this scholarship using a rhetorical perspective attuned to how frameworks shape people’s actions. This perspective allows us to develop the concept of an “AI mainstream” through an analysis of five (...)
  • On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
  • Posthumanism, open ontologies and bio-digital becoming: Response to Luciano Floridi’s Onlife Manifesto. Michael A. Peters & Petar Jandrić - 2019 - Educational Philosophy and Theory 51 (10):971-980.
    In The Onlife Manifesto: Being Human in a Hyperconnected Era, Luciano Floridi and his associates examine various aspects of the contemporary meaning of humanity. Yet their insights give less thought to the political economy of techno-capitalism that in large measure creates ICTs and leads to their further innovation, development and commercialization. This article responds to Floridi’s work and examines the political economy of the blurred distinction between human, machine and nature in the postdigital context. Taking lessons from the early history of the (...)
  • Creating “companions” for children: the ethics of designing esthetic features for robots. Yvette Pearson & Jason Borenstein - 2014 - AI and Society 29 (1):23-31.
  • Relationalism through Social Robotics. Raya A. Jones - 2013 - Journal for the Theory of Social Behaviour 43 (4):405-424.
    Social robotics is a rapidly developing industry-oriented area of research, intent on making robots in social roles commonplace in the near future. This has led to rising interest in the dynamics as well as ethics of human-robot relationships, described here as a nascent relational turn. A contrast is drawn with the 1990s’ paradigm shift associated with relational-self themes in social psychology. Constructions of the human-robot relationship reproduce the “I-You-Me” dominant model of theorising about the self with biases that (as in (...)
  • Why Care About Robots? Empathy, Moral Standing, and the Language of Suffering. Mark Coeckelbergh - 2018 - Kairos 20 (1):141-158.
    This paper tries to understand the phenomenon that humans are able to empathize with robots and the intuition that there might be something wrong with “abusing” robots by discussing the question regarding the moral standing of robots. After a review of some relevant work in empirical psychology and a discussion of the ethics of empathizing with robots, a philosophical argument concerning the moral standing of robots is made that questions distant and uncritical moral reasoning about entities’ properties and that recommends (...)
  • Not Ecological Enough: A Commentary on an Eco-Relational Approach in Robot Ethics. Joshua C. Gellers - 2024 - Philosophy and Technology 37 (2):1-6.
    This Commentary offers a critique of an eco-relational approach in robot ethics, highlighting the importance of articulating an ecologically-sensitive ethical orientation that incorporates the entire more-than-human world, including technological entities like forms of artificial intelligence. While the eco-relational approach enhances our understanding of the complex way in which morally significant properties operate on a phenomenological level, it is not without its flaws. In particular, this perspective focuses on ethical concepts when it needs to be rooted in ethical systems, misrepresents the (...)
  • Reconfiguring the alterity relation: the role of communication in interactions with social robots and chatbots. Dakota Root - forthcoming - AI and Society:1-12.
    Don Ihde’s alterity relation focuses on the quasi-otherness of dynamic technologies that interact with humans. The alterity relation is one means to study relations between humans and artificial intelligence (AI) systems. However, research on alterity relations has not defined the difference between playing with a toy, using a computer, and interacting with a social robot or chatbot. We suggest that Ihde’s quasi-other concept fails to account for the interactivity, autonomy, and adaptability of social robots and chatbots, which more closely approach (...)
  • “An Eye Turned into a Weapon”: a Philosophical Investigation of Remote Controlled, Automated, and Autonomous Drone Warfare. Oliver Müller - 2020 - Philosophy and Technology 34 (4):875-896.
    Military drones combine surveillance technology with missile equipment in a far-reaching way. In this article, I argue that military drones could and should be an object of philosophical investigation, referring in particular to the theory of the drone developed by Chamayou, who also coined the term “an eye turned into a weapon.” Focusing on issues of human self-understanding, agency, and alterity, I examine the intricate human-technology relations in the context of designing and deploying military drones. For that purpose, I am drawing on the (...)
  • Towards a Theory of Posthuman Care: Real Humans and Caring Robots. Amelia DeFalco - 2020 - Body and Society 26 (3):31-60.
    This essay interrogates the common assumption that good care is necessarily human care. It looks to disruptive fictional representations of robot care to assist its development of a theory of posthuman care that jettisons the implied anthropocentrism of ethics of care philosophy but retains care’s foregrounding of entanglement, embodiment and obligation. The essay reads speculative representations of robot care, particularly the Swedish television programme Äkta människor (Real Humans), alongside ethics of care philosophy and critical posthumanism to highlight their synergetic critiques (...)
  • Can we trust robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human (...)
  • Interpreting ordinary uses of psychological and moral terms in the AI domain. Hyungrae Noh - 2023 - Synthese 201 (6):1-33.
    Intuitively, proper referential extensions of psychological and moral terms exclude artifacts. Yet ordinary speakers commonly treat AI robots as moral patients and use psychological terms to explain their behavior. This paper examines whether this referential shift from the human domain to the AI domain entails semantic changes: do ordinary speakers literally consider AI robots to be psychological or moral beings? Three non-literalist accounts for semantic changes concerning psychological and moral terms used in the AI domain will be discussed: the technical (...)
  • What is it like to encounter an autonomous artificial agent? Karsten Weber - 2013 - AI and Society 28 (4):483-489.
    Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies supports that assertion. Finally, (...)
  • Interacting with Machines: Can an Artificially Intelligent Agent Be a Partner? Philipp Schmidt & Sophie Loidolt - 2023 - Philosophy and Technology 36 (3):1-32.
    In the past decade, the fields of machine learning and artificial intelligence (AI) have seen unprecedented developments that raise human-machine interactions (HMI) to the next level. Smart machines, i.e., machines endowed with artificially intelligent systems, have lost their character as mere instruments. This, at least, seems to be the case if one considers how humans experience their interactions with them. Smart machines are construed to serve complex functions involving increasing degrees of freedom, and they generate solutions not fully anticipated by humans. (...)