  • Interpreting ordinary uses of psychological and moral terms in the AI domain. Hyungrae Noh - 2023 - Synthese 201 (6):1-33.
    Intuitively, proper referential extensions of psychological and moral terms exclude artifacts. Yet ordinary speakers commonly treat AI robots as moral patients and use psychological terms to explain their behavior. This paper examines whether this referential shift from the human domain to the AI domain entails semantic changes: do ordinary speakers literally consider AI robots to be psychological or moral beings? Three non-literalist accounts for semantic changes concerning psychological and moral terms used in the AI domain will be discussed: the technical (...)
  • Social robots and the intentional stance. Walter Veit & Heather Browning - 2023 - Behavioral and Brain Sciences 46:e47.
    Why is it that people simultaneously treat social robots as mere designed artefacts, yet show willingness to interact with them as if they were real agents? Here, we argue that Dennett's distinction between the intentional stance and the design stance can help us to resolve this puzzle, allowing us to further our understanding of social robots as interactive depictions.
  • Will We Know Them When We Meet Them? Human Cyborg and Nonhuman Personhood. Léon Turner - 2023 - Zygon 58 (4):1076-1098.
    In this article, I assess (1) whether some cyborgs and AI robots can theoretically be considered persons; and (2) how we will know if/when they have attained personhood. Since our discourses of personhood are inherently pluralistic and our concepts of both humanness and personhood are inherently nebulous, I conclude that both some cyborgs and some AI robots could theoretically be considered persons, depending on what, exactly, one means by “person.” The practical problem of how we distinguish them from nonpersonal AI entities (...)
  • Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism. Aleksandra Swiderska & Dennis Küster - 2020 - Cognitive Science 44 (7):e12872.
    A robot’s decision to harm a person is sometimes considered to be the ultimate proof of it gaining a human-like mind. Here, we contrasted predictions about attribution of mental capacities from moral typecasting theory with the denial of agency from the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent), and additionally varied the type of agent (robotic and human) using (...)
  • Talking about moving machines. Céline Pieters, Emmanuelle Danblon, Philippe Souères & Jean-Paul Laumond - 2022 - Interaction Studies 23 (2):322-340.
    Broadly, robots can be described as sets of moving parts dedicated to a task and powered by their own energy. Yet humans commonly qualify those machines as intelligent, autonomous, or able to learn, know, feel, make decisions, etc. Is it merely a way of talking, or does it mean that robots could eventually be more than a complex set of moving parts? On the one hand, the language of robotics allows multiple interpretations (leading sometimes to misreading (...)
  • Differences in Social Expectations About Robot Signals and Human Signals. Lorenzo Parenti, Marwen Belkaid & Agnieszka Wykowska - 2023 - Cognitive Science 47 (12):e13393.
    In our daily lives, we are continually involved in decision-making situations, many of which take place in the context of social interaction. Despite the ubiquity of such situations, there remains a gap in our understanding of how decision-making unfolds in social contexts, and how communicative signals, such as social cues and feedback, impact the choices we make. Interestingly, there is a new social context to which humans are increasingly exposed: social interaction not only with other humans but also (...)
  • A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents. Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or DeepMind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis of the literature (...)
  • How Do Object Shape, Semantic Cues, and Apparent Velocity Affect the Attribution of Intentionality to Figures With Different Types of Movements? Diego Morales-Bader, Ramón D. Castillo, Charlotte Olivares & Francisca Miño - 2020 - Frontiers in Psychology 11.
  • Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents. Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary (...)
  • Automation, Alignment, and the Cooperative Interface. Julian David Jonker - forthcoming - The Journal of Ethics:1-22.
    The paper demonstrates that social alignment is distinct from value alignment as it is currently understood in the AI safety literature, and argues that social alignment is an important research agenda. Work provides an important example for the argument, since work is a cooperative endeavor, and it is part of the larger manifold of social cooperation. These cooperative aspects of work are individually and socially valuable, and so they must be given a central place when evaluating the impact of AI (...)
  • Attribution of intentional agency towards robots reduces one’s own sense of agency. Francesca Ciardo, Frederike Beyer, Davide De Tommaso & Agnieszka Wykowska - 2020 - Cognition 194:104109.