  • Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for (...)
  • Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • What Might Machines Mean? Mitchell Green & Jan G. Michel - 2022 - Minds and Machines 32 (2):323-338.
    This essay addresses the question whether artificial speakers can perform speech acts in the technical sense of that term common in the philosophy of language. We here argue that under certain conditions artificial speakers can perform speech acts so understood. After explaining some of the issues at stake in these questions, we elucidate a relatively uncontroversial way in which machines can communicate, namely through what we call verbal signaling. But verbal signaling is not sufficient for the performance of a speech (...)
  • Playing the Blame Game with Robots. Markus Kneer & Michael T. Stuart - 2021 - In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI'21 Companion). New York, NY, USA.
    Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system (...)