References
  • Is Your Computer Lying? AI and Deception. Noreen Herzfeld - 2023 - Sophia 62 (4): 665-678.
    Recent developments in AI, especially the spectacular success of Large Language models, have instigated renewed questioning of what remains distinctively human. As AI stands poised to take over more and more human tasks, what is left that distinguishes humans? One way we might identify a humanlike intelligence would be when we detect it telling lies. Yet AIs lack both the intention and the motivation to truly tell lies, instead producing merely bullshit. With neither emotions, embodiment, nor the social awareness that (...)
  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4): 1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • What Might Machines Mean? Mitchell Green & Jan G. Michel - 2022 - Minds and Machines 32 (2): 323-338.
    This essay addresses the question of whether artificial speakers can perform speech acts in the technical sense of that term common in the philosophy of language. We here argue that under certain conditions artificial speakers can perform speech acts so understood. After explaining some of the issues at stake in these questions, we elucidate a relatively uncontroversial way in which machines can communicate, namely through what we call verbal signaling. But verbal signaling is not sufficient for the performance of a speech (...)
  • Lying about the future: Shuar-Achuar epistemic norms, predictions, and commitments. Alejandro Erut, Kristopher M. Smith & H. Clark Barrett - 2023 - Cognition 239 (C): 105552.
  • Interpreting ordinary uses of psychological and moral terms in the AI domain. Hyungrae Noh - 2023 - Synthese 201 (6): 1-33.
    Intuitively, proper referential extensions of psychological and moral terms exclude artifacts. Yet ordinary speakers commonly treat AI robots as moral patients and use psychological terms to explain their behavior. This paper examines whether this referential shift from the human domain to the AI domain entails semantic changes: do ordinary speakers literally consider AI robots to be psychological or moral beings? Three non-literalist accounts for semantic changes concerning psychological and moral terms used in the AI domain will be discussed: the technical (...)