  • Why AI May Undermine Phronesis and What to Do about It. Cheng-Hung Tsai & Hsiu-lin Ku - forthcoming - AI and Ethics.
    Phronesis, or practical wisdom, is a capacity whose possession enables one to make good practical judgments and thus fulfill the distinctive function of human beings. Nir Eisikovits and Dan Feldman convincingly argue that this capacity may be undermined by statistical machine-learning-based AI. A critic may ask: why should we worry that AI undermines phronesis? Why can’t we epistemically defer to AI, especially when it is superintelligent? Eisikovits and Feldman acknowledge this objection but do not consider it seriously. In this (...)
  • “Your friendly AI assistant”: The Anthropomorphic Self-Representations of ChatGPT and Its Implications for Imagining AI. Karin van Es & Dennis Nguyen - forthcoming - AI and Society:1-13.
    This study analyzes how ChatGPT portrays and describes itself, revealing misleading myths about AI technologies, specifically conversational agents based on large language models. The analysis allows for critical reflection on the potential harm these misconceptions may pose to public understanding of AI and related technologies. While previous research has explored AI discourses and representations more generally, few studies focus specifically on AI chatbots. To narrow this research gap, an experimental-qualitative investigation into auto-generated AI representations based on prompting was conducted. Over (...)
  • Risks Deriving from the Agential Profiles of Modern AI Systems. Barnaby Crook - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: Springer Nature.
    Modern AI systems based on deep learning are neither traditional tools nor full-blown agents. Rather, they are characterised by idiosyncratic agential profiles, i.e., combinations of agency-relevant properties. Modern AI systems lack the superficial features that enable people to recognise agents, but possess sophisticated information-processing capabilities that can undermine human goals. I argue that systems fitting this description, when they are adversarial with respect to human users, pose particular risks to those users. To explicate my argument, I provide conditions under which (...)