  • Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024, HCXAI24).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  • Narrativity and responsible and transparent AI practices. Paul Hayes & Noel Fitzpatrick - forthcoming - AI and Society:1-21.
    This paper builds upon recent work in narrative theory and the philosophy of technology by examining the place of transparency and responsibility in discussions of AI, and what some of the implications of this might be for thinking ethically about AI and especially AI practices, that is, the structured social activities implicating and defining what AI is. In this paper, we aim to show how pursuing a narrative understanding of technology and AI can support knowledge of process and practice through (...)
  • Minding the gap(s): public perceptions of AI and socio-technical imaginaries. Laura Sartori & Giulia Bocca - 2023 - AI and Society 38 (2):443-458.
    Deepening and digging into the social side of AI is a novel but emerging requirement within the AI community. Future research should invest in an “AI for people”, going beyond the undoubtedly much-needed efforts into ethics, explainability and responsible AI. The article addresses this challenge by problematizing the discussion around AI, shifting the attention to individuals and their awareness, knowledge and emotional response to AI. First, we outline our main argument relative to the need for a socio-technical perspective in the (...)
  • Inteligencia artificial sostenible y evaluación ética constructiva [Sustainable artificial intelligence and constructive ethical evaluation]. Antonio Luis Terrones Rodríguez - 2022 - Isegoría 67:10-10.
    The considerable increase in the capacity of artificial intelligence (AI) entails a high consumption of energy resources. The current environmental situation, marked by the pressing degradation of ecosystems and the breakdown of their equilibrium, demands action in many areas. AI cannot stand apart from this, and although it is employed for sustainability goals, it must be conceived as sustainable in integral terms. The proposal of a sustainable artificial intelligence is argued on the basis of a constructive ethical evaluation, where the inclusion (...)
  • Think Differently We Must! An AI Manifesto for the Future. Emma Dahlin - forthcoming - AI and Society:1-4.
    There is a problematic tradition of dualistic and reductionist thinking in artificial intelligence (AI) research, which is evident in AI storytelling and imaginations as well as in public debates about AI. Dualistic thinking is based on the assumption of a fixed reality and a hierarchy of power, and it simplifies the complex relationships between humans and machines. This commentary piece argues that we need to work against the grain of such logics and instead develop a thinking that acknowledges AI–human interconnectedness (...)