1. Thought Experiments as an Error Detection and Correction Tool. Igor Bascandziev - 2024 - Cognitive Science 48 (1): e13401.
   The ability to recognize and correct errors in one's explanatory understanding is critically important for learning. However, little is known about the mechanisms that determine when and under what circumstances errors are detected and how they are corrected. The present study investigated thought experiments as a potential tool that can reveal errors and trigger belief revision in the service of error correction. Across two experiments, 1149 participants engaged in reasoning about force and motion (a domain with well-documented misconceptions) in a (...)
2. An explanation space to align user studies with the technical development of Explainable AI. Garrick Cabour, Andrés Morales-Forero, Élise Ledoux & Samuel Bassetto - 2023 - AI and Society 38 (2): 869-887.
   Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users' mental models, (...)