References
  • Understanding with Toy Surrogate Models in Machine Learning. Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on (...)
  • Twenty-four years of empirical research on trust in AI: a bibliometric review of trends, overlooked issues, and future directions. Michaela Benk, Sophie Kerstan, Florian von Wangenheim & Andrea Ferrario - forthcoming - AI and Society:1-24.
    Trust is widely regarded as a critical component to building artificial intelligence (AI) systems that people will use and safely rely upon. As research in this area continues to evolve, it becomes imperative that the research community synchronizes its empirical efforts and aligns on the path toward effective knowledge creation. To lay the groundwork toward achieving this objective, we performed a comprehensive bibliometric analysis, supplemented with a qualitative content analysis of over two decades of empirical research measuring trust in AI, (...)
  • Evolving interpretable decision trees for reinforcement learning. Vinícius G. Costa, Jorge Pérez-Aracil, Sancho Salcedo-Sanz & Carlos E. Pedreira - 2024 - Artificial Intelligence 327 (C):104057.
  • Algorithmic Decision-Making, Agency Costs, and Institution-Based Trust. Keith Dowding & Brad R. Taylor - 2024 - Philosophy and Technology 37 (2):1-22.
    Algorithmic decision-making (ADM) systems designed to augment or automate human decision-making have the potential to produce better decisions while also freeing up human time and attention for other pursuits. For this potential to be realised, however, algorithmic decisions must be sufficiently aligned with human goals and interests. We take a Principal-Agent (P-A) approach to the questions of ADM alignment and trust. In a broad sense, ADM is beneficial if and only if human principals can trust algorithmic agents to act (...)
  • Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making. Oskar Wysocki, Jessica Katharine Davies, Markel Vigo, Anne Caroline Armstrong, Dónal Landers, Rebecca Lee & André Freitas - 2023 - Artificial Intelligence 316 (C):103839.
  • Counterfactual explanations for misclassified images: How human and machine explanations differ. Eoin Delaney, Arjun Pakrashi, Derek Greene & Mark T. Keane - 2023 - Artificial Intelligence 324 (C):103995.