  • Commonsense for AI: an interventional approach to explainability and personalization. Fariborz Farahmand - forthcoming - AI and Society:1-9.
    AI systems are expected to impact the ways we communicate, learn, and interact with technology. However, there are still major concerns about their commonsense reasoning and personalization. This article computationally explains causal (vs. statistical) inference, at different levels of abstraction, and provides three examples of how we can use the do-operator, a mathematical operator for intervention, to address some of these concerns. The first example is from an educational module that I developed and implemented for undergraduate engineering students, as part of (...)
  • Transparency for AI systems: a value-based approach. Stefan Buijsman - 2024 - Ethics and Information Technology 26 (2):1-11.
    With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy and the EU AI Act mandates forms of transparency for all high-risk and limited-risk systems. Yet, what should the standards for transparency be? What information is needed to show to a wide public that a certain system can be used legitimately and responsibly? I argue that process-based approaches fail (...)
  • Explanation Hacking: The Perils of Algorithmic Recourse. E. Sullivan & Atoosa Kasirzadeh - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    We argue that the trend toward providing users with feasible and actionable explanations of AI decisions—known as recourse explanations—comes with ethical downsides. Specifically, we argue that recourse explanations face several conceptual pitfalls and can lead to problematic explanation hacking, which undermines their ethical status. As an alternative, we advocate that explanations of AI decisions should aim at understanding.
  • Axe the X in XAI: A Plea for Understandable AI. Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)
  • Algorithmic decision-making: the right to explanation and the significance of stakes. Lauritz Munch, Jens Christian Bjerring & Jakob Mainz - 2024 - Big Data and Society.
    The stakes associated with an algorithmic decision are often said to play a role in determining whether the decision engenders a right to an explanation. More specifically, “high stakes” decisions are often said to engender such a right to explanation whereas “low stakes” or “non-high” stakes decisions do not. While the overall gist of these ideas is clear enough, the details are lacking. In this paper, we aim to provide these details through a detailed investigation of what we will call (...)
  • Ethics of Artificial Intelligence. Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  • Explainable AI and Causal Understanding: Counterfactual Approaches Considered. Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (...)
  • Reliability and Interpretability in Science and Deep Learning. Luigi Scorzato - 2024 - Minds and Machines 34 (3):1-31.
    In recent years, the question of the reliability of Machine Learning (ML) methods has acquired significant importance, and the analysis of the associated uncertainties has motivated a growing amount of research. However, most of these studies have applied standard error analysis to ML models—and in particular Deep Neural Network (DNN) models—which represent a rather significant departure from standard scientific modelling. It is therefore necessary to integrate the standard error analysis with a deeper epistemological analysis of the possible differences between DNN (...)
  • The Principle-at-Risk Analysis (PaRA): Operationalising Digital Ethics by Bridging Principles and Operations of a Digital Ethics Advisory Panel. André T. Nemat, Sarah J. Becker, Simon Lucas, Sean Thomas, Isabel Gadea & Jean Enno Charton - 2023 - Minds and Machines 33 (4):737-760.
    Recent attempts to develop and apply digital ethics principles to address the challenges of the digital transformation leave organisations with an operationalisation gap. To successfully implement such guidance, they must find ways to translate high-level ethics frameworks into practical methods and tools that match their specific workflows and needs. Here, we describe the development of a standardised risk assessment tool, the Principle-at-Risk Analysis (PaRA), as a means to close this operationalisation gap for a key level of the ethics infrastructure at (...)