  • Explanation Hacking: The Perils of Algorithmic Recourse. E. Sullivan & Atoosa Kasirzadeh - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of Science for Machine Learning: Core Issues and New Perspectives. Springer.
    We argue that the trend toward providing users with feasible and actionable explanations of AI decisions—known as recourse explanations—comes with ethical downsides. Specifically, we argue that recourse explanations face several conceptual pitfalls and can lead to problematic explanation hacking, which undermines their ethical status. As an alternative, we advocate that explanations of AI decisions should aim at understanding.
  • What Kind of Explanations Do We Get from Agent-Based Models of Scientific Inquiry? Dunja Šešelja - 2022 - In Tomas Marvan, Hanne Andersen, Hasok Chang, Benedikt Löwe & Ivo Pezlar (eds.), Proceedings of the 16th International Congress of Logic, Methodology and Philosophy of Science and Technology. London: College Publications.
    Agent-based modelling has become a well-established method in social epistemology and philosophy of science, but the question of what kind of explanations these models provide remains largely open. This paper is dedicated to this issue. It starts by distinguishing between real-world phenomena, real-world possibilities, and logical possibilities as different kinds of targets which agent-based models can represent. I argue that models representing the former two kinds provide how-actually explanations or causal how-possibly explanations. In contrast, models that represent logical possibilities provide (...)
  • A Defense of Truth as a Necessary Condition on Scientific Explanation. Christopher Pincock - 2021 - Erkenntnis 88 (2): 621-640.
    How can a reflective scientist put forward an explanation using a model when they are aware that many of the assumptions used to specify that model are false? This paper addresses this challenge by making two substantial assumptions about explanatory practice. First, many of the propositions deployed in the course of explaining have a non-representational function. In particular, a proposition that a scientist uses and also believes to be false, i.e. an "idealization", typically has some non-representational function in the practice. (...)