  • Human-AI coevolution. Dino Pedreschi, Luca Pappalardo, Emanuele Ferragina, Ricardo Baeza-Yates, Albert-László Barabási, Frank Dignum, Virginia Dignum, Tina Eliassi-Rad, Fosca Giannotti, János Kertész, Alistair Knott, Yannis Ioannidis, Paul Lukowicz, Andrea Passarella, Alex Sandy Pentland, John Shawe-Taylor & Alessandro Vespignani - 2025 - Artificial Intelligence 339 (C):104244.
  • Explaining AI through mechanistic interpretability. Lena Kästner & Barnaby Crook - 2024 - European Journal for Philosophy of Science 14 (4):1-25.
    Recent work in explainable artificial intelligence (XAI) attempts to render opaque AI systems understandable through a divide-and-conquer strategy. However, this fails to illuminate how trained AI systems work as a whole. Precisely this kind of functional understanding is needed, though, to satisfy important societal desiderata such as safety. To remedy this situation, we argue, AI researchers should seek mechanistic interpretability, viz. apply coordinated discovery strategies familiar from the life sciences to uncover the functional organisation of complex AI systems. Additionally, theorists (...)
  • Human performance consequences of normative and contrastive explanations: An experiment in machine learning for reliability maintenance. Davide Gentile, Birsen Donmez & Greg A. Jamieson - 2023 - Artificial Intelligence 321 (C):103945.
  • The black box problem revisited. Real and imaginary challenges for automated legal decision making. Bartosz Brożek, Michał Furman, Marek Jakubiec & Bartłomiej Kucharzyk - 2024 - Artificial Intelligence and Law 32 (2):427-440.
    This paper addresses the black-box problem in artificial intelligence (AI), and the related problem of explainability of AI in the legal context. We argue, first, that the black box problem is, in fact, a superficial one as it results from an overlap of four different – albeit interconnected – issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem. Thus, we propose a framework for discussing both the black box problem and the explainability of AI. We (...)
  • Creating meaningful work in the age of AI: explainable AI, explainability, and why it matters to organizational designers. Kristin Wulff & Hanne Finnestrand - forthcoming - AI and Society:1-14.
    In this paper, we contribute to research on enterprise artificial intelligence (AI), specifically to how organizations improve their customer experiences and internal processes by using the type of AI called machine learning (ML). Many organizations are struggling to get enough value from their AI efforts, and part of this is related to explainability. The need for explainability is especially high for so-called black-box ML models, where decisions are made without anyone understanding how an AI reached (...)
  • Tractability of explaining classifier decisions. Martin C. Cooper & João Marques-Silva - 2023 - Artificial Intelligence 316 (C):103841.
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
  • AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
  • Defining Explanation and Explanatory Depth in XAI. Stefan Buijsman - 2022 - Minds and Machines 32 (3):563-584.
    Explainable artificial intelligence (XAI) aims to help people understand black box algorithms, particularly their outputs. But what are these explanations and when is one explanation better than another? The manipulationist definition of explanation from the philosophy of science offers good answers to these questions, holding that an explanation consists of a generalization that shows what happens in counterfactual cases. Furthermore, when it comes to explanatory depth this account holds that a generalization that has more abstract variables, is broader in (...)
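    As a rough illustration of the manipulationist account sketched above (my own hypothetical example, nothing from Buijsman's paper): a counterfactual explanation of a classifier's output names a small input change under which the output would have differed. The sketch below invents a toy "loan approval" model and greedily searches for such a change; the data, the model, and the search procedure are all assumptions made for illustration.

```python
# Illustrative sketch only (not Buijsman's method): a counterfactual
# explanation in the manipulationist spirit -- find a small input change
# under which a classifier's decision would have flipped. The "loan" data,
# the model, and the search are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "loan approval" data: features are [income, debt], both standardized.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # approve when income outweighs debt

model = LogisticRegression().fit(X, y)

def counterfactual(x, step=0.05, max_steps=500):
    """Nudge the most influential feature until the prediction flips."""
    original = model.predict([x])[0]
    coefs = model.coef_[0]
    i = int(np.argmax(np.abs(coefs)))       # most influential feature
    # Move feature i so the decision score heads toward the boundary.
    direction = 1 if (original == 0) == (coefs[i] > 0) else -1
    cf = np.array(x, dtype=float)
    for _ in range(max_steps):
        cf[i] += direction * step
        if model.predict([cf])[0] != original:
            return cf                        # flip found
    return None                              # no flip within budget

x = np.array([-0.2, 0.3])                    # a rejected applicant
print("decision:", model.predict([x])[0], "counterfactual:", counterfactual(x))
```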
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  • A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. Andreas Theodorou & Laura Sartori - 2022 - Ethics and Information Technology 24 (1):1-11.
    Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Transparency and the Black Box Problem: Why We Do Not Trust AI. Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
  • Algorithmic and human decision making: for a double standard of transparency. Mario Günther & Atoosa Kasirzadeh - 2022 - AI and Society 37 (1):375-381.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments (...)
  • Is explainable artificial intelligence intrinsically valuable? Nathan Colaner - 2022 - AI and Society 37 (1):231-238.
    There is general consensus that explainable artificial intelligence is valuable, but there is significant divergence when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions in the XAI literature that are sometimes asked and addressed simultaneously. The first and most obvious is the ‘how’ question—some version of: ‘how do we develop technical strategies to achieve XAI?’ Another question is specifying what kind of explanation is worth having in the (...)
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):329-335.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust arise with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies. Eoin M. Kenny, Courtney Ford, Molly Quinn & Mark T. Keane - 2021 - Artificial Intelligence 294 (C):103459.
  • Procedural fairness in algorithmic decision-making: the role of public engagement. Marie Christin Decker, Laila Wegner & Carmen Leicht-Scholten - 2025 - Ethics and Information Technology 27 (1):1-16.
    Despite the widespread use of automated decision-making (ADM) systems, they are often developed without involving the public or those directly affected, leading to concerns about systematic biases that may perpetuate structural injustices. Existing formal fairness approaches primarily focus on statistical outcomes across demographic groups or individual fairness, yet these methods reveal ambiguities and limitations in addressing fairness comprehensively. This paper argues for a holistic approach to algorithmic fairness that integrates procedural fairness, considering both decision-making processes and their outcomes. Procedural fairness (...)
  • The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
  • AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective. Francesca Lagioia & Giovanni Sartor - 2020 - Philosophy and Technology 33 (3):433-465.
    Criminal liability for acts committed by AI systems has recently become a hot legal topic. This paper includes three different contributions. The first contribution is an analysis of the extent to which an AI system can satisfy the requirements for criminal liability: accomplishing an actus reus, having the corresponding mens rea, possessing the cognitive capacities needed for responsibility. The second contribution is a discussion of criminal activity accomplished by an AI entity, with reference to a recent case involving an online (...)
  • Policy advice and best practices on bias and fairness in AI. Jose M. Alvarez, Alejandra Bringas Colmenarejo, Alaa Elobaid, Simone Fabbrizzi, Miriam Fahimi, Antonio Ferrara, Siamak Ghodsi, Carlos Mougan, Ioanna Papageorgiou, Paula Reyero, Mayra Russo, Kristen M. Scott, Laura State, Xuan Zhao & Salvatore Ruggieri - 2024 - Ethics and Information Technology 26 (2):1-26.
    The literature addressing bias and fairness in AI models (fair-AI) is growing at a fast pace, making it difficult for new researchers and practitioners to have a bird’s-eye view of the field. In particular, many policy initiatives, standards, and best practices in fair-AI have been proposed for setting principles, procedures, and knowledge bases to guide and operationalize the management of bias and fairness. The first objective of this paper is to concisely survey the state-of-the-art of fair-AI methods and resources, (...)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • On the robustness of sparse counterfactual explanations to adverse perturbations. Marco Virgolin & Saverio Fracaros - 2023 - Artificial Intelligence 316 (C):103840.
  • Evaluating XAI: A comparison of rule-based and example-based explanations. Jasper van der Waa, Elisabeth Nieuwburg, Anita Cremers & Mark Neerincx - 2021 - Artificial Intelligence 291 (C):103404.
  • From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
    The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132:741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the (...)
  • A top-level model of case-based argumentation for explanation: Formalisation and experiments. Henry Prakken & Rosa Ratsma - 2022 - Argument and Computation 13 (2):159-194.
    This paper proposes a formal top-level model of explaining the outputs of machine-learning-based decision-making applications and evaluates it experimentally with three data sets. The model draws on AI & law research on argumentation with cases, which models how lawyers draw analogies to past cases and discuss their relevant similarities and differences in terms of relevant factors and dimensions in the problem domain. A case-based approach is natural since the input data of machine-learning applications can be seen as cases. While the (...)
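    To make the factor-based comparison of cases concrete, here is a small hypothetical Python sketch in the HYPO/CATO style that this AI & law tradition builds on; it is not Prakken & Ratsma's formal model, and the factor names and casebase are invented. Precedents are ranked by the factors they share with a focus case, and the leftover factors show where each precedent can be distinguished.

```python
# Hypothetical sketch of factor-based case comparison (invented example,
# not the paper's formal top-level model).
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    name: str
    factors: frozenset   # factors present in the case
    outcome: str         # "plaintiff" or "defendant"

def shared(precedent: Case, focus: frozenset) -> frozenset:
    """Factors the precedent has in common with the focus case."""
    return precedent.factors & focus

def best_precedents(casebase, focus):
    """Most on-point precedents: maximal overlap with the focus case."""
    ranked = sorted(casebase, key=lambda c: len(shared(c, focus)), reverse=True)
    top = len(shared(ranked[0], focus))
    return [c for c in ranked if len(shared(c, focus)) == top]

# Invented toy casebase about trade-secret disputes.
cb = [
    Case("Alpha", frozenset({"security-measures", "info-disclosed"}), "defendant"),
    Case("Bravo", frozenset({"security-measures", "bribed-employee"}), "plaintiff"),
]
focus = frozenset({"security-measures", "bribed-employee", "info-disclosed"})
for c in best_precedents(cb, focus):
    print(c.name, "->", c.outcome,
          "| shared:", sorted(shared(c, focus)),
          "| distinguishable on:", sorted(focus - c.factors))
```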
  • Using ontologies to enhance human understandability of global post-hoc explanations of black-box models. Roberto Confalonieri, Tillman Weyde, Tarek R. Besold & Fermín Moscoso del Prado Martín - 2021 - Artificial Intelligence 296 (C):103471.
  • A Teleological Approach to Information Systems Design. Mattia Fumagalli, Roberta Ferrario & Giancarlo Guizzardi - 2024 - Minds and Machines 34 (3):1-35.
    In recent years, the design and production of information systems have seen significant growth. However, these information artefacts often exhibit characteristics that compromise their reliability. This issue appears to stem from the neglect or underestimation of certain crucial aspects in the application of Information Systems Design (ISD). For example, it is frequently difficult to prove when one of these products does not work properly or works incorrectly (falsifiability), their usage is often left to subjective experience and somewhat arbitrary choices (anecdotes), (...)
  • How Much Should You Care About Algorithmic Transparency as Manipulation? Ulrik Franke - 2022 - Philosophy and Technology 35 (4):1-7.
    Wang (Philosophy & Technology 35, 2022) introduces a Foucauldian power account of algorithmic transparency. This short commentary explores when this power account is appropriate. It is first observed that the power account is a constructionist one, and that such accounts often come with both factual and evaluative claims. In an instance of Hume’s law, the evaluative claims do not follow from the factual claims, leaving open the question of how much constructionist commitment (Hacking, 1999) one should have. The concept of (...)
  • Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • Owning Decisions: AI Decision-Support and the Attributability-Gap. Jannik Zeiser - 2024 - Science and Engineering Ethics 30 (4):1-19.
    Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine’s behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today’s AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make (...)
  • The quest of parsimonious XAI: A human-agent architecture for explanation formulation. Yazan Mualla, Igor Tchappi, Timotheus Kampik, Amro Najjar, Davide Calvaresi, Abdeljalil Abbas-Turki, Stéphane Galland & Christophe Nicolle - 2022 - Artificial Intelligence 302 (C):103573.
  • Transparent AI: reliabilist and proud. Abhishek Mishra - forthcoming - Journal of Medical Ethics.
    Durán et al. argue in ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’ that traditionally proposed solutions to make black box machine learning models in medicine less opaque and more transparent are, though necessary, ultimately not sufficient to establish their overall trustworthiness. This is because transparency procedures currently employed, such as the use of an interpretable predictor, cannot fully overcome the opacity of such models. Computational reliabilism, an alternate approach to (...)
  • Cognitive architectures for artificial intelligence ethics. Steve J. Bickley & Benno Torgler - 2023 - AI and Society 38 (2):501-519.
    As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI. Despite human involvement at every stage of the production process from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by their (...)
  • Legal requirements on explainability in machine learning. Adrien Bibal, Michael Lognoul, Alexandre de Streel & Benoît Frénay - 2020 - Artificial Intelligence and Law 29 (2):149-169.
    Deep learning and other black-box models are becoming more and more popular today. Despite their high performance, they may not be accepted ethically or legally because of their lack of explainability. This paper presents the increasing number of legal requirements on machine learning model interpretability and explainability in the context of private and public decision making. It then explains how those legal requirements can be implemented into machine-learning models and concludes with a call for more inter-disciplinary research on explainability.
  • A data-centric approach for ethical and trustworthy AI in journalism. Laurence Dierickx, Andreas Lothe Opdahl, Sohail Ahmed Khan, Carl-Gustav Lindén & Diana Carolina Guerrero Rojas - 2024 - Ethics and Information Technology 26 (4):1-13.
    AI-driven journalism refers to various methods and tools for gathering, verifying, producing, and distributing news information. Their potential is to extend human capabilities and create new forms of augmented journalism. Although scholars agree on the necessity of embedding journalistic values in these systems to make them accountable, less attention has been paid to data quality, even though the accuracy and efficiency of any machine learning task depend on high-quality data. Assessing data quality in the context of AI-driven journalism requires a (...)
  • Algorithmic Transparency, Manipulation, and Two Concepts of Liberty. Ulrik Franke - 2024 - Philosophy and Technology 37 (1):1-6.
    As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but (...)
  • How to Make AlphaGo’s Children Explainable. Woosuk Park - 2022 - Philosophies 7 (3):55.
    Under the rubric of understanding the problem of explainability of AI in terms of abductive cognition, I propose to review the lessons from AlphaGo and her more powerful successors. As AI players in Baduk have arrived at superhuman level, there seems to be no hope for understanding the secret of their breathtakingly brilliant moves. Without making AI players explainable in some ways, both human and AI players would be less-than omniscient, if not ignorant, epistemic agents. Are we bound to have (...)
  • Sources of Understanding in Supervised Machine Learning Models. Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In the last decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a series of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)
  • Counterfactual explanations for misclassified images: How human and machine explanations differ. Eoin Delaney, Arjun Pakrashi, Derek Greene & Mark T. Keane - 2023 - Artificial Intelligence 324 (C):103995.
  • Robots are judging me: Perceived fairness of algorithmic recruitment tools. Airlie Hilliard, Nigel Guenole & Franziska Leutner - 2022 - Frontiers in Psychology 13.
    Recent years have seen rapid advancements in selection assessments, shifting away from human and toward algorithmic judgments of candidates. Indeed, algorithmic recruitment tools have been created to screen candidates’ resumes, assess psychometric characteristics through game-based assessments, and judge asynchronous video interviews, among other applications. While research into candidate reactions to these technologies is still in its infancy, early research in this regard has explored user experiences and fairness perceptions. In this article, we review applicants’ perceptions of the procedural fairness of (...)
  • Levels of explainable artificial intelligence for human-aligned conversational explanations. Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal & Francisco Cruz - 2021 - Artificial Intelligence 299 (C):103525.
  • Detecting and explaining unfairness in consumer contracts through memory networks. Federico Ruggeri, Francesca Lagioia, Marco Lippi & Paolo Torroni - 2021 - Artificial Intelligence and Law 30 (1):59-92.
    Recent work has demonstrated how data-driven AI methods can leverage consumer protection by supporting the automated analysis of legal documents. However, a shortcoming of data-driven approaches is poor explainability. We posit that in this domain useful explanations of classifier outcomes can be provided by resorting to legal rationales. We thus consider several configurations of memory-augmented neural networks where rationales are given a special role in the modeling of context knowledge. Our results show that rationales not only contribute to improve the (...)
  • “That's (not) the output I expected!” On the role of end user expectations in creating explanations of AI systems. Maria Riveiro & Serge Thill - 2021 - Artificial Intelligence 298 (C):103507.
  • Argumentative explanations for interactive recommendations. Antonio Rago, Oana Cocarascu, Christos Bechlivanidis, David Lagnado & Francesca Toni - 2021 - Artificial Intelligence 296 (C):103506.
  • A framework for step-wise explaining how to solve constraint satisfaction problems. Bart Bogaerts, Emilio Gamba & Tias Guns - 2021 - Artificial Intelligence 300 (C):103550.