Results for 'XAI'

30 found
  1. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse (...)
    4 citations
  2. Explainable Artificial Intelligence (XAI): Enhancing Transparency and Trust in Machine Learning Models. Prasad Pasam Thulasiram - 2025 - International Journal for Innovative Engineering and Management Research 14 (1):204-213.
    This research reviews explanation and interpretation methods for Explainable Artificial Intelligence (XAI) in order to improve the interpretability of complex machine learning models. The study examines the influence of XAI on users' trust in an Artificial Intelligence system and investigates ethical concerns, particularly fairness and bias in non-transparent models. It discusses the shortfalls of XAI techniques, placing particular emphasis on their potential for extended scope, enhancement, and scalability. A number of outstanding issues especially in need of further work involve (...)
    22 citations
  3. Explainable AI (XAI). Rami Al-Dahdooh, Ahmad Marouf, Mahmoud Jamal Abu Ghali, Ali Osama Mahdi, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2025 - International Journal of Academic Information Systems Research (IJAISR) 9 (1):65-70.
    As artificial intelligence (AI) systems become increasingly complex and pervasive, the need for transparency and interpretability has never been more critical. Explainable AI (XAI) addresses this need by providing methods and techniques to make AI decisions more understandable to humans. This paper explores the core principles of XAI, highlighting its importance for trust, accountability, and ethical AI deployment. We examine various XAI techniques, including interpretable models and post-hoc explanation methods, and discuss their strengths and limitations. Additionally, we present case (...)
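
    (Illustrative sketch, not drawn from the paper above: one common post-hoc explanation method is a "global surrogate", a shallow interpretable model fitted to a black-box model's predictions. The example below is a minimal Python sketch under assumed choices; the synthetic dataset, the random forest standing in for the black box, and the depth-3 decision tree are all hypothetical placeholders.)

```python
# Minimal global-surrogate sketch (all modelling choices are illustrative assumptions).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data and an opaque "black-box" model (placeholder choices).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit an interpretable surrogate to mimic the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box on the same data.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")

# The printed rules act as the human-readable explanation of the black box's behaviour.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```

    The tree's rules serve as the explanation, while the fidelity score indicates how faithfully the surrogate tracks the black box; reporting that gap is one of the limitations of post-hoc methods discussed in this literature.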
  4. Axe the X in XAI: A Plea for Understandable AI. Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi, Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)
  5. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI. Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum in science. Idealizations (...)
    1 citation
  6. What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
    27 citations
  7. Exploring Explainable AI (XAI) In Deep Learning: Balancing Transparency and Model Performance In. R. Kamali - 2024 - International Journal of Multidisciplinary and Scientific Emerging Research 12 (2):921-926.
    The growing adoption of deep learning models in critical domains, such as healthcare, finance, and autonomous systems, has highlighted the need for interpretability and transparency. Explainable AI (XAI) aims to provide insights into the decision-making processes of complex models, improving their trustworthiness and enabling accountability. However, one of the key challenges is balancing the trade-off between model transparency and performance. While explainability can sometimes compromise the predictive power of models, deep learning, with its inherent complexity, exacerbates this issue. This paper (...)
  8. Contextual Transparency: A Framework for Reporting AI, GenAI, and Agentic System Deployments across Industries. Pradhan Rashmiranjan - 2025 - International Journal of Innovative Research in Computer and Communication Engineering 13 (3):2161-2168.
    The industrial proliferation of AI necessitates robust contextual transparency, often lacking in current reporting. This paper introduces a framework for comprehensive reporting of AI, GenAI, and Agentic AI, moving beyond performance metrics. It prioritizes data provenance, algorithmic clarity, operational context, and decision rationale, crucial for trust and accountability. Data provenance ensures integrity, algorithmic clarity demystifies operations, operational context situates performance, and XAI elucidates outputs. For GenAI, transparency on model architecture, training data, and ethics is paramount. Agentic AI requires insights into (...)
    1 citation
  9. Transparency and Interpretability in Cloud-based Machine Learning with Explainable AI. V. Talati Dhruvitkumar - 2024 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 7 (7):11823-11831.
    With the increased complexity of machine learning models and their widespread use in cloud applications, interpretability and transparency of decision-making are the highest priority. Explainable AI (XAI) methods seek to shed light on the inner workings of machine learning models, hence making them more interpretable and enabling users to rely on them. In this article, we explain the importance of XAI in cloud computing environments, specifically with regard to having interpretable models and explainable decision-making. XAI is the essence of a (...)
  10. Local explanations via necessity and sufficiency: unifying theory and practice. David Watson, Limor Gultchin, Taly Ankur & Luciano Floridi - 2022 - Minds and Machines 32:185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for computing explanatory factors (...)
    1 citation
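
    (Illustrative sketch, not the algorithm developed in the paper: necessity and sufficiency can be given rough perturbation-based estimates for a single binary feature of a fitted model. The example below uses simplified, assumed definitions: sufficiency as how often forcing the feature on turns a negative prediction positive, and necessity as how often forcing it off turns a positive prediction negative. The dataset and logistic-regression model are hypothetical placeholders.)

```python
# Toy estimate of per-feature sufficiency and necessity (assumed, simplified definitions).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
X[:, 0] = (X[:, 0] > 0).astype(float)               # binarize feature 0 for the demo
model = LogisticRegression(max_iter=1000).fit(X, y)

def intervene(X, j, value):
    """Return a copy of X with feature j forced to `value`."""
    X2 = X.copy()
    X2[:, j] = value
    return X2

pred = model.predict(X)
pred_on = model.predict(intervene(X, 0, 1.0))        # feature forced "on"
pred_off = model.predict(intervene(X, 0, 0.0))       # feature forced "off"

neg, pos = pred == 0, pred == 1
sufficiency = (pred_on[neg] == 1).mean()    # turning the feature on yields a positive
necessity = (pred_off[pos] == 0).mean()     # turning the feature off removes the positive
print(f"sufficiency ~ {sufficiency:.2f}, necessity ~ {necessity:.2f}")
```

    Under these toy definitions, a feature scoring high on both measures is a strong local explanatory factor; the paper develops such notions rigorously within a single formal framework.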
  11. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration. Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and has created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has treated explainability as a scientific problem to be dealt with using typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  12. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy. Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias can be handled (...)
    4 citations
  13. The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
    39 citations
  14. Cultural Bias in Explainable AI Research. Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
    3 citations
  15. Developing New Methods for Bias Detection, Mitigation, and Algorithmic Transparency. Shradha Shinde Hemant Kokil, Rutuja Narayankar, Gayatri Kadam - 2025 - International Journal of Multidisciplinary and Scientific Emerging Research 13 (2):895-897.
    The growing use of artificial intelligence (AI) systems in decision-making across various domains has raised critical concerns about bias, fairness, and transparency. AI algorithms can inadvertently perpetuate biases based on the data they are trained on, resulting in outcomes that disproportionately affect certain groups. This paper proposes new methods for detecting and mitigating bias in AI systems while ensuring greater algorithmic transparency. The focus is on developing innovative approaches to identify bias at multiple stages of AI development, from data collection (...)
  16. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies. Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
    1 citation
  17. Certifiable AI. Jobst Landgrebe - 2022 - Applied Sciences 12 (3):1050.
    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged (...)
    2 citations
  18. Leveraging Explainable AI and Multimodal Data for Stress Level Prediction in Mental Health Diagnostics. Destiny Agboro - 2025 - International Journal of Research and Scientific Innovation.
    The increasing prevalence of mental health issues, particularly stress, has necessitated the development of data-driven, interpretable machine learning models for early detection and intervention. This study leverages multimodal data, including activity levels, perceived stress scores (PSS), and event counts, to predict stress levels among individuals. A series of models, including Logistic Regression, Random Forest, Gradient Boosting, and Neural Networks, were evaluated for their predictive performance. Results demonstrated that ensemble models, particularly Random Forest and Gradient Boosting, performed significantly better compared to (...)
  19. Explaining Go: Challenges in Achieving Explainability in AI Go Programs. Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the harder (...)
  20. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science. Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  21. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher. Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them (...)
    2 citations
  22. Enhancing Interpretability in Distributed Constraint Optimization Problems. M. Bhuvana Chandra C. Anand - 2025 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 8 (1):361-364.
    Distributed Constraint Optimization Problems (DCOPs) provide a framework for solving multi-agent coordination tasks efficiently. However, their black-box nature often limits transparency and trust in decision-making processes. This paper explores methods to enhance interpretability in DCOPs, leveraging explainable AI (XAI) techniques. We introduce a novel approach incorporating heuristic explanations, constraint visualization, and model-agnostic methods to provide insights into DCOP solutions. Experimental results demonstrate that our method improves human understanding and debugging of DCOP solutions while maintaining solution quality.
  23. Scalable AI and data processing strategies for hybrid cloud environments. V. Talati Dhruvitkumar - 2021 - International Journal of Science and Research Archive 10 (03):482-492.
    Hybrid cloud infrastructure is increasingly becoming essential to enable scalable artificial intelligence (AI) and data processing, and it offers organizations greater flexibility, computational capabilities, and cost efficiency. This paper discusses the strategic use of hybrid cloud environments to enhance AI-based data workflows while addressing key challenges such as latency, integration complexity, infrastructure management, and security. In-depth discussions of solutions like federated multi-cloud models, cloud-native workload automation, quantum computing, and blockchain-driven data governance are presented. Examples of real-world implementation case (...)
  24. Beyond Human: Deep Learning, Explainability and Representation. M. Beatrice Fazi - 2021 - Theory, Culture and Society 38 (7-8):55-77.
    This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning is its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the (...)
    7 citations
  25. AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
    8 citations
  26. Interpretability and Unification. Adrian Erasmus & Tyler D. P. Brunet - 2022 - Philosophy and Technology 35 (2):1-6.
    In a recent reply to our article, “What is Interpretability?,” Prasetya argues against our position that artificial neural networks are explainable. It is claimed that our indefeasibility thesis—that adding complexity to an explanation of a phenomenon does not make the phenomenon any less explainable—is false. More precisely, Prasetya argues that unificationist explanations are defeasible to increasing complexity, and thus, we may not be able to provide such explanations of highly complex AI models. The reply highlights an important lacuna in our (...)
    3 citations
  27. Integrating Predictive Analytics into Risk Management: A Modern Approach for Financial Institutions. Palakurti Naga Ramesh - 2025 - International Journal of Innovative Research in Science Engineering and Technology 14 (1):122-132.
    This paper examines how predictive analytics enhances risk management in financial institutions. Advanced tools like machine learning and statistical modeling help predict risks, identify trends, and implement strategies to prevent losses by analyzing historical and real-time data. It covers the use of predictive analytics for credit risk, market risk, operational risk, and fraud detection, with practical case studies. Additionally, it discusses challenges, ethical issues, and prospects in this field.
    2 citations
  28. Multi Agent Model Based Risk Prediction in Banking Transaction Using Deep Learning Model. Girish Wali Praveen Sivathapandi - 2023 - Journal of Critical Reviews 10 (2):289-298.
    The banking sector faces growing challenges in identifying and managing risks due to the complexity of financial transactions and increasing fraud. This research presents a framework that combines multiple agents with deep learning to improve risk prediction in banking. Each agent focuses on specific tasks like cleaning data, selecting important features, and detecting unusual activities, ensuring a detailed risk assessment. A deep learning model is used to analyze large amounts of transaction data and identify patterns that may signal potential risks. (...)
    14 citations
  29. Explainable transformers in financial forecasting. P. Prakash V. Govindaraj, H. V. Jaganathan - 2023 - World Journal of Advanced Research and Reviews 20 (02):1434–1441.
    This study presents a novel transformer-based model specifically designed for financial forecasting, integrating explainability mechanisms such as SHAP (SHapley Additive exPlanations) values and attention visualizations to enhance interpretability. Unlike previous models, which often compromise between accuracy and transparency, our approach balances predictive accuracy with interpretability, allowing stakeholders to gain deeper insights into the factors driving market changes. By revealing critical market influences through feature importance and attention maps, this model provides both robustness and transparency, catering to the needs of high-stakes (...)
    2 citations
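
    (Illustrative sketch: the abstract mentions SHAP (SHapley Additive exPlanations) values. The example below shows, under assumed choices, how SHAP attributions are typically computed with the shap library for a generic fitted model; the synthetic data, the gradient-boosting regressor, and the feature names are hypothetical and do not reproduce the paper's transformer pipeline.)

```python
# Generic SHAP attribution sketch (synthetic data and model are illustrative assumptions).
import numpy as np
import shap                                    # pip install shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
feature_names = [f"factor_{i}" for i in range(X.shape[1])]   # hypothetical feature names
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Shapley values attribute each individual prediction to the input features.
explainer = shap.Explainer(model, X, feature_names=feature_names)
shap_values = explainer(X)

# Global view: mean absolute SHAP value per feature as an overall importance ranking.
importance = np.abs(shap_values.values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

# Optional visual summaries (require matplotlib), e.g. shap.plots.beeswarm(shap_values)
```

    Per-prediction attributions of this kind, alongside attention maps, are the sort of interpretability output the paper weighs against predictive accuracy.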
  30. Towards Knowledge-driven Distillation and Explanation of Black-box Models. Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello, Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target (...)