EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI): ENHANCING TRANSPARENCY AND TRUST IN MACHINE LEARNING MODELS

International Journal for Innovative Engineering and Management Research 14 (1):204-213 (2025)

Abstract

This research reviews explanation and interpretation methods in Explainable Artificial Intelligence (XAI) aimed at improving the interpretability of complex machine learning models. The study examines how XAI influences users' trust in AI systems and investigates ethical concerns, particularly fairness and bias in non-transparent models. It discusses the shortcomings of current XAI techniques, with emphasis on their scope, potential enhancements, and scalability. Outstanding issues in particular need of further work include standardization, user-centered design, and interdisciplinary strategies for improving the practical utility of XAI.
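The abstract refers to XAI techniques for interpreting non-transparent models without naming specific methods. As an illustrative sketch only, not drawn from the paper under review, the snippet below shows permutation feature importance, one common model-agnostic approach; the synthetic dataset, model choice, and parameters are all assumptions made for the example.

# Illustrative sketch (not from the paper): permutation feature importance,
# a model-agnostic XAI technique for explaining opaque models post hoc.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for any tabular prediction task.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A non-transparent ("black-box") model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Explain the fitted model: shuffle each feature on held-out data and
# measure the drop in accuracy the shuffling causes.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")

Larger importance scores indicate features the model relies on more heavily, which is the kind of per-feature transparency the reviewed XAI literature is concerned with.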
