Transparency and Interpretability in Cloud-Based Machine Learning with Explainable AI

International Journal of Multidisciplinary Research in Science, Engineering and Technology 7 (7):11823-11831 (2024)

Abstract

With the increasing complexity of machine learning models and their widespread use in cloud applications, the interpretability and transparency of their decision-making have become a top priority. Explainable AI (XAI) methods seek to shed light on the inner workings of machine learning models, making them more interpretable and enabling users to trust them. In this article, we examine the importance of XAI in cloud computing environments, with particular regard to interpretable models and explainable decision-making. XAI represents a paradigm shift in cloud-based ML, promoting transparency, accountability, and ethical decision-making. As cloud-based ML becomes mainstream, the need for XAI grows, underscoring the need for continued innovation and cooperation to realize the full potential of interpretable AI systems. We survey current techniques for achieving explainability in AI systems, along with their feasibility and the issues they raise in cloud environments. Additionally, we discuss the implications of XAI for different stakeholders, such as developers, end-users, and regulatory authorities, and identify future research directions in this fast-growing area.
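The abstract refers to current techniques for achieving explainability without naming a specific one. As a hedged illustration (not drawn from the article itself), the sketch below implements permutation feature importance, a widely used model-agnostic XAI method: a feature's importance is measured by how much the model's error increases when that feature's values are randomly shuffled. The toy data, the linear stand-in model, and all names here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Hypothetical stand-in "model": an ordinary least-squares fit.
# Any black-box predictor could take its place; the technique is
# model-agnostic.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

def permutation_importance(X, y, n_repeats=10):
    """Importance of feature j = mean increase in error after
    shuffling column j, averaged over n_repeats shuffles."""
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            increases.append(mse(y, predict(Xp)) - baseline)
        importances[j] = np.mean(increases)
    return importances

imp = permutation_importance(X, y)
# Expect feature 0 to dominate and feature 2 to be near zero.
```

In a cloud deployment, a routine like this can run against a hosted model endpoint without access to its internals, which is one reason model-agnostic explanations are attractive in the settings the article discusses.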

Analytics

Added to PP
2025-03-22
