Developing New Methods for Bias Detection, Mitigation, and Algorithmic Transparency

International Journal of Multidisciplinary and Scientific Emerging Research 13 (2):895-897 (2025)

Abstract

The growing use of artificial intelligence (AI) systems in decision-making across various domains has raised critical concerns about bias, fairness, and transparency. AI algorithms can inadvertently perpetuate biases present in the data on which they are trained, producing outcomes that disproportionately harm certain groups. This paper proposes new methods for detecting and mitigating bias in AI systems while ensuring greater algorithmic transparency. The focus is on developing innovative approaches to identify bias at multiple stages of AI development, from data collection to model deployment. Additionally, the paper emphasizes the need for transparent AI models that allow for explainability and accountability in decision-making. The proposed methods include novel fairness metrics, new tools for detecting biases in datasets, and frameworks for ensuring transparency through explainable AI (XAI) techniques.
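The abstract does not specify the paper's novel fairness metrics, but the general idea of quantifying group-level bias can be illustrated with two standard metrics from the fairness literature: demographic parity difference and equal opportunity difference. The sketch below is illustrative only; the function names and the toy data are assumptions, not the paper's method.

```python
# Illustrative sketch of two standard group-fairness metrics
# (demographic parity and equal opportunity), not the novel
# metrics proposed in the paper itself.
from typing import Sequence


def demographic_parity_difference(y_pred: Sequence[int],
                                  group: Sequence[int]) -> float:
    """Difference in positive-prediction rates between group 1 and group 0."""
    def rate(g: int) -> float:
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / len(preds)
    return rate(1) - rate(0)


def equal_opportunity_difference(y_true: Sequence[int],
                                 y_pred: Sequence[int],
                                 group: Sequence[int]) -> float:
    """Difference in true-positive rates between group 1 and group 0."""
    def tpr(g: int) -> float:
        # Predictions for members of group g whose true label is positive.
        preds = [p for t, p, a in zip(y_true, y_pred, group)
                 if a == g and t == 1]
        return sum(preds) / len(preds)
    return tpr(1) - tpr(0)


# Toy example: a hypothetical screening model that favours group 1.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

dpd = demographic_parity_difference(y_pred, group)   # 0.75 - 0.50 = 0.25
eod = equal_opportunity_difference(y_true, y_pred, group)
```

A nonzero value on either metric flags a disparity between groups; which threshold counts as "unfair" is a policy choice, which is one reason transparency and explainability matter alongside the metrics themselves.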

Analytics

Added to PP
2025-04-14
