  • Procedural fairness in algorithmic decision-making: the role of public engagement. Marie Christin Decker, Laila Wegner & Carmen Leicht-Scholten - 2025 - Ethics and Information Technology 27 (1):1-16.
    Despite the widespread use of automated decision-making (ADM) systems, they are often developed without involving the public or those directly affected, leading to concerns about systematic biases that may perpetuate structural injustices. Existing formal fairness approaches primarily focus on statistical outcomes across demographic groups or individual fairness, yet these methods reveal ambiguities and limitations in addressing fairness comprehensively. This paper argues for a holistic approach to algorithmic fairness that integrates procedural fairness, considering both decision-making processes and their outcomes. Procedural fairness (...)
  • Creating meaningful work in the age of AI: explainable AI, explainability, and why it matters to organizational designers. Kristin Wulff & Hanne Finnestrand - forthcoming - AI and Society:1-14.
    In this paper, we contribute to research on enterprise artificial intelligence (AI), specifically to organizations improving the customer experiences and their internal processes through using the type of AI called machine learning (ML). Many organizations are struggling to get enough value from their AI efforts, and part of this is related to the area of explainability. The need for explainability is especially high in what is called black-box ML models, where decisions are made without anyone understanding how an AI reached (...)
  • Connecting ethics and epistemology of AI. Federica Russo, Eric Schliesser & Jean Wagemans - forthcoming - AI and Society:1-19.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and (...)
  • AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
  • AI employment decision-making: integrating the equal opportunity merit principle and explainable AI. Gary K. Y. Chan - forthcoming - AI and Society:1-12.
    Artificial intelligence tools used in employment decision-making cut across the multiple stages of job advertisements, shortlisting, interviews and hiring, and actual and potential bias can arise in each of these stages. One major challenge is to mitigate AI bias and promote fairness in opaque AI systems. This paper argues that the equal opportunity merit principle is an ethical approach for fair AI employment decision-making. Further, explainable AI can mitigate the opacity problem by placing greater emphasis on enhancing the understanding of (...)
  • Governing algorithmic decisions: The role of decision importance and governance on perceived legitimacy of algorithmic decisions. Kirsten Martin & Ari Waldman - 2022 - Big Data and Society 9 (1).
    The algorithmic accountability literature to date has primarily focused on procedural tools to govern automated decision-making systems. That prescriptive literature elides a fundamentally empirical question: whether and under what circumstances, if any, is the use of algorithmic systems to make public policy decisions perceived as legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the relative importance of the type of decision, the procedural governance, the input data used, and outcome errors on perceptions (...)
  • AI, big data, and the future of consent. Adam J. Andreotta, Nin Kirkham & Marco Rizzi - 2022 - AI and Society 37 (4):1715-1728.
    In this paper, we discuss several problems with current Big data practices which, we claim, seriously erode the role of informed consent as it pertains to the use of personal information. To illustrate these problems, we consider how the notion of informed consent has been understood and operationalised in the ethical regulation of biomedical research (and medical practices, more broadly) and compare this with current Big data practices. We do so by first discussing three types of problems that can impede (...)
  • Safety by simulation: theorizing the future of robot regulation. Mika Viljanen - 2024 - AI and Society 39 (1):139-154.
    Mobility robots may soon be among us, triggering a need for safety regulation. Robot safety regulation, however, remains underexplored, with only a few articles analyzing what regulatory approaches could be feasible. This article offers an account of the available regulatory strategies and attempts to theorize the effects of simulation-based safety regulation. The article first discusses the distinctive features of mobility robots as regulatory targets and argues that emergent behavior constitutes the key regulatory concern in designing robot safety regulation regimes. In (...)