  • On Hedden's proof that machine learning fairness metrics are flawed. Anders Søgaard, Klemens Kappel & Thor Grünbaum - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Fairness is about the just distribution of society's resources, and in ML, the main resource being distributed is model performance, e.g. the translation quality produced by machine translation (...)
  • Detecting your depression with your smartphone? – An ethical analysis of epistemic injustice in passive self-tracking apps. Mirjam Faissner, Eva Kuhn, Regina Müller & Sebastian Laacke - 2024 - Ethics and Information Technology 26 (2):1-14.
    Smartphone apps might offer a low-threshold approach to the detection of mental health conditions, such as depression. Based on the gathering of ‘passive data,’ some apps generate a user’s ‘digital phenotype,’ compare it to those of users with clinically confirmed depression and issue a warning if a depressive episode is likely. These apps can, thus, serve as epistemic tools for affected users. From an ethical perspective, it is crucial to consider epistemic injustice to promote socially responsible innovations within digital mental (...)
  • Beyond Preferences in AI Alignment. Tan Zhi-Xuan, Micah Carroll, Matija Franklin & Hal Ashton - forthcoming - Philosophical Studies:1-51.
    The dominant practice of AI alignment assumes (1) that preferences are an adequate representation of human values, (2) that human rationality can be understood in terms of maximizing the satisfaction of preferences, and (3) that AI systems should be aligned with the preferences of one or more humans to ensure that they behave safely and in accordance with our values. Whether implicitly followed or explicitly endorsed, these commitments constitute what we term a preferentist approach to AI alignment. In this paper, we characterize (...)
  • Toward an Ethics of AI Belief. Winnie Ma & Vincent Valton - 2024 - Philosophy and Technology 37 (3):1-28.
    In this paper we, an epistemologist and a machine learning scientist, argue that we need to pursue a novel area of philosophical research in AI – the ethics of belief for AI. Here we take the ethics of belief to refer to a field at the intersection of epistemology and ethics concerned with possible moral, practical, and other non-truth-related dimensions of belief. In this paper we will primarily be concerned with the normative question within the ethics of belief regarding what (...)
  • Why we should talk about institutional (dis)trustworthiness and medical machine learning. Michiel De Proost & Giorgia Pozzi - 2025 - Medicine, Health Care and Philosophy 28 (1):83-92.
    The principle of trust has been placed at the centre as an attitude for engaging with clinical machine learning systems. However, the notions of trust and distrust remain fiercely debated in the philosophical and ethical literature. In this article, we proceed on a structural level ex negativo as we aim to analyse the concept of “institutional distrustworthiness” to achieve a proper diagnosis of how we should not engage with medical machine learning. First, we begin with several examples that hint at (...)