  • Socially disruptive technologies and epistemic injustice. J. K. G. Hopster - 2024 - Ethics and Information Technology 26 (1): 1-8.
    Recent scholarship on technology-induced ‘conceptual disruption’ has spotlighted the notion of a conceptual gap. Conceptual gaps have also been discussed in scholarship on epistemic injustice, yet up until now these bodies of work have remained disconnected. This article shows that ‘gaps’ of interest to both bodies of literature are closely related, and argues that a joint examination of conceptual disruption and epistemic injustice is fruitful for both fields. I argue that hermeneutical marginalization—a skewed division of hermeneutical resources, which serves to (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Algorithmic Profiling as a Source of Hermeneutical Injustice. Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies: 1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate how (...)
  • AI as an Epistemic Technology. Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5): 1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (_including_ other (...)
  • Dirty data labeled dirt cheap: epistemic injustice in machine learning systems. Gordon Hull - 2023 - Ethics and Information Technology 25 (3): 1-14.
    Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their (...)
  • Beyond Preferences in AI Alignment. Tan Zhi-Xuan, Micah Carroll, Matija Franklin & Hal Ashton - forthcoming - Philosophical Studies: 1-51.
    The dominant practice of AI alignment assumes (1) that preferences are an adequate representation of human values, (2) that human rationality can be understood in terms of maximizing the satisfaction of preferences, and (3) that AI systems should be aligned with the preferences of one or more humans to ensure that they behave safely and in accordance with our values. Whether implicitly followed or explicitly endorsed, these commitments constitute what we term a preferentist approach to AI alignment. In this paper, (...)
  • Detecting your depression with your smartphone? – An ethical analysis of epistemic injustice in passive self-tracking apps. Mirjam Faissner, Eva Kuhn, Regina Müller & Sebastian Laacke - 2024 - Ethics and Information Technology 26 (2): 1-14.
    Smartphone apps might offer a low-threshold approach to the detection of mental health conditions, such as depression. Based on the gathering of ‘passive data,’ some apps generate a user’s ‘digital phenotype,’ compare it to those of users with clinically confirmed depression and issue a warning if a depressive episode is likely. These apps can, thus, serve as epistemic tools for affected users. From an ethical perspective, it is crucial to consider epistemic injustice to promote socially responsible innovations within digital mental (...)
  • Conversational Artificial Intelligence and the Potential for Epistemic Injustice. Michiel De Proost & Giorgia Pozzi - 2023 - American Journal of Bioethics 23 (5): 51-53.
    In their article, Sedlakova and Trachsel (2023) propose a holistic, ethical, and epistemic analysis of conversational artificial intelligence (CAI) in psychotherapeutic settings. They mainly descri...
  • Is Epistemic Autonomy Technologically Possible Within Social Media? A Socio-Epistemological Investigation of the Epistemic Opacity of Social Media Platforms. Margherita Mattioni - forthcoming - Topoi: 1-14.
    This article aims to provide a coherent and comprehensive theoretical framework of the main socio-epistemic features of social media. The first part consists of a concise discussion of the main epistemic consequences of personalised information filtering, with a focus on echo chambers and their many different implications. The middle section instead hosts an analytical investigation of the cognitive and epistemic environments of these platforms aimed at establishing whether, and to what extent, they allow their users to be epistemically vigilant with (...)
  • Toward an Ethics of AI Belief. Winnie Ma & Vincent Valton - 2024 - Philosophy and Technology 37 (3): 1-28.
    In this paper we, an epistemologist and a machine learning scientist, argue that we need to pursue a novel area of philosophical research in AI – the ethics of belief for AI. Here we take the ethics of belief to refer to a field at the intersection of epistemology and ethics concerned with possible moral, practical, and other non-truth-related dimensions of belief. In this paper we will primarily be concerned with the normative question within the ethics of belief regarding what (...)
  • Digital Simulacra and the Call for Epistemic Responsibility: An Ubuntu Perspective. Brandon Ferlito & Michiel De Proost - 2023 - American Journal of Bioethics 23 (9): 91-93.
    Cho and Martinez-Martin (2023) discuss the ethical challenges associated with the use of digital simulacra (also known as digital twins) in biomedicine, specifically focusing on the issue of episte...
  • First-person disavowals of digital phenotyping and epistemic injustice in psychiatry. Stephanie K. Slack & Linda Barclay - 2023 - Medicine, Health Care and Philosophy 26 (4): 605-614.
    Digital phenotyping will potentially enable earlier detection and prediction of mental illness by monitoring human interaction with and through digital devices. Notwithstanding its promises, it is certain that a person’s digital phenotype will at times be at odds with their first-person testimony of their psychological states. In this paper, we argue that there are features of digital phenotyping in the context of psychiatry which have the potential to exacerbate the tendency to dismiss patients’ testimony and treatment preferences, which can be (...)
  • Merging Minds: The Conceptual and Ethical Impacts of Emerging Technologies for Collective Minds. David M. Lyreskog, Hazem Zohny, Julian Savulescu & Ilina Singh - 2023 - Neuroethics 16 (1): 1-17.
    A growing number of technologies are currently being developed to improve and distribute thinking and decision-making. Rapid progress in brain-to-brain interfacing and swarming technologies promises to transform how we think about collective and collaborative cognitive tasks across domains, ranging from research to entertainment, and from therapeutics to military applications. As these tools continue to improve, we are prompted to monitor how they may affect our society on a broader level, but also how they may reshape our fundamental understanding of agency, (...)
  • Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare. Giorgia Pozzi - 2023 - Ethics and Information Technology 25 (1): 1-12.
    Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be timely addressed. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it is going unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...)
  • Testimonial injustice in medical machine learning. Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8): 536-540.
    Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of responsible ML use that bears not only an ethical but also a significant epistemic dimension. I focus on ML systems’ role in mediating patient–physician relations. I thereby consider how ML systems may silence patients’ voices and relativise (...)