  1. What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids. Sabine Salloch & Andreas Eriksen - forthcoming - American Journal of Bioethics.
    Within the ethical debate on Machine Learning-driven decision support systems (ML-CDSS), notions such as “human in the loop” or “meaningful human control” are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the (...)
  2. Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives. Marieke A. R. Bak, Georg L. Lindinger, Hanno L. Tan, Jeannette Pols, Dick L. Willems, Ayca Koçar & Menno T. Maris - 2024 - BMC Medical Ethics 25 (1):1-15.
    Background: The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, while the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD). Aim: Explore perspectives of patients on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD). Methods: Semi-structured, future scenario-based (...)
  3. The Influence of Using Novel Predictive Technologies on Judgments of Stigma, Empathy, and Compassion among Healthcare Professionals. Daniel Z. Buchman, Daphne Imahori, Christopher Lo, Katrina Hui, Caroline Walker, James Shaw & Karen D. Davis - 2024 - American Journal of Bioethics Neuroscience 15 (1):32-45.
    Background: Our objective was to evaluate whether the description of a machine learning (ML) app or brain imaging technology to predict the onset of schizophrenia or alcohol use disorder (AUD) influences healthcare professionals’ judgments of stigma, empathy, and compassion. Methods: We randomized healthcare professionals (N = 310) to one vignette about a person whose clinician seeks to predict schizophrenia or an AUD, using an ML app, brain imaging, or a psychosocial assessment. Participants used scales to measure their judgments of stigma, empathy, (...)
  4. First-person disavowals of digital phenotyping and epistemic injustice in psychiatry. Stephanie K. Slack & Linda Barclay - 2023 - Medicine, Health Care and Philosophy 26 (4):605-614.
    Digital phenotyping will potentially enable earlier detection and prediction of mental illness by monitoring human interaction with and through digital devices. Notwithstanding its promises, it is certain that a person’s digital phenotype will at times be at odds with their first-person testimony of their psychological states. In this paper, we argue that there are features of digital phenotyping in the context of psychiatry which have the potential to exacerbate the tendency to dismiss patients’ testimony and treatment preferences, which can be (...)
  5. Ethics, First. Melissa D. McCradden - 2023 - American Journal of Bioethics 23 (9):55-56.
    If you’ve ever had a time where your smartwatch buzzed to say you’ve completed your daily steps while you were actually just brushing your hair, you’ll know our data aren’t perfect. Every day, we c...