  • Machine learning, healthcare resource allocation, and patient consent. Jamie Webb - forthcoming - The New Bioethics:1-22.
    The impact of machine learning in healthcare on patient informed consent is now the subject of significant inquiry in bioethics. However, the topic has predominantly been considered in the context of black box diagnostic or treatment recommendation algorithms. The impact of machine learning involved in healthcare resource allocation on patient consent remains undertheorized. This paper will establish where patient consent is relevant in healthcare resource allocation, before exploring the impact on informed consent from the introduction of black box machine learning (...)
  • Revisiting the ought implies can dictum in light of disruptive medical innovation. Michiel De Proost & Seppe Segers - 2024 - Journal of Medical Ethics 50 (7):466-470.
    It is a dominant dictum in ethics that ‘ought implies can’ (OIC): if an agent morally ought to do an action, the agent must be capable of performing that action. Yet, with current technological developments, such as in direct-to-consumer genomics, big data analytics and wearable technologies, there may be reasons to reorient this ethical principle. It is our modest aim in this article to explore how the current wave of allegedly disruptive innovation calls for renewed interest in this dictum. (...)
  • Hammer or Measuring Tape? Artificial Intelligence and Justice in Healthcare. Jan-Hendrik Heinrichs - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):311-322.
    Artificial intelligence (AI) is a powerful tool for several healthcare tasks. AI tools are suited to optimize predictive models in medicine. Ethical debates about AI’s extension of the predictive power of medical models suggest a need to adapt core principles of medical ethics. This article demonstrates that a popular interpretation of the principle of justice in healthcare needs amendment given the effect of AI on decision-making. The procedural approach to justice, exemplified with Norman Daniels and James Sabin’s accountability for reasonableness (...)
  • Disrupting medical necessity: Setting an old medical ethics theme in new light. Seppe Segers & Michiel De Proost - 2023 - Clinical Ethics 18 (3):335-342.
    Recent medical innovations like ‘omics’ technologies, mobile health (mHealth) applications or telemedicine are perceived as part of a shift towards a more preventive, participatory and affordable healthcare model. These innovations are often regarded as ‘disruptive technologies’. It is a topic of debate to what extent these technologies may transform the medical enterprise, and relatedly, what this means for medical ethics. The question of whether these developments disrupt established ethical principles like respect for autonomy has indeed received increasing normative attention during (...)
  • What you believe you want, may not be what the algorithm knows. Seppe Segers - 2023 - Journal of Medical Ethics 49 (3):177-178.
    Tensions between respect for autonomy and paternalism loom large in Ferrario et al's discussion of artificial intelligence (AI)-based preference predictors. To be sure, their analysis (rightfully) brings out the moral matter of respecting patient preferences. My point here, however, is that their consideration of AI-based preference predictors in treatment of incapacitated patients opens more fundamental moral questions about the desirability of over-ruling considered patient preferences, not only if these are disclosed by surrogates, but possibly also in treating competent patients. (...)
  • Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept. Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...) [A minimal illustrative sketch of the fuzzy cognitive map mechanism follows this entry.]
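    The Meier et al. proof of concept cited above rests on fuzzy cognitive maps (FCMs), in which ethically relevant case features and the four prima-facie principles are represented as concept nodes whose activation propagates over weighted edges until the map settles. The sketch below illustrates only that generic FCM update mechanism, not the authors' trained system: the concept names, edge weights, steepness parameter and iteration count are all invented placeholders.

    ```python
    # Generic fuzzy cognitive map (FCM) iteration, sketched to illustrate the kind of
    # advisory mechanism described in Meier et al. (2022). All concepts and weights
    # below are hypothetical placeholders, not values from the published model.
    import numpy as np

    concepts = ["patient_refuses", "prognosis_poor", "beneficence",
                "non_maleficence", "autonomy", "justice", "recommend_treatment"]
    n = len(concepts)

    # W[i, j] in [-1, 1]: how strongly activation of concept i pushes concept j.
    W = np.zeros((n, n))
    W[0, 4] = 0.9    # an explicit refusal strongly activates the autonomy principle
    W[1, 2] = -0.6   # a poor prognosis weakens the beneficence case for treating
    W[2, 6] = 0.7    # beneficence pushes toward recommending treatment
    W[4, 6] = -0.8   # autonomy (respecting the refusal) pushes against it

    def step(state, W, lam=2.0):
        """One FCM update: sum incoming influences plus the node's own memory,
        then squash the result into (0, 1) with a sigmoid."""
        return 1.0 / (1.0 + np.exp(-lam * (state @ W + state)))

    state = np.array([1.0, 1.0, 0.5, 0.5, 0.5, 0.5, 0.5])  # activations for one hypothetical case
    for _ in range(25):                                      # iterate toward a fixed point
        state = step(state, W)

    for name, value in zip(concepts, state):
        print(f"{name}: {value:.2f}")
    ```

    In the published system, machine learning adjusts the edge weights against training cases rather than hand-coding them; the hand-picked values here serve only to make the propagation visible.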
  • The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory. Sabine Salloch & Nils B. Heyen - 2021 - BMC Medical Ethics 22 (1):1-9.
    Background: Machine learning-based clinical decision support systems (ML_CDSS) are increasingly employed in various sectors of health care aiming at supporting clinicians’ practice by matching the characteristics of individual patients with a computerised clinical knowledge base. Some studies even indicate that ML_CDSS may surpass physicians’ competencies regarding specific isolated tasks. From an ethical perspective, however, the usage of ML_CDSS in medical practice touches on a range of fundamental normative issues. This article aims to add to the ethical discussion by using professionalisation theory (...)
  • Is Health-Related Digital Autonomy Setting the Autonomy Bar Too High? Stephanie K. Slack - 2021 - American Journal of Bioethics 21 (7):40-42.
    Laacke et al. argue that an extended concept of patient autonomy—Health-Related Digital Autonomy —is required to address the autonomy-related ethical challenges associated with the pot...
  • Non-empirical methods for ethics research on digital technologies in medicine, health care and public health: a systematic journal review. Frank Ursin, Regina Müller, Florian Funer, Wenke Liedtke, David Renz, Svenja Wiertz & Robert Ranisch - 2024 - Medicine, Health Care and Philosophy 27 (4):513-528.
    Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the “empirical turn” in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased (...)
  • Conceptualising and regulating all neural data from consumer-directed devices as medical data: more scope for an unnecessary expansion of medical influence? Brad Partridge & Susan Dodds - 2023 - Ethics and Information Technology 25 (4):1-8.
    Neurodevices that collect neural (or brain activity) data have been characterised as having the ability to register the inner workings of human mentality. There are concerns that the proliferation of such devices in the consumer-directed realm may result in the mass processing and commercialisation of neural data (as has been the case with social media data) and even threaten the mental privacy of individuals. To prevent this, some argue that all raw neural data should be conceptualised and regulated as “medical (...)
  • The Coercive Potential of Digital Mental Health. Isobel Butorac & Adrian Carter - 2021 - American Journal of Bioethics 21 (7):28-30.
    Digital mental health can be understood as the in situ quantification of an individual’s data from personal devices to measure human behavior in both health and disease (Huckvale, Venkatesh and Chr...
  • AIDD, Autonomy, and Military Ethics. Sally J. Scholz - 2021 - American Journal of Bioethics 21 (7):1-3.
    In “Artificial Intelligence, Social Media and Depression,” Laacke and colleagues consider the ethical implications of artificial intelligence depression detector tools to assist pract...
  • Four Stages in Social Media Network Analysis—Building Blocks for Health-Related Digital Autonomy in Artificial Intelligence, Social Media, and Depression. Carol G. Gu, Elizabeth Lerner Papautsky, Andrew D. Boyd & John Zulueta - 2021 - American Journal of Bioethics 21 (7):38-40.
    The authors of the concept Health-Related Digital Autonomy have laid the first building block to examine the interactions between artificial intelligence, social media, and depression f...
  • 'You have to put a lot of trust in me': autonomy, trust, and trustworthiness in the context of mobile apps for mental health. Regina Müller, Nadia Primc & Eva Kuhn - 2023 - Medicine, Health Care and Philosophy 26 (3):313-324.
    Trust and trustworthiness are essential for good healthcare, especially in mental healthcare. New technologies, such as mobile health apps, can affect trust relationships. In mental health, some apps need the trust of their users for therapeutic efficacy and explicitly ask for it, for example, through an avatar. If an artificial character in an app delivers healthcare, the following questions arise: To whom does the user direct their trust? And when, if at all, can an avatar be considered trustworthy? Our (...)
  • Consultation with Doctor Twitter: Consent Fatigue, and the Role of Developers in Digital Medical Ethics. Robert Ranisch - 2021 - American Journal of Bioethics 21 (7):24-25.
    Laacke et al. investigate the ethical implications of possible artificial intelligence systems that automatically detect signs of depression by analyzing data from social media. The art...
  • Health-Related Digital Autonomy. A Response to the Commentaries. Sebastian Laacke, Regina Müller, Georg Schomerus & Sabine Salloch - 2021 - American Journal of Bioethics 21 (10):W1-W5.
    The COVID-19 pandemic has been a threat to both physical and mental health. The spreading disease and its impacts, the containment measures and the way all of our lives have dramatically changed ha...
  • Artificial Intelligence, Social Media, and Suicide Prevention: Principle of Beneficence Besides Respect for Autonomy. Hui Zhang, Yuming Wang, Zhenxiang Zhang, Fangxia Guan, Hongmei Zhang & Zhiping Guo - 2021 - American Journal of Bioethics 21 (7):43-45.
    The target article by Laacke et al. focuses on the specific context of identifying people in social media with a high risk of depression by using artificial intelligence technologies. I...
  • A New Type of 'Greenwashing'? Social Media Companies Predicting Depression and Other Mental Illnesses. Daniel D’Hotman & Jesse Schnall - 2021 - American Journal of Bioethics 21 (7):36-38.
    Laacke et al. describe the emergence of novel analytical tools—artificial intelligence depression detectors —that employ artificial intelligence to predict depression. The authors the...
  • Error, Reliability and Health-Related Digital Autonomy in AI Diagnoses of Social Media Analysis. Ramón Alvarado & Nicolae Morar - 2021 - American Journal of Bioethics 21 (7):26-28.
    The rapid expansion of computational tools and of data science methods in healthcare has, undoubtedly, raised a whole new set of bioethical challenges. As Laacke and colleagues rightly note,...
  • Black Boxes and Bias in AI Challenge Autonomy. Craig M. Klugman - 2021 - American Journal of Bioethics 21 (7):33-35.
    In “Artificial Intelligence, Social Media and Depression: A New Concept of Health-Related Digital Autonomy,” Laacke and colleagues posit a revised model of autonomy when using digital algori...
  • The Right to Contest AI Profiling Based on Social Media Data. Thomas Ploug & Søren Holm - 2021 - American Journal of Bioethics 21 (7):21-23.
    Artificial Intelligence systems—and in particular various types of machine learning models—have significant potential for improving the performance and effectiveness of diagnostics and treatme...
  • Health-Related Digital Autonomy: An Important, But Unfinished Step. Taimur Kouser & Jeff Ward - 2021 - American Journal of Bioethics 21 (7):31-33.
    A mark of our modern age is the translation of the non-digital to the digital, an evolution likely only to accelerate and to demand the proactive development of robust ethical guidance to navigate...
  • AI, Suicide Prevention and the Limits of Beneficence. Bert Heinrichs & Aurélie Halsband - 2022 - Philosophy and Technology 35 (4):1-18.
    In this paper, we address the question of whether AI should be used for suicide prevention on social media data. We focus on algorithms that can identify persons with suicidal ideation based on their postings on social media platforms and investigate whether private companies like Facebook are justified in using these. To find out if that is the case, we start by providing two examples of AI-based means of suicide prevention in social media. Subsequently, we frame suicide prevention as an (...)
  • Good Proctor or “Big Brother”? Ethics of Online Exam Supervision Technologies. Simon Coghlan, Tim Miller & Jeannie Paterson - 2021 - Philosophy and Technology 34 (4):1581-1606.
    Online exam supervision technologies have recently generated significant controversy and concern. Their use is now booming due to growing demand for online courses and for off-campus assessment options amid COVID-19 lockdowns. Online proctoring technologies purport to effectively oversee students sitting online exams by using artificial intelligence systems supplemented by human invigilators. Such technologies have alarmed some students who see them as a “Big Brother-like” threat to liberty and privacy, and as potentially unfair and discriminatory. However, some universities and educators defend (...)