  • Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients? Joshua Hatherley - forthcoming - Journal of Medical Ethics.
    It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this ‘the disclosure thesis.’ Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue (...)
  • Non-empirical methods for ethics research on digital technologies in medicine, health care and public health: a systematic journal review. Frank Ursin, Regina Müller, Florian Funer, Wenke Liedtke, David Renz, Svenja Wiertz & Robert Ranisch - forthcoming - Medicine, Health Care and Philosophy:1-16.
    Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the “empirical turn” in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased (...)
  • Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although (...)
  • Perspectives of patients and clinicians on big data and AI in health: a comparative empirical investigation. Patrik Hummel, Matthias Braun, Serena Bischoff, David Samhammer, Katharina Seitz, Peter A. Fasching & Peter Dabrock - forthcoming - AI and Society:1-15.
    Background: Big data and AI applications now play a major role in many health contexts. Much research has already been conducted on ethical and social challenges associated with these technologies. Likewise, there are already some studies that investigate empirically which values and attitudes play a role in connection with their design and implementation. What is still in its infancy, however, is the comparative investigation of the perspectives of different stakeholders. Methods: To explore this issue in a multi-faceted manner, we conducted (...)
  • Why We Should Understand Conversational AI as a Tool. Marlies N. van Lingen, Noor A. A. Giesbertz, J. Peter van Tintelen & Karin R. Jongsma - 2023 - American Journal of Bioethics 23 (5):22-24.
    The introduction of chatGPT illustrates the rapid developments within Conversational Artificial Intelligence (CAI) technologies (Gordijn and Have 2023). Ethical reflection and analysis of CAI are c...
  • AI and society: a virtue ethics approach. Mirko Farina, Petr Zhdanov, Artur Karimov & Andrea Lavazza - forthcoming - AI and Society:1-14.
    Advances in artificial intelligence and robotics stand to change many aspects of our lives, including our values. If trends continue as expected, many industries will undergo automation in the near future, calling into question whether we can still value the sense of identity and security our occupations once provided us with. Likewise, the advent of social robots driven by AI appears to be shifting the meaning of numerous long-standing values associated with interpersonal relationships, like friendship. Furthermore, powerful actors’ and institutions’ (...)
  • Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors? Saskia K. Nagel, Jan-Christoph Heilinger & Hendrik Kempt - 2022 - Ethics and Information Technology 24 (2):20.
    The increased presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. To do so, we first distinguish several goods that explainability is usually considered to contribute to the use of AI in general, and medical AI in particular. Second, we propose to understand the value of (...)
  • Response to our reviewers. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (7):514.
    We would like to thank the authors of the commentaries for their critical appraisal of our feature article, Who is afraid of black box algorithms?1 Their comments, suggestions and concerns are varied, and we are glad that our article contributes to the academic debate about the ethical and epistemic conditions for medical explanatory AI. We would like to bring to attention a few issues that are common worries across reviewers. Most prominent among them are the merits of computational reliabilism, in particular when (...)
  • Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals’ preferences and concerns. Florian Funer, Wenke Liedtke, Sara Tinnemeyer, Andrea Diana Klausen, Diana Schneider, Helena U. Zacharias, Martin Langanke & Sabine Salloch - 2024 - Journal of Medical Ethics 50 (1):6-11.
    Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals’ attitudes to potential changes of responsibility and decision-making (...)
  • Agree to disagree: the symmetry of burden of proof in human–AI collaboration. Karin Rolanda Jongsma & Martin Sand - 2022 - Journal of Medical Ethics 48 (4):230-231.
    In their paper ‘Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts’, Kempt and Nagel discuss the use of medical AI systems and the resulting need for second opinions by human physicians when physicians and AI disagree, which they call the rule of disagreement (RoD).1 The authors defend RoD based on three premises: first, they argue that in cases of disagreement in medical practice, there is an increased burden of proof for the physician in (...)
  • Artificial Intelligence Needs Data: Challenges Accessing Italian Databases to Train AI. Ciara Staunton, Roberta Biasiotto, Katharina Tschigg & Deborah Mascalzoni - 2024 - Asian Bioethics Review 16 (3):423-435.
    Population biobanks are an increasingly important infrastructure to support research and will be a much-needed resource in the delivery of personalised medicine. Artificial intelligence (AI) systems can process and cross-link very large amounts of data quickly and be used not only for improving research power but also for helping with complex diagnosis and prediction of diseases based on health profiles. AI, therefore, potentially has a critical role to play in personalised medicine, and biobanks can provide a lot of the necessary (...)
  • Moral Relevance Approach for AI Ethics. Shuaishuai Fang - 2024 - Philosophies 9 (2):42.
    Artificial intelligence (AI) ethics is proposed as an emerging and interdisciplinary field concerned with addressing the ethical issues of AI, such as the issue of moral decision-making. The conflict between our intuitive moral judgments constitutes an inevitable obstacle to decision-making in AI ethics. This article outlines the Moral Relevance Approach, which could provide a considerable moral foundation for AI ethics. Taking moral relevance as the precondition of the consequentialist principles, the Moral Relevance Approach aims to plausibly consider individual moral claims. (...)
  • “I don’t think people are ready to trust these algorithms at face value”: trust and the use of machine learning algorithms in the diagnosis of rare disease. Angeliki Kerasidou, Christoffer Nellåker, Aurelia Sauerbrei, Shirlene Badger & Nina Hallowell - 2022 - BMC Medical Ethics 23 (1):1-14.
    Background: As the use of AI becomes more pervasive, and computerised systems are used in clinical decision-making, the role of trust in, and the trustworthiness of, AI tools will need to be addressed. Using the case of computational phenotyping to support the diagnosis of rare disease in dysmorphology, this paper explores under what conditions we could place trust in medical AI tools, which employ machine learning. Methods: Semi-structured qualitative interviews with stakeholders who design and/or work with computational phenotyping systems. The method of constant (...)
  • Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend that these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest that the high accuracy and reliability of the systems is sufficient for providing epistemically justified beliefs without the need to explain each individual decision. But, as we show, both solutions have limitations, and it (...)