Citations
  • Principles alone cannot guarantee ethical AI.Brent Mittelstadt - 2019 - Nature Machine Intelligence 1 (11):501-507.
  • Two faces of responsibility.Gary Watson - 1996 - Philosophical Topics 24 (2):227-248.
  • Dermatologist-level classification of skin cancer with deep neural networks.Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau & Sebastian Thrun - 2017 - Nature 542 (7639):115-118.
  • The global landscape of AI ethics guidelines.Anna Jobin, Marcello Ienca & Effy Vayena - 2019 - Nature Machine Intelligence 1 (9):389-399.
  • First-person disavowals of digital phenotyping and epistemic injustice in psychiatry.Stephanie K. Slack & Linda Barclay - 2023 - Medicine, Health Care and Philosophy 26 (4):605-614.
    Digital phenotyping will potentially enable earlier detection and prediction of mental illness by monitoring human interaction with and through digital devices. Notwithstanding its promises, it is certain that a person’s digital phenotype will at times be at odds with their first-person testimony of their psychological states. In this paper, we argue that there are features of digital phenotyping in the context of psychiatry which have the potential to exacerbate the tendency to dismiss patients’ testimony and treatment preferences, which can be (...)
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine.Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)
  • Reflection Machines: Supporting Effective Human Oversight Over Medical Decision Support Systems.Pim Haselager, Hanna Schraffenberger, Serge Thill, Simon Fischer, Pablo Lanillos, Sebastiaan van de Groes & Miranda van Hooff - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):380-389.
    Human decisions are increasingly supported by decision support systems (DSS). Humans are required to remain “on the loop,” by monitoring and approving/rejecting machine recommendations. However, use of DSS can lead to overreliance on machines, reducing human oversight. This paper proposes “reflection machines” (RM) to increase meaningful human control. An RM provides a medical expert not with suggestions for a decision, but with questions that stimulate reflection about decisions. It can refer to data points or suggest counterarguments that are less compatible (...)
  • Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
  • Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general ethical (...)
  • A unified framework of five principles for AI in society.Luciano Floridi & Josh Cowls - 2019 - Harvard Data Science Review 1 (1).
    Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these (...)
  • E-Cigarettes and the Multiple Responsibilities of the FDA.Larisa Svirsky, Dana Howard & Micah L. Berman - 2021 - American Journal of Bioethics 22 (10):5-14.
    This paper considers the responsibilities of the FDA with regard to disseminating information about the benefits and harms of e-cigarettes. Tobacco harm reduction advocates claim that the FDA has been overcautious and has violated ethical obligations by failing to clearly communicate to the public that e-cigarettes are far less harmful than cigarettes. We argue, by contrast, that the FDA’s obligations in this arena are more complex than they may appear at first blush. Though the FDA is accountable for informing the (...)
  • Principles of Biomedical Ethics: Marking Its Fortieth Anniversary.James Childress & Tom Beauchamp - 2019 - American Journal of Bioethics 19 (11):9-12.
  • AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations.Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • Computer knows best? The need for value-flexibility in medical AI.Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (3):156-160.
    Artificial intelligence (AI) is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM’s Watson for Oncology. I argue that use of this type of system (...)
  • Meaningful Human Control over AI for Health? A Review.Eva Maria Hille, Patrik Hummel & Matthias Braun - forthcoming - Journal of Medical Ethics.
    Artificial intelligence is currently changing many areas of society. Especially in health, where critical decisions are made, questions of control must be renegotiated: who is in control when an automated system makes clinically relevant decisions? Increasingly, the concept of meaningful human control (MHC) is being invoked for this purpose. However, it is unclear exactly how this concept is to be understood in health. Through a systematic review, we present the current state of the concept of MHC in health. The results (...)
  • Testimonial injustice in medical machine learning.Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8):536-540.
    Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of responsible ML use that bears not only an ethical but also a significant epistemic dimension. I focus on ML systems’ role in mediating patient–physician relations. I thereby consider how ML systems may silence patients’ voices and relativise (...)
  • On the ethics of algorithmic decision-making in healthcare.Thomas Grote & Philipp Berens - 2020 - Journal of Medical Ethics 46 (3):205-211.
    In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical (...)
  • Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability.Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models.Torbjørn Gundersen & Kristine Bærøe - 2022 - Science and Engineering Ethics 28 (2):1-16.
    This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied (...)
  • Specifying, balancing, and interpreting bioethical principles.Henry S. Richardson - 2000 - Journal of Medicine and Philosophy 25 (3):285-307.
    The notion that it is useful to specify norms progressively in order to resolve doubts about what to do, which I developed initially in a 1990 article, has been only partly assimilated by the bioethics literature. The thought is not just that it is helpful to work with relatively specific norms. It is more than that: specification can replace deductive subsumption and balancing. Here I argue against two versions of reliance on balancing that are prominent in recent bioethical discussions. Without (...)
  • Patient Expertise and Medical Authority: Epistemic Implications for the Provider–Patient Relationship.Jamie Carlin Watson - 2024 - Journal of Medicine and Philosophy 49 (1):58-71.
    The provider–patient relationship is typically regarded as an expert-to-novice relationship, and with good reason. Providers have extensive education and experience that have developed in them the competence to treat conditions better and with fewer harms than anyone else. However, some researchers argue that many patients with long-term conditions (LTCs), such as arthritis and chronic pain, have become “experts” at managing their LTC. Unfortunately, there is no generally agreed-upon conception of “patient expertise” or what it implies for the provider–patient relationship. I (...)
  • More Process, Less Principles: The Ethics of Deploying AI and Robotics in Medicine.Amitabha Palmer & David Schwan - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (1):121-134.
    Current national and international guidelines for the ethical design and development of artificial intelligence (AI) and robotics emphasize ethical theory. Various governing and advisory bodies have generated sets of broad ethical principles, which institutional decisionmakers are encouraged to apply to particular practical decisions. Although much of this literature examines the ethics of designing and developing AI and robotics, medical institutions typically must make purchase and deployment decisions about technologies that have already been designed and developed. The primary problem facing medical (...)
  • In Defence of Principlism in AI Ethics and Governance.Elizabeth Seger - 2022 - Philosophy and Technology 35 (2):1-7.
    It is widely acknowledged that high-level AI principles are difficult to translate into practices via explicit rules and design guidelines. Consequently, many AI research and development groups that claim to adopt ethics principles have been accused of unwarranted “ethics washing”. Accordingly, there remains a question as to if and how high-level principles should be expected to influence the development of safe and beneficial AI. In this short commentary I discuss two roles high-level principles might play in AI ethics and governance. (...)
  • Primer on an ethics of AI-based decision support systems in the clinic.Matthias Braun, Patrik Hummel, Susanne Beck & Peter Dabrock - 2021 - Journal of Medical Ethics 47 (12):3-3.
    Making good decisions in extremely complex and difficult processes and situations has always been both a key task as well as a challenge in the clinic and has led to a large amount of clinical, legal and ethical routines, protocols and reflections in order to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time as well as economic constraints and not least further endeavours as well as achievements in medicine and healthcare (...)
  • Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes.Daniel Shu Wei Ting, Carol Yim-Lui Cheung, Gilbert Lim, Gavin Siew Wei Tan, Nguyen D. Quang, Alfred Gan, Haslina Hamzah, Renata Garcia-Franco, Ian Yew San Yeo, Shu Yen Lee, Edmund Yick Mun Wong, Charumathi Sabanayagam, Mani Baskaran, Farah Ibrahim, Ngiap Chuan Tan, Eric A. Finkelstein, Ecosse L. Lamoureux, Ian Y. Wong, Neil M. Bressler, Sobha Sivaprasad, Rohit Varma, Jost B. Jonas, Ming Guang He, Ching-Yu Cheng, Gemmy Chui Ming Cheung, Tin Aung, Wynne Hsu, Mong Li Lee & Tien Yin Wong - 2017 - JAMA 318 (22):2211.
  • Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts.Hendrik Kempt & Saskia K. Nagel - 2022 - Journal of Medical Ethics 48 (4):222-229.
    In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing those in a clinical context. Second, we discuss how artificial intelligence could replace the human cognitive labour of providing such second opinions, and find that several AI systems reach the levels of accuracy and efficiency needed to make clarifying their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion (...)
  • The right to refuse diagnostics and treatment planning by artificial intelligence.Thomas Ploug & Søren Holm - 2020 - Medicine, Health Care and Philosophy 23 (1):107-114.
    In an analysis of artificially intelligent systems for medical diagnostics and treatment planning we argue that patients should be able to exercise a right to withdraw from AI diagnostics and treatment planning for reasons related to (1) the physician’s role in the patients’ formation of and acting on personal preferences and values, (2) the bias and opacity problem of AI systems, and (3) rational concerns about the future societal effects of introducing AI systems in the health care sector.
  • Evidence, ethics and the promise of artificial intelligence in psychiatry.Melissa McCradden, Katrina Hui & Daniel Z. Buchman - 2023 - Journal of Medical Ethics 49 (8):573-579.
    Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine’s understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential (...)
  • A Framework to Evaluate Ethical Considerations with ML-HCA Applications—Valuable, Even Necessary, but Never Comprehensive.Danton Char, Michael Abràmoff & Chris Feudtner - 2020 - American Journal of Bioethics 20 (11):W6-W10.
    Machine learning is fundamental to multiple visions of health care’s future, from precision medicine to a model of health delivery and research...
  • Groundhog Day for Medical Artificial Intelligence.Alex John London - 2018 - Hastings Center Report 48 (3):inside back cover.
    Following a boom in investment and overinflated expectations in the 1980s, artificial intelligence entered a period of retrenchment known as the “AI winter.” With advances in the field of machine learning and the availability of large datasets for training various types of artificial neural networks, AI is in another cycle of halcyon days. Although medicine is particularly recalcitrant to change, applications of AI in health care have professionals in fields like radiology worried about the future of their careers and have (...)
  • Concordance as evidence in the Watson for Oncology decision-support system.Aaro Tupasela & Ezio Di Nucci - 2020 - AI and Society 35 (4):811-818.
    Machine learning platforms have emerged as a new promissory technology that some argue will revolutionize work practices across a broad range of professions, including medical care. During the past few years, IBM has been testing its Watson for Oncology platform at several oncology departments around the world. Published reports, news stories, as well as our own empirical research show that in some cases, the levels of concordance over recommended treatment protocols between the platform and human oncologists have been quite low. (...)