Citations
  • Evidence, ethics and the promise of artificial intelligence in psychiatry. Melissa McCradden, Katrina Hui & Daniel Z. Buchman - 2023 - Journal of Medical Ethics 49 (8):573-579.
    Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine’s understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential (...)
  • Patient participation in Dutch ethics support: practice, ideals, challenges and recommendations—a national survey. Marleen Eijkholt, Janine de Snoo-Trimp, Wieke Ligtenberg & Bert Molewijk - 2022 - BMC Medical Ethics 23 (1):1-14.
    Background: Patient participation in clinical ethics support services has been marked as an important issue. There seems to be a wide variety of practices globally, but extensive theoretical or empirical studies on the matter are missing. Scarce publications indicate that, in Europe, patient participation in CESS varies from region to region, and per type of support. Practices vary from being non-existent, to patients being a full conversation partner. This contrasts with North America, where PP seems more or less standard. While (...)
  • Artificial intelligence for good health: a scoping review of the ethics literature. Jennifer Gibson, Vincci Lui, Nakul Malhotra, Jia Ce Cai, Neha Malhotra, Donald J. Willison, Ross Upshur, Erica Di Ruggiero & Kathleen Murphy - 2021 - BMC Medical Ethics 22 (1):1-17.
    Background: Artificial intelligence has been described as the “fourth industrial revolution” with transformative and global implications, including in healthcare, public health, and global health. AI approaches hold promise for improving health systems worldwide, as well as individual and population health outcomes. While AI may have potential for advancing health equity within and between countries, we must consider the ethical implications of its deployment in order to mitigate its potential harms, particularly for the most vulnerable. This scoping review addresses the following question: (...)
  • Artificial Intelligence and Patient-Centered Decision-Making. Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • On the ethics of algorithmic decision-making in healthcare. Thomas Grote & Philipp Berens - 2020 - Journal of Medical Ethics 46 (3):205-211.
    In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical (...)
  • Principles alone cannot guarantee ethical AI. Brent Mittelstadt - 2019 - Nature Machine Intelligence 1 (11):501-507.
  • Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons. Sabine Salloch, Tim Kacprowski, Wolf-Tilo Balke, Frank Ursin & Lasse Benzinger - 2023 - BMC Medical Ethics 24 (1):1-9.
    Background: Healthcare providers have to make ethically complex clinical decisions which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. Methods: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title and abstract (...)
  • Operationalising AI ethics: barriers, enablers and next steps. Jessica Morley, Libby Kinsey, Anat Elhalal, Francesca Garcia, Marta Ziosi & Luciano Floridi - 2023 - AI and Society 38 (1):411-423.
    By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology (...)
  • Primer on an ethics of AI-based decision support systems in the clinic. Matthias Braun, Patrik Hummel, Susanne Beck & Peter Dabrock - 2021 - Journal of Medical Ethics 47 (12):3-3.
    Making good decisions in extremely complex and difficult processes and situations has always been both a key task as well as a challenge in the clinic and has led to a large amount of clinical, legal and ethical routines, protocols and reflections in order to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time as well as economic constraints and not least further endeavours as well as achievements in medicine and healthcare (...)
  • Adherence, shared decision-making and patient autonomy. Lars Sandman, Bradi B. Granger, Inger Ekman & Christian Munthe - 2012 - Medicine, Health Care and Philosophy 15 (2):115-127.
    In recent years the formerly quite strong interest in patient compliance has been questioned for being too paternalistic and oriented towards overly narrow biomedical goals as the basis for treatment recommendations. In line with this there has been a shift towards using the notion of adherence to signal an increased weight for patients’ preferences and autonomy in decision making around treatments. This ‘adherence-paradigm’ thus encompasses shared decision-making as an ideal and patient perspective and autonomy as guiding goals of care. What (...)
  • “Many roads lead to Rome and the Artificial Intelligence only shows me one road”: an interview study on physician attitudes regarding the implementation of computerised clinical decision support systems. Sigrid Sterckx, Tamara Leune, Johan Decruyenaere, Wim Van Biesen & Daan Van Cauwenberge - 2022 - BMC Medical Ethics 23 (1):1-14.
    Research regarding the drivers of acceptance of clinical decision support systems by physicians is still rather limited. The literature that does exist, however, tends to focus on problems regarding the user-friendliness of CDSS. We have performed a thematic analysis of 24 interviews with physicians concerning specific clinical case vignettes, in order to explore their underlying opinions and attitudes regarding the introduction of CDSS in clinical practice, to allow a more in-depth analysis of factors underlying acceptance of CDSS. We identified three (...)
  • The importance of values in evidence-based medicine. Michael P. Kelly, Iona Heath, Jeremy Howick & Trisha Greenhalgh - 2015 - BMC Medical Ethics 16 (1):69.
    Evidence-based medicine has always required integration of patient values with ‘best’ clinical evidence. It is widely recognized that scientific practices and discoveries, including those of EBM, are value-laden. But to date, the science of EBM has focused primarily on methods for reducing bias in the evidence, while the role of values in the different aspects of the EBM process has been almost completely ignored.
  • Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted with birth parents (...)
  • Towards an empirical ethics in care: relations with technologies in health care. Jeannette Pols - 2015 - Medicine, Health Care and Philosophy 18 (1):81-90.
    This paper describes the approach of empirical ethics, a form of ethics that integrates non-positivist ethnographic empirical research and philosophy. Empirical ethics as it is discussed here builds on the ‘empirical turn’ in epistemology. It radicalizes the relational approach that care ethics introduced to think about care between people by drawing in relations between people and technologies as things people relate to. Empirical ethics studies care practices by analysing their intra-normativity, or the ways of living together the actors within these (...)
  • In principle obstacles for empathic AI: why we can’t replace human empathy in healthcare. Carlos Montemayor, Jodi Halpern & Abrol Fairweather - 2022 - AI and Society 37 (4):1353-1359.
    What are the limits of the use of artificial intelligence (AI) in the relational aspects of medical and nursing care? There has been a lot of recent work and applications showing the promise and efficiency of AI in clinical medicine, both at the research and treatment levels. Many of the obstacles discussed in the literature are technical in character, regarding how to improve and optimize current practices in clinical medicine and also how to develop better data bases for optimal parameter (...)
  • “I don’t think people are ready to trust these algorithms at face value”: trust and the use of machine learning algorithms in the diagnosis of rare disease. Angeliki Kerasidou, Christoffer Nellåker, Aurelia Sauerbrei, Shirlene Badger & Nina Hallowell - 2022 - BMC Medical Ethics 23 (1):1-14.
    Background: As the use of AI becomes more pervasive, and computerised systems are used in clinical decision-making, the role of trust in, and the trustworthiness of, AI tools will need to be addressed. Using the case of computational phenotyping to support the diagnosis of rare disease in dysmorphology, this paper explores under what conditions we could place trust in medical AI tools, which employ machine learning. Methods: Semi-structured qualitative interviews with stakeholders who design and/or work with computational phenotyping systems. The method of constant (...)