  • Healthy Mistrust: Medical Black Box Algorithms, Epistemic Authority, and Preemptionism. Andreas Wolkenstein - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    In the ethics of algorithms, a specifically epistemological analysis is rarely undertaken to develop a critique (or a defense) of the handling of, or trust in, medical black box algorithms (BBAs). This article aims to begin to fill this research gap. Specifically, it examines the thesis that such algorithms should be regarded as epistemic authorities (EAs) and that the results of a medical algorithm must completely replace patients’ other convictions (preemptionism). If this were true, it (...)
  • Machine learning models, trusted research environments and UK health data: ensuring a safe and beneficial future for AI development in healthcare. Charalampia Kerasidou, Maeve Malone, Angela Daly & Francesco Tava - 2023 - Journal of Medical Ethics 49 (12):838-843.
    Digitalisation of health and the use of health data in artificial intelligence and machine learning (ML), including for applications that will in turn be used in healthcare, are major themes permeating current UK and other countries’ healthcare systems and policies. Obtaining rich and representative data is key for robust ML development, and UK health data sets are particularly attractive sources for this. However, ensuring that such research and development is in the public interest, produces public benefit and preserves privacy (...)
  • A Justifiable Investment in AI for Healthcare: Aligning Ambition with Reality. Kassandra Karpathakis, Jessica Morley & Luciano Floridi - 2024 - Minds and Machines 34 (4):1-40.
    Healthcare systems are grappling with critical challenges, including chronic diseases in aging populations, unprecedented healthcare staffing shortages and turnover, scarce resources, unprecedented demands and wait times, escalating healthcare expenditure, and declining health outcomes. As a result, policymakers and healthcare executives are investing in artificial intelligence (AI) solutions to increase operational efficiency, lower healthcare costs, and improve patient care. However, the current level of investment in developing healthcare AI among members of the Global Digital Health Partnership does not seem to (...)
  • Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI. Stephanie Sheir, Arianna Manzini, Helen Smith & Jonathan Ives - forthcoming - AI and Society:1-14.
    Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology’s trustworthiness, a developer’s trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for ‘the good’, illustrating the moral heights to which technologies and developers ought to aspire, at (...)
  • “I don’t think people are ready to trust these algorithms at face value”: trust and the use of machine learning algorithms in the diagnosis of rare disease. Angeliki Kerasidou, Christoffer Nellåker, Aurelia Sauerbrei, Shirlene Badger & Nina Hallowell - 2022 - BMC Medical Ethics 23 (1):1-14.
    Background: As the use of AI becomes more pervasive, and computerised systems are used in clinical decision-making, the role of trust in, and the trustworthiness of, AI tools will need to be addressed. Using the case of computational phenotyping to support the diagnosis of rare disease in dysmorphology, this paper explores under what conditions we could place trust in medical AI tools, which employ machine learning. Methods: Semi-structured qualitative interviews with stakeholders who design and/or work with computational phenotyping systems. The method of constant (...)
  • Encompassing trust in medical AI from the perspective of medical students: a quantitative comparative study. Anamaria Malešević, Mária Kolesárová & Anto Čartolovni - 2024 - BMC Medical Ethics 25 (1):1-11.
    In the years to come, artificial intelligence will become an indispensable tool in medical practice. The digital transformation will undoubtedly affect today’s medical students. This study focuses on trust from the perspective of three groups of medical students - students from Croatia, students from Slovakia, and international students studying in Slovakia. A paper-pen survey was conducted using a non-probabilistic convenience sample. In the second half of 2022, 1715 students were surveyed at five faculties in Croatia and three in Slovakia. Specifically, (...)
  • Data-driven research and healthcare: public trust, data governance and the NHS. [REVIEW] Charalampia Kerasidou & Angeliki Kerasidou - 2023 - BMC Medical Ethics 24 (1):1-9.
    It is widely acknowledged that trust plays an important role for the acceptability of data sharing practices in research and healthcare, and for the adoption of new health technologies such as AI. Yet there is reported distrust in this domain. Although in the UK the NHS is one of the most trusted public institutions, public trust does not appear to accompany its data sharing practices for research and innovation, specifically with the private sector, which have been introduced in recent years. (...)
  • Non-empirical methods for ethics research on digital technologies in medicine, health care and public health: a systematic journal review. Frank Ursin, Regina Müller, Florian Funer, Wenke Liedtke, David Renz, Svenja Wiertz & Robert Ranisch - forthcoming - Medicine, Health Care and Philosophy:1-16.
    Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the “empirical turn” in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased (...)
  • No Agent in the Machine: Being Trustworthy and Responsible about AI. Niël Henk Conradie & Saskia K. Nagel - 2024 - Philosophy and Technology 37 (2):1-24.
    Many recent AI policies have been structured under labels that follow a particular trend: national or international guidelines, policies or regulations, such as the EU’s and USA’s ‘Trustworthy AI’ and China’s and India’s adoption of ‘Responsible AI’, use a label that follows the recipe of [agentially loaded notion + ‘AI’]. A result of this branding, even if implicit, is to encourage the application by laypeople of these agentially loaded notions to the AI technologies themselves. Yet, these notions are appropriate only (...)