  • Standards for Belief Representations in LLMs. Daniel A. Herrmann & Benjamin A. Levinstein - 2024 - Minds and Machines 35 (1):1-25.
    As large language models (LLMs) continue to demonstrate remarkable abilities across various domains, computer scientists are developing methods to understand their cognitive processes, particularly concerning how (and if) LLMs internally represent their beliefs about the world. However, this field currently lacks a unified theoretical foundation to underpin the study of belief in LLMs. This article begins filling this gap by proposing adequacy conditions for a representation in an LLM to count as belief-like. We argue that, while the project of belief (...)
  • Black-box assisted medical decisions: AI power vs. ethical physician care. Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
  • ChatGPT, Education, and Understanding. Federica Isabella Malfatti - forthcoming - Social Epistemology.
    Is ChatGPT a good teacher? Or could it be? As understanding is widely acknowledged as one of the fundamental aims of education, the answer to these questions depends on whether ChatGPT fosters or could foster the acquisition of understanding in its users. In this paper, I tackle this issue in two steps. In the first part of the paper, I explore and analyze the set of skills and social-epistemic virtues that a teacher must exemplify to perform her job well – (...)
  • When can we Kick (Some) Humans “Out of the Loop”? An Examination of the use of AI in Medical Imaging for Lumbar Spinal Stenosis. Kathryn Muyskens, Yonghui Ma, Jerry Menikoff, James Hallinan & Julian Savulescu - 2025 - Asian Bioethics Review 17 (1):207-223.
    Artificial intelligence (AI) has attracted an increasing amount of attention, both positive and negative. Its potential applications in healthcare are indeed manifold and revolutionary, and within the realm of medical imaging and radiology (which will be the focus of this paper), significant increases in accuracy and speed, as well as significant savings in cost, stand to be gained through the adoption of this technology. Because of its novelty, a norm of keeping humans “in the loop” wherever AI mechanisms are deployed (...)
  • Why algorithmic speed can be more important than algorithmic accuracy. Jakob Mainz, Lauritz Munch, Jens Christian Bjerring & Sissel Godtfredsen - 2023 - Clinical Ethics 18 (2):161-164.
    Artificial Intelligence (AI) often outperforms human doctors in terms of decisional speed. For some diseases, the expected benefit of a fast but less accurate decision exceeds the benefit of a slow but more accurate one. In such cases, we argue, it is often justified to rely on a medical AI to maximise decision speed – even if the AI is less accurate than human doctors.
  • Response to our reviewers. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (7):514-514.
    We would like to thank the authors of the commentaries for their critical appraisal of our feature article, Who is afraid of black box algorithms? Their comments, suggestions and concerns are varied, and we are glad that our article contributes to the academic debate about the ethical and epistemic conditions for medical explanatory AI. We would like to bring to attention a few issues that are common worries across reviewers. Most prominent are the merits of computational reliabilism, in particular, when (...)
  • Trust and Explainable AI: Promises and Limitations. Sara Blanco - 2022 - Ethicomp Conference Proceedings.
  • Towards a Taxonomy of AI Risks in the Health Domain. Delaram Golpayegani, Joshua Hovsha, Leon Rossmaier, Rana Saniei & Jana Misic - 2022 - 2022 Fourth International Conference on Transdisciplinary AI (TransAI).
    The adoption of AI in the health sector has its share of benefits and harms to various stakeholder groups and entities. There are critical risks involved in using AI systems in the health domain; risks that can have severe, irreversible, and life-changing impacts on people’s lives. With the development of innovative AI-based applications in the medical and healthcare sectors, new types of risks emerge. To benefit from novel AI applications in this domain, the risks need to be managed in order (...)