Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?

Bioethics 36 (2):143-153 (2022)

Abstract

Recent years have witnessed intensive efforts to specify the requirements that ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, which we bundle together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because applying general ethical frameworks to clinical decision-making raises conceptual questions: Is explicability a free-standing principle? Is it already covered by the four well-established bioethical principles? Or is it an independent value that needs to be recognized as such in medical practice? We discuss these questions in a conceptual-ethical analysis that builds upon the findings of an empirical document analysis. Using the medical specialty of radiology as an example, we analyze the positions of radiological associations on the ethical use of medical AI. We address three questions: Are there references to explicability or a similar concept? What are the reasons for such inclusion? And which ethical concepts are referred to?

Author's Profile

Cristian Timmermann
Universität Augsburg
