Artificial Intelligence and Patient-Centered Decision-Making

Philosophy and Technology 34 (2):349-371 (2020)
Abstract
Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in accuracy, reliability, and knowledge. If so, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, owing to their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.
Reprint years: 2021
PhilPapers/Archive ID: BJEAIA
First archival date: 2020-01-02
Latest version: 2 (2020-01-02)
