Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy

Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019) (2020)
Abstract
This paper analyses how two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this tension. It can serve as a guide for health care providers and other stakeholders on how to anticipate, and in some cases mitigate, ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map of what needs to be done to achieve an acceptable level of explainability in an ML algorithm when it is used in a healthcare context.
PhilPapers/Archive ID: KLICOU
Archival date: 2020-09-23