Privacy Implications of AI-Enabled Predictive Analytics in Clinical Diagnostics, and How to Mitigate Them

Bioethica Forum (forthcoming)

Abstract

AI-enabled predictive analytics is widely deployed in clinical care settings for healthcare monitoring, diagnostics and risk management. The technology may offer valuable insights into individual and population health patterns, trends and outcomes. Predictive analytics may, however, also tangibly affect individual patient privacy and the right thereto. On the one hand, predictive analytics may undermine a patient's state of privacy by constructing or modifying their health identity independently of the patient themselves. On the other hand, the use of predictive analytics may violate the patient's right to privacy if the patient has no control over the use or output of the technology. These repercussions ultimately erode patient autonomy and agency. This paper discusses these implications in further detail and proposes possible measures for their mitigation. These include incorporating accuracy-enhancing statistical models and methods into AI systems, adopting more privacy-conscious institutional policies and practices, and giving patients an effective choice to accept or refuse diagnostics and treatment that draw on AI-enabled predictive analytics.

Author's Profile

Dessislava S. Fessenko
Harvard University
