Abstract
AI-enabled predictive analytics is widely deployed in clinical care settings for healthcare monitoring, diagnostics and risk management. The technology may offer valuable insights into individual and population health patterns, trends and outcomes. Predictive analytics may, however, also tangibly affect individual patient privacy and the right thereto. On the one hand, predictive analytics may undermine a patient’s state of privacy by constructing or modifying their health identity independently of the patient themselves. On the other hand, the use of predictive analytics may violate the patient’s right to privacy if the patient has no control over the use or output of the technology. These repercussions ultimately erode patient autonomy and agency. This paper discusses these implications in further detail and proposes measures for their mitigation. These include the incorporation of accuracy-enhancing statistical models and methods into AI systems, more privacy-conscious institutional policies and practices, and effective choice for patients to accept or refuse diagnostics and treatment that draw on AI-enabled predictive analytics.