Abstract
An AI-based “patient preference predictor” (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has so far been largely overlooked in debates about the PPP, namely the distinction between algorithmic prediction and decision-making, and argue that much of the recent philosophical disagreement stems from this oversight. I show that three prominent objections to the PPP challenge only its use as the sole determinant of a choice, and in fact support its use as a source of evidence about patient preferences to inform human decision-making. The upshot is that we should adopt the evidential conception of the PPP and shift our evaluation of this technology towards the ethics of algorithmic prediction rather than decision-making.