Abstract
Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prediction drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can create a conflict: a PDMP Score can indicate that a patient is at high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to weigh the patient’s credibility and trustworthiness against the PDMP Score. Pozzi1 argues that a prescriber who downgrades the credibility of a patient’s testimony on the basis of a high PDMP Score is epistemically and morally unjustified and contributes to a form of testimonial injustice. This results in patients being silenced, excluded from decision-making processes and subjected to structural injustices. Additionally, the use of ML systems in medical practice raises concerns about perpetuating existing inequalities, overestimating the systems’ capabilities and displacing human authority. However, almost the very same critiques apply to human-based systems. Formalisation, ML systems included, should instead be viewed positively,2 precisely as a powerful means to begin eroding these and other problems in ethically sensitive domains. In this case, the epistemic virtues of formalisation include promoting transparency, consistency and replicability in decision making. Rigorous ML systems can also help ensure that models …