Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains

Journal of Medical Ethics 49 (8):547-548 (2023)

Abstract

Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prescription drug monitoring programmes (PDMPs) to guide their decision making when prescribing opioids. This can create a conflict: a PDMP score may indicate that a patient is at high risk of opioid abuse while the patient expressly reports otherwise. The prescriber is then left to weigh the PDMP score against the credibility and trustworthiness of the patient. Pozzi [1] argues that a prescriber who downgrades the credibility of a patient's testimony on the basis of a high-risk PDMP score is epistemically and morally unjustified and contributes to a form of testimonial injustice. This results in patients being silenced, excluded from decision-making processes and subjected to structural injustices. Additionally, the use of ML systems in medical practice raises concerns about perpetuating existing inequalities, overestimating the systems' capabilities and displacing human authority. However, almost the very same critiques apply to human-based systems. Formalisation, ML systems included, should instead be viewed positively [2], and precisely as a powerful means to begin eroding these and other problems in ethically sensitive domains. In this case, the epistemic virtues of formalisation include promoting transparency, consistency and replicability in decision making. Rigorous ML systems can also help ensure that models …


Added to PP
2023-05-09