Abstract
Recently, there has been a large push for the use of artificial intelligence (AI) in medical settings. The promise of AI in medicine is considerable, but its moral implications are insufficiently examined. If AI is used in medical diagnosis and treatment, it may pose a substantial problem for informed consent. The short version of the problem is this: medical AI will likely surpass human doctors in accuracy, meaning that patients have a prudential reason to prefer treatment from an AI. However, given the black box problem, medical AI cannot explain to patients how it makes decisions, yet such an explanation seems to be required by informed consent. Thus, doing what is best for patients (treatment via AI) might be prohibited by medicine's commitment to informed consent, even if patients want to permit it. Conflicts between beneficence and autonomy are not new, but medical AI poses a novel version of this conflict: even when a patient says they want to use their autonomy to receive better care, the commitment to autonomy (via informed consent) seems to block them from doing so. Given this dilemma, should we abandon informed consent, or should we forgo medical AI? My thesis is that we can have our cake and eat it too; we can use opaque AI in clinical medicine and retain our commitment to informed consent, although doing so may require revising our understanding of informed consent. Specifically, it will require us to distinguish between two levels of consent (higher-order and first-order consent).