The Testimony Gap: Machines and Reasons

Minds and Machines 35 (1):1-16 (2025)

Abstract

Most people who have considered the matter have concluded that machines cannot be moral agents. Responsibility for acting on the outputs of machines must always rest with a human being. A key problem for the ethical use of AI, then, is to ensure that it does not block the attribution of responsibility to humans or lead to individuals being unfairly held responsible for things over which they had no control. This is the “responsibility gap”. In this paper, we argue that the claim that machines cannot be held responsible for their actions has unacknowledged implications for the conditions under which the outputs of AI can serve as reasons for belief. Following Robert Brandom, we argue that, because the assertion of a claim is an action, moral agency is a necessary condition for the giving and evaluating of reasons in discourse. Thus, the same considerations that suggest that machines cannot be held responsible for their actions suggest that they cannot be held to account for the epistemic value — or lack of value — of their outputs. If there is a responsibility gap, there is also a “testimony gap.” An under-recognised problem with the use of AI, then, is to ensure that it does not block the attribution of testimony to human beings or lead to individuals being held responsible for claims that they have not asserted. More generally, the “assertions” of machines are only capable of serving as justifications for belief or action where one or more people accept responsibility for them.

Author Profiles

Robert Sparrow
Monash University
