Responsibility Internalism and Responsibility for AI

Dissertation, Syracuse University (2023)

Abstract

I argue for responsibility internalism: the view that moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does, but that this isn't morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility (or consequences) condition. I argue that causal responsibility is irrelevant to moral responsibility, and that the control and epistemic conditions depend only on factors internal to agents. Moreover, since what AI does is at best a consequence of our actions, and the consequences of our actions are irrelevant to our responsibility, no one is responsible for what AI does. That is, the so-called responsibility gap exists. However, this gap is not morally worrisome in a way that counts against developing or using AI. First, I argue that current AI does not generate any new kind of concern about responsibility that older technologies do not already raise. Second, I argue that the responsibility gap is not worrisome because neither the gap itself nor my argument for its existence entails that no one can be justly punished or held accountable, or that no one incurs duties of reparation, when AI causes harm.

Author's Profile

Huzeyfe Demirtas
Chapman University
