Human Goals Are Constitutive of Agency in Artificial Intelligence

Philosophy and Technology 34 (4):1731-1750 (2021)

Abstract

The question of whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency must be teleological and, in particular, must consider the role of human goals if it is to adequately address the issue of responsibility. I defend the view that while AI systems can be regarded as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated into their design and are, as such, dependent on human agents. As a consequence, AI systems cannot be held morally responsible, and responsibility attributions should take into account the normative and social aspects involved in the design and deployment of the AI in question. My argument is in line with approaches critical of attributing moral agency to artificial agents, but it draws on the philosophy of action, highlighting further philosophical underpinnings of current debates on artificial agency.

Author's Profile

Elena Popa
Jagiellonian University
