Artificial Intelligence Systems, Responsibility and Agential Self-Awareness

In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany, pp. 15–25 (2022)

Abstract

This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they lack the capacity for agential self-awareness, i.e. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, then the possibility of responsible artificial intelligence systems cannot be evaluated independently of research on the nature of the self. Focusing on a specific account of the self from the phenomenological tradition, the paper suggests that a minimal necessary condition artificial intelligence systems must satisfy in order to be capable of self-awareness is having a minimal self, defined as ‘a sense of ownership’. As this sense of ownership is usually associated with having a living body, one suggestion is that artificial intelligence systems must have similar living bodies in order to have a sense of self. Discussing cases of robotic animals as examples of how artificial intelligence systems might have a sense of self, the paper concludes that having a ‘sense of ownership’, or a sense of self, may be a necessary condition for artificial intelligence systems to bear responsibility.

Author's Profile

Lydia Farina
Nottingham University
