Artificial Intelligence Systems, Responsibility and Agential Self-Awareness

In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany: pp. 15-25 (2022)


This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they lack the capacity for agential self-awareness, i.e. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, then the possibility of responsible artificial intelligence systems cannot be evaluated independently of research on the nature of the self. Drawing on a specific account of the self from the phenomenological tradition, the paper suggests that a minimal necessary condition artificial intelligence systems must satisfy to be capable of self-awareness is that they possess a minimal self, defined as a ‘sense of ownership’. Because this sense of ownership is usually associated with having a living body, one suggestion is that artificial intelligence systems must have similar living bodies in order to have a sense of self. Discussing cases of robotic animals as examples of how artificial intelligence systems might have a sense of self, the paper concludes that having a ‘sense of ownership’, or a sense of self, may be a necessary condition for artificial intelligence systems to bear responsibility.

Author's Profile

Lydia Farina
Nottingham University

