AI Wellbeing

Asian Journal of Philosophy (forthcoming)

Abstract

Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing artificial systems have wellbeing. Along the way, we argue that there are good reasons to believe that artificial systems can have wellbeing even if they are not phenomenally conscious. While we do not claim to demonstrate conclusively that AI systems have wellbeing, we argue that there is a significant probability that some AI systems have or will soon have wellbeing, and that this should lead us to reassess our relationship with the intelligent systems we create.

Author Profiles

Simon Goldstein
University of Hong Kong
Cameron Domenico Kirk-Giannini
Rutgers University - Newark
