An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance

In Vincent C. Müller (ed.), Philosophy and Theory of AI. Springer Cham. pp. 119-135 (2021)

Abstract

The “Value Alignment Problem” is the challenge of aligning the values of artificial intelligence with human values, whatever they may be, so that AI does not pose an existential risk to humans. Existing approaches appear to conceive of the problem as “how do we ensure that AI solves the problem in the right way”, to avoid the possibility of an AI turning humans into paperclips in order to “make more paperclips”, or eradicating the human race to “solve climate change”. This paper proposes an approach to Alignment rooted in the Enactive theory of mind that reconceptualises the problem as “how do we make relevant to AI what is relevant to humans”. This conceptualisation is supported by a discussion of 4E cognition, which suggests that the Alignment problem and the Frame problem are the same problem. The paper concludes with a discussion of the trade-offs of the different approaches to value alignment.

Author's Profile

Michael Cannon
Eindhoven University of Technology
