The linguistic dead zone of value-aligned agency, natural and artificial

Philosophical Studies (2024): 1-23

Abstract

The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to design robustly beneficial or ethical artificial agents.

Author's Profile

Travis LaCroix
Durham University
