Morality First?

AI and Society 1-13 (forthcoming)

Abstract

The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes “first,” introduces a new set of challenges and highlights an important but otherwise obscured problem for e-AI developers.

Author's Profile

Nathaniel Sharadin
University of Hong Kong
