AI and Society:1-20 (forthcoming)
Abstract
This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma: they generate interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. The article then illustrates the trilemma using a recently proposed 'general ethical dilemma analyzer,' GenEth. Finally, it draws on empirical evidence to argue that human beings resolve the semantic trilemma through general cognitive and motivational processes involving 'mental time-travel,' whereby we simulate different possible pasts and futures. I show how mental time-travel psychology leads us to resolve the semantic trilemma through a six-step process of interpersonal negotiation and renegotiation, and conclude by arguing that comparative advantages in processing power would plausibly enable AI to use similar processes to resolve the semantic trilemma more reliably than we do, leading AI to make better moral-semantic choices than humans do by our very own lights.
Archival history
First archival date: 2018-11-04
Latest version: 2 (2021-08-17)