Abstract
This article argues that existing approaches to programming ethical AI fail to resolve a serious moral-semantic trilemma: they generate interpretations of ethical requirements that are either too semantically strict, too semantically flexible, or overly unpredictable. The article then illustrates the trilemma using a recently proposed ‘general ethical dilemma analyzer,’ GenEth. It then draws on empirical evidence to argue that human beings resolve the semantic trilemma using general cognitive and motivational processes involving ‘mental time-travel,’ whereby we simulate different possible pasts and futures. I show how mental time-travel psychology leads us to resolve the semantic trilemma through a six-step process of interpersonal negotiation and renegotiation, and conclude by arguing that comparative advantages in processing power would plausibly enable AI to use similar processes to solve the semantic trilemma more reliably than we do, leading AI to make better moral-semantic choices than humans by our very own lights.