Abstract
The field of artificial general intelligence (AGI) safety is growing quickly. However, the nature of the human values with which future AGI should be aligned remains underdefined. Different AGI safety researchers have proposed different, and sometimes contradictory, theories about the nature of human values. This article presents an overview of what AGI safety researchers had written about the nature of human values up to the beginning of 2019. The work of 21 authors is surveyed, some of whom have proposed several theories. A method for classifying the theories is suggested, in which each theory is judged by its level of complexity, its position on a behaviorist-internalist scale, and its degree of generality versus human-specificity. We suggest that the multiplicity of well-supported theories means that the nature of human values is difficult to define and that some meta-level theory is needed.