  • The linguistic dead zone of value-aligned agency, natural and artificial. Travis LaCroix - 2024 - Philosophical Studies: 1-23.
    The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to design robustly beneficial or ethical artificial agents.
  • Automation, Alignment, and the Cooperative Interface. Julian David Jonker - 2024 - The Journal of Ethics 28 (3): 483-504.
    The paper demonstrates that social alignment is distinct from value alignment as it is currently understood in the AI safety literature, and argues that social alignment is an important research agenda. Work provides an important example for the argument, since work is a cooperative endeavor, and it is part of the larger manifold of social cooperation. These cooperative aspects of work are individually and socially valuable, and so they must be given a central place when evaluating the impact of AI (...)
  • Risk-averse autonomous systems: A brief history and recent developments from the perspective of optimal control. Yuheng Wang & Margaret P. Chapman - 2022 - Artificial Intelligence 311 (C): 103743.