Abstract
Recent proposals for human-centered AI (HCAI) help avoid the challenging task of specifying an objective for AI systems, since HCAI systems are designed to learn the objectives of the humans they are trying to assist. We think the move to HCAI is an important innovation, but we are concerned that an instrumental, economic model of human rational agency has come to dominate HCAI research. This paper brings the philosophical debate about human rational agency into the HCAI context, showing how more substantive ways of understanding human objectives may improve HCAI. We contrast the economic approach with two more substantive modern alternatives: the realist and constructivist approaches to human rational agency. The realist approach holds that rational agents pursue what is good, while the constructivist approach holds that rational agents make autonomous choices. We illustrate each approach by considering its implications for fine-tuning large language models for chatbots such as ChatGPT.