Algorithmic Nudging: The Need for an Interdisciplinary Oversight

Topoi 42 (3):799-807 (2023)


Nudge is a popular public policy tool that harnesses well-known biases in human judgment to subtly guide people’s decisions, often to improve their choices or to achieve some socially desirable outcome. Thanks to recent developments in artificial intelligence (AI), new possibilities are emerging for how and when our decisions can be nudged. On the one hand, algorithmically personalized nudges have the potential to vastly improve human daily lives. On the other hand, blindly outsourcing the development and implementation of nudges to “black box” AI systems means that the ultimate reasons why such nudges work, that is, the underlying human cognitive processes that they harness, will often be unknown. In this paper, we unpack this concern by considering a series of examples and case studies that demonstrate how AI systems can learn to harness biases in human judgment to reach a specified goal. Drawing on an analogy from a philosophical debate concerning the methodology of economics, we call for interdisciplinary oversight of AI systems that are tasked and deployed to nudge human behaviours.

Author Profiles

Ophelia Deroy
Ludwig Maximilians Universität, München
