Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures

Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1 (2019)

Abstract

For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.

Author's Profile

Daniel Susser
Cornell University
