Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures

Abstract
For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions, what behavioral economists call our choice architectures, are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings, the choice architectures, to be tailored to the individual chooser. They are constructed from information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us: effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
