What’s wrong with Automated Influence

Canadian Journal of Philosophy: 1–24 (forthcoming)
Abstract
Automated Influence is the use of AI to collect, integrate and analyse people's data in order to deliver targeted interventions that shape their behaviour. We consider three central objections against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case how a structural version of that objection has more purchase than its interactional counterpart. By rejecting the interactional focus of 'AI Ethics', in favour of a more structural, political philosophy of AI, we show that the real problem with Automated Influence is the crisis of legitimacy that it precipitates.
PhilPapers/Archive ID
BENWWW-3