AI, Opacity, and Personal Autonomy

Philosophy and Technology 35 (4):1-20 (2022)
Advancements in machine learning have fuelled the popularity of AI decision algorithms in procedures such as bail hearings, medical diagnoses, and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity, as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that has yet to receive systematic treatment in the literature: when such algorithms are used in life-changing decisions, they can obstruct us from effectively shaping our lives according to our goals and preferences, thus undermining our autonomy. I argue that this concern deserves closer attention, as it furnishes the call for transparency in algorithmic decision-making with both new tools and new challenges.
First archival date: 2022-09-22