Is There a Trade-Off Between Human Autonomy and the ‘Autonomy’ of AI Systems?

In Conference on Philosophy and Theory of Artificial Intelligence. Springer International Publishing. pp. 67-71 (2022)

Abstract

Autonomy is often considered a core value of Western society that is deeply entrenched in moral, legal, and political practices. The development and deployment of artificial intelligence (AI) systems to perform a wide variety of tasks has raised new questions about how AI may affect human autonomy. Numerous guidelines on the responsible development of AI now emphasise the need for human autonomy to be protected. In some cases, this need is linked to the emergence of increasingly ‘autonomous’ AI systems that can perform tasks without human control or supervision. Do such ‘autonomous’ systems pose a risk to our own human autonomy? In this article, I address the question of a trade-off between human autonomy and system ‘autonomy’.

