Risk Imposition by Artificial Agents: The Moral Proxy Problem

In Silja Vöneky, Philipp Kellmeyer, Oliver Müller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press (forthcoming)
Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies for low-level agents — e.g. individual users of the artificial agents — or whether they should be moral proxies for high-level agents — e.g. designers, distributors or regulators, that is, those who can potentially control the choice behaviour of many artificial agents at once. Who we think an artificial agent is a moral proxy for determines from which agential perspective the choice problems artificial agents will be faced with should be framed: should we frame them like the individual choice scenarios previously faced by individual human agents? Or should we, rather, consider the expected aggregate effects of the many choices made by all the artificial agents of a particular type all at once? This paper looks at how artificial agents should be designed to make risky choices, and argues that the question of risky choice by artificial agents shows the moral proxy problem to be both practically relevant and difficult.
Archival date: 2021-08-16