Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency

Dissertation, Stellenbosch University (2019)
The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. whether they are moral patients) or whether they can be sources of moral action (i.e. whether they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as a moral agent. This view claims that because artificial agents (AAs) lack sentience, they cannot be proper subjects of moral concern and hence cannot be considered moral agents. I raise conceptual and epistemic issues with regard to the sense of sentience employed on this view, and I argue that the Organic View does not succeed in showing that machines cannot be moral patients. Irrespective of this failure, however, I also argue that the entire project is misdirected, in that moral patiency need not be a necessary condition for moral agency. Moreover, I claim that whereas machines may conceivably become moral patients in the future, there is a strong case to be made that they are (or will very soon be) moral agents. Whereas it is often argued that machines cannot be agents simpliciter, let alone moral agents, I claim that this argument is predicated on a conception of agency that makes unwarranted metaphysical assumptions even in the case of human agents. Once I have established the shortcomings of this "standard account", I elaborate on other, more plausible, conceptions of agency, on which some machines clearly qualify as agents. Nevertheless, the argument is still often made that while some machines may be agents, they cannot be moral agents, given their ostensible lack of the requisite phenomenal states. Against this thesis, I argue that the requirement of internal states for moral agency is philosophically unsound, as it runs up against the problem of other minds.
In place of such intentional accounts of moral agency, I provide a functionalist alternative, which makes conceptual room for the existence of AMAs. The implications of this thesis are that at some point in the future we may be faced with situations for which no human being is morally responsible, but a machine may be. Moreover, this responsibility holds, I claim, independently of whether the agent in question is “punishable” or not.
Archival date: 2020-06-17