Are current AIs moral agents?

Abstract

In this essay, I argue that current AIs are not moral agents. I first criticize the influential argument from sentience advanced by Véliz, according to which AIs are not moral agents because they cannot feel pleasure and pain. I show that moral agency does not necessarily require sentience, and thereby refute Véliz's argument. In its place, I propose an argument from responsibility. First, I establish that moral agents necessarily have the capacity to bear moral responsibility. Next, I explain three major accounts of responsibility presented in the Stanford Encyclopedia of Philosophy. Then, for each of the three accounts, I show that current AIs lack the capacity to bear responsibility. It follows that current AIs cannot be moral agents. Finally, the essay discusses the possibility of AIs being partial moral agents and institutional moral agents.

Author's Profile

Xin Guan
Holistic AI

Added to PP
2024-11-28
