Abstract
In this essay, I will argue that current AIs are not moral agents. I will first criticize the influential argument from sentience advanced by Véliz, according to which AIs are not moral agents because they cannot feel pleasure and pain. I will show that moral agency does not necessarily require sentience, thereby refuting Véliz’s argument. In its place, I will propose an argument from responsibility. First, I will establish that moral agents necessarily have the capacity to bear moral responsibility. Next, I will explain three major accounts of responsibility presented in the Stanford Encyclopedia of Philosophy. Then, for each of these accounts, I will show that current AIs lack the capacity to bear responsibility. It follows that current AIs cannot be moral agents. Finally, I will discuss the possibility of AIs being partial moral agents or institutional moral agents.