Abstract
Since the automation revolution of our technological era, machines and robots have gradually begun to reconfigure our lives. With this expansion, these machines now face a new challenge: increasingly autonomous decision-making with life-or-death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine acquire the cognitive capacities needed to be a moral agent? To address it, I propose, from a normative-cognitive perspective, the minimum criteria by which we could recognize an artificial entity as a genuine moral agent. Although my proposal should be read at a reasonable level of abstraction, I critically analyze how an artificial agent could integrate those cognitive features and, finally, discuss their limitations and possibilities.