Abstract
As engineers propose constructing humanlike automata, the question arises whether such machines would merit human rights. The issue warrants serious and rigorous examination, although it has not yet cohered into a sustained conversation. To give it a clear direction, this paper proposes framing the question in terms of whether humans are morally obligated to extend to maximally humanlike automata full human rights, that is, those set forth in common international rights documents. The paper's approach is to consider the ontology of humans and of automata and to ask whether an ontological difference between them, one that pertains to the very bases of human rights, affects the automata's claims to full human rights. Considering the common bases of human rights, the paper asks whether a certain ontological distinction of humans from automata (or a de facto distinction about humans tacitly acknowledged by full-rights-recognizing societies) makes a difference in whether humans are morally obligated to assign these entities full rights. Human rights to security also enter the discussion. The conclusion is that humans need not be under any moral obligation to confer full human rights on automata. The paper's ultimate point is not to close the discussion with this ontological cap but to lay a solid moral and legal groundwork for opening it up tout court.