Abstract
According to the singularity hypothesis, rapid and accelerating technological progress will in due course lead to the creation of a human-level artificial intelligence capable of designing a successor artificial intelligence of significantly greater cognitive prowess, and this will inaugurate a series of increasingly superintelligent machines. But how much sense can we make of the idea of a being whose cognitive architecture is qualitatively superior to our own? This article argues that one fundamental limitation of human cognitive architecture is an inbuilt commitment to a metaphysical division between subject and object, a commitment that could be overcome in an artificial intelligence lacking our biological heritage.