Abstract
This paper proposes a novel constraint on artificial consciousness. The central claim is that no artificial system can be genuinely conscious unless it instantiates a form of self-referential inference that is irreducibly perspectival and non-computable. Drawing on Quantum Bayesianism (QBism), I argue that consciousness should be understood as an anticipatory process grounded in subjective belief revision, rather than as an emergent product of computational complexity. Classical systems, however sophisticated, lack the architecture required to support this mode of belief revision. I conclude that artificial consciousness demands more than computation: it demands a subject.