Abstract
In the field of machine consciousness, it has been argued that to build human-like conscious machines, we must first have a computational model of qualia. To this end, some have proposed a framework that supports qualia in machines by implementing a model with three computational areas (i.e., the subconceptual, conceptual, and linguistic areas). These abstract mechanisms purportedly enable the assessment of artificial qualia. However, several critics of the machine consciousness project dispute this possibility. Searle, for instance, argues in his Chinese room objection that however sophisticated a computational system may be, it can never exhibit intentionality and would therefore also fail to exhibit consciousness or any of its varieties. This paper argues that the proposed architecture answers the problem posed by Searle, at least in part. Specifically, it argues that Searle's worries in the Chinese room can be reformulated in terms of the three-stage artificial qualia model. In doing so, we can see that the person carrying out all the translations in the room could realize the three areas of the proposed framework, thereby demonstrating how self-consciousness could be actualized in machines.
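As a purely illustrative aid, and not part of the paper's argument, the three-area model referred to above can be pictured as a pipeline in which a subconceptual stage extracts features from raw sensory input, a conceptual stage categorizes them, and a linguistic stage produces a verbal report. The sketch below is a hypothetical toy rendering of that idea; all function names, data structures, and thresholds are assumptions for illustration only.

```python
# Minimal, hypothetical sketch of a three-area pipeline (not the authors' implementation).
from dataclasses import dataclass


@dataclass
class Report:
    features: list   # output of the subconceptual area
    concept: str     # output of the conceptual area
    utterance: str   # output of the linguistic area


def subconceptual_area(raw_input: list) -> list:
    """Extract low-level features from raw sensory input (placeholder: normalize values)."""
    peak = max(raw_input) or 1
    return [x / peak for x in raw_input]


def conceptual_area(features: list) -> str:
    """Map features onto a discrete concept (placeholder: threshold on mean activation)."""
    return "bright" if sum(features) / len(features) > 0.5 else "dim"


def linguistic_area(concept: str) -> str:
    """Produce a first-person verbal report about the categorized input."""
    return f"I am experiencing something {concept}."


def three_area_pipeline(raw_input: list) -> Report:
    features = subconceptual_area(raw_input)
    concept = conceptual_area(features)
    return Report(features, concept, linguistic_area(concept))


if __name__ == "__main__":
    # Example: a toy "sensory" vector flows through all three areas.
    print(three_area_pipeline([0.2, 0.9, 0.7]).utterance)
```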