Abstract
Can large language models be considered intelligent? Arguments against this proposition
often assume that genuine intelligence cannot exist without consciousness, understanding, or
creative thinking. We discuss each of these roadblocks to machine intelligence and conclude
that, in light of findings and conceptualizations in scientific research on these topics, none of
them rule out the possibility of viewing current AI systems based on large language models as intelligent. We argue that consciousness is not relevant for AI, while creativity and understanding should be considered functional traits that, in principle, can be implemented in a machine. Many arguments in circulation that claim current AI systems necessarily lack
these traits rely either on human exceptionalism or on a mistaken understanding of human intelligence. Arguments that highlight alleged flaws in current systems (such as a lack of reliability, agency, or understanding) may be important, but they often obscure the many qualitative similarities between human cognition and current AI models. We further suggest that a critical examination of high-performance AI systems can serve as a mirror in which we can reflect on our own intelligence. Many arguments against the prospects of AI may prove disconcerting for naively optimistic assessments of the capacities of the human mind.