Abstract
Intelligence in current AI research is measured according to designer-assigned tasks that lack any relevance for the agents themselves. As such, tasks and their evaluation reveal far more about our own intelligence than about the possible intelligence of the agents we design and evaluate. As a possible first step toward remedying this, this article introduces the notion of “self-concern,” a property of a complex system that describes its tendency to bring about states that are compatible with its continued self-maintenance. Self-concern, it is argued, is the foundation of the kind of basic intelligence found across all biological systems, because it reflects any such system's existential task of continued viability. This article aims to cautiously progress a few steps closer to a better understanding of the organisational conditions that are central to self-concern in biological systems. If these conditions can be emulated in embodied AI, something like genuine self-concern might be implemented in machines, bringing AI one step closer to its original goal of emulating human-like intelligence.