Humans are cognitive entities. Our ongoing interactions with the environment are threaded with the creation and use of meaningful information. Animal life is also populated with meaningful information related to survival constraints. Information managed by artificial agents can also be considered as having meanings, derived from the designer. Such a perspective leads us to propose an evolutionary approach to cognition based on the management of meaningful information. We use a systemic tool, the Meaning Generator System (MGS), and apply it successively to animals, humans, and artificial agents [1, 2]. The MGS receives information from its environment and compares it with its constraint. The generated meaning is the connection existing between the received information and the constraint. It triggers an action aimed at satisfying the constraint. The action modifies the environment and, in turn, the generated meaning. Meaning generation links agents to their environments. The MGS is a system: a set of elements linked by a set of relations. Any system submitted to a constraint and capable of receiving information can give rise to an MGS. Animals, humans, and robots are agents containing MGSs that deal with different constraints. Similar MGSs carrying different constraints will generate different meanings: cognition is system dependent. Contrary to approaches to meaning generation based on psychology or linguistics, the MGS approach is not grounded in the human mind; we want to avoid the circularity of taking the human mind as a starting point. Free will and self-consciousness participate in the management of human meanings; they do not exist for animals or robots. Staying alive is a constraint that we share with animals, whereas robots do not carry that constraint. We first apply the MGS to animals with "stay alive" and "group life" constraints. The analysis of meaning and cognition in animals is however limited by our incomplete understanding of the nature of life (the question of final causes).
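The MGS loop described above (receive information, compare it with the constraint to generate a meaning, trigger an action that modifies the environment) can be sketched in code. This is a minimal toy illustration, not part of the original formulation: the scalar constraint, the gap-as-meaning encoding, and the proportional action rule are all assumptions made for the sake of the example.

```python
# Toy sketch of a Meaning Generator System (MGS).
# Assumption: the constraint is a scalar set point (e.g. "keep a
# sensed quantity at 1.0"); the names and the 0.5 gain are illustrative.

def generate_meaning(received_info: float, constraint: float) -> float:
    """The meaning is the connection between the received information
    and the constraint (here encoded as a signed gap)."""
    return constraint - received_info

def choose_action(meaning: float) -> float:
    """The meaning triggers an action aimed at satisfying the
    constraint (here a proportional correction)."""
    return 0.5 * meaning

def mgs_step(environment: float, constraint: float) -> float:
    """One cycle: receive information from the environment, generate
    a meaning, act; the action modifies the environment."""
    info = environment                       # information received
    meaning = generate_meaning(info, constraint)
    action = choose_action(meaning)
    return environment + action              # environment is modified

# Repeated cycles drive the environment toward satisfying the constraint,
# and the generated meaning shrinks accordingly.
env = 0.0
for _ in range(20):
    env = mgs_step(env, constraint=1.0)
print(round(env, 4))  # converges toward 1.0
```

The point of the sketch is only structural: the meaning exists by and for the system holding the constraint, and a similar MGS with a different constraint would generate a different meaning from the same received information.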
Extending the analysis of meaning generation and cognition to humans is complex and has real limitations, as the nature of the human mind remains a mystery for today's science and philosophy. The natures of our feelings, free will, and self-consciousness are unknown. Approaches to identifying human constraints are however possible, and the MGS can highlight some openings there [3, 4]. Modeling meaning management in artificial agents is rather straightforward with the MGS: we, the designers, know the agents and the constraints. The derived nature of the constraints, meanings, and cognition must however be kept in mind. We define a meaningful representation of an item for an agent as the network of meanings relative to the item for that agent, together with the action scenarios involving the item. Such meaningful representations embed agents in their environments and are far from GOFAI-type representations. Cognition, meanings, and representations exist by and for the agents. We finish by summarizing the points presented here and highlighting possible continuations.
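The notion of a meaningful representation (a network of meanings relative to an item, together with the action scenarios involving it) can be illustrated as a small data structure. This is a hypothetical sketch: the class and field names, and the robot example with its designer-derived "avoid obstacles" constraint, are assumptions introduced for illustration only.

```python
# Hypothetical sketch of a "meaningful representation" of an item for an
# agent. All names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Meaning:
    constraint: str    # the constraint the meaning is generated against
    information: str   # information received about the item
    connection: str    # the link between the information and the constraint

@dataclass
class MeaningfulRepresentation:
    item: str
    meanings: list = field(default_factory=list)          # network of meanings
    action_scenarios: list = field(default_factory=list)  # actions involving the item

# A robot whose designer gave it an "avoid obstacles" constraint:
rep = MeaningfulRepresentation(item="wall")
rep.meanings.append(Meaning(
    constraint="avoid obstacles",
    information="wall detected 0.3 m ahead",
    connection="wall ahead conflicts with the avoid-obstacles constraint"))
rep.action_scenarios.append("turn away from the wall")
```

Note that the representation is relative to the agent's own constraints: an agent with different constraints would build a different network of meanings for the same item, which is what distinguishes such representations from GOFAI-type agent-independent ones.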
[1] "Information and Meaning"
[2] "Introduction to a systemic theory of meaning"
[3] "Computation on Information, Meaning and Representations. An Evolutionary Approach"
[4] "Proposal for a shared evolutionary nature of language and consciousness"