Abstract
Understanding computation as "a process of the dynamic change of information" leads us to consider the different types of computation and information. Computation on information does not exist by itself; it must be considered as part of a system that uses it for some purpose. Information can be meaningless, like thunderstorm noise, or meaningful, like an alert signal or the representation of a desired food. Thunderstorm noise participates in the generation of meaningful information about coming rain. An alert signal is meaningful because it allows a safety constraint to be satisfied. The representation of a desired food participates in the satisfaction of metabolic constraints of the organism. Computations on information and representations differ in nature and complexity because the systems that link them have different constraints to satisfy. Animals have survival constraints to satisfy. Humans have many additional specific constraints. And computers compute whatever their designers and programmers ask for. We propose to analyze the different relations between information, meaning and representation by taking an evolutionary approach to the systems that link them. Such a bottom-up approach allows us to start with simple organisms and avoids an implicit focus on humans, the most complex and difficult case. To provide a common background applicable to the many different cases, we use a systemic tool that defines the generation of meaningful information by and for a system submitted to a constraint [Menant, 2003]. This systemic tool allows us to position information, meaning and representations for systems relative to environmental entities in an evolutionary perspective. We begin by positioning the notions of information, meaning and representation, and recall the characteristics of the Meaning Generator System (MGS) that links a system submitted to a constraint to its environment. We then apply the MGS to animals and highlight the network nature of the interrelated meanings about an entity of the environment. This brings us to define the representation of an item for an agent as the network of meanings relative to that item for the agent. Such meaningful representations embed agents in their environments and are far from those of Good Old-Fashioned Artificial Intelligence. The MGS approach is then applied to humans, with a limitation resulting from the unknown nature of human consciousness. Applying the MGS to artificial systems leads us to look for compatibilities with different approaches to Artificial Intelligence (AI), such as embodied-situated AI, the Guidance Theory of Representations, and enactive AI. Concerns relative to different types of autonomy and to organic versus artificial constraints are highlighted. We conclude by summarizing the points addressed and by proposing some continuations.