Warszawa, Polska: Liberi Libri (2020)
Abstract
The aim of this study is to justify the belief that there are biological normative
mechanisms that fulfill non-trivial causal roles in the explanations (as formulated by
researchers) of actions and behaviors present in specific systems. One example of such mechanisms is provided by the predictive mechanisms described and explained by predictive processing (hereinafter PP), which (1) guide actions and (2) shape causal transitions between states that have specific content and fulfillment conditions (e.g. mental states). I am therefore guided by a specific theoretical goal: to indicate the conditions that should be met both by a non-trivial theory of normative mechanisms and by the specific models proposed by PP supporters.
In this work, I use classical philosophical methods, such as conceptual analysis and
critical reflection. I also analyze selected studies in the fields of cognitive science, cognitive psychology, neurology, information theory, and biology in terms of their methodology, argumentation, and language, according to their theoretical relevance to the issues discussed in this study. In this sense, the research presented here is interdisciplinary.
My research framework is informed by the mechanistic model of explanation, which defines
the necessary and sufficient conditions for explaining a given phenomenon. The research
methods I have chosen are therefore matched to the problems that I intend to solve.
In the introductory chapter, “The concept of predictive processing”, I discuss the
nature of PP as well as its main assumptions and theses. I also highlight the concepts and distinctions that are key to this research framework. Many authors argue that PP is a contemporary version of Kantianism and is exposed to objections similar to those raised against Immanuel Kant's approach. I discuss this thesis and show that it is only in a very general sense that the PP framework is neo-Kantian: it involves neither transcendental deduction nor the application of transcendental arguments. I argue that PP is instead based on reverse engineering and abductive inferences. In the second part of this chapter, I respond to the objection formulated by Dan Zahavi, who accuses this research framework of having anti-realist consequences. I demonstrate that the internalism present in so-called conservative PP does not imply anti-realism, and that, given the explanatory role played in it by structural representations directed at real patterns, it is justified to claim that PP is realist. In this way, I show that PP is a non-trivial research framework with its own subject matter, specific methods, and epistemic status. Finally, I discuss positions classified as so-called radical PP.
In the chapter “Predictive processing as a Bayesian explanatory model” I justify the
thesis according to which PP offers Bayesian modeling. Many researchers claim that the brain
is an implemented statistical, probabilistic network that approximates Bayes' rule. In practice, this means that all cognitive processes are taken to apply Bayes' rule and can be described in terms of probability distributions. Such a solution has met with objections from many researchers and is the subject of extensive criticism. The purpose of this chapter is to justify
the thesis that Bayesian PP is a non-trivial research framework. For this purpose, I argue that
it explains certain phenomena not only at the computational level described by David Marr,
but also at the level of algorithms and implementation. Later in this chapter I demonstrate
that PP is normative modeling. Proponents of the use of Bayesian models in psychology or
decision theory argue that they are normative because they allow the formulation of formal
rules of action that show what needs to be done to make a given action optimal. Critics of this
approach emphasize that such thinking about the normativity of Bayesian modeling is
unjustified and that science should shift from prescriptive to descriptive positions. In a polemic
with Shira Elqayam and Jonathan Evans (2011), I show that the division they propose into
prescriptivism and Bayesian descriptivism is merely apparent, because, as I argue, there are two forms of prescriptivism: a weak one and a strong one. I argue that the weak version is epistemic and can lead to anti-realism, while the strong version is ontic and allows one to justify realism in relation to Bayesian models. The weak version of prescriptivism, I argue, holds for PP; it allows us to adopt anti-realism in relation to PP. In practice, this means that one can explain
phenomena using Bayes' rule. This does not, however, imply that they are Bayesian in nature.
However, the full justification of realism in relation to Bayesian PP presupposes the
adoption of strong prescriptivism. This position assumes that phenomena are explained by
Bayes' rule because they are Bayesian as such. If they are Bayesian in nature, then they
should be explained using Bayesian modeling. This thesis will be substantiated in the chapters
“Normative functions and mechanisms in the context of predictive processing” and
“Normative mechanisms and actions in predictive processing”.
In the chapter “The Free Energy Principle in predictive processing”, I discuss the Free
Energy Principle (hereinafter FEP) formulated by Karl Friston and some of its implications.
According to this principle, all biological systems (defined in terms of Markov blankets)
minimize the free energy of their internal states in order to maintain homeostasis. Some
researchers believe that PP is a special case of applying this principle to cognition, and that
predictive mechanisms are homeostatic mechanisms that minimize free energy. The
discussion of FEP is important because some authors consider it to be both explanatorily significant and normative. If this is the case, then FEP turns out to be crucial in explaining normative predictive mechanisms and, in general, any normative biological mechanisms. To determine the explanatory potential of this principle, I refer to the debate among its supporters over what they call the problem of continuity and discontinuity between life and mind. A critical analysis of this discussion and the additional arguments I
have formulated have allowed me to revise the explanatory ambitions of FEP. I also reject the
belief that this principle is necessary to explain the nature of predictive mechanisms. I argue
that the principle formulated and defended by Friston is an important research heuristic for
PP analysis.
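In its standard variational formulation (the notation here is generic rather than Friston's own), the free energy F of a system with sensory data y and an approximate density q(x) over hidden states x can be written as

\[
F = \mathbb{E}_{q(x)}\big[\ln q(x) - \ln p(x, y)\big]
  = -\ln p(y) + D_{\mathrm{KL}}\big[q(x) \,\|\, p(x \mid y)\big]
  \;\ge\; -\ln p(y),
\]

so that minimizing F minimizes an upper bound on surprisal; this is the standard formal sense in which free-energy minimization is linked to the avoidance of surprising, homeostasis-violating states and thus to the minimization of prediction errors.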
In the chapter “Normative functions and mechanisms in predictive processing”, I start
my analyses by formulating an answer to the question about the normative nature of
homeostatic mechanisms. I demonstrate that predictive mechanisms are not homeostatic. I
defend the view that a full explanation of normative mechanisms presupposes an explanation
of normative functions. I discuss the most important proposals for understanding the
normativity of a function, from both systemic and teleosemantic perspectives. I conclude that a non-trivial concept of function must meet two requirements, which I call the explanatory and the normative requirement. I show that none of the theories I have invoked satisfactorily meets both of these requirements. Instead, I propose a model of normativity based on Bickhard's account, supplemented by a mechanistic perspective. I argue that a function is normative when: (1) it allows one to explain the dysfunction of a given mechanism; (2) it contributes to the maintenance of the organism's stability by shaping and limiting the possible relations, processes, and behaviors of a given system; and (3) in the case of representational and predictive functions, it makes it possible to explain the attribution of truth values to certain representations or predictions. On such an approach, a mechanism is normative when it performs certain normative functions and when it is constitutive for a specific action or behavior, even if, for some reason, it cannot realize that action or behavior either currently or in the long term.
Such an understanding of the normativity of mechanisms presupposes the acceptance
of the epistemic hypothesis. I argue that this hypothesis is not cognitively satisfactory, and
therefore the ontic hypothesis should be justified, which is directly related to adopting the
position of ontic prescriptivism. For this reason, referring to the mechanistic theory of
scientific explanations, I formulate an ontic interpretation of the concept of a normative mechanism. According to this approach, a mechanism or a function is normative when it performs specific causal roles in explaining certain actions and behaviors. With regard
to the normative properties of predictive mechanisms and functions, this means that they are
the causes of specific actions an organism carries out in the environment. In this way, I justify
the necessity of accepting the ontic hypothesis and rejecting the epistemic hypothesis.
The fifth chapter, “Normative mechanisms and actions in predictive processing”, is
devoted to the dark room problem and the related exploration-exploitation trade-off. A dark
room is the state that an agent could be in if it minimized the sum of all potential prediction
errors. I demonstrate that, in accordance with the basic assumption of PP about the need for
continuous and long-term minimization of prediction errors, such a state should be desirable
for the agent. Is it really so? Many authors believe it is not. I argue that the test of the value
of PP is the possibility of a non-trivial solution to this problem, which can be reduced to the choice between active, uncertainty-increasing exploration and safe, easily predictable exploitation. I show that the solutions proposed by PP supporters in the literature do not provide a fully satisfactory explanation of this dilemma.
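In the active-inference literature, this trade-off is often rendered formally by decomposing the expected free energy G of a policy \(\pi\) into an epistemic and a pragmatic component; schematically, in one common rendering from that literature,

\[
G(\pi) \;\approx\;
-\underbrace{\mathbb{E}_{Q}\big[D_{\mathrm{KL}}\big[Q(s \mid o,\pi)\,\|\,Q(s \mid \pi)\big]\big]}_{\text{epistemic value (exploration)}}
\;-\;
\underbrace{\mathbb{E}_{Q}\big[\ln P(o \mid C)\big]}_{\text{pragmatic value (exploitation)}},
\]

where s are hidden states, o anticipated observations, and C prior preferences. An agent minimizing G must balance expected information gain against the expected satisfaction of its preferences, which is exactly the shape of the dilemma at issue here.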
Then I defend the position according to which the full explanation of the normative
mechanisms, and, consequently, the solution to the dilemma of exploration and exploitation,
involves reference to the existence of constraints present in the environment. The constraints
include elements of the environment that make a given mechanism not only causal but also
normative. They are therefore key to explaining the predictive mechanisms. They not only play the role of the context in which the mechanism is implemented but are, above all, its
constitutive component. I argue that the full explanation of the role of constraints in
normative predictive mechanisms presupposes the integration of individual models of specific
cognitive phenomena, because only the mechanistic integration of PP with other models
allows for a non-trivial explanation, with strong explanatory value, of the nature of normative predictive mechanisms. The explanatory monism present in many approaches to PP makes it impossible to solve the dark room problem.
Later in this chapter, I argue that the Bayesian PP is normative not because it enables
the formulation of such and such rules of action, but because the predictive mechanisms
themselves are normative: they condition agents' choices of specific actions. In this way, I justify the hypothesis that normative mechanisms make
it possible to explain the phenomenon of agent motivation, which is crucial for solving the
dark room problem.
In the last part of the chapter, I formulate the hypothesis of distributed normativity,
which assumes that the normative nature of certain mechanisms, functions or objects is
determined by the relations into which these mechanisms, functions or objects enter. This
means that what is normative (in the primary sense) is the relational structure that constitutes
the normativity of specific items included in it. I suggest that this hypothesis opens up many
areas of research and makes it possible to rethink many problems.
In the “Conclusion”, I summarize the results of my research and indicate further
research perspectives.