This paper deals with the question of agency and intentionality in the context of the free-energy principle. The free-energy principle is a system-theoretic framework for understanding living self-organizing systems and how they relate to their environments. I will first sketch the main philosophical positions in the literature: a rationalist Helmholtzian interpretation (Hohwy 2013; Clark 2013), a cybernetic interpretation (Seth 2015b) and the enactive affordance-based interpretation (Bruineberg and Rietveld 2014; Bruineberg et al. 2016), and will then show how agency and intentionality are construed differently on these different philosophical interpretations. I will then argue that a purely Helmholtzian interpretation is limited, in that it can account for agency only in the context of perceptual inference. The cybernetic account cannot give a full account of action, since purposiveness is accounted for only to the extent that it pertains to the control of homeostatic essential variables. I will then argue that the enactive affordance-based account attempts to provide a broader account of purposive action without presupposing goals and intentions coming from outside of the theory. In the second part of the paper, I will discuss how each of these three interpretations conceives of the sense of agency and intentionality in different ways.
This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme—a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket, thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets—all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment.
The paper is dedicated to particular cases of interaction and mutual impact between philosophy and cognitive science. Thus, philosophical preconditions in the middle of the 20th century shaped the newly born cognitive science as mainly based on conceptual and propositional representations and syntactical inference. Further developments towards neural networks and statistical representations did not change the prejudice much: many still believe that network models must be complemented with some extra tools that would account for proper human cognitive traits. I address some real implemented connectionist models that show how the ‘new associationism’ of the neural network approach may not only surpass Humean limitations but also realistically explain abstraction, inference and prediction. I then dwell on Predictive Processing theories in a little more detail to demonstrate that sophisticated statistical tools applied to a biologically realist ontology may not only provide solutions to scientific problems or integrate different cognitive paradigms but also offer some philosophical insights. To conclude, I touch on a certain parallelism between Predictive Processing and philosophical inferentialism as presented by Robert Brandom.
This monograph is an in-depth and engaging discourse on the deeply cognitive roots of the human scientific quest. The process of making scientific inferences is continuous with the day-to-day inferential activity of individuals, and is predominantly inductive in nature. Inductive inference, which is fallible, exploratory, and open-ended, is of essential relevance in our incessant efforts at making sense of a complex and uncertain world around us, and covers a vast range of cognitive activities, among which scientific exploration constitutes the pinnacle. Inductive inference has a personal aspect to it, being rooted in the cognitive unconscious of individuals, which has recently been found to be of paramount importance in a wide range of complex cognitive processes. One other major aspect of the process of inference making, including the making of scientific inferences, is the role of a vast web of beliefs lodged in the human mind, as well as of a huge repertoire of heuristics, that constitute an important component of ‘unconscious intelligence’. Finally, human cognitive activity is dependent in large measure on emotions and affects that operate mostly at an unconscious level. Of special importance in scientific inferential activity is the process of hypothesis making, which is examined in this book, along with the above aspects of inductive inference, at considerable depth. The book focuses on the inadequacy of the viewpoint of naive realism in understanding the contextual nature of scientific theories, where a cumulative progress towards an ultimate truth about Nature appears to be too simplistic a generalization. It poses a critique of the commonly perceived image of science, which is seen as the last word in logic and objectivity, the latter in the double sense of being independent of individual psychological propensities and, at the same time, approaching a correct understanding of the workings of a mind-independent nature.
Adopting the naturalist point of view, it examines the essential tension between the cognitive endeavours of individuals and scientific communities, immersed in belief systems and cultures, on the one hand, and the engagement with a mind-independent reality on the other. In the end, science emerges as an interpretation of nature, which is perceived by us only contextually, as successively emerging cross-sections of limited scope and extent. Successive waves of theory building in science appear as episodic and kaleidoscopic changes in perspective as certain in-built borders are crossed, rather than as a cumulative progress towards some ultimate truth. While written in an informal and conversational style, the book raises a number of deep and intriguing questions located at the interface of cognitive psychology and philosophy of science, meant for both the general lay reader and the specialist. Of particular interest is the way it explores the role of belief (aided by emotions and affects) in making the process of inductive inference possible, since belief is a subtle though all-pervasive cognitive factor not adequately investigated in the current literature.
Transcranial magnetic stimulation (TMS) is used to make inferences about relationships between brain areas and their functions because, in contrast to neuroimaging tools, it modulates neuronal activity. The central aim of this article is to critically evaluate to what extent it is possible to draw causal inferences from repetitive TMS (rTMS) data. To that end, we describe the logical limitations of inferences based on rTMS experiments. The presented analysis suggests that rTMS alone does not provide the sort of premises that are sufficient to warrant strong inferences about the direct causal properties of targeted brain structures. Overcoming these limitations demands a close look at the designs of rTMS studies, especially the methodological and theoretical conditions which are necessary for the functional decomposition of the relations between brain areas and cognitive functions. The main points of this article are that TMS-based inferences are limited in that stimulation-related causal effects are not equivalent to structure-related causal effects, owing to TMS side effects, the electric field distribution, and the sensitivity of neuroimaging and behavioral methods in detecting structure-related effects and disentangling them from confounds. Moreover, the postulated causal effects can be based on indirect effects. A few suggestions on how to manage some of these limitations are presented. We discuss the benefits of combining rTMS with neuroimaging in experimental reasoning, and we address the restrictions and requirements of rTMS control conditions. The use of neuroimaging and control conditions allows stronger inferences to be drawn, but the strength of the inferences depends on the individual experiment's design. Moreover, in some cases, TMS might not be an appropriate method of answering causality-related questions, or the hypotheses have to account for the limitations of this technique.
We hope this summary and formalization of the reasoning behind rTMS research can be of use not only for scientists and clinicians who intend to interpret rTMS results causally but also for philosophers interested in causal inferences based on brain stimulation research.
Newton published his deduction of universal gravity in Principia (first ed., 1687). To establish the universality (the particle-to-particle nature) of gravity, Newton must establish the additivity of mass. I call ‘additivity’ the property a body's quantity of matter has just in case, if gravitational force is proportional to that quantity, the force can be taken to be the sum of forces proportional to each particle's quantity of matter. Newton's argument for additivity is obscure. I analyze and assess manuscript versions of Newton's initial argument within his initial deduction, dating from early 1685. Newton's strategy depends on distinguishing two quantities of matter, which I call ‘active’ and ‘passive’, by how they are measured. These measurement procedures frame conditions on the additivity of each quantity so measured. While Newton has direct evidence for the additivity of passive quantity of matter, he does not for that of the active quantity. Instead, he tries to infer the latter from the former via conceptual analyses of the third law of motion grounded largely on analogies to magnetic attractions. The conditions needed to establish passive additivity frustrate Newton's attempted inference to active additivity.
In this paper, I argue that the “positive argument” for Constructive Empiricism (CE), according to which CE “makes better sense of science, and of scientific activity, than realism does” (van Fraassen 1980, 73), is an Inference to the Best Explanation (IBE). But constructive empiricists are critical of IBE, and thus they have to be critical of their own “positive argument” for CE. If my argument is sound, then constructive empiricists are in the awkward position of having to reject their own “positive argument” for CE by their own lights.
In this essay we collect and put together a number of ideas relevant to the understanding of the phenomenon of creativity, confining our considerations mostly to the domain of cognitive psychology while we will, on a few occasions, hint at neuropsychological underpinnings as well. In this, we will mostly focus on creativity in science, since creativity in other domains of human endeavor has common links with scientific creativity while differing in numerous other specific respects. We begin by briefly introducing a few basic notions relating to cognition, among which the notion of ‘concepts’ is of basic relevance. The myriads of concepts lodged in our mind constitute a ‘conceptual space’ of an enormously complex structure, where concepts are correlated by beliefs that are themselves made up of concepts and are associated with emotions. The conceptual space, moreover, is perpetually in a state of dynamic evolution that is once again of a complex nature. A major component of the dynamic evolution is made up of incessant acts of inference, where an inference occurs essentially by means of a succession of correlations among concepts set up with beliefs and heuristics, the latter being beliefs of a special kind, namely, ones relatively free of emotional associations and possessed of a relatively greater degree of justification. Beliefs, along with heuristics, have been described as the ‘mind's software’, and constitute important cognitive components of the self-linked psychological resources of an individual. The self is the psychological engine driving all our mental and physical activity, and is in a state of ceaseless dynamics resulting from one's most intimate experiences of the world accumulating in the course of one's journey through life. Many of our psychological resources are of a dual character, having both a self-linked and a shared character, the latter being held in common with larger groups of people and imbibed from cultural inputs.
We focus on the privately held self-linked beliefs of an individual, since these are presumably of central relevance in making possible inductive inferences – ones in which there arises a fundamental need of adopting a choice or making a decision. Beliefs, decisions, and inferences all have a common link to the self of an individual and, in this, are fundamentally analogous to free will, where all of these have an aspect of non-determinism inherent in them. Creativity involves a major restructuring of the conceptual space where a sustained inferential process eventually links remote conceptual domains, thereby opening up the possibility of a large number of new correlations between remote concepts by a cascading process. Since the process of inductive inference depends crucially on decisions at critical junctures of the inferential chain, it becomes necessary to examine the basic mechanism underlying the making of decisions. In the framework that we attempt to build up for the understanding of scientific creativity, this role of decision making in the inferential process assumes central relevance. With this background in place, we briefly sketch the affect theory of decisions. Affect is an innate system of response to perceptual inputs received either from the external world or from the internal physiological and psychological environment, whereby a positive or negative valence gets associated with a perceptual input. Almost every situation faced by an individual, even one experienced tacitly, i.e., without overt awareness, elicits an affective response from him, carrying a positive or negative valence that underlies all sorts of decision making, including ones carried out unconsciously in inferential processes.
Referring to the process of inferential exploration of the conceptual space that generates the possibility of correlations being established between remote conceptual domains, such exploration is guided and steered at every stage by the affect system, analogous to the way a complex computer program proceeds through junctures where the program ascertains whether specified conditions are met by way of generating appropriate numerical values – for instance, the program takes different routes depending on whether some particular numerical value turns out to be positive or negative. The valence generated by the affect system in the process of adoption of a choice plays a similar role, which is therefore of crucial relevance in inferential processes, especially in the exploration of the conceptual space where remote domains need to be linked up – the affect system produces a response along a single value dimension, resembling a number with a sign and a magnitude. While the affect system plays a guiding role in the exploration of the conceptual space, the process of exploration itself consists of the establishment of correlations between concepts by means of beliefs and heuristics, the self-linked ones among the latter having a special role in making possible the inferential journey along alternative routes whenever the shared rules of inference become inadequate. A successful access to a remote conceptual domain, necessary for the creative solution of a standing problem or anomaly – one that could not be solved within the limited domain hitherto accessed – requires a phase of relatively slow cumulative search and then, at some stage, a rapid cascading process when a solution is in sight. Representing the conceptual space in the form of a complex network, the overall process can be likened to one of self-organized criticality commonly observed in the dynamical evolution of complex systems.
In order that inferential access to remote domains may actually be possible, it is necessary that restrictions on the exploration process – necessary for setting the context in ordinary instances of inductive inference – be relaxed and a relatively free exploration in a larger conceptual terrain be made possible. This is achieved by the mind going into the default mode, where external constraints – ones imposed by shared beliefs and modes of exploration – are made inoperative. While explaining all these various aspects of the creative process, we underline the supremely important role that analogy plays in it. Broadly speaking, analogy is in the nature of a heuristic, establishing correlations between concepts. However, analogies are very special in that these are particularly effective in establishing correlations among remote concepts, since analogy works without regard to the contiguity of the concepts in the conceptual space. In establishing links between concepts, analogies have the power to light up entire terrains in the conceptual space when a rapid cascading of fresh correlations becomes possible. The creative process occurs within the mind of a single individual or of a few closely collaborating individuals, but is then continued by an entire epistemic community, eventually resulting in a conceptual revolution. Such conceptual revolutions make possible the radical revision of scientific theories whereby the scope of an extant theory is broadened and a new theoretical framework makes its appearance. The emerging theory is characterized by a certain degree of incommensurability when compared with the earlier one – a feature that may appear strange at first sight.
But incommensurability does not mean incompatibility, and the apparently contrary features of the relation between the successive theories may be traced to the multi-layered structure of the conceptual space, where concepts are correlated not by means of single links but by multiple ones, thereby generating multiple layers of correlation, among which some are retained and some created afresh in a conceptual restructuring. We conclude with the observation that creativity occurs on all scales. Analogous to correlations being set up across domains in the conceptual space and new domains being generated, processes with similar features can occur within the confines of a domain, where a new layer of inferential links may be generated, connecting up subdomains. In this context, insight can be looked upon as an instance of creativity within the confines of a domain of a relatively limited extent.
Susceptibility to the rubber hand illusion (RHI) varies. To date, however, there is no consensus explanation of this variability. Previous studies, focused on the role of multisensory integration, have searched for neural correlates of the illusion. But those studies have failed to identify a sufficient set of functionally specific neural correlates. Because some evidence suggests that frontal α power is one means of tracking neural instantiations of self, we hypothesized that the higher the frontal α power during the eyes-closed resting state, the more stable the self. As a corollary, we infer that the more stable the self, the less susceptible are participants to a blurring of boundaries—to feeling that the rubber hand belongs to them. Indeed, we found that frontal α amplitude oscillations negatively correlate with susceptibility. Moreover, since lower frequencies often modulate higher frequencies, we explored the possibility that this might be the case for the RHI. Indeed, some evidence suggests that high frontal α power observed in low-RHI participants is modulated by δ frequency oscillations. We conclude that while neural correlates of multisensory integration might be necessary for the RHI, sufficient explanation involves variable intrinsic neural activity that modulates how the brain responds to incompatible sensory stimuli.
Scientific models share one central characteristic with fiction: their relation to the physical world is ambiguous. It is often unclear whether an element in a model represents something in the world or presents an artifact of model building. Fiction, too, can resemble our world to varying degrees. However, we assign a different epistemic function to scientific representations. As artifacts of human activity, how do scientific representations allow us to make inferences about real phenomena? In reply to this concern, philosophers of science have started analyzing scientific representations in terms of fictionalization strategies. Many arguments center on a dyadic relation between the model and its target system, focusing on structural resemblances and “as if” scenarios. This chapter provides a different approach. It looks more closely at model building to analyze the interpretative strategies dealing with the representational limits of models. How do we interpret ambiguous elements in models? Moreover, how do we determine the validity of model-based inferences to information that is not an explicit part of a representational structure? I argue that the problem of ambiguous inference emerges from two features of representations, namely their hybridity and incompleteness. To distinguish between fictional and non-fictional elements in scientific models, my suggestion is to look at the integrative strategies that link a particular model to other methods in an ongoing research context. To exemplify this idea, I examine protein modeling through X-ray crystallography as a pivotal method in biochemistry.
Over the last fifteen years, an ambitious explanatory framework has been proposed to unify explanations across biology and cognitive science. Active inference, whose most famous tenet is the free energy principle, has inspired excitement and confusion in equal measure. Here, we lay the ground for proper critical analysis of active inference, in three ways. First, we give simplified versions of its core mathematical models. Second, we outline the historical development of active inference and its relationship to other theoretical approaches. Third, we describe three different kinds of claim -- labelled mathematical, empirical and general -- routinely made by proponents of the framework, and suggest dialectical links between them. Overall, we aim to increase philosophical understanding of active inference so that it may be more readily evaluated.

This is the final submitted version of the Introduction to the Topical Collection "The Free Energy Principle: From Biology to Cognition", forthcoming in Biology & Philosophy.
The goal of this short chapter, aimed at philosophers, is to provide an overview and brief explanation of some central concepts involved in predictive processing (PP). Even those who consider themselves experts on the topic may find it helpful to see how the central terms are used in this collection. To keep things simple, we will first informally define a set of features important to predictive processing, supplemented by some short explanations and an alphabetic glossary.

The features described here are not shared by all PP accounts. Some may not be necessary for an individual model; others may be contested. Indeed, not even all authors of this collection will accept all of them. To make this transparent, we have encouraged contributors to indicate briefly which of the features are necessary to support the arguments they provide, and which (if any) are incompatible with their account. For the sake of clarity, we provide the complete list here, very roughly ordered by how central we take them to be for “Vanilla PP” (i.e., a formulation of predictive processing that will probably be accepted by most researchers working on this topic). More detailed explanations will be given below. Note that these features do not specify individually necessary and jointly sufficient conditions for the application of the concept of “predictive processing”. All we currently have is a semantic cluster, with perhaps some overlapping sets of jointly sufficient criteria. The framework is still developing, and it is difficult, maybe impossible, to provide theory-neutral explanations of all PP ideas without already introducing strong background assumptions.
In his paper, Jakob Hohwy outlines a theory of the brain as an organ for prediction-error minimization (PEM), which he claims has the potential to profoundly alter our understanding of mind and cognition. One manner in which our understanding of the mind is altered, according to PEM, stems from the neurocentric conception of the mind that falls out of the framework, which portrays the mind as “inferentially-secluded” from its environment. This in turn leads Hohwy to reject certain theses of embodied cognition. Focusing on this aspect of Hohwy's argument, we first outline the key components of the PEM framework, such as the “evidentiary boundary,” before looking at why this leads Hohwy to reject certain theses of embodied cognition. We will argue that although Hohwy may be correct to reject specific theses of embodied cognition, others are in fact implied by the PEM framework and may contribute to its development. We present the metaphor of the “body as a laboratory” in order to highlight wha...
The notion of a physiological individual has been developed and applied in the philosophy of biology to understand symbiosis, an understanding of which is key to theorising about the major transition in evolution from multi-organismality to multi-cellularity. The paper begins by asking what such symbiotic individuals can help to reveal about a possible transition in the evolution of cognition. Such a transition marks the movement from cooperating individual biological cognizers to a functionally integrated cognizing unit. Somewhere along the way, did such cognizing units simultaneously have cognizers as parts? Expanding upon the multiscale integration view of the Free Energy Principle, this paper develops an account of reciprocal integration, demonstrating how some coupled biological cognizing systems, when certain constraints are met, can result in a cognizing unit that is in ways greater than the sum of its cognizing parts. Symbiosis between V. fischeri bacteria and the bobtail squid is used to provide an illustration of this account. A novel manner of conceptualizing biological cognizers as gradients is then suggested. Lastly, it is argued that the reason why the notion of ontologically nested cognizers may be unintuitive stems from the fact that our folk-psychological notion of what a cognizer is has been deeply influenced by our folk-biological manner of understanding biological individuals as units of reproduction.
Decision-making has traditionally been modelled as a serial process, consisting of a number of distinct stages. The traditional account assumes that an agent first acquires the necessary perceptual evidence, by constructing a detailed inner representation of the environment, in order to deliberate over a set of possible options. Next, the agent considers her goals and beliefs, and subsequently commits to the best possible course of action. This process then repeats once the agent has learned from the consequences of her actions and subsequently updated her beliefs. Under this interpretation, the agent's body is considered merely as a means to report the decision, or to acquire the relevant goods. However, embodied cognition argues that an agent's body should be understood as a proper part of the decision-making process. Accepting this principle challenges a number of commonly held beliefs in the cognitive sciences, but may lead to a more unified account of decision-making. This thesis explores an embodied account of decision-making using a recent framework known as predictive processing. This framework has been proposed by some as a functional description of neural activity. However, if it is approached from an embodied perspective, it can also offer a novel account of decision-making that extends the scope of our explanatory considerations out beyond the brain and the body. We explore work in the cognitive sciences that supports this view, and argue that decision theory can benefit from adopting an embodied and predictive perspective.
In mental action there is no motor output to be controlled and no sensory input vector that could be manipulated by bodily movement. It is therefore unclear whether this specific target phenomenon can be accommodated under the predictive processing framework at all, or whether the concept of “active inference” can be adapted to this highly relevant explanatory domain. This contribution puts the phenomenon of mental action into explicit focus by introducing a set of novel conceptual instruments and developing a first positive model, concentrating on epistemic mental actions and epistemic self-control. Action initiation is a functionally adequate form of self-deception; mental actions are a specific form of predictive control of effective connectivity, accompanied and possibly even functionally mediated by a conscious “epistemic agent model”. The overall process is aimed at increasing the epistemic value of pre-existing states in the conscious self-model, without causally looping through sensory sheets or using the non-neural body as an instrument for active inference.
Methodological problems often arise when a special case is confused with the general principle. Thus you will find affordances only for 'artifacts' if you restrict the analysis to 'artifacts'. The general principle, however, is an 'invitation character', which triggers an action. Consequently, an action-theoretical approach known as the 'pragmatic turn' in cognitive science is recommended. According to this approach, the human being is not a passive-receptive being but actively produces those action effects that open up the world to us. This 'ideomotor approach' focuses on so-called 'epistemic actions', which guide our perception as conscious and unconscious cognitions. On the view of 'embodied cognition', the body itself is assigned an indispensable role. The action-theoretical approach of 'enactive cognition' makes it possible to treat every form consistently as a process. Thus, each 'Gestalt' is understood as the process result of interlocking cognitions of 'forward modelling' and 'inverse modelling'. As can be shown, these cognitions are fed by previous experiences of real interaction, which later turn into mental trial action that is highly automated and can therefore take place unconsciously. It is now central that every object may have such affordances that call for instrumental or epistemic action. In the simplest case, it is the body and the facial expressions of our counterpart that can be understood as a question and provoke an answer or reaction. Thus, emotion is not only to be understood as expression/output according to the scheme 'input-processing-output', but itself acts as a provocative act/input. Consequently, artifacts are neither necessary nor sufficient conditions for affordances. Rather, affordances exist in all areas of cognition—from Enactive Cognition to Social Cognition.
Many philosophers claim that the neurocomputational framework of predictive processing entails a globally inferentialist and representationalist view of cognition. Here, I contend that this is not correct. I argue that, given the theoretical commitments these philosophers endorse, no structure within predictive processing systems can rightfully be identified as a representational vehicle. To do so, I first examine some of the theoretical commitments these philosophers share, and show that these commitments provide a set of necessary conditions the satisfaction of which allows us to identify representational vehicles. Having done so, I introduce a predictive processing system capable of active inference, in the form of a simple robotic “brain”. I examine it thoroughly, and show that, given the necessary conditions highlighted above, none of its components qualifies as a representational vehicle. I then consider and allay some worries my claim could raise. I consider whether the anti-representationalist verdict thus obtained could be generalized, and provide some reasons favoring a positive answer. I further consider whether my arguments here could be blocked by allowing the same representational vehicle to possess multiple contents, and whether my arguments entail some extreme form of revisionism, answering in the negative in both cases. A quick conclusion follows.
Predictive processing theories are increasingly popular in philosophy of mind; such process theories often gain support from the Free Energy Principle (FEP), a normative principle for adaptive self-organized systems. Yet there is a current and much discussed debate about conflicting philosophical interpretations of the FEP, e.g., representational versus non-representational. Here we argue that these different interpretations depend on implicit assumptions about what qualifies (or fails to qualify) as representational. We deploy the FEP instrumentally to distinguish four main notions of representation, which focus on organizational, structural, content-related and functional aspects, respectively. The various ways that these different aspects matter in arriving at representational or non-representational interpretations of the FEP are discussed. We also discuss how the FEP may be seen as a unified view where terms that traditionally belong to different ontologies (e.g., notions of model and expectation versus notions of autopoiesis and synchronization) can be harmonized. However, rather than attempting to settle the representationalist versus non-representationalist debate and reveal something about what representations are simpliciter, this paper demonstrates how the FEP may be used to reveal something about those partaking in the debate; namely, our hidden assumptions about what representations are, assumptions that act as sometimes antithetical starting points in this persistent philosophical debate.
Self-evidencing describes the purported predictive processing of all self-organising systems, whether conscious or not. Self-evidencing in itself is therefore not sufficient for consciousness. Different systems may however be capable of self-evidencing in different, specific and distinct ways. Some of these ways of self-evidencing can be matched up with, and explain, several properties of consciousness. This carves out a distinction in nature between those systems that are conscious, as described by these properties, and those that are not. This approach throws new light on phenomenology, and suggests that some self-evidencing may be characteristic of consciousness.
Learning is fundamentally about action, enabling the successful navigation of a changing and uncertain environment. The experience of pain is central to this process, indicating the need for a change in action so as to mitigate potential threat to bodily integrity. This review considers the application of Bayesian models of learning in pain that inherently accommodate uncertainty and action, which, we shall propose, are essential in understanding learning in both acute and persistent cases of pain.
The extended mind thesis claims that a subject’s mind sometimes encompasses the environmental props the subject interacts with while solving cognitive tasks. Recently, the debate over the extended mind has been focused on Markov Blankets: the statistical boundaries separating biological systems from the environment. Here, I argue such a focus is mistaken, because Markov Blankets neither adjudicate, nor help us adjudicate, whether the extended mind thesis is true. To do so, I briefly introduce Markov Blankets and the free energy principle in Section 2. I then turn from exposition to criticism. In Section 3, I argue that using Markov Blankets to determine whether the mind extends will provide us with an answer based on circular reasoning. In Section 4, I consider whether Markov Blankets help us track the boundaries of the mind, answering in the negative. This is because resorting to Markov Blankets to track the boundaries of the mind yields extensionally inadequate conclusions which violate the parity principle. In Section 5, I further argue that Markov Blankets lead us to sidestep the debate over the extended mind, as they make internalism about the mind vacuously true. A brief concluding paragraph follows.
This paper aims to provide a theoretical framework for explaining the subjective character of pain experience in terms of what we will call ‘embodied predictive processing’. The predictive processing (PP) theory is a family of views that take perception, action, emotion and cognition to all work together in the service of prediction error minimisation. In this paper we propose an embodied perspective on the PP theory, which we call the ‘embodied predictive processing’ (EPP) theory. The EPP theory proposes to explain pain in terms of processes distributed across the whole body. The prediction error minimising system that generates pain experience comprises the immune system, the endocrine system, and the autonomic system in continuous causal interaction with pathways spread across the whole neural axis. We will argue that these systems function in a coordinated and coherent manner as a single complex adaptive system to maintain homeostasis. This system, which we refer to as the neural-endocrine-immune (NEI) system, maintains homeostasis through the process of prediction error minimisation. We go on to propose a view of the NEI ensemble as a multiscale nesting of Markov blankets that integrates the smallest scale of the cell to the largest scale of the embodied person in pain. We set out to show how the EPP theory can make sense of how pain experience could be neurobiologically constituted. We take it to be a constraint on the adequacy of a scientific explanation of the subjectivity of pain experience that it makes it intelligible how pain can simultaneously be a local sensing of the body, and, at the same time, a more global, all-encompassing attitude towards the environment. Our aim in what follows is to show how the EPP theory can meet this constraint.
A cognitivist account of decision-making views choice behaviour as a serial process of deliberation and commitment, which is separate from perception and action. By contrast, recent work in embodied decision-making has argued that this account is incompatible with emerging neurophysiological data. We argue that this account has significant overlap with an embodied account of predictive processing, and that each can inform the development of the other. However, more importantly, by demonstrating this close connection we uncover an alternative perspective on the nature of decision-making, and the mechanisms that underlie our choice behaviour. This alternative perspective allows us to respond to a challenge for predictive processing, which claims that the satisfaction of distal goal-states is underspecified. Answering this challenge requires the adoption of an embodied perspective.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
This essay presents a point of view for looking at ‘free will’, with the purpose of interpreting where exactly the freedom lies. For freedom is what we mean by it. It compares the exercise of free will with the making of inferences, which is usually predominantly inductive in nature. The making of inference and the exercise of free will both draw upon psychological resources that define our ‘selves’. I examine the constitution of the self of an individual, especially the involvement of personal beliefs, personal memories, affects, emotions, and the hugely important psychological value-system, all of which distinguish the self of one individual from that of another. The foundational position adopted in this essay is that all psychological processes are correlated with corresponding ones involving large-scale neural aggregates in the brain, communicating with one another through wavelike modes of excitation and de-excitation. Of central relevance is the value-network around which the affect system is organized, the latter, in turn, being the axis around which the self is assembled, with all its emotional correlates. The self is a complex system. I include a brief outline of what complexity consists of. In reality all systems are complex, for complexity is ubiquitous, and certain parts of nature appear to us to be ‘simple’ only in certain specific contexts. It is against this background that the issue of determinism is viewed in this essay. Instead of looking at determinism as a grand principle entrenched in nature independent of our interpretation of it, I look at our ability to explain and to predict events and phenomena around us, which is made possible by the existence of causal links running in a complex course that set up correlations between diverse parts of nature, in this way putting the stamp of necessity on these events. However, the complexity of systems limits our ability to explain and to predict to within certain horizons defined by contexts.
Our ability to explain and predict in matters relating to acts of free will is similarly limited by the operations of the self that remain hidden from our own awareness. The aspects of necessity and determinism appear to us in the form of reason and rationality, which explain and predict only within a limited horizon, while the rest depends on the complex operation of self-linked psychological resources, where the latter appear as contingent in the context of the exercise of free will. The hallmark of complex systems is the existence of amplifying factors that operate as destabilizing ones, along with inhibiting or stabilizing factors that generally limit the destabilizing influences to local occurrences, while preserving the global integrity of a system. This complex interplay of destabilizing and stabilizing influences leads to the possibility of an enormous number of distinct modes of behavior that appear as emergent phenomena in complex systems. Looking at the particular case of the self of an individual that guides her actions and thoughts, it is the operation of our emotions, built around the psychological value-system, that provides the amplifying and inhibiting factors mentioned above. The operation of these self-linked factors stamps our actions and thoughts as contingent ones that do not fit with our concepts of reason and rationality. And this is what provides the basis of our idea of free will. Free will is not ‘free’ in virtue of exemption from the causal links running through all our self-based processes (ones that remain hidden from our awareness); rather, it is free of what is perceived to be ‘reason and rationality’ based on knowledge and the common pool of beliefs and principles associated with a shared world-view.
When we speak of the choice involved in an act of exercise of free will, what we actually refer to is the commonly observed fact that various individuals of a similar disposition respond differently when placed under similar circumstances. In other words, free will is ‘free’ precisely because it is not subject to the constraints of a commonly accepted and shared set of principles, beliefs, and values. While it is possible that, in an exercise of free will, the self-linked psychological resources are brought into action when a number of alternatives are presented to the self by the operation of the ingredients of a commonly shared world-view, it seems likely that that set of alternatives is not of primary relevance in the final response that the mind produces, since the latter is primarily a product of the operation of the self-based psychological resources. What is of greater relevance here is the operation of emotion-driven processes, guided by the psychological value-system based on the activity of the so-called reward-punishment network in the brain. These processes lead the individual to a response that appears to be free from the shackles of determinism precisely because their mechanisms, which are hidden from us, do not conform to commonly accepted and shared rules and principles. In contrast, inductive inference is a process that is based on the ‘cognitive face’ of the self, where the self-based psychological resources play a supportive role to the commonly shared world-view of an individual. There is never any freedom from the all-pervading causal links representing correlations among all objects, entities, and events in nature. In the midst of all this, the closest thing to freedom that we can have in our life comes with self-examination and self-improvement. The possibility of self-examination appears in the form of specific conjunctions between our complex self-processes and the ceaseless changes of scenario in our external world.
This actually makes the emergent phenomenon of self-examination a matter of chance, but one that keeps on appearing again and again in our life. Once realized, self-examination creates possibilities that would not be there in its absence, and these possibilities include further self-enrichment and greater diversity in the exercise of our free will.
In temporal binding, the temporal interval between one event and another, occurring some time later, is subjectively compressed. We discuss two ways in which temporal binding has been conceptualized. In studies showing temporal binding between a voluntary action and its causal consequences, such binding is typically interpreted as providing a measure of an implicit or pre-reflective “sense of agency”. However, temporal binding has also been observed in contexts not involving voluntary action, but only the passive observation of a cause-effect sequence. In those contexts, it has been interpreted as a top-down effect on perception reflecting a belief in causality. These two views need not be in conflict with one another, if one thinks of them as concerning two separate mechanisms through which temporal binding can occur. In this paper, we explore an alternative possibility: that there is a unitary way of explaining temporal binding both within and outside the context of voluntary action as a top-down effect on perception reflecting a belief in causality. Any such explanation needs to account for ways in which agency, and factors connected with agency, have been shown to affect the strength of temporal binding. We show that principles of causal inference and causal selection already familiar from the literature on causal learning have the potential to explain why the strength of people’s causal beliefs can be affected by the extent to which they are themselves actively involved in bringing about events, thus in turn affecting binding.
Neuroscience has studied deductive reasoning over the last 20 years under the assumption that deductive inferences are not only de jure but also de facto distinct from other forms of inference. The objective of this research is to verify whether logically valid deductions leave any cerebral electrical trait that is distinct from the trait left by non-valid deductions. Twenty-three subjects with an average age of 20.35 years were recorded with MEG and placed into a two-condition paradigm (100 trials per condition) in which both conditions presented exactly the same relational complexity (same variables and content) but distinct logical complexity. Both conditions show the same electromagnetic components (P3, N4) in the early temporal window (250–525 ms) and P6 in the late temporal window (500–775 ms). The significant activity in both valid and invalid conditions is found in sensors from medial prefrontal regions, probably corresponding to the ACC or to the medial prefrontal cortex. The amplitude and intensity of valid deductions are significantly lower in both temporal windows (p = 0.0003). The reaction time was 54.37% slower in the valid condition. Validity leaves a minimal but measurable hypoactive electrical trait in brain processing. The lower electrical demand is attributable to the recursive and automatable character of valid deductions, suggesting a physical indicator of computational deductive properties. It is hypothesized that all valid deductions are recursive and hypoactive.
This paper details how ghost hunting, as a set of learning activities, can be used to enhance critical thinking and philosophy of science classes. We describe in some detail our own work with ghost hunting, and reflect on both intended and unintended consequences of this pedagogical choice. This choice was partly motivated by students’ lack of familiarity with science and philosophic questions about it. We offer reflections on our three different implementations of the ghost hunting activities. In addition, we discuss the practical nuances of implementing these activities, as well as the relation of ghost hunting to our course content, including informal fallacies and some models for scientific inference. We conclude that employing ghost hunting alongside traditional activities and content of critical thinking and philosophy of science offers a number of benefits, including being fun, increasing student attendance, enhancing student learning, and providing a platform for campus-wide dialogues about philosophy.
The philosophical literature on reasoning is dominated by the assumption that reasoning is essentially a matter of following rules. This paper challenges that view by arguing that it misrepresents the nature of reasoning as a personal level activity. Reasoning must reflect the reasoner’s take on her evidence. The rule-following model seems ill-suited to accommodate this fact. Accordingly, this paper suggests replacing the rule-following model with a different, semantic approach to reasoning.
In evolutionary biology, niche construction is sometimes described as a genuine evolutionary process whereby organisms, through their activities and regulatory mechanisms, modify their environment so as to steer their own evolutionary trajectory, and that of other species. There is ongoing debate, however, on the extent to which niche construction ought to be considered a bona fide evolutionary force, on a par with natural selection. Recent formulations of the variational free-energy principle as applied to the life sciences describe the properties of living systems, and their selection in evolution, in terms of variational inference. We argue that niche construction can be described using a variational approach. We propose new arguments to support the niche construction perspective, and to extend the variational approach to niche construction to current perspectives in various scientific fields.
Around 97% of climate scientists endorse anthropogenic global warming (AGW), the theory that human activities are partly responsible for recent increases in global average temperatures. Clearly, this widespread endorsement of AGW is a reason for non-experts to believe in AGW. But what is the epistemic significance of the fact that some climate scientists do not endorse AGW? This paper contrasts expert unanimity, in which virtually no expert disagrees with some theory, with expert consensus, in which some non-negligible proportion either rejects or is uncertain about the theory. It is argued that, from a layperson’s point of view, an expert consensus is often stronger evidence for a theory’s truth than unanimity. Several lessons are drawn from this conclusion, e.g. concerning what laypeople should infer from expert pronouncements, how journalists should report on scientific theories, and how working scientists should communicate with the public.
Event concepts are unstructured atomic concepts that apply to event types. A paradigm example of such an event type would be that of diaper changing, and so a putative example of an atomic event concept would be DADDY'S-CHANGING-MY-DIAPER. I will defend two claims about such concepts. First, the conceptual claim that it is in principle possible to possess a concept such as DADDY'S-CHANGING-MY-DIAPER without possessing the concept DIAPER. Second, the empirical claim that we actually possess such concepts and that they play an important role in our cognitive lives. The argument for the empirical claim has the form of an inference to the best explanation and is aimed at those who are already willing to attribute concepts and beliefs to infants and nonhuman animals. Many animals and prelinguistic infants seem capable of re-identifying event-types in the world, and they seem to store information about things happening at particular times and places. My account offers a plausible model of how such organisms are able to do this without attributing linguistically structured mental states to them. And although language allows adults to form linguistically structured mental representations of the world, there is no good reason to think that such structured representations necessarily replace the unstructured ones. There is also no good reason for a philosopher who is willing to explain the behavior of an organism by appealing to atomic concepts of individuals or kinds not to use a similar form of explanation when explaining the organism's capacity to recognize events.

We can form empirical concepts of individuals, kinds, properties, event-types, and states of affairs, among other things, and I assume that such concepts function like what François Recanati calls ‘mental files’ or what Ruth Millikan calls ‘substance concepts’ (Recanati 2012; Millikan 1999, 2000, 2017).
To possess such a concept one must have a reliable capacity to re-identify the object in question, but this capacity of re-identification does not fix the reference of the concept. Such concepts allow us to collect and utilize useful information about things that we re-encounter in our environment. We can distinguish between a perception-action system and a perception-belief system, and I will argue that empirical concepts, including atomic event concepts, can play a role in both systems. The perception-action system involves the application of concepts in the service of (often skilled) action. We can think of the concept as a mental file containing motor-plans that can be activated once the individual recognizes that they are in a certain situation. In this way, recognizing something (whether an object or an event) as a token of a type plays a role in guiding immediate action. The perception-belief system, in contrast, allows for the formation of beliefs that can play a role in deliberation and planning and in the formation of expectations. I distinguish between two particular types of belief, which I call where-beliefs and when-beliefs, and I argue that we can model the formation of such perceptual beliefs in nonlinguistic animals and human infants in terms of the formation of a link between an empirical concept and a position on a cognitive map. According to the account offered, seemingly complex beliefs, such as a baby's belief that Daddy changed her diaper in the kitchen earlier, will not be linguistically structured. If we think that prelinguistic infants possess such concepts and are able to form such beliefs, it is likely that adults do too. The ability to form such beliefs does not require the capacity for public language, and we can model them in nonlinguistic terms; thus, we have no good reason to think of such beliefs as propositional attitudes.
Of course, we can use sentences to refer to such beliefs, and thus it is possible to think of such beliefs as somehow being relations to propositions. But it is not clear to me what is gained by this, as we have a perfectly good way to think about the structure of such beliefs that does not involve any appeal to language.
Self-awareness represents the capacity of becoming the object of one’s own attention. In this state one actively identifies, processes, and stores information about the self. This paper surveys the self-awareness literature by emphasizing definition issues, measurement techniques, effects and functions of self-attention, and antecedents of self-awareness. Key self-related concepts (e.g., minimal, reflective consciousness) are distinguished from the central notion of self-awareness. Reviewed measures include questionnaires, implicit tasks, and self-recognition. Main effects and functions of self-attention consist in self-evaluation, escape from the self, amplification of one's subjective experience, increased self-knowledge, self-regulation, and inferences about others' mental states (Theory-of-Mind). A neurocognitive and socioecological model of self-awareness is described in which the role of face-to-face interactions, reflected appraisals, mirrors, media, inner speech, imagery, autobiographical knowledge, and neurological structures is underlined.
My project is to reconsider the Kantian conception of practical reason. Some Kantians think that practical reasoning must be more active than theoretical reasoning, on the putative grounds that such reasoning need not contend with what is there anyway, independently of its exercise. Behind that claim stands the thesis that practical reason is essentially efficacious. I accept the efficacy principle, but deny that it underwrites this inference about practical reason. My inquiry takes place against the background of recent Kantian metaethical debate — each side of which, I argue, correctly points to issues that need to be jointly accommodated in the Kantian account of practical reason. The constructivist points to the essential efficacy of practical reason, while the realist claims that any genuinely cognitive exercise of practical reason owes allegiance to what is there anyway, independently of its exercise. I argue that a Kantian account of respect for persons (“recognition respect”) suggests how the two claims might be jointly accommodated. The result is an empirical moral realism that is itself neutral on the received Kantian metaethical debate.
The pessimistic induction is built upon the uniformity principle that the future resembles the past. In daily scientific activities, however, scientists sometimes rely on what I call the disuniformity principle that the future differs from the past. They do not give up their research projects despite the repeated failures. They believe that they will succeed although they failed repeatedly, and as a result they achieve what they intended to achieve. Given that the disuniformity principle is useful in certain cases in science, we might reasonably use it to infer that present theories are true unlike past theories. Hence, pessimists have the burden to show that our prediction about the fate of present theories is more likely to be true if we use the uniformity principle than if we use the disuniformity principle.
We introduce the notion of complexity, first at an intuitive level and then in relatively more concrete terms, explaining the various characteristic features of complex systems with examples. There exists a vast literature on complexity, and our exposition is intended to be an elementary introduction, meant for a broad audience.

Briefly, a complex system is one whose description involves a hierarchy of levels, where each level is made of a large number of components interacting among themselves. The time evolution of such a system is of a complex nature, depending on the interactions among subsystems in the next level below the one under consideration and, at the same time, conditioned by the level above, where the latter sets the context for the evolution. Generally speaking, the interactions among the constituents of the various levels lead to a dynamics characterized by numerous characteristic scales, each level having its own set of scales. What is more, a level commonly exhibits ‘emergent properties’ that cannot be derived from considerations relating to its component systems taken in isolation or to those in a different contextual setting. In the dynamic evolution of some particular level, there occurs a self-organized emergence of a higher level, and the process is repeated at still higher levels.

The interaction and self-organization of the components of a complex system follow the principle commonly expressed by saying that the ‘whole is different from the sum of the parts’. In the case of systems whose behavior can be expressed mathematically in terms of differential equations, this means that the interactions are nonlinear in nature.

While not all of the above features are universally exhibited by complex systems, they are nevertheless indicative of a broad commonness relative to which individual systems can be described and analyzed.
There exist measures of complexity which, once again, are not of universal applicability, being more heuristic than exact. The present state of knowledge and understanding of complex systems is itself an emerging one. Still, a large number of results on various systems can be related to their complex character, making complexity an immensely fertile concept in the study of natural, biological, and social phenomena.

All this puts a very definite limitation on the complete description of a complex system as a whole, since such a system can be precisely described only contextually, relative to some particular level, where emergent properties rule out an exact description of more than one level within a common framework.

We discuss the implications of these observations in the context of our conception of the so-called noumenal reality that has a mind-independent existence and is perceived by us in the form of the phenomenal reality. The latter is derived from the former by means of our perceptions and interpretations, and our efforts at sorting out and making sense of the bewildering complexity of reality take the form of incessant processes of inference that lead to theories. Strictly speaking, theories apply to models that are constructed as idealized versions of parts of reality, within which inferences and abstractions can be carried out meaningfully, enabling us to construct the theories.

There exists a correspondence between the phenomenal and the noumenal realities in terms of events and their correlations, where these are experienced as the complex behavior of systems or entities of various descriptions. The infinite diversity of behavior of systems in the phenomenal world is explained within specified contexts by theories. The latter are constructs generated in our ceaseless attempts at interpreting the world, and the question arises as to whether these are reflections of ‘laws of nature’ residing in the noumenal world.
This is a fundamental concern of scientific realism, within the fold of which there exists a trend towards the assumption that theories express truths about the noumenal reality. We examine this assumption (referred to as a ‘point of view’ in the present essay) closely and indicate that an alternative point of view is also consistent within the broad framework of scientific realism. This is the view that theories are domain-specific and contextual, and that they are arrived at by independent processes of inference and abstraction in the various domains of experience. Theories in contiguous domains of experience dovetail and interpenetrate with one another, and bear the responsibility of correctly explaining our observations within these domains.

With accumulating experience, theories get revised and the network of our theories of the world acquires a complex structure, exhibiting a complex evolution. There exists a tendency within the fold of scientific realism to interpret this complex evolution in rather simple terms, where one assumes (this, again, is a point of view) that theories tend more and more closely to truths about Nature and, what is more, progress towards an all-embracing ‘ultimate theory’, a foundational one in respect of all our inquiries into nature. We examine this point of view closely and outline the alternative view, one broadly consistent with scientific realism, that there is no ‘ultimate’ law of nature, that theories do not correspond to truths inherent in reality, and that successive revisions in theory do not lead monotonically to some ultimate truth. Instead, the theories generated in succession are incommensurate with each other, testifying to the fact that a theory gives us a perspective view of some part of reality, arrived at contextually. Instead of resembling a monotonically converging series, successive theories are analogous to asymptotic series.
-/- Before we summarize all the above considerations, we briefly address the issue of the complexity of the {\it human mind} -- one as pervasive as the complexity of Nature at large. The complexity of the mind is related to the complexity of the underlying neuronal organization in the brain, which operates within a larger biological context, its activities being modulated by other physiological systems, notably the one involving a host of chemical messengers. The mind, with no materiality of its own, is nevertheless emergent from the activity of interacting neuronal assemblies in the brain. As in the case of reality at large, there can be no ultimate theory of the mind, from which one can explain and predict the entire spectrum of human behavior, which is an infinitely rich and diverse one. (shrink)
According to one theory, the brain is a sophisticated hypothesis tester: perception is Bayesian unconscious inference, where the brain actively uses predictions to test, and then refine, models about what the causes of its sensory input might be. The brain's task is simply to minimise prediction error continually. This increasingly popular theory holds great explanatory promise for a number of central areas of research at the intersection of philosophy and cognitive neuroscience. I show how the theory can help us understand striking phenomena at three cognitive levels: vision, sensory integration, and belief. First, I illustrate central aspects of the theory by showing how it provides a nice explanation of why binocular rivalry occurs. Then I suggest how the theory may explain the role of the unified sense of self in rubber hand and full body illusions driven by visuotactile conflict. Finally, I show how it provides an approach to delusion formation that is consistent with one-deficit accounts of monothematic delusions.
In demonstration, speakers use real-world activity both for its practical effects and to help make their points. The demonstrations of origami mathematics, for example, reconfigure pieces of paper by folding, while simultaneously allowing their author to signal geometric inferences. Demonstration challenges us to explain how practical actions can get such precise significance and how this meaning compares with that of other representations. In this paper, we propose an explanation inspired by David Lewis's characterizations of coordination and scorekeeping in conversation. In particular, we argue that words, gestures, diagrams and demonstrations can function together as integrated ensembles that contribute to conversation, because interlocutors use them in parallel ways to coordinate updates to the conversational record.
According to proponents of the sensorimotor contingency theory of perception (Hurley & Noë 2003, Noë 2004, O'Regan 2011), active control of camera movement is necessary for the emergence of distal attribution in tactile-visual sensory substitution (TVSS) because it enables the subject to acquire knowledge of the way stimulation in the substituting modality varies as a function of self-initiated, bodily action. This chapter, by contrast, approaches distal attribution as a solution to a causal inference problem faced by the subject's perceptual systems. Given all the endogenous and exogenous evidence available to those systems, what is the most probable source of stimulation in the substituting modality? From this perspective, active control over the camera's movements matters for rather different reasons. Most importantly, it generates proprioceptive and efference-copy based information about the camera's body-relative position, necessary for making use of the spatial cues present in the stimulation the subject receives for purposes of egocentric object localization.
Charles S. Peirce (1839-1914) made relevant contributions to deductive logic, but he was primarily interested in the logic of science, and more especially in what he called 'abduction' (as opposed to deduction and induction), which is the process whereby hypotheses are generated in order to explain surprising facts. Indeed, Peirce considered abduction to be at the heart not only of scientific research, but of all ordinary human activities. Nevertheless, in spite of Peirce's work and writings in the field of methodology of research, scarce attention has been paid to the logic of discovery over the last hundred years, despite an impressive development not only of scientific research but also of logic.

Having this in mind, the exposition is divided into five parts: 1) a brief presentation of Peirce, focusing on his work as a professional scientist; 2) an exposition of the classification of inferences by the young Peirce: deduction, induction and hypothesis; 3) a sketch of the notion of abduction in the mature Peirce; 4) an exposition of the logic of surprise; and finally, by way of conclusion, 5) a discussion of this peculiar ability of guessing understood as a rational instinct.
Contemporary reasoning about health is infused with the work products of experts, and expert reasoning about health itself is an active site for invention and design. Building on Toulmin's largely undeveloped ideas on field-dependence, we argue that expert fields can develop new inference rules that, together with the backing they require, become accepted ways of drawing and defending conclusions. The new inference rules themselves function as warrants, and we introduce the term "warranting device" to refer to an assembly of the rule plus whatever material, procedural, and institutional resources are required to assure its dependability. We present a case study on the Cochrane Review, a new method for synthesizing evidence across large numbers of scientific studies. After reviewing the evolution and current structure of the device, we discuss the distinctive kinds of critical questions that may be raised around Cochrane Reviews, both within the expert field and beyond. Although Toulmin's theory of field-dependence is often criticized for its relativism, we find that, as a matter of practical fact, field-specific warrants do not enjoy immunity from external critique. On the contrary, they can be opened to evaluation and critique from any interested perspective.
This is a defense of Pyrrhonian skepticism against the charge that the suspension of judgment based on equipollence is vitiated by the assent given to the equipollence in question. The apparent conflict has a conceptual side as well as a practical side, examined here as separate challenges with a section devoted to each. The conceptual challenge is that the skeptical transition from an equipollence of arguments to a suspension of judgment is undermined either by a logical contradiction or by an epistemic inconsistency, perhaps by both, because the determination and affirmation of equipollence is itself a judgment of sorts, one that is not suspended. The practical challenge is that, independently of any conceptual confusion or contradiction, suspending judgment in reaction to equipollence evinces doxastic commitment to equipollence, if only because human beings are not capable of making assessments requiring rational determination without believing the corresponding premises and conclusions to be true. The two analytic sections addressing these challenges are preceded by two prefatory sections, one laying out the epistemic process, the other reviewing the evidentiary context. The response from the conceptual perspective is that the suspension of judgment based on equipollence is not a reasoned conclusion adopted as the truth of the matter but a natural reaction to an impression left by the apparently equal weight of opposing arguments. The response from the practical perspective is that the acknowledgment of equipollence is not just an affirmation of the equal weight of arguments but also an admission of inability to decide, suggesting that any assent, express or implied, is thrust upon the Pyrrhonist in a state of epistemic paralysis affecting the will and the intellect on the matter being investigated.
This just leaves a deep disagreement, if any, regarding whether equipollence is an inference based on discursive activity or an impression coming from passive receptivity. But this, even if resolved in favor of the critic (which it need not and ought not be), is not the same as confusion or inconsistency on the part of the Pyrrhonist, the demonstration of which is the primary aim of this paper.
To find the neural substrates of consciousness, researchers compare subjects' neural activity when they are aware of stimuli against neural activity when they are not aware. Ideally, to guarantee that the neural substrates of consciousness—and nothing but the neural substrates of consciousness—are isolated, the only difference between these two contrast conditions should be conscious awareness. Nevertheless, in practice, it is quite challenging to eliminate confounds and irrelevant differences between conscious and unconscious conditions. In particular, there is an often-neglected confound that is crucial to eliminate from neuroimaging studies: task performance. Unless subjects' task performance is matched (and hence perceptual signal processing is matched), researchers risk finding the neural correlates of perception, rather than conscious perception. Here, we discuss the theoretical motivations for the performance matching framework and review empirical demonstrations of, and theoretical inferences derived from, obtaining differences in consciousness while controlling for task performance. We summarize signal detection theoretic modeling frameworks that explain how we can derive performance-matched differences in consciousness without the effect being trivially driven by differences in criterion setting, and also provide principles for designing experimental paradigms that yield performance-matched differences in awareness. Finally, we address potential technical and theoretical issues that stem from matching performance across conditions of awareness, and we introduce the notion of "triangulation" for designing comprehensive experimental sets that can better reveal the neural substrates of consciousness.
I distinguish two theses regarding technological successors to current humans (posthumans): an anthropologically bounded posthumanism (ABP) and an anthropologically unbounded posthumanism (AUP). ABP proposes transcendental conditions on agency that can be held to constrain the scope for "weirdness" in the space of possible posthumans a priori. AUP, by contrast, leaves the nature of posthuman agency to be settled empirically (or technologically). Given AUP, there are no "future proof" constraints on the strangeness of posthuman agents.

In Posthuman Life I defended AUP via a critique of Donald Davidson's work on intentionality and a "naturalistic deconstruction" of transcendental phenomenology (see also Roden 2013). In this paper I extend this critique to Robert Brandom's account of the relationship between normativity and intentionality in Making It Explicit (MIE) and in other writings.

Brandom's account understands intentionality in terms of the capacity to undertake and ascribe inferential normative commitments. It makes "first class agency" dependent on the ability to participate in discursive social practices. It implies that posthumans—insofar as they qualify as agents at all—would need to be social and discursive beings.

The problem with this approach, I will argue, is that it replicates a problem that Brandom discerns in Dennett's intentional stance approach. It tells us nothing about the conditions under which a being qualifies as a potential interpreter, and thus little about the conditions for meaning, understanding or agency.

I support this diagnosis by showing that Brandom cannot explain how a non-sapient community could bootstrap itself into sapience by setting up a basic deontic scorekeeping system without appealing (along with Davidson and Dennett) to the ways in which an idealized observer would interpret their activity.
This strongly suggests that interpretationist and pragmatist accounts cannot explain the semantic or the intentional without regressing to assumptions about ideal interpreters or background practices whose scope they are incapable of delimiting. It follows that Anthropologically Unbounded Posthumanism is not seriously challenged by the claim that agency and meaning are "constituted" by social practices.

AUP implies that we can infer no claims about the denizens of "Posthuman Possibility Space" a priori by reflecting on the pragmatic transcendental conditions for semantic content. We thus have no reason to suppose that posthuman agents would have to be subjects of discourse or, indeed, members of communities. The scope for posthuman weirdness can be determined by recourse to engineering alone.
Spinoza’s conatus doctrine, the main proposition of which claims, “[e]ach thing, to the extent it is in itself, strives [conatur] to persevere in its being” (E3p6), has been the subject of growing interest. This is understandable, for Spinoza’s psychology and ethics are based on this doctrine. In my paper I shall examine the way Spinoza argues for E3p6 in its demonstration, which runs as follows: "For singular things are modes by which God’s attributes are expressed in a certain and determinate way (by 1p25c), i.e. (by 1p34), things that express, in a certain and determinate way, God’s power, by which God is and acts. And no thing has anything in itself by which it can be destroyed, or which takes its existence away (by p4). On the contrary, it is opposed to everything which can take its existence away (by p5). Therefore, to the extent it can, and is in itself, it strives to persevere in its being." This argument has been severely criticized for being defective in many ways. E3p6d contains four items, E1p25c, 1p34, 3p4, and 3p5. Most often, only the two last mentioned are regarded as doing any real work in the demonstration. However, I shall argue that a proper grasp of Spinoza’s concept of power enables us to see that the demonstration’s beginning, built on E1p25c and 1p34, brings forth a certain dynamic framework in which finite things are centers of causal power, capable of producing effects in virtue of their essences. My examination of this framework shows the beginning of the demonstration to be irreplaceable: in the end, conatus is one form of power, and E1p25c and 1p34 not only bring the notion of power into play, but also inform us on how finite things’ power should be understood in the monistic system.
So, I disagree with such commentators as Jonathan Bennett, Edwin Curley, Daniel Garber, Michael Della Rocca, and Richard Manning, who see Spinoza as trying to derive the conatus doctrine from E3p4 and 3p5 alone; and I agree with Alexandre Matheron, Henry Allison, and Martin Lin, who stress the importance of E1p25c and 1p34.

However, this still leaves us with the task of reconstructing the whole derivation and showing how its various ingredients fit together. If E1p25c and 1p34 are so important, could E3p6 not be derived from them alone, as Martin Lin has argued? In other words, why are E3p4 and 3p5 needed at all? To answer these questions I shall provide an interpretation of E3p6d that explains how the argument is supposed to work. E1p25c and 1p34 say that finite things are, in essence, dynamic causers which, in cases of opposition, truly resist opposing factors with their power and do not simply cease their causal activities whenever facing obstacles; in other words, they strive against any opposition. However, this is not enough to guarantee that they could not act self-destructively or restrain their own power, which would make them incapable of self-preservation. But this would go against E3p4, “No thing can be destroyed except through an external cause,” and Spinoza uses it to claim, “no thing has anything in itself by which it can be destroyed, or which takes its existence away.” So all this allows Spinoza to hold that finite things are consistent causers, that is, entities endowed with power which, insofar as they cause effects solely in virtue of their essence, never use their power self-destructively. The significance and role of the final item in the demonstration, E3p5 (“Things are of a contrary nature, i.e., cannot be in the same subject, insofar as one can destroy the other”), still needs to be determined. Indeed, considering its content and the way it is used in the demonstration, it seems to be a surprisingly decisive ingredient in the argument.
Namely, what “each thing, to the extent it is in itself,” that is, insofar as any thing is considered disregarding everything external to it, strives to preserve is its being (esse), not simply its present state. This, together with E3p5’s view of subjecthood – which I shall explicate in my paper – suggests that we should rethink what kind of “being” or “existence” is meant in E3p6. Indeed, for Spinoza, each subject has a definable essence from which, as far as the subject in question is in itself, certain properties or effects necessarily follow; consequently, a subject’s full being involves not only instantiating a certain essence, but also those properties inferable from the essence-expressing definition. Thus, E3p5 is meant to bring forward that things are not merely non-self-destroyers but subjects from whose definitions properties follow; and as Spinoza thinks he has shown (by E1p25c and 1p34) that finite modifications are entities endowed with power, any subject has true power to produce the properties or effects derivable from its definition, which, Spinoza claims, implies opposing everything harmful. In other words, things exercise power as their definition states, i.e. according to their definitions, and thus bringing in the idea of things as expressers of power enables Spinoza to convert logical oppositions (of E3p5) into real ones (of E3p6). To summarize, Spinoza reasons that each true finite thing is, in itself, an expresser of power (E1p25c, 1p34) that never acts self-destructively (E3p4) but instead strives to drive itself through opponents to produce effects as they follow from the definition of the thing in question (E1p25c, 1p34, and 3p5).
Therefore, “each thing, to the extent it is in itself, strives to persevere in its being.” The demonstration of E3p6 has its roots deep in Spinoza’s ontology, and since its concept of power is supposed to provide the metaphysical grounding for real opposition, the importance of E1p25c and 1p34 should not be underestimated just because Spinoza – as often happens – puts his point exceedingly briefly. Moreover, the derivation is basically valid and contains no superfluous elements. Finally, all this tells us something decisive about the meaning of Spinoza’s doctrine: according to it, things are active causers whose “power to exist and act” has conatus character in temporality, amounting not only to striving to prolong the duration of one’s actualization but also to striving to be as active or autonomous as possible, that is, to attain a state determined by the striving subject’s essence alone.
Climate change as a social issue has challenged the disciplinary and methodological traditions of research. It becomes even more challenging for schools, which must engage learners in learning situations that are demanding and rooted in geographical pedagogical traditions. Though climate change is present in the curriculum, the present study systematically reviews its teaching in selected literature from 2019 to 2021. The objective of this study is to investigate approaches and strategies in the teaching and learning of climate change, as well as its integration across different learning areas in the basic education curriculum within a global continuum, and the conception and operationalization of climate change education. From the relevant literature accessed, the researchers selected one hundred fifty (150) pieces, trimmed down to fifty-seven (57) and then to nineteen (19), spanning the years 2019 to 2021. The selection of literature is based on the following criteria set by the researchers: educational approach and implication, the methodology employed, and perspectives about climate change. Much of the present literature stressed science as a potent subject for discussing climate change, but other areas were covered as well, including climate education, arts, primary and middle school, after-school activities, and professional development. A systematic study of climate change, a model, computer games, classroom instructions, and learning capacities were all aims of the review. Teaching and learning approaches and strategies were identified. Methodology, perspectives, inferences, and recommendations were thematically discussed.
There is widespread recognition at universities that a proper understanding of science is needed for all undergraduates. Good jobs are increasingly found in fields related to Science, Technology, Engineering, and Medicine, and science now enters almost all aspects of our daily lives. For these reasons, scientific literacy and an understanding of scientific methodology are a foundational part of any undergraduate education. Recipes for Science provides an accessible introduction to the main concepts and methods of scientific reasoning. With the help of an array of contemporary and historical examples, definitions, visual aids, and exercises for active learning, the textbook helps to increase students’ scientific literacy. The first part of the book covers the definitive features of science: naturalism, experimentation, and modeling, along with the merits and shortcomings of these activities. The second part covers the main forms of inference in science: deductive, inductive, abductive, probabilistic, statistical, and causal. The book concludes with a discussion of explanation, theorizing and theory-change, and the relationship between science and society. The textbook is designed to be adaptable to a wide variety of different kinds of courses. In any of these different uses, the book helps students better navigate our scientific, 21st-century world, and it lays the foundation for more advanced undergraduate coursework in a wide variety of liberal arts and science courses.
Selling Points:
- Helps students develop scientific literacy—an essential aspect of any undergraduate education in the 21st century, including a broad understanding of scientific reasoning, methods, and concepts
- Written for all beginning college students: preparing science majors for more focused work in a particular science; introducing the humanities’ investigations of science; and helping non-science majors become more sophisticated consumers of scientific information
- Provides an abundance of both contemporary and historical examples
- Covers reasoning strategies and norms applicable in all fields of the physical, life, and social sciences, as well as strategies and norms distinctive of specific sciences
- Includes visual aids to clarify and illustrate ideas
- Provides text boxes with related topics and helpful definitions of key terms, and includes a final Glossary with all key terms
- Includes Exercises for Active Learning at the end of each chapter, which ensure full student engagement and mastery of the information included earlier in the chapter
- Provides annotated ‘For Further Reading’ sections at the end of each chapter, guiding students to the best primary and secondary sources available
- Offers a Companion Website with: for students, direct links to many of the primary sources discussed in the text, student self-check assessments, a bank of exam questions, and ideas for extended out-of-class projects; for instructors, a password-protected Teacher’s Manual, which provides student exam questions with answers, extensive lecture notes, classroom-ready PowerPoint presentations, and sample syllabi, along with extensive curricular development materials to help any instructor who needs to create a Scientific Reasoning course ex nihilo
This short paper grew out of an observation—made in the course of a larger research project—of a surprising convergence between, on the one hand, certain themes in the work of Mary Hesse and Nelson Goodman in the 1950s and 1960s and, on the other hand, recent work on the representational resources of science, in particular regarding model-based representation. The convergence between these more recent accounts of representation in science and the earlier proposals by Hesse and Goodman consists in the recognition that, in order to secure successful representation in science, collective representational resources must be available. Such resources may take the form of (amongst others) mathematical formalisms, diagrammatic methods, notational rules, or—in the case of material models—conventions regarding the use and manipulation of the constituent parts. More often than not, an abstract characterization of such resources tells only half the story, as they are constituted equally by the pattern of (practical and theoretical) activities—such as instances of manipulation or inference—of the researchers who deploy them. In other words, representational resources need to be sustained by a social practice; this is what renders them collective representational resources in the first place.