We criticise Shepard's notions of “invariance” and “universality,” and the incorporation of Shepard's work on inference into the general framework of his paper. We then criticise Tenenbaum and Griffiths' account of Shepard (1987b), including the attributed likelihood function, and the assumption of “weak sampling.” Finally, we endorse Barlow's suggestion that minimum message length (MML) theory has useful things to say about the Bayesian inference problems discussed by Shepard and Tenenbaum and Griffiths. [Barlow; Shepard; Tenenbaum & Griffiths].
Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian reasoning with idealized models in science.
Probability updating via Bayes' rule often entails extensive informational and computational requirements. In consequence, relatively few practical applications of Bayesian adaptive control techniques have been attempted. This paper discusses an alternative approach to adaptive control, Bayesian in spirit, which shifts attention from the updating of probability distributions via transitional probability assessments to the direct updating of the criterion function, itself, via transitional utility assessments. Results are illustrated in terms of an adaptive reinvestment two-armed bandit problem.
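The posterior-updating baseline that this abstract contrasts with its utility-based alternative can be sketched with a conjugate Beta-Bernoulli model of a two-armed bandit. The following is a minimal illustration, not the paper's transitional-utility method; the arm names, payoff probabilities, and the naive greedy policy are all invented for the example.

```python
import random

random.seed(0)

true_p = {"A": 0.7, "B": 0.4}              # hidden success rates (assumed)
beliefs = {arm: [1, 1] for arm in true_p}  # Beta(alpha, beta) priors, uniform

def posterior_mean(arm):
    a, b = beliefs[arm]
    return a / (a + b)

for _ in range(500):
    # naive greedy choice on posterior means (a stand-in for a bandit policy)
    arm = max(beliefs, key=posterior_mean)
    success = random.random() < true_p[arm]
    # Bayes' rule for the Beta-Bernoulli model reduces to a count update
    beliefs[arm][0] += success
    beliefs[arm][1] += not success

print({arm: round(posterior_mean(arm), 2) for arm in beliefs})
```

Even this toy version shows the informational burden the abstract mentions: the full posterior over each arm must be carried and updated at every step, which is what the criterion-function approach tries to sidestep.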
Decoupling is a general principle that allows us to separate simple components in a complex system. In statistics, decoupling is often expressed as independence, no association, or zero covariance relations. These relations are sharp statistical hypotheses that can be tested using the FBST - Full Bayesian Significance Test. Decoupling relations can also be introduced by some techniques of Design of Statistical Experiments, DSEs, like randomization. This article discusses the concepts of decoupling, randomization and sparsely connected statistical models in the epistemological framework of cognitive constructivism.
An important problem with machine learning is that when the label number n>2, it is very difficult to construct and optimize a group of learning functions, and we wish that optimized learning functions remain useful when the prior distribution P(x) (where x is an instance) is changed. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory consists of a group of truth functions or membership functions. In comparison with likelihood functions, Bayesian posteriors, and logistic functions used by popular methods, membership functions can be more conveniently used as learning functions without the above problem. In Logical Bayesian Inference (LBI), every label’s learning is independent. For multilabel learning, we can directly obtain a group of optimized membership functions from a big enough sample with labels, without preparing different samples for different labels. A group of Channel Matching (CM) algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions on a two-dimensional feature space, 2-3 iterations can make mutual information between three classes and three labels surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved and becomes the CM-EM algorithm, which can outperform the EM algorithm when mixture ratios are imbalanced, or local convergence exists. The CM iteration algorithm needs to combine neural networks for MMI classifications on high-dimensional feature spaces. LBI needs further studies for the unification of statistics and logic.
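For context, here is a baseline sketch of the standard EM algorithm for a one-dimensional two-component Gaussian mixture, the algorithm that the abstract's CM-EM variant modifies (the CM-EM changes themselves are not reproduced here). The data, initialization, and iteration count are illustrative assumptions.

```python
import math
import random

random.seed(1)
# synthetic data: 200 points near 0, 100 points near 4 (imbalanced ratios)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(4, 1) for _ in range(100)]

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

pi, mu, sigma = [0.5, 0.5], [-1.0, 5.0], [1.0, 1.0]  # deliberately poor start
for _ in range(50):
    # E-step: responsibility of each component for each point
    resp = []
    for x in data:
        w = [pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M-step: re-estimate mixture ratios, means, and standard deviations
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)

print([round(m, 1) for m in mu], [round(p, 2) for p in pi])
```

The imbalanced 2:1 mixture ratio here is exactly the regime in which, per the abstract, the CM-EM modification is claimed to help.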
In the world of philosophy of science, the dominant theory of confirmation is Bayesian. In the wider philosophical world, the idea of inference to the best explanation exerts a considerable influence. Here we place the two worlds in collision, using Bayesian confirmation theory to argue that explanatoriness is evidentially irrelevant.
Defenders of Inference to the Best Explanation claim that explanatory factors should play an important role in empirical inference. They disagree, however, about how exactly to formulate this role. In particular, they disagree about whether to formulate IBE as an inference rule for full beliefs or for degrees of belief, as well as how a rule for degrees of belief should relate to Bayesianism. In this essay I advance a new argument against non-Bayesian versions of IBE. My argument focuses on cases in which we are concerned with multiple levels of explanation of some phenomenon. I show that in many such cases, following IBE as an inference rule for full beliefs leads to deductively inconsistent beliefs, and following IBE as a non-Bayesian updating rule for degrees of belief leads to probabilistically incoherent degrees of belief.
Tenenbaum and Griffiths (2001) have proposed that their Bayesian model of generalisation unifies Shepard’s (1987) and Tversky’s (1977) similarity-based explanations of two distinct patterns of generalisation behaviours by reconciling them under a single coherent task analysis. I argue that this proposal needs refinement: instead of unifying the heterogeneous notion of psychological similarity, the Bayesian approach unifies generalisation by rendering the distinct patterns of behaviours informationally relevant. I suggest that generalisation as a Bayesian inference should be seen as a complement to, instead of a replacement of, similarity-based explanations. Furthermore, I show that the unificatory powers of the Bayesian model of generalisation can contribute to the selection of one of these models of psychological similarity.
Learning is fundamentally about action, enabling the successful navigation of a changing and uncertain environment. The experience of pain is central to this process, indicating the need for a change in action so as to mitigate potential threat to bodily integrity. This review considers the application of Bayesian models of learning in pain that inherently accommodate uncertainty and action, which, we shall propose, are essential in understanding learning in both acute and persistent cases of pain.
In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it’s defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.
The exponential growth of social data both in volume and complexity has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics. The scientific community has called for careful usage of the approach and its inference. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers toward a more widespread application. The bayesvl R package is an open program, designed for implementing Bayesian modeling and analysis using the Stan language’s no-U-turn sampler (NUTS). The package combines the ability to construct Bayesian network models using directed acyclic graphs (DAGs), the Markov chain Monte Carlo (MCMC) simulation technique, and the graphic capability of the ggplot2 package. As a result, it can improve the user experience and intuitive understanding when constructing and analyzing Bayesian network models. A case example is offered to illustrate the usefulness of the package for Big Data analytics and cognitive computing.
In this paper, I consider the relationship between Inference to the Best Explanation and Bayesianism, both of which are well-known accounts of the nature of scientific inference. In Sect. 2, I give a brief overview of Bayesianism and IBE. In Sect. 3, I argue that IBE in its most prominently defended forms is difficult to reconcile with Bayesianism because not all of the items that feature on popular lists of “explanatory virtues”—by means of which IBE ranks competing explanations—have confirmational import. Rather, some of the items that feature on these lists are “informational virtues”—properties that do not make a hypothesis H more probable than some competitor H′ given evidence E, but that, roughly speaking, give that hypothesis greater informative content. In Sect. 4, I consider as a response to my argument a recent version of compatibilism which argues that IBE can provide further normative constraints on the objectively correct probability function. I argue that this response does not succeed, owing to the difficulty of defending with any generality such further normative constraints. Lastly, in Sect. 5, I propose that IBE should be regarded, not as a theory of scientific inference, but rather as a theory of when we ought to “accept” H, where the acceptability of H is fixed by the goals of science and concerns whether H is worthy of commitment as a research program. In this way, IBE and Bayesianism, as I will show, can be made compatible, and thus the Bayesian and the proponent of IBE can be friends.
Politics is rife with motivated cognition. People do not dispassionately engage with the evidence when they form political beliefs; they interpret it selectively, generating justifications for their desired conclusions and reasons why contrary evidence should be ignored. Moreover, research shows that epistemic ability (e.g. intelligence and familiarity with evidence) is correlated with motivated cognition. Bjørn Hallsson has pointed out that this raises a puzzle for the epistemology of disagreement. On the one hand, we typically think that epistemic ability in an interlocutor gives us reason to downgrade our belief upon learning that we disagree. On the other hand, if our interlocutor is under the sway of motivated cognition, then we have reason to discount his opinion. In this paper, I argue that Hallsson's puzzle is solved by adopting a Bayesian approach to disagreement. If an interlocutor is under the sway of motivated cognition, his disagreement should not affect our beliefs, no matter his ability. Because we implicitly and to high accuracy know his beliefs before he reveals them to us, disagreement provides us with no new information on which to conditionalize. I advance a model which accommodates the motivated cognition dynamic and other key epistemic features of disagreement.
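The no-new-information point can be illustrated numerically. A toy sketch with hypothetical numbers, assuming (as the motivated-cognition story implies) that the interlocutor's report is almost equally likely whether or not the disputed proposition H is true:

```python
def conditionalize(prior_h, p_report_given_h, p_report_given_not_h):
    """Bayes' rule: P(H | report)."""
    num = prior_h * p_report_given_h
    den = num + (1 - prior_h) * p_report_given_not_h
    return num / den

prior = 0.8
# Motivated cognition: the report is (nearly) equally probable either way,
# because the speaker's conclusion is driven by desire, not by the evidence.
posterior = conditionalize(prior, 0.99, 0.99)
print(round(posterior, 3))  # stays at 0.8: the report carries no information
```

When the two likelihoods diverge (an undistorted interlocutor), the same function moves the credence, which is exactly the asymmetry the paper's model is meant to capture.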
Given the reproducibility crisis (or replication crisis), more psychologists and social-cultural scientists are getting involved with Bayesian inference. Therefore, the current article provides a brief overview of programs (or software) and steps to conduct Bayesian data analysis in social sciences.
Even if our justified beliefs are closed under known entailment, there may still be instances of transmission failure. Transmission failure occurs when P entails Q, but a subject cannot acquire a justified belief that Q by deducing it from P. Paradigm cases of transmission failure involve inferences from mundane beliefs (e.g., that the wall in front of you is red) to the denials of skeptical hypotheses relative to those beliefs (e.g., that the wall in front of you is not white and lit by red lights). According to the Bayesian explanation, transmission failure occurs when (i) the subject’s belief that P is based on E, and (ii) P(Q|E) ≤ P(Q). No modifications of the Bayesian explanation are capable of accommodating such cases, so the explanation must be rejected as inadequate. Alternative explanations employing simple subjunctive conditionals are fully capable of capturing all of the paradigm cases, as well as those missed by the Bayesian explanation.
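The Bayesian condition described here (evidence E supporting P while failing to raise the probability of Q) can be checked on a toy probability model of the red-wall case. All of the worlds and numbers below are invented for illustration.

```python
# Three possible worlds: P = "the wall is red", Q = "the wall is not
# white-and-lit-by-red-lights", E = "the wall looks red".
worlds = {
    "red_normal":   0.90,   # P true,  Q true
    "white_redlit": 0.01,   # P false, Q false (the skeptical scenario)
    "white_normal": 0.09,   # P false, Q true
}
looks_red = {"red_normal": 0.99, "white_redlit": 0.99, "white_normal": 0.01}

def prob(event):                     # event: a set of world names
    return sum(worlds[w] for w in event)

def prob_given_E(event):             # condition on "the wall looks red"
    pE = sum(worlds[w] * looks_red[w] for w in worlds)
    return sum(worlds[w] * looks_red[w] for w in event) / pE

Q = {"red_normal", "white_normal"}
print(round(prob(Q), 3), round(prob_given_E(Q), 3))
```

In this model the red-looking wall confirms P yet slightly lowers the probability of Q, since the skeptical red-light world also predicts the evidence; that is the structure the Bayesian explanation exploits.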
To estimate causal relationships, time series econometricians must be aware of spurious correlation, a problem first mentioned by Yule (1926). To deal with this problem, one can work either with differenced series or multivariate models: VAR (VEC or VECM) models. These models usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC is quite advanced, Bauwens et al. (1999) highlighted that “the topic of selecting the cointegrating rank has not yet given very useful and convincing results”. The present article applies the Full Bayesian Significance Test (FBST), especially designed to deal with sharp hypotheses, to cointegration rank selection tests in VECM time series models. It shows the FBST implementation using both simulated and available (in the literature) data sets. As illustration, standard non-informative priors are used.
The full Bayesian significance test (FBST) for precise hypotheses is presented, with some illustrative applications. In the FBST we compute the evidence against the precise hypothesis. This evidence is the probability of the highest relative surprise set, “tangential” to the sub-manifold (of the parameter space) that defines the null hypothesis. We discuss some of the theoretical properties of the FBST, and provide an invariant formulation for coordinate transformations, provided a reference density has been established.
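A rough numerical sketch of the e-value computation for one simple case: a sharp hypothesis theta = 0.5 about a binomial proportion under a uniform prior (so the reference density is flat and relative surprise reduces to posterior density). The data (65 successes in 100 trials) are invented, and grid integration stands in for the exact optimization/integration steps of the full test.

```python
from math import lgamma, exp, log

a, b = 66, 36            # Beta(66, 36) posterior: uniform prior + 65/100 successes
theta0 = 0.5             # the precise (sharp) null hypothesis
logc = lgamma(a + b) - lgamma(a) - lgamma(b)

def log_post(theta):     # log posterior density, Beta(a, b)
    return logc + (a - 1) * log(theta) + (b - 1) * log(1 - theta)

# Evidence against H: posterior mass of the highest (relative) surprise set
# tangential to H, i.e. points more probable a posteriori than theta0.
n_grid = 20000
grid = [(i + 0.5) / n_grid for i in range(n_grid)]
d0 = log_post(theta0)
ev_against = sum(exp(log_post(t)) for t in grid if log_post(t) > d0) / n_grid
print(round(ev_against, 3))
```

A value near 1 signals strong evidence against the sharp null, with no positive prior mass ever placed on the measure-zero hypothesis set, which is the point of the construction.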
Delusional beliefs have sometimes been considered as rational inferences from abnormal experiences. We explore this idea in more detail, making the following points. Firstly, the abnormalities of cognition which initially prompt the entertaining of a delusional belief are not always conscious and since we prefer to restrict the term “experience” to consciousness we refer to “abnormal data” rather than “abnormal experience”. Secondly, we argue that in relation to many delusions (we consider eight) one can clearly identify what the abnormal cognitive data are which prompted the delusion and what the neuropsychological impairment is which is responsible for the occurrence of these data; but one can equally clearly point to cases where this impairment is present but the delusion is not. So the impairment is not sufficient for delusion to occur. A second cognitive impairment, one which impairs the ability to evaluate beliefs, must also be present. Thirdly (and this is the main thrust of our chapter) we consider in detail what the nature of the inference is that leads from the abnormal data to the belief. This is not deductive inference and it is not inference by enumerative induction; it is abductive inference. We offer a Bayesian account of abductive inference and apply it to the explanation of delusional belief.
In this article, I will provide a critical overview of the form of non-deductive reasoning commonly known as “Inference to the Best Explanation” (IBE). Roughly speaking, according to IBE, we ought to infer the hypothesis that provides the best explanation of our evidence. In section 2, I survey some contemporary formulations of IBE and highlight some of its putative applications. In section 3, I distinguish IBE from C.S. Peirce’s notion of abduction. After underlining some of the essential elements of IBE, the rest of the entry is organized around an examination of various problems that IBE confronts, along with some extant attempts to address these problems. In section 4, I consider the question of when a fact requires an explanation, since presumably IBE applies only in cases where some explanation is called for. In section 5, I consider the difficult question of how we ought to understand IBE in light of the fact that among philosophers, there is significant disagreement about what constitutes an explanation. In section 6, I consider different strategies for justifying the truth-conduciveness of the explanatory virtues, e.g., simplicity, unification, scope, etc., criteria which play an indispensable role in any given application of IBE. In section 7, I survey some of the most recent literature on IBE, much of which consists of investigations of the status of IBE from the standpoint of the Bayesian philosophy of science.
We argue that a modified version of Mill’s method of agreement can strongly confirm causal generalizations. This mode of causal inference implicates the explanatory virtues of mechanism, analogy, consilience, and simplicity, and we identify it as a species of Inference to the Best Explanation (IBE). Since rational causal inference provides normative guidance, IBE is not a heuristic for Bayesian rationality. We give it an objective Bayesian formalization, one that has no need of principles of indifference and yields responses to the Voltaire objection, van Fraassen’s Bad Lot objection, and John Norton’s recent objection to IBE.
An influential suggestion about the relationship between Bayesianism and inference to the best explanation holds that IBE functions as a heuristic to approximate Bayesian reasoning. While this view promises to unify Bayesianism and IBE in a very attractive manner, important elements of the view have not yet been spelled out in detail. I present and argue for a heuristic conception of IBE on which IBE serves primarily to locate the most probable available explanatory hypothesis to serve as a working hypothesis in an agent’s further investigations. Along the way, I criticize what I consider to be an overly ambitious conception of the heuristic role of IBE, according to which IBE serves as a guide to absolute probability values. My own conception, by contrast, requires only that IBE can function as a guide to the comparative probability values of available hypotheses. This is shown to be a much more realistic role for IBE given the nature and limitations of the explanatory considerations with which IBE operates.
The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed under the frequentist tradition present various conceptual problems that jeopardize the power of these tests, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, experience problems due to the fact that the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article, for the unit root hypothesis, is based solely on the posterior density function, without the need of imposing positive probabilities to sets of zero Lebesgue measure. Furthermore, it is conducted under strict observance of the likelihood principle. It was designed mainly for testing sharp null hypotheses and it is called FBST for Full Bayesian Significance Test.
The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference, with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. We suggest that rather than mere statistical reform, what is needed is a better understanding of the different modes of statistical inference and a better understanding of how statistical inference relates to scientific inference.
Christian apologists, like William Lane Craig and Stephen T. Davis, argue that belief in Jesus’ resurrection is reasonable because it provides the best explanation of the available evidence. In this article, I refute that thesis. To do so, I lay out how the logic of inference to the best explanation (IBE) operates, including what good explanations must be and do by definition, and then apply IBE to the issue at hand. Multiple explanations—including (what I will call) The Resurrection Hypothesis, The Lie Hypothesis, The Coma Hypothesis, The Imposter Hypothesis, and The Legend Hypothesis—will be considered. While I will not attempt to rank them all from worst to best, what I will reveal is how and why The Legend Hypothesis is unquestionably the best explanation, and The Resurrection Hypothesis is undeniably the worst. Consequently, not only is Craig and Davis’ conclusion mistaken, but belief in the literal resurrection of Jesus is irrational. In presenting this argument, I do not take myself to be breaking new ground; Robert Cavin and Carlos Colombetti have already presented a Bayesian refutation of Craig and Davis’ arguments. But I do take myself to be presenting an argument that the average person (and philosopher) can follow. It is my goal for the average person (and philosopher) to be able to clearly understand how and why the hypothesis “God supernaturally raised Jesus from the dead” fails utterly as an explanation of the evidence that Christian apologists cite for Jesus’ resurrection.
In a series of papers over the past twenty years, and in a new book, Igor Douven has argued that Bayesians are too quick to reject versions of inference to the best explanation that cannot be accommodated within their framework. In this paper, I survey their worries and attempt to answer them using a series of pragmatic and purely epistemic arguments that I take to show that Bayes’ Rule really is the only rational way to respond to your evidence.
In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can at any given time be adequately described by a single subjective probability function. We show that monistic Bayesians cannot provide an adequate solution to the new Tweety puzzle, because this requires one to refer to a frequency-based probability function. We conclude that monistic Bayesianism cannot be a fully adequate theory of epistemic states. In the second part we describe an empirical study, which provides support for the thesis that monistic Bayesianism is also inadequate as a descriptive theory of cognitive states. In the final part of the paper we criticize Bayesian approaches in cognitive science, insofar as their monistic tendency cannot adequately address the new Tweety puzzle. We, further, argue against monistic Bayesianism in cognitive science by means of a case study. In this case study we show that Oaksford and Chater’s (2007, 2008) model of conditional inference—contrary to the authors’ theoretical position—has to refer also to a frequency-based probability function.
The major competing statistical paradigms share a common remarkable but unremarked thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively “subjective” and “objective” interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an “objective” probability.
Statistical tests that detect and measure deviation from the Hardy-Weinberg equilibrium (HWE) have been devised but are limited when testing for deviation at multiallelic DNA loci is attempted. Here we present the full Bayesian significance test (FBST) for the HWE. This test depends neither on asymptotic results nor on the number of possible alleles for the particular locus being evaluated. The FBST is based on the computation of an evidence index in favor of the HWE hypothesis. A great deal of forensic inference based on DNA evidence assumes that the HWE is valid for the genetic loci being used. We applied the FBST to genotypes obtained at several multiallelic short tandem repeat loci during routine parentage testing; the locus Penta E exemplifies those clearly in HWE while others such as D10S1214 and D19S253 do not appear to show this.
We are often justified in acting on the basis of evidential confirmation. I argue that such evidence supports belief in non-quantificational generic generalizations, rather than universally quantified generalizations. I show how this account supports, rather than undermines, a Bayesian account of confirmation. Induction from confirming instances of a generalization to belief in the corresponding generic is part of a reasoning instinct that is typically (but not always) correct, and allows us to approximate the predictions that formal epistemology would make.
Modern scientific cosmology pushes the boundaries of knowledge and the knowable. This is prompting questions on the nature of scientific knowledge. A central issue is what defines a 'good' model. When addressing global properties of the Universe or its initial state this becomes a particularly pressing issue. How to assess the probability of the Universe as a whole is empirically ambiguous, since we can examine only part of a single realisation of the system under investigation: at some point, data will run out. We review the basics of applying Bayesian statistical explanation to the Universe as a whole. We argue that a conventional Bayesian approach to model inference generally fails in such circumstances, and cannot resolve, e.g., the so-called 'measure problem' in inflationary cosmology. Implicit and non-empirical valuations inevitably enter model assessment in these cases. This undermines the possibility to perform Bayesian model comparison. One must therefore either stay silent, or pursue a more general form of systematic and rational model assessment. We outline a generalised axiological Bayesian model inference framework, based on mathematical lattices. This extends inference based on empirical data (evidence) to additionally consider the properties of model structure (elegance) and model possibility space (beneficence). We propose this as a natural and theoretically well-motivated framework for introducing an explicit, rational approach to theoretical model prejudice and inference beyond data.
The new paradigm in the psychology of reasoning draws on Bayesian formal frameworks, and some advocates of the new paradigm think of these formal frameworks as providing a computational-level theory of rational human inference. I argue that Bayesian theories should not be seen as providing a computational-level theory of rational human inference, where by “Bayesian theories” I mean theories that claim that all rational credal states are probabilistically coherent and that rational adjustments of degrees of belief in the light of new evidence must be in accordance with some sort of conditionalization. The problems with the view I am criticizing can best be seen when we look at chains of inferences, rather than single-step inferences. Chains of inferences have been neglected almost entirely within the new paradigm.
This paper shows how an efficient and parallel algorithm for inference in Bayesian Networks (BNs) can be built and implemented combining sparse matrix factorization methods with variable elimination algorithms for BNs. This entails a complete separation between a first symbolic phase and a second numerical phase.
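The elimination step itself, for the smallest possible network, is just the matrix-vector product that sparse-factorization methods operate on. A minimal sketch (not the paper's parallel algorithm) with invented conditional probability tables:

```python
# Two-node Bayesian network A -> B; query P(B) by eliminating A.
p_a = [0.3, 0.7]                       # P(A=0), P(A=1)
p_b_given_a = [[0.9, 0.1],             # row a: P(B=0|A=a), P(B=1|A=a)
               [0.2, 0.8]]

# Eliminating A sums it out: P(B=b) = sum_a P(A=a) * P(B=b|A=a),
# i.e. a vector-matrix product over the CPT.
p_b = [sum(p_a[a] * p_b_given_a[a][b] for a in range(2)) for b in range(2)]
print([round(x, 2) for x in p_b])      # -> [0.41, 0.59]
```

In larger networks the same sum-product appears once per eliminated variable, and the symbolic/numerical split the abstract mentions corresponds to planning the elimination order separately from executing these products.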
The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.
Simple random sampling resolutions of the raven paradox relevantly diverge from scientific practice. We develop a stratified random sampling model, yielding a better fit and apparently rehabilitating simple random sampling as a legitimate idealization. However, neither accommodates a second concern, the objection from potential bias. We develop a third model that crucially invokes causal considerations, yielding a novel resolution that handles both concerns. This approach resembles Inference to the Best Explanation (IBE) and relates the generalization’s confirmation to confirmation of an associated law. We give it an objective Bayesian formalization and discuss the compatibility of Bayesianism and IBE.
Knill, Kersten, & Mamassian (Chapter 6) provide an interesting discussion of how the Bayesian formulation can be used to help investigate human vision. In their view, computational theories can be based on an ideal observer that uses Bayesian inference to make optimal use of available information. Four factors are important here: the image information used, the output structures estimated, the priors assumed (i.e., knowledge about the structure of the world), and the likelihood function used (i.e., knowledge about the projection of the world onto the sensors). Knill & Kersten argue that such a framework not only helps analyze a perceptual task, but can also help investigators to define it. Two examples are provided (the interpretation of surface contour and the perception of moving shadows) to show how this approach can be used in practice. As the authors admit, most (if not all) perceptual processes are ill-suited to a "strong" Bayesian approach based on a single consistent model of the world. Instead, they argue for a "weak" variant that assumes Bayesian inference to be carried out in modules of more limited scope. But how weak is "weak"? Are such approaches suitable for only a few relatively low-level tasks, or can they be applied more generally? Could a weak Bayesian approach, for example, explain how we would recognize the return of Elvis Presley? Turning to the formal modelling of human perception: to help get a fix on things, it is useful to examine the fate of an earlier attempt to formalize human perception, the application of information theory. It was once hoped that this theory—a close cousin of the Bayesian formulation—would provide a way to uncover information-handling laws that were largely independent of physical implementation. In this approach, the human nervous system was assumed to have...
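The prior/likelihood decomposition described here can be sketched as a one-step ideal-observer computation: combine a prior over world structures with a likelihood of the image data and normalize. The two candidate scene structures and all of the numbers are illustrative assumptions, not from the chapter.

```python
# Prior: knowledge about the structure of the world.
prior = {"convex_surface": 0.6, "concave_surface": 0.4}
# Likelihood: P(observed image | structure), knowledge about the
# projection of the world onto the sensors.
likelihood = {"convex_surface": 0.7, "concave_surface": 0.2}

# Bayes' rule: posterior proportional to prior times likelihood.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: unnorm[h] / z for h in unnorm}
print({h: round(p, 2) for h, p in posterior.items()})
```

The "weak" variant discussed in the abstract amounts to running this computation inside many small modules, each with its own restricted hypothesis space, rather than over a single consistent model of the world.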
Disagreement is a ubiquitous feature of human life, and philosophers have dutifully attended to it. One important question related to disagreement is epistemological: How does a rational person change her beliefs (if at all) in light of disagreement from others? The typical methodology for answering this question is to endorse a steadfast or conciliatory disagreement norm (and not both) on a priori grounds and selected intuitive cases. In this paper, I argue that this methodology is misguided. Instead, a thoroughgoingly Bayesian strategy is what's needed. Such a strategy provides conciliatory norms in appropriate cases and steadfast norms in appropriate cases. I argue, further, that the few extant efforts to address disagreement in the Bayesian spirit are laudable but uncompelling. A modelling, rather than a functional, approach gets us the right norms and is highly general, allowing the epistemologist to deal with (1) multiple epistemic interlocutors, (2) epistemic superiors and inferiors (i.e. not just epistemic peers), and (3) dependence between interlocutors.
The goal of this short chapter, aimed at philosophers, is to provide an overview and brief explanation of some central concepts involved in predictive processing (PP). Even those who consider themselves experts on the topic may find it helpful to see how the central terms are used in this collection. To keep things simple, we will first informally define a set of features important to predictive processing, supplemented by some short explanations and an alphabetic glossary. -/- The features described here are not shared in all PP accounts. Some may not be necessary for an individual model; others may be contested. Indeed, not even all authors of this collection will accept all of them. To make this transparent, we have encouraged contributors to indicate briefly which of the features are necessary to support the arguments they provide, and which (if any) are incompatible with their account. For the sake of clarity, we provide the complete list here, very roughly ordered by how central we take them to be for “Vanilla PP” (i.e., a formulation of predictive processing that will probably be accepted by most researchers working on this topic). More detailed explanations will be given below. Note that these features do not specify individually necessary and jointly sufficient conditions for the application of the concept of “predictive processing”. All we currently have is a semantic cluster, with perhaps some overlapping sets of jointly sufficient criteria. The framework is still developing, and it is difficult, maybe impossible, to provide theory-neutral explanations of all PP ideas without already introducing strong background assumptions.
The paper investigates measures of explanatory power and how to define the inference schema “Inference to the Best Explanation”. It argues that these measures can also be used to quantify the systematic power of a hypothesis, and the inference schema “Inference to the Best Systematization” (IBS) is defined. It demonstrates that systematic power is a fruitful criterion for theory choice and that IBS is truth-conducive. It also shows that even radical Bayesians must admit that systematic power is an integral component of Bayesian reasoning. Finally, the paper puts the achieved results in perspective with van Fraassen’s famous criticism of IBE.
The Generalized Poisson Distribution (GPD) adds an extra parameter to the usual Poisson distribution. This parameter induces a loss of homogeneity in the stochastic processes modeled by the distribution. Thus, the generalized distribution becomes a useful model for counting processes where the occurrence of events is not homogeneous. This model creates the need for an inferential procedure to test the value of this extra parameter. The FBST (Full Bayesian Significance Test) is a Bayesian hypothesis test procedure, capable of providing an evidence measure on sharp hypotheses (where the dimension of the parametric space under the null hypothesis is smaller than that of the full parametric space). The goal of this work is to study the empirical properties of the FBST for testing the nullity of the extra parameter of the generalized Poisson distribution. Numerical experiments show a better performance of the FBST with respect to the classical likelihood ratio test, and suggest that the FBST is an efficient and robust tool for this application.
When visual attention is directed away from a stimulus, neural processing is weak, and the strength and precision of sensory data decrease. From a computational perspective, in such situations observers should give more weight to prior expectations in order to behave optimally during a discrimination task. Here we test a signal detection theoretic model that counter-intuitively predicts subjects will do just the opposite in a discrimination task with two stimuli, one attended and one unattended: when subjects are probed to discriminate the unattended stimulus, they rely less on prior information about the probed stimulus’ identity. The model is in part inspired by recent findings that attention reduces trial-by-trial variability of the neuronal population response and that subjects use a common criterion for attended and unattended trials. In five different visual discrimination experiments, when attention was directed away from the target stimulus, subjects did not adjust their response bias in reaction to a change in stimulus presentation frequency, despite being fully informed and despite the presence of performance feedback and monetary and social incentives. This indicates that subjects did not rely more on the priors under conditions of inattention, as would be predicted by a Bayes-optimal observer model. These results inform and constrain future models of Bayesian inference in the human brain.
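The ideal-observer prediction at stake here can be made concrete with standard equal-variance Gaussian signal detection theory. The sketch below is our own illustration rather than the authors' model: it computes the Bayes-optimal decision criterion as a function of stimulus presentation frequency, the shift that the experiments found subjects failing to make.

```python
import numpy as np

def optimal_criterion(d_prime, p_signal):
    """Bayes-optimal criterion for equal-variance Gaussian SDT.
    Noise ~ N(0, 1), signal ~ N(d_prime, 1); respond 'signal' when x > c.
    Setting the posterior odds to 1 gives c = d'/2 + ln((1-p)/p) / d'."""
    prior_odds_against = (1 - p_signal) / p_signal
    return d_prime / 2 + np.log(prior_odds_against) / d_prime

# With equal base rates the criterion sits midway between the distributions;
# when the signal becomes more frequent, an ideal observer lowers it,
# i.e. becomes more willing to respond 'signal'.
c_equal = optimal_criterion(1.5, 0.5)     # midpoint: d'/2 = 0.75
c_frequent = optimal_criterion(1.5, 0.7)  # shifted toward 'signal' responses
```

On this standard account, a change in presentation frequency should move the criterion; keeping a fixed criterion across frequency conditions, as the subjects did under inattention, is precisely the non-Bayes-optimal behavior the abstract reports.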
Cosmological models that invoke a multiverse - a collection of unobservable regions of space where conditions are very different from the region around us - are controversial, on the grounds that unobservable phenomena shouldn't play a crucial role in legitimate scientific theories. I argue that the way we evaluate multiverse models is precisely the same as the way we evaluate any other models, on the basis of abduction, Bayesian inference, and empirical success. There is no scientifically respectable way to do cosmology without taking into account different possibilities for what the universe might be like outside our horizon. Multiverse theories are utterly conventionally scientific, even if evaluating them can be difficult in practice.
A recent surge of work on prediction-driven processing models--based on Bayesian inference and representation-heavy models--suggests that the material basis of conscious experience is inferentially secluded and neurocentrically brain bound. This paper develops an alternative account based on the free energy principle. It is argued that the free energy principle provides the right basic tools for understanding the anticipatory dynamics of the brain within a larger brain-body-environment dynamic, viewing the material basis of some conscious experiences as extensive--relational and thoroughly world-involving.
Disagreement about how best to think of the relation between theories and the realities they represent has a longstanding and venerable history. We take up this debate in relation to the free energy principle (FEP) - a contemporary framework in computational neuroscience, theoretical biology and the philosophy of cognitive science. The FEP is very ambitious, extending from the brain sciences to the biology of self-organisation. In this context, some find apparent discrepancies between the map (the FEP) and the territory (target systems) a compelling reason to defend instrumentalism about the FEP. We take this to be misguided. We identify an important fallacy made by those defending instrumentalism about the FEP. We call it the literalist fallacy: this is the fallacy of inferring the truth of instrumentalism based on the claim that the properties of FEP models do not literally map onto real-world, target systems. We conclude that scientific realism about the FEP is a live and tenable option.
Bayesian confirmation theory is our best formal framework for describing inductive reasoning. The problem of old evidence is a particularly difficult one for confirmation theory, because it suggests that this framework fails to account for central and important cases of inductive reasoning and scientific inference. I show that we can appeal to the fragmentation of doxastic states to solve this problem for confirmation theory. This fragmentation solution is independently well-motivated because of the success of fragmentation in solving other problems. I also argue that the fragmentation solution is preferable to other solutions to the problem of old evidence. These other solutions are already committed to something like fragmentation, but suffer from difficulties due to their additional commitments. If these arguments are successful, Bayesian confirmation theory is saved from the problem of old evidence, and the argument for fragmentation is bolstered by its ability to solve yet another problem.
Philosophers interested in the theoretical consequences of predictive processing often assume that predictive processing is an inferentialist and representationalist theory of cognition. More specifically, they assume that predictive processing revolves around approximated Bayesian inferences drawn by inverting a generative model. Generative models, in turn, are said to be structural representations: representational vehicles that represent their targets by being structurally similar to them. Here, I challenge this assumption, claiming that, at present, it lacks an adequate justification. I examine the only argument offered to establish that generative models are structural representations, and argue that it does not substantiate the desired conclusion. Having so done, I consider a number of alternative arguments aimed at showing that the relevant structural similarity obtains, and argue that all these arguments are unconvincing for a variety of reasons. I then conclude the paper by briefly highlighting three themes that might be relevant for further investigation on the matter.
In the current chapter, I examined the relationship between the cerebellum, emotion, and morality with evidence from large-scale neuroimaging data analysis. Although the aforementioned relationship has not been well studied in neuroscience, recent studies have shown that the cerebellum is closely associated with emotional and social processes at the neural level. Also, debates in the fields of moral philosophy, psychology, and neuroscience have supported the importance of emotion in moral functioning. Thus, I explored this potentially important but less-studied topic with NeuroSynth, a tool for large-scale brain image analysis, while addressing issues associated with reverse inference. The result of the analysis demonstrated that brain regions in the cerebellum, the right Crus I and Crus II in particular, were specifically associated with morality in general. I discussed the potential implications of the finding based on clinical and functional neuroimaging studies of the cerebellum, emotional functioning, and neural networks for diverse psychological processes.
Most representationalists argue that perceptual experience has to be representational because phenomenal looks are, by themselves, representational. Charles Travis argues that looks cannot represent. I argue that perceptual experience has to be representational due to the way the visual system works.
In this article I criticize the recommendations of some prominent statisticians about how to estimate and compare probabilities of repeated sudden infant death and repeated murder. The issue has drawn considerable public attention in connection with several recent court cases in the UK. I try to show that when the three components of the Bayesian inference are carefully analyzed in this context, the advice of the statisticians turns out to be problematic in each of the steps.
The main goal of this article is to use the epistemological framework of a specific version of Cognitive Constructivism to address Piaget’s central problem of knowledge construction, namely, the re-equilibration of cognitive structures. The distinctive objective character of this constructivist framework is supported by formal inference methods of Bayesian statistics, and is based on Heinz von Foerster’s fundamental metaphor of objects as tokens for eigen-solutions. This epistemological perspective is illustrated using some episodes in the history of chemistry concerning the definition or identification of chemical elements. Some of von Foerster’s epistemological imperatives provide general guidelines of development and argumentation.
Children acquire complex concepts like DOG earlier than simple concepts like BROWN, even though our best neuroscientific theories suggest that learning the former is harder than learning the latter and, thus, should take more time (Werning 2010). This is the Complex-First Paradox. We present a novel solution to the Complex-First Paradox. Our solution builds on a generalization of Xu and Tenenbaum’s (2007) Bayesian model of word learning. By focusing on a rational theory of concept learning, we show that it is easier to infer the meaning of complex concepts than that of simple concepts.
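The size principle behind Xu and Tenenbaum's (2007) Bayesian word learner can be sketched in a few lines. The hypotheses and extensions below are invented for illustration, not taken from the paper; the point is only that a narrow "complex" concept like DOG gains posterior mass from consistent examples much faster than a broad "simple" one.

```python
# Toy hypothesis space: concepts as sets of objects (extensions are made up).
hypotheses = {
    "dog":    {"poodle", "dalmatian", "labrador"},            # narrow extension
    "animal": {"poodle", "dalmatian", "labrador", "cat",
               "sparrow", "salmon"},                          # broader
    "brown":  {"poodle", "labrador", "cat", "sofa",
               "shoe", "bear", "acorn", "bread"},             # very broad
}

def word_posterior(examples, prior=None):
    """Size principle: each example consistent with hypothesis h contributes
    a likelihood factor 1/|h|, so smaller hypotheses are favored quickly."""
    prior = prior or {h: 1.0 for h in hypotheses}
    scores = {}
    for name, extension in hypotheses.items():
        consistent = all(x in extension for x in examples)
        scores[name] = (prior[name] * (1.0 / len(extension)) ** len(examples)
                        if consistent else 0.0)
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

post = word_posterior(["poodle", "labrador", "dalmatian"])
# 'dog' dominates: three examples consistent with a 3-member extension are far
# more probable under 'dog' than under the 6-member 'animal' extension.
```

After three dog exemplars, "dog" receives roughly eight times the likelihood of "animal" per this toy space ((1/3)^3 versus (1/6)^3), and "brown" is ruled out because "dalmatian" falls outside its extension; this is the sense in which narrow complex concepts can be easier to infer.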