The study of cultural evolution has taken on an increasingly interdisciplinary and diverse approach to explicating phenomena of cultural transmission and adoption. Inspired by this computational movement, this study uses Bayesian network analysis, combining both the frequentist and the Hamiltonian Markov chain Monte Carlo (MCMC) approaches, to investigate the highly representative elements in the cultural evolution of a Vietnamese city's architecture in the early 20th century. With a focus on the façade design of 68 old houses in Hanoi's Old Quarter (based on 78 data lines extracted from 248 photos), the study argues that it is plausible to look at the aesthetics, architecture, and designs of the house façade to find traces of cultural evolution in Vietnam, which went through more than six decades of French colonization and centuries of sociocultural influence from China. The in-depth technical analysis, though refuting the presumed model of the probabilistic dependencies among the variables, yields several results, the most notable of which is the strong influence of Buddhism on the decorations of the house façade. In particular, in the top five networks with the best Bayesian Information Criterion (BIC) scores and p < 0.05, the variable for decorations (DC) always has a direct probabilistic dependency on the variable B for Buddhism. The paper then checks the robustness of these models using the Hamiltonian MCMC method and finds that the posterior distributions of the models' coefficients all satisfy the technical requirements. Finally, this study suggests integrating Bayesian statistics into the social sciences in general, and into the study of cultural evolution and architectural transformation in particular.
We construct a probabilistic coherence measure for information sets which determines a partial coherence ordering. This measure is applied in constructing a criterion for expanding our beliefs in the face of new information. A number of idealizations are made which can be relaxed by an appeal to Bayesian networks.
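The abstract does not specify its measure, but one standard probabilistic coherence measure from the literature, Shogenji's ratio measure, gives a feel for how such measures induce a coherence ordering. This is purely an illustrative stand-in, not the measure the paper constructs:

```python
def shogenji_coherence(p_joint, p_individuals):
    # Shogenji's measure: C(A1..An) = P(A1 & ... & An) / (P(A1) * ... * P(An)).
    # Values > 1 indicate the propositions are positively associated;
    # = 1 indicates independence; < 1 indicates incoherence.
    prod = 1.0
    for p in p_individuals:
        prod *= p
    return p_joint / prod

# Two propositions with P(A) = 0.5, P(B) = 0.4, P(A & B) = 0.3:
print(shogenji_coherence(0.3, [0.5, 0.4]))  # ratio 0.3 / 0.2, i.e. coherent
```

Comparing such ratios across information sets yields the kind of partial coherence ordering the abstract mentions: sets whose ratios are incomparable under the measure's assumptions remain unordered.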
The legal scholar Henry Wigmore asserted that cross-examination is ‘the greatest legal engine ever invented for the discovery of truth.’ Was Wigmore right? Instead of addressing this question upfront, this paper offers a conceptual ground-clearing. It is difficult to say whether Wigmore was right or wrong without becoming clear about what we mean by cross-examination, how it operates at trial, and what it is intended to accomplish. Despite the growing importance of legal epistemology, there is virtually no philosophical work that discusses cross-examination and its scope and function at trial. This paper makes a first attempt at clearing the ground by articulating an analysis of cross-examination using probability theory and Bayesian networks. This analysis relies on the distinction between undercutting and rebutting evidence. A preliminary assessment of the truth-seeking function of cross-examination is offered at the end of the paper.
This paper shows how an efficient and parallel algorithm for inference in Bayesian networks (BNs) can be built and implemented by combining sparse matrix factorization methods with variable elimination algorithms for BNs. This entails a complete separation between a first symbolic phase and a second numerical phase.
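The variable elimination half of that combination can be sketched on a toy chain network; the sparse-matrix factorization and the symbolic/numeric phase separation that are the paper's contribution are omitted here:

```python
# Toy chain A -> B -> C with binary variables. Variable elimination
# computes a marginal by summing out one variable at a time, keeping
# intermediate factors small instead of enumerating the full joint.
P_A = {0: 0.6, 1: 0.4}                                   # P(A)
P_B_given_A = {(0, 0): 0.7, (0, 1): 0.3,                 # P(B | A)
               (1, 0): 0.2, (1, 1): 0.8}
P_C_given_B = {(0, 0): 0.9, (0, 1): 0.1,                 # P(C | B)
               (1, 0): 0.3, (1, 1): 0.7}

def marginal_C():
    # Eliminate A first: phi(b) = sum_a P(a) * P(b | a)
    phi_B = {b: sum(P_A[a] * P_B_given_A[(a, b)] for a in (0, 1))
             for b in (0, 1)}
    # Then eliminate B: P(c) = sum_b phi(b) * P(c | b)
    return {c: sum(phi_B[b] * P_C_given_B[(b, c)] for b in (0, 1))
            for c in (0, 1)}

print(marginal_C())
```

Each elimination step is a sum of products over a small factor, which is exactly the operation the paper maps onto sparse matrix computations.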
The exponential growth of social data, both in volume and complexity, has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics. The scientific community has called for careful usage of the approach and its inferences. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers to more widespread application. The bayesvl R package is an open program designed for implementing Bayesian modeling and analysis using the Stan language's no-U-turn (NUTS) sampler. The package combines the ability to construct Bayesian network models using directed acyclic graphs (DAGs), the Markov chain Monte Carlo (MCMC) simulation technique, and the graphic capability of the ggplot2 package. As a result, it can improve the user experience and intuitive understanding when constructing and analyzing Bayesian network models. A case example is offered to illustrate the usefulness of the package for Big Data analytics and cognitive computing.
We model scientific theories as Bayesian networks. Nodes carry credences and function as abstract representations of propositions within the structure. Directed links carry conditional probabilities and represent connections between those propositions. Updating is Bayesian across the network as a whole. The impact of evidence at one point within a scientific theory can have a very different impact on the network than does evidence of the same strength at a different point. A Bayesian model allows us to envisage and analyze the differential impact of evidence and credence change at different points within a single network and across different theoretical structures.
Bayesian models can be related to cognitive processes in a variety of ways that can be usefully understood in terms of Marr's distinction among three levels of explanation: computational, algorithmic, and implementational. In this note, we discuss how an integrated probabilistic account of the different levels of explanation in cognitive science is producing, at least in current research practice, a kind of unanticipated epistemological shift with respect to Marr's original proposal.
Conditional independence tests have received special attention lately in the machine learning and computational intelligence literature as an important indicator of the relationships among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence on discrete datasets. The full Bayesian significance test is a powerful Bayesian test for precise hypotheses, offered as an alternative to frequentist significance tests (characterized by the calculation of the p-value).
Decoupling is a general principle that allows us to separate simple components in a complex system. In statistics, decoupling is often expressed as independence, no association, or zero covariance relations. These relations are sharp statistical hypotheses that can be tested using the FBST (Full Bayesian Significance Test). Decoupling relations can also be introduced by some techniques of Design of Statistical Experiments (DSEs), like randomization. This article discusses the concepts of decoupling, randomization, and sparsely connected statistical models in the epistemological framework of cognitive constructivism.
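The FBST's e-value for a sharp hypothesis can be approximated by simple Monte Carlo: find the posterior density at the hypothesized point, then estimate the posterior mass of the region where the density exceeds it. The sketch below tests theta = 0.5 for a binomial proportion with a Beta posterior; the parameters are illustrative, not taken from the article:

```python
import math
import random

def beta_pdf(x, a, b):
    # Beta(a, b) density via log-gamma for numerical stability.
    lg = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(lg + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def fbst_ev(a, b, theta0=0.5, n=100_000, seed=1):
    # e-value in favor of the sharp hypothesis theta = theta0:
    # 1 minus the posterior mass of the "tangential" set where the
    # posterior density exceeds its value at theta0.
    random.seed(seed)
    f0 = beta_pdf(theta0, a, b)
    tangential = sum(beta_pdf(random.betavariate(a, b), a, b) > f0
                     for _ in range(n))
    return 1 - tangential / n

# 7 successes, 3 failures under a uniform prior -> Beta(8, 4) posterior.
print(fbst_ev(8, 4))
```

A small e-value signals evidence against the sharp (decoupling) hypothesis; unlike a p-value, it is a fully posterior quantity.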
We describe and try to motivate our project to build systems using both a knowledge based and a neural network approach. These two approaches are used at different stages in the solution of a problem, instead of using knowledge bases exclusively on some problems, and neural nets exclusively on others. The knowledge base (KB) is defined first in a declarative, symbolic language that is easy to use. It is then compiled into an efficient neural network (NN) representation, run, and the results from run time and (eventually) from learning are decompiled to a symbolic description of the knowledge contained in the network. After inspecting this recovered knowledge, a designer would be able to modify the KB and go through the whole cycle of compiling, running, and decompiling again. The central question with which this project is concerned is, therefore: how do we go from a KB to an NN, and back again? We are investigating this question by building tools consisting of a repertoire of language/translation/network types, and trying them on problems in a variety of domains.
An important problem in machine learning is that when the number of labels n > 2, it is very difficult to construct and optimize a group of learning functions, and we wish optimized learning functions to remain useful when the prior distribution P(x) (where x is an instance) changes. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory consists of a group of truth functions or membership functions. In comparison with the likelihood functions, Bayesian posteriors, and logistic functions used by popular methods, membership functions can be more conveniently used as learning functions without the above problem. In LBI, every label's learning is independent. For multilabel learning, we can directly obtain a group of optimized membership functions from a big enough sample with labels, without preparing different samples for different labels. A group of CM algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions on a two-dimensional feature space, 2-3 iterations can make the mutual information between the three classes and three labels surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved to become the CM-EM algorithm, which can outperform the EM algorithm when mixture ratios are imbalanced or local convergence exists. The CM iteration algorithm needs to be combined with neural networks for MMI classifications on high-dimensional feature spaces. LBI needs further study toward the unification of statistics and logic.
This research employs the Bayesian network modeling approach, and the Markov chain Monte Carlo technique, to learn about the role of lies and violence in teachings of major religions, using a unique dataset extracted from long-standing Vietnamese folktales. The results indicate that, although lying and violent acts augur negative consequences for those who commit them, their associations with core religious values diverge in the final outcome for the folktale characters. Lying that serves a religious mission of either Confucianism or Taoism (but not Buddhism) brings a positive outcome to a character (βT_and_Lie_O = 2.23; βC_and_Lie_O = 1.47). A violent act committed to serve a Buddhist mission results in a happy ending for the committer (βB_and_Viol_O = 2.55). What is highlighted here is a glaring double standard in the interpretation and practice of the three teachings: the very virtuous outcomes being preached, whether that be compassion and meditation in Buddhism, societal order in Confucianism, or natural harmony in Taoism, appear to accommodate two universal vices: violence in Buddhism and lying in the latter two. These findings contribute to a host of studies aimed at making sense of contradictory human behaviors, adding the role of religious teachings, in addition to cognition, in belief maintenance and in motivated reasoning that discounts counterarguments.
Under the independence and competence assumptions of Condorcet's classical jury model, the probability of a correct majority decision converges to certainty as the jury size increases, a seemingly unrealistic result. Using Bayesian networks, we argue that the model's independence assumption requires that the state of the world (guilty or not guilty) is the latest common cause of all jurors' votes. But often, arguably in all courtroom cases and in many expert panels, the latest such common cause is a shared 'body of evidence' observed by the jurors. In the corresponding Bayesian network, the votes are direct descendants not of the state of the world, but of the body of evidence, which in turn is a direct descendant of the state of the world. We develop a model of jury decisions based on this Bayesian network. Our model permits the possibility of misleading evidence, even for a maximally competent observer, which cannot easily be accommodated in the classical model. We prove that (i) the probability of a correct majority verdict converges to the probability that the body of evidence is not misleading, a value typically below 1; (ii) depending on the required threshold of 'no reasonable doubt', it may be impossible, even in an arbitrarily large jury, to establish guilt of a defendant 'beyond any reasonable doubt'.
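Result (i) can be checked by simulating the two-stage network (state → shared body of evidence → independent votes given the evidence). The parameters below are illustrative, not the paper's: the evidence is non-misleading with probability q = 0.9, and each juror reads the evidence correctly with probability p = 0.8.

```python
import random

def majority_correct_rate(n_jurors, q=0.9, p=0.8, trials=4000, seed=0):
    # state -> shared evidence (correct w.p. q) -> votes (track the
    # evidence w.p. p, independently across jurors given the evidence).
    random.seed(seed)
    correct = 0
    for _ in range(trials):
        state = random.random() < 0.5
        evidence = state if random.random() < q else (not state)
        guilty_votes = sum(
            (evidence if random.random() < p else (not evidence))
            for _ in range(n_jurors))
        verdict = guilty_votes > n_jurors / 2
        correct += (verdict == state)
    return correct / trials

for n in (1, 11, 101, 1001):
    print(n, majority_correct_rate(n))
```

As n grows, the majority tracks the evidence almost surely, so accuracy plateaus near q = 0.9 rather than converging to 1, in line with the paper's first theorem.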
The epistemic probability of A given B is the degree to which B evidentially supports A, or makes A plausible. This paper is a first step in answering the question of what determines the values of epistemic probabilities. I break this question into two parts: the structural question and the substantive question. Just as an object's weight is determined by its mass and gravitational acceleration, some probabilities are determined by other, more basic ones. The structural question asks what probabilities are not determined in this way—these are the basic probabilities which determine values for all other probabilities. The substantive question asks how the values of these basic probabilities are determined. I defend an answer to the structural question on which basic probabilities are the probabilities of atomic propositions conditional on potential direct explanations. I defend this against the view, implicit in orthodox mathematical treatments of probability, that basic probabilities are the unconditional probabilities of complete worlds. I then apply my answer to the structural question to clear up common confusions in expositions of Bayesianism and shed light on the "problem of the priors."
At the heart of Drazen Prelec’s chapter is the distinction between outcome utility and diagnostic utility. There is a particular distinction in the literature on causal networks (Pearl 2000), namely the distinction between observing and intervening, that maps onto Prelec’s distinction between diagnostic and outcome utility. I will explore the connection between both frameworks.
Decision-making typically requires judgments about causal relations: we need to know the causal effects of our actions and the causal relevance of various environmental factors. We investigate how several individuals' causal judgments can be aggregated into collective causal judgments. First, we consider the aggregation of causal judgments via the aggregation of probabilistic judgments, and identify the limitations of this approach. We then explore the possibility of aggregating causal judgments independently of probabilistic ones. Formally, we introduce the problem of causal-network aggregation. Finally, we revisit the aggregation of probabilistic judgments when this is constrained by prior aggregation of qualitative causal judgments.
Robustness arguments hold that hypotheses are more likely to be true when they are confirmed by diverse kinds of evidence. Robustness arguments require the confirming evidence to be independent. We identify two kinds of independence appealed to in robustness arguments: ontic independence (OI)—when the multiple lines of evidence depend on different materials, assumptions, or theories—and probabilistic independence. Many assume that OI is sufficient for a robustness argument to be warranted. However, we argue that, as typically construed, OI is not a sufficient independence condition for warranting robustness arguments. We show that OI evidence can collectively confirm a hypothesis to a lower degree than individual lines of evidence, contrary to the standard assumption undergirding usual robustness arguments. We employ Bayesian networks to represent the ideal empirical scenario for a robustness argument and a variety of ways in which empirical scenarios can fall short of this ideal.
The bayesvl program is designed with a pedagogical orientation, supporting researchers in the social sciences and humanities in using Bayesian network models, MCMC simulation, visualization of technical parameters, and social data analysis. bayesvl was officially released on R's standard package repository, the Comprehensive R Archive Network (CRAN), in May 2019.
There is a need to rapidly assess the impact of new technology initiatives on the Counter Improvised Explosive Device battle in Iraq and Afghanistan. The immediate challenges are the need for rapid decisions and a lack of engineering test data to support the assessment. The rapid assessment methodology exploits available information to build a probabilistic model that provides an explicit executable representation of the initiative's likely impact. The model is used to provide a consistent, explicit explanation to decision makers on the likely impact of the initiative. Sensitivity analysis on the model provides analytic information to support development of informative test plans.
Kim's causal exclusion argument purports to demonstrate that the non-reductive physicalist must treat mental properties (and macro-level properties in general) as causally inert. A number of authors have attempted to resist Kim's conclusion by utilizing the conceptual resources of Woodward's (2005) interventionist conception of causation. The viability of these responses has been challenged by Gebharter (2017a), who argues that the causal exclusion argument is vindicated by the theory of causal Bayesian networks (CBNs). Since the interventionist conception of causation relies crucially on CBNs for its foundations, Gebharter's argument appears to cast significant doubt on interventionism's antireductionist credentials. In the present article, we both (1) demonstrate that Gebharter's CBN-theoretic formulation of the exclusion argument relies on some unmotivated and philosophically significant assumptions (especially regarding the relationship between CBNs and the metaphysics of causal relevance), and (2) use Bayesian networks to develop a general theory of causal inference for multi-level systems that can serve as the foundation for an antireductionist interventionist account of causation.
This research employs the Bayesian network modeling approach, and the Markov chain Monte Carlo technique, to learn about the role of lies and violence in teachings of major religions, using a unique dataset extracted from long-standing Vietnamese folktales. The results indicate that, although lying and violent acts augur negative consequences for those who commit them, their associations with core religious values diverge in the outcome for the folktale characters. Lying that serves a religious mission of either Confucianism or Taoism (but not Buddhism) brings a positive outcome to a character. A violent act committed to serve a Buddhist mission results in a happy ending for the committer.
This paper presents an attempt to integrate theories of causal processes—of the kind developed by Wesley Salmon and Phil Dowe—into a theory of causal models using Bayesian networks. We suggest that arcs in causal models must correspond to possible causal processes. Moreover, we suggest that when processes are rendered physically impossible by what occurs on distinct paths, the original model must be restricted by removing the relevant arc. These two techniques suffice to explain cases of late preëmption and other cases that have proved problematic for causal models.
We evaluate the asymptotic performance of boundedly-rational strategies in multi-armed bandit problems, where performance is measured in terms of the tendency (in the limit) to play optimal actions in either (i) isolation or (ii) networks of other learners. We show that, for many strategies commonly employed in economics, psychology, and machine learning, performance in isolation and performance in networks are essentially unrelated. Our results suggest that the appropriateness of various common boundedly-rational strategies depends crucially upon the social context (if any) in which such strategies are to be employed.
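The kind of limiting performance measure the abstract describes can be illustrated, for a single isolated learner, with fixed-ε greedy on a two-armed Bernoulli bandit. This is a generic sketch of one commonly studied boundedly-rational strategy, not a reproduction of the paper's analysis; with a fixed exploration rate, the long-run frequency of optimal play stays bounded below 1 (near 1 − ε/2 for two arms).

```python
import random

def eps_greedy_optimal_freq(eps=0.2, steps=200_000, seed=0):
    # Two Bernoulli arms with means 0.3 and 0.7; arm 1 is optimal.
    # Explore uniformly w.p. eps, otherwise pick the arm with the
    # highest empirical mean. Returns the fraction of optimal plays.
    random.seed(seed)
    means = (0.3, 0.7)
    counts, sums = [0, 0], [0.0, 0.0]
    optimal = 0
    for _ in range(steps):
        if random.random() < eps or 0 in counts:
            arm = random.randrange(2)
        else:
            arm = max((0, 1), key=lambda a: sums[a] / counts[a])
        reward = float(random.random() < means[arm])
        counts[arm] += 1
        sums[arm] += reward
        optimal += (arm == 1)
    return optimal / steps

print(eps_greedy_optimal_freq())
```

Whether such a strategy's isolated limit carries over to a network of learners sharing observations is exactly the question the paper answers in the negative.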
Currently, under the conditions of permanent financial risks that hamper sustainable economic growth in the financial sector, the development of evaluation and risk management methods, both those regulated by Basel II and III and others, is of special importance. Reputation risk is one of the significant risks affecting the reliability and credibility of commercial banks. The importance of reputation risk management and the quality of its assessment remain relevant, as the probability of a decrease in or loss of business reputation influences financial results and the degree of customers', partners', and stakeholders' confidence. By means of simulation modeling based on Bayesian networks and fuzzy data analysis, the article characterizes the mechanism of reputation risk assessment and possible loss evaluation in banks by plotting normal and lognormal distribution functions. Monte Carlo simulation is used to calculate the probability of losses caused by reputation risks. The degree of standardized histogram similarity is determined on the basis of fuzzy data analysis applying the Hamming distance method. A tree-like hierarchy based on the OWA operator is used to aggregate the data, with Fishburne's coefficients as the convolution scales. The mechanism takes into account the impact of criteria such as return on equity, goodwill value, the risk assets ratio, the share of productive assets in net assets, the efficiency ratio of interest-bearing liabilities, the risk ratio of credit operations, the funding ratio, and the reliability index on the business reputation of the bank. The suggested methods and recommendations might be applied to develop a decision-making mechanism targeted at the implementation of a reputation risk management system in commercial banks, as well as to optimize risk management technologies.
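The Monte Carlo step the abstract mentions amounts to estimating a tail probability of a loss distribution by simulation. A minimal sketch with a lognormal loss model follows; the distribution parameters and threshold are illustrative, not the article's calibration:

```python
import math
import random

def prob_loss_exceeds(mu=0.0, sigma=1.0, threshold=5.0, n=200_000, seed=7):
    # Monte Carlo estimate of P(L > threshold) where L ~ Lognormal(mu, sigma),
    # i.e. L = exp(Z) with Z ~ Normal(mu, sigma).
    random.seed(seed)
    hits = sum(math.exp(random.gauss(mu, sigma)) > threshold
               for _ in range(n))
    return hits / n

print(prob_loss_exceeds())
```

The exact value here is 1 − Φ((ln 5 − μ)/σ) ≈ 0.054, so the simulated estimate can be checked against the closed form; in the article's setting the loss distribution is itself built from the Bayesian network and fuzzy assessments.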
The development of causal modelling since the 1950s has been accompanied by a number of controversies, the most striking of which concerns the Markov condition. Reichenbach's conjunctive forks did satisfy the Markov condition, while Salmon's interactive forks did not. Subsequently some experts in the field have argued that adequate causal models should always satisfy the Markov condition, while others have claimed that non-Markovian causal models are needed in some cases. This paper argues for the second position by considering the multi-causal forks, which are widespread in contemporary medicine (Section 2). A non-Markovian causal model for such forks is introduced and shown to be mathematically tractable (Sections 6, 7, and 8). The paper also gives a general discussion of the controversy about the Markov condition (Section 1), and of the related controversy about probabilistic causality (Sections 3, 4, and 5).
Discovering high-level causal relations from low-level data is an important and challenging problem that comes up frequently in the natural and social sciences. In a series of papers, Chalupka et al. (2015, 2016a, 2016b, 2017) develop a procedure for causal feature learning (CFL) in an effort to automate this task. We argue that CFL does not recommend coarsening in cases where pragmatic considerations rule in favor of it, and recommends coarsening in cases where pragmatic considerations rule against it. We propose a new technique, pragmatic causal feature learning (PCFL), which extends the original CFL algorithm in useful and intuitive ways. We show that PCFL has the same attractive measure-theoretic properties as the original CFL algorithm. We compare the performance of both methods through theoretical analysis and experiments.
Hanoi – the capital city of Vietnam and a land of a thousand years of civilization – invokes among both locals and tourists the image of the 'Sword Lake' with its ancient 'Turtle Tower' and the charming Old Quarter with its preserved old houses lying along small commercial alleys. The houses in the Old Quarter, constructed over a century ago, feature tube houses with inclined tile roofs and a blend of French architecture, creating an infusion of history and memory. Researchers of various fields have attempted to capture and explain the essence of these townhouses in their works, whether in collections by many authors, the quintessential drawings of talented painters, or publications on the history of the Old Quarter. Among these, the recent work by Vuong et al. (2019) adds a unique view of the architectural features of Hanoi's ancient townhouses, as these features are treated as dependent and independent variables. The study, titled 'Cultural evolution in Vietnam's early 20th century: A Bayesian network analysis of Hanoi Franco-Chinese house designs', aims to find traces of cultural evolution in the early 20th century in Vietnam and highlight the most notable elements that affect the Vietnamese people's perception of cultural evolution.
In the current chapter, I examined the relationship between the cerebellum, emotion, and morality with evidence from large-scale neuroimaging data analysis. Although the aforementioned relationship has not been well studied in neuroscience, recent studies have shown that the cerebellum is closely associated with emotional and social processes at the neural level. Also, debates in the fields of moral philosophy, psychology, and neuroscience have supported the importance of emotion in moral functioning. Thus, I explored this potentially important but less-studied topic with NeuroSynth, a tool for large-scale brain image analysis, while addressing issues associated with reverse inference. The results of the analysis demonstrated that brain regions in the cerebellum, the right Crus I and Crus II in particular, were specifically associated with morality in general. I discussed the potential implications of the finding based on clinical and functional neuroimaging studies of the cerebellum, emotional functioning, and neural networks for diverse psychological processes.
Hempel's Converse Consequence Condition (CCC), Entailment Condition (EC), and Special Consequence Condition (SCC) have some prima facie plausibility when taken individually. Hempel, though, shows that they have no plausibility when taken together, for together they entail that E confirms H for any propositions E and H. This is "Hempel's paradox". It turns out that Hempel's argument would fail if one or more of CCC, EC, and SCC were modified in terms of explanation. This opens up the possibility that Hempel's paradox can be solved by modifying one or more of CCC, EC, and SCC in terms of explanation. I explore this possibility by modifying CCC and SCC in terms of explanation and considering whether CCC and SCC so modified are correct. I also relate that possibility to Inference to the Best Explanation.
By using the general investigation framework offered by the cognitive science of religion (CSR), I analyse religion as a necessary condition for the evolutionary path of knowledge. The main argument is the "paradox of the birth of knowledge": in order to get to the meaning of the part, a sense context is needed; but a sense of the whole presupposes the sense (meaning) of the parts. Religion proposes solutions to escape this paradox, based on the imagination of sense (meaning) contexts, respectively closures of these contexts through meta-senses. What is important is the practical effectiveness of solutions proposed by religion, taking into account the costs of faith and the costs of the absence of religious belief. The hypothesis has the following consequences: religion is a necessary condition for the initial evolution of knowledge, and the emergence of religion is determined by the evolution of knowledge. The paradox continues to be resolved in a Bayesian manner, through explorations: a sense of the whole allows cognitive arrangements of the parts, which in turn open the possibility of a rearrangement of the whole. The contribution of religion to the emergence of sense (meaning) could be governed by the rule: any map of the world is more useful than no map; any meaning (of life) is better than no meaning. The human mind fills the perceptual and cognitive gaps, some (religious) filling solutions being true vault keys of the entire cognitive construction called the world. Knowledge is conditioned by the existence of an organized context, as the cosmos created by religion by means of explanatory meta-theories supports knowledge by closing the cognitive context and using meaning networks. The proposed analysis is consistent with a redefinition of rationality from the perspective of evolution: the importance and relevance of knowledge is determined by its practical outcome - survival.
In the context of useful fictions, it does not matter what God actually does, but what we have done by believing in God. Existence has provided a pragmatic verification of the cognitive solutions that underlie the survival strategies promoted by religions.
The Internet of Things (IoT) infrastructure forms a gigantic network of interconnected and interacting devices. This infrastructure involves a new generation of service delivery models, more advanced data management and policy schemes, sophisticated data analytics tools, and effective decision making applications. IoT technology brings automation to a new level wherein nodes can communicate and make autonomous decisions in the absence of human intervention. IoT-enabled solutions generate and process enormous volumes of heterogeneous data exchanged among billions of nodes. This results in Big Data congestion, data management and storage issues, and various inefficiencies. Fog Computing aims at solving the issues with data management, as it includes intelligent computational components and storage closer to the data sources. Often, an IoT-enabled infrastructure is shared among many users with various requirements. Sharing resources, sharing operational costs, and collective decision making (consensus) among many stakeholders are frequently neglected. This research addresses an essential requirement for adaptive, autonomous, and consensus-based Fog computational solutions which are able to support distributed and in-network schemes and policies. These network schemes and policies need to meet the requirements of many users. In this work, innovative consensus-based computational solutions are investigated. These proposed solutions aim to correlate and organise data for effective management and decision making in Fog. Instead of individual decision making, the algorithms aim to aggregate several decisions into a consensus decision representing a collective agreement, benefiting from the individuals' varied knowledge and meeting multiple stakeholders' requirements. In order to validate the proposed solutions, a hybrid research methodology is employed that includes the design of a test-bed and the execution of several experiments.
In order to investigate the effectiveness of the paradigm, three experiments were designed and validated. Real-life sensor data and synthetic statistical data were collected, processed, and analysed. Bayesian machine learning models and analytics were used to consolidate the design and evaluate the performance of the algorithms. In the Fog environment, the first scenario tests the Aggregation by Distribution algorithm. The solution contributes to achieving notable efficiency of data delivery with minimal loss in precision. The second scenario validates the merits of the approach in predicting the activities of high-mobility IoT applications. The third scenario tests applications related to the smart home IoT. All proposed consensus algorithms use statistical analysis to support effective decision making in Fog and enable data aggregation for optimal storage, data transmission, processing, and analytics. The final results of all experiments showed that all the implemented consensus approaches surpass the individual ones on different performance measures. Formal results also showed that the paradigm is a good fit for many IoT environments and can be suitable for different scenarios when applying data analysis to correlate data with the design. Finally, the design demonstrates that Fog Computing can compete with Cloud Computing in terms of accuracy, with an added preference for locality.
Dogmatism is sometimes thought to be incompatible with Bayesian models of rational learning. I show that the best model for updating imprecise credences is compatible with dogmatism.
We challenge Bruineberg et al.'s view that Markov blankets are illicitly reified when used to describe organismic boundaries. We do this both on general methodological grounds, where we appeal to a form of structural realism derived from Bayesian cognitive science to dissolve the problem, and by rebutting specific arguments in the target article.
It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal–mechanical explanation. 1 Introduction 2 What a Great Many Phenomena Bayesian Decision Theory Can Model 3 The Case of Information Integration 4 How Do Bayesian Models Unify? 5 Bayesian Unification: What Constraints Are There on Mechanistic Explanation? 5.1 Unification constrains mechanism discovery 5.2 Unification constrains the identification of relevant mechanistic factors 5.3 Unification constrains confirmation of competitive mechanistic models 6 Conclusion Appendix
If a group is modelled as a single Bayesian agent, what should its beliefs be? I propose an axiomatic model that connects group beliefs to beliefs of group members, who are themselves modelled as Bayesian agents, possibly with different priors and different information. Group beliefs are proven to take a simple multiplicative form if people’s information is independent, and a more complex form if information overlaps arbitrarily. This shows that group beliefs can incorporate all information spread over the individuals without the individuals having to communicate their (possibly complex and hard-to-describe) private information; communicating prior and posterior beliefs suffices. JEL classification: D70, D71.
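The "simple multiplicative form" for the independent-information case can be sketched concretely. Under the simplifying assumption of a common prior (a special case; the paper's axiomatic framework allows different priors), the group posterior is proportional to the prior times the product of each member's posterior-to-prior ratio. The function below is my illustrative reconstruction, not the paper's formal model.

```python
def multiplicative_pool(prior, posteriors):
    """Combine individual posteriors into a group posterior, assuming a
    common prior and independent private information.

    prior:      dict hypothesis -> common prior probability
    posteriors: list of dicts, each one individual's posterior
    Group belief in h is proportional to prior(h) * prod_i post_i(h)/prior(h).
    """
    unnorm = {}
    for h, p0 in prior.items():
        ratio = 1.0
        for post in posteriors:
            ratio *= post[h] / p0  # each member's independent evidence
        unnorm[h] = p0 * ratio
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Two agents, each with independent evidence favouring h1.
prior = {"h1": 0.5, "h2": 0.5}
group = multiplicative_pool(prior, [{"h1": 0.8, "h2": 0.2},
                                    {"h1": 0.8, "h2": 0.2}])
# Group belief in h1 (16/17, about 0.94) exceeds each individual's 0.8:
# pooling two independent pieces of evidence is stronger than either alone.
```

This illustrates the abstract's point that communicating prior and posterior beliefs suffices: the pooled belief reflects both agents' private evidence without either agent describing that evidence.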
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
Even if our justified beliefs are closed under known entailment, there may still be instances of transmission failure. Transmission failure occurs when P entails Q, but a subject cannot acquire a justified belief that Q by deducing it from P. Paradigm cases of transmission failure involve inferences from mundane beliefs (e.g., that the wall in front of you is red) to the denials of skeptical hypotheses relative to those beliefs (e.g., that the wall in front of you is not white and lit by red lights). According to the Bayesian explanation, transmission failure occurs when (i) the subject’s belief that P is based on E, and (ii) P(Q|E) ≤ P(Q). There are, however, paradigm cases that this explanation misses, and no modifications of the Bayesian explanation are capable of accommodating such cases, so the explanation must be rejected as inadequate. Alternative explanations employing simple subjunctive conditionals are fully capable of capturing all of the paradigm cases, as well as those missed by the Bayesian explanation.
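The Bayesian condition P(Q|E) ≤ P(Q) can be checked numerically on the red-wall example. The toy probability space below is my own construction with stipulated numbers, not the paper's: E is "the wall looks red" and Q is "the wall is not white and lit by red lights".

```python
# Three exhaustive possible worlds with stipulated probabilities.
worlds = [
    {"p": 0.85, "E": True,  "Q": True},   # the wall really is red
    {"p": 0.05, "E": True,  "Q": False},  # white wall under red lights
    {"p": 0.10, "E": False, "Q": True},   # white wall, normal lighting
]

def prob(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(w["p"] for w in worlds if pred(w))

p_q = prob(lambda w: w["Q"])                                       # 0.95
p_q_given_e = prob(lambda w: w["Q"] and w["E"]) / prob(lambda w: w["E"])
# P(Q|E) = 0.85 / 0.90 ≈ 0.944 < P(Q) = 0.95: the very evidence that
# grounds belief in P slightly lowers the probability of Q, so on the
# Bayesian explanation justification cannot transmit from P to Q.
```

The numbers are arbitrary; what matters is the structural feature that looking red is evidence for the red-lights hypothesis as well as for the wall's being red, which is why conditioning on E fails to raise Q.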
A piece of folklore enjoys some currency among philosophical Bayesians, according to which Bayesian agents that, intuitively speaking, spread their credence over the entire space of available hypotheses are certain to converge to the truth. The goals of the present discussion are to show that the kernel of truth in this folklore is in some ways fairly small and to argue that Bayesian convergence-to-the-truth results are a liability for Bayesianism as an account of rationality, since they render a certain sort of arrogance rationally mandatory.
Various sexist and racist beliefs ascribe certain negative qualities to people of a given sex or race. Epistemic allies are people who think that in normal circumstances rationality requires the rejection of such sexist and racist beliefs upon learning of many counter-instances, i.e. members of these groups who lack the target negative quality. Accordingly, epistemic allies think that those who give up their sexist or racist beliefs in such circumstances are rationally responding to their evidence, while those who do not are irrational in failing to respond to their evidence by giving up their belief. This is a common view among philosophers and non-philosophers. But epistemic allies face three problems. First, sexist and racist beliefs often involve generic propositions. These sorts of propositions are notoriously resilient in the face of counter-instances, since the truth of generic propositions is typically compatible with the existence of many counter-instances. Second, background beliefs can enable one to explain away counter-instances to one’s beliefs. So even when counter-instances might otherwise constitute strong evidence against the truth of the generic, the ability to explain the counter-instances away with relevant background beliefs can make it rational to retain one’s belief in the generic despite the existence of many counter-instances. The final problem is that the kinds of judgements epistemic allies want to make about the irrationality of sexist and racist beliefs upon encountering many counter-instances are at odds with the judgements that we are inclined to make in seemingly parallel cases about the rationality of non-sexist and non-racist generic beliefs. Thus epistemic allies may end up having to give up on plausible normative supervenience principles. Taken together, these problems pose a significant prima facie challenge to epistemic allies.
In what follows I explain how a Bayesian approach to the relation between evidence and belief can neatly untie these knots. The basic story is one of defeat: Bayesianism explains when one is required to become increasingly confident in chance propositions, and confidence in chance propositions can make belief in corresponding generics irrational.
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
Learning is fundamentally about action, enabling the successful navigation of a changing and uncertain environment. The experience of pain is central to this process, indicating the need for a change in action so as to mitigate potential threat to bodily integrity. This review considers the application of Bayesian models of learning in pain that inherently accommodate uncertainty and action, which, we shall propose, are essential in understanding learning in both acute and persistent cases of pain.
Bayesianism is our leading theory of uncertainty. Epistemology is defined as the theory of knowledge. So “Bayesian Epistemology” may sound like an oxymoron. Bayesianism, after all, studies the properties and dynamics of degrees of belief, understood to be probabilities. Traditional epistemology, on the other hand, places the singularly non-probabilistic notion of knowledge at centre stage, and to the extent that it traffics in belief, that notion does not come in degrees. So how can there be a Bayesian epistemology?
Debunking arguments in ethics contend that our moral beliefs have dubious evolutionary, cultural, or psychological origins – hence concluding that we should doubt such beliefs. Debates about debunking are often couched in coarse-grained terms – about whether our moral beliefs are justified or not, for instance. In this paper, I propose a more detailed Bayesian analysis of debunking arguments, which proceeds in the fine-grained framework of rational confidence. Such an analysis promises several payoffs: it highlights how debunking arguments don’t affect all agents, but rather only those agents who updated on their intuitions using a specific range of evidentiary weights; it underscores how the debunkers shouldn’t conclude that we should reduce confidence beyond some threshold, but rather only that we should reduce confidence by some amount; and it proposes a method of integrating different kinds of evidence – about the kinds of epistemic flaws at play, about the different possible origins of our moral beliefs, about the background normative assumptions we’re entitled to make – in order to arrive at a rational moral credence in light of debunking.
Politics is rife with motivated cognition. People do not dispassionately engage with the evidence when they form political beliefs; they interpret it selectively, generating justifications for their desired conclusions and reasons why contrary evidence should be ignored. Moreover, research shows that epistemic ability (e.g. intelligence and familiarity with evidence) is correlated with motivated cognition. Bjørn Hallsson has pointed out that this raises a puzzle for the epistemology of disagreement. On the one hand, we typically think that epistemic ability in an interlocutor gives us reason to downgrade our belief upon learning that we disagree. On the other hand, if our interlocutor is under the sway of motivated cognition, then we have reason to discount his opinion. In this paper, I argue that Hallsson's puzzle is solved by adopting a Bayesian approach to disagreement. If an interlocutor is under the sway of motivated cognition, his disagreement should not affect our beliefs, no matter his ability. Because we implicitly know his beliefs to high accuracy before he reveals them to us, disagreement provides us with no new information on which to conditionalize. I advance a model which accommodates the motivated cognition dynamic and other key epistemic features of disagreement.
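The Bayesian point here is quantitative: a disagreement report moves our credence only to the extent that the report was not already expected. The toy update below is my own construction, not Hallsson's or the author's formal model; all numbers and names are hypothetical.

```python
def update_on_report(prior_h, p_report_given_h, p_report_given_not_h):
    """Posterior credence in H after learning the interlocutor disagrees.

    p_report_given_h / p_report_given_not_h: probability of the dissent
    report conditional on H being true / false.
    """
    p_report = (p_report_given_h * prior_h
                + p_report_given_not_h * (1 - prior_h))
    return p_report_given_h * prior_h / p_report  # Bayes' theorem

# Motivated cognition: the interlocutor's dissent is almost certain
# whether or not H is true, so the report is nearly uninformative.
predictable = update_on_report(0.7, 0.98, 0.99)   # stays close to 0.7

# Contrast: a dispassionate interlocutor whose dissent strongly tracks
# the falsity of H forces a substantial downgrade.
informative = update_on_report(0.7, 0.2, 0.9)
```

The asymmetry between the two calls captures the proposed solution: ability need not matter once the likelihoods of the dissent report are nearly equal across hypotheses.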
Any theory of confirmation must answer the following question: what is the purpose of its conception of confirmation for scientific inquiry? In this article, we argue that no Bayesian conception of confirmation can be used for its primary intended purpose, which we take to be making a claim about how worthy of belief various hypotheses are. Then we consider a different use to which Bayesian confirmation might be put, namely, determining the epistemic value of experimental outcomes, and thus to decide which experiments to carry out. Interestingly, Bayesian confirmation theorists rule out that confirmation be used for this purpose. We conclude that Bayesian confirmation is a means with no end. 1 Introduction 2 Bayesian Confirmation Theory 3 Bayesian Confirmation and Belief 4 Confirmation and the Value of Experiments 5 Conclusion
In this article, network science is discussed from a methodological perspective, and two central theses are defended. The first is that network science exploits the very properties that make a system complex. Rather than using idealization techniques to strip those properties away, as is standard practice in other areas of science, network science brings them to the fore, and uses them to furnish new forms of explanation. The second thesis is that network representations are particularly helpful in explaining the properties of non-decomposable systems. Where part-whole decomposition is not possible, network science provides a much-needed alternative method of compressing information about the behavior of complex systems, and does so without succumbing to problems associated with combinatorial explosion. The article concludes with a comparison between the uses of network representation analyzed in the main discussion, and an entirely distinct use of network representation that has recently been discussed in connection with mechanistic modeling.
This paper considers a problem for Bayesian epistemology and proposes a solution to it. On the traditional Bayesian framework, an agent updates her beliefs by Bayesian conditioning, a rule that tells her how to revise her beliefs whenever she gets evidence that she holds with certainty. In order to extend the framework to a wider range of cases, Jeffrey (1965) proposed a more liberal version of this rule that has Bayesian conditioning as a special case. Jeffrey conditioning is a rule that tells the agent how to revise her beliefs whenever she gets evidence that she holds with any degree of confidence. The problem? While Bayesian conditioning has a foundationalist structure, this foundationalism disappears once we move to Jeffrey conditioning. If Bayesian conditioning is a special case of Jeffrey conditioning, then they should have the same normative structure. The solution? To reinterpret Bayesian updating as a form of diachronic coherentism.
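The special-case relation between the two rules can be made concrete. The sketch below, for a two-cell partition {E, not-E}, is my own minimal setup (the abstract describes the rules only informally): Jeffrey conditioning rescales each cell to the agent's new credence in it, and setting that credence to 1 recovers ordinary Bayesian conditioning.

```python
def jeffrey_update(p_joint, new_p_e):
    """Jeffrey-condition a joint distribution on the partition {E, ~E}.

    p_joint: dict (hypothesis, e) -> probability, where e is True iff E holds
    new_p_e: the agent's new (possibly uncertain) credence in E
    Returns the updated marginal distribution over hypotheses.
    """
    p_e = sum(p for (h, e), p in p_joint.items() if e)
    updated = {}
    for (h, e), p in p_joint.items():
        # Rescale each cell of the partition to its new probability.
        scale = new_p_e / p_e if e else (1 - new_p_e) / (1 - p_e)
        updated[h] = updated.get(h, 0.0) + p * scale
    return updated

joint = {("h1", True): 0.3, ("h1", False): 0.2,
         ("h2", True): 0.1, ("h2", False): 0.4}

# Certain evidence (new_p_e = 1) is just Bayesian conditioning on E:
bayes = jeffrey_update(joint, 1.0)   # h1: 0.3/0.4 = 0.75
# Uncertain evidence, held with 0.8 confidence:
soft = jeffrey_update(joint, 0.8)
```

The hypothetical numbers show the continuum the abstract appeals to: as the new credence in E approaches 1, the Jeffrey update approaches the Bayesian one.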
The Capgras delusion is a condition in which a person believes that an imposter has replaced some close friend or relative. Recent theorists have appealed to Bayesianism to help explain both why a subject with the Capgras delusion adopts this delusional belief and why it persists despite counter-evidence. The Bayesian approach is useful for addressing these questions; however, the main proposal of this essay is that Capgras subjects also have a delusional conception of epistemic possibility, more specifically, they think more things are possible, given what is known, than non-delusional subjects do. I argue that this is a central way in which their thinking departs from ordinary cognition and that it cannot be characterized in Bayesian terms. Thus, in order to fully understand the cognitive processing involved in the Capgras delusion, we must move beyond Bayesianism. 1 The Simple Bayesian Model 2 Anomalous Evidence and the Capgras Delusion 3 Impaired Reasoning 4 Setting Priors 5 Epistemic Modality 6 Delusions of Possibility 7 Delusions of Possibility in Different Contexts 8 How Many Factors?
Gordon Belot has recently developed a novel argument against Bayesianism. He shows that there is an interesting class of problems that, intuitively, no rational belief forming method is likely to get right. But a Bayesian agent’s credence, before the problem starts, that she will get the problem right has to be 1. This is an implausible kind of immodesty on the part of Bayesians. My aim is to show that while this is a good argument against traditional, precise Bayesians, the argument doesn’t neatly extend to imprecise Bayesians. As such, Belot’s argument is a reason to prefer imprecise Bayesianism to precise Bayesianism.