In this article, we discuss the benefits of Bayesian statistics and how to utilize them in studies of moral education. To demonstrate concrete examples of the applications of Bayesian statistics to studies of moral education, we reanalyzed two previously collected data sets: one small data set collected from a moral educational intervention experiment, and one big data set from a large-scale Defining Issues Test-2 survey. The results suggest that Bayesian analysis of data sets collected from moral educational studies can provide additional useful statistical information, particularly that associated with the strength of evidence supporting alternative hypotheses, which has not been provided by the classical frequentist approach focusing on p-values. Finally, we introduce several practical guidelines pertaining to how to utilize Bayesian statistics, including the utilization of newly developed free statistical software, Jeffreys's Amazing Statistics Program (JASP), and thresholding based on Bayes factors, to scholars in the field of moral education.
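As an illustrative sketch of the Bayes-factor reasoning this abstract describes, the snippet below computes a Bayes factor for a coin-flip-style experiment, comparing a point null H0: θ = 0.5 against H1: θ ~ Uniform(0, 1). The data (65 successes in 100 trials) and the function name are invented for illustration; they are not from the reanalyzed studies.

```python
# Minimal sketch of a Bayes factor, assuming hypothetical binomial data.
from math import comb

def binomial_bayes_factor(successes: int, n: int) -> float:
    """BF10 = p(data | H1) / p(data | H0) for a point null vs. a uniform prior."""
    # Marginal likelihood under H0: binomial probability at theta = 0.5.
    m0 = comb(n, successes) * 0.5 ** n
    # Marginal likelihood under H1: the binomial likelihood integrated over a
    # Uniform(0, 1) prior on theta, which works out to exactly 1 / (n + 1).
    m1 = 1 / (n + 1)
    return m1 / m0

bf10 = binomial_bayes_factor(65, 100)
print(f"BF10 = {bf10:.2f}")
```

On common thresholding conventions, a BF10 above 10 is read as strong evidence for H1, while values near 1 indicate that the data do not discriminate between the hypotheses.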
The study of cultural evolution has taken an increasingly interdisciplinary and diverse approach in explicating phenomena of cultural transmission and adoption. Inspired by this computational movement, this study uses Bayesian network analysis, combining both the frequentist and the Hamiltonian Markov chain Monte Carlo (MCMC) approaches, to investigate the highly representative elements in the cultural evolution of a Vietnamese city's architecture in the early 20th century. With a focus on the façade design of 68 old houses in Hanoi's Old Quarter (based on 78 data lines extracted from 248 photos), the study argues that it is plausible to look at the aesthetics, architecture, and designs of the house façade to find traces of cultural evolution in Vietnam, which went through more than six decades of French colonization and centuries of sociocultural influence from China. The in-depth technical analysis, though refuting the presumed model of probabilistic dependency among the variables, yields several results, the most notable of which is the strong influence of Buddhism over the decorations of the house façade. In particular, in the top five networks with the best Bayesian Information Criterion (BIC) scores and p < 0.05, the variable for decorations (DC) always has a direct probabilistic dependency on the variable B for Buddhism. The paper then checks the robustness of these models using the Hamiltonian MCMC method and finds that the posterior distributions of the models' coefficients all satisfy the technical requirements. Finally, this study suggests integrating Bayesian statistics into the social sciences in general, and into the study of cultural evolution and architectural transformation in particular.
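The kind of BIC-based structure scoring this abstract relies on can be sketched in miniature: score a model in which decorations (DC) are independent of Buddhist influence (B) against one in which DC depends on B, preferring the lower BIC. The 78 synthetic binary observations below are invented for illustration and are not the paper's data.

```python
# Toy BIC comparison of two dependency structures for binary variables B and DC.
# All data are synthetic; only the variable names echo the abstract.
from math import log

def loglik(successes: int, failures: int) -> float:
    """Bernoulli log-likelihood at the maximum-likelihood estimate."""
    n = successes + failures
    ll = 0.0
    if successes:
        ll += successes * log(successes / n)
    if failures:
        ll += failures * log(failures / n)
    return ll

data = [(1, 1)] * 32 + [(1, 0)] * 8 + [(0, 1)] * 8 + [(0, 0)] * 30  # (B, DC) pairs
n = len(data)  # 78 observations, mimicking the abstract's sample size

# Model 1: DC independent of B (one Bernoulli parameter).
dc_ones = sum(dc for _, dc in data)
bic_indep = 1 * log(n) - 2 * loglik(dc_ones, n - dc_ones)

# Model 2: DC depends on B (one Bernoulli parameter per value of B).
ll_dep = 0.0
for b in (0, 1):
    s = sum(1 for bb, dc in data if bb == b and dc == 1)
    f = sum(1 for bb, dc in data if bb == b and dc == 0)
    ll_dep += loglik(s, f)
bic_dep = 2 * log(n) - 2 * ll_dep

print(f"BIC (DC independent of B): {bic_indep:.1f}")
print(f"BIC (DC depends on B):     {bic_dep:.1f}")  # lower BIC is preferred
```

With this strongly dependent synthetic data, the dependent structure wins despite its extra parameter, which is the pattern the paper reports for DC and B.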
The purpose of this paper is twofold: 1) to highlight the widely ignored but fundamental problem of 'superpopulations' for the use of inferential statistics in development studies. We do not dwell on this problem, however, as it has been sufficiently discussed in older papers by statisticians that social scientists have nevertheless long chosen to ignore; the interested reader can turn to those for greater detail. 2) to show that descriptive statistics both avoid the problem of superpopulations and can be a powerful tool when used correctly. A few examples are provided. The paper ends with considerations of some of the reasons we think are behind the adherence to methods that are known to be inapplicable to many of the types of questions asked in development studies, yet are still widely practiced.
Modern scientific cosmology pushes the boundaries of knowledge and the knowable. This is prompting questions on the nature of scientific knowledge. A central issue is what defines a 'good' model. When addressing global properties of the Universe or its initial state, this becomes a particularly pressing issue. How to assess the probability of the Universe as a whole is empirically ambiguous, since we can examine only part of a single realisation of the system under investigation: at some point, data will run out. We review the basics of applying Bayesian statistical explanation to the Universe as a whole. We argue that a conventional Bayesian approach to model inference generally fails in such circumstances, and cannot resolve, e.g., the so-called 'measure problem' in inflationary cosmology. Implicit and non-empirical valuations inevitably enter model assessment in these cases. This undermines the possibility of performing Bayesian model comparison. One must therefore either stay silent, or pursue a more general form of systematic and rational model assessment. We outline a generalised axiological Bayesian model inference framework, based on mathematical lattices. This extends inference based on empirical data (evidence) to additionally consider the properties of model structure (elegance) and model possibility space (beneficence). We propose this as a natural and theoretically well-motivated framework for introducing an explicit, rational approach to theoretical model prejudice and inference beyond data.
If the goal of statistical analysis is to form justified credences based on data, then an account of the foundations of statistics should explain what makes credences justified. I present a new account called statistical reliabilism (SR), on which credences resulting from a statistical analysis are justified (relative to alternatives) when they are in a sense closest, on average, to the corresponding objective probabilities. This places (SR) in the same vein as recent work on the reliabilist justification of credences generally [Dunn, 2015, Tang, 2016, Pettigrew, 2018], but it has the advantage of being action-guiding in that knowledge of objective probabilities is not required to identify the best-justified available credences. The price is that justification is relativized to a specific class of candidate objective probabilities, and to a particular choice of reliability measure. On the other hand, I show that (SR) has welcome implications for frequentist-Bayesian reconciliation, including a clarification of the use of priors; complementarity between probabilist and fallibilist [Gelman and Shalizi, 2013, Mayo, 2018] approaches towards statistical foundations; and the justification of credences outside of formal statistical settings. Regarding the latter, I demonstrate how the insights of statistics may be used to amend other reliabilist accounts so as to render them action-guiding. I close by discussing new possible research directions for epistemologists and statisticians (and other applied users of probability) raised by the (SR) framework.
The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference, with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. We suggest that rather than mere statistical reform, what is needed is a better understanding of the different modes of statistical inference and a better understanding of how statistical inference relates to scientific inference.
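One concrete way the two perspectives come apart is Lindley's paradox: a result that sits exactly at p ≈ 0.05 can, as the sample grows, come to favour the null on a Bayes-factor reading. The sketch below uses the rough BIC-based approximation BF01 ≈ exp((ln n − z²)/2) (a unit-information-prior approximation, stated here as an assumption, not as either paper's method) with z = 1.96 held fixed.

```python
# Sketch: the same "significant" z-statistic, reread as a Bayes factor
# for the null (BF01) under a rough BIC-based approximation.
from math import exp, log

def approx_bf01(z: float, n: int) -> float:
    """Approximate Bayes factor in favour of H0: exp((ln n - z^2) / 2)."""
    return exp((log(n) - z * z) / 2)

for n in (100, 1_000, 10_000, 100_000):
    # p stays ~0.05 at z = 1.96, but BF01 grows with n.
    print(f"n = {n:>7}:  BF01 ~ {approx_bf01(1.96, n):.1f}")
```

A Frequentist threshold reports the same "significant" verdict at every n here, while the Bayes factor increasingly favours the null, illustrating why the abstract treats the two as different modes of inference rather than interchangeable tools.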
Dogmatism is sometimes thought to be incompatible with Bayesian models of rational learning. I show that the best model for updating imprecise credences is compatible with dogmatism.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage's postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
The Sleeping Beauty problem has attracted considerable attention in the literature as a paradigmatic example of how self-locating uncertainty creates problems for the Bayesian principles of Conditionalization and Reflection. Furthermore, it is also thought to raise serious issues for diachronic Dutch Book arguments. I show that, contrary to what is commonly accepted, it is possible to represent the Sleeping Beauty problem within a standard Bayesian framework. Once the problem is correctly represented, the 'thirder' solution satisfies standard rationality principles, which explains why it is not vulnerable to diachronic Dutch Book arguments. Moreover, the diachronic Dutch Books against the 'halfer' solutions fail to undermine the standard arguments for Conditionalization. The main upshot that emerges from my discussion is that the disagreement between different solutions does not challenge the applicability of Bayesian reasoning to centered settings, nor the commitment to Conditionalization, but is instead an instance of the familiar problem of choosing the priors.
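The long-run frequency behind the 'thirder' credence can be checked by simulating the protocol: each trial flips a fair coin, Heads yields one awakening and Tails yields two, and we ask what fraction of awakenings occur with the coin showing Heads. The simulation is our own illustration, not the paper's argument.

```python
# Sketch: long-run awakening frequencies in the Sleeping Beauty protocol.
import random

random.seed(0)  # fixed seed so the run is reproducible
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    if random.random() < 0.5:   # Heads: Beauty is woken once
        heads_awakenings += 1
        total_awakenings += 1
    else:                       # Tails: Beauty is woken Monday and Tuesday
        total_awakenings += 2

print(f"fraction of awakenings with Heads ~ {heads_awakenings / total_awakenings:.3f}")
```

The fraction converges to 1/3, the thirder answer; whether that frequency is the right guide to Beauty's credence is, of course, exactly what the halfer/thirder dispute (and, per this abstract, the choice of priors) is about.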
This paper puts forward the hypothesis that the distinctive features of quantum statistics are exclusively determined by the nature of the properties it describes. In particular, all statistically relevant properties of identical quantum particles in many-particle systems are conjectured to be irreducible, 'inherent' properties only belonging to the whole system. This allows one to explain quantum statistics without endorsing the 'Received View' that particles are non-individuals, or postulating that quantum systems obey peculiar probability distributions, or assuming that there are primitive restrictions on the range of states accessible to such systems. With this, the need for an unambiguously metaphysical explanation of certain physical facts is acknowledged and satisfied.
Even if our justified beliefs are closed under known entailment, there may still be instances of transmission failure. Transmission failure occurs when P entails Q, but a subject cannot acquire a justified belief that Q by deducing it from P. Paradigm cases of transmission failure involve inferences from mundane beliefs (e.g., that the wall in front of you is red) to the denials of skeptical hypotheses relative to those beliefs (e.g., that the wall in front of you is not white and lit by red lights). According to the Bayesian explanation, transmission failure occurs when (i) the subject's belief that P is based on E, and (ii) P(Q|E) ≤ P(Q). No modifications of the Bayesian explanation are capable of accommodating such cases, so the explanation must be rejected as inadequate. Alternative explanations employing simple subjunctive conditionals are fully capable of capturing all of the paradigm cases, as well as those missed by the Bayesian explanation.
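The Bayesian explanation's condition (ii) can be made concrete with stipulated numbers for the red-wall case. Below, the states are R (red wall), W (white wall lit by red lights), and O (neither); E is "the wall looks red" (true in R and W); P is "the wall is red" (state R); and Q is not-W. All probabilities are invented for illustration.

```python
# Sketch: E confirms P but fails to confirm Q = not-W, since the skeptical
# state W predicts E just as well as R does. Numbers are stipulated.
priors = {"R": 0.80, "W": 0.01, "O": 0.19}
looks_red = {"R": True, "W": True, "O": False}  # where E is true

p_E = sum(p for s, p in priors.items() if looks_red[s])          # 0.81
p_P, p_P_given_E = priors["R"], priors["R"] / p_E                # 0.80 -> ~0.988
p_Q = 1 - priors["W"]                                            # 0.99
p_Q_given_E = priors["R"] / p_E  # among E-states, only R satisfies Q

print(f"P: {p_P:.3f} -> {p_P_given_E:.4f}  (confirmed by E)")
print(f"Q: {p_Q:.3f} -> {p_Q_given_E:.4f}  (not confirmed by E)")
```

Here E raises the probability of P yet lowers that of Q, satisfying condition (ii): this is exactly the structure the Bayesian explanation uses, and the structure the paper argues is nonetheless inadequate as a full account.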
It is a commonplace in epistemology that credences should equal known chances. It is less clear, however, that conditional credences should do so, too. Following Ramsey, this paper proposes a counterfactual interpretation of conditional probability which provides a justification for this equality without relying on the Principal Principle. As a result, we obtain a refined view of Bayesian inference where both learning and supposing have a place.
The spin-statistics connection is derived in a simple manner under the postulates that the original and the exchange wave functions are simply added, and that the azimuthal phase angle, which defines the orientation of the spin part of each single-particle spin-component eigenfunction in the plane normal to the spin-quantization axis, is exchanged along with the other parameters. The spin factor (−1)^(2s) belongs to the exchange wave function when this function is constructed so as to get the spinor ambiguity under control. This is achieved by effecting the exchange of the azimuthal angle by means of rotations and admitting only rotations in one sense. The procedure works in Galilean as well as in Lorentz-invariant quantum mechanics. Relativistic quantum field theory is not required.
Research in ecology and evolutionary biology (evo-eco) often tries to emulate the "hard" sciences such as physics and chemistry, but to many of its practitioners feels more like the "soft" sciences of psychology and sociology. I argue that this schizophrenic attitude is the result of a lack of appreciation of the full consequences of the peculiarity of the evo-eco sciences as lying in between a-historical disciplines such as physics and completely historical ones such as paleontology. Furthermore, evo-eco researchers have gotten stuck on mathematically appealing but philosophically simplistic concepts such as null hypotheses and p-values defined according to the frequentist approach in statistics, with the consequence of having been unable to fully embrace the complexity and subtlety of the problems with which ecologists and evolutionary biologists deal. I review and discuss some literature in ecology, philosophy of science and psychology to show that a more critical methodological attitude can be liberating for the evo-eco scientist and can lead to a more fecund and enjoyable practice of ecology and evolutionary biology. With this aim, I briefly cover concepts such as the method of multiple hypotheses, Bayesian analysis, and strong inference.
In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can at any given time be adequately described by a single subjective probability function. We show that monistic Bayesians cannot provide an adequate solution to the new Tweety puzzle, because this requires one to refer to a frequency-based probability function. We conclude that monistic Bayesianism cannot be a fully adequate theory of epistemic states. In the second part we describe an empirical study, which provides support for the thesis that monistic Bayesianism is also inadequate as a descriptive theory of cognitive states. In the final part of the paper we criticize Bayesian approaches in cognitive science, insofar as their monistic tendency cannot adequately address the new Tweety puzzle. We further argue against monistic Bayesianism in cognitive science by means of a case study. In this case study we show that Oaksford and Chater's (2007, 2008) model of conditional inference—contrary to the authors' theoretical position—has to refer also to a frequency-based probability function.
The Capgras delusion is a condition in which a person believes that an imposter has replaced some close friend or relative. Recent theorists have appealed to Bayesianism to help explain both why a subject with the Capgras delusion adopts this delusional belief and why it persists despite counter-evidence. The Bayesian approach is useful for addressing these questions; however, the main proposal of this essay is that Capgras subjects also have a delusional conception of epistemic possibility; more specifically, they think more things are possible, given what is known, than non-delusional subjects do. I argue that this is a central way in which their thinking departs from ordinary cognition and that it cannot be characterized in Bayesian terms. Thus, in order to fully understand the cognitive processing involved in the Capgras delusion, we must move beyond Bayesianism. 1 The Simple Bayesian Model; 2 Anomalous Evidence and the Capgras Delusion; 3 Impaired Reasoning; 4 Setting Priors; 5 Epistemic Modality; 6 Delusions of Possibility; 7 Delusions of Possibility in Different Contexts; 8 How Many Factors?
Bayesian epistemology tells us with great precision how we should move from prior to posterior beliefs in light of new evidence or information, but says little about where our prior beliefs come from. It offers few resources to describe some prior beliefs as rational or well-justified, and others as irrational or unreasonable. A different strand of epistemology takes the central epistemological question to be not how to change one's beliefs in light of new evidence, but what reasons justify a given set of beliefs in the first place. We offer an account of rational belief formation that closes some of the gap between Bayesianism and its reason-based alternative, formalizing the idea that an agent can have reasons for his or her (prior) beliefs, in addition to evidence or information in the ordinary Bayesian sense. Our analysis of reasons for belief is part of a larger programme of research on the role of reasons in rational agency (Dietrich and List, Nous, 2012a, in press; Int J Game Theory, 2012b, in press).
Bayesian confirmation theory is rife with confirmation measures. Many of them differ from each other in important respects. It turns out, though, that all the standard confirmation measures in the literature run counter to the so-called "Reverse Matthew Effect" ("RME" for short). Suppose, to illustrate, that H1 and H2 are equally successful in predicting E in that p(E | H1)/p(E) = p(E | H2)/p(E) > 1. Suppose, further, that initially H1 is less probable than H2 in that p(H1) < p(H2). Then by RME it follows that the degree to which E confirms H1 is greater than the degree to which it confirms H2. But by all the standard confirmation measures in the literature, in contrast, it follows that the degree to which E confirms H1 is less than or equal to the degree to which it confirms H2. It might seem, then, that RME should be rejected as implausible. Festa (2012), however, argues that there are scientific contexts in which RME holds. If Festa's argument is sound, it follows that there are scientific contexts in which none of the standard confirmation measures in the literature is adequate. Festa's argument is thus interesting, important, and deserving of careful examination. I consider five distinct respects in which E can be related to H, use them to construct five distinct ways of understanding confirmation measures, which I call "Increase in Probability", "Partial Dependence", "Partial Entailment", "Partial Discrimination", and "Popper Corroboration", and argue that each such way runs counter to RME. The result is that it is not at all clear that there is a place in Bayesian confirmation theory for RME.
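The abstract's setup is easy to compute through with stipulated numbers: H1 and H2 predict E equally well (likelihood ratio 2), but H1 starts out less probable. Two standard confirmation measures then behave exactly as the abstract says: the difference measure favours H2, the ratio measure ties them, and neither delivers the RME verdict that H1 is confirmed more. All probabilities below are invented for illustration.

```python
# Sketch: two standard confirmation measures on the abstract's RME setup.
p_E = 0.2
p = {"H1": 0.1, "H2": 0.3}          # priors, with p(H1) < p(H2)
p_E_given = {"H1": 0.4, "H2": 0.4}  # so p(E|Hi)/p(E) = 2 for both hypotheses

def posterior(h: str) -> float:
    """p(H|E) by Bayes' theorem."""
    return p_E_given[h] * p[h] / p_E

# Difference measure d(H, E) = p(H|E) - p(H).
d = {h: posterior(h) - p[h] for h in p}
# Ratio measure r(H, E) = p(H|E) / p(H).
r = {h: posterior(h) / p[h] for h in p}

print(d)  # the difference measure confirms H2 more than H1
print(r)  # the ratio measure confirms them equally; neither satisfies RME
```

Since d(H, E) = p(H)(LR − 1) when the likelihood ratio LR is shared, a larger prior mechanically yields a larger difference-measure boost, which is why RME cannot hold on these standard measures.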
Statistical evidence is crucial throughout disparate impact's three-stage analysis: during (1) the plaintiff's prima facie demonstration of a policy's disparate impact; (2) the defendant's job-related business necessity defense of the discriminatory policy; and (3) the plaintiff's demonstration of an alternative policy without the same discriminatory impact. The circuit courts are split on a vital question about the "practical significance" of statistics at Stage 1: Are "small" impacts legally insignificant? For example, is an employment policy that causes a one percent disparate impact an appropriate policy for redress through disparate impact litigation? This circuit split calls for a comprehensive analysis of practical significance testing across disparate impact's stages. Importantly, courts and commentators use "practical significance" ambiguously between two aspects of practical significance: the magnitude of an effect and confidence in statistical evidence. For example, at Stage 1 courts might ask whether statistical evidence supports a disparate impact (a confidence inquiry) and whether such an impact is large enough to be legally relevant (a magnitude inquiry). Disparate impact's texts, purposes, and controlling interpretations are consistent with confidence inquiries at all three stages, but not magnitude inquiries. Specifically, magnitude inquiries are inappropriate at Stages 1 and 3—there is no discriminatory impact or reduction too small or subtle for the purposes of the disparate impact analysis. Magnitude inquiries are appropriate at Stage 2, when an employer defends a discriminatory policy on the basis of its job-related business necessity.
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
Statistics play a critical role in biological and clinical research. To promote logically consistent representation and classification of statistical entities, we have developed the Ontology of Biological and Clinical Statistics (OBCS). OBCS extends the Ontology of Biomedical Investigations (OBI), an OBO Foundry ontology supported by some 20 communities. Currently, OBCS contains 686 terms, including 381 classes imported from OBI and 147 classes specific to OBCS. The goal of this paper is to present OBCS for community critique and to describe a number of use cases designed to illustrate its potential applications. The OBCS project and source code are available at http://obcs.googlecode.com.
This paper (first published under the same title in Journal of Mathematical Economics, 29, 1998, p. 331-361) is a sequel to "Consistent Bayesian Aggregation", Journal of Economic Theory, 66, 1995, p. 313-351, by the same author. Both papers examine mathematically whether the following assumptions are compatible: the individuals and the group both form their preferences according to Subjective Expected Utility (SEU) theory, and the preferences of the group satisfy the Pareto principle with respect to those of the individuals. While the 1995 paper explored these assumptions in the axiomatic context of Savage's (1954-1972) SEU theory, the present paper explores them in the context of Anscombe and Aumann's (1963) alternative SEU theory. We first show that the problematic assumptions become compatible when the Anscombe-Aumann utility functions are state-dependent and no subjective probabilities are elicited. Then we show that the problematic assumptions become incompatible when the Anscombe-Aumann utility functions are state-dependent, like before, but subjective probabilities are elicited using a relevant technical scheme. This last result reinstates the impossibilities proved by the 1995 paper, and thus shows them to be robust with respect to the choice of the SEU axiomatic framework. The technical scheme used for the elicitation of subjective probabilities is that of Karni, Schmeidler and Vind (1983).
The last two decades have seen a welcome proliferation of the collection and dissemination of data on social progress, as well as considered public debates rethinking existing standards of measuring the progress of societies. These efforts are to be welcomed. However, they are only a nascent step on a longer road to the improved measurement of social progress. In this paper, I focus on the central role that gender should take in future efforts to measure progress in securing human rights, with a particular focus on anti-poverty rights. I proceed in four parts. First, I argue that measurement of human rights achievements and human rights deficits is entailed by the recognition of human rights, and that adequate measurement of human rights must be genuinely gender-sensitive. Second, I argue that existing systems of information collection currently fail rights holders, especially women, by failing to adequately gather information on the degree to which their rights are secure. If my first two claims are correct, this failure represents a serious injustice, and in particular an injustice for women. Third, I make recommendations regarding changes to existing information collection that would generate gender-sensitive measures of anti-poverty rights. Fourth, I conclude by responding to various objections that have been raised regarding the rise of indicators to track human rights.
This paper, “Cultural Statistics, the Media and the Planning and Development of Calabar, Nigeria”, stresses the need for the use of cultural statistics and effective media communication in the planning and development of Calabar, the Cross River State capital. This position is anchored on the fact that in virtually every sphere of life, there can be no development without planning, and there can be no proper planning without accurate data or information. Cultural statistics and effective use of the media thus become imperative in the planning and development of Calabar, especially as the Cross River State capital is fast becoming an internationally recognized cultural city, due largely to its annual Calabar Festival and Carnival. The paper, among other things, argues that cultural statistics and the use of the media will reposition the city of Calabar, not only in terms of development, but also in marketing and branding, taking into consideration the new economy and globalization, which involve technology, creativity, human capital and capacity for innovation. The paper concludes that although some effort has been made by the Cross River State government in gathering and publishing some cultural information in brochures and other periodicals, a deliberate and conscientious effort will need to be made by the relevant government authorities to collect, collate, analyze and interpret cultural data in Calabar and project the same in the media, with a view to enhancing the planning and development of the Cross River State capital so as to truly make it a tourism and cultural haven in Nigeria and in the continent of Africa.
Resource rationality may explain suboptimal patterns of reasoning; but what of “anti-Bayesian” effects where the mind updates in a direction opposite the one it should? We present two phenomena — belief polarization and the size-weight illusion — that are not obviously explained by performance- or resource-based constraints, nor by the authors’ brief discussion of reference repulsion. Can resource rationality accommodate them?
How do agents with limited cognitive capacities flourish in informationally impoverished or unexpected circumstances? Aristotle argued that human flourishing emerged from knowing about the world and our place within it. If he is right, then the virtuous processes that produce knowledge, best explain flourishing. Influenced by Aristotle, virtue epistemology defends an analysis of knowledge where beliefs are evaluated for their truth and the intellectual virtue or competences relied on in their creation. However, human flourishing may emerge from how degrees of ignorance are managed in an uncertain world. Perhaps decision-making in the shadow of knowledge best explains human wellbeing—a Bayesian approach? In this dissertation I argue that a hybrid of virtue and Bayesian epistemologies explains human flourishing—what I term homeostatic epistemology. Homeostatic epistemology supposes that an agent has a rational credence p when p is the product of reliable processes aligned with the norms of probability theory; whereas an agent knows that p when a rational credence p is the product of reliable processes such that: 1) p meets some relevant threshold for belief, 2) p coheres with a satisficing set of relevant beliefs and, 3) the relevant set of beliefs is coordinated appropriately to meet the integrated aims of the agent. Homeostatic epistemology recognizes that justificatory relationships between beliefs are constantly changing to combat uncertainties and to take advantage of predictable circumstances. Contrary to holism, justification is built up and broken down across limited sets like the anabolic and catabolic processes that maintain homeostasis in the cells, organs and systems of the body. It is the coordination of choristic sets of reliably produced beliefs that create the greatest flourishing given the limitations inherent in the situated agent.
Bayesian models can be related to cognitive processes in a variety of ways that can be usefully understood in terms of Marr's distinction among three levels of explanation: computational, algorithmic and implementation. In this note, we discuss how an integrated probabilistic account of the different levels of explanation in cognitive science is resulting, at least for the current research practice, in a sort of unpredicted epistemological shift with respect to Marr's original proposal.
It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal–mechanical explanation. 1 Introduction; 2 What a Great Many Phenomena Bayesian Decision Theory Can Model; 3 The Case of Information Integration; 4 How Do Bayesian Models Unify?; 5 Bayesian Unification: What Constraints Are There on Mechanistic Explanation?; 5.1 Unification constrains mechanism discovery; 5.2 Unification constrains the identification of relevant mechanistic factors; 5.3 Unification constrains confirmation of competitive mechanistic models; 6 Conclusion; Appendix.
In this paper, I consider the relationship between Inference to the Best Explanation (IBE) and Bayesianism, both of which are well-known accounts of the nature of scientific inference. In Sect. 2, I give a brief overview of Bayesianism and IBE. In Sect. 3, I argue that IBE in its most prominently defended forms is difficult to reconcile with Bayesianism because not all of the items that feature on popular lists of "explanatory virtues"—by means of which IBE ranks competing explanations—have confirmational import. Rather, some of the items that feature on these lists are "informational virtues"—properties that do not make a hypothesis H1 more probable than some competitor H2 given evidence E, but that, roughly speaking, give that hypothesis greater informative content. In Sect. 4, I consider as a response to my argument a recent version of compatibilism which argues that IBE can provide further normative constraints on the objectively correct probability function. I argue that this response does not succeed, owing to the difficulty of defending with any generality such further normative constraints. Lastly, in Sect. 5, I propose that IBE should be regarded, not as a theory of scientific inference, but rather as a theory of when we ought to "accept" H, where the acceptability of H is fixed by the goals of science and concerns whether H is worthy of commitment as a research program. In this way, IBE and Bayesianism, as I will show, can be made compatible, and thus the Bayesian and the proponent of IBE can be friends.
The canonical Bayesian solution to the ravens paradox faces a problem: it entails that black non-ravens disconfirm the hypothesis that all ravens are black. I provide a new solution that avoids this problem. On my solution, black ravens confirm that all ravens are black, while non-black non-ravens and black non-ravens are neutral. My approach is grounded in certain relations of epistemic dependence, which, in turn, are grounded in the fact that the kind raven is more natural than the kind black. The solution applies to any generalization "All F's are G" in which F is more natural than G.
In the world of philosophy of science, the dominant theory of confirmation is Bayesian. In the wider philosophical world, the idea of inference to the best explanation exerts a considerable influence. Here we place the two worlds in collision, using Bayesian confirmation theory to argue that explanatoriness is evidentially irrelevant.
ABSTRACT: Rational agents have consistent beliefs. Bayesianism is a theory of consistency for partial belief states. Rational agents also respond appropriately to experience. Dogmatism is a theory of how to respond appropriately to experience. Hence, Dogmatism and Bayesianism are theories of two very different aspects of rationality. It's surprising, then, that in recent years it has become common to claim that Dogmatism and Bayesianism are jointly inconsistent: how can two independently consistent theories with distinct subject matter be jointly inconsistent? In this essay I argue that Bayesianism and Dogmatism are inconsistent only with the addition of a specific hypothesis about how the appropriate responses to perceptual experience are to be incorporated into the formal models of the Bayesian. That hypothesis isn't essential either to Bayesianism or to Dogmatism, and so Bayesianism and Dogmatism are jointly consistent. That leaves the matter of how experiences and credences are related, a...
This paper considers two novel Bayesian responses to a well-known skeptical paradox. The paradox consists of three intuitions: first, given appropriate sense experience, we have justification for accepting the relevant proposition about the external world; second, we have justification for expanding the body of accepted propositions through known entailment; third, we do not have justification for accepting that we are not disembodied souls in an immaterial world deceived by an evil demon. The first response we consider rejects the third intuition and proposes an explanation of why we have a faulty intuition. The second response, which we favor, accommodates all three intuitions; it reconciles the first and the third intuition by the dual component model of justification, and defends the second intuition by distinguishing two principles of epistemic closure.
A piece of folklore enjoys some currency among philosophical Bayesians, according to which Bayesian agents that, intuitively speaking, spread their credence over the entire space of available hypotheses are certain to converge to the truth. The goals of the present discussion are to show that the kernel of truth in this folklore is in some ways fairly small and to argue that Bayesian convergence-to-the-truth results are a liability for Bayesianism as an account of rationality, since they render a certain sort of arrogance rationally mandatory.
Every year, the Vietnamese people reportedly burned about 50,000 tons of joss paper, which took the form of not only bank notes, but iPhones, cars, clothes, even housekeepers, in the hope of pleasing the dead. The practice was mistakenly attributed to traditional Buddhist teachings but in fact originated in China, of which most Vietnamese were unaware. In other aspects of life, there were many similar examples of the Vietnamese being ready and comfortable with adding new norms, values, and beliefs, even contradictory ones, to their culture. This phenomenon, dubbed "cultural additivity", prompted us to study the co-existence, interaction, and influences among core values and norms of the Three Teachings –Confucianism, Buddhism, and Taoism– as shown through Vietnamese folktales. By applying Bayesian logistic regression, we evaluated whether the key message of a story was dominated by one of the religions (dependent variables), as affected by the appearance of values and anti-values pertaining to the Three Teachings in the story (independent variables). Our main findings included the existence of the cultural additivity of Confucian and Taoist values. More specifically, empirical results showed that the interaction or addition of the values of Taoism and Confucianism in folktales together helped predict whether the key message of a story was about Confucianism, β_{VT·VC} = 0.86. Meanwhile, there was no such statistical tendency for Buddhism. The results lead to a number of important implications. First, they show the dominance of Confucianism: the fact that Confucian and Taoist values appeared together in a story led to the story's key message being dominated by Confucianism. This constitutes evidence of Confucian dominance and counts against liberal interpretations of the concept of the Common Roots of Three Religions ("tam giáo đồng nguyên") as religious unification or unicity.
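As a hedged illustration of the kind of analysis described (not the authors' code or data), the sketch below generates synthetic "stories" with invented binary indicators for Taoist values (vt) and Confucian values (vc), plants an interaction effect of 0.86 to echo the reported β_{VT·VC}, and recovers the coefficients by MAP estimation of a logistic regression under a Gaussian prior, a simple stand-in for a full Bayesian MCMC fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical binary codings: does a folktale display Taoist / Confucian values?
vt = rng.integers(0, 2, n)
vc = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), vt, vc, vt * vc])  # intercept, VT, VC, VT*VC

true_beta = np.array([-1.0, 0.3, 0.5, 0.86])  # interaction set to the reported 0.86
y = rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))

# MAP estimate: logistic log-likelihood plus a Gaussian prior (variance 10),
# maximised by plain gradient ascent.
beta = np.zeros(4)
for _ in range(5000):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.001 * (X.T @ (y - mu) - beta / 10.0)

print(beta)  # beta[3] is the recovered VT*VC interaction
```

With enough synthetic stories the recovered interaction coefficient sits near the planted value, which is the sense in which such a coefficient "helps predict" a Confucian key message.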
Second, the concept of "cultural additivity" could help explain many interesting socio-cultural phenomena, namely the absence of religious intolerance and extremism in Vietnamese society, outrageous cases of sophistry in education, low productivity in creative endeavors such as science and technology, and misleading branding strategies in business. We are aware that our results are only preliminary, and more studies, both theoretical and empirical, must be carried out to give a full account of the explanatory reach of "cultural additivity".
In his classic book The Foundations of Statistics, Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under infinite countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts.
The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of "idealized agent" that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
Any theory of confirmation must answer the following question: what is the purpose of its conception of confirmation for scientific inquiry? In this article, we argue that no Bayesian conception of confirmation can be used for its primary intended purpose, which we take to be making a claim about how worthy of belief various hypotheses are. Then we consider a different use to which Bayesian confirmation might be put, namely, determining the epistemic value of experimental outcomes, and thus to decide which experiments to carry out. Interestingly, Bayesian confirmation theorists rule out that confirmation be used for this purpose. We conclude that Bayesian confirmation is a means with no end. 1 Introduction 2 Bayesian Confirmation Theory 3 Bayesian Confirmation and Belief 4 Confirmation and the Value of Experiments 5 Conclusion.
Given a few assumptions, the probability of a conjunction is raised, and the probability of its negation is lowered, by conditionalising upon one of the conjuncts. This simple result appears to bring Bayesian confirmation theory into tension with the prominent dogmatist view of perceptual justification – a tension often portrayed as a kind of 'Bayesian objection' to dogmatism. In a recent paper, David Jehle and Brian Weatherson observe that, while this crucial result holds within classical probability theory, it fails within intuitionistic probability theory. They conclude that the dogmatist who is willing to take intuitionistic logic seriously can make a convincing reply to the Bayesian objection. In this paper, I argue that this conclusion is premature – the Bayesian objection can survive the transition from classical to intuitionistic probability, albeit in a slightly altered form. I shall conclude with some general thoughts about what the Bayesian objection to dogmatism does and doesn't show.
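The classical result in question is elementary: since P(A ∧ B | A) = P(A ∧ B) / P(A), conditionalising on the conjunct A raises the probability of the conjunction whenever P(A) < 1 and P(A ∧ B) > 0. A toy check over a four-world space with made-up probabilities:

```python
from fractions import Fraction as F

# Four-world toy space; each world records whether A and B hold.
worlds = {('A', 'B'): F(2, 10), ('A', 'notB'): F(3, 10),
          ('notA', 'B'): F(3, 10), ('notA', 'notB'): F(2, 10)}

def prob(event):
    return sum(p for w, p in worlds.items() if event(w))

is_a = lambda w: w[0] == 'A'
is_conj = lambda w: w == ('A', 'B')

p_conj = prob(is_conj)                                        # P(A & B)
p_conj_given_a = prob(lambda w: is_conj(w) and is_a(w)) / prob(is_a)

print(p_conj, p_conj_given_a)  # 1/5 before conditionalising, 2/5 after
```

Since P(¬(A ∧ B) | A) = 1 − P(A ∧ B | A), the probability of the negation drops correspondingly.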
According to Hempel's paradox, evidence (E) that an object is a nonblack nonraven confirms the hypothesis (H) that every raven is black. According to the standard Bayesian solution, E does confirm H but only to a minute degree. This solution relies on the almost never explicitly defended assumption that the probability of H should not be affected by evidence that an object is nonblack. I argue that this assumption is implausible, and I propose a way out for Bayesians. Introduction; Hempel's paradox, the standard Bayesian solution, and the disputed assumption; Attempts to defend the disputed assumption; Attempts to refute the disputed assumption; A way out for Bayesians; Conclusion.
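The "minute degree" of confirmation in the standard Bayesian solution can be made concrete with a toy model (all counts below are invented for illustration): compare H, "all ravens are black", with a rival on which exactly one raven is non-black, and let the evidence be that a randomly sampled non-black object turns out to be a non-raven.

```python
from fractions import Fraction as F

# Invented counts: 1000 objects, 10 ravens, and 500 non-black non-ravens.
# H: all ravens are black.  H_alt: exactly one raven is non-black.
# Evidence E: a randomly sampled NON-BLACK object turns out to be a non-raven.
nonblack_nonravens = 500

p_e_given_h = F(1, 1)          # under H every non-black object is a non-raven
p_e_given_h_alt = F(nonblack_nonravens, nonblack_nonravens + 1)

prior_h = F(1, 2)
posterior_h = (p_e_given_h * prior_h) / (
    p_e_given_h * prior_h + p_e_given_h_alt * (1 - prior_h))

print(posterior_h)  # 501/1001, barely above the 1/2 prior
```

Because non-ravens vastly outnumber ravens, the likelihood ratio is only 501/500, so the posterior moves from 1/2 to 501/1001: confirmation, but of a minute degree.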
Various sexist and racist beliefs ascribe certain negative qualities to people of a given sex or race. Epistemic allies are people who think that in normal circumstances rationality requires the rejection of such sexist and racist beliefs upon learning of many counter-instances, i.e. members of these groups who lack the target negative quality. Accordingly, epistemic allies think that those who give up their sexist or racist beliefs in such circumstances are rationally responding to their evidence, while those who do not are irrational in failing to respond to their evidence by giving up their belief. This is a common view among philosophers and non-philosophers. But epistemic allies face three problems. First, sexist and racist beliefs often involve generic propositions. These sorts of propositions are notoriously resilient in the face of counter-instances, since the truth of generic propositions is typically compatible with the existence of many counter-instances. Second, background beliefs can enable one to explain away counter-instances to one's beliefs. So even when counter-instances might otherwise constitute strong evidence against the truth of the generic, the ability to explain the counter-instances away with relevant background beliefs can make it rational to retain one's belief in the generic despite the existence of many counter-instances. The final problem is that the kinds of judgements epistemic allies want to make about the irrationality of sexist and racist beliefs upon encountering many counter-instances are at odds with the judgements that we are inclined to make in seemingly parallel cases about the rationality of non-sexist and non-racist generic beliefs. Thus epistemic allies may end up having to give up on plausible normative supervenience principles. Altogether, these problems pose a significant prima facie challenge to epistemic allies.
In what follows I explain how a Bayesian approach to the relation between evidence and belief can neatly untie these knots. The basic story is one of defeat: Bayesianism explains when one is required to become increasingly confident in chance propositions, and confidence in chance propositions can make belief in corresponding generics irrational.
The objective Bayesian view of proof (or logical probability, or evidential support) is explained and defended: that the relation of evidence to hypothesis (in legal trials, science etc) is a strictly logical one, comparable to deductive logic. This view is distinguished from the thesis, which had some popularity in law in the 1980s, that legal evidence ought to be evaluated using numerical probabilities and formulas. While numbers are not always useful, a central role is played in uncertain reasoning by the ‘proportional syllogism’, or argument from frequencies, such as ‘nearly all aeroplane flights arrive safely, so my flight is very likely to arrive safely’. Such arguments raise the ‘problem of the reference class’, arising from the fact that an individual case may be a member of many different classes in which frequencies differ. For example, if 15 per cent of swans are black and 60 per cent of fauna in the zoo is black, what should I think about the likelihood of a swan in the zoo being black? The nature of the problem is explained, and legal cases where it arises are given. It is explained how recent work in data mining on the relevance of features for prediction provides a solution to the reference class problem.
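The swan example can be made precise: the two stated frequencies do not determine the frequency in their intersection. Below, two invented populations both satisfy "15 per cent of swans are black" and "60 per cent of zoo fauna are black", yet disagree completely about swans in the zoo; that underdetermination is exactly the reference class problem.

```python
def animals(n, **attrs):
    """n identical animals with the given swan/black/zoo attributes."""
    return [dict(attrs) for _ in range(n)]

# Population A: the zoo happens to contain only the black swans.
pop_a = (animals(15, swan=True,  black=True,  zoo=True)
       + animals(85, swan=True,  black=False, zoo=False)
       + animals(15, swan=False, black=True,  zoo=True)
       + animals(20, swan=False, black=False, zoo=True))

# Population B: the zoo happens to contain only non-black swans.
pop_b = (animals(15, swan=True,  black=True,  zoo=False)
       + animals(15, swan=True,  black=False, zoo=True)
       + animals(70, swan=True,  black=False, zoo=False)
       + animals(30, swan=False, black=True,  zoo=True)
       + animals(5,  swan=False, black=False, zoo=True))

def rate(pop, select):
    xs = [x for x in pop if select(x)]
    return sum(x['black'] for x in xs) / len(xs)

for pop in (pop_a, pop_b):
    print(rate(pop, lambda x: x['swan']),                 # 0.15 in both
          rate(pop, lambda x: x['zoo']),                  # 0.60 in both
          rate(pop, lambda x: x['swan'] and x['zoo']))    # 1.0 vs 0.0
```

Both reference classes are accurate, but the individual case (a swan in the zoo) belongs to each, and the classes give incompatible answers.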
We start with the ambition -- dating back to the early days of the semantic web -- of assembling a significant portion of human knowledge into a contradiction-free form using semantic web technology. We argue that this would not be desirable, because there are concepts, known as essentially contested concepts, whose definitions are contentious due to deep-seated ethical disagreements. Further, we argue that the nineteenth-century hermeneutical tradition has a great deal to say, both about the ambition, and about why it fails. We conclude with some remarks about statistics.
Probability updating via Bayes' rule often entails extensive informational and computational requirements. In consequence, relatively few practical applications of Bayesian adaptive control techniques have been attempted. This paper discusses an alternative approach to adaptive control, Bayesian in spirit, which shifts attention from the updating of probability distributions via transitional probability assessments to the direct updating of the criterion function, itself, via transitional utility assessments. Results are illustrated in terms of an adaptive reinvestment two-armed bandit problem.
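For contrast with the criterion-function approach the paper proposes, the standard probability-updating treatment of a two-armed Bernoulli bandit uses conjugate Beta posteriors revised by Bayes' rule after each pull. A minimal sketch (the success rates are invented, and Thompson sampling is used here simply as a familiar arm-selection rule):

```python
import random

random.seed(1)
true_p = [0.3, 0.7]             # unknown Bernoulli success rates (invented)
alpha = [1, 1]
beta = [1, 1]                   # Beta(1, 1) priors over each arm's rate

for _ in range(2000):
    # Thompson sampling: pull the arm whose posterior draw is largest
    draws = [random.betavariate(alpha[i], beta[i]) for i in (0, 1)]
    arm = draws.index(max(draws))
    reward = random.random() < true_p[arm]
    # Conjugate Bayes update of the pulled arm's Beta posterior
    alpha[arm] += reward
    beta[arm] += 1 - reward

means = [alpha[i] / (alpha[i] + beta[i]) for i in (0, 1)]
print(means)  # posterior means; the better arm attracts most of the pulls
```

Even this small example hints at the informational burden the paper targets: a full posterior must be carried and revised for every uncertain quantity, which is what motivates updating the criterion function directly instead.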
A group is often construed as a single agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated contribution "Groupthink", Russell et al. (2015) apply the Bayesian paradigm to groups by requiring group credences to undergo a Bayesian revision whenever new information is learnt, i.e., whenever the individual credences undergo a Bayesian revision based on this information. Bayesians should often strengthen this requirement by extending it to non-public or even private information (learnt by not all or just one individual), or to non-representable information (not corresponding to an event in the algebra on which credences are held). I propose a taxonomy of six kinds of 'group Bayesianism', which differ in the type of information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, and so on. Six corresponding theorems establish exactly how individual credences must (not) be aggregated such that the resulting group credences obey group Bayesianism of any given type, respectively. Aggregating individual credences through averaging is never permitted. One of the theorems – the one concerned with public representable information – is essentially Russell et al.'s central result (with minor corrections).
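Why averaging is never permitted can be checked in miniature: averaging individual credences and then conditionalising generally differs from conditionalising each credence and then averaging, so averaged group credences fail to undergo Bayesian revision even for public representable information. A small example with invented numbers:

```python
from fractions import Fraction as F

# Two individuals' credences over three worlds; E = {w1, w2}.
cred1 = [F(5, 10), F(3, 10), F(2, 10)]
cred2 = [F(2, 10), F(3, 10), F(5, 10)]

def condition_on_e(c):
    """Bayesian revision on E = the first two worlds."""
    total = c[0] + c[1]
    return [c[0] / total, c[1] / total, F(0)]

def average(c1, c2):
    return [(a + b) / 2 for a, b in zip(c1, c2)]

avg_then_cond = condition_on_e(average(cred1, cred2))
cond_then_avg = average(condition_on_e(cred1), condition_on_e(cred2))

print(avg_then_cond[0], cond_then_avg[0])  # 7/13 vs 41/80: the orders disagree
```

If the group credence is the average, learning E publicly forces it to the second profile, not the Bayesian revision of the first, which is the conflict the theorems generalise.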
Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem. Terms in OBCS, including ‘data collection’, ‘data transformation in statistics’, ‘data visualization’, ‘statistical data analysis’, and ‘drawing a conclusion based on data’, cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. By representing statistics-related terms and their relations in a rigorous fashion, OBCS facilitates standard data analysis and integration, and supports reproducible biological and clinical research.
According to the traditional Bayesian view of credence, its structure is that of precise probability, its objects are descriptive propositions about the empirical world, and its dynamics are given by conditionalization. Each of the three essays that make up this thesis deals with a different variation on this traditional picture. The first variation replaces precise probability with sets of probabilities. The resulting imprecise Bayesianism is sometimes motivated on the grounds that our beliefs should not be more precise than the evidence calls for. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In the first essay I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid without compromising the initial evidentialist motivation. The second variation replaces descriptive claims with moral claims as the objects of credence. I consider three standard arguments for probabilism with respect to descriptive uncertainty—representation theorem arguments, Dutch book arguments, and accuracy arguments—in order to examine whether such arguments can also be used to establish probabilism with respect to moral uncertainty. In the second essay, I argue that by and large they can, with some caveats. First, I don’t examine whether these arguments can be given sound non-cognitivist readings, and any conclusions therefore only hold conditional on cognitivism. Second, decision-theoretic representation theorems are found to be less convincing in the moral case, because there they implausibly commit us to thinking that intertheoretic comparisons of value are always possible. Third and finally, certain considerations may lead one to think that imprecise probabilism provides a more plausible model of moral epistemology.
The third variation considers whether, in addition to conditionalization, agents may also change their minds by becoming aware of propositions they had not previously entertained, and therefore not previously assigned any probability. More specifically, I argue that if we wish to make room for reflective equilibrium in a probabilistic moral epistemology, we must allow for awareness growth. In the third essay, I sketch the outline of such a Bayesian account of reflective equilibrium. Given that this account gives a central place to awareness growth, and that the rationality constraints on belief change by awareness growth are much weaker than those on belief change by conditionalization, it follows that the rationality constraints on the credences of agents who are seeking reflective equilibrium are correspondingly weaker.
In this chapter I analyse an objection to phenomenal conservatism to the effect that phenomenal conservatism is unacceptable because it is incompatible with Bayesianism. I consider a few responses to it and dismiss them as misguided or problematic. Then, I argue that this objection doesn’t go through because it rests on an implausible formalization of the notion of seeming-based justification. In the final part of the chapter, I investigate how seeming-based justification and justification based on one’s reflective belief that one has a seeming interact with one another.