In robustness analysis, hypotheses are supported to the extent that a result proves robust, and a result is robust to the extent that we detect it in diverse ways. But what precise sense of diversity is at work here? In this paper, I show that the formal explications of evidential diversity most often appealed to in work on robustness – which all draw in one way or another on probabilistic independence – fail to shed light on the notion of diversity relevant to robustness analysis. I close by briefly outlining a promising alternative approach inspired by Horwich’s (1982) eliminative account of evidential diversity.
Stochastic independence (SI) has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory, hence a property that any theory on the foundations of probability should be able to account for. Bayesian decision theory, which is one such theory, appears to be wanting in this respect. In Savage's classic treatment, postulates on preferences under uncertainty are shown to entail a subjective expected utility (SEU) representation, and this permits asserting only the existence and uniqueness of a subjective probability, regardless of its properties. What is missing is a preference postulate that would specifically connect with the SI property. The paper develops a version of Bayesian decision theory that fills this gap. In a framework of multiple sources of uncertainty, we introduce preference conditions that jointly entail the SEU representation and the property that the subjective probability in this representation treats the sources of uncertainty as being stochastically independent. We give two representation theorems of graded complexity to demonstrate the power of our preference conditions. Two sections of comments follow, one connecting the theorems with earlier results in Bayesian decision theory, and the other connecting them with the foundational discussion on SI in probability theory and the philosophy of probability. Appendices offer more technical material.
Suppose several individuals (e.g., experts on a panel) each assign probabilities to some events. How can these individual probability assignments be aggregated into a single collective probability assignment? This article reviews several proposed solutions to this problem. We focus on three salient proposals: linear pooling (the weighted or unweighted linear averaging of probabilities), geometric pooling (the weighted or unweighted geometric averaging of probabilities), and multiplicative pooling (where probabilities are multiplied rather than averaged). We present axiomatic characterisations of each class of pooling functions (most of them classic, but one new) and argue that linear pooling can be justified "procedurally" but not "epistemically", while the other two pooling methods can be justified "epistemically". The choice between them, in turn, depends on whether the individuals' probability assignments are based on shared information or on private information. We conclude by mentioning a number of other pooling methods.
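For concreteness, the three pooling rules this abstract names can be sketched numerically. This is an illustrative sketch of my own, not code from the article: the expert probabilities and weights are hypothetical, and the geometric and multiplicative rules are renormalised over the event and its complement, a common simplification when pooling a single event.

```python
def linear_pool(ps, ws):
    # weighted arithmetic average of the individual probabilities
    return sum(w * p for p, w in zip(ps, ws))

def geometric_pool(ps, ws):
    # weighted geometric average, renormalised over {event, complement}
    num, den = 1.0, 1.0
    for p, w in zip(ps, ws):
        num *= p ** w
        den *= (1 - p) ** w
    return num / (num + den)

def multiplicative_pool(ps):
    # probabilities multiplied and renormalised; appropriate when the
    # experts' assignments reflect private (independent) information
    num, den = 1.0, 1.0
    for p in ps:
        num *= p
        den *= 1 - p
    return num / (num + den)

experts = [0.6, 0.8]
weights = [0.5, 0.5]
lin = linear_pool(experts, weights)        # ≈ 0.70
geo = geometric_pool(experts, weights)     # ≈ 0.71
mul = multiplicative_pool(experts)         # ≈ 0.86: private evidence compounds
```

Note how multiplicative pooling yields a collective probability more extreme than any individual's, which is the intended behaviour when each expert's probability reflects separate evidence.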
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (1) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (2) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
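A standard toy example (my illustration, not the authors'): let X be a bit with unknown bias p, let H be a fair coin tossed independently of X, and let E = (H xor X). For every candidate p, P(E) = 1/2 exactly, yet P(E | heads) = 1 - p and P(E | tails) = p, so the precise prior estimate 1/2 dilates to the whole interval [0, 1] whichever way the coin lands.

```python
from fractions import Fraction as F

def p_E(p):
    # unconditionally: P(E) = P(heads)P(X=0) + P(tails)P(X=1) = 1/2 for all p
    return F(1, 2) * (1 - p) + F(1, 2) * p

def p_E_given_heads(p):
    return 1 - p

def p_E_given_tails(p):
    return p

# the credal set: one distribution per candidate bias p
credal_set = [F(i, 100) for i in range(101)]

prior = {p_E(p) for p in credal_set}                 # collapses to {1/2}
heads = [p_E_given_heads(p) for p in credal_set]
interval = (min(heads), max(heads))                  # dilates to (0, 1)
```

Exact rationals (`fractions.Fraction`) are used so the unconditional estimate really is the single point 1/2 rather than a cloud of floating-point values.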
Two recent and influential papers, van Rooij 2007 and Lassiter 2012, propose solutions to the proviso problem that make central use of related notions of independence—qualitative in the first case, probabilistic in the second. We argue here that, if these solutions are to work, they must incorporate an implicit assumption about presupposition accommodation, namely that accommodation does not interfere with existing qualitative or probabilistic independencies. We show, however, that this assumption is implausible, as updating beliefs with conditional information does not in general preserve independencies. We conclude that the approach taken by van Rooij and Lassiter does not succeed in resolving the proviso problem.
How can different individuals' probability functions on a given sigma-algebra of events be aggregated into a collective probability function? Classic approaches to this problem often require 'event-wise independence': the collective probability for each event should depend only on the individuals' probabilities for that event. In practice, however, some events may be 'basic' and others 'derivative', so that it makes sense first to aggregate the probabilities for the former and then to let these constrain the probabilities for the latter. We formalize this idea by introducing a 'premise-based' approach to probabilistic opinion pooling, and show that, under a variety of assumptions, it leads to linear or neutral opinion pooling on the 'premises'. This paper is the second of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
Robustness arguments hold that hypotheses are more likely to be true when they are confirmed by diverse kinds of evidence. Robustness arguments require the confirming evidence to be independent. We identify two kinds of independence appealed to in robustness arguments: ontic independence (OI)—when the multiple lines of evidence depend on different materials, assumptions, or theories—and probabilistic independence. Many assume that OI is sufficient for a robustness argument to be warranted. However, we argue that, as typically construed, OI is not a sufficient independence condition for warranting robustness arguments. We show that OI evidence can collectively confirm a hypothesis to a lower degree than individual lines of evidence, contrary to the standard assumption undergirding usual robustness arguments. We employ Bayesian networks to represent the ideal empirical scenario for a robustness argument and a variety of ways in which empirical scenarios can fall short of this ideal.
Background independence begins life as an informal property that a physical theory might have, often glossed as 'doesn't posit a fixed spacetime background'. Interest in trying to offer a precise account of background independence has been sparked by the pronouncements of several theorists working on quantum gravity that background independence embodies in some sense an essential discovery of the General Theory of Relativity, and a feature we should strive to carry forward to future physical theories. This paper has two goals. The first is to investigate what a world must be like in order to be truly described by a background independent theory given extant accounts of background independence. The second is to argue that there are no non-empirical reasons to be more confident in theories that satisfy extant accounts of background independence than in theories that don't. The paper concludes by drawing a general moral about a way in which focussing primarily on mathematical formulations of our physical theories can adversely affect debates in the metaphysics of physics.
Probabilistic support is not transitive. There are cases in which x probabilistically supports y, i.e., Pr(y | x) > Pr(y); y, in turn, probabilistically supports z; and yet it is not the case that x probabilistically supports z. Tomoji Shogenji, though, establishes a condition for transitivity in probabilistic support, that is, a condition such that, for any x, y, and z, if Pr(y | x) > Pr(y), Pr(z | y) > Pr(z), and the condition in question is satisfied, then Pr(z | x) > Pr(z). I argue for a second and weaker condition for transitivity in probabilistic support. This condition, or the principle involving it, makes it easier (than does the condition Shogenji provides) to establish claims of probabilistic support, and has the potential to play an important role in at least some areas of philosophy.
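The failure of transitivity is easy to exhibit on a finite space. The following worked counterexample is my own (not Shogenji's or the author's): on a fair die, take x = "even", y = "2 or 4", z = "4 or 5"; then x supports y and y supports z, but the support does not transmit from x to z.

```python
from fractions import Fraction

omega = set(range(1, 7))   # a fair six-sided die

def pr(event, given=omega):
    # Pr(event | given) under the uniform distribution on omega
    return Fraction(len(event & given), len(given))

x = {2, 4, 6}   # "even"
y = {2, 4}
z = {4, 5}

assert pr(y, x) > pr(y)        # 2/3 > 1/3: x supports y
assert pr(z, y) > pr(z)        # 1/2 > 1/3: y supports z
assert not pr(z, x) > pr(z)    # Pr(z | x) = 1/3 = Pr(z): support fails
```

Exact rationals keep the comparisons free of rounding artefacts, so the final inequality fails for the right reason: the two sides are exactly equal.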
The aim of this dissertation is to comprehensively study the various robustness arguments proposed in the literature from Levins to Lloyd, as well as the opposition offered to them, and to inquire into the degree of epistemic virtue they confer on model prediction results in climate science and modeling. Another critical issue this dissertation examines is the actual epistemic notion that is operational when scientists and philosophers appeal to robustness. In attempting to explicate this idea, the discussion turns to arguments provided by Schupbach, who completely rejects probabilistic independence in favour of explanatory reasoning; Stegenga and Menon, who still see some value in probabilistic independence; and Winsberg, who applies Schupbach’s account to climate science, going beyond models to involve multi-modal evidence. After an exhaustive discussion of these arguments, this dissertation attempts to provide a thorough and updated notion of robustness in climate modeling and climate science.
Intuitively, a classical field theory is background-independent if the structure required to make sense of its equations is itself subject to dynamical evolution, rather than being imposed ab initio. The aim of this paper is to provide an explication of this intuitive notion. Background-independence is not a formal property of theories: the question whether a theory is background-independent depends upon how the theory is interpreted. Under the approach proposed here, a theory is fully background-independent relative to an interpretation if each physical possibility corresponds to a distinct spacetime geometry; and it falls short of full background-independence to the extent that this condition fails.
How should we understand the notion of moral objectivity? Metaethical positions that vindicate morality’s objective appearance are often associated with moral realism. On a realist construal, moral objectivity is understood in terms of mind-, stance-, or attitude-independence. But realism is not the only game in town for moral objectivists. On an antirealist construal, morality’s objective features are understood in virtue of our attitudes. In this paper I aim to develop this antirealist construal of moral objectivity in further detail, and to make its metaphysical commitments explicit. I do so by building on Sharon Street’s version of “Humean Constructivism”. Instead of the realist notion of attitude-independence, the antirealist account of moral objectivity that I articulate centres on the notion of standpoint-invariance. While constructivists have been criticized for compromising on the issue of moral objectivity, I make a preliminary case for the thesis that, armed with the notion of standpoint-invariance, constructivists have resources to vindicate an account of objectivity with just the right strength, given the commitments of ordinary moral thought and practice. In support of this thesis I highlight recent experimental findings about folk moral objectivism. Empirical observations about the nature of moral discourse have traditionally been taken to give prima facie support to moral realism. I argue, by contrast, that from what we can tell from our current experimental understanding, antirealists can capture the commitments of ordinary discourse at least as well as realists can.
In cognitive science, the concept of dissociation has been central to the functional individuation and decomposition of cognitive systems. Setting aside debates about the legitimacy of inferring the existence of dissociable systems from ‘behavioural’ dissociation data, the main idea behind the dissociation approach is that two cognitive systems are dissociable, and thus viewed as distinct, if each can be damaged, or impaired, without affecting the other system’s functions. In this article, I propose a notion of functional independence that does not require dissociability, and describe an approach to the functional decomposition and modelling of cognitive systems that complements the dissociation approach. I show that highly integrated cognitive and neurocognitive systems can be decomposed into non-dissociable but functionally independent components, and argue that this approach can provide a general account of cognitive specialization in terms of a stable structure–function relationship.
How can different individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events -- the agenda -- is a sigma-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. One might be interested in the probability of rain and that of an interest-rate increase, but not in the probability of rain or an interest-rate increase. We characterize linear pooling and neutral pooling for general agendas, with classic results as special cases for agendas that are sigma-algebras. As an illustrative application, we also consider probabilistic preference aggregation. Finally, we compare our results with existing results on binary judgment aggregation and Arrovian preference aggregation. This paper is the first of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
The explanatory role of natural selection is one of the long-term debates in evolutionary biology. Nevertheless, consensus has been elusive because of conceptual confusions and the absence of a unified, formal causal model that integrates different explanatory scopes of natural selection. In this study we attempt to examine two questions: (i) What can the theory of natural selection explain? and (ii) Is there a causal or explanatory model that integrates all natural selection explananda? For the first question, we argue that five explananda have been assigned to the theory of natural selection and that four of them may be actually considered explananda of natural selection. For the second question, we claim that a probabilistic conception of causality and the statistical relevance concept of explanation are both good models for understanding the explanatory role of natural selection. We review the biological and philosophical disputes about the explanatory role of natural selection and formalize some explananda in probabilistic terms using classical results from population genetics. Most of these explananda have been discussed in philosophical terms but some of them have been mixed up and confused. We analyze and set the limits of these problems.
In their article 'Causes and Explanations: A Structural-Model Approach. Part I: Causes', Joseph Halpern and Judea Pearl draw upon structural equation models to develop an attractive analysis of 'actual cause'. Their analysis is designed for the case of deterministic causation. I show that their account can be naturally extended to provide an elegant treatment of probabilistic causation.
An important principle in the epistemology of disagreement is Independence, which states, “In evaluating the epistemic credentials of another’s expressed belief about P, in order to determine how (or whether) to modify my own belief about P, I should do so in a way that doesn’t rely on the reasoning behind my initial belief about P” (Christensen 2011, 1-2). I present a series of new counterexamples to both Independence and also a revised, more widely applicable, version of it. I then formulate and endorse a third version of Independence that avoids those counterexamples. Lastly, I show how this third version of Independence reveals two new ways one may remain steadfast in the face of two real life disagreements: one about God’s existence and one about moral realism.
A first-order theory T has the Independence Property provided deduction of a statement of type (quantifiers) (P -> (P1 or P2 or ... or Pn)) in T implies that (quantifiers) (P -> Pi) can be deduced in T for some i, 1 <= i <= n. Variants of this property have been noticed for some time in logic programming and in linear programming. We show that a first-order theory has the Independence Property for the class of basic formulas provided it can be axiomatized with Horn sentences. The existence of free models is a useful intermediate result. The Independence Property is also a tool to decide that a sentence cannot be deduced. We illustrate this with the case of the classical Caratheodory theorem for Pasch-Peano geometries.
I set up two axiomatic theories of inductive support within the framework of Kolmogorovian probability theory. I call these theories ‘Popperian theories of inductive support’ because I think that their specific axioms express the core meaning of the word ‘inductive support’ as used by Popper (and, presumably, by many others, including some inductivists). As is to be expected from Popperian theories of inductive support, the main theorem of each of them is an anti-induction theorem, the stronger one of them saying, in fact, that the relation of inductive support is identical with the empty relation. It seems to me that an axiomatic treatment of the idea(s) of inductive support within orthodox probability theory could be worthwhile for at least three reasons. Firstly, an axiomatic treatment demands from the builder of a theory of inductive support to state clearly in the form of specific axioms what he means by ‘inductive support’. Perhaps the discussion of the new anti-induction proofs of Karl Popper and David Miller would have been more fruitful if they had given an explicit definition of what inductive support is or should be. Secondly, an axiomatic treatment of the idea(s) of inductive support within Kolmogorovian probability theory might be accommodating to those philosophers who do not completely trust Popperian probability theory for having theorems which orthodox Kolmogorovian probability theory lacks; a transparent derivation of anti-induction theorems within a Kolmogorovian frame might bring additional persuasive power to the original anti-induction proofs of Popper and Miller, developed within the framework of Popperian probability theory. Thirdly, one of the main advantages of the axiomatic method is that it facilitates criticism of its products: the axiomatic theories. On the one hand, it is much easier than usual to check whether those statements which have been distinguished as theorems really are theorems of the theory under examination.
On the other hand, after we have convinced ourselves that these statements are indeed theorems, we can take a critical look at the axioms—especially if we have a negative attitude towards one of the theorems. Since anti-induction theorems are not popular at all, the adequacy of some of the axioms they are derived from will certainly be doubted. If doubt should lead to a search for alternative axioms, sheer negative attitudes might develop into constructive criticism and even lead to new discoveries.

I proceed as follows. In section 1, I start with a small but sufficiently strong axiomatic theory of deductive dependence, closely following Popper and Miller (1987). In section 2, I extend that starting theory to an elementary Kolmogorovian theory of unconditional probability, which I extend, in section 3, to an elementary Kolmogorovian theory of conditional probability, which in its turn gets extended, in section 4, to a standard theory of probabilistic dependence, which also gets extended, in section 5, to a standard theory of probabilistic support, the main theorem of which will be a theorem about the incompatibility of probabilistic support and deductive independence. In section 6, I extend the theory of probabilistic support to a weak Popperian theory of inductive support, which I extend, in section 7, to a strong Popperian theory of inductive support. In section 8, I reconsider Popper's anti-inductivist theses in the light of the anti-induction theorems. I conclude the paper with a short discussion of possible objections to our anti-induction theorems, paying special attention to the topic of deductive relevance, which has so far been neglected in the discussion of the anti-induction proofs of Popper and Miller.
I develop a probabilistic account of coherence, and argue that at least in certain respects it is preferable to (at least some of) the main extant probabilistic accounts of coherence: (i) Igor Douven and Wouter Meijs’s account, (ii) Branden Fitelson’s account, (iii) Erik Olsson’s account, and (iv) Tomoji Shogenji’s account. Further, I relate the account to an important, but little discussed, problem for standard varieties of coherentism, viz., the “Problem of Justified Inconsistent Beliefs”.
Starting from a recent paper by S. Kaufmann, we introduce a notion of conjunction of two conditional events and then we analyze it in the setting of coherence. We give a representation of the conjoined conditional and we show that this new object is a conditional random quantity, whose set of possible values normally contains the probabilities assessed for the two conditional events. We examine some cases of logical dependencies, where the conjunction is a conditional event; moreover, we give the lower and upper bounds on the conjunction. We also examine an apparent paradox concerning stochastic independence which can actually be explained in terms of uncorrelation. We briefly introduce the notions of disjunction and iterated conditioning and we show that the usual probabilistic properties still hold.
A new version of quantum theory is proposed, according to which probabilistic events occur whenever new stationary or bound states are created as a result of inelastic collisions. The new theory recovers the experimental success of orthodox quantum theory, but differs from the orthodox theory for as yet unperformed experiments.
As I head home from work, I’m not sure whether my daughter’s new bike is green, and I’m also not sure whether I’m on drugs that distort my color perception. One thing that I am sure about is that my attitudes towards those possibilities are evidentially independent of one another, in the sense that changing my confidence in one shouldn’t affect my confidence in the other. When I get home and see the bike it looks green, so I increase my confidence that it is green. But something else has changed: now an increase in my confidence that I’m on color-drugs would undermine my confidence that the bike is green. Jonathan Weisberg and Jim Pryor argue that the preceding story is problematic for standard Bayesian accounts of perceptual learning. Due to the ‘rigidity’ of Conditionalization, a negative probabilistic correlation between two propositions cannot be introduced by updating on one of them. Hence if my beliefs about my own color-sobriety start out independent of my beliefs about the color of the bike, then they must remain independent after I have my perceptual experience and update accordingly. Weisberg takes this to be a reason to reject Conditionalization. I argue that this conclusion is too pessimistic: Conditionalization is only part of the Bayesian story of perceptual learning, and the other part needn’t preserve independence. Hence Bayesian accounts of perceptual learning are perfectly consistent with potential underminers for perceptual beliefs.
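The 'rigidity' point can be checked with a small computation. This is a minimal sketch with made-up numbers, not the author's model: Jeffrey conditioning on the partition {G, not-G} ("the bike is green") holds P(D | G) and P(D | not-G) fixed, so if the color-drug proposition D starts out independent of G, it remains independent however confident the experience makes me that G.

```python
from fractions import Fraction as F

def jeffrey_update(joint, new_pG):
    # joint maps (g, d) truth-value pairs to prior probabilities;
    # Jeffrey conditioning rescales within each cell of the partition {G, not-G},
    # leaving the conditional probabilities given G and given not-G rigid
    pG = joint[(True, True)] + joint[(True, False)]
    return {(g, d): p * (new_pG / pG if g else (1 - new_pG) / (1 - pG))
            for (g, d), p in joint.items()}

# prior: G and D independent, with P(G) = 1/2 and P(D) = 1/10
prior = {(g, d): F(1, 2) * (F(1, 10) if d else F(9, 10))
         for g in (True, False) for d in (True, False)}

# the perceptual experience drives P(G) up to 9/10
post = jeffrey_update(prior, F(9, 10))

pD = post[(True, True)] + post[(False, True)]
pD_given_G = post[(True, True)] / (post[(True, True)] + post[(True, False)])
# pD == pD_given_G == 1/10: independence survives the update
```

Ordinary Conditionalization is the limiting case new_pG = 1, so the same rigidity holds there, which is exactly the feature Weisberg and Pryor press on.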
A central feature of ordinary moral thought is that moral judgment is mind-independent in the following sense: judging something to be morally wrong does not thereby make it morally wrong. To deny this would be to accept a form of subjectivism. Neil Sinclair (2008) makes a novel attempt to show how expressivism is simultaneously committed to (1) an understanding of moral judgments as expressions of attitudes and (2) the rejection of subjectivism. In this paper, I discuss Sinclair’s defense of anti-subjectivist moral mind-independence on behalf of the expressivist, and I argue that the account does not fully succeed. An examination of why it does not is instructive, and it reveals a fundamental dilemma for the expressivist. I offer a suggestion for how the expressivist might respond to the dilemma and so uphold Sinclair’s defense.
In a recent paper in this Journal (San Pedro), I formulated a conjecture relating Measurement Independence and Parameter Independence, in the context of common cause explanations of EPR correlations. My conjecture suggested that a violation of Measurement Independence would entail a violation of Parameter Independence as well. Leszek Wroński has shown that conjecture to be false. In this note, I review Wroński’s arguments and agree with him on the fate of the conjecture. I argue that what is interesting about the conjecture, however, is not whether it is true or false in itself, but the reasons for the actual verdict, and their implications regarding locality.
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
Recent work by Peijnenburg, Atkinson, and Herzberg suggests that infinitists who accept a probabilistic construal of justification can overcome significant challenges to their position by attending to mathematical treatments of infinite probabilistic regresses. In this essay, it is argued that care must be taken when assessing the significance of these formal results. Though valuable lessons can be drawn from these mathematical exercises (many of which are not disputed here), the essay argues that it is entirely unclear that the form of infinitism that results meets a basic requirement: namely, providing an account of infinite chains of propositions qua reasons made available to agents.
There is a long-standing debate in epistemology on the structure of justification. Some recent work in formal epistemology promises to shed some new light on that debate. I have in mind here some recent work by David Atkinson and Jeanne Peijnenburg, hereafter “A&P”, on infinite regresses of probabilistic support. A&P show that there are probability distributions defined over an infinite set of propositions {p1, p2, p3, ...} such that pi is probabilistically supported by pi+1 for all i and p1 has a high probability. Let this result be “APR”. A&P oftentimes write as though they believe that APR runs counter to foundationalism. This makes sense, since there is some prima facie plausibility in the idea that APR runs counter to foundationalism, and since some prominent foundationalists argue for theses inconsistent with APR. I argue, though, that in fact APR does not run counter to foundationalism. I further argue that there is a place in foundationalism for infinite regresses of probabilistic support.
There is a substantial class of collective decision problems whose successful solution requires interdependence among decision makers at the agenda-setting stage and independence at the stage of choice. We define this class of problems and describe and apply a search-and-decision mechanism theoretically modeled in the context of honeybees and identified in earlier empirical work in biology. The honeybees’ mechanism has useful implications for mechanism design in human institutions, including courts, legislatures, executive appointments, research and development in firms, and basic research in the sciences. Our paper offers a fresh perspective on the idea of “biomimicry” in institutional design and raises the possibility of comparative institutional analysis across species.
I shall defend the view that the experience of resistance gives us a direct phenomenal access to the mind-independence of perceptual objects. In the first part, I address an objection against the very possibility of experiencing mind-independence. The possibility of an experience of mind-independence being secured, I argue in the second part that the experience of resistance is the kind of experience by which we access mind-independence.
The aim of the paper is to develop general criteria of argumentative validity and adequacy for probabilistic arguments on the basis of the epistemological approach to argumentation. In this approach, as in most other approaches to argumentation, probabilistic arguments have been neglected somewhat. Nonetheless, criteria for several special types of probabilistic arguments have been developed, in particular by Richard Feldman and Christoph Lumer. In the first part (sects. 2-5) the epistemological basis of probabilistic arguments is discussed. With regard to the philosophical interpretation of probabilities a new subjectivist, epistemic interpretation is proposed, which identifies probabilities with tendencies of evidence (sect. 2). After drawing the conclusions of this interpretation with respect to the syntactic features of the probability concept, e.g. one variable referring to the data base (sect. 3), the justification of basic probabilities (priors) by judgements of relative frequency (sect. 4) and the justification of derivative probabilities by means of the probability calculus are explained (sect. 5). The core of the paper is the definition of '(argumentatively) valid derivative probabilistic arguments', which provides exact conditions for epistemically good probabilistic arguments, together with conditions for the adequate use of such arguments for the aim of rationally convincing an addressee (sect. 6). Finally, some measures for improving the applicability of probabilistic reasoning are proposed (sect. 7).
Actual causes - e.g. Suzy's being exposed to asbestos - often bring about their effects - e.g. Suzy's suffering mesothelioma - probabilistically. I use probabilistic causal models to tackle one of the thornier difficulties for traditional accounts of probabilistic actual causation: namely probabilistic preemption.
We are told by philosophers that photographs are a distinct category of image because the photographic process is mind-independent. Furthermore, that the experience of viewing a photograph has a special status, justified by a viewer’s knowledge that the photographic process is mind-independent. Versions of these ideas are central to discussions of photography in both the philosophy of art and epistemology and have far-reaching implications for science, forensics and documentary journalism. Mind-independence (sometimes ‘belief independence’) is a term employed to highlight what is important in the idea that photographs can be produced naturally, mechanically, accidentally or automatically. Insofar as the process is physical, natural, mechanical or causal it can occur without human agency or intervention, entirely in the absence of intentional states. Presented innocuously, the idea is that although photographs are dependent on natural or mechanical processes, they can be produced independently of human agency – particularly human beliefs. Presented in a stronger form, the claim is that even if human agency is heavily involved in the production process, the definitive features that make the photograph a photograph and determine its salient properties are nonetheless independent of human minds. In epistemic debates, mind-independence is viewed as essential for explaining why photographs occupy a distinct category among images and justifying a variety of claims about their privileged epistemic and affective status in science, forensics, popular culture and journalism. But, in the philosophy of art, claims about mind-.
Bilateralism is a theory of meaning according to which assertion and denial are independent speech acts. Bilateralism also proposes two coordination principles for assertion and denial. I argue that if assertion and denial are independent speech acts, they cannot be coordinated by the bilateralist principles.
Patrick Toner has recently criticized accounts of substance provided by Kit Fine, E. J. Lowe, and the author, accounts which say (to a first approximation) that substances cannot depend on things other than their own parts. On Toner’s analysis, the inclusion of this parts exception results in a disjunctive definition of substance rather than a unified account. In this paper (speaking only for myself, but in a way that would, I believe, support the other authors that Toner discusses), I first make clear what Toner’s criticism is, and then I respond to it. Including the parts exception is not the adding of a second condition but instead the creation of a new single condition. Since it is not the adding of a condition, the result is not disjunctive. Therefore, the objection fails.
Entanglement is one of the most striking features of quantum mechanics, and yet it is not specifically quantum. More specific to quantum mechanics is the connection between entanglement and thermodynamics, which leads to an identification between entropies and measures of pure state entanglement. Here we search for the roots of this connection, investigating the relation between entanglement and thermodynamics in the framework of general probabilistic theories. We first address the question whether an entangled state can be transformed into another by means of local operations and classical communication. Under two operational requirements, we prove a general version of the Lo-Popescu theorem, which lies at the foundations of the theory of pure-state entanglement. We then consider a resource theory of purity where free operations are random reversible transformations, modelling the scenario where an agent has limited control over the dynamics of a closed system. Our key result is a duality between the resource theory of entanglement and the resource theory of purity, valid for every physical theory where all processes arise from pure states and reversible interactions at the fundamental level. As an application of the main result, we establish a one-to-one correspondence between entropies and measures of pure bipartite entanglement and exploit it to define entanglement measures in the general probabilistic framework. In addition, we show a duality between the task of information erasure and the task of entanglement generation, whereby the existence of entropy sinks (systems that can absorb arbitrary amounts of information) becomes equivalent to the existence of entanglement sources (correlated systems from which arbitrary amounts of entanglement can be extracted).
The author revisits the Blue Bus Problem, a famous thought-experiment in law involving probabilistic proof, and presents simple Bayesian solutions to different versions of the blue bus problem. In addition, the author expresses his solutions in standard and visual formats, i.e. in terms of probabilities and natural frequencies.
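The Bayesian calculation described in this abstract can be sketched in both of the formats the author mentions, probabilities and natural frequencies. The base rate and witness reliability below are illustrative placeholders, not figures from the article:

```python
# Hypothetical numbers for a Blue Bus-style problem: a base rate of blue buses
# in the fleet, and a witness whose colour reports are correct with a fixed
# reliability.
def posterior_blue(base_rate, reliability):
    """P(blue | witness reports 'blue') via Bayes' theorem."""
    p_report_blue = base_rate * reliability + (1 - base_rate) * (1 - reliability)
    return base_rate * reliability / p_report_blue

def natural_frequencies(base_rate, reliability, population=1000):
    """The same computation expressed as counts in a reference population."""
    blue = round(population * base_rate)
    green = population - blue
    true_hits = round(blue * reliability)          # blue buses correctly reported blue
    false_hits = round(green * (1 - reliability))  # green buses misreported as blue
    return true_hits, false_hits, true_hits / (true_hits + false_hits)

print(posterior_blue(0.8, 0.9))        # ≈ 0.973 (probability format)
print(natural_frequencies(0.8, 0.9))   # 720 hits, 20 false alarms (frequency format)
```

The two formats agree, of course; the frequency version simply makes the false-alarm count visible, which is the usual pedagogical point of natural frequencies.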
Justification logics are constructive analogues of modal logics. They are often used as epistemic logics, particularly as models of evidentialist justification. However, in this role, justification logics are defective insofar as they represent justification with a necessity-like operator, whereas actual evidentialist justification is usually probabilistic. This paper first examines and rejects extant candidates for solving this problem: Milnikel’s Logic of Uncertain Justifications, Ghari’s Hájek–Pavelka-Style Justification Logics and a version of probabilistic justification logic developed by Kokkinis et al. It then proposes a new solution to the problem in the form of a justification logic that incorporates the essential features of both a fuzzy logic and a probabilistic logic.
In this issue of the Journal, Dundas et al. (Am J Epidemiol. 2014;180(2):197–207) apply a hitherto infrequent multilevel analytical approach: multiple membership multiple classification (MMMC) models. Specifically, by adopting a life-course approach, they use a multilevel regression with individuals cross-classified in different contexts (i.e., families, early schools, and neighborhoods) to investigate self-reported health and mental health in adulthood. They provide observational evidence suggesting the relevance of the early family environment for launching public health interventions in childhood in order to improve health in adulthood. In their analyses, the authors distinguish between specific contextual measures (i.e., the association between particular contextual characteristics and individual health) and general contextual measures (i.e., the share of the total interindividual heterogeneity in health that appears at each level). By doing so, they implicitly question traditional probabilistic risk factor epidemiology, including classical “neighborhood effects” studies. In fact, those studies use simple hierarchical structures and disregard the analysis of general contextual measures. The innovative MMMC approach properly responds to the call for a multilevel eco-epidemiology against a widespread probabilistic risk factor epidemiology. Risk factor epidemiology is not reduced to individual-level analyses only; it also embraces many current “multilevel analyses” that focus exclusively on contextual risk factors.
To understand something involves some sort of commitment to a set of propositions comprising an account of the understood phenomenon. Some take this commitment to be a species of belief; others, such as Elgin and I, take it to be a kind of cognitive policy. This paper takes a step back from debates about the nature of understanding and asks when this commitment involved in understanding is epistemically appropriate, or `acceptable' in Elgin's terminology. In particular, appealing to lessons from the lottery and preface paradoxes, it is argued that this type of commitment is sometimes acceptable even when it would be rational to assign arbitrarily low probabilities to the relevant propositions. This strongly suggests that the relevant type of commitment is sometimes acceptable in the absence of epistemic justification for belief, which in turn implies that understanding does not require justification in the traditional sense. The paper goes on to develop a new probabilistic model of acceptability, based on the idea that the maximally informative accounts of the understood phenomenon should be optimally probable. Interestingly, this probabilistic model ends up being similar in important ways to Elgin’s proposal to analyze the acceptability of such commitments in terms of ‘reflective equilibrium’.
Does perceptual consciousness require cognitive access? Ned Block argues that it does not. Central to his case are visual memory experiments that employ post-stimulus cueing—in particular, Sperling's classic partial report studies, change-detection work by Lamme and colleagues, and a recent paper by Bronfman and colleagues that exploits our perception of ‘gist’ properties. We argue contra Block that these experiments do not support his claim. Our reinterpretations differ from those of previous critics in also challenging a longstanding and common view of visual memory as involving declining capacity across a series of stores. We conclude by discussing the relation between probabilistic perceptual representations and phenomenal consciousness.
In the latter half of the twentieth century, philosophers of science have argued (implicitly and explicitly) that epistemically rational individuals might compose epistemically irrational groups and that, conversely, epistemically rational groups might be composed of epistemically irrational individuals. We call the conjunction of these two claims the Independence Thesis, as they together imply that methodological prescriptions for scientific communities and those for individual scientists might be logically independent of one another. We develop a formal model of scientific inquiry, define four criteria for individual and group epistemic rationality, and then prove that the four definitions diverge, in the sense that individuals will be judged rational when groups are not and vice versa. We conclude by explaining implications of the Independence Thesis for (i) descriptive history and sociology of science and (ii) normative prescriptions for scientific communities.
The success of the Bayesian approach to perception suggests probabilistic perceptual representations. But if perceptual representation is probabilistic, why doesn't normal conscious perception reflect the full probability distributions that the probabilistic point of view endorses? For example, neurons in MT/V5 that respond to the direction of motion are broadly tuned: a patch of cortex that is tuned to vertical motion also responds to horizontal motion, but when we see vertical motion, foveally, in good conditions, it does not look at all horizontal. This article argues that the best Bayesian approach to this problem does not require probabilistic representation.
The value of optimality modeling has long been a source of contention amongst population biologists. Here I present a view of the optimality approach as at once playing a crucial explanatory role and yet also depending on external sources of confirmation. Optimality models are not alone in facing this tension between their explanatory value and their dependence on other approaches; I suspect that the scenario is quite common in science. This investigation of the optimality approach thus serves as a case study, on the basis of which I suggest that there is a widely felt tension in science between explanatory independence and broad epistemic interdependence, and that this tension influences scientific methodology.
I use Plotinus to present absolute divine simplicity as the consequence of principles about metaphysical and explanatory priority to which most theists are already committed. I employ Phil Corkum’s account of ontological independence as independent status to present a new interpretation of Plotinus on the dependence of everything on the One. On this reading, if something else (whether an internal part or something external) makes you what you are, then you are ontologically dependent on it. I show that this account supports Plotinus’s claim that any entity with parts cannot be fully independent. In particular, I lay out Plotinus’s case for thinking that even a divine self-understanding intellect cannot be fully independent. I then argue that a weaker version of simplicity is not enough for the theist since priority monism meets the conditions of a moderate version of ontological independence just as well as a transcendent but complex ultimate being.
This paper offers a probabilistic treatment of the conditions for argument cogency as endorsed in informal logic: acceptability, relevance, and sufficiency. Treating a natural language argument as a reason-claim-complex, our analysis identifies content features of defeasible argument on which the RSA conditions depend, namely: change in the commitment to the reason, the reason’s sensitivity and selectivity to the claim, one’s prior commitment to the claim, and the contextually determined thresholds of acceptability for reasons and for claims. Results contrast with, and may indeed serve to correct, the informal understanding and applications of the RSA criteria concerning their conceptual dependence, their function as update-thresholds, and their status as obligatory rather than permissive norms, but also show how these formal and informal normative approaches can in fact align.
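One natural, purely illustrative probabilistic reading of the RSA conditions named in this abstract treats sensitivity and selectivity as the likelihoods P(R|C) and P(R|¬C); the thresholds and all numbers below are hypothetical placeholders, not the paper's own analysis:

```python
# Hypothetical rendering of the RSA (acceptability, relevance, sufficiency)
# conditions as probabilistic tests on a reason R offered for a claim C.
def updated_commitment(prior, sensitivity, selectivity):
    """Posterior commitment to the claim given the reason, via Bayes' theorem,
    reading sensitivity as P(R|C) and selectivity as P(R|~C)."""
    return (sensitivity * prior) / (sensitivity * prior
                                    + selectivity * (1 - prior))

def cogent(prior, sensitivity, selectivity, reason_commitment,
           reason_threshold=0.75, claim_threshold=0.75):
    acceptable = reason_commitment >= reason_threshold   # acceptability of R
    relevant = sensitivity != selectivity                # R changes commitment to C
    sufficient = (updated_commitment(prior, sensitivity, selectivity)
                  >= claim_threshold)                    # C crosses its threshold
    return acceptable and relevant and sufficient

# A well-supported reason that is sensitive and selective to the claim:
print(cogent(prior=0.5, sensitivity=0.9, selectivity=0.2,
             reason_commitment=0.9))  # True
```

On this reading the three criteria are conceptually linked through the same update, which is one way to make sense of the paper's point about their conceptual dependence and their function as update-thresholds.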
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present six new experiments and analyze data from four experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
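Pearl's (2000) three-step counterfactual procedure, which the model described above builds on, can be illustrated with a toy structural model; the sprinkler scenario and its probabilities are illustrative assumptions, not the authors' experimental materials:

```python
# Toy rendering of Pearl-style counterfactual inference: abduction, action,
# prediction. Structural equation: wet = rain or sprinkler.
def counterfactual_wet(p_rain, obs_sprinkler, obs_wet, do_sprinkler):
    """P(grass would be wet | observations, do(sprinkler := do_sprinkler))."""
    # Step 1 (abduction): update the exogenous variable `rain` on the evidence.
    worlds = [(True, p_rain), (False, 1 - p_rain)]
    consistent = [(r, w) for r, w in worlds if (r or obs_sprinkler) == obs_wet]
    total = sum(w for _, w in consistent)
    posterior = [(r, w / total) for r, w in consistent]
    # Step 2 (action): replace the sprinkler equation with the intervention,
    # leaving the abducted distribution over `rain` untouched (no backtracking).
    # Step 3 (prediction): recompute `wet` under the intervened model.
    return sum(w for r, w in posterior if (r or do_sprinkler))

# Observed: sprinkler on, grass wet. Had the sprinkler been off, the grass
# would have been wet only if it also rained, so the answer is just P(rain).
print(counterfactual_wet(p_rain=0.3, obs_sprinkler=True, obs_wet=True,
                         do_sprinkler=False))
```

The interventional step deliberately leaves the abducted beliefs about rain alone; the backtracking inferences the authors accommodate are exactly the cases where people instead revise upstream causes, which this bare three-step sketch does not capture.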
Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilised in everyday language. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, and so can be considered as a function of the semantics of the constituent concepts, or not. While the systematicity and productivity of language provide a strong argument in favor of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. Additionally, the principle of semantic compositionality is underspecified, which means that notions of both "strong" and "weak" compositionality appear in the literature. Rather than adjudicating between different grades of compositionality, the framework presented here contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. In addition, we suggest that the distinction between these is contextually sensitive. Compositionality is equated with a joint probability distribution modeling how the constituent concepts in the combination are interpreted. Marginal selectivity is introduced as a pivotal probabilistic constraint for the application of the Bell/CH and CHSH systems of inequalities. Non-compositionality is equated with a failure of marginal selectivity, or violation of either system of inequalities in the presence of marginal selectivity. This means that the conceptual combination cannot be modeled in a joint probability distribution, the variables of which correspond to how the constituent concepts are being interpreted. The formal analysis methods are demonstrated by applying them to an empirical illustration of twenty-four non-lexicalised conceptual combinations.
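The probabilistic machinery named in this abstract can be sketched as a marginal-selectivity check plus a CHSH-value computation over four joint distributions, one per pairing of interpretation conditions. This is a minimal illustration of the inequality test under assumed toy data, not the authors' full framework:

```python
# `dists[i][j]` is a hypothetical joint distribution over +/-1-valued
# interpretations of the two constituent concepts, under interpretation
# condition i for concept A and condition j for concept B.
def marginals_match(dists, tol=1e-9):
    """Marginal selectivity: A's marginal must not depend on B's condition,
    and vice versa."""
    for i in (0, 1):
        a_marg = [sum(p for (a, _), p in dists[i][j].items() if a == 1)
                  for j in (0, 1)]
        b_marg = [sum(p for (_, b), p in dists[j][i].items() if b == 1)
                  for j in (0, 1)]
        if abs(a_marg[0] - a_marg[1]) > tol or abs(b_marg[0] - b_marg[1]) > tol:
            return False
    return True

def correlation(dist):
    return sum(a * b * p for (a, b), p in dist.items())

def chsh_value(dists):
    """CHSH quantity |E11 + E12 + E21 - E22|; a value above 2 (given marginal
    selectivity) signals that no single joint distribution fits the data."""
    e = [[correlation(dists[i][j]) for j in (0, 1)] for i in (0, 1)]
    return abs(e[0][0] + e[0][1] + e[1][0] - e[1][1])

# Perfectly correlated interpretations in every condition: marginals are
# selective and the CHSH bound of 2 is not exceeded, so on this test the
# combination counts as compositional.
perfect = {(1, 1): 0.5, (-1, -1): 0.5, (1, -1): 0.0, (-1, 1): 0.0}
dists = [[dict(perfect) for _ in (0, 1)] for _ in (0, 1)]
print(marginals_match(dists), chsh_value(dists))  # True 2.0
```

Empirical interpretation data that violated either check would, per the abstract's criterion, count as non-compositional.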
We often have some reason to do actions insofar as they promote outcomes or states of affairs, such as the satisfaction of a desire. But what is it to promote an outcome? I defend a new version of 'probabilism about promotion'. According to Minimal Probabilistic Promotion, we promote some outcome when we make that outcome more likely than it would have been if we had done something else. This makes promotion easy and reasons cheap.
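Minimal Probabilistic Promotion, as stated above, amounts to a bare comparative condition; a one-line sketch with placeholder probabilities:

```python
# An act promotes an outcome iff the outcome is likelier given the act than
# it would have been given some alternative. Probabilities are placeholders.
def promotes(p_outcome_given_act, p_outcome_given_alt):
    return p_outcome_given_act > p_outcome_given_alt

# Buying a lottery ticket promotes winning, however slightly, which is the
# sense in which promotion is easy and reasons are cheap:
print(promotes(p_outcome_given_act=1e-7, p_outcome_given_alt=0.0))  # True
```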