Suppose several individuals (e.g., experts on a panel) each assign probabilities to some events. How can these individual probability assignments be aggregated into a single collective probability assignment? This article reviews several proposed solutions to this problem. We focus on three salient proposals: linear pooling (the weighted or unweighted linear averaging of probabilities), geometric pooling (the weighted or unweighted geometric averaging of probabilities), and multiplicative pooling (where probabilities are multiplied rather than averaged). We present axiomatic characterisations of each class of pooling functions (most of them classic, but one new) and argue that linear pooling can be justified procedurally, but not epistemically, while the other two pooling methods can be justified epistemically. The choice between them, in turn, depends on whether the individuals' probability assignments are based on shared information or on private information. We conclude by mentioning a number of other pooling methods.
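As an informal illustration of these three families (a sketch with made-up numbers and normalisations, not the authors' axiomatic definitions), the rules can be computed over a finite partition of events as follows:

```python
# Illustrative sketch of three opinion-pooling rules over a finite partition.
# Each expert i reports a probability vector; weights are assumed to sum to 1.
# The renormalisations below are one common convention, not the paper's axioms.

def linear_pool(probs, weights):
    """Weighted arithmetic average of the experts' probabilities."""
    return [sum(w * p[j] for p, w in zip(probs, weights))
            for j in range(len(probs[0]))]

def geometric_pool(probs, weights):
    """Weighted geometric average, renormalised to sum to 1."""
    raw = [1.0] * len(probs[0])
    for p, w in zip(probs, weights):
        raw = [r * (pj ** w) for r, pj in zip(raw, p)]
    total = sum(raw)
    return [r / total for r in raw]

def multiplicative_pool(probs):
    """Unweighted product of the experts' probabilities, renormalised."""
    raw = [1.0] * len(probs[0])
    for p in probs:
        raw = [r * pj for r, pj in zip(raw, p)]
    total = sum(raw)
    return [r / total for r in raw]

experts = [[0.7, 0.3], [0.5, 0.5]]   # two experts, two events
weights = [0.5, 0.5]
print(linear_pool(experts, weights))         # ≈ [0.6, 0.4]
print(geometric_pool(experts, weights))
print(multiplicative_pool(experts))          # ≈ [0.7, 0.3]
```

Note how the multiplicative pool treats the second expert's uniform opinion as carrying no independent information, leaving the first expert's opinion unchanged, which fits the private-information reading mentioned in the abstract.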
How can different individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events -- the agenda -- is a sigma-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. One might be interested in the probability of rain and that of an interest-rate increase, but not in the probability of rain or an interest-rate increase. We characterize linear pooling and neutral pooling for general agendas, with classic results as special cases for agendas that are sigma-algebras. As an illustrative application, we also consider probabilistic preference aggregation. Finally, we compare our results with existing results on binary judgment aggregation and Arrovian preference aggregation. This paper is the first of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
How can different individuals' probability functions on a given sigma-algebra of events be aggregated into a collective probability function? Classic approaches to this problem often require 'event-wise independence': the collective probability for each event should depend only on the individuals' probabilities for that event. In practice, however, some events may be 'basic' and others 'derivative', so that it makes sense first to aggregate the probabilities for the former and then to let these constrain the probabilities for the latter. We formalize this idea by introducing a 'premise-based' approach to probabilistic opinion pooling, and show that, under a variety of assumptions, it leads to linear or neutral opinion pooling on the 'premises'. This paper is the second of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
There are many reasons we might want to take the opinions of various individuals and aggregate them to give the opinions of the group they constitute. If all the individuals in the group have probabilistic opinions about the same propositions, there is a host of aggregation functions we might deploy, such as linear or geometric pooling. However, there are also cases where different members of the group assign probabilities to different sets of propositions, which might overlap a lot, a little, or not at all. There are far fewer proposals for how to proceed in these cases, and those there are have undesirable features. I begin by considering four proposals and arguing that they don't work; then I'll describe my own proposal. In fact, my proposal breaks down into two proposals, each suited to a different purpose we might have when we aggregate. One purpose is descriptive, the other normative. In descriptive cases, we aggregate the individuals' credences in order to provide a compressed summary description of their opinions; in normative cases, we aggregate the credences to provide an account of the group's opinion, where the group is in some sense treated as an entity in its own right.
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
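Jeffrey conditioning, for readers unfamiliar with it, generalises ordinary conditioning to uncertain evidence: the new probability of A is a mixture of the conditional probabilities of A given the cells of a partition, weighted by the new probabilities of those cells. A minimal sketch with made-up numbers (not the paper's formal framework, which concerns sets of measures):

```python
# Jeffrey conditioning on the two-cell partition {E, not-E}:
#   new P(A) = P(A|E) * q + P(A|not-E) * (1 - q),
# where q is the new probability of E after an uncertain learning experience.
# The joint prior below is made up for illustration.

def jeffrey(p_joint, q_E):
    """p_joint maps (A holds?, E holds?) pairs to probabilities; q_E is new P(E)."""
    p_E = p_joint[(True, True)] + p_joint[(False, True)]
    p_A_given_E = p_joint[(True, True)] / p_E
    p_A_given_notE = p_joint[(True, False)] / (1 - p_E)
    return p_A_given_E * q_E + p_A_given_notE * (1 - q_E)

prior = {(True, True): 0.4, (True, False): 0.1,
         (False, True): 0.2, (False, False): 0.3}

print(jeffrey(prior, 0.9))   # shifts P(A) toward P(A|E) = 2/3
print(jeffrey(prior, 1.0))   # q = 1 recovers ordinary conditioning on E
```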
How can the propositional attitudes of several individuals be aggregated into overall collective propositional attitudes? Although there are large bodies of work on the aggregation of various special kinds of propositional attitudes, such as preferences, judgments, probabilities and utilities, the aggregation of propositional attitudes is seldom studied in full generality. In this paper, we seek to contribute to filling this gap in the literature. We sketch the ingredients of a general theory of propositional attitude aggregation and prove two new theorems. Our first theorem simultaneously characterizes some prominent aggregation rules in the cases of probability, judgment and preference aggregation, including linear opinion pooling and Arrovian dictatorships. Our second theorem abstracts even further from the specific kinds of attitudes in question and describes the properties of a large class of aggregation rules applicable to a variety of belief-like attitudes. Our approach integrates some previously disconnected areas of investigation.
A group is often construed as one agent with its own probabilistic beliefs (credences), which are obtained by aggregating those of the individuals, for instance through averaging. In their celebrated “Groupthink”, Russell et al. (2015) require group credences to undergo Bayesian revision whenever new information is learnt, i.e., whenever individual credences undergo Bayesian revision based on this information. To obtain a fully Bayesian group, one should often extend this requirement to non-public or even private information (learnt by not all or just one individual), or to non-representable information (not representable by any event in the domain where credences are held). I propose a taxonomy of six types of ‘group Bayesianism’. They differ in the information for which Bayesian revision of group credences is required: public representable information, private representable information, public non-representable information, etc. Six corresponding theorems establish how individual credences must (not) be aggregated to ensure group Bayesianism of any type, respectively. Aggregating through standard averaging is never permitted; instead, different forms of geometric averaging must be used. One theorem—that for public representable information—is essentially Russell et al.’s central result (with minor corrections). Another theorem—that for public non-representable information—fills a gap in the theory of externally Bayesian opinion pooling.
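The contrast between standard averaging and geometric averaging can be checked numerically (a sketch with made-up credences, not the paper's formal framework): geometric pooling commutes with Bayesian conditioning on a shared likelihood, a property known as external Bayesianity, whereas linear pooling generally does not.

```python
# Numerical check that unweighted geometric pooling commutes with Bayesian
# conditioning on a shared likelihood, while linear pooling does not.
# All credences and likelihoods below are invented for the demonstration.

def normalise(v):
    s = sum(v)
    return [x / s for x in v]

def linear(p, q):
    return [(a + b) / 2 for a, b in zip(p, q)]

def geometric(p, q):
    return normalise([(a * b) ** 0.5 for a, b in zip(p, q)])

def update(p, like):
    """Bayesian conditioning: multiply by the likelihood and renormalise."""
    return normalise([a * l for a, l in zip(p, like)])

p, q = [0.8, 0.2], [0.3, 0.7]   # two individuals' credences
like = [0.9, 0.1]               # shared likelihood of the new evidence

g1 = update(geometric(p, q), like)                 # pool, then update
g2 = geometric(update(p, like), update(q, like))   # update, then pool
l1 = update(linear(p, q), like)
l2 = linear(update(p, like), update(q, like))

print(g1, g2)   # identical up to rounding
print(l1, l2)   # visibly different
```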
Decision-making typically requires judgments about causal relations: we need to know the causal effects of our actions and the causal relevance of various environmental factors. We investigate how several individuals' causal judgments can be aggregated into collective causal judgments. First, we consider the aggregation of causal judgments via the aggregation of probabilistic judgments, and identify the limitations of this approach. We then explore the possibility of aggregating causal judgments independently of probabilistic ones. Formally, we introduce the problem of causal-network aggregation. Finally, we revisit the aggregation of probabilistic judgments when this is constrained by prior aggregation of qualitative causal judgments.
This symposium in the overlap of philosophy and decision theory is described well by its title “Beliefs in Groups”. Each word in the title matters, with one intended ambiguity. The symposium is about beliefs rather than other attitudes such as preferences; these beliefs take the form of probabilities in the first three contributions, binary yes/no beliefs (‘judgments’) in the fourth contribution, and qualitative probabilities (‘probability grades’) in the fifth contribution. The beliefs occur in groups, which is ambiguous between beliefs of groups as a whole and beliefs of group members. The five contributions – all of them interesting, we believe – address several aspects of this general theme.
Supra-Bayesianism is the Bayesian response to learning the opinions of others. Probability pooling constitutes an alternative response. One natural question is whether there are cases where probability pooling gives the supra-Bayesian result. This has been called the problem of Bayes-compatibility for pooling functions. It is known that in a common prior setting, under standard assumptions, linear pooling cannot be non-trivially Bayes-compatible. We show by contrast that geometric pooling can be non-trivially Bayes-compatible. Indeed, we show that, under certain assumptions, geometric and Bayes-compatible pooling are equivalent. Granting supra-Bayesianism its usual normative status, one upshot of our study is thus that, in a certain class of epistemic contexts, geometric pooling enjoys a normative advantage over linear pooling as a social learning mechanism. We discuss the philosophical ramifications of this advantage, which we show to be robust to variations in our statement of the Bayes-compatibility problem.
Democratic decision-making is often defended on grounds of the ‘wisdom of crowds’: decisions are more likely to be correct if they are based on many independent opinions, or so a typical argument in social epistemology goes. But what does it mean to have independent opinions? Opinions can be probabilistically dependent even if individuals form their opinion in causal isolation from each other. We distinguish four probabilistic notions of opinion independence. Which of them holds depends on how individuals are causally affected by environmental factors such as commonly perceived evidence. In a general theorem, we identify causal conditions guaranteeing each kind of opinion independence. These results have implications for whether and how ‘wisdom of crowds’ arguments are possible, and how truth-conducive institutions can be designed.
It is often suggested that when opinions differ among individuals in a group, the opinions should be aggregated to form a compromise. This paper compares two approaches to aggregating opinions, linear pooling and what I call opinion agglomeration. In evaluating both strategies, I propose a pragmatic criterion, No Regrets, entailing that an aggregation strategy should prevent groups from buying and selling bets on events at prices regretted by their members. I show that only opinion agglomeration is able to satisfy the demand. I then proceed to give normative and empirical arguments in support of the pragmatic criterion for opinion aggregation, and that ultimately favor opinion agglomeration.
In times of populist mistrust towards experts, the aim of this paper is to ascertain the rationality of arguments from expert opinion, to reconstruct their rational foundations, and to determine their limits. The foundational approach chosen is probabilistic. However, there are at least three correct probabilistic reconstructions of such arguments: statistical inferences, Bayesian updating, and interpretive arguments. To solve this competition problem, the paper proposes a recourse to the arguments' justification strengths achievable in the respective situation.
There is a well-known equivalence between avoiding accuracy dominance and having probabilistically coherent credences (see, e.g., de Finetti 1974, Joyce 2009, Predd et al. 2009, Pettigrew 2016). However, this equivalence has been established only when the set of propositions on which credence functions are defined is finite. In this paper, I establish connections between accuracy dominance and coherence when credence functions are defined on an infinite set of propositions. In particular, I establish the necessary results to extend the classic accuracy argument for probabilism to certain classes of infinite sets of propositions including countably infinite partitions.
Four main forms of Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and Universal DA. All four forms use different probabilistic logic to predict that the end of the human civilization will happen unexpectedly soon based on our early location in human history. There are hundreds of publications about the validity of the Doomsday argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA proofs and rebuttals. In this article, a meta-DA is introduced, which uses the idea of logical uncertainty over the DA’s validity estimated based on a virtual prediction market of the opinions of different scientists. The result is around 0.4 for the validity of some form of DA, and even smaller for “Strong DA”, which predicts the end of the world in the near term. We discuss many examples of the validity of the DA in real life as an instrument to prove it “experimentally”. We also show that DA becomes strongest if it is based on the idea of the “natural reference class” of observers, that is, the observers who know about the DA (i.e. a Self-Referenced DA). Such a DA predicts that there is a high probability of a global catastrophe with human extinction in the 21st century, which aligns with what we already know based on analysis of different technological risks.
to appear in Szabó Gendler, T. & J. Hawthorne (eds.) Oxford Studies in Epistemology volume 6 We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgment aggregation principle, Linear Pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this paper, I give an argument for Linear Pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
We present an abstract social aggregation theorem. Society, and each individual, has a preorder that may be interpreted as expressing values or beliefs. The preorders are allowed to violate both completeness and continuity, and the population is allowed to be infinite. The preorders are only assumed to be represented by functions with values in partially ordered vector spaces, and whose product has convex range. This includes all preorders that satisfy strong independence. Any Pareto indifferent social preorder is then shown to be represented by a linear transformation of the representations of the individual preorders. Further Pareto conditions on the social preorder correspond to positivity conditions on the transformation. When all the Pareto conditions hold and the population is finite, the social preorder is represented by a sum of individual preorder representations. We provide two applications. The first yields an extremely general version of Harsanyi's social aggregation theorem. The second generalizes a classic result about linear opinion pooling.
If a group is modelled as a single Bayesian agent, what should its beliefs be? I propose an axiomatic model that connects group beliefs to beliefs of group members, who are themselves modelled as Bayesian agents, possibly with different priors and different information. Group beliefs are proven to take a simple multiplicative form if people’s information is independent, and a more complex form if information overlaps arbitrarily. This shows that group beliefs can incorporate all information spread over the individuals without the individuals having to communicate their (possibly complex and hard-to-describe) private information; communicating prior and posterior beliefs suffices. JEL classification: D70, D71.
Can a group be an orthodox rational agent? This requires the group's aggregate preferences to follow expected utility (static rationality) and to evolve by Bayesian updating (dynamic rationality). Group rationality is possible, but the only preference aggregation rules which achieve it (and are minimally Paretian and continuous) are the linear-geometric rules, which combine individual values linearly and combine individual beliefs geometrically. Linear-geometric preference aggregation contrasts with classic linear-linear preference aggregation, which combines both values and beliefs linearly, but achieves only static rationality. Our characterisation of linear-geometric preference aggregation has two corollaries: a characterisation of linear aggregation of values (Harsanyi's Theorem) and a characterisation of geometric aggregation of beliefs.
Kant scholars have rarely addressed the centuries-old tradition of casuistry and the concept of conscience in Kant’s writings. This book offers a detailed exploration of the period from Pascal’s Provincial Letters to Kant’s critique of probabilism and discusses his proposal of a (new) casuistry as part of a moral education. The debate over casuistry and probabilism is among the most important topics in early modern moral theology and moral philosophy. The enormous spread of the literature on cases of conscience (casus conscientiae) and the broad resonance of the doctrine of probable opinions (opiniones probabiles) in the first half of the seventeenth century reflect a highly influential model for the analysis of moral action, one in which the universality of the commandment and the contingency of the act, the necessity of the norm and individual freedom, are reconciled with one another. Yet this model provoked fierce criticism, culminating in Pascal’s famous attack in the Provincial Letters (1656–1657). Thus began a long crisis for casuistry and probabilism, one that heralds the emergence of modern moral philosophy and the triumph of the Kantian paradigm. The crisis of casuistry and probabilism coincides with the onset of a rigorist reaction in Christian moral theology, of both Catholic and Protestant stamp, which rejects as unacceptable the moral and speculative consequences of any recourse to cases of conscience and probable opinions. The rejection goes further, however, penetrating to the core of philosophical reflection: in Kant’s moral philosophy, the rigorist reaction against casuistry and probabilism is ignited by his comprehensive critical revision of the basic concepts of moral philosophy, and in particular by his redefinition of the relation between conscience and practical judgment.
Against the principle of probabilism, Kant sets the criterion of moral certainty (“quod dubitas, ne feceris!”), which he designates the “postulate of conscience” (RGV 6:185.23–186.9). And casuistry is dismissed as a “critique of actions according to sophistry”, an “exercise in using sophistry to deceive or chicane one’s conscience, insofar as one intends to lead it into error” (V-MS/Vigil 27:620.2–5). Nevertheless, Kant assigns casuistical questions a prominent place in the Doctrine of Virtue and, within the ethical doctrine of method (that is, with a view to pedagogical applications), ultimately attributes a positive function to casuistry. Kant’s attack on casuistry and probabilism was presumably mediated philosophically by the influence of Pascal. Kant’s relation to Pascal and, more generally, to the historical background to which Pascal belongs has nevertheless remained astonishingly little explored: despite repeated (tacit or even explicit) references at important points in Kant’s writings, Kant scholarship rarely directs its attention to the centuries-old tradition of casuistry and to the concept of conscience elaborated within it. From this long history, the present volume examines in detail the period “from Pascal to Kant”, that is, the still little-explored stretch from the biting polemic of Pascal’s Provincial Letters to Kant’s own critique of probabilism and his sketch of a (new) casuistry as part of the ethical doctrine of method.
The fundamental constants that are involved in the laws of physics which describe our universe are finely-tuned for life, in the sense that if some of the constants had slightly different values life could not exist. Some people hold that this provides evidence for the existence of God. I will present a probabilistic version of this fine-tuning argument which is stronger than all other versions in the literature. Nevertheless, I will show that one can have reasonable opinions such that the fine-tuning argument doesn’t lead to an increase in one’s probability for the existence of God.
You have higher-order uncertainty iff you are uncertain of what opinions you should have. I defend three claims about it. First, the higher-order evidence debate can be helpfully reframed in terms of higher-order uncertainty. The central question becomes how your first- and higher-order opinions should relate—a precise question that can be embedded within a general, tractable framework. Second, this question is nontrivial. Rational higher-order uncertainty is pervasive, and lies at the foundations of the epistemology of disagreement. Third, the answer is not obvious. The Enkratic Intuition---that your first-order opinions must “line up” with your higher-order opinions---is incorrect; epistemic akrasia can be rational. If all this is right, then it leaves us without answers---but with a clear picture of the question, and a fruitful strategy for pursuing it.
Suppositions can be introduced in either the indicative or subjunctive mood. The introduction of either type of supposition initiates judgments that may be either qualitative, binary judgments about whether a given proposition is acceptable or quantitative, numerical ones about how acceptable it is. As such, accounts of qualitative/quantitative judgment under indicative/subjunctive supposition have been developed in the literature. We explore these four different types of theories by systematically explicating the relationships between canonical representatives of each. Our representative qualitative accounts of indicative and subjunctive supposition are based on the belief change operations provided by AGM revision and KM update respectively; our representative quantitative ones are offered by conditionalization and imaging. This choice is motivated by the familiar approach of understanding supposition as `provisional belief revision' wherein one temporarily treats the supposition as true and forms judgments by making appropriate changes to their other opinions. To compare the numerical judgments recommended by the quantitative theories with the binary ones recommended by the qualitative accounts, we rely on a suitably adapted version of the Lockean thesis. Ultimately, we establish a number of new results that we interpret as vindicating the often-repeated claim that conditionalization is a probabilistic version of revision, while imaging is a probabilistic version of update.
A wide variety of ontologies relevant to the biological and medical domains are available through the OBO Foundry portal, and their number is growing rapidly. Integration of these ontologies, while requiring considerable effort, is extremely desirable. However, heterogeneities in format and style pose serious obstacles to such integration. In particular, inconsistencies in naming conventions can impair the readability and navigability of ontology class hierarchies, and hinder their alignment and integration. While other sources of diversity are tremendously complex and challenging, agreeing a set of common naming conventions is an achievable goal, particularly if those conventions are based on lessons drawn from pooled practical experience and surveys of community opinion. We summarize a review of existing naming conventions and highlight certain disadvantages with respect to general applicability in the biological domain. We also present the results of a survey carried out to establish which naming conventions are currently employed by OBO Foundry ontologies and to determine what their special requirements regarding the naming of entities might be. Lastly, we propose an initial set of typographic, syntactic and semantic conventions for labelling classes in OBO Foundry ontologies. Adherence to common naming conventions is more than just a matter of aesthetics. Such conventions provide guidance to ontology creators, help developers avoid flaws and inaccuracies when editing, and especially when interlinking, ontologies. Common naming conventions will also assist consumers of ontologies to more readily understand what meanings were intended by the authors of ontologies used in annotating bodies of data.
Many have claimed that whenever an investigation might provide evidence for a claim, it might also provide evidence against it. Similarly, many have claimed that your credence should never be on the edge of the range of credences that you think might be rational. Surprisingly, both of these principles imply that you cannot rationally be modest: you cannot be uncertain what the rational opinions are.
Policy makers and lawyers may need legal research for many reasons. Members of Congress may plan new laws to address the challenges of their constituents or to serve the national interest. Lawyers may need to serve clients who want to know the legal issues involved, the strategies for dealing with loss and recovery, and the prospects of winning the case if a dispute has worsened. A lawyer may practice solo or work as an associate in a law firm. A senior lawyer or partner may draw on the junior workforce to handle the problems or grievances of potential clients; needing to focus his attention on other matters, such as expanding his firm or handling more lucrative cases that require experienced hands, he may tap a junior lawyer for the legal research that will underpin his final legal opinion. The memorandum, the opinion letter, and the brief are typical forms of professional communication for lawyers and legal researchers. Members of Congress can likewise be supported by staff with legal expertise and a grasp of the issues at stake, enabling them to carry out their responsibilities more effectively. For lawyers and legal researchers, the structure of the state and federal legal systems is an important variable that orients their work and frames the most efficient and adequate scope of search and analysis. A paradigm change has also had a revolutionary impact on the general public. Decades ago, research and researchers belonged only to the professional classes, such as professors, lawyers, career officers, and the cadres of enterprises. They enjoyed insulation and could exploit their research work as an entrance barrier against lay people. The large shelves in a home study or an office corner conveyed the impression that their owner was learned and knowledgeable.
This implicitly communicated his or her social or professional prestige. Books and articles symbolized a kind of monopoly that excluded non-professional workers. The change is remarkable: every citizen can now benefit from online sources of information, and this is generally true of professional knowledge as well. Professors now fear plagiarism, with students often cast as the suspects. Legal research has come to rely on digitization, which has revolutionized the field and produced a new mode of research. Given a query, citizens can readily verify its truth or falsity with one click on a personal computer. An enormous amount of information now flows through the internet and online reference sources, shaping an informative and knowledgeable society. As medical doctors warned decades ago, the most effective precept may be: for the sake of your health, forbear from intelligence and knowledge work. Many people now spend long hours a day satisfying their curiosity through intelligent search; one might well deserve an academic degree in recognition of the hard work done at one's PC. Nevertheless, the information age does not always bring positive progress; it raises issues such as the right of privacy and online fraud. In some cases, this transformation may foster an inferior attitude in the researcher. My experience today is an exotic case in point. I received an e-mail purportedly from the Google CEO stating that I had been selected as one of twelve winners from the pool of Google users. The award money was enormous, which gave me pause. Doubts about its authenticity and reliability struck me, so I used a verification service website run by a team of lawyers. It cost five dollars, with the remaining twenty-four dollars to be processed as the interaction progressed. I am waiting for an answer from the team.
Research today, then, is not limited to the basic context of our subsistence but reaches deep into professional life.
The article is devoted to the analysis of Galen's evidential scientific method, which establishes the necessity of correctly diagnosing diseases and determining their true symptoms and causes, so that the exact method of treatment can be chosen. The article focuses on how Galen seeks to achieve reliable knowledge based on an undeniable logical necessity. Logical reliability is contrasted with “dialectical”, that is, probabilistic, judgments, which often lead to the opposite of what was originally asserted in them. Probabilistic judgments were characteristic of the Stoic philosophical school, with which Galen hotly argues, asserting his own understanding of truth. Truth, in his opinion, is achieved through facts based on true premises. The criterion of truth for Galen is the study of the structure of a living organism, and the logical conclusions depend on the accuracy of knowledge of human anatomy. Thus, the nature of Galen's demonstrative judgments depends on their practical content, and not on the formal and logical structure offered by the Stoics when studying the functions of the human body, which, in his view, ultimately leads to incorrect methods of treatment.
In this paper I expound an argument which seems to establish that probabilism and special relativity are incompatible. I examine the argument critically, and consider its implications for interpretative problems of quantum theory, and for theoretical physics as a whole.
The epistemic modal auxiliaries 'must' and 'might' are vehicles for expressing the force with which a proposition follows from some body of evidence or information. Standard approaches model these operators using quantificational modal logic, but probabilistic approaches are becoming increasingly influential. According to a traditional view, 'must' is a maximally strong epistemic operator and 'might' is a bare possibility one. A competing account---popular amongst proponents of a probabilistic turn---says that, given a body of evidence, 'must p' entails that Pr(p) is high but non-maximal and 'might p' that Pr(p) is significantly greater than 0. Drawing on several observations concerning the behavior of 'must', 'might' and similar epistemic operators in evidential contexts, deductive inferences, downplaying and retraction scenarios, and expressions of epistemic tension, I argue that those two influential accounts have systematic descriptive shortcomings. To better make sense of their complex behavior, I propose instead a broadly Kratzerian account according to which 'must p' entails that Pr(p) = 1 and 'might p' that Pr(p) > 0, given a body of evidence and a set of normality assumptions about the world. From this perspective, 'must' and 'might' are vehicles for expressing a common mode of reasoning whereby we draw inferences from specific bits of evidence against a rich set of background assumptions---some of which we represent as defeasible---which capture our general expectations about the world. I will show that the predictions of this Kratzerian account can be substantially refined once it is combined with a specific yet independently motivated 'grammatical' approach to the computation of scalar implicatures. Finally, I discuss some implications of these results for more general discussions concerning the empirical and theoretical motivation to adopt a probabilistic semantic framework.
We investigate a basic probabilistic dynamic semantics for a fragment containing conditionals, probability operators, modals, and attitude verbs, with the aim of shedding light on the prospects for adding probabilistic structure to models of the conversational common ground.
The aim of the paper is to develop general criteria of argumentative validity and adequacy for probabilistic arguments on the basis of the epistemological approach to argumentation. In this approach, as in most other approaches to argumentation, probabilistic arguments have been somewhat neglected. Nonetheless, criteria for several special types of probabilistic arguments have been developed, in particular by Richard Feldman and Christoph Lumer. In the first part (sects. 2-5) the epistemological basis of probabilistic arguments is discussed. With regard to the philosophical interpretation of probabilities a new subjectivist, epistemic interpretation is proposed, which identifies probabilities with tendencies of evidence (sect. 2). After drawing the conclusions of this interpretation with respect to the syntactic features of the probability concept, e.g. one variable referring to the data base (sect. 3), the justification of basic probabilities (priors) by judgements of relative frequency (sect. 4) and the justification of derivative probabilities by means of the probability calculus are explained (sect. 5). The core of the paper is the definition of '(argumentatively) valid derivative probabilistic arguments', which provides exact conditions for epistemically good probabilistic arguments, together with conditions for the adequate use of such arguments for the aim of rationally convincing an addressee (sect. 6). Finally, some measures for improving the applicability of probabilistic reasoning are proposed (sect. 7).
Are special relativity and probabilism compatible? Dieks argues that they are. But the possible universe he specifies, designed to exemplify both probabilism and special relativity, either incorporates a universal "now" (and is thus incompatible with special relativity), or amounts to a many-worlds universe (which I have discussed, and rejected as too ad hoc to be taken seriously), or fails to have any one definite overall Minkowskian-type space-time structure (and thus differs drastically from special relativity as ordinarily understood). Probabilism and special relativity appear to be incompatible after all. What is at issue is not whether "the flow of time" can be reconciled with special relativity, but rather whether explicitly probabilistic versions of quantum theory should be rejected because of incompatibility with special relativity.
The explanatory role of natural selection is one of the long-standing debates in evolutionary biology. Nevertheless, consensus has been elusive because of conceptual confusions and the absence of a unified, formal causal model that integrates the different explanatory scopes of natural selection. In this study we attempt to examine two questions: (i) What can the theory of natural selection explain? and (ii) Is there a causal or explanatory model that integrates all natural selection explananda? For the first question, we argue that five explananda have been assigned to the theory of natural selection and that four of them may actually be considered explananda of natural selection. For the second question, we claim that a probabilistic conception of causality and the statistical-relevance concept of explanation are both good models for understanding the explanatory role of natural selection. We review the biological and philosophical disputes about the explanatory role of natural selection and formalize some explananda in probabilistic terms using classical results from population genetics. Most of these explananda have been discussed in philosophical terms, but some of them have been mixed up and confused. We analyze and set the limits of these problems.
There can be situations in which, if I contribute to a pool of resources for helping a large number of people, the difference that my contribution makes to any of the people helped from the pool will be imperceptible at best, and maybe even non-existent. And this can be the case where it is also true that giving the same amount directly to one of the intended beneficiaries of the pool would have made a very large difference to her. Can non-contribution to the pool be morally justified on this ground? I argue that it cannot. For, first, this line of thought leaves unaffected any reasons for holding that failing to perform the direct action of benefiting someone greatly would be wrong. But the pooling system of helping people is often better than a system separating the help which is given, better because of the perceptible difference it makes to its beneficiaries. If so, failing to contribute to the pool will be at least as wrong as failing to have helped directly would have been. The paper clarifies and defends an argument of this form, showing how it can be formulated in a way that avoids apparent counterexamples, and identifying the assumptions on which it rests.
This paper offers a probabilistic treatment of the conditions for argument cogency as endorsed in informal logic: acceptability, relevance, and sufficiency. Treating a natural language argument as a reason-claim-complex, our analysis identifies content features of defeasible argument on which the RSA conditions depend, namely: change in the commitment to the reason, the reason’s sensitivity and selectivity to the claim, one’s prior commitment to the claim, and the contextually determined thresholds of acceptability for reasons and for claims. Results contrast with, and may indeed serve to correct, the informal understanding and applications of the RSA criteria concerning their conceptual dependence, their function as update-thresholds, and their status as obligatory rather than permissive norms, but also show how these formal and informal normative approaches can in fact align.
This paper examines a promising probabilistic theory of singular causation developed by David Lewis. I argue that Lewis' theory must be made more sophisticated to deal with certain counterexamples involving pre-emption. These counterexamples appear to show that in the usual case singular causation requires an unbroken causal process to link cause with effect. I propose a new probabilistic account of singular causation, within the framework developed by Lewis, which captures this intuition.
Nick Shea’s Representation in Cognitive Science commits him to representations in perceptual processing that are about probabilities. This commentary concerns how to adjudicate between this view and an alternative that locates the probabilities rather in the representational states’ associated “attitudes”. As background and motivation, evidence for probabilistic representations in perceptual processing is adduced, and it is shown how, on either conception, one can address a specific challenge Ned Block has raised to this evidence.
We report a series of experiments examining whether people ascribe knowledge for true beliefs based on probabilistic evidence. Participants were less likely to ascribe knowledge for beliefs based on probabilistic evidence than for beliefs based on perceptual evidence or testimony providing causal information. Denial of knowledge for beliefs based on probabilistic evidence did not arise because participants viewed such beliefs as unjustified, nor because such beliefs leave open the possibility of error. These findings rule out traditional philosophical accounts for why probabilistic evidence does not produce knowledge. The experiments instead suggest that people deny knowledge because they distrust drawing conclusions about an individual based on reasoning about the population to which it belongs, a tendency previously identified by “judgment and decision making” researchers. Consistent with this, participants were more willing to ascribe knowledge for beliefs based on probabilistic evidence that is specific to a particular case.
According to a standard assumption in epistemology, if one only partially believes that p, then one cannot thereby have knowledge that p. For example, if one only partially believes that it is raining outside, one cannot know that it is raining outside; and if one only partially believes that it is likely that it will rain outside, one cannot know that it is likely that it will rain outside. Many epistemologists will agree that epistemic agents are capable of partial beliefs in addition to full beliefs and that partial beliefs can be epistemically assessed along some dimensions. However, it has been generally assumed that such doxastic attitudes cannot possibly amount to knowledge. In Probabilistic Knowledge, Moss challenges this standard assumption and provides a formidable defense of the claim that probabilistic beliefs—a class of doxastic attitudes including credences and degrees of belief—can amount to knowledge too. Call this the probabilistic knowledge claim. Throughout the book, Moss goes to great lengths to show that probabilistic knowledge can be fruitfully applied to a variety of debates in epistemology and beyond. My goal in this essay is to explore a further application for probabilistic knowledge. I want to look at the role of probabilistic knowledge within a “knowledge-centered” psychology—a kind of psychology that assigns knowledge a central stage in explanations of intentional behavior. My suggestion is that Moss’s notion of probabilistic knowledge considerably helps further both a knowledge-centered psychology and a broadly intellectualist picture of action and know-how that naturally goes along with it. At the same time, though, it raises some interesting issues about the notion of explanation afforded by the resulting psychology.
A common view among nontheists combines the de jure objection that theism is epistemically unacceptable with agnosticism about the de facto objection that theism is false. Following Plantinga, we can call this a “proper” de jure objection—a de jure objection that does not depend on any de facto objection. In his Warranted Christian Belief, Plantinga has produced a general argument against all proper de jure objections. Here I first show that this argument is logically fallacious (it makes subtle probabilistic fallacies disguised by scope ambiguities), and proceed to lay the groundwork for the construction of actual proper de jure objections.
We often have some reason to do actions insofar as they promote outcomes or states of affairs, such as the satisfaction of a desire. But what is it to promote an outcome? I defend a new version of 'probabilism about promotion'. According to Minimal Probabilistic Promotion, we promote some outcome when we make that outcome more likely than it would have been if we had done something else. This makes promotion easy and reasons cheap.
The starting point in the development of probabilistic analyses of token causation has usually been the naïve intuition that, in some relevant sense, a cause raises the probability of its effect. But there are well-known examples both of non-probability-raising causation and of probability-raising non-causation. Sophisticated extant probabilistic analyses treat many such cases correctly, but only at the cost of excluding the possibilities of direct non-probability-raising causation, failures of causal transitivity, action-at-a-distance, prevention, and causation by absence and omission. I show that an examination of the structure of these problem cases suggests a different treatment, one which avoids the costs of extant probabilistic analyses.
Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilised in everyday language. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, and so can be considered as a function of the semantics of the constituent concepts, or not. While the systematicity and productivity of language provide a strong argument in favor of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. Additionally, the principle of semantic compositionality is underspecified, which means that notions of both "strong" and "weak" compositionality appear in the literature. Rather than adjudicating between different grades of compositionality, the framework presented here contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. In addition, we suggest that the distinction between these is contextually sensitive. Compositionality is equated with a joint probability distribution modeling how the constituent concepts in the combination are interpreted. Marginal selectivity is introduced as a pivotal probabilistic constraint for the application of the Bell/CH and CHSH systems of inequalities. Non-compositionality is equated with a failure of marginal selectivity, or violation of either system of inequalities in the presence of marginal selectivity. This means that the conceptual combination cannot be modeled in a joint probability distribution, the variables of which correspond to how the constituent concepts are being interpreted. The formal analysis methods are demonstrated by applying them to an empirical illustration of twenty-four non-lexicalised conceptual combinations.
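As a reader's aid, the CHSH-style test this abstract describes can be sketched numerically. All correlation values below are hypothetical (not the paper's data); each concept is treated as "measured" under two interpretation conditions, and the e-values are correlations between ±1-valued interpretation outcomes.

```python
def chsh_s(e_ab, e_ab2, e_a2b, e_a2b2):
    """CHSH combination of the four pairwise correlations."""
    return e_ab + e_ab2 + e_a2b - e_a2b2

def compositional(e_ab, e_ab2, e_a2b, e_a2b2, tol=1e-9):
    """A joint probability model (compositionality, in the paper's sense)
    requires |S| <= 2; a violation signals non-compositionality."""
    return abs(chsh_s(e_ab, e_ab2, e_a2b, e_a2b2)) <= 2 + tol

# Correlations consistent with some joint distribution: |S| = 1.9 <= 2
print(compositional(0.5, 0.5, 0.5, -0.4))   # True

# Correlations violating the CHSH bound: |S| = 3.6 > 2, so no joint
# distribution over the interpretation variables exists
print(compositional(0.9, 0.9, 0.9, -0.9))   # False
```

Note that the paper also requires marginal selectivity to hold before the inequalities are applied; this sketch checks only the inequality itself.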
In previous work, we studied four well known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems.
Coherentism in epistemology has long suffered from a lack of formal and quantitative explication of the notion of coherence. One might hope that probabilistic accounts of coherence such as those proposed by Lewis, Shogenji, Olsson, Fitelson, and Bovens and Hartmann will finally help solve this problem. This paper shows, however, that those accounts have a serious common problem: the problem of belief individuation. The coherence degree that each of the accounts assigns to an information set (or the verdict it gives as to whether the set is coherent tout court) depends on how beliefs (or propositions) that represent the set are individuated. Indeed, logically equivalent belief sets that represent the same information set can be given drastically different degrees of coherence. This feature clashes with our natural and reasonable expectation that the coherence degree of a belief set does not change unless the believer adds essentially new information to the set or drops old information from it; or, to put it simply, that the believer cannot raise or lower the degree of coherence by purely logical reasoning. None of the accounts in question can adequately deal with coherence once logical inferences get into the picture. Toward the end of the paper, another notion of coherence that takes into account not only the contents but also the origins (or sources) of the relevant beliefs is considered. It is argued that this notion of coherence is of dubious significance, and that it does not help solve the problem of belief individuation.
The best and most popular argument for probabilism is the accuracy-dominance argument, which purports to show that alethic considerations alone support the view that an agent's degrees of belief should always obey the axioms of probability. I argue that extant versions of the accuracy-dominance argument face a problem. In order for the mathematics of the argument to function as advertised, we must assume that every omniscient credence function is classically consistent; there can be no worlds in the set of dominance-relevant worlds that obey some other logic. This restriction cannot be motivated on alethic grounds unless we're also willing to accept that rationality requires belief in every metaphysical necessity, as the distinction between a priori logical necessities and a posteriori metaphysical ones is not an alethic distinction. To justify the restriction to classically consistent worlds, non-alethic motivation is required. And thus, if there is a version of the accuracy-dominance argument in support of probabilism, it isn't one that is grounded in alethic considerations alone.
In mathematics, any form of probabilistic proof obtained through the application of a probabilistic method is not considered as a legitimate way of gaining mathematical knowledge. In a series of papers, Don Fallis has defended the thesis that there are no epistemic reasons justifying mathematicians’ rejection of probabilistic proofs. This paper identifies such an epistemic reason. More specifically, it is argued here that if one adopts a conception of mathematical knowledge in which an epistemic subject can know a mathematical proposition based solely on a probabilistic proof, one is then forced to admit that such an epistemic subject can know several lottery propositions based solely on probabilistic evidence. Insofar as knowledge of lottery propositions on the basis of probabilistic evidence alone is denied by the vast majority of epistemologists, it is concluded that this constitutes an epistemic reason for rejecting probabilistic proofs as a means of acquiring mathematical knowledge.
All extant purely probabilistic measures of explanatory power satisfy the following technical condition: if Pr(E | H1) > Pr(E | H2) and Pr(E | ~H1) < Pr(E | ~H2), then H1’s explanatory power with respect to E is greater than H2’s explanatory power with respect to E. We argue that any measure satisfying this condition faces three serious problems – the Problem of Temporal Shallowness, the Problem of Negative Causal Interactions, and the Problem of Non-Explanations. We further argue that many such measures face a fourth problem – the Problem of Explanatory Irrelevance.
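To make the quoted condition concrete: one purely probabilistic measure satisfying it is the likelihood-ratio measure Pr(E | H) / Pr(E | ~H). A minimal sketch with made-up numbers (the function name and values are ours, not the authors'):

```python
def lr_power(p_e_given_h, p_e_given_not_h):
    """Explanatory power of H w.r.t. E as Pr(E|H) / Pr(E|~H) -- one
    measure satisfying the condition quoted in the abstract."""
    return p_e_given_h / p_e_given_not_h

# Hypothetical values instantiating the condition's antecedent:
# Pr(E|H1) = 0.9 > Pr(E|H2) = 0.6, and Pr(E|~H1) = 0.2 < Pr(E|~H2) = 0.5.
p1 = lr_power(0.9, 0.2)   # approx. 4.5
p2 = lr_power(0.6, 0.5)   # approx. 1.2
assert p1 > p2            # so the condition demands: H1 outranks H2 in power
```

Any measure with this property, the authors argue, inherits the problems listed above.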
Jeff Paris proves a generalized Dutch Book theorem. If a belief state is not a generalized probability then one faces ‘sure loss’ books of bets. In Williams I showed that Joyce’s accuracy-domination theorem applies to the same set of generalized probabilities. What is the relationship between these two results? This note shows that both results are easy corollaries of the core result that Paris appeals to in proving his Dutch Book theorem. We see that every point of accuracy-domination defines a Dutch Book, but we only have a partial converse.
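The connection between the two results can be illustrated with a toy example (numbers ours, and the Brier score stands in for Joyce's inaccuracy measure): a credence function violating additivity is both accuracy-dominated by a coherent one and vulnerable to a sure-loss book.

```python
def brier(cred, a_is_true):
    """Brier inaccuracy of credences (in A, in not-A) at a world."""
    truth = (1.0, 0.0) if a_is_true else (0.0, 1.0)
    return sum((x - t) ** 2 for x, t in zip(cred, truth))

b = (0.4, 0.4)   # incoherent: credences in A and not-A sum to 0.8
c = (0.5, 0.5)   # a coherent alternative

# Accuracy-domination: c is strictly less inaccurate in every world.
assert brier(c, True) < brier(b, True)    # 0.5 < 0.52
assert brier(c, False) < brier(b, False)  # 0.5 < 0.52

# Dutch Book: the agent regards 0.4 as a fair price for a $1 bet on A,
# and likewise for not-A, so it will sell both bets for 0.8 total --
# yet must pay out exactly $1 whichever world obtains: a sure loss.
assert b[0] + b[1] < 1.0
```

This is only the finite, classical special case; Paris's and Williams's results concern generalized probabilities over richer logics.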
What sort of entities are electrons, photons and atoms, given their wave-like and particle-like properties? Is nature fundamentally deterministic or probabilistic? Orthodox quantum theory (OQT) evades answering these two basic questions by being a theory about the results of performing measurements on quantum systems. But this evasion results in OQT being a seriously defective theory. A rival, somewhat ignored strategy is to conjecture that the quantum domain is fundamentally probabilistic. This means quantum entities, interacting with one another probabilistically, must differ radically from the entities of deterministic classical physics, the classical wave or particle. It becomes possible to conceive of quantum entities as a new kind of fundamentally probabilistic entity, the “propensiton”, neither wave nor particle. A fully micro-realistic, testable rival to OQT results.