Wittgenstein did not write very much on the topic of probability. The little we have comes from a few short pages of the Tractatus, some 'remarks' from the 1930s, and the informal conversations with the Vienna Circle that went on during that decade. Nevertheless, Wittgenstein's views were highly influential in the later development of the logical theory of probability. This paper attempts to clarify and defend Wittgenstein's conception of probability against some oft-cited criticisms that stem from a misunderstanding of his views. Max Black, for instance, criticises Wittgenstein for formulating a theory of probability that can be used only against the backdrop of the ideal language of the Tractatus. I argue that, on the contrary, by appealing to the 'hypothetical laws of nature', Wittgenstein is able to make sense of probability statements involving propositions that have not been completely analysed. G.H. von Wright criticises Wittgenstein's characterisation of these very hypothetical laws. He argues that by introducing them Wittgenstein makes what is distinctive about his theory superfluous, for the hypothetical laws are directly inspired by statistical observations, and hence these observations indirectly determine the mechanism by which the logical theory of probability operates. I argue that this is not the case at all: while statistical observations play a part in the formation of the hypothetical laws, they are necessary but not sufficient conditions for the introduction of these hypotheses.
The justificatory force of empirical reasoning always depends upon the existence of some synthetic, a priori justification. The reasoner must begin with justified, substantive constraints on both the prior probability of the conclusion and certain conditional probabilities; otherwise, all possible degrees of belief in the conclusion are left open given the premises. Such constraints cannot in general be empirically justified, on pain of infinite regress. Nor does subjective Bayesianism offer a way out for the empiricist. Despite often-cited convergence theorems, subjective Bayesians cannot hold that any empirical hypothesis is ever objectively justified in the relevant sense. Rationalism is thus the only alternative to an implausible skepticism.
Automated reasoning about uncertain knowledge has many applications. One difficulty when developing such systems is the lack of a completely satisfactory integration of logic and probability. We address this problem directly. Expressive languages like higher-order logic are ideally suited for representing and reasoning about structured knowledge. Uncertain knowledge can be modeled by using graded probabilities rather than binary truth-values. The main technical problem studied in this paper is the following: Given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list, among others, is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We translate this wish-list into technical requirements for a prior probability and show that probabilities satisfying all our criteria exist. We also give explicit constructions and several general characterizations of probabilities that satisfy some or all of the criteria and various (counter) examples. We also derive necessary and sufficient conditions for extending beliefs about finitely many sentences to suitable probabilities over all sentences, and in particular least dogmatic or least biased ones. We conclude with a brief outlook on how the developed theory might be used and approximated in autonomous reasoning agents. Our theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic.
Conspiracy theories are often portrayed as unwarranted beliefs, typically supported by suspicious kinds of evidence. Yet contemporary work in philosophy argues that provisional belief in conspiracy theories is at the very least understandable, because conspiracies occur, and that if we take an evidential approach, judging individual conspiracy theories on their particular merits, belief in such theories turns out to be warranted in a range of cases. Drawing on this work, I examine the kinds of evidence typically associated with conspiracy theories, and show how the so-called evidential problems with conspiracy theories are also problems for the kinds of evidence put forward in support of other theories. As such, if there is a problem with the conspiracy theorist's use of evidence, it is one of principle: is the principle which guides the conspiracy theorist's use of evidence somehow in error? I argue that whatever we might think about conspiracy theories generally, there is no prima facie case for a scepticism of conspiracy theories based purely on their use of evidence.
Bayesianism is the position that scientific reasoning is probabilistic and that probabilities are adequately interpreted as an agent's actual subjective degrees of belief, measured by her betting behaviour. Confirmation is one important aspect of scientific reasoning. The thesis of this paper is the following: if scientific reasoning is at all probabilistic, the subjective interpretation has to be given up in order to get confirmation right—and thus scientific reasoning in general. Contents: The Bayesian approach to scientific reasoning; Bayesian confirmation theory; The example; The less reliable the source of information, the higher the degree of Bayesian confirmation; Measure sensitivity; A more general version of the problem of old evidence; Conditioning on the entailment relation; The counterfactual strategy; Generalizing the counterfactual strategy; The desired result, and a necessary and sufficient condition for it; Actual degrees of belief; The common knock-down feature, or 'anything goes'; The problem of prior probabilities.
The applicability of Bayesian conditionalization in setting one's posterior probability for a proposition, α, is limited to cases where the value of a corresponding prior probability, P_PRI(α|∧E), is available, where ∧E represents one's complete body of evidence. In order to extend probability updating to cases where the prior probabilities needed for Bayesian conditionalization are unavailable, I introduce an inference schema, defeasible conditionalization, which allows one to update one's personal probability in a proposition by conditioning on a proposition that represents a proper subset of one's complete body of evidence. While defeasible conditionalization has wider applicability than standard Bayesian conditionalization (since it may be used when the value of a relevant prior probability, P_PRI(α|∧E), is unavailable), there are circumstances under which some instances of defeasible conditionalization are unreasonable. To address this difficulty, I outline the conditions under which instances of defeasible conditionalization are defeated. To conclude the article, I suggest that the prescriptions of direct inference and statistical induction can be encoded within the proposed system of probability updating, by the selection of intuitively reasonable prior probabilities.
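The contrast the abstract turns on can be made concrete with a toy numerical sketch (all numbers are hypothetical, not drawn from the article): conditioning on one's complete body of evidence and conditioning on a proposition representing only a proper subset of it can yield different posteriors.

```python
# Toy illustration (hypothetical numbers): conditioning on a proper subset of
# one's evidence can diverge from conditioning on all of it.
# Worlds are (alpha, e1, e2) triples under a joint distribution.
joint = {
    (True,  True,  True):  0.20,
    (True,  True,  False): 0.10,
    (True,  False, True):  0.05,
    (True,  False, False): 0.05,
    (False, True,  True):  0.05,
    (False, True,  False): 0.25,
    (False, False, True):  0.10,
    (False, False, False): 0.20,
}

def cond_prob(pred_a, pred_b):
    """P(A | B) computed from the joint table."""
    p_b = sum(p for w, p in joint.items() if pred_b(w))
    p_ab = sum(p for w, p in joint.items() if pred_a(w) and pred_b(w))
    return p_ab / p_b

# Conditioning on the complete evidence e1 & e2 ...
full = cond_prob(lambda w: w[0], lambda w: w[1] and w[2])
# ... versus conditioning only on the proper subset e1.
partial = cond_prob(lambda w: w[0], lambda w: w[1])
print(round(full, 2), round(partial, 2))  # prints 0.8 0.5
```

The divergence is why extra defeat conditions are needed: an update licensed by a subset of the evidence may be overturned once the rest of the evidence is taken into account.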
Following Nancy Cartwright and others, I suggest that most (if not all) theories incorporate, or depend on, one or more idealizing assumptions. I then argue that such theories ought to be regimented as counterfactuals, the antecedents of which are simplifying assumptions. If this account of the logical form of theories is granted, then a serious problem arises for Bayesians concerning the prior probabilities of theories that have counterfactual form. If no such probabilities can be assigned, then the posterior probabilities will be undefined, as the latter are defined in terms of the former. I argue here that the most plausible attempts to address the problem of probabilities of conditionals fail to help Bayesians, and, hence, that Bayesians are faced with a new problem. Insofar as these proposed solutions fail, I argue that Bayesians must give up Bayesianism or accept the counterintuitive view that no theories that incorporate any idealizations have ever really been confirmed to any extent whatsoever. Moreover, as the latter horn of this dilemma is highly implausible, we are left with the conclusion that Bayesianism should be rejected, at least as it stands.
Assuming that votes are independent, the epistemically optimal procedure in a binary collective choice problem is known to be a weighted supermajority rule with weights given by personal log-likelihood-ratios. It is shown here that an analogous result holds in a much more general model. Firstly, the result follows from a more basic principle than expected-utility maximisation, namely from an axiom (Epistemic Monotonicity) which requires neither utilities nor prior probabilities of the 'correctness' of alternatives. Secondly, a person's input need not be a vote for an alternative; it may be any type of input, for instance a subjective degree of belief or probability of the correctness of one of the alternatives. The case of a profile of subjective degrees of belief is particularly appealing, since here no parameters such as competence parameters need to be known.
In this article I argue that "Timaeus" 48e-52d, the passage in which Plato introduces the receptacle into his ontology, contains the material for a satisfactory response to the third man argument. Plato's use of "this" and "such" to distinguish the receptacle, becoming, and the forms clarifies the nature of his ontology and indicates that the forms are not, in general, self-predicative. This result removes one argument against regarding the "Timaeus" as a late dialogue.
In this paper I argue against the view of G.E.L. Owen that the second version of the Third Man Argument is a sound objection to Plato's conception of Forms as paradigms and that Plato knew it. The argument can be formulated so as to be valid, but Plato need not be committed to one of its premises. Forms are self-predicative, but the ground of self-predication is not the same as that of ordinary predication.
The paper starts by describing and clarifying what Williamson calls the consequence fallacy. I show two ways in which one might commit the fallacy. The first, which is rather trivial, involves overlooking background information; the second, which is the more philosophically interesting, involves overlooking prior probabilities. In the following section, I describe a powerful form of sceptical argument, which is the main topic of the paper, elaborating on previous work by Huemer. The argument attempts to show the impossibility of defeasible justification, justification based on evidence which does not entail the (allegedly) justified proposition or belief. I then discuss the relation between the consequence fallacy, or some similar enough reasoning, and that form of argument. I argue that one can resist that form of sceptical argument if one gives up the idea that a belief cannot be justified unless it is supported by the totality of the evidence available to the subject—a principle entailed by many prominent epistemological views, most clearly by epistemological evidentialism. The justification, in the relevant cases, should instead derive solely from the prior probability of the proposition. A justification of this sort, which does not rely on evidence, would amount to a form of entitlement, in (something like) Crispin Wright's sense. I conclude with some discussion of how to understand prior probabilities, and how to develop the notion of entitlement in an externalist epistemological framework.
To move beyond vague platitudes about the importance of context in legal reasoning or natural language understanding, one must take account of ideas from artificial intelligence on how to represent context formally. Work on topics like prior probabilities, the theory-ladenness of observation, encyclopedic knowledge for disambiguation in language translation, and pathology test diagnosis has produced a body of knowledge on how to represent context in artificial intelligence applications.
The principle of universal instantiation plays a pivotal role both in the derivation of intensional paradoxes such as Prior's paradox and Kaplan's paradox and in the debate between necessitism and contingentism. We outline a distinctively free logical approach to the intensional paradoxes and note how the free logical outlook allows one to distinguish two different, though allied, themes in higher-order necessitism. We examine the costs of this solution and compare it with the more familiar ramificationist approaches to higher-order logic. Our assessment of both approaches is largely pessimistic, and we remain reluctantly inclined to take Prior's and Kaplan's derivations at face value.
In this paper, I answer the following question: suppose that two individuals, C and D, have been in a long-term committed relationship, and D now has dementia, while C is competent; if D agrees to have sex with C, is it permissible for C to have sex with D? Ultimately, I defend the view that, under certain conditions, D can give valid consent to sex with C, rendering sex between them permissible. Specifically, I argue there is compelling reason to endorse the following thesis. Prior Consent Thesis: D, when competent, can give valid prior consent to sex with her competent partner (C) that will take place after she has dementia, assuming that D is the same person as she was when she gave prior consent, meaning that, if D, when competent, gave prior consent to sex with C, then C may permissibly have sex with D. In section I, I explain both the background and the existing literature on this issue. In section II, I outline relevant stipulations about the kinds of cases I will be examining. In section III, I defend the Prior Consent Thesis. And, in section IV, I address objections to the Prior Consent Thesis.
We discuss the role of prior authorization (PA) in supporting patient-centered care (PCC) by directing health system resources and thus the ability to better meet the needs of individual patients. We begin with an account of PCC as a standard that should be aimed for in patient care. In order to achieve widespread PCC, appropriate resource management is essential in a healthcare system. This brings us to PA, and we present an idealized view of PA in order to argue how, at its best, it can contribute to the provision of PCC. As a means of cost saving, PA has had mixed success. The example of the US demonstrates how implementation of PA has increased health inequalities, whereas best practice has the potential to reduce them. In contrast, systems of universal coverage, like those in Europe, may use the cost savings of PA to better address individuals' care and PCC. The conclusion we offer is therefore an optimistic one, pointing towards areas of supportive overlap between PCC and PA where usually the incongruities are most evident.
Bayesian epistemology tells us with great precision how we should move from prior to posterior beliefs in light of new evidence or information, but says little about where our prior beliefs come from. It offers few resources to describe some prior beliefs as rational or well-justified, and others as irrational or unreasonable. A different strand of epistemology takes the central epistemological question to be not how to change one's beliefs in light of new evidence, but what reasons justify a given set of beliefs in the first place. We offer an account of rational belief formation that closes some of the gap between Bayesianism and its reason-based alternative, formalizing the idea that an agent can have reasons for his or her (prior) beliefs, in addition to evidence or information in the ordinary Bayesian sense. Our analysis of reasons for belief is part of a larger programme of research on the role of reasons in rational agency (Dietrich and List, Nous, 2012a, in press; Int J Game Theory, 2012b, in press).
A fundamental problem in science is how to make logical inferences from scientific data. Mere data does not suffice, since additional information is necessary to select a domain of models or hypotheses and thus determine the likelihood of each model or hypothesis. Thomas Bayes' Theorem relates the data and prior information to posterior probabilities associated with differing models or hypotheses, and thus is useful in identifying the roles played by the known data and the assumed prior information when making inferences. Scientists, philosophers, and theologians accumulate knowledge when analyzing different aspects of reality and search for particular hypotheses or models to fit their respective subject matters. Of course, a main goal is then to integrate all kinds of knowledge into an all-encompassing worldview that would describe the whole of reality. A generous description of the whole of reality would span, in order of complexity, from the purely physical to the supernatural. These two extreme aspects of reality are bridged by a nonphysical realm, which would include elements of life, man, consciousness, rationality, mental and mathematical abstractions, etc. An urgent problem in the theory of knowledge is what science is and what it is not. Albert Einstein's notion of science in terms of sense perception is refined by defining operationally the data that makes up the subject matter of science. It is shown, for instance, that the theological considerations included in the prior information assumed by Isaac Newton are irrelevant in relating the data logically to the model or hypothesis. In addition, the concepts of naturalism, intelligent design, and evolutionary theory are critically analyzed. Finally, Eugene P. Wigner's suggestions concerning the nature of human consciousness, life, and the success of mathematics in the natural sciences are considered in the context of the creative power endowed in humans by God.
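The relation the abstract appeals to is the standard statement of Bayes' theorem, here written with H a hypothesis, D the data, and I the assumed prior information:

```latex
P(H \mid D, I) = \frac{P(D \mid H, I)\, P(H \mid I)}{P(D \mid I)}
```

The posterior P(H | D, I) thus depends on the prior P(H | I), which is where the abstract locates the role of assumed background information, including, in its Newton example, theological assumptions.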
Prior’s arguments for and against seeing ‘ought’ as a copula and his considerations about normative negation are applied to the case of responsibility judgments. My thesis will be that responsibility judgments, even though often expressed by using the verb ‘to be’, are in fact normative judgments. This is shown by analyzing their negation, which parallels the behavior of ought negation.
In English, in order to speak about Arthur's attitudes, we use sentences like "Arthur believes that natural language is messy". For sentences of this kind we have a standard theory, according to which the 'that'-clause 'that natural language is messy' denotes a proposition. As Prior showed for the first time, the standard theory appears to be at odds with some linguistic data. Geach and Prior both assumed that linguistic data are to be taken as reliable guides to a correct semantic account, and I will start by raising some worries concerning their methodology. Because of these data, Prior and Geach suggested some non-standard accounts. I will then show that if we take linguistic data seriously, their non-standard accounts do not fare any better than the standard theory. My general conclusion will thus not only be that Prior's and Geach's methodology is disputable, but also that their conclusions do not seem to follow even if we grant the reliability of their methodology.
I present a solution to the epistemological or characterisation problem of induction. In part I, Bayesian Confirmation Theory (BCT) is discussed as a good contender for such a solution, but one with a fundamental explanatory gap (along with other well-discussed problems): useful assigned probabilities like priors require substantive degrees of belief about the world. I assert that one does not have such substantive information about the world. Consequently, an explanation is needed for how one can be licensed to act as if one has substantive information about the world when one does not. I sketch the outlines of a solution in part I, showing how it differs from others, with full details to follow in subsequent parts. The solution is pragmatic in sentiment (though it differs in its specifics from arguments by, for example, William James): the conceptions we use to guide our actions are, and should be, at least partly determined by preferences. This is cashed out in a reformulation of decision theory motivated by a non-reductive formulation of hypotheses and logic. A distinction emerges between initial assumptions, which can be non-dogmatic, and effective assumptions, which can simultaneously be substantive. An explanation is provided for the plausibility arguments used to explain assigned probabilities in BCT. In subsequent parts, logic is constructed from principles independent of language and mind. In particular, propositions are defined not to have form. Probabilities are logical and uniquely determined by assumptions. The problems considered fatal to logical probabilities, Goodman's 'grue' problem and the uniqueness-of-priors problem, are dissolved by the particular formulation of logic used. Other problems, such as the zero-prior problem, are also solved. A universal theory of (non-linguistic) meaning is developed. Problems with counterfactual conditionals are solved by developing concepts of abstractions and corresponding pictures that make up hypotheses. Spaces of hypotheses, and the version of Bayes' theorem that utilises them, emerge from first principles. Theoretical virtues for hypotheses emerge from the theory. Explanatory force is explicated. The significance of effective assumptions is partly determined by combinatoric factors relating to the structure of hypotheses. I conjecture that this is the origin of simplicity.
A key and continuing concern within the pragma-dialectical theory of argumentation is how to account for effective persuasion disciplined by dialectical rationality. Currently, van Eemeren and Houtlosser offer one response to this concern in the form of strategic manoeuvring. This paper offers a prior/passing theory of communicative interaction as a supplement to the strategic manoeuvring approach. Our use of a prior/passing model investigates how a difference of opinion can be resolved while both dialectical obligations of reasonableness and rhetorical ambitions of argumentative success are simultaneously accommodated. The paper explores the model with particular reference to the pragma-dialectical rules of critical discussion, strategic manoeuvring and fallacious reasoning.
It is well known that classical, aka 'sharp', Bayesian decision theory, which models belief states as single probability functions, faces a number of serious difficulties with respect to its handling of agnosticism. These difficulties have led to the increasing popularity of so-called 'imprecise' models of decision-making, which represent belief states as sets of probability functions. In a recent paper, however, Adam Elga has argued in favour of a putative normative principle of sequential choice that he claims to be borne out by the sharp model but not by any promising incarnation of its imprecise counterpart. After first pointing out that Elga has fallen short of establishing that his principle is indeed uniquely borne out by the sharp model, I cast aspersions on its plausibility. I show that a slight weakening of the principle is satisfied by at least one, but interestingly not all, varieties of the imprecise model and point out that Elga has failed to motivate his stronger commitment.
The epistemic probability of A given B is the degree to which B evidentially supports A, or makes A plausible. This paper is a first step in answering the question of what determines the values of epistemic probabilities. I break this question into two parts: the structural question and the substantive question. Just as an object's weight is determined by its mass and gravitational acceleration, some probabilities are determined by other, more basic ones. The structural question asks what probabilities are not determined in this way: these are the basic probabilities which determine values for all other probabilities. The substantive question asks how the values of these basic probabilities are determined. I defend an answer to the structural question on which basic probabilities are the probabilities of atomic propositions conditional on potential direct explanations. I defend this against the view, implicit in orthodox mathematical treatments of probability, that basic probabilities are the unconditional probabilities of complete worlds. I then apply my answer to the structural question to clear up common confusions in expositions of Bayesianism and shed light on the "problem of the priors."
Ramsification is a well-known method of defining theoretical terms that figures centrally in a wide range of debates in metaphysics. Prior's puzzle is the puzzle of why, given the assumption that that-clauses denote propositions, substitution of "the proposition that P" for "that P" within the complements of many propositional attitude verbs sometimes fails to preserve truth, and other times fails to preserve grammaticality. On the surface, Ramsification and Prior's puzzle appear to have little to do with each other. But Prior's puzzle is much more general than is ordinarily appreciated, and Ramsification requires a solution to the generalized form of Prior's puzzle. Without such a solution, a wide range of theories will either fail to imply their Ramsey sentences, or have Ramsey sentences that are ill-formed. As a consequence, definitions of theoretical terms given using the Ramsey sentence will be either incorrect or nonsensical. I present a partial solution to the puzzle that requires making use of a neo-Davidsonian language for scientific theorizing, but the would-be Ramsifier still faces serious challenges.
We critically examine the role and status that probabilities, as they enter via the Quantum Equilibrium Hypothesis, play in the standard, deterministic interpretation of deBroglie's and Bohm's Pilot Wave Theory (dBBT), by considering interpretations of probabilities in terms of ignorance, typicality and Humean Best Systems, respectively. We argue that there is an inherent conflict between dBBT and probabilities, thus construed. The conflict originates in dBBT's deterministic nature, rooted in the Guidance Equation. Inquiring into the latter's role within dBBT, we find it explanatorily redundant (in particular for dBBT's solution of the Measurement Problem, which only requires that the corpuscles possess definite positions), and subject to a number of difficulties. Following a suggestion from Bell, we propose to abandon the Guidance Equation, whilst retaining dBBT's point particle-based Primitive Ontology, with positions as local beables. The resultant theory, which we identify as a stochastic, minimally deBroglie-Bohmian theory (sdBBT), describes a random walk through configuration space. Its probabilities, we propose, are best understood as dispositions of possible corpuscle configurations to manifest themselves. We subsequently evaluate the merits of sdBBT vis-à-vis dBBT, such as the justification of the Symmetrisation Postulate and the violation of the Action-Reaction Principle. Not only is sdBBT an attractive Bohmian theory that, whilst retaining dBBT's virtues, overcomes many of its shortcomings; it also sparks off a number of exciting follow-up questions, such as a comparison between sdBBT and other stochastic hidden-variable theories, e.g. Nelson Stochastics, or between sdBBT and the Everett interpretation.
Enjoying great popularity in decision theory, epistemology, and philosophy of science, Bayesianism as understood here is fundamentally concerned with epistemically ideal rationality. It assumes a tight connection between evidential probability and ideally rational credence, and usually interprets evidential probability in terms of such credence. Timothy Williamson challenges Bayesianism by arguing that evidential probabilities cannot be adequately interpreted as the credences of an ideal agent. From this and his assumption that evidential probabilities cannot be interpreted as the actual credences of human agents either, he concludes that no interpretation of evidential probabilities in terms of credence is adequate. I argue to the contrary. My overarching aim is to show on behalf of Bayesians how one can still interpret evidential probabilities in terms of ideally rational credence and how one can maintain a tight connection between evidential probabilities and ideally rational credence even if the former cannot be interpreted in terms of the latter. By achieving this aim I illuminate the limits and prospects of Bayesianism.
In this short survey article, I discuss Bell's theorem and some strategies that attempt to avoid the conclusion of non-locality. I focus on two that intersect with the philosophy of probability: (1) quantum probabilities and (2) superdeterminism. The issues they raise not only apply to a wide class of no-go theorems about quantum mechanics but are also of general philosophical interest.
Non-Archimedean probability functions allow us to combine regularity with perfect additivity. We discuss the philosophical motivation for a particular choice of axioms for a non-Archimedean probability theory and answer some philosophical objections that have been raised against infinitesimal probabilities in general.
This chapter will review selected aspects of the terrain of discussions about probabilities in statistical mechanics (with no pretensions to exhaustiveness, though the major issues will be touched upon), and will argue for a number of claims. None of the claims to be defended is entirely original, but all deserve emphasis. The first, and least controversial, is that probabilistic notions are needed to make sense of statistical mechanics. The reason for this is the same reason that convinced Maxwell, Gibbs, and Boltzmann that probabilities would be needed, namely, that the second law of thermodynamics, which in its original formulation says that certain processes are impossible, must, on the kinetic theory, be replaced by a weaker formulation according to which what the original version deems impossible is merely improbable. Second is that we ought not take the standard measures invoked in equilibrium statistical mechanics as giving, in any sense, the correct probabilities about microstates of the system. We can settle for a much weaker claim: that the probabilities for outcomes of experiments yielded by the standard distributions are effectively the same as those yielded by any distribution that we should take as representing probabilities over microstates. Lastly (and most controversially): in asking about the status of probabilities in statistical mechanics, the familiar dichotomy between epistemic probabilities (credences, or degrees of belief) and ontic (physical) probabilities is insufficient; the concept of probability that is best suited to the needs of statistical mechanics is one that combines epistemic and physical considerations.
The predominant view in developmental psychology is that young children are able to reason with the concept of desire prior to being able to reason with the concept of belief. We propose an explanation of this phenomenon that focuses on the cognitive tasks that competence with the belief and desire concepts enables young children to perform. We show that cognitive tasks that are typically considered fundamental to our competence with the belief and desire concepts can be performed with the concept of desire in the absence of competence with the concept of belief, whereas the reverse is considerably less feasible.
Moss (2018) argues that rational agents are best thought of not as having degrees of belief in various propositions but as having beliefs in probabilistic contents, or probabilistic beliefs. Probabilistic contents are sets of probability functions. Probabilistic belief states, in turn, are modeled by sets of probabilistic contents, or sets of sets of probability functions. We argue that this Mossean framework is of considerable interest quite independently of its role in Moss’ account of probabilistic knowledge or her semantics for epistemic modals and probability operators. It is an extremely general model of uncertainty. Indeed, it is at least as general and expressively powerful as every other current imprecise probability framework, including lower probabilities, lower previsions, sets of probabilities, sets of desirable gambles, and choice functions. In addition, we partially answer an important question that Moss leaves open, viz., why should rational agents have consistent probabilistic beliefs? We show that an important subclass of Mossean believers avoid Dutch bookability iff they have consistent probabilistic beliefs.
Prior Analytics by the Greek philosopher Aristotle (384 – 322 BCE) and Laws of Thought by the English mathematician George Boole (1815 – 1864) are the two most important surviving original logical works from before the advent of modern logic. This article has a single goal: to compare Aristotle’s system with the system that Boole constructed over twenty-two centuries later intending to extend and perfect what Aristotle had started. This comparison merits an article in itself. Accordingly, this article does not discuss many other historically and philosophically important aspects of Boole’s book, e.g. his confused attempt to apply differential calculus to logic, his misguided effort to make his system of ‘class logic’ serve as a kind of ‘truth-functional logic’, his now almost forgotten foray into probability theory, or his blindness to the fact that a truth-functional combination of equations that follows from a given truth-functional combination of equations need not follow truth-functionally. One of the main conclusions is that Boole’s contribution widened logic and changed its nature to such an extent that he fully deserves to share with Aristotle the status of being a founding figure in logic. By setting forth in clear and systematic fashion the basic methods for establishing validity and for establishing invalidity, Aristotle became the founder of logic as formal epistemology. By making the first unmistakable steps toward opening logic to the study of ‘laws of thought’—tautologies and laws such as excluded middle and non-contradiction—Boole became the founder of logic as formal ontology.
In Prior Analytics I.30, Aristotle seems overly optimistic about discovering the principles of the sciences. For he seems to say that, if our empirical collection of facts in a given domain is exhaustive or sufficient, it will be easy for us to find the explanatory principles in the domain. However, there is a distance between collecting facts and finding the explanatory principles in a given domain. In this paper, I discuss how the key expression in the sentence at 46a25 should be interpreted: “the true characteristics of things” (“τῶν ἀληθῶς ὑπαρχόντων τοῖς πράγμασιν”). I argue that, on a more accurate interpretation of the expression, Aristotle’s point would cease to look like a piece of naïve or even silly optimism.
Orthodoxy has it that knowledge is a composite of belief and non-mental factors. However, Timothy Williamson suggests that orthodoxy implies that the concept of belief is acquired before the concept of knowledge, whereas developmental data suggest the reverse. More recently, Jennifer Nagel reviews the psychological evidence, building a psychological case that the concept of knowledge emerges prior to belief. I assess the psychological state of the art and find support for the opposite conclusion. Overall the empirical evidence supports the orthodox view that the concept of belief is prior to the concept of knowledge.
The standard treatment of conditional probability leaves conditional probability undefined when the conditioning proposition has zero probability. Nonetheless, some find the option of extending the scope of conditional probability to include zero-probability conditions attractive or even compelling. This article reviews some of the pitfalls associated with this move, and concludes that, for the most part, probabilities conditional on zero-probability propositions are more trouble than they are worth.
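The undefinedness at issue is simply the ratio analysis of conditional probability failing when the denominator vanishes. A minimal sketch over a toy finite space makes this concrete (the function names `prob` and `cond_prob` are illustrative, not drawn from the article):

```python
from fractions import Fraction

# A toy finite probability space: a fair six-sided die.
space = {outcome: Fraction(1, 6) for outcome in range(1, 7)}

def prob(event):
    """Probability of a set of outcomes."""
    return sum(space[w] for w in event if w in space)

def cond_prob(a, b):
    """Ratio definition P(A|B) = P(A & B) / P(B); undefined when P(B) = 0."""
    pb = prob(b)
    if pb == 0:
        raise ZeroDivisionError("P(A|B) undefined: conditioning event has probability 0")
    return prob(a & b) / pb

evens = {2, 4, 6}
print(cond_prob({2}, evens))  # → 1/3

try:
    cond_prob({2}, set())  # conditioning on a zero-probability event
except ZeroDivisionError as e:
    print(e)
```

Extending `cond_prob` to return a value in the zero-denominator branch, rather than raising, is precisely the move whose costs the article weighs.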
Prior's puzzle is the puzzle of why, given the assumption that that-clauses denote propositions, substitution of "the proposition that P" for "that P" within the complements of many propositional attitude verbs is invalid. I show that there are two variants on Prior's puzzle---a quantificational variant and a pronominal variant---that have the same source and warrant the same solution as the original puzzle. I then show that neither the original puzzle nor its variants are specific to that-clauses or propositional attitude verbs. Rather, they arise in the complements of all kinds of attitude verbs, in adjectival positions, in adverbial positions, and in a variety of other positions as well. These variants, together with the generalization to other grammatical categories, show that a wide range of proposed solutions to Prior's puzzle fail, or are radically incomplete. I then discuss one view that stands to solve the generalized puzzle.
In a quantum universe with a strong arrow of time, it is standard to postulate that the initial wave function started in a particular macrostate---the special low-entropy macrostate selected by the Past Hypothesis. Moreover, there is an additional postulate about statistical mechanical probabilities according to which the initial wave function is a ''typical'' choice in the macrostate. Together, they support a probabilistic version of the Second Law of Thermodynamics: typical initial wave functions will increase in entropy. Hence, there are two sources of randomness in such a universe: the quantum-mechanical probabilities of the Born rule and the statistical mechanical probabilities of the Statistical Postulate. I propose a new way to understand time's arrow in a quantum universe. It is based on what I call the Thermodynamic Theories of Quantum Mechanics. According to this perspective, there is a natural choice for the initial quantum state of the universe, which is given not by a wave function but by a density matrix. The density matrix plays a microscopic role: it appears in the fundamental dynamical equations of those theories. The density matrix also plays a macroscopic/thermodynamic role: it is exactly the projection operator onto the Past Hypothesis subspace. Thus, given an initial subspace, we obtain a unique choice of the initial density matrix. I call this property "the conditional uniqueness" of the initial quantum state. The conditional uniqueness provides a new and general strategy to eliminate statistical mechanical probabilities in the fundamental physical theories, by which we can reduce the two sources of randomness to only the quantum mechanical one. I also explore the idea of an absolutely unique initial quantum state, in a way that might realize Penrose's idea of a strongly deterministic universe.
I attempt to meet some criticisms that Williamson makes of my attempt to carry out Prior's project of reducing possibility discourse to actualist discourse.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
Peter Baumann uses the Monty Hall game to demonstrate that probabilities cannot be meaningfully applied to individual games. Baumann draws from this first conclusion a second: in a single game, it is not necessarily rational to switch from the door that I have initially chosen to the door that Monty Hall did not open. After challenging Baumann's particular arguments for these conclusions, I argue that there is a deeper problem with his position: it rests on the false assumption that what justifies the switching strategy is its leading me to win a greater percentage of the time. In fact, what justifies the switching strategy is not any statistical result over the long run but rather the "causal structure" intrinsic to each individual game itself. Finally, I argue that an argument by Hilary Putnam will not help to save Baumann's second conclusion above.
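The long-run frequencies that, on this view, do not themselves justify switching are nonetheless easy to exhibit. A minimal simulation sketch (the function name `monty_hall_trial` is hypothetical) reproduces the standard 2/3 versus 1/3 result:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one Monty Hall game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
wins_switch = sum(monty_hall_trial(switch=True) for _ in range(n))
wins_stay = sum(monty_hall_trial(switch=False) for _ in range(n))
print(f"switch: {wins_switch / n:.3f}")  # close to 2/3
print(f"stay:   {wins_stay / n:.3f}")    # close to 1/3
```

On the article's view, what such a simulation tracks is the causal structure present in every single game (the initial pick is wrong 2/3 of the time, and switching then always wins), rather than a statistical fact that only emerges over many games.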
In this article I criticize the recommendations of some prominent statisticians about how to estimate and compare the probabilities of repeated sudden infant death and of repeated murder. The issue has drawn considerable public attention in connection with several recent court cases in the UK. I try to show that when the three components of the Bayesian inference are carefully analyzed in this context, the advice of the statisticians turns out to be problematic at each of the steps.
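The odds form of Bayes' theorem makes vivid why comparing the two hypotheses requires a prior for each, not merely the probability of the evidence under one of them. The sketch below uses placeholder numbers, not the figures from the UK cases or from the article:

```python
# Posterior odds = prior odds x likelihood ratio (Bayes' theorem in odds form).
# All numbers are hypothetical placeholders for illustration only.

prior_double_sids = 1 / 500_000      # hypothetical prior: two SIDS deaths in one family
prior_double_murder = 1 / 2_000_000  # hypothetical prior: double infant murder

# Likelihood of the observed evidence (two infant deaths) under each hypothesis;
# here both hypotheses entail the deaths, so the likelihoods are equal.
likelihood_sids = 1.0
likelihood_murder = 1.0

posterior_odds = (prior_double_sids * likelihood_sids) / (
    prior_double_murder * likelihood_murder
)
print(f"posterior odds, SIDS over murder: {posterior_odds:.1f}")  # 4.0 with these priors
```

The point the comparison illustrates is that quoting only how improbable double SIDS is, without a comparably careful prior for the competing hypothesis, settles nothing about the posterior odds.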
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The second of these articles discusses the regularity approach to statistical mechanical probabilities, and describes some areas where further research is needed.
Traditionally, ‘that’-clauses occurring in attitude attributions are taken to denote the objects of the attitudes. Prior raised a famous problem: even if Frege fears that the Begriffsschrift leads to a paradox, it is unlikely that he fears a proposition, a sentence or what have you as the alleged object denoted by the ‘that’-clause. The usual way out is to say that ‘that’-clauses do not contribute the objects of the attitudes but their contents. I will show that, if we accept this answer, either we’d better stop working on attitude attributions or we’d better work harder on them.
Research on preference reversals has demonstrated a disproportionate influence of outcome probability on choices between monetary gambles. The aim was to investigate the hypothesis that this is an instance of the prominence effect originally demonstrated for riskless choice. Another aim was to test the structure compatibility hypothesis as an explanation of the effect. The hypothesis implies that probability should be the prominent attribute when compared with value attributes both in a choice and a preference rating procedure. In Experiment 1, two groups of undergraduates were presented with medical treatments described by two value attributes (effectiveness and pain-relief). All participants performed both a matching task and made preference ratings. In the latter task, outcome probabilities were added to the descriptions of the medical treatments for one of the groups. In line with the hypothesis, this reduced the prominence effect on the preference ratings observed for effectiveness. In Experiment 2, a matching task was used to demonstrate that probability was considered more important by a group of participating undergraduates than the value attributes. Furthermore, in both choices and preference ratings the expected prominence effect was found for probability.
When visual attention is directed away from a stimulus, neural processing is weak and the strength and precision of sensory data decrease. From a computational perspective, in such situations observers should give more weight to prior expectations in order to behave optimally during a discrimination task. Here we test a signal detection theoretic model that counter-intuitively predicts subjects will do just the opposite in a discrimination task with two stimuli, one attended and one unattended: when subjects are probed to discriminate the unattended stimulus, they rely less on prior information about the probed stimulus’ identity. The model is in part inspired by recent findings that attention reduces trial-by-trial variability of the neuronal population response and that observers use a common criterion for attended and unattended trials. In five different visual discrimination experiments, when attention was directed away from the target stimulus, subjects did not adjust their response bias in reaction to a change in stimulus presentation frequency despite being fully informed and despite the presence of performance feedback and monetary and social incentives. This indicates that subjects did not rely more on the priors under conditions of inattention as would be predicted by a Bayes-optimal observer model. These results inform and constrain future models of Bayesian inference in the human brain.
One popular line of argument put forward in support of the principle that the right is prior to the good is to show that teleological theories, which put the good prior to the right, lead to implausible normative results. There are situations, it is argued, in which putting the good prior to the right entails that we ought to do things that cannot be right for us to do. Consequently, goodness cannot (always) explain an action's rightness. This indicates that what is right must be determined independently of the good. In this paper, I argue that these purported counterexamples to teleology fail to establish that the right must be prior to the good. In fact, putting the right prior to the good can lead to sets of ought statements which potentially conflict with the principle that ‘ought’ implies ‘can’. I argue that no plausible ethical theory can determine what is right independently of a notion of value or goodness. Every plausible ethical theory needs a mapping from goodness to rightness, which implies that the right cannot be prior to the good.
IBE ('Inference to the best explanation' or abduction) is a popular and highly plausible theory of how we should judge the evidence for claims of past events based on present evidence. It has been notably developed and supported recently by Meyer following Lipton. I believe this theory is essentially correct. This paper supports IBE from a probability perspective, and argues that the retrodictive probabilities involved in such inferences should be analysed in terms of predictive probabilities and a priori probability ratios of initial events. The key point is to separate these two features. Disagreements over evidence can be traced to disagreements over either the a priori probability ratios or predictive conditional ratios. In many cases, in real science, judgements of the former are necessarily subjective. The principles of iterated evidence are also discussed. The Sceptic's position is criticised as ignoring iteration of evidence, and characteristically failing to adjust a priori probability ratios in response to empirical evidence.