Sometimes different partitions of the same space each seem to divide that space into propositions that call for equal epistemic treatment. Famously, equal treatment in the form of equal point-valued credence leads to incoherence. Some have argued that equal treatment in the form of equal interval-valued credence solves the puzzle. This paper shows that, once we rule out intervals with extreme endpoints, this proposal also leads to incoherence.
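The point-valued incoherence can be seen in a toy sketch (my own illustration, not an example from the paper): treat three two-cell partitions of a three-outcome space equally, and additivity fails.

```python
from fractions import Fraction

# Toy space {a, b, c}.  Each of the three two-cell partitions
# ({a} vs {b, c}, {b} vs {a, c}, {c} vs {a, b}) seems to call for
# equal treatment, i.e. credence 1/2 for each cell -- so each
# singleton receives credence 1/2.
credence = {atom: Fraction(1, 2) for atom in ["a", "b", "c"]}

total = sum(credence.values())
print(total)  # 3/2: the atoms' credences sum to more than 1, so no single
              # probability function treats all three partitions equally
```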
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
The article is a plea for ethicists to regard probability as one of their most important concerns. It outlines a series of topics of central importance in ethical theory in which probability is implicated, often in a surprisingly deep way, and lists a number of open problems. Topics covered include: interpretations of probability in ethical contexts; the evaluative and normative significance of risk or uncertainty; uses and abuses of expected utility theory; veils of ignorance; Harsanyi’s aggregation theorem; population size problems; equality; fairness; giving priority to the worse off; continuity; incommensurability; nonexpected utility theory; evaluative measurement; aggregation; causal and evidential decision theory; act consequentialism; rule consequentialism; and deontology.
The problem of approximating a propositional calculus is to find many-valued logics which are sound for the calculus (i.e., all theorems of the calculus are tautologies) with as few tautologies as possible. This has potential applications for representing (computationally complex) logics used in AI by (computationally easy) many-valued logics. It is investigated how far this method can be carried using (1) one or (2) an infinite sequence of many-valued logics. It is shown that the optimal candidate matrices for (1) can be computed from the calculus.
The notion of comparative probability defined in Bayesian subjectivist theory stems from an intuitive idea that, for a given pair of events, one event may be considered “more probable” than the other. Yet it is conceivable that there are cases where it is indeterminate as to which event is more probable, due to, e.g., lack of robust statistical information. We take it that these cases involve indeterminate comparative probabilities. This paper provides a Savage-style decision-theoretic foundation for indeterminate comparative probabilities.
This paper motivates and develops a novel semantic framework for deontic modals. The framework is designed to shed light on two things: the relationship between deontic modals and substantive theories of practical rationality and the interaction of deontic modals with conditionals, epistemic modals and probability operators. I argue that, in order to model inferential connections between deontic modals and probability operators, we need more structure than is provided by classical intensional theories. In particular, we need probabilistic structure that interacts directly with the compositional semantics of deontic modals. However, I reject theories that provide this probabilistic structure by claiming that the semantics of deontic modals is linked to the Bayesian notion of expectation. I offer a probabilistic premise semantics that explains all the data that create trouble for the rival theories.
In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as in other tasks assessing the participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
Boolean-valued models of set theory were independently introduced by Scott, Solovay and Vopěnka in 1965, offering a natural and rich alternative for describing forcing. The original method was adapted by Takeuti, Titani, Kozawa and Ozawa to lattice-valued models of set theory. After this, Löwe and Tarafder proposed a class of algebras based on a certain kind of implication which satisfy several axioms of ZF. From this class, they found a specific 3-valued model called PS3 which satisfies all the axioms of ZF, and can be expanded with a paraconsistent negation *, thus obtaining a paraconsistent model of ZF. The logic (PS3, *) coincides (up to language) with da Costa and D'Ottaviano logic J3, a 3-valued paraconsistent logic that has been proposed independently in the literature by several authors and with different motivations, such as CluNs, LFI1 and MPT. We propose in this paper a family of algebraic models of ZFC based on LPT0, another linguistic variant of J3 introduced by us in 2016. The semantics of LPT0, as well as of its first-order version QLPT0, is given by twist structures defined over Boolean algebras. From this, it is possible to adapt the standard Boolean-valued models of (classical) ZFC to twist-valued models of an expansion of ZFC by adding a paraconsistent negation. We argue that the implication operator of LPT0 is more suitable for a paraconsistent set theory than the implication of PS3, since it allows for genuinely inconsistent sets w such that [[w = w]] = 1/2. This implication is not a 'reasonable implication' as defined by Löwe and Tarafder. This suggests that 'reasonable implication algebras' are just one way to define a paraconsistent set theory. Our twist-valued models are adapted to provide a class of twist-valued models for (PS3, *), thus generalizing Löwe and Tarafder's result. It is shown that they are in fact models of ZFC (not only of ZF).
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (1) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (2) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
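A standard toy case (my numbers, not the paper's examples) shows the phenomenon: E and F each have precise probability 1/2, but their dependence is unknown, so conditioning on either answer about F dilates the estimate of E to the whole unit interval.

```python
# Dilation in miniature: the credal set contains every joint
# distribution over (E, F) with precise marginals P(E) = P(F) = 1/2.
# Each joint is fixed by p = P(E & F), free to lie in [0, 1/2].
def cond_on_F(p):
    return p / 0.5            # P(E | F)  = P(E & F) / P(F)

def cond_on_not_F(p):
    return (0.5 - p) / 0.5    # P(E | ~F) = P(E & ~F) / P(~F)

ps = [i / 200 for i in range(101)]          # p ranging over [0, 0.5]
lower_F = min(cond_on_F(p) for p in ps)
upper_F = max(cond_on_F(p) for p in ps)
print(lower_F, upper_F)  # 0.0 1.0: the precise P(E) = 1/2 dilates to
                         # [0, 1] conditional on F (and likewise on ~F)
```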
Many philosophers argue that Keynes’s concept of the “weight of arguments” is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of “Evidential Probability” to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness of this measure by employing it to develop an answer to Popper’s Paradox of Ideal Evidence.
We provide a 'verisimilitudinarian' analysis of the well-known Linda paradox or conjunction fallacy, i.e., the fact that most people judge the conjunctive statement "Linda is a bank teller and is active in the feminist movement" (B & F) as more probable than the isolated statement "Linda is a bank teller" (B), contrary to an uncontroversial principle of probability theory. The basic idea is that experimental participants may judge B & F a better hypothesis about Linda as compared to B because they evaluate B & F as more verisimilar than B. In fact, the hypothesis "feminist bank teller", while less likely to be true than "bank teller", may well be a better approximation to the truth about Linda.
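The "uncontroversial principle" is just that a conjunction is never more probable than its conjuncts; a quick numerical check (my own sketch) confirms it holds for any distribution over the four Linda cells:

```python
import random
random.seed(0)

# However Linda's case is filled in, no probability function makes
# "bank teller and feminist" (B & F) more probable than "bank teller"
# (B), since B & F entails B.  Check over random joint distributions
# on the four cells (B & F, B & ~F, ~B & F, ~B & ~F).
for _ in range(1000):
    w = [random.random() for _ in range(4)]
    s = sum(w)
    p_BF, p_BnotF, p_notBF, p_notBnotF = (x / s for x in w)
    p_B = p_BF + p_BnotF
    assert p_BF <= p_B  # P(B & F) <= P(B) on every trial
```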
The major competing statistical paradigms share a common remarkable but unremarked thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively “subjective” and “objective” interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an “objective” probability.
This book explores a question central to philosophy--namely, what does it take for a belief to be justified or rational? According to a widespread view, whether one has justification for believing a proposition is determined by how probable that proposition is, given one's evidence. In this book this view is rejected and replaced with another: in order for one to have justification for believing a proposition, one's evidence must normically support it--roughly, one's evidence must make the falsity of that proposition abnormal in the sense of calling for special, independent explanation. This conception of justification bears upon a range of topics in epistemology and beyond. Ultimately, this way of looking at justification guides us to a new, unfamiliar picture of how we should respond to our evidence and manage our own fallibility. This picture is developed here.
This paper defends David Hume's "Of Miracles" from John Earman's (2000) Bayesian attack by showing that Earman misrepresents Hume's argument against believing in miracles and misunderstands Hume's epistemology of probable belief. It argues, moreover, that Hume's account of evidence is fundamentally non-mathematical and thus cannot be properly represented in a Bayesian framework. Hume's account of probability is shown to be consistent with a long and laudable tradition of evidential reasoning going back to ancient Roman law.
This paper is a response to Tyler Wunder’s ‘The modality of theism and probabilistic natural theology: a tension in Alvin Plantinga's philosophy’ (this journal). In his article, Wunder argues that if the proponent of the Evolutionary Argument Against Naturalism (EAAN) holds theism to be non-contingent and frames the argument in terms of objective probability, then the EAAN is either unsound or theism is necessarily false. I argue that a modest revision of the EAAN renders Wunder’s objection irrelevant, and that this revision actually widens the scope of the argument.
In probability discounting (or probability weighting), one multiplies the value of an outcome by one's subjective probability that the outcome will obtain in decision-making. The broader import of defending probability discounting is to help justify cost-benefit analyses in contexts such as climate change. This chapter defends probability discounting under risk both negatively, against arguments by Simon Caney (2008, 2009), and positively, with a new argument. First, in responding to Caney, I argue that small costs and benefits need to be evaluated, and that viewing practices at the social level is too coarse-grained. Second, I argue for probability discounting, using a distinction between causal responsibility and moral responsibility. Moral responsibility can be cashed out in terms of blameworthiness and praiseworthiness, while causal responsibility obtains in full for any effect which is part of a causal chain linked to one's act. With this distinction in hand, unlike causal responsibility, moral responsibility can be seen as coming in degrees. My argument is that, given that we can limit our deliberation and consideration to that which we are morally responsible for and that our moral responsibility for outcomes is limited by our subjective probabilities, our subjective probabilities can ground probability discounting.
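A minimal sketch of the discounting operation itself (numbers are my own illustration, not the chapter's or Caney's):

```python
# Probability discounting: weight each outcome's value by one's
# subjective probability for it, then sum.
outcomes = [(-1000.0, 0.001),  # catastrophic loss, very improbable
            (5.0, 0.999)]      # modest benefit, very probable

discounted_value = sum(value * prob for value, prob in outcomes)
print(round(discounted_value, 3))  # 3.995: the tiny-probability
# catastrophe is discounted rather than treated as decisive
```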
A probability distribution is regular if no possible event is assigned probability zero. While some hold that probabilities should always be regular, three counter-arguments have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson (2017) and Benci et al. (2016) have raised technical objections to these symmetry arguments, but we see here that their objections fail. Howson says that Williamson’s (2007) “isomorphic” events are not in fact isomorphic, but Howson is speaking of set-theoretic representations of events in a probability model. While those sets are not isomorphic, Williamson’s physical events are, in the relevant sense. Benci et al. claim that all three arguments rest on a conflation of different models, but they do not. They are founded on the premise that similar events should have the same probability in the same model, or in one case, on the assumption that a single rotation-invariant distribution is possible. Having failed to refute the symmetry arguments on such technical grounds, one could deny their implicit premises, which is a heavy cost, or adopt varying degrees of instrumentalism or pluralism about regularity, but that would not serve the project of accurately modelling chances.
A definition of causation as probability-raising is threatened by two kinds of counterexample: first, when a cause lowers the probability of its effect; and second, when the probability of an effect is raised by a non-cause. In this paper, I present an account that deals successfully with problem cases of both these kinds. In doing so, I also explore some novel implications of incorporating into the metaphysical investigation considerations of causal psychology.
There is a plethora of confirmation measures in the literature. Zalabardo considers four such measures: PD, PR, LD, and LR. He argues for LR and against each of PD, PR, and LD. First, he argues that PR is the better of the two probability measures. Next, he argues that LR is the better of the two likelihood measures. Finally, he argues that LR is superior to PR. I set aside LD and focus on the trio of PD, PR, and LR. The question I address is whether Zalabardo succeeds in showing that LR is superior to each of PD and PR. I argue that the answer is negative. I also argue, though, that measures such as PD and PR, on one hand, and measures such as LR, on the other hand, are naturally understood as explications of distinct senses of confirmation.
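Assuming the standard definitions these labels usually abbreviate (probability difference and ratio, likelihood difference and ratio), the four measures can be computed side by side; the inputs are my own toy numbers:

```python
from fractions import Fraction

def measures(p_H, p_E_given_H, p_E_given_notH):
    """Compute PD, PR, LD, LR in their standard forms."""
    p_notH = 1 - p_H
    p_E = p_H * p_E_given_H + p_notH * p_E_given_notH  # total probability
    p_H_given_E = p_H * p_E_given_H / p_E              # Bayes' theorem
    return {
        "PD": p_H_given_E - p_H,                 # probability difference
        "PR": p_H_given_E / p_H,                 # probability ratio
        "LD": p_E_given_H - p_E_given_notH,      # likelihood difference
        "LR": p_E_given_H / p_E_given_notH,      # likelihood ratio
    }

m = measures(Fraction(1, 4), Fraction(3, 4), Fraction(1, 4))
print(m)  # PD = 1/4, PR = 2, LD = 1/2, LR = 3
```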
A study is reported testing two hypotheses about a close parallel relation between indicative conditionals, if A then B, and conditional bets, I bet you that if A then B. The first is that both the indicative conditional and the conditional bet are related to the conditional probability, P(B|A). The second is that de Finetti's three-valued truth table has psychological reality for both types of conditional – true, false, or void for indicative conditionals and win, lose or void for conditional bets. The participants were presented with an array of chips in two different colours and two different shapes, and an indicative conditional or a conditional bet about a random chip. They had to make judgments in two conditions: either about the chances of making the indicative conditional true or false or about the chances of winning or losing the conditional bet. The observed distributions of responses in the two conditions were generally related to the conditional probability, supporting the first hypothesis. In addition, a majority of participants in further conditions chose the third option, “void”, when the antecedent of the conditional was false, supporting the second hypothesis.
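On de Finetti's reading, the chance of the conditional's coming out "true" among the non-void cases just is the conditional probability. A sketch with a hypothetical array of chips (the counts are my own, not the study's materials):

```python
from fractions import Fraction

# A hypothetical array of chips, keyed by (shape, colour).
chips = {("square", "red"): 4, ("square", "blue"): 2,
         ("round", "red"): 1, ("round", "blue"): 3}

# De Finetti truth table for "if the chip is square, it is red":
true_  = chips[("square", "red")]    # antecedent true, consequent true
false_ = chips[("square", "blue")]   # antecedent true, consequent false
void   = chips[("round", "red")] + chips[("round", "blue")]  # antecedent false

# Chance of "true" among the non-void cases = P(red | square).
p_cond = Fraction(true_, true_ + false_)
print(p_cond)  # 2/3
```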
We generalize the Kolmogorov axioms for probability calculus to obtain conditions defining, for any given logic, a class of probability functions relative to that logic, coinciding with the standard probability functions in the special case of classical logic but allowing consideration of other classes of "essentially Kolmogorovian" probability functions relative to other logics. We take a broad view of the Bayesian approach as dictating inter alia that from the perspective of a given logic, rational degrees of belief are those representable by probability functions from the class appropriate to that logic. Classical Bayesianism, which fixes the logic as classical logic, is only one version of this general approach. Another, which we call Intuitionistic Bayesianism, selects intuitionistic logic as the preferred logic and the associated class of probability functions as the right class of candidate representations of epistemic states (rational allocations of degrees of belief). Various objections to classical Bayesianism are, we argue, best met by passing to intuitionistic Bayesianism—in which the probability functions are taken relative to intuitionistic logic—rather than by adopting a radically non-Kolmogorovian, for example, nonadditive, conception of (or substitute for) probability functions, in spite of the popularity of the latter response among those who have raised these objections. The interest of intuitionistic Bayesianism is further enhanced by the availability of a Dutch Book argument justifying the selection of intuitionistic probability functions as guides to rational betting behavior when due consideration is paid to the fact that bets are settled only when/if the outcome bet on becomes known.
Modern scientific cosmology pushes the boundaries of knowledge and the knowable. This is prompting questions on the nature of scientific knowledge. A central issue is what defines a 'good' model. When addressing global properties of the Universe or its initial state this becomes a particularly pressing issue. How to assess the probability of the Universe as a whole is empirically ambiguous, since we can examine only part of a single realisation of the system under investigation: at some point, data will run out. We review the basics of applying Bayesian statistical explanation to the Universe as a whole. We argue that a conventional Bayesian approach to model inference generally fails in such circumstances, and cannot resolve, e.g., the so-called 'measure problem' in inflationary cosmology. Implicit and non-empirical valuations inevitably enter model assessment in these cases. This undermines the possibility of performing Bayesian model comparison. One must therefore either stay silent, or pursue a more general form of systematic and rational model assessment. We outline a generalised axiological Bayesian model inference framework, based on mathematical lattices. This extends inference based on empirical data (evidence) to additionally consider the properties of model structure (elegance) and model possibility space (beneficence). We propose this as a natural and theoretically well-motivated framework for introducing an explicit, rational approach to theoretical model prejudice and inference beyond data.
Leibniz’s account of probability has come into better focus over the past decades. However, less attention has been paid to a certain domain of application of that account, that is, the application of it to the moral or ethical domain—the sphere of action, choice and practice. This is significant, as Leibniz had some things to say about applying probability theory to the moral domain, and thought the matter quite relevant. Leibniz’s work in this area is conducted at a high level of abstraction. It establishes a proof of concept, rather than concrete guidelines for how to apply calculations to specific cases. Still, this highly abstract material does allow us to begin to construct a framework for thinking about Leibniz’s approach to the ethical side of probability.
NOTE: This paper is a reworking of some aspects of an earlier paper – ‘What else justification could be’ – and also an early draft of chapter 2 of Between Probability and Certainty. I'm leaving it online as it has a couple of citations and there is some material here which didn't make it into the book (and which I may yet try to explore elsewhere). My concern in this paper is with a certain, pervasive picture of epistemic justification. On this picture, acquiring justification for believing something is essentially a matter of minimising one’s risk of error – so one is justified in believing something just in case it is sufficiently likely, given one’s evidence, to be true. This view is motivated by an admittedly natural thought: If we want to be fallibilists about justification then we shouldn’t demand that something be certain – that we completely eliminate error risk – before we can be justified in believing it. But if justification does not require the complete elimination of error risk, then what could it possibly require if not its minimisation? If justification does not require epistemic certainty then what could it possibly require if not epistemic likelihood? When all is said and done, I’m not sure that I can offer satisfactory answers to these questions – but I will attempt to trace out some possible answers here. The alternative picture that I’ll outline makes use of a notion of normalcy that I take to be irreducible to notions of statistical frequency or predominance.
Dutch Book arguments have been presented for static belief systems and for belief change by conditionalization. An argument is given here that a rule for belief change which under certain conditions violates probability kinematics will leave the agent open to a Dutch Book.
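For the special case of ordinary conditionalization (the result above concerns probability kinematics more generally), here is a concrete book, with numbers of my own choosing, against an agent who plans to update to something other than his conditional probability:

```python
# The agent has P(E) = 0.5 and P(H | E) = 0.8, but plans to adopt the
# posterior q = 0.6 for H upon learning E.
P_E, P_H_given_E, q = 0.5, 0.8, 0.6
d = P_H_given_E - q  # the planned deviation, 0.2

def agent_net(E, H):
    net = 0.0
    # Bet 1 (t0): agent buys a conditional bet on H given E at price
    # P(H | E); the bet is called off (price refunded) if E fails.
    if E:
        net += (1.0 if H else 0.0) - P_H_given_E
    # Bet 2 (t0): agent sells a bet paying d if ~E, for premium d * P(~E).
    net += d * (1 - P_E) - (d if not E else 0.0)
    # Bet 3 (t1): if E occurs, agent sells a bet on H at his new price q.
    if E:
        net += q - (1.0 if H else 0.0)
    return net

for E, H in [(True, True), (True, False), (False, True), (False, False)]:
    print(round(agent_net(E, H), 10))  # -0.1 in every case: a sure loss
```

All three bets are fair by the agent's own lights at the moment they are offered, yet he loses d * P(E) = 0.1 however the world turns out.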
How were reliable predictions made before Pascal and Fermat's discovery of the mathematics of probability in 1654? What methods in law, science, commerce, philosophy, and logic helped us to get at the truth in cases where certainty was not attainable? The book examines how judges, witch inquisitors, and juries evaluated evidence; how scientists weighed reasons for and against scientific theories; and how merchants counted shipwrecks to determine insurance rates. Also included are the problem of induction before Hume, design arguments for the existence of God, and theories on how to evaluate scientific and historical hypotheses. It is explained how Pascal and Fermat's work on chance arose out of legal thought on aleatory contracts. The book interprets pre-Pascalian unquantified probability in a generally objective Bayesian or logical probabilist sense.
Hájek has recently presented the following paradox. You are certain that a cable guy will visit you tomorrow between 8 a.m. and 4 p.m. but you have no further information about when. And you agree to a bet on whether he will come in the morning interval (8, 12] or in the afternoon interval (12, 4). At first, you have no reason to prefer one possibility rather than the other. But you soon realise that there will definitely be a future time at which you will (rationally) assign higher probability to an afternoon arrival than a morning one, due to time elapsing. You are also sure there may not be a future time at which you will (rationally) assign a higher probability to a morning arrival than an afternoon one. It would therefore appear that you ought to bet on an afternoon arrival. The paradox is based on the apparent incompatibility of the principle of expected utility and principles of diachronic rationality which are prima facie plausible. Hájek concludes that the latter are false, but doesn't provide a clear diagnosis as to why. We endeavour to further our understanding of the paradox by providing such a diagnosis.
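To see the elapsing-time effect, suppose for illustration a uniform arrival distribution (the paradox itself does not require this assumption): once any time past 8 a.m. has elapsed without an arrival, the afternoon is strictly more probable.

```python
# Arrival uniform on (8, 16) in 24-hour time; morning = (8, 12],
# afternoon = (12, 16).  Condition on no arrival by time t < 12 by
# renormalizing over the remaining window (16 - t).
def p_afternoon(t):
    return 4.0 / (16.0 - t)

def p_morning(t):
    return (12.0 - t) / (16.0 - t)

print(p_afternoon(8.0))   # 0.5: the two intervals start out on a par
print(p_afternoon(11.9))  # ~0.976: afternoon dominates as noon nears
assert all(p_afternoon(t) > p_morning(t) for t in [8.5, 10.0, 11.5])
```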
A general class of labeled sequent calculi is investigated, and necessary and sufficient conditions are given for when such a calculus is sound and complete for a finite-valued logic if the labels are interpreted as sets of truth values. Furthermore, it is shown that any finite-valued logic can be given an axiomatization by such a labeled calculus using arbitrary "systems of signs," i.e., of sets of truth values, as labels. The number of labels needed is logarithmic in the number of truth values, and it is shown that this bound is tight.
There are many scientific and everyday cases where each of Pr(H1 | E) and Pr(H2 | H1) is high and it seems that Pr(H2 | E) is high. But high probability is not transitive and so it might be in such cases that each of Pr(H1 | E) and Pr(H2 | H1) is high and in fact Pr(H2 | E) is not high. There is no issue in the special case where the following condition, which I call “C1”, holds: H1 entails H2. This condition is sufficient for transitivity in high probability. But many of the scientific and everyday cases referred to above are cases where it is not the case that H1 entails H2. I consider whether there are additional conditions sufficient for transitivity in high probability. I consider three candidate conditions. I call them “C2”, “C3”, and “C2&3”. I argue that C2&3, but neither C2 nor C3, is sufficient for transitivity in high probability. I then set out some further results and relate the discussion to the Bayesian requirement of coherence.
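A toy counterexample (mine, not one of the paper's cases) exhibits the failure of transitivity, and a second check illustrates why entailment suffices:

```python
from fractions import Fraction

# Uniform probability over 100 equiprobable worlds.
omega = set(range(100))
def pr(X, given):
    """Conditional probability of X given the set of worlds `given`."""
    return Fraction(len(X & given), len(given))

e  = set(range(10))        # the evidence
H1 = omega                 # Pr(H1 | e) = 1
H2 = set(range(10, 100))   # Pr(H2 | H1) = 9/10, yet Pr(H2 | e) = 0

print(pr(H1, e), pr(H2, H1), pr(H2, e))  # 1 9/10 0

# If H1 entails H2 (H1 is a subset of H2), every e-world in H1 is in
# H2, so Pr(H2 | e) >= Pr(H1 | e): transitivity is restored.
H1c, H2c = set(range(10, 40)), set(range(10, 100))
assert H1c <= H2c and pr(H2c, e) >= pr(H1c, e)
```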
In this article, I introduce the term “cognitivism” as a name for the thesis that degrees of belief are equivalent to full beliefs about truth-valued propositions. The thesis (of cognitivism) that degrees of belief are equivalent to full beliefs is equivocal, inasmuch as different sorts of equivalence may be postulated between degrees of belief and full beliefs. The simplest sort of equivalence (and the sort of equivalence that I discuss here) identifies having a given degree of belief with having a full belief with a specific content. This sort of view was proposed in [C. Howson and P. Urbach, Scientific reasoning: the Bayesian approach. Chicago: Open Court (1996)]. In addition to embracing a form of cognitivism about degrees of belief, Howson and Urbach argued for a brand of probabilism. I call a view, such as Howson and Urbach’s, which combines probabilism with cognitivism about degrees of belief “cognitivist probabilism”. In order to address some problems with Howson and Urbach’s view, I propose a view that incorporates several modifications of Howson and Urbach’s version of cognitivist probabilism. The view that I finally propose upholds cognitivism about degrees of belief, but deviates from the letter of probabilism, in allowing that a rational agent’s degrees of belief need not conform to the axioms of probability, in the case where the agent’s cognitive resources are limited.
In the following we will investigate whether von Mises’ frequency interpretation of probability can be modified to make it philosophically acceptable. We will reject certain elements of von Mises’ theory, but retain others. In the interpretation we propose we do not use von Mises’ often criticized ‘infinite collectives’ but we retain two essential claims of his interpretation, stating that probability can only be defined for events that can be repeated in similar conditions, and that exhibit frequency stabilization. The central idea of the present article is that the mentioned ‘conditions’ should be well-defined and ‘partitioned’. More precisely, we will divide probabilistic systems into object, initializing, and probing subsystems, and show that such partitioning allows one to solve problems. Moreover we will argue that a key idea of the Copenhagen interpretation of quantum mechanics (the determinant role of the observing system) can be seen as deriving from an analytic definition of probability as frequency. Thus a secondary aim of the article is to illustrate the virtues of analytic definition of concepts, consisting of making explicit what is implicit.
We survey the main developments, results, and open problems on interval temporal logics and duration calculi. We present various formal systems studied in the literature and discuss their distinctive features, emphasizing expressiveness, axiomatic systems, and (un)decidability results.
Bayesian confirmation theory is rife with confirmation measures. Zalabardo focuses on the probability difference measure, the probability ratio measure, the likelihood difference measure, and the likelihood ratio measure. He argues that the likelihood ratio measure is adequate, but each of the other three measures is not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by the likelihood ratio measure but not by any of the other three measures. Glass and McCartney, hereafter “G&M,” accept the conclusion of Zalabardo’s argument along with each of the premises in it. They nonetheless try to improve on Zalabardo’s argument by replacing his third adequacy condition with a weaker condition. They do this because of a worry to the effect that Zalabardo’s third adequacy condition runs counter to the idea behind his first adequacy condition. G&M have in mind confirmation in the sense of increase in probability: the degree to which E confirms H is a matter of the degree to which E increases H’s probability. I call this sense of confirmation “IP.” I set out four ways of precisifying IP. I call them “IP1,” “IP2,” “IP3,” and “IP4.” Each of them is based on the assumption that the degree to which E increases H’s probability is a matter of the distance between Pr(H | E) and a certain other probability involving H. I then evaluate G&M’s argument in light of them.
This paper argues that the technical notion of conditional probability, as given by the ratio analysis, is unsuitable for dealing with our pretheoretical and intuitive understanding of both conditionality and probability. This is an ontological account of conditionals that includes an irreducible dispositional connection between the antecedent and consequent conditions and where the conditional has to be treated as an indivisible whole rather than as compositional. The relevant type of conditionality is found in some well-defined group of conditional statements. As an alternative, therefore, we briefly offer grounds for what we would call an ontological reading of both conditionality and conditional probability in general. It is not offered as a fully developed theory of conditionality but can be used, we claim, to explain why calculations according to the RATIO scheme do not coincide with our intuitive notion of conditional probability. What it shows us is that for an understanding of the whole range of conditionals we will need what John Heil (2003), in response to Quine (1953), calls an ontological point of view.
Karl Popper discovered in 1938 that the unconditional probability of a conditional of the form ‘If A, then B’ normally exceeds the conditional probability of B given A, provided that ‘If A, then B’ is taken to mean the same as ‘Not (A and not B)’. So it was clear (but presumably only to him at that time) that the conditional probability of B given A cannot be reduced to the unconditional probability of the material conditional ‘If A, then B’. I describe how this insight was developed in Popper’s writings and I add to this historical study a logical one, in which I compare laws of excess in Kolmogorov probability theory with laws of excess in Popper probability theory.
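The excess in question can be made explicit in standard Kolmogorov probability theory. Assuming only that P(A) > 0, a short calculation consistent with the result described above gives:

```latex
\begin{align*}
P(A \supset B) - P(B \mid A)
  &= 1 - P(A \wedge \neg B) - \frac{P(A \wedge B)}{P(A)} \\
  &= P(\neg A) + P(A \wedge B) - \frac{P(A \wedge B)}{P(A)} \\
  &= P(\neg A) - P(\neg A)\,\frac{P(A \wedge B)}{P(A)} \\
  &= P(\neg A)\,\bigl(1 - P(B \mid A)\bigr) \;\ge\; 0.
\end{align*}
```

So the material conditional’s probability strictly exceeds the conditional probability except in the limiting cases P(A) = 1 or P(B | A) = 1, which matches the “normally exceeds” formulation above.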
This dissertation is a contribution to formal and computational philosophy.

In the first part, we show that by exploiting the parallels between large, yet finite lotteries on the one hand and countably infinite lotteries on the other, we gain insights into the foundations of probability theory as well as into epistemology. Case 1: Infinite lotteries. We discuss how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. The solution boils down to the introduction of infinitesimal probability values, which can be achieved using non-standard analysis. Our solution can be generalized to uncountable sample spaces, giving rise to a Non-Archimedean Probability (NAP) theory. Case 2: Large but finite lotteries. We propose application of the language of relative analysis (a type of non-standard analysis) to formulate a new model for rational belief, called Stratified Belief. This contextualist model seems well-suited to deal with a concept of beliefs based on probabilities ‘sufficiently close to unity’.

The second part presents a case study in social epistemology. We model a group of agents who update their opinions by averaging the opinions of other agents. Our main goal is to calculate the probability for an agent to end up in an inconsistent belief state due to updating. To that end, an analytical expression is given and evaluated numerically, both exactly and using statistical sampling. The probability of ending up in an inconsistent belief state turns out to be always smaller than 2%.
Epistemic closure under known implication is the principle that knowledge of "p" and knowledge of "p implies q", together, imply knowledge of "q". This principle is intuitive, yet several putative counterexamples have been formulated against it. This paper addresses the question, why is epistemic closure both intuitive and prone to counterexamples? In particular, the paper examines whether probability theory can offer an answer to this question based on four strategies. The first probability-based strategy rests on the accumulation of risks. The problem with this strategy is that risk accumulation cannot accommodate certain counterexamples to epistemic closure. The second strategy is based on the idea of evidential support, that is, a piece of evidence supports a proposition whenever it increases the probability of the proposition. This strategy makes progress and can accommodate certain putative counterexamples to closure. However, this strategy also gives rise to a number of counterintuitive results. Finally, there are two broadly probabilistic strategies, one based on the idea of resilient probability and the other on the idea of assumptions that are taken for granted. These strategies are promising but are prone to some of the shortcomings of the second strategy. All in all, I conclude that each strategy fails. Probability theory, then, is unlikely to offer the account we need.
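The risk-accumulation strategy mentioned above can be illustrated numerically. The 0.95 knowledge threshold below is a hypothetical choice for illustration, not a value drawn from the paper:

```python
# Suppose, hypothetically, that knowledge requires credence of at least 0.95.
threshold = 0.95

p = 0.95              # credence in p: just meets the threshold
p_implies_q = 0.95    # credence in "p implies q": also just meets it

# q follows from p together with "p implies q", but probability theory only
# guarantees the Frechet-style lower bound P(q) >= P(p) + P(p implies q) - 1.
worst_case_q = p + p_implies_q - 1

# The two small risks (0.05 each) can accumulate, leaving P(q) as low as 0.90,
# below the threshold -- so closure under known implication is not guaranteed
# by threshold-level credences alone.
print(worst_case_q < threshold)
```

This is why risk accumulation makes closure look fragile; the paper’s point is that this diagnosis nonetheless fails to cover certain counterexamples.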
A uniform construction for sequent calculi for finite-valued first-order logics with distribution quantifiers is exhibited. Completeness, cut-elimination and midsequent theorems are established. As an application, an analog of Herbrand’s theorem for the four-valued knowledge-representation logic of Belnap and Ginsberg is presented. It is indicated how this theorem can be used for reasoning about knowledge bases with incomplete and inconsistent information.
Łukasiewicz has often been criticized for his motive for inventing his three-valued logic, namely the avoidance of determinism. First, I want to show that almost all of the criticism along this line was wrong. Second, I will indicate that he nonetheless made mistakes in constructing his system, because he had other motives at the same time. Finally, I will propose some modifications of his system and its interpretation which can attain his original purpose in some sense.
This thesis focuses on expressively rich languages that can formalise talk about probability. These languages have sentences that say something about probabilities of probabilities, but also sentences that say something about the probability of themselves. For example: (π): “The probability of the sentence labelled π is not greater than 1/2.” Such sentences lead to philosophical and technical challenges, but they can also be useful. For example, they bear a close connection to situations where one's confidence in something can affect whether it is the case or not. The motivating interpretation of probability as an agent's degrees of belief will be focused on throughout the thesis.

This thesis aims to answer two questions relevant to such frameworks, which correspond to the two parts of the thesis: “How can one develop a formal semantics for this framework?” and “What rational constraints are there on an agent once such expressive frameworks are considered?”
The aim of this paper is to emphasize the fact that for all finitely-many-valued logics there is a completely systematic relation between sequent calculi and tableau systems. More importantly, we show that for both of these systems there are always two dual proof systems (not just two ways to interpret the calculi). This phenomenon may easily escape one's attention, since in the classical (two-valued) case the two systems coincide. (In two-valued logic the assignment of a truth value and the exclusion of the opposite truth value describe the same situation.)
In this paper we focus our attention on tableau methods for propositional interval temporal logics. These logics provide a natural framework for representing and reasoning about temporal properties in several areas of computer science. However, while various tableau methods have been developed for linear and branching time point-based temporal logics, not much work has been done on tableau methods for interval-based ones. We develop a general tableau method for Venema's CDT logic interpreted over partial orders (\nsbcdt\ for short). It combines features of the classical tableau method for first-order logic with those of explicit tableau methods for modal logics with constraint label management, and it can be easily tailored to most propositional interval temporal logics proposed in the literature. We prove its soundness and completeness, and we show how it has been implemented.
The proof theory of many-valued systems has not been investigated to an extent comparable to the work done on the axiomatizability of many-valued logics. Proof theory requires appropriate formalisms, such as sequent calculus, natural deduction, and tableaux, for classical (and intuitionistic) logic. One particular method for systematically obtaining calculi for all finite-valued logics was invented independently by several researchers, with slight variations in design and presentation. The main aim of this report is to develop the proof theory of finite-valued first-order logics in a general way, and to present some of the more important results in this area. Systems covered are the resolution calculus, sequent calculus, tableaux, and natural deduction. This report is actually a template, from which all results can be specialized to particular logics.
Contrary to Bell’s theorem, it is demonstrated that the quantum correlation can be approximated with the use of classical probability theory. Hence, one may not conclude from experiment that all local hidden variable theories are ruled out by a violation of the inequality.
We outline a simple development of special and general relativity based on the physical meaning of the spacetime interval. The Lorentz transformation is not used.
This paper shows how the classical finite probability theory (with equiprobable outcomes) can be reinterpreted and recast as the quantum probability calculus of a pedagogical or "toy" model of quantum mechanics over sets (QM/sets). There are two parts. The notion of an "event" is reinterpreted from being an epistemological state of indefiniteness to being an objective state of indefiniteness. And the mathematical framework of finite probability theory is recast as the quantum probability calculus for QM/sets. The point is not to clarify finite probability theory but to elucidate quantum mechanics itself by seeing some of its quantum features in a classical setting.
In this paper, we investigate the expressiveness of the variety of propositional interval neighborhood logics (PNL), we establish their decidability on linearly ordered domains and some important subclasses, and we prove the undecidability of a number of extensions of PNL with additional modalities over interval relations. Altogether, we show that PNL form a quite expressive and nearly maximal decidable fragment of Halpern–Shoham’s interval logic HS.
Probability plays a crucial role in understanding the relationship between mathematics and physics. It will be the point of departure of this brief reflection on that subject, as well as on the place of Poincaré’s thought in the scenario offered by some contemporary perspectives.
This document presents a Gentzen-style deductive calculus and proves that it is complete with respect to a 3-valued semantics for a language with quantifiers. The semantics resembles the strong Kleene semantics with respect to conjunction, disjunction and negation. The completeness proof for the sentential fragment fills in the details of a proof sketched in Arnon Avron (2003). The extension to quantifiers is original but uses standard techniques.
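For readers unfamiliar with the semantics, the strong Kleene treatment of negation, conjunction and disjunction mentioned above can be sketched in a few lines. The encoding of the three values as 0 (false), 0.5 (undefined) and 1 (true) is one standard convention, not notation taken from the document:

```python
# Strong Kleene connectives over the values 0 (false), 0.5 (undefined), 1 (true).
# Under this encoding, negation is 1 - x, conjunction is min, disjunction is max.
def neg(x):
    return 1 - x

def conj(x, y):
    return min(x, y)

def disj(x, y):
    return max(x, y)

# Characteristic strong Kleene behaviour: a false conjunct settles a
# conjunction, and a true disjunct settles a disjunction, even when the
# other argument is undefined; negation leaves the undefined value fixed.
assert conj(0.5, 0) == 0
assert disj(0.5, 1) == 1
assert neg(0.5) == 0.5
```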