Citations of:


Although expected utility theory has proven a fruitful and elegant theory in the finite realm, attempts to generalize it to infinite values have resulted in many paradoxes. In this paper, we argue that the use of John Conway's surreal numbers can provide a firm mathematical foundation for transfinite decision theory. To that end, we prove a surreal representation theorem and show that our surreal decision theory respects dominance reasoning even in the case of infinite values. We then bring our theory (...) 

In probability textbooks, it is widely claimed that zero probability does not mean impossibility. But what stands behind this claim? In this paper I offer an explanation of this claim based on Kolmogorov's formalism. As such, this explanation is relevant to all interpretations of Kolmogorov's probability theory. I start by clarifying that the claim refers only to nonempty events, since empty events are always considered impossible. Then, I offer the following three reasons for the claim that nonempty events with (...) 
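A minimal illustration of the textbook claim (our toy example, not taken from the paper): under the uniform measure on [0, 1], every nonempty singleton event gets probability zero, yet drawing that very point is a possible outcome.

```python
from fractions import Fraction

def uniform_prob(a, b):
    """Probability that a uniform draw on [0, 1] lands in the interval [a, b]."""
    a, b = max(Fraction(0), a), min(Fraction(1), b)
    return max(Fraction(0), b - a)

# An interval event has positive probability...
assert uniform_prob(Fraction(1, 4), Fraction(1, 2)) == Fraction(1, 4)

# ...but the nonempty singleton event {1/2} gets probability zero,
# even though drawing exactly 1/2 is a possible outcome.
assert uniform_prob(Fraction(1, 2), Fraction(1, 2)) == Fraction(0)
```

The empty event also gets probability zero, which is why the claim is restricted to nonempty events: only for the latter does "probability zero" come apart from "impossible".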

In this paper I argue that de Finetti provided compelling reasons for rejecting countable additivity. It is ironic, therefore, that the main argument advanced by Bayesians against following his recommendation is based on the consistency criterion, coherence, that he himself developed. I will show that this argument is mistaken. Nevertheless, there remain some counterintuitive consequences of rejecting countable additivity, and one in particular has all the appearances of a full-blown paradox. I will end by arguing that in fact it is no (...) 

Scientists and Bayesian statisticians often study hypotheses that they know to be false. This creates an interpretive problem because the Bayesian probability of a hypothesis is typically interpreted as a degree of belief that the hypothesis is true. In this paper, I present and contrast two solutions to the interpretive problem, both of which involve reinterpreting the Bayesian framework in such a way that pragmatic factors directly determine in part how probability assignments are interpreted and whether a given probability assignment (...) 

The existence of a transitive, complete, and weakly independent relation on the full set of gambles implies the existence of a non-Ramsey set. Therefore, each transitive and weakly independent relation on the set of gambles either is incomplete or does not have an explicit description. Whatever tools decision theory makes available, there will always be decision problems where these tools fail us. In this sense, decision theory remains incomplete. 

Non-Archimedean probability functions allow us to combine regularity with perfect additivity. We discuss the philosophical motivation for a particular choice of axioms for a non-Archimedean probability theory and answer some philosophical objections that have been raised against infinitesimal probabilities in general. 1 Introduction 2 The Limits of Classical Probability Theory 2.1 Classical probability functions 2.2 Limitations 2.3 Infinitesimals to the rescue? 3 NAP Theory 3.1 First four axioms of NAP 3.2 Continuity and conditional probability 3.3 The final axiom of NAP (...) 

A probability distribution is regular if no possible event is assigned probability zero. While some hold that probabilities should always be regular, three counterarguments have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson (2017) and Benci et al. (2016) have raised technical objections to these symmetry arguments, but we see here that their objections fail. Howson says that Williamson’s (2007) “isomorphic” events are not in fact isomorphic, but Howson is speaking (...) 

Let $\mathcal{X}$ be a set of outcomes, and let $\mathcal{I}$ be an infinite indexing set. This paper shows that any separable, permutation-invariant preference order on $\mathcal{X}^{\mathcal{I}}$ admits an additive representation. That is: there exists a linearly ordered abelian group $\mathcal{R}$ (...) 

The question of the analyticity of Hume's Principle is central to the neologicist project. We take on this question with respect to Frege's definition of analyticity, which entails that a sentence cannot be analytic if it can be consistently denied within the sphere of a special science. We show that HP can be denied within nonstandard analysis and argue that if HP is taken to depend on Frege's definition of number, it isn't analytic, and if HP is taken to be (...) 

Recent work has defended “Euclidean” theories of set size, in which Cantor’s Principle (two sets have equally many elements if and only if there is a one-to-one correspondence between them) is abandoned in favor of the Part-Whole Principle (if A is a proper subset of B then A is smaller than B). It has also been suggested that Gödel’s argument for the unique correctness of Cantor’s Principle is inadequate. Here we see from simple examples, not that Euclidean theories of set (...) 
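The tension between the two principles can be checked on a finite prefix of the standard example (our illustration, not the paper's): the evens are a proper subset of the naturals, yet n → 2n pairs them one-to-one.

```python
# Finite prefix of the classic example. On the infinite sets, Cantor's
# Principle (one-to-one correspondence => equal size) and the Part-Whole
# Principle (proper subset => strictly smaller) give opposite verdicts.
N = 1000
naturals = set(range(N))
evens = {n for n in naturals if n % 2 == 0}

# Proper subset: the Part-Whole Principle says the evens are smaller.
assert evens < naturals

# Bijection witness: n -> 2n is injective and hits every even below N,
# so Cantor's Principle says the (infinite) sets are equinumerous.
image = {2 * n for n in range(N // 2)}
assert image == evens
assert len(image) == N // 2   # no collisions: the map is one-to-one
```

On finite sets the two principles agree; it is only on infinite sets that a proper subset can admit a one-to-one correspondence with the whole, which is exactly the choice point the Euclidean theories press on.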

In standard probability theory, probability zero is not the same as impossibility. But many have suggested that only impossible events should have probability zero. This can be arranged if we allow infinitesimal probabilities, but infinitesimals do not solve all of the problems. We will see that regular probabilities are not invariant over rigid transformations, even for simple, bounded, countable, constructive, and disjoint sets. Hence, regular chances cannot be determined by spacetime invariant physical laws, and regular credences cannot satisfy seemingly reasonable (...) 

We show that infinitesimal probabilities are much too small for modeling the individual outcome of a countably infinite fair lottery. 

In ‘Fair Infinite Lotteries’ (FIL), Wenmackers and Horsten use nonstandard analysis to construct a family of nicely behaved hyperrational-valued probability measures on sets of natural numbers. Each probability measure in FIL is determined by a free ultrafilter on the natural numbers: distinct free ultrafilters determine distinct probability measures. The authors reply to a worry about a consequent ‘arbitrariness’ by remarking, “A different choice of free ultrafilter produces a different ... probability function with the same standard part but infinitesimal differences.” They illustrate (...) 

I introduce a mathematical account of expectation based on a qualitative criterion of coherence for qualitative comparisons between gambles (or random quantities). The qualitative comparisons may be interpreted as an agent’s comparative preference judgments over options or more directly as an agent’s comparative expectation judgments over random quantities. The criterion of coherence is reminiscent of de Finetti’s quantitative criterion of coherence for betting, yet it does not impose an Archimedean condition on an agent’s comparative judgments, it does not require the (...) 

We examine some of Connes’ criticisms of Robinson’s infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against nonstandard analysis, but the model tends to boomerang, undercutting Connes’ own earlier work in functional analysis. Connes described the hyperreals as both a “virtual theory” and a “chimera”, yet acknowledged that his argument relies on the transfer principle. We analyze Connes’ “dart-throwing” thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are (...) 

Leibniz seems to have been the first to suggest a logical interpretation of probability, but there have always seemed formidable mathematical and interpretational barriers to implementing the idea. De Finetti revived it only, it seemed, to reject it in favour of a purely decision-theoretic approach. In this paper I argue that not only is it possible to view (Bayesian) probability as a continuum-valued logic, but that it has a very close formal kinship with classical propositional logic. 

Philosophers cannot agree on whether the rule of Countable Additivity should be an axiom of probability. Edwin T. Jaynes attacks the problem in a way which is original to him and passed over in the current debate about the principle: he says the debate only arises because of an erroneous use of mathematical infinity. I argue that this solution fails, but I construct a different argument which, I argue, salvages the spirit of the more general point Jaynes makes. I argue (...) 

We relate Popper functions to regular and perfectly additive non-Archimedean probability functions by means of a representation theorem: every such non-Archimedean probability function is infinitesimally close to some Popper function, and vice versa. We also show that regular and perfectly additive non-Archimedean probability functions can be given a lexicographic representation. Thus Popper functions, a specific kind of non-Archimedean probability function, and lexicographic probability functions triangulate to the same place: they are in a good sense interchangeable. 
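The lexicographic idea can be sketched with a toy model (ours, not the paper's construction): assign each event a tuple of values, compared coordinate-wise, so a later coordinate only breaks ties among earlier ones, mimicking "infinitesimally more likely".

```python
# Toy lexicographic "probabilities": each event gets a tuple; Python
# compares tuples lexicographically, so the second coordinate matters
# only when the first coordinates tie.
P = {
    "heads eventually": (1, 0),   # standard probability 1
    "all tails":        (0, 1),   # standard part 0, positive correction
    "impossible":       (0, 0),   # genuinely impossible
}

# "all tails" strictly exceeds the impossible event, although both have
# standard part 0 -- the regularity that infinitesimals are meant to buy.
assert P["all tails"] > P["impossible"]
assert P["heads eventually"] > P["all tails"]
```

This mirrors the triangulation in the abstract: the tuple's first coordinate plays the role of the standard part, and the tail coordinates play the role of the infinitesimal (or Popper-style conditional) refinements.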

Pruss uses an example of Lester Dubins to argue against the claim that appealing to hyperreal-valued probabilities saves probabilistic regularity from the objection that in continuum outcome spaces and with standard probability functions all save countably many possibilities must be assigned probability 0. Dubins’s example seems to show that merely finitely additive standard probability functions allow reasoning to a foregone conclusion, and Pruss argues that hyperreal-valued probability functions are vulnerable to the same charge. However, Pruss’s argument relies on the rule of (...) 

A popular way to relate probabilistic information to binary rational beliefs is the Lockean Thesis, which is usually formalized in terms of thresholds. This approach seems far from satisfactory: the value of the thresholds is not well specified and the Lottery Paradox shows that the model violates the Conjunction Principle. We argue that the Lottery Paradox is a symptom of a more fundamental and general problem, shared by all threshold models that attempt to put an exact border on something that is intrinsically (...) 
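The Lottery Paradox mentioned here is easy to exhibit numerically (the threshold value 0.99 is our illustrative choice, not one the paper endorses):

```python
from fractions import Fraction

n = 1000                       # tickets in a fair lottery, exactly one winner
threshold = Fraction(99, 100)  # illustrative Lockean threshold

# For each ticket, "this ticket loses" clears the threshold...
p_loses = 1 - Fraction(1, n)
assert p_loses > threshold     # ...so the Lockean Thesis licenses belief

# ...and by the Conjunction Principle one should then believe
# "every ticket loses". But that conjunction is certainly false:
p_all_lose = Fraction(0)       # some ticket must win
assert p_all_lose < threshold

# Believing every conjunct while the conjunction is impossible is the
# Lottery Paradox: threshold models violate the Conjunction Principle.
```

The same construction works for any threshold strictly below 1: just make n large enough that 1 − 1/n exceeds it, which is why moving the threshold never dissolves the problem.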

In a well-known paper, Timothy Williamson claimed to prove with a coin-flipping example that infinitesimal-valued probabilities cannot save the principle of Regularity, because on pain of inconsistency the event ‘all tosses land heads’ must be assigned probability 0, whether the probability function is hyperreal-valued or not. A premise of Williamson’s argument is that two infinitary events in that example must be assigned the same probability because they are isomorphic. It was argued by Howson that the claim of isomorphism fails, but (...) 
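The core step of the argument can be reconstructed in a few lines (our notation, paraphrasing the standard presentation):

```latex
\text{Let } H_n \text{ be the event ``tosses } n, n+1, n+2, \dots \text{ all land heads''.} \\
\text{Independence and fairness of toss 1:}\quad P(H_1) = \tfrac{1}{2}\, P(H_2). \\
\text{Isomorphism premise:}\quad P(H_1) = P(H_2) = p. \\
\text{Hence } p = \tfrac{1}{2} p, \text{ so } \tfrac{1}{2} p = 0 \text{ and } p = 0.
\end{aligned}
```

Since this algebra is valid in any ordered field, hyperreal-valued probabilities offer no escape unless, as Howson and others contend, the isomorphism premise itself is rejected.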

An infinite lottery machine is used as a foil for testing the reach of inductive inference, since inferences concerning it require novel extensions of probability. Its use is defensible if there is some sense in which the lottery is physically possible, even if exotic physics is needed. I argue that exotic physics is needed and describe several proposals that fail and at least one that succeeds well enough. 

Scientists and Bayesian statisticians often study hypotheses that they know to be false. This creates an interpretive problem because the Bayesian probability of a hypothesis is supposed to represent the probability that the hypothesis is true. I investigate whether Bayesianism can accommodate the idea that false hypotheses are sometimes approximately true or that some hypotheses or models can be closer to the truth than others. I argue that the idea that some hypotheses are approximately true in an absolute sense is (...) 

Forty years ago, Bayesian philosophers were just catching a new wave of technical innovation, ushering in an era of scoring rules, imprecise credences, and infinitesimal probabilities. Meanwhile, down the hall, Gettier’s 1963 paper [28] was shaping a literature with little obvious interest in the formal programs of Reichenbach, Hempel, and Carnap, or their successors like Jeffrey, Levi, Skyrms, van Fraassen, and Lewis. And how Bayesians might accommodate the discourses of full belief and knowledge was but a glimmer in the eye (...) 