  • The psychology of human risk preferences and vulnerability to scare-mongers: experimental economic tools for hypothesis formulation and testing. Glenn W. Harrison & Don Ross - 2016 - Journal of Cognition and Culture 16 (5):383-414.
    The Internet and social media have opened niches for political exploitation of human dispositions to hyper-alarmed states that amplify perceived threats relative to their objective probabilities of occurrence. Researchers should aim to observe the dynamic “ramping up” of security threat mechanisms under controlled experimental conditions. Such research necessarily begins from a clear model of standard baseline states, and should involve adding treatments to established experimental protocols developed by experimental economists. We review these protocols, which allow for joint estimation of risk (...)
  • Measuring Belief and Risk Attitude. Sven Neth - 2019 - Electronic Proceedings in Theoretical Computer Science 297:354–364.
    Ramsey (1926) sketches a proposal for measuring the subjective probabilities of an agent by their observable preferences, assuming that the agent is an expected utility maximizer. I show how to extend the spirit of Ramsey's method to a strictly wider class of agents: risk-weighted expected utility maximizers (Buchak 2013). In particular, I show how we can measure the risk attitudes of an agent by their observable preferences, assuming that the agent is a risk-weighted expected utility maximizer. Further, we can leverage (...)
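The entry above concerns risk-weighted expected utility in Buchak's sense, where a risk function reweights the probability of doing at least so well. A minimal sketch of the computation, with an illustrative gamble, utilities, and risk function (all numbers are assumptions for the example, not drawn from the paper):

```python
# Sketch of risk-weighted expected utility (REU) in the style of Buchak (2013).
# The gamble, utility values, and risk function below are illustrative.

def reu(outcomes, risk):
    """outcomes: list of (probability, utility) pairs; risk: weighting function.
    Computes u_1 + sum_i risk(P(utility >= u_i)) * (u_i - u_{i-1}),
    with utilities sorted from worst to best."""
    ranked = sorted(outcomes, key=lambda pu: pu[1])
    total = ranked[0][1]
    for i in range(1, len(ranked)):
        p_at_least = sum(p for p, u in ranked[i:])
        total += risk(p_at_least) * (ranked[i][1] - ranked[i - 1][1])
    return total

gamble = [(0.5, 0.0), (0.5, 100.0)]      # fair coin: 0 or 100 utils
print(reu(gamble, lambda p: p))          # r(p) = p recovers expected utility: 50.0
print(reu(gamble, lambda p: p ** 2))     # r(p) = p^2 (risk-averse): 25.0
```

With the identity risk function the formula collapses to ordinary expected utility, which is why measuring preferences over such gambles can separate belief from risk attitude.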
  • Policymaking under scientific uncertainty. Joe Roussos - 2020 - Dissertation, London School of Economics
    Policymakers who seek to make scientifically informed decisions are constantly confronted by scientific uncertainty and expert disagreement. This thesis asks: how can policymakers rationally respond to expert disagreement and scientific uncertainty? This is a work of non-ideal theory, which applies formal philosophical tools developed by ideal theorists to more realistic cases of policymaking under scientific uncertainty. I start with Bayesian approaches to expert testimony and the problem of expert disagreement, arguing that two popular approaches— supra-Bayesianism and the standard model of (...)
  • The material theory of induction. John D. Norton - 2021 - Calgary, Alberta, Canada: University of Calgary Press.
    The inaugural title in the new, Open Access series BSPS Open, The Material Theory of Induction will initiate a new tradition in the analysis of inductive inference. The fundamental burden of a theory of inductive inference is to determine which are the good inductive inferences or relations of inductive support and why it is that they are so. The traditional approach is modeled on that taken in accounts of deductive inference. It seeks universally applicable schemas or rules or a single (...)
  • Accuracy Across Doxastic Attitudes: Recent Work on the Accuracy of Belief. Robert Weston Siscoe - 2022 - American Philosophical Quarterly 59 (2):201-217.
    James Joyce's article “A Nonpragmatic Vindication of Probabilism” introduced an approach to arguing for credal norms by appealing to the epistemic value of accuracy. The central thought was that credences ought to accurately represent the world, a guiding thought that has gone on to generate an entire research paradigm on the rationality of credences. Recently, a number of epistemologists have begun to apply this same thought to full beliefs, attempting to explain and argue for norms of belief in terms of (...)
  • Generalized Immodesty Principles in Epistemic Utility Theory. Alejandro Pérez Carballo - 2023 - Ergo: An Open Access Journal of Philosophy 10 (31):874–907.
    Epistemic rationality is typically taken to be immodest at least in this sense: a rational epistemic state should always take itself to be doing at least as well, epistemically and by its own lights, as any alternative epistemic state. If epistemic states are probability functions and their alternatives are other probability functions defined over the same collection of propositions, we can capture the relevant sense of immodesty by claiming that epistemic utility functions are (strictly) proper. In this paper I examine (...)
  • Mechanisms for information elicitation. Aviv Zohar & Jeffrey S. Rosenschein - 2008 - Artificial Intelligence 172 (16-17):1917-1939.
  • The uniqueness of local proper scoring rules: the logarithmic family. Jingni Yang - 2020 - Theory and Decision 88 (2):315-322.
    Local proper scoring rules provide convenient tools for measuring subjective probabilities. Savage (1971, pp. 783–801) has shown that the only local proper scoring rule for more than two exclusive events is the logarithmic family. We generalize Savage by relaxing the properness and the domain, and provide a simpler proof.
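The logarithmic rule discussed above is "local" in that the reward for a forecast over exclusive events depends only on the probability assigned to the event that actually occurs, and it is proper: honest reporting maximizes expected score. A small numerical check (an illustration, not a proof; the distributions are made up):

```python
import math

# Illustrative check that the logarithmic score is proper and local.
# Locality: the score inspects only the coordinate of the realized event.

def log_score(p, realized):
    return math.log(p[realized])

def expected_score(truth, report):
    return sum(t * log_score(report, i) for i, t in enumerate(truth))

truth = [0.2, 0.5, 0.3]
honest = expected_score(truth, truth)
# Any distorted report does strictly worse in expectation (propriety):
for report in ([0.3, 0.4, 0.3], [0.1, 0.6, 0.3], [1/3, 1/3, 1/3]):
    assert expected_score(truth, report) < honest
print("honest report maximizes expected log score")
```

The gap between the honest and distorted expectations is just the Kullback-Leibler divergence, which is why the check succeeds for every distorted report.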
  • Consequences of Calibration. Robert Williams & Richard Pettigrew - forthcoming - British Journal for the Philosophy of Science:14.
    Drawing on a passage from Ramsey's Truth and Probability, we formulate a simple, plausible constraint on evaluating the accuracy of credences: the Calibration Test. We show that any additive, continuous accuracy measure that passes the Calibration Test will be strictly proper. Strictly proper accuracy measures are known to support the touchstone results of accuracy-first epistemology, for example vindications of probabilism and conditionalization. We show that our use of Calibration is an improvement on previous such appeals by showing how it answers (...)
  • Coping with Ethical Uncertainty. John R. Welch - 2017 - Diametros 53:150-166.
    Most ethical decisions are conditioned by formidable uncertainty. Decision makers may lack reliable information about relevant facts, the consequences of actions, and the reactions of other people. Resources for dealing with uncertainty are available from standard forms of decision theory, but successful application to decisions under risk requires a great deal of quantitative information: point-valued probabilities of states and point-valued utilities of outcomes. When this information is not available, this paper recommends the use of a form of decision theory that (...)
  • Credence for conclusions: a brief for Jeffrey’s rule. John R. Welch - 2020 - Synthese 197 (5):2051-2072.
    Some arguments are good; others are not. How can we tell the difference? This article advances three proposals as a partial answer to this question. The proposals are keyed to arguments conditioned by different degrees of uncertainty: mild, where the argument’s premises are hedged with point-valued probabilities; moderate, where the premises are hedged with interval probabilities; and severe, where the premises are hedged with non-numeric plausibilities such as ‘very likely’ or ‘unconfirmed’. For mild uncertainty, the article proposes to apply a (...)
  • You've Come a Long Way, Bayesians. Jonathan Weisberg - 2015 - Journal of Philosophical Logic 44 (6):817-834.
    Forty years ago, Bayesian philosophers were just catching a new wave of technical innovation, ushering in an era of scoring rules, imprecise credences, and infinitesimal probabilities. Meanwhile, down the hall, Gettier’s 1963 paper [28] was shaping a literature with little obvious interest in the formal programs of Reichenbach, Hempel, and Carnap, or their successors like Jeffrey, Levi, Skyrms, van Fraassen, and Lewis. And how Bayesians might accommodate the discourses of full belief and knowledge was but a glimmer in the eye (...)
  • Where do Bayesian priors come from? Patrick Suppes - 2007 - Synthese 156 (3):441-471.
    Bayesian prior probabilities have an important place in probabilistic and statistical methods. In spite of this fact, the analysis of where these priors come from and how they are formed has received little attention. It is reasonable to excuse the absence, in the foundational literature, of a detailed psychological theory of the mechanisms by which prior probabilities are formed. But it is less excusable that there is an almost total absence of a detailed discussion of the highly differentiating nature (...)
  • Measuring the overall incoherence of credence functions. Julia Staffel - 2015 - Synthese 192 (5):1467-1493.
    Many philosophers hold that the probability axioms constitute norms of rationality governing degrees of belief. This view, known as subjective Bayesianism, has been widely criticized for being too idealized. It is claimed that the norms on degrees of belief postulated by subjective Bayesianism cannot be followed by human agents, and hence have no normative force for beings like us. This problem is especially pressing since the standard framework of subjective Bayesianism only allows us to distinguish between two kinds of credence (...)
  • Calibration, coherence, and scoring rules. Teddy Seidenfeld - 1985 - Philosophy of Science 52 (2):274-294.
    Can there be good reasons for judging one set of probabilistic assertions more reliable than a second? There are many candidates for measuring "goodness" of probabilistic forecasts. Here, I focus on one such aspirant: calibration. Calibration requires an alignment of announced probabilities and observed relative frequency, e.g., 50 percent of forecasts made with the announced probability of .5 occur, 70 percent of forecasts made with probability .7 occur, etc. To summarize the conclusions: (i) Surveys designed to display calibration curves, from which a (...)
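The calibration criterion described in the abstract above is easy to operationalize: group forecasts by their announced probability and compare each announced value with the observed relative frequency of the forecast events. A minimal sketch with made-up forecast data:

```python
from collections import defaultdict

# Minimal calibration check: bucket forecasts by announced probability and
# compare with observed relative frequency. The data are illustrative.

forecasts = [(0.5, 1), (0.5, 0), (0.5, 1), (0.5, 0),            # (announced p, occurred?)
             (0.7, 1), (0.7, 1), (0.7, 0), (0.7, 1), (0.7, 1)]

buckets = defaultdict(list)
for p, occurred in forecasts:
    buckets[p].append(occurred)

for p in sorted(buckets):
    freq = sum(buckets[p]) / len(buckets[p])
    print(f"announced {p:.1f} -> observed frequency {freq:.2f}")
```

Here the .5 forecasts are perfectly calibrated (frequency 0.50) while the .7 forecasts run high (frequency 0.80); a full calibration curve just plots announced probability against observed frequency across all buckets.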
  • A conflict between finite additivity and avoiding Dutch book. Teddy Seidenfeld & Mark J. Schervish - 1983 - Philosophy of Science 50 (3):398-412.
    For Savage (1954) as for de Finetti (1974), the existence of subjective (personal) probability is a consequence of the normative theory of preference. (De Finetti achieves the reduction of belief to desire with his generalized Dutch-Book argument for Previsions.) Both Savage and de Finetti rebel against legislating countable additivity for subjective probability. They require merely that probability be finitely additive. Simultaneously, they insist that their theories of preference are weak, accommodating all but self-defeating desires. In this paper we dispute these (...)
  • The Effect of Exchange Rates on Statistical Decisions. Mark J. Schervish, Teddy Seidenfeld & Joseph B. Kadane - 2013 - Philosophy of Science 80 (4):504-532.
    Statistical decision theory, whether based on Bayesian principles or other concepts such as minimax or admissibility, relies on minimizing expected loss or maximizing expected utility. Loss and utility functions are generally treated as unit-less numerical measures of value for consequences. Here, we address the issue of the units in which loss and utility are settled and the implications that those units have on the rankings of potential decisions. When multiple currencies are available for paying the loss, one must take explicit (...)
  • What is justified credence? Richard Pettigrew - 2021 - Episteme 18 (1):16-30.
    In this paper, we seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn (2015) proposes a version of process reliabilism, while Weng Hong Tang (2016) offers a version of indicator reliabilism. As we will see, both face the same objection. If they are right about what justification is, it (...)
  • A New Epistemic Utility Argument for the Principal Principle. Richard G. Pettigrew - 2013 - Episteme 10 (1):19-35.
    Jim Joyce has presented an argument for Probabilism based on considerations of epistemic utility [Joyce, 1998]. In a recent paper, I adapted this argument to give an argument for Probabilism and the Principal Principle based on similar considerations [Pettigrew, 2012]. Joyce’s argument assumes that a credence in a true proposition is better the closer it is to maximal credence, whilst a credence in a false proposition is better the closer it is to minimal credence. By contrast, my argument in that (...)
  • Accuracy-First Epistemology Without Additivity. Richard Pettigrew - 2022 - Philosophy of Science 89 (1):128-151.
    Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, Additivity, and Strict Propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming Additivity.
  • Updating as Communication. Sarah Moss - 2012 - Philosophy and Phenomenological Research 85 (2):225-248.
    Traditional procedures for rational updating fail when it comes to self-locating opinions, such as your credences about where you are and what time it is. This paper develops an updating procedure for rational agents with self-locating beliefs. In short, I argue that rational updating can be factored into two steps. The first step uses information you recall from your previous self to form a hypothetical credence distribution, and the second step changes this hypothetical distribution to reflect information you have genuinely (...)
  • Mechanism design for the truthful elicitation of costly probabilistic estimates in distributed information systems. Athanasios Papakonstantinou, Alex Rogers, Enrico H. Gerding & Nicholas R. Jennings - 2011 - Artificial Intelligence 175 (2):648-672.
  • What Accuracy Could Not Be. Graham Oddie - 2019 - British Journal for the Philosophy of Science 70 (2):551-580.
    Two different programmes are in the business of explicating accuracy—the truthlikeness programme and the epistemic utility programme. Both assume that truth is the goal of inquiry, and that among inquiries that fall short of realizing the goal some get closer to it than others. Truthlikeness theorists have been searching for an account of the accuracy of propositions. Epistemic utility theorists have been searching for an account of the accuracy of credal states. Both assume we can make cognitive progress in an (...)
  • Scalable transfer learning in heterogeneous, dynamic environments. Trung Thanh Nguyen, Tomi Silander, Zhuoru Li & Tze-Yun Leong - 2017 - Artificial Intelligence 247 (C):70-94.
  • Risk-neutral equilibria of noncooperative games. Robert Nau - 2015 - Theory and Decision 78 (2):171-188.
    Game-theoretic solution concepts such as Nash and Bayesian equilibrium start from an assumption that the players’ sets of possible payoffs, measured in units of von Neumann–Morgenstern utility, are common knowledge, and they go on to define rational behavior in terms of equilibrium strategy profiles that are either pure or independently randomized and which, in applications, are often taken to be uniquely determined or at least tightly constrained. A mechanism through which to obtain a common knowledge of payoff functions measured in (...)
  • Epistemic values and the value of learning. Wayne C. Myrvold - 2012 - Synthese 187 (2):547-568.
    In addition to purely practical values, cognitive values also figure into scientific deliberations. One way of introducing cognitive values is to consider the cognitive value that accrues to the act of accepting a hypothesis. Although such values may have a role to play, such a role does not exhaust the significance of cognitive values in scientific decision-making. This paper makes a plea for consideration of epistemic value —that is, value attaching to a state of belief—and defends the notion of cognitive (...)
  • Scoring Rules and Epistemic Compromise. Sarah Moss - 2011 - Mind 120 (480):1053-1069.
    It is commonly assumed that when we assign different credences to a proposition, a perfect compromise between our opinions simply ‘splits the difference’ between our credences. I introduce and defend an alternative account, namely that a perfect compromise maximizes the average of the expected epistemic values that we each assign to alternative credences in the disputed proposition. I compare the compromise strategy I introduce with the traditional strategy of compromising by splitting the difference, and I argue that my strategy is (...)
  • Personal probabilities of probabilities. Jacob Marschak, Morris H. Degroot, J. Marschak, Karl Borch, Herman Chernoff, Morris De Groot, Robert Dorfman, Ward Edwards, T. S. Ferguson, Koichi Miyasawa, Paul Randolph, Leonard J. Savage, Robert Schlaifer & Robert L. Winkler - 1975 - Theory and Decision 6 (2):121-153.
  • Why scientists gather evidence. Patrick Maher - 1990 - British Journal for the Philosophy of Science 41 (1):103-119.
  • Symmetry and partial belief geometry. Stefan Lukits - 2021 - European Journal for Philosophy of Science 11 (3):1-24.
    When beliefs are quantified as credences, they are related to each other in terms of closeness and accuracy. The “accuracy first” approach in formal epistemology wants to establish a normative account for credences based entirely on the alethic properties of the credence: how close it is to the truth. To pull off this project, there is a need for a scoring rule. There is widespread agreement about some constraints on this scoring rule, but not whether a unique scoring rule stands (...)
  • Beliefs about overconfidence. Sandra Ludwig & Julia Nafziger - 2011 - Theory and Decision 70 (4):475-500.
    This experiment elicits beliefs about other people’s overconfidence and abilities. We find that most people believe that others are unbiased, and only a few think that others are overconfident. There is remarkable heterogeneity between these groups. Those who think others are underconfident or unbiased are overconfident themselves. Those who think others are overconfident are underconfident themselves. Despite this heterogeneity, people on average overestimate the abilities of others, just as they overestimate their own ability. One driving force behind this result is (...)
  • Leitgeb and Pettigrew on Accuracy and Updating. Benjamin Anders Levinstein - 2012 - Philosophy of Science 79 (3):413-424.
    Leitgeb and Pettigrew argue that (1) agents should minimize the expected inaccuracy of their beliefs and (2) inaccuracy should be measured via the Brier score. They show that in certain diachronic cases, these claims require an alternative to Jeffrey Conditionalization. I claim that this alternative is an irrational updating procedure and that the Brier score, and quadratic scoring rules generally, should be rejected as legitimate measures of inaccuracy.
  • Imprecision and indeterminacy in probability judgment. Isaac Levi - 1985 - Philosophy of Science 52 (3):390-409.
    Bayesians often confuse insistence that probability judgment ought to be indeterminate (which is incompatible with Bayesian ideals) with recognition of the presence of imprecision in the determination or measurement of personal probabilities (which is compatible with these ideals). The confusion is discussed and illustrated by remarks in a recent essay by R. C. Jeffrey.
  • Justifying Objective Bayesianism on Predicate Languages. Jürgen Landes & Jon Williamson - 2015 - Entropy 17 (4):2459-2543.
    Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism (...)
  • We ought to agree: A consequence of repairing Goldman's group scoring rule. Matthew Kopec - 2012 - Episteme 9 (2):101-114.
    In Knowledge in a Social World, Alvin Goldman presents a framework to quantify the epistemic effects that various policies, procedures, and behaviors can have on a group of agents. In this essay, I show that the framework requires some modifications when applied to agents with credences. The required modifications carry with them an interesting consequence, namely, that any group whose members disagree can become more accurate by forming a consensus through averaging their credences. I sketch a way that this result (...)
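The averaging result reported in the entry above rests on a convexity fact about quadratic accuracy: the Brier inaccuracy of the averaged credence function is never worse than the members' average inaccuracy, and is strictly better whenever they disagree. A numerical illustration (the credences and truth values are made up):

```python
# Convexity illustration: under the Brier measure, the consensus (averaged)
# credence function is strictly more accurate than the group members are on
# average, whenever the members disagree. All numbers are illustrative.

def brier(credences, truths):
    return sum((c - t) ** 2 for c, t in zip(credences, truths)) / len(truths)

alice = [0.9, 0.2, 0.6]
bob   = [0.5, 0.8, 0.2]
consensus = [(a + b) / 2 for a, b in zip(alice, bob)]
truths = [1, 0, 1]

avg_individual = (brier(alice, truths) + brier(bob, truths)) / 2
assert brier(consensus, truths) < avg_individual
print(brier(consensus, truths), "<", avg_individual)
```

The strict inequality is just Jensen's inequality applied to the squared-error term, so it holds for any truth-value assignment once the two credence functions differ somewhere.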
  • Degrees of incoherence, Dutch bookability & guidance value. Jason Konek - 2022 - Philosophical Studies 180 (2):395-428.
    Why is it good to be less, rather than more incoherent? Julia Staffel, in her excellent book “Unsettled Thoughts,” answers this question by showing that if your credences are incoherent, then there is some way of nudging them toward coherence that is guaranteed to make them more accurate and reduce the extent to which they are Dutch-bookable. This seems to show that such a nudge toward coherence makes them better fit to play their key epistemic and practical roles: representing the (...)
  • Minimizing Inaccuracy for Self-Locating Beliefs. Brian Kierland & Bradley Monton - 2005 - Philosophy and Phenomenological Research 70 (2):384-395.
    One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of (...)
  • The impossibility of experimental elicitation of subjective probabilities. Edi Karni & Zvi Safra - 1995 - Theory and Decision 38 (3):313-320.
  • Incomplete risk attitudes and random choice behavior: an elicitation mechanism. Edi Karni - 2021 - Theory and Decision 92 (3-4):677-687.
    In the presence of incomplete risk attitudes, choices between noncomparable risky prospects are random. A random choice model advanced by Karni (2021) includes the hypothesis that choices among noncomparable risky prospects are prompted by signals drawn from personal distributions. This paper introduces a scheme designed to elicit subjects’ assessments of their personal likelihoods of choices among noncomparable risky prospects and describes experiments designed to test the aforementioned hypothesis.
  • A nonpragmatic vindication of probabilism. James M. Joyce - 1998 - Philosophy of Science 65 (4):575-603.
    The pragmatic character of the Dutch book argument makes it unsuitable as an "epistemic" justification for the fundamental probabilist dogma that rational partial beliefs must conform to the axioms of probability. To secure an appropriately epistemic justification for this conclusion, one must explain what it means for a system of partial beliefs to accurately represent the state of the world, and then show that partial beliefs that violate the laws of probability are invariably less accurate than they could be otherwise. (...)
  • A Characterization for the Spherical Scoring Rule. Victor Richmond Jose - 2009 - Theory and Decision 66 (3):263-281.
    Strictly proper scoring rules have been studied widely in statistical decision theory and recently in experimental economics because of their ability to encourage assessors to honestly provide their true subjective probabilities. In this article, we study the spherical scoring rule by analytically examining some of its properties and providing some new geometric interpretations for this rule. Moreover, we state a theorem which provides an axiomatic characterization for the spherical scoring rule. The objective of this analysis is to provide a better (...)
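The spherical rule studied in the entry above rewards a forecast p = (p_1, ..., p_n) with p_i / ||p|| when event i occurs. A quick numerical check of its strict propriety (an illustration under made-up distributions, not a proof):

```python
import math

# Spherical scoring rule: score p_i / ||p||_2 when event i is realized.
# Numerical check that honest reporting maximizes expected score.

def spherical_score(p, realized):
    norm = math.sqrt(sum(x * x for x in p))
    return p[realized] / norm

def expected_score(truth, report):
    return sum(t * spherical_score(report, i) for i, t in enumerate(truth))

truth = [0.2, 0.5, 0.3]
honest = expected_score(truth, truth)
for report in ([0.4, 0.4, 0.2], [0.2, 0.6, 0.2], [1/3, 1/3, 1/3]):
    assert expected_score(truth, report) < honest
print("honest report maximizes expected spherical score")
```

Geometrically, the expected score of report p under truth q is the inner product q·p / ||p||, which the Cauchy-Schwarz inequality bounds by ||q||, with equality only when p is proportional to q; this is the kind of geometric interpretation the paper develops.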
  • The Value of a Probability Forecast from Portfolio Theory. D. J. Johnstone - 2007 - Theory and Decision 63 (2):153-203.
    A probability forecast scored ex post using a probability scoring rule (e.g. Brier) is analogous to a risky financial security. With only superficial adaptation, the same economic logic by which securities are valued ex ante – in particular, portfolio theory and the capital asset pricing model (CAPM) – applies to the valuation of probability forecasts. Each available forecast of a given event is valued relative to each other and to the “market” (all available forecasts). A forecast is seen to be (...)
  • Economic Darwinism: Who has the Best Probabilities? [REVIEW] David Johnstone - 2007 - Theory and Decision 62 (1):47-96.
    Simulation evidence obtained within a Bayesian model of price-setting in a betting market, where anonymous gamblers queue to bet against a risk-neutral bookmaker, suggests that a gambler who wants to maximize future profits should trade on the advice of the analyst cum probability forecaster who records the best probability score, rather than the highest trading profits, during the preceding observation period. In general, probability scoring rules, specifically the log score and better known “Brier” (quadratic) score, are found to have higher (...)
  • Theories of probability. Colin Howson - 1995 - British Journal for the Philosophy of Science 46 (1):1-32.
    My title is intended to recall Terence Fine's excellent survey, Theories of Probability [1973]. I shall consider some developments that have occurred in the intervening years, and try to place some of the theories he discussed in what is now a slightly longer perspective. Completeness is not something one can reasonably hope to achieve in a journal article, and any selection is bound to reflect a view of what is salient. In a subject as prone to dispute as this, there (...)
  • Justifying conditionalization: Conditionalization maximizes expected epistemic utility. Hilary Greaves & David Wallace - 2006 - Mind 115 (459):607-632.
    According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, pnew, is to be set equal to her prior conditional probability pold(·|X). Bayesians can be challenged to provide a justification for their claim that conditionalization is recommended by rationality—whence the normative force of the injunction to conditionalize? There are several existing justifications for conditionalization, but none directly addresses the idea that conditionalization will be epistemically rational (...)
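The expected-epistemic-utility justification described above can be seen in miniature with the Brier score: among update policies that respond to which cell of an evidence partition obtains, conditionalization minimizes prior expected inaccuracy. A toy sketch; the prior, the partition, and the rival "stay put" rule are all illustrative assumptions, while the theorem itself covers any strictly proper measure:

```python
# Toy expected-accuracy comparison for update policies, using Brier inaccuracy.
# An update policy maps the observed evidence cell to a posterior over worlds.

prior = [0.4, 0.3, 0.2, 0.1]             # four worlds (illustrative)
partition = [[0, 1], [2, 3]]             # evidence: which half we are in

def brier_inaccuracy(credence, world):
    return sum((credence[w] - (1 if w == world else 0)) ** 2
               for w in range(len(credence)))

def conditionalize(cell):
    total = sum(prior[w] for w in cell)
    return [prior[w] / total if w in cell else 0.0 for w in range(len(prior))]

def expected_inaccuracy(policy):
    return sum(prior[w] * brier_inaccuracy(policy(cell), w)
               for cell in partition for w in cell)

stay_put = lambda cell: prior            # rival rule: ignore the evidence
assert expected_inaccuracy(conditionalize) < expected_inaccuracy(stay_put)
print("conditionalization has lower prior expected inaccuracy")
```

Swapping in any other evidence-responsive rival rule leaves conditionalization ahead as well, which is the numerical shadow of the Greaves-Wallace result.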
  • Aggregating opinions through logarithmic pooling. C. Genest, S. Weerahandi & J. V. Zidek - 1984 - Theory and Decision 17 (1):61-70.
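Logarithmic pooling, the aggregation method named in the entry above, takes a normalized weighted geometric mean of the individual probability assignments. A minimal sketch with illustrative opinions and weights:

```python
import math

# Logarithmic pooling: pooled(i) is proportional to the product of the
# individual probabilities for i raised to their weights. Illustrative data.

def log_pool(opinions, weights):
    """opinions: list of probability vectors; weights: nonnegative, summing to 1."""
    n = len(opinions[0])
    unnormalized = [math.prod(p[i] ** w for p, w in zip(opinions, weights))
                    for i in range(n)]
    z = sum(unnormalized)
    return [x / z for x in unnormalized]

pooled = log_pool([[0.8, 0.2], [0.4, 0.6]], [0.5, 0.5])
print([round(x, 3) for x in pooled])
```

Unlike linear averaging, the log pool is "externally Bayesian": updating each opinion on shared evidence and then pooling gives the same result as pooling first and then updating.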
  • A theory of subjective expected utility with vague preferences. Peter C. Fishburn - 1975 - Theory and Decision 6 (3):287-310.
  • Reliability for degrees of belief. Jeff Dunn - 2015 - Philosophical Studies 172 (7):1929-1952.
    We often evaluate belief-forming processes, agents, or entire belief states for reliability. This is normally done with the assumption that beliefs are all-or-nothing. How does such evaluation go when we’re considering beliefs that come in degrees? I consider a natural answer to this question that focuses on the degree of truth-possession had by a set of beliefs. I argue that this natural proposal is inadequate, but for an interesting reason. When we are dealing with all-or-nothing belief, high reliability leads to (...)
  • Eliciting beliefs. Robert Chambers & Tigran Melkonyan - 2008 - Theory and Decision 65 (4):271-284.
    We develop an algorithm that can be used to approximate a decisionmaker’s beliefs for a class of preference structures that includes, among others, α-maximin expected utility preferences, Choquet expected utility preferences, and, more generally, constant additive preferences. For both exact and statistical approximation, we demonstrate convergence in an appropriate sense to the true belief structure.
  • Downwards Propriety in Epistemic Utility Theory. Alejandro Pérez Carballo - 2023 - Mind 132 (525):30-62.
    Epistemic Utility Theory is often identified with the project of *axiology-first epistemology*: the project of vindicating norms of epistemic rationality purely in terms of epistemic value. One of the central goals of axiology-first epistemology is to provide a justification of the central norm of Bayesian epistemology, Probabilism. The first part of this paper presents a new challenge to axiology-first epistemology: I argue that in order to justify Probabilism in purely axiological terms, proponents of axiology-first epistemology need to justify a (...)