According to epistemic utility theory, epistemic rationality is teleological: epistemic norms are instrumental norms that have the aim of acquiring accuracy. What's definitive of these norms is that they can be expected to lead to the acquisition of accuracy when followed. While there's much to be said in favor of this approach, it turns out that it faces a couple of worrisome extensional problems involving the future. The first problem involves credences about the future, and the second problem involves future credences. Examining prominent solutions to a different extensional problem for this approach reinforces the severity of the two problems involving the future. Reflecting on these problems reveals the source: the teleological assumption that epistemic rationality aims at acquiring accuracy.
How does logic relate to rational belief? Is logic normative for belief, as some say? What, if anything, do facts about logical consequence tell us about norms of doxastic rationality? In this paper, we consider a range of putative logic-rationality bridge principles. These purport to relate facts about logical consequence to norms that govern the rationality of our beliefs and credences. To investigate these principles, we deploy a novel approach, namely, epistemic utility theory. That is, we assume that doxastic attitudes have different epistemic value depending on how accurately they represent the world. We then use the principles of decision theory to determine which of the putative logic-rationality bridge principles we can derive from considerations of epistemic utility.
Epistemic Utility Theory is often identified with the project of *axiology-first epistemology*—the project of vindicating norms of epistemic rationality purely in terms of epistemic value. One of the central goals of axiology-first epistemology is to provide a justification of the central norm of Bayesian epistemology, Probabilism. The first part of this paper presents a new challenge to axiology-first epistemology: I argue that in order to justify Probabilism in purely axiological terms, proponents of axiology-first epistemology need to justify a claim about epistemic value—what I label ‘Downwards Propriety’—much stronger than any for which they have offered justification. The second part of this paper offers an argument that this challenge cannot be met: there is no hope of providing a purely axiological justification of Downwards Propriety, at least given widely accepted assumptions about epistemic value.
Epistemic rationality is typically taken to be immodest at least in this sense: a rational epistemic state should always take itself to be doing at least as well, epistemically and by its own lights, as any alternative epistemic state. If epistemic states are probability functions and their alternatives are other probability functions defined over the same collection of propositions, we can capture the relevant sense of immodesty by claiming that epistemic utility functions are (strictly) proper. In this paper I examine what happens if we allow the alternatives to an epistemic state to include probability functions with different domains. I first prove an impossibility result: on minimal assumptions, I show that there is no way of vindicating strong immodesty principles to the effect that any probability function should take itself to be doing at least as well as any alternative probability function, regardless of its domain. I then consider alternative, weaker generalizations of the traditional immodesty principle and prove some characterization results for classes of epistemic utility functions satisfying each of the relevant principles.
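For reference, the fixed-domain notion of strict propriety at issue in this kind of immodesty talk can be stated as follows; this is a generic textbook formulation in illustrative notation, not the paper's own.

```latex
% Generic statement of strict propriety (illustrative notation).
% W is a finite set of worlds, p and c are probability functions on W,
% and U(c, w) is the epistemic utility of credence function c at world w.
\[
  \operatorname{Exp}_{p} U(c) \;=\; \sum_{w \in W} p(w)\, U(c, w)
\]
\[
  U \text{ is strictly proper} \quad\Longleftrightarrow\quad
  \operatorname{Exp}_{p} U(p) \;>\; \operatorname{Exp}_{p} U(c)
  \ \text{ for all } p \text{ and all } c \neq p.
\]
```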
We use a theorem from M. J. Schervish to explore the relationship between accuracy and practical success. If an agent is pragmatically rational, she will quantify the expected loss of her credence with a strictly proper scoring rule. Which scoring rule is right for her will depend on the sorts of decisions she expects to face. We relate this pragmatic conception of inaccuracy to the purely epistemic one popular among epistemic utility theorists.
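To illustrate what a strictly proper scoring rule is, here is a minimal numerical sketch using the Brier score; the function names and the grid search are my own illustration, not anything drawn from the paper or from Schervish's theorem.

```python
import numpy as np

def brier_loss(credence: float, truth: int) -> float:
    """Brier (quadratic) inaccuracy of a credence in a 0/1 proposition."""
    return (credence - truth) ** 2

def expected_loss(credence: float, chance: float) -> float:
    """Expected Brier loss when the proposition is true with probability `chance`."""
    return chance * brier_loss(credence, 1) + (1 - chance) * brier_loss(credence, 0)

# Strict propriety, checked numerically: for any chance p, the credence
# that minimizes expected Brier loss is p itself.
grid = np.linspace(0, 1, 1001)
for p in (0.1, 0.37, 0.8):
    best = grid[np.argmin([expected_loss(c, p) for c in grid])]
    print(f"chance={p:.2f}  loss-minimizing credence={best:.2f}")
```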
Some propositions are more epistemically important than others. Further, how important a proposition is is often a contingent matter—some propositions count more in some worlds than in others. Epistemic Utility Theory cannot accommodate this fact, at least not in any standard way. For EUT to be successful, legitimate measures of epistemic utility must be proper, i.e., every probability function must assign itself maximum expected utility. Once we vary the importance of propositions across worlds, however, normal measures of epistemic utility become improper. I argue there isn't any good way out for EUT.
Epistemic utility theory is generally coupled with veritism. Veritism is the view that truth is the sole fundamental epistemic value. Veritism, when paired with EUT, entails a methodological commitment: norms of epistemic rationality are justified only if they can be derived from considerations of accuracy alone. According to EUT, then, believing truly has epistemic value, while believing falsely has epistemic disvalue. This raises the question as to how the rational believer should balance the prospect of true belief against the risk of error. A strong intuitive case can be made for a kind of epistemic conservatism – that we should disvalue error more than we value true belief. I argue that none of the ways in which advocates of veritist EUT have sought to motivate conservatism can be squared with their methodological commitments. Short of any such justification, they must therefore either abandon their most central methodological principle or else adopt a permissive line with respect to epistemic risk.
William James famously tells us that there are two main goals for rational believers: believing truth and avoiding error. I argue that epistemic consequentialism—in particular its embodiment in epistemic utility theory—seems to be well positioned to explain how epistemic agents might permissibly weight these goals differently and adopt different credences as a result. After all, practical versions of consequentialism render it permissible for agents with different goals to act differently in the same situation. Nevertheless, I argue that epistemic consequentialism doesn't allow for this kind of permissivism, and I go on to argue that this reveals a deep disanalogy between decision theory and the formally similar epistemic utility theory. This raises the question whether epistemic utility theory is a genuinely consequentialist theory at all.
The short abstract: Epistemic utility theory + permissivism about attitudes to epistemic risk => permissivism about rational credences. The longer abstract: I argue that epistemic rationality is permissive. More specifically, I argue for two claims. First, a radical version of interpersonal permissivism about rational credence: for many bodies of evidence, there is a wide range of credal states for which there is some individual who might rationally adopt that state in response to that evidence. Second, a slightly less radical version of intrapersonal permissivism about rational credence: for many bodies of evidence and for many individuals, there is a narrower but still wide range of credal states that the individual might rationally adopt in response to that evidence. My argument proceeds from two premises: (1) epistemic utility theory; and (2) permissivism about attitudes to epistemic risk. Epistemic utility theory says this: What it is epistemically rational for you to believe is what it would be rational for you to choose if you got to pick your beliefs and, when picking them, you cared only for their purely epistemic value. So, to say which credences it is epistemically rational for you to have, we must say how you should measure purely epistemic value, and which decision rule it is appropriate for you to use when you face the hypothetical choice between the possible credences you might adopt. Permissivism about attitudes to epistemic risk says that rationality permits many different attitudes to epistemic risk. These attitudes can show up in epistemic utility theory in two ways: in the way that you measure epistemic value; and in the decision rule that you use to pick your credences. I explore what happens if we encode our attitudes to epistemic risk in our epistemic decision rule. The result is the interpersonal and intrapersonal permissivism described above: different attitudes to epistemic risk lead to different choices of priors; given most bodies of evidence you might acquire, different priors lead to different posteriors; and even once we fix your attitudes to epistemic risk, if they are at all risk-inclined, there is a range of different priors and therefore different posteriors they permit. The essay ends by considering a range of objections to the sort of permissivism for which I've argued.
Epistemic decision theory (EDT) employs the mathematical tools of rational choice theory to justify epistemic norms, including probabilism, conditionalization, and the Principal Principle, among others. Practitioners of EDT endorse two theses: (1) epistemic value is distinct from subjective preference, and (2) belief and epistemic value can be numerically quantified. We argue the first thesis, which we call epistemic puritanism, undermines the second.
Veritism says that the fundamental source of epistemic value for a doxastic state is the extent to which it represents the world correctly: that is, its fundamental epistemic value is deter...
We argue that there is a tension between two monistic claims that are the core of recent work in epistemic consequentialism. The first is a form of monism about epistemic value, commonly known as veritism: accuracy is the sole final objective to be promoted in the epistemic domain. The other is a form of monism about a class of epistemic scoring rules: that is, strictly proper scoring rules are the only legitimate measures of inaccuracy. These two monisms, we argue, are in tension with each other. If only accuracy has final epistemic value, then there are legitimate alternatives to strictly proper scoring rules. Our argument relies on the way scoring rules are used in contexts where accuracy is rewarded, such as education.
This paper is about the alethic aspect of epistemic rationality. The most common approaches to this aspect are either normative (what ought or may a reasoner believe?) or evaluative (how rational is a reasoner?), where the evaluative approaches are usually comparative (one reasoner is assessed compared to another). These approaches often present problems with blindspots. For example, ought a reasoner to believe a currently true blindspot? Is she permitted to? Consequently, these approaches often fail in describing a situation of alethic maximality, where a reasoner fulfills all the alethic norms and could be used as a standard of rationality (as they are, in fact, used in some of these approaches). I propose a function α, which accepts a set of beliefs as input and returns a numeric alethic value. Then I use this function to define a notion of alethic maximality that is satisfiable by finite reasoners (reasoners with cognitive limitations) and does not present problems with blindspots. Function α may also be used in alethic norms and evaluation methods (comparative and non-comparative) that may be applied to finite reasoners and do not present problems with blindspots. A result of this investigation is that the project of providing purely alethic norms is defective. The use of function α also sheds light on important epistemological issues, such as the lottery and the preface paradoxes, and the principles of clutter avoidance and reflection.
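Purely to make the shape of such a function concrete, here is a toy sketch of a function that takes a set of beliefs and returns a numeric alethic value; the weighting scheme is my own illustration and is not the α proposed in the paper.

```python
def alethic_value(beliefs, truth_value, true_weight=1.0, false_weight=-1.5):
    """Toy alethic score for a finite set of beliefs.

    `beliefs` is an iterable of propositions the reasoner believes;
    `truth_value` maps each proposition to True or False.
    Falsehoods are weighted more heavily than truths here, an arbitrary
    conservative choice made only for illustration.
    """
    score = 0.0
    for p in beliefs:
        score += true_weight if truth_value[p] else false_weight
    return score

# Example: two true beliefs and one false belief.
beliefs = {"p1", "p2", "p3"}
truth = {"p1": True, "p2": True, "p3": False}
print(alethic_value(beliefs, truth))  # 1.0 + 1.0 - 1.5 = 0.5
```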
Epistemic instrumentalism views epistemic norms and epistemic normativity as essentially involving the instrumental relation between means and ends. It construes notions like epistemic normativity, norms, and rationality as forms of instrumental or means-end normativity, norms, and rationality. I do two main things in this paper. In part 1, I argue that there is an under-appreciated distinction between two independent types of epistemic instrumentalism. These are instrumentalism about epistemic norms and instrumentalism about epistemic normativity. In part 2, I argue that this under-appreciated distinction matters for the debate surrounding the plausibility of EI. Specifically, whether we interpret EI as norm-EI or as source-EI matters for the widely discussed universality or categoricity objection to EI, and for two important motivations for adopting EI, namely naturalism and the practical utility of epistemic norms. I will then conclude by drawing some lessons for epistemic instrumentalism going forward.
Assuming that votes are independent, the epistemically optimal procedure in a binary collective choice problem is known to be a weighted supermajority rule with weights given by personal log-likelihood-ratios. It is shown here that an analogous result holds in a much more general model. Firstly, the result follows from a more basic principle than expected-utility maximisation, namely from an axiom (Epistemic Monotonicity) which requires neither utilities nor prior probabilities of the ‘correctness’ of alternatives. Secondly, a person's input need not be a vote for an alternative; it may be any type of input, for instance a subjective degree of belief or probability of the correctness of one of the alternatives. The case of a profile of subjective degrees of belief is particularly appealing, since here no parameters such as competence parameters need to be known.
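The weighted rule referred to at the start of the abstract has, in its classical form, roughly the following shape; this is a generic statement of the familiar result, not the paper's generalized version.

```latex
% Classical epistemically optimal weighted rule for a binary choice
% between alternatives x and y, with independent voters i = 1, ..., n
% of competence p_i (probability of voting correctly): choose x iff
\[
  \sum_{i \,:\, i \text{ votes for } x} \log\frac{p_i}{1 - p_i}
  \;>\;
  \sum_{i \,:\, i \text{ votes for } y} \log\frac{p_i}{1 - p_i}.
\]
% The weights are the personal log-likelihood-ratios log(p_i / (1 - p_i)).
```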
In “The Possibility of Epistemic Nudging” (2021), I address a phenomenon that is widely neglected in the current literature on nudges: intentional doxastic nudging, i.e. people's intentional influence over other people's beliefs, rather than over their choices. I argue that, at least in brute cases, nudging is not giving reasons, but rather bypasses reasoning altogether. More specifically, nudging utilizes psychological heuristics and the nudged person's biases in smart ways. The goal of my paper is to defend the claim that nudging, even when it bypasses reasoning, can result in justified beliefs and knowledge. As I argue, it takes two things to accomplish this goal: suitable meta-epistemological views and appropriate circumstances. If a broadly reliabilist account of justified beliefs and knowledge is correct, and if the relevant belief-forming methods are externally individuated in the right way, then nudging to knowledge is possible. If, in addition, the nudger is knowledgeable, epistemically benevolent and systematically effective, then nudging to knowledge will become reality. In their replies, Neil Levy (2021) and Jonathan Matheson and Valerie Joly Chock (2021) put pressure on my argument from different angles. Levy thinks that a better case can be made for his view that nudging is giving testimonial reasons, and finds my objections to this view unconvincing. Matheson and Joly Chock, on the other hand, point out that acquiring knowledge through nudging (i.e. epistemic nudging) is compatible with evidentialism, even if nudging is not giving reasons. On their view, evidentialism provides an explanation of epistemic nudging that is superior to my own account, which, according to them, also suffers from a number of counterintuitive consequences. I am grateful to my critics for raising these concerns, because considering them deepens our perspective on the target phenomenon, and has made me think harder about the relevant epistemological issues. Nevertheless, I am convinced that my core claims can be defended against these criticisms.
We can distinguish two concepts of respect for persons: appraisal respect, an attitude based on a positive appraisal of a person's moral character, and recognition respect, the practice of treating persons with consideration based on the belief that they deserve such treatment. After engaging in an extended analysis of these concepts, I examine two "truisms" about them: (1) we justifiably believe of some persons that they have good character and thus deserve our esteem; (2) frequently it pays to be disrespectful; e.g., insulting those who insult us may put them in their place. By using empirical results from social and personality psychology and techniques from decision theory in addition to conceptual considerations, I argue that, surprisingly, the above two "truisms" are false. Extensive psychological evidence indicates that most persons are indeterminate—overall neither good nor bad nor intermediate—and that our information about specific persons almost never distinguishes those who are indeterminate from those who are not. The strategy of habitually avoiding disrespectful behavior maximizes long-term expected utility. In sum, we have good pragmatic reason to treat persons respectfully, but we have good epistemic reason to avoid esteeming or despising them.
This paper introduces a model of evidence denial that explains this behavior as a manifestation of rationality, based on the contention that social values (measurable as utilities) often underwrite these sorts of responses. Moreover, it is contended that the value associated with group membership in particular can override epistemic reason when the expected utility of a belief or belief system is great. However, it still appears possible for such unreasonable believers to reverse this sort of dogmatism and to change their beliefs in a way that is epistemically rational. The conjecture made here is that we should expect this to happen only when the expected utility of the beliefs in question dips below a threshold where the utility value of continued dogmatism and the associated group membership is no longer sufficient to motivate defusing the counter-evidence that tells against such epistemically irrational beliefs.
The paper re-expresses arguments against the normative validity of expected utility theory in Robin Pope (1983, 1991a, 1991b, 1985, 1995, 2000, 2001, 2005, 2006, 2007). These concern the neglect of the evolving stages of knowledge ahead (stages of what the future will bring). Such evolution is fundamental to an experience of risk, yet not consistently incorporated even in axiomatised temporal versions of expected utility. Its neglect entails a disregard of emotional and financial effects on well-being before a particular risk is resolved. These arguments are complemented with an analysis of the essential uniqueness property in the context of temporal and atemporal expected utility theory and a proof of the absence of a limit property natural in an axiomatised approach to temporal expected utility theory. Problems of the time structure of risk are investigated in a simple temporal framework restricted to a subclass of temporal lotteries in the sense of David Kreps and Evan Porteus (1978). This subclass is narrow but wide enough to discuss basic issues. It will be shown that there are serious objections against the modification of expected utility theory axiomatised by Kreps and Porteus (1978, 1979). By contrast, the umbrella theory proffered by Pope, which she has now termed SKAT, the Stages of Knowledge Ahead Theory, offers an epistemically consistent framework within which to construct particular models to deal with particular decision situations. A model by Caplin and Leahy (2001) will also be discussed and contrasted with the modelling within SKAT (Pope, Leopold and Leitner 2007).
Intellectual progress involves forming a more accurate picture of the world. But it also involves figuring out which concepts to use for theorizing about the world. Bayesian epistemology has had much to say about the former aspect of our cognitive lives, but little if anything about the latter. I outline a framework for formulating questions about conceptual change in a broadly Bayesian framework. By enriching the resources of Epistemic Utility Theory with a more expansive conception of epistemic value, I offer a picture of our cognitive economy on which adopting new conceptual tools can sometimes be epistemically rational.
Pérez Carballo adopts an epistemic utility theory picture of epistemic norms where epistemic utility functions measure the value of degrees of belief, and rationality consists in maximizing expected epistemic utility. Within this framework he seeks to show that we can make sense of the intuitive idea that some true beliefs—say true beliefs about botany—are more valuable than other true beliefs—say true beliefs about the precise number of plants in North Dakota. To do so, however, Pérez Carballo argues that we must think of the value of epistemic states as consisting in more than simply accuracy. This sheds light on which questions it is most epistemically valuable to pursue.
The Lockean Thesis says that you must believe p iff you're sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of Lockeanism; moreover, a plausible version of epistemic utility theory meshes with natural language considerations, yielding a new Lockean picture that helps to model and explain the role of beliefs in inquiry and conversation. Your beliefs are your best guesses in response to the epistemic priorities of your context. Upshot: we have a new approach to the epistemology and semantics of belief. And it has teeth. It implies that the role of beliefs is fundamentally different than many have thought, and in fact supports a metaphysical reduction of belief to credence.
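For concreteness, the bare Lockean schema that the abstract starts from can be written as follows; whether the threshold is fixed or varies with proposition and context is exactly what the different versions dispute.

```latex
% Lockean Thesis (schematic): belief iff sufficiently high credence,
% for some threshold t, possibly varying with proposition and context.
\[
  B(p) \;\longleftrightarrow\; \mathrm{cr}(p) \,\geq\, t,
  \qquad \tfrac{1}{2} < t \leq 1.
\]
```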
The idea that logic is in some sense normative for thought and reasoning is a familiar one. Some of the most prominent figures in the history of philosophy, including Kant and Frege, have been among its defenders. The most natural way of spelling out this idea is to formulate wide-scope deductive requirements on belief which rule out certain states as irrational. But what can account for the truth of such deductive requirements of rationality? By far the most prominent responses draw in one way or another on the idea that belief aims at the truth. In this paper, I consider two ways of making this line of thought more precise and I argue that they both fail. In particular, I examine a recent attempt by Epistemic Utility Theory to give a veritist account of deductive coherence requirements. I argue that despite its proponents' best efforts, Epistemic Utility Theory cannot vindicate such requirements.
Accuracy‐first epistemology is an approach to formal epistemology which takes accuracy to be a measure of epistemic utility and attempts to vindicate norms of epistemic rationality by showing how conformity with them is beneficial. If accuracy‐first epistemology can actually vindicate any epistemic norms, it must adopt a plausible account of epistemic value. Any such account must avoid the epistemic version of Derek Parfit's “repugnant conclusion.” I argue that the only plausible way of doing so is to say that accurate credences in certain propositions have no, or almost no, epistemic value. I prove that this is incompatible with standard accuracy‐first arguments for probabilism, and argue that there is no way for accuracy‐first epistemology to show that all credences of all agents should be coherent.
We argue that philosophers ought to distinguish epistemic decision theory and epistemology, in just the way ordinary decision theory is distinguished from ethics. Once one does this, the internalist arguments that motivate much of epistemic decision theory make sense, given specific interpretations of the formalism. Making this distinction also causes trouble for the principle called Propriety, which says, roughly, that the only acceptable epistemic utility functions make probabilistically coherent credence functions immodest. We cast doubt on this requirement, but then argue that epistemic decision theorists should never have wanted such a strong principle in any case.
This is a critical discussion of the accuracy-first approach to epistemic norms. If you think of accuracy (gradational or categorical) as the fundamental epistemic good and think of epistemic goods as things that call for promotion, you might think that we should use broadly consequentialist reasoning to determine which norms govern partial and full belief. After presenting consequentialist arguments for probabilism and the normative Lockean view, I shall argue that the consequentialist framework isn't nearly as promising as it might first appear.
There has been considerable discussion recently of consequentialist justifications of epistemic norms. In this paper, I shall argue that these justifications are not justifications. The consequentialist needs a value theory, a theory of the epistemic good. The standard theory treats accuracy as the fundamental epistemic good and assumes that it is a good that calls for promotion. Both claims are mistaken. The fundamental epistemic good involves accuracy, but it involves more than just that. The fundamental epistemic good is knowledge, not mere true belief, because the goodness of an epistemic state is connected to that state's ability to give us reasons. If I'm right about the value theory, this has a number of significant implications for the consequentialist project. First, the good-making features that attach to valuable full beliefs are not features of partial belief. The resulting value theory does not give us the values we need to give consequentialist justifications of credal norms. Second, the relevant kind of good does not call for promotion. It is good to know, but the rational standing of a belief is not determined by the belief's location in a ranked set of options. In the paper's final section, I explain why the present view is a kind of teleological non-consequentialism. There is a kind of good that is prior to the right, but as the relevant kind of good does not call for promotion, the value theory shows us what is wrong with the consequentialist project.
Pettigrew offers new axiomatic constraints on legitimate measures of inaccuracy. His axiom called ‘Decomposition’ stipulates that legitimate measures of inaccuracy evaluate a credence function in part based on its level of calibration at a world. I argue that if calibration is valuable, as Pettigrew claims, then this fact is an explanandum for accuracy-first epistemologists, not an explanans, for three reasons. First, the intuitive case for the importance of calibration isn't as strong as Pettigrew believes. Second, calibration is a perniciously global property that both contravenes Pettigrew's own views about the nature of credence functions themselves and undercuts the achievements and ambitions of accuracy-first epistemology. Finally, Decomposition introduces a new kind of value compatible with but separate from accuracy-proper, in violation of Pettigrew's alethic monism.
As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics (HEP) can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments with the goal of predicting their efficiency. In the first step, we assessed the project efficiency using Data Envelopment Analysis (DEA) of 67 experiments conducted in the HEP laboratory Fermilab. In the second step, we employed predictive algorithms to detect which team structures maximize the epistemic performance of an expert group. For this purpose, we used the efficiency scores obtained by DEA and applied predictive algorithms – lasso and ridge linear regression, neural network, and gradient boosted trees – on them. The results of the predictive analyses show moderately high accuracy, indicating that they can be beneficial as one of the steps in grant review. Still, their applicability in practice should be approached with caution. Some of the limitations of the algorithmic approach are the unreliability of citation patterns, unobservable variables that influence scientific success, and the potential predictability of the model.
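A minimal sketch of the second step described above, predicting DEA efficiency scores from proposal features with the listed model families; the feature matrix, sample size and score values here are placeholders, not the study's actual data or variables.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are experiments, columns are proposal features
# (e.g., team size, duration); y stands in for DEA efficiency scores in [0, 1].
rng = np.random.default_rng(0)
X = rng.normal(size=(67, 5))
y = rng.uniform(size=67)

models = {
    "lasso": Lasso(alpha=0.1),
    "ridge": Ridge(alpha=1.0),
    "neural net": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000),
    "gradient boosting": GradientBoostingRegressor(),
}

# Cross-validated R^2 gives a rough sense of predictive accuracy per model.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.2f}")
```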
Causation has always been a philosophically controversial subject matter. While David Hume's empiricist account of causation has been the dominant influence in analytic philosophy and science during modern times, a minority view has instead connected causation essentially to agency and manipulation. A related approach has for the first time gained widespread popularity in recent years, due to new powerful theories of causal inference in science that are based in a technical notion of intervention, and James Woodward's closely connected interventionist theory of causation in philosophy. This monograph assesses five manipulationist or interventionist theories of causation, viewed as theories that purport to tell us what causation is by providing us with the meaning of causal claims. It is shown that they cannot do this, as the conditions on causation that they impose are too weak, mainly due to ineliminable circularities in their definitions of causal terms. It is then argued that a subset of Woodward's theory can nevertheless contribute crucially to an explanation of the unique role that manipulation has in our acquisition of causal knowledge. This explanation differs from the common regularist explanation of the epistemic utility of manipulation and experiment, and it is taken to confirm several important manipulationist intuitions. However, the success of the explanation depends on interventionism not itself being understood as a theory of causation, but as a theory of intervention.
According to Humean theories of objective chance, the chances reduce to patterns in the history of occurrent events, such as frequencies. According to non-Humean accounts, the chances are metaphysically fundamental, existing independently of the "Humean Mosaic" of actually-occurring events. It is therefore possible, by the lights of non-Humeanism, for the chances and the frequencies to diverge wildly. Humeans often allege that this undermines the ability of non-Humean accounts of chance to rationalize adherence to David Lewis' Principal Principle (PP), which states that an agent's degrees of belief should match what they take to be the objective chances. In this paper, I propose two approaches to justifying (PP) for non-Humean chance, hence defusing the Humean objection. The first approach justifies (PP) via the role it plays in informing outright beliefs about long-run frequencies. The second approach justifies (PP) by showing that adherence to (PP), even for non-Humean chance, maximizes expected epistemic utility according to the actual objective chance function. I then address an objection to this approach, concerning the alleged circularity of the derivations.
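The Principal Principle under discussion is standardly formulated roughly as follows; this is a generic schematic statement that sets aside the usual admissibility caveats.

```latex
% Principal Principle (schematic form): an agent's initial credence in A,
% conditional on the proposition that the objective chance of A is x
% (together with any admissible evidence E), should equal x.
\[
  \mathrm{cr}\big(A \mid \mathrm{ch}(A) = x \,\wedge\, E\big) \;=\; x.
\]
```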
This paper is about guessing: how people respond to a question when they aren't certain of the answer. Guesses show surprising and systematic patterns that the most obvious theories don't explain. We argue that these patterns reveal that people aim to optimize a tradeoff between accuracy and informativity when forming their guess. After spelling out our theory, we use it to argue that guessing plays a central role in our cognitive lives. In particular, our account of guessing yields new theories of belief, assertion, and the conjunction fallacy—the psychological finding that people sometimes rank a conjunction as more probable than one of its conjuncts. More generally, we suggest that guessing helps explain how boundedly rational agents like us navigate a complex, uncertain world.
We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising has increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgement aggregation principle, linear pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this chapter, I give an argument for linear pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
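Linear pooling, the aggregation principle at issue here, is the familiar weighted-average rule; stated schematically:

```latex
% Linear pooling: the group credence in each proposition X is a weighted
% average of the members' credences, with fixed non-negative weights
% summing to one.
\[
  c_{\mathrm{group}}(X) \;=\; \sum_{i=1}^{n} w_i \, c_i(X),
  \qquad w_i \geq 0, \quad \sum_{i=1}^{n} w_i = 1.
\]
```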
Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, Additivity, and Strict Propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming Additivity.
Reductionist accounts of objective chance rely on a notion of fit, which ties the chances at a world to the frequencies at that world. Here, I criticize extant measures of the fit of a chance system and draw on recent literature in epistemic utility theory to propose a new model: chances fit a world insofar as they are accurate at that world. I show how this model of fit does a better job of explaining the normative features of chance, its role in the laws of nature, and its status as an expert function than do previous accounts.
ABSTRACT: A decision-theoretic approach and a nonlinguistic theory of norms are applied in the paper in an attempt to explain the nature of scientific rationality. It is considered as a normative system accepted by the scientific community. When we say that a certain action is rational, we express a speaker's acceptance of some norms concerning a definite action. Scientists can choose according to epistemic utility or other rules and values, which themselves have a variable nature. Rationality can be identified with a decision to accept a norm. This type of decision cannot be reduced only to its linguistic formulation; it is an act of evolvement of the normative regulation of human behavior. Norms are treated as decisions of a normative authority: a specific scientific community is the normative authority in science. These norms form a system and they are absolutely objective in the context of individual scientists. There exists an invariant core in all the norms of rationality, accounting for their not being liable to change, as compared with the flexibility of legal norms. The acceptance of and abidance by these norms is of social importance – it affects the aims of the community. A norm only defines the common framework and principles of scientific problem-solving; its application is a matter of professional skills and creative approach to a particular problem. It is of no importance at all if an agent's cognitive abilities do not live up to the requirements of a norm. Such a discrepancy can be compensated for by the fact that a scientist carries out work in a conceptual and normative framework established by a respective scientific community.
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite horizon variant of Koopmans's (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi's (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann's (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
The best and most popular argument for probabilism is the accuracy-dominance argument, which purports to show that alethic considerations alone support the view that an agent's degrees of belief should always obey the axioms of probability. I argue that extant versions of the accuracy-dominance argument face a problem. In order for the mathematics of the argument to function as advertised, we must assume that every omniscient credence function is classically consistent; there can be no worlds in the set of dominance-relevant worlds that obey some other logic. This restriction cannot be motivated on alethic grounds unless we're also willing to accept that rationality requires belief in every metaphysical necessity, as the distinction between a priori logical necessities and a posteriori metaphysical ones is not an alethic distinction. To justify the restriction to classically consistent worlds, non-alethic motivation is required. And thus, if there is a version of the accuracy-dominance argument in support of probabilism, it isn't one that is grounded in alethic considerations alone.
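The accuracy-dominance argument the abstract targets turns on a dominance notion of roughly this shape; the notation is generic, and the contested restriction is that the set of worlds contains only classically consistent ones.

```latex
% Accuracy dominance (schematic): credence function c is dominated iff
% some alternative c' is strictly more accurate at every world in W,
% where I(c, w) measures the inaccuracy of c at world w:
\[
  \exists\, c' \;\; \forall\, w \in W:\;\; I(c', w) \;<\; I(c, w).
\]
% Probabilism is then supported by showing that exactly the
% non-probabilistic credence functions are dominated in this sense.
```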
In this paper we present a version of a theory that we call Epistemic Contextualism – the view that the context, and the standards it determines, play a central role in evaluating whether or not a subject has justification and therefore knowledge – and use it to try to solve one of the most influential problems in epistemology, namely the Epistemic Regress Problem. The first step will be to characterize the epistemic regress problem. Next, we present an important distinction that is useful for a better understanding of our view, namely the distinction between propositional and doxastic justification. Then, we present the traditional views that allegedly solve this problem, showing that all of them are problematic. The final step concerns the exposition of the view we want to defend, showing how it can solve the epistemic regress problem in a way that the other views cannot. Keywords: Epistemic Regress, Justification, Knowledge, Epistemic Contextualism.
The epistemic position of an agent often depends on their position in a larger network of other agents who provide them with information. In general, agents are better off if they have diverse and independent sources. Sullivan et al. [19] developed a method for quantitatively characterizing the epistemic position of individuals in a network that takes into account both diversity and independence, and presented a proof-of-concept, closed-source implementation on a small graph derived from Twitter data [19]. This paper reports on an open-source reimplementation of their algorithm in Python, optimized to be usable on much larger networks. In addition to the algorithm and package, we also show the ability to scale up our package to large synthetic social network graph profiling, and finally demonstrate its utility in analyzing real-world empirical evidence of ‘echo chambers’ on online social media, as well as evidence of interdisciplinary diversity in an academic communications network.
to appear in Lambert, E. and J. Schwenkler (eds.) Transformative Experience (OUP). L. A. Paul (2014, 2015) argues that the possibility of epistemically transformative experiences poses serious and novel problems for the orthodox theory of rational choice, namely, expected utility theory — I call her argument the Utility Ignorance Objection. In a pair of earlier papers, I responded to Paul's challenge (Pettigrew 2015, 2016), and a number of other philosophers have responded in similar ways (Dougherty et al. 2015, Harman 2015) — I call our argument the Fine-Graining Response. Paul has her own reply to this response, which we might call the Authenticity Reply. But Sarah Moss has recently offered an alternative reply to the Fine-Graining Response on Paul's behalf (Moss 2017) — we'll call it the No Knowledge Reply. This appeals to the knowledge norm of action, together with Moss' novel and intriguing account of probabilistic knowledge. In this paper, I consider Moss' reply and argue that it fails. I argue first that it fails as a reply made on Paul's behalf, since it forces us to abandon many of the features of Paul's challenge that make it distinctive and with which Paul herself is particularly concerned. Then I argue that it fails as a reply independent of its fidelity to Paul's intentions.
Epistemic states of uncertainty play important roles in ethical and political theorizing. Theories that appeal to a "veil of ignorance," for example, analyze fairness or impartiality in terms of certain states of ignorance. It is important, then, to scrutinize proposed conceptions of ignorance and explore promising alternatives in such contexts. Here, I study Lerner's probabilistic egalitarian theorem in the setting of imprecise probabilities. Lerner's theorem assumes that a social planner tasked with distributing income to individuals in a population is "completely ignorant" about which utility functions belong to which individuals. Lerner models this ignorance with a certain uniform probability distribution, and shows that, under certain further assumptions, income should be equally distributed. Much of the criticism of the relevance of Lerner's result centers on the representation of ignorance involved. Imprecise probabilities provide a general framework for reasoning about various forms of uncertainty including, in particular, ignorance. To what extent can Lerner's conclusion be maintained in this setting?
'Desire', 'preference', 'utility', '(utility-aggregating) moral desirability' are terms that build on each other in this order. The article follows this definitional structure and presents these terms and their justifications. The aim is to present welfare-ethical criteria of the common good that define 'moral desirability' as an aggregation, e.g. addition, of individual utility: utilitarianism, utility egalitarianism, leximin, prioritarianism.
Instability occurs when the very fact of choosing one particular possible option rather than another affects the expected values of those possible options. In decision theory: an act is stable iff, given that it is actually performed, its expected utility is maximal. When there is no stable choice available, the resulting instability can seem to pose a dilemma of practical rationality. A structurally very similar kind of instability, which occurs in cases of anti-expertise, can likewise seem to create dilemmas of epistemic rationality. One possible line of response to such cases of instability, suggested by both Jeffrey (1983) and Sorensen (1987), is to insist that a rational agent can simply refuse to accept that such instability applies to herself in the first place. According to this line of thought, it can be rational for a subject to discount even very strong empirical evidence that the anti-expertise condition obtains. I present a new variety of anti-expertise condition where no particular empirical stage-setting is required, since the subject can deduce a priori that an anti-expertise condition obtains. This kind of anti-expertise case is therefore not amenable to the line of response that Jeffrey and Sorensen recommend.
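The stability condition quoted in the abstract can be put schematically as a ratifiability-style requirement; the notation is mine, not the paper's.

```latex
% An available act A is stable iff, on the supposition that A is actually
% performed, no alternative act B has higher expected utility:
\[
  \forall B:\;\; \mathrm{EU}(A \mid A \text{ is performed})
  \;\geq\; \mathrm{EU}(B \mid A \text{ is performed}).
\]
% Instability: no available act satisfies this condition.
```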
This paper provides an account of what it is to have faith in a proposition p, in both religious and mundane contexts. It is argued that faith in p doesn't require adopting a degree of belief that isn't supported by one's evidence; rather, it requires terminating one's search for further evidence and acting on the supposition that p. It is then shown, by responding to a formal result due to I. J. Good, that doing so can be rational in a number of circumstances. If expected utility theory is the correct account of practical rationality, then having faith can be both epistemically and practically rational if the costs associated with gathering further evidence or postponing the decision are high. If a more permissive framework is adopted, then having faith can be rational even when there are no costs associated with gathering further evidence.
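The formal result due to I. J. Good that the paper responds to says, roughly, that cost-free evidence never hurts in expectation; here is a schematic statement, with the expectation taken over the possible evidence propositions E.

```latex
% Good's value-of-evidence result (schematic): if gathering evidence is
% cost-free and the agent updates by conditionalization, then choosing
% after looking is at least as good in expectation as choosing now:
\[
  \mathbb{E}_{E}\!\left[\max_{A}\, \mathrm{EU}(A \mid E)\right]
  \;\geq\;
  \max_{A}\, \mathrm{EU}(A),
\]
% with strict inequality whenever some possible evidence would change
% which act maximizes expected utility.
```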
I consider the problem of how to derive what an agent believes from their credence function and utility function. I argue that the best solution to this problem is pragmatic, i.e. it is sensitive to the kinds of choices actually facing the agent. I further argue that this explains why our notion of justified belief appears to be pragmatic, as is argued e.g. by Fantl and McGrath. The notion of epistemic justification is not really a pragmatic notion, but it is being applied to a pragmatically defined concept, i.e. belief.
Psychologists and philosophers have argued that the capacity for perseverance or "grit" depends both on willpower and on a kind of epistemic resilience. But can a form of hopefulness in one's future success also constitute a source of grit? I argue that substantial practical hopefulness, as a hope to bring about a desired outcome through exercises of one's agency, can serve as a distinctive ground for the capacity for perseverance. Gritty agents' "practical hope" centrally involves an attention-fuelled, risk-inclined weighting of two competing concerns over action: when facing the decision of whether to persevere, hopeful gritty agents prioritize the aim of choosing a course of action which might go very well over that of choosing a course of action which is very likely to go fairly well. By relying on the notion of a "risk-inclined attentional pattern" as a dimension of gritty agents' practical hope, we can explain that form of hope's contribution to their motivation and practical rationality, especially on a risk-weighted expected utility framework. The upshot is a more pluralistic view of the sources of grit.
Scientists often use aesthetic values in the evaluation and choice of theories. Aesthetic values are not only regarded as leading to practically more useful theories but are often taken to stand in a special epistemic relation to the truth of a theory, such that the aesthetic merit of a theory is evidence of its truth. This paper explores what aesthetic considerations influence scientists' reasoning, how such aesthetic values relate to the utility of a scientific theory, and how one can justify the epistemic role for such values. The paper examines ways in which the link between beauty and truth can be defended, the challenges facing such accounts, and alternative epistemic roles for aesthetic values in scientific practice.
Despite numerous and increasing attempts to define what life is, there is no consensus on necessary and sufficient conditions for life. Accordingly, some scholars have questioned the value of definitions of life and encouraged scientists and philosophers alike to discard the project. As an alternative to this pessimistic conclusion, we argue that critically rethinking the nature and uses of definitions can provide new insights into the epistemic roles of definitions of life for different research practices. This paper examines the possible contributions of definitions of life in scientific domains where such definitions are used most (e.g., Synthetic Biology, Origins of Life, Alife, and Astrobiology). Rather than as classificatory tools for demarcation of natural kinds, we highlight the pragmatic utility of what we call operational definitions that serve as theoretical and epistemic tools in scientific practice. In particular, we examine contexts where definitions integrate criteria for life into theoretical models that involve or enable observable operations. We show how these definitions of life play important roles in influencing research agendas and evaluating results, and we argue that to discard the project of defining life is neither sufficiently motivated, nor possible without dismissing important theoretical and practical research.
The desirability of what actually occurs is often influenced by what could have been. Preferences based on such value dependencies between actual and counterfactual outcomes generate a class of problems for orthodox decision theory, the best-known perhaps being the so-called Allais Paradox. In this paper we solve these problems by extending Richard Jeffrey's decision theory to counterfactual prospects, using a multidimensional possible-world semantics for conditionals, and showing that preferences that are sensitive to counterfactual considerations can still be desirability maximising. We end the paper by investigating the conditions necessary and sufficient for a desirability function to be an expected utility. It turns out that the additional conditions imply highly implausible epistemic principles.