Citations of:
Evidence: A Guide for the Uncertain
Philosophy and Phenomenological Research 100 (3):586-632 (2020)
Is it ever rational to suspend judgment about whether a particular doxastic attitude of ours is rational? An agent who suspends about whether her attitude is rational has serious doubts that it is. These doubts place a special burden on the agent, namely, to justify maintaining her chosen attitude over others. A dilemma arises. Providing justification for maintaining the chosen attitude would commit the agent to considering the attitude rational—contrary to her suspension on the matter. Alternatively, in the absence of (...)

Should you always be certain about what you should believe? In other words, does rationality demand higher-order certainty? First answer: Yes! Higher-order uncertainty can’t be rational, since it breeds at least a mild form of epistemic akrasia. Second answer: No! Higher-order certainty can’t be rational, since it licenses a dogmatic kind of insensitivity to higher-order evidence. Which answer wins out? The first, I argue. Once we get clearer about what higher-order certainty is, a view emerges on which higher-order certainty does (...)

People with the kind of preferences that give rise to the St. Petersburg paradox are problematic—but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable (...)

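For context, on the gamble's standard formulation (my gloss; the truncated abstract does not spell it out), a fair coin is flipped until it lands heads, and the payoff is $2^n$ if the first heads arrives on flip $n$. The expected payoff then diverges:

\[ \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty. \]
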
A familiar complaint about conciliatory approaches to disagreement is that they are self-defeating or incoherent because they ‘call for their own rejection’. This complaint seems to be influential, but it isn’t clear whether conciliatory views call for their own rejection or what, if anything, this tells us about the coherence of such views. We shall look at two ways of developing this self-defeat objection, and we shall see that conciliatory views emerge unscathed. A simple version of the self-defeat objection leaves (...)

Epistemologists spend a great deal of time thinking about how we should respond to our evidence. They spend far less time thinking about the ways that evidence can be acquired in the first place. This is an oversight. Some ways of acquiring evidence are better than others. Many normative epistemologies struggle to accommodate this fact. In this article I develop one that can and does. I identify a phenomenon – epistemic feedback loops – in which evidence acquisition has gone awry, (...)

Metacognitive mental states are mental states about mental states. For example, I may be uncertain whether my belief is correct. In social discourse, an interlocutor’s metacognitive certainty may constitute evidence about the reliability of their testimony. For example, if a speaker is certain that their belief is correct, then we may take this as evidence in favour of their belief, or its content. This paper argues that, if metacognitive certainty is genuine evidence, then it is disproportionate evidence for extreme beliefs. (...)

We argue that social deliberation may increase an agent’s confidence and credence under certain circumstances. An agent considers a proposition H and assigns a probability to it. However, she is not fully confident that she herself is reliable in this assignment. She then endorses H during deliberation with another person, expecting him to raise serious objections. To her surprise, however, the other person does not raise any objections to H. How should her attitudes toward H change? It seems plausible that (...)

Higher-order evidence is evidence about what is rational to think in light of your evidence. Many have argued that it is special – falling into its own evidential category, or leading to deviations from standard rational norms. But it is not. Given standard assumptions, almost all evidence is higher-order evidence.

Many have claimed that whenever an investigation might provide evidence for a claim, it might also provide evidence against it. Similarly, many have claimed that your credence should never be on the edge of the range of credences that you think might be rational. Surprisingly, both of these principles imply that you cannot rationally be modest: you cannot be uncertain what the rational opinions are.

There are many things—call them ‘experts’—that you should defer to in forming your opinions. The trouble is, many experts are modest: they’re less than certain that they are worthy of deference. When this happens, the standard theories of deference break down: the most popular (“Reflection”-style) principles collapse to inconsistency, while their most popular (“New-Reflection”-style) variants allow you to defer to someone while regarding them as an anti-expert. We propose a middle way: deferring to someone involves preferring to make any decision (...)

In this essay, I cast doubt on an apparent truism: namely, that if evidence is available for gathering and use at a negligible cost, then it's always instrumentally rational for us to gather that evidence and use it for making decisions. Call this thesis Value of Information. I show that Value of Information conflicts with two other plausible theses. The first is the view that an agent's evidence can entail non-trivial propositions about the external world. The second is the view (...)

This paper is about a tension between two theses. The first is Value of Evidence: roughly, the thesis that it is always rational for an agent to gather and use cost-free evidence for making decisions. The second is Rationality of Imprecision: the thesis that an agent can be rationally required to adopt doxastic states that are imprecise, i.e., not representable by a single credence function. While others have noticed this tension, I offer a new diagnosis of it. I show that (...)

Cases of inexact observations have been used extensively in the recent literature on higher-order evidence and higher-order knowledge. I argue that the received understanding of inexact observations is mistaken. Although it is convenient to assume that such cases can be modeled statically, they should be analyzed as dynamic cases that involve change of knowledge. Consequently, the underlying logic should be dynamic epistemic logic, not its static counterpart. When reasoning about inexact knowledge, it is easy to confuse the initial situation, the (...)

It is natural to think that rationality imposes some relationship between what a person believes, and what she believes about what she’s rational to believe. Epistemic akrasia—for example, believing P while believing that P is not rational to believe in your situation—is often seen as intrinsically irrational. This paper argues otherwise. In certain cases, akrasia is intuitively rational. Understanding why akratic beliefs in those cases are indeed rational provides a deeper explanation of how typical akratic beliefs are irrational—an explanation that does (...)

Good’s theorem is the apparent platitude that it is always rational to ‘look before you leap’: to gather information before making a decision when doing so is free. We argue that Good’s theorem is not platitudinous and may be false. And we argue that the correct advice is rather to ‘make your act depend on the answer to a question’. Looking before you leap is rational when, but only when, it is a way to do this.

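A standard decision-theoretic statement of the theorem the abstract targets (my gloss, not quoted from the paper): if observing $X$ is free, then choosing after looking is at least as good in expectation as choosing now,

\[ \max_{a}\, \mathbb{E}[U(a)] \;\le\; \mathbb{E}\big[ \max_{a}\, \mathbb{E}[U(a) \mid X] \big], \]

with equality when no possible answer would change the optimal act.
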
Predictable polarization is everywhere: we can often predict how people’s opinions—including our own—will shift over time. Extant theories either neglect the fact that we can predict our own polarization, or explain it through irrational mechanisms. They needn’t. Empirical studies suggest that polarization is predictable when evidence is ambiguous, i.e., when the rational response is not obvious. I show how Bayesians should model such ambiguity, and then prove that—assuming rational updates are those which obey the value of evidence (Blackwell 1953; Good (...)

You have higher-order uncertainty iff you are uncertain of what opinions you should have. I defend three claims about it. First, the higher-order evidence debate can be helpfully reframed in terms of higher-order uncertainty. The central question becomes how your first- and higher-order opinions should relate—a precise question that can be embedded within a general, tractable framework. Second, this question is nontrivial. Rational higher-order uncertainty is pervasive, and lies at the foundations of the epistemology of disagreement. Third, the answer is (...)

Many theories of rational belief give a special place to logic. They say that an ideally rational agent would never be uncertain about logical facts. In short: they say that ideal rationality requires "logical omniscience." Here I argue against the view that ideal rationality requires logical omniscience on the grounds that the requirement of logical omniscience can come into conflict with the requirement to proportion one’s beliefs to the evidence. I proceed in two steps. First, I rehearse an influential line (...)

‘Bayesian epistemology’ became an epistemological movement in the 20th century, though its two main features can be traced back to the eponymous Reverend Thomas Bayes (c. 1701-61). Those two features are: (1) the introduction of a formal apparatus for inductive logic; (2) the introduction of a pragmatic self-defeat test (as illustrated by Dutch Book Arguments) for epistemic rationality as a way of extending the justification of the laws of deductive logic to include a justification for the laws of inductive logic. (...)

A norm of local expert deference says that your credence in an arbitrary proposition A, given that the expert's probability for A is n, should be n. A norm of global expert deference says that your credence in A, given that the expert's entire probability function is E, should be E(A). Gaifman (1988) taught us that these two norms are not equivalent. Here, I provide characterisation theorems which tell us precisely when the norms give different advice. They tell us that, (...)

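In symbols, with $C$ your credence function and $E$ the expert's, the two norms the abstract states in prose are:

\[ \text{Local: } C\big(A \mid E(A) = n\big) = n; \qquad \text{Global: } C\big(A \mid E = p\big) = p(A) \text{ for each candidate function } p. \]
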
This paper explores the idea that it is instrumentally valuable to learn normative truths. We consider an argument for "normative hedging" based on this principle, and examine the structure of decision-making under moral uncertainty that arises from it. But it also turns out that the value of normative information is inconsistent with the principle that learning *empirical* truths is instrumentally valuable. We conclude with a brief comment on "metanormative regress."

This paper proposes a novel answer to the question of what attitude agents should adopt when they receive misleading higher-order evidence, one that avoids the drawbacks of existing views. The answer builds on the independently motivated observation that there is a difference between attitudes that agents form as conclusions of their reasoning, called terminal attitudes, and attitudes that are formed in a transitional manner in the process of reasoning, called transitional attitudes. Terminal and transitional attitudes differ both in their descriptive and (...)

On at least one of its uses, ‘higher-order evidence’ refers to evidence about what opinions are rationalized by your evidence. This chapter surveys the foundational epistemological questions raised by such evidence, the methods that have proven useful for answering them, and the potential consequences and applications of such answers.

Principles of expert deference say that you should align your credences with those of an expert. This expert could be your doctor, the objective chances, or your future self, after you've learnt something new. These kinds of principles face difficulties in cases in which you are uncertain of the truth-conditions of the thoughts in which you invest credence, as well as cases in which the thoughts have different truth-conditions for you and the expert. For instance, you shouldn't defer to your (...)

Do people tend to be overconfident in their opinions? Many think so. They’ve run studies to test whether people are calibrated: whether their confidence in their opinions matches the proportion of those opinions that are true. Under certain conditions, people are systematically “over-calibrated”—for example, of the opinions they’re 80% confident in, only 60% are true. From this observed over-calibration, it’s inferred that people are irrationally overconfident. My question: When—and why—is this inference warranted? Answering this question requires articulating a general connection (...)

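To fix ideas (my notation, glossing the abstract's definitions): being calibrated means that, of the opinions held with confidence $c$, a proportion $c$ are true; the abstract's example is over-calibration at $c = 0.8$:

\[ \text{calibrated: } \Pr(\text{true} \mid \text{confidence} = c) = c; \qquad \text{observed: } \Pr(\text{true} \mid \text{confidence} = 0.8) = 0.6 < 0.8. \]
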
Stalnaker's Thesis about indicative conditionals is, roughly, that the probability one ought to assign to an indicative conditional equals the probability that one ought to assign to its consequent conditional on its antecedent. The thesis seems right. If you draw a card from a standard 52-card deck, how confident are you that the card is a diamond if it's a red card? To answer this, you calculate the proportion of red cards that are diamonds—that is, you calculate the (...)
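
In symbols, the thesis says $\Pr(A \rightarrow C) = \Pr(C \mid A)$ for an indicative conditional $A \rightarrow C$, and the card example works out as the abstract suggests (a standard deck has 26 red cards, 13 of them diamonds):

\[ \Pr(\text{diamond} \mid \text{red}) = \frac{13}{26} = \frac{1}{2}. \]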