Citations of:


Principles of expert deference say that you should align your credences with those of an expert. This expert could be your doctor, your future, better-informed self, or the objective chances. These kinds of principles face difficulties in cases in which you are uncertain of the truth-conditions of the thoughts in which you invest credence, as well as cases in which the thoughts have different truth-conditions for you and the expert. For instance, you shouldn't defer to your doctor by aligning (...)



One can have no prior credence whatsoever (not even zero) in a temporally indexical claim. This fact saves the principle of conditionalization from potential counterexample and undermines the Elga and Arntzenius/Dorr arguments for the thirder position and Lewis' argument for the halfer position on the Sleeping Beauty Problem, thereby supporting the double-halfer position.

Since the publication of Elga's seminal paper in 2000, the Sleeping Beauty paradox has been the source of much discussion, particularly in this journal. Over the past few decades the Everettian interpretation of quantum mechanics has also been much debated. There is an interesting connection between the way these two topics raise issues about subjective probability assignments. This connection is often alluded to, but as far as we know Peter J. Lewis's ‘Quantum Sleeping Beauty’ is the first attempt to examine (...)

Indexical beliefs pose a special problem for standard theories of Bayesian updating. Sometimes we are uncertain about our location in time and space. How are we to update our beliefs in situations like these? In a stepwise fashion, I develop a constraint on the dynamics of indexical belief. As an application, the suggested constraint is brought to bear on the Sleeping Beauty problem. 

I describe in this paper an ontological solution to the Sleeping Beauty problem. I begin by describing the Entanglement urn experiment. I first restate the Sleeping Beauty problem from a wider perspective than the usual opposition between halfers and thirders. I also argue that the Sleeping Beauty experiment is best modelled with the Entanglement urn. I then draw the consequences of considering that some balls in the Entanglement urn have ontologically different properties from normal ones. In this context, considering a (...)

This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’ treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed (...) 

This paper proposes a new explanation for the paradoxes related to anthropic reasoning. Solutions to the Sleeping Beauty Problem and the Doomsday argument are discussed in detail. The main argument can be summarized as follows: our thoughts, reasonings and narratives inherently come from a certain perspective. With each perspective there is a center, or, using the term broadly, a self. The natural first-person perspective is most primitive. However, we can also think and express from others' perspectives with a theory (...)

How can self-locating propositions be integrated into normal patterns of belief revision? Puzzles such as Sleeping Beauty seem to show that such propositions lead to violation of ordinary principles for reasoning with subjective probability, such as Conditionalization and Reflection. I show that sophisticated forms of Conditionalization and Reflection are not only compatible with self-locating propositions, but also indispensable in understanding how they can function as evidence in Sleeping Beauty and similar cases.

One's inaccuracy for a proposition is defined as the squared difference between the truth value (1 or 0) of the proposition and the credence (or subjective probability, or degree of belief) assigned to the proposition. One should have the epistemic goal of minimizing the expected inaccuracies of one's credences. We show that the method of minimizing expected inaccuracy can be used to solve certain probability problems involving information loss and self-locating beliefs (where a self-locating belief of a temporal part of (...)
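The quadratic inaccuracy measure defined in this abstract is easy to make concrete. A minimal sketch (function names are illustrative, not the paper's): expected inaccuracy p·(1−c)² + (1−p)·c² is minimized by setting the credence c equal to the probability p.

```python
def inaccuracy(truth_value: int, credence: float) -> float:
    """Squared difference between the truth value (1 or 0) and the credence."""
    return (truth_value - credence) ** 2

def expected_inaccuracy(p: float, credence: float) -> float:
    """Expected inaccuracy of `credence` when the proposition is true with probability p."""
    return p * inaccuracy(1, credence) + (1 - p) * inaccuracy(0, credence)

# Scanning candidate credences shows the minimum sits at c = p:
p = 0.5
best_score, best_credence = min(
    (expected_inaccuracy(p, c / 100), c / 100) for c in range(101)
)
assert abs(best_credence - p) < 1e-9
```

This is just the Brier score; the strict propriety illustrated here (c = p uniquely minimizes expected inaccuracy) is what lets the minimization method pick out a unique credence.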

In a recent article, Adam Elga outlines a strategy for “Defeating Dr Evil with Self-Locating Belief”. The strategy relies on an indifference principle that is not up to the task. In general, there are two things to dislike about indifference principles: adopting one normally means confusing risk for uncertainty, and they tend to lead to incoherent views in some ‘paradoxical’ situations. I argue that both kinds of objection can be levelled against Elga’s indifference principle. There are also some difficulties with (...)



A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order (...)

How do temporal and eternal beliefs interact? I argue that acquiring a temporal belief should have no effect on eternal beliefs for an important range of cases. Thus, I oppose the popular view that new norms of belief change must be introduced for cases where the only change is the passing of time. I defend this position from the purported counterexamples of the Prisoner and Sleeping Beauty. I distinguish two importantly different ways in which temporal beliefs can be acquired and (...) 

If an agent believes that the probability of E being true is 1/2, should she accept a bet on E at even odds or better? Yes, but only given certain conditions. This paper is about what those conditions are. In particular, we think that there is a condition that has been overlooked so far in the literature. We discovered it in response to a paper by Hitchcock (2004) in which he argues for the 1/3 answer to the Sleeping Beauty problem. (...) 
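The arithmetic behind "even odds or better" can be spelled out. A minimal sketch (the function name and stakes are illustrative, not from the paper): with credence 1/2, a bet at exactly even odds breaks even in expectation, and any better payout ratio gives positive expected value.

```python
def bet_ev(credence: float, stake: float, payout: float) -> float:
    """Expected monetary value of accepting a bet on E:
    win `payout` if E is true, lose `stake` otherwise."""
    return credence * payout - (1 - credence) * stake

assert bet_ev(0.5, 10, 10) == 0.0   # even odds: break even in expectation
assert bet_ev(0.5, 10, 12) > 0      # better than even odds: positive expectation
assert bet_ev(0.4, 10, 10) < 0      # lower credence: the same bet is a loser
```

The paper's point is that this expected-value calculation licenses the bet only under further conditions; the sketch shows the baseline arithmetic those conditions qualify.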

Currently, the most popular views about how to update de se or self-locating beliefs entail the one-third solution to the Sleeping Beauty problem. Another widely held view is that an agent's credences should be countably additive. In what follows, I will argue that there is a deep tension between these two positions. For the assumptions that underlie the one-third solution to the Sleeping Beauty problem entail a more general principle, which I call the Generalized Thirder Principle, and there are situations (...)



A large number of essays address the Sleeping Beauty problem, which undermines the validity of Bayesian inference and Bas van Fraassen's 'Reflection Principle'. In this study a straightforward analysis of the problem based on probability theory is presented. The key difference from previous works is that, apart from the random experiment imposed by the problem's description, a different one is also considered, in order to dispel the confusion about the conditional probabilities involved. The results of the analysis indicate that no (...)

I restate the Sleeping Beauty probabilistic paradox and offer an overview of the ongoing discussions that aim at resolving the problem. I summarize and briefly criticize the various views: the neutral position, Bayesian thirdism, non-Bayesian thirdism, traditional halfism, as well as the new halfism of credence conservation, a somewhat neglected but interesting point of view. At the same time I try to clarify some essential notions, to introduce the main actors of the debate, and to anticipate its evolution.

The best arguments for the 1/3 answer to the Sleeping Beauty problem all require that when Beauty awakes on Monday she should be uncertain what day it is. I argue that this claim should be rejected, thereby clearing the way to accept the 1/2 solution. 
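For readers who want the bare numbers the halfer/thirder dispute concerns, a minimal Monte Carlo sketch (an illustration of the two counting conventions, not an argument for either side): heads occurs in about half of experiments but in about a third of awakenings, because tails produces two awakenings.

```python
import random

random.seed(0)
trials = 100_000
heads_experiments = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2  # heads: Monday only; tails: Monday and Tuesday
    total_awakenings += awakenings
    if heads:
        heads_experiments += 1
        heads_awakenings += 1

print(heads_experiments / trials)           # per-experiment frequency, about 1/2
print(heads_awakenings / total_awakenings)  # per-awakening frequency, about 1/3
```

Which of these frequencies fixes Beauty's credence on waking is exactly what the arguments discussed above dispute; the simulation only shows that the two conventions really do come apart.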



The traditional solutions to the Sleeping Beauty problem say that Beauty should have either a sharp 1/3 or sharp 1/2 credence that the coin flip was heads when she wakes. But Beauty's evidence is incomplete, so that it doesn't warrant a precise credence, I claim. Instead, Beauty ought to have a properly imprecise credence when she wakes. In particular, her representor ought to assign R(Heads) = [0, 1/2]. I show, perhaps surprisingly, that this solution can account for many of the intuitions (...)

Philosophical interest in the role of self-locating information in the confirmation of hypotheses has intensified in virtue of the Sleeping Beauty problem. If the correct solution to that problem is 1/3, various attractive views on confirmation and probabilistic reasoning appear to be undermined; and some writers have used the problem as a basis for rejecting some of those views. My interest here is in two such views. One of them is the thesis that self-locating information cannot be evidentially relevant to (...)

The Sleeping Beauty problem is a touchstone for theories about self-locating belief, i.e. theories about how we should reason when data or theories contain indexical information. Opinion on this problem is split between two camps: those who defend the "1/2 view" and those who advocate the "1/3 view". I argue that both these positions are mistaken. Instead, I propose a new "hybrid" model, which avoids the faults of the standard views while retaining their attractive properties. This model _appears_ to violate (...)

The way a rational agent changes her belief in certain propositions/hypotheses in the light of new evidence lies at the heart of Bayesian inference. The basic natural assumption, as summarized in van Fraassen's Reflection Principle, would be that in the absence of new evidence the belief should not change. Yet, there are examples that are claimed to violate this assumption. The apparent paradox presented by such examples, if not settled, would demonstrate the inconsistency and/or incompleteness of the Bayesian approach, and (...) 





I present a solution to the Sleeping Beauty problem. I begin with the consensual emerald case and then describe a set of relevant urn analogies and situations. These latter experiments make it easier to diagnose the flaw in the thirder's line of reasoning. I discuss in detail the root cause of the flaw in the argument for 1/3, which is an erroneous assimilation to a repeated experiment. Lastly, I discuss an informative variant of the original Sleeping Beauty experiment that casts (...)

The Sleeping Beauty problem is presented in a formalized framework which summarizes the underlying probability structure. The two rival solutions proposed by Elga and Lewis differ by a single parameter concerning her prior probability. They can be supported by considering, respectively, that Sleeping Beauty is “fuzzy-minded” and “blank-minded”, the first interpretation being more natural than the second. The traditional absent-minded driver problem is reinterpreted in this framework and sustains Elga’s solution.



