Bayesianism And Self-Locating Beliefs

Dissertation, Stanford University (2007)
Abstract
How should we update our beliefs when we learn new evidence? Bayesian confirmation theory provides a widely accepted and well-understood answer – we should conditionalize. But this theory has a problem with self-locating beliefs, beliefs that tell you where you are in the world, as opposed to what the world is like. To see the problem, consider your current belief that it is January. You might be absolutely, 100%, sure that it is January. But you will soon believe it is February. This type of belief change cannot be modelled by conditionalization. We need some new principles of belief change for this kind of case, which I call belief mutation. In part 1, I defend the Relevance-Limiting Thesis, which says that a change in a purely self-locating belief of the kind that results in belief mutation should not shift your degree of belief in a non-self-locating belief, which can change only by conditionalization. My method is to give detailed analyses of the puzzles that threaten this thesis: Duplication, Sleeping Beauty, and The Prisoner. This also requires giving my own theory of observation selection effects. In part 2, I argue that when self-locating evidence is learnt from a position of uncertainty, it should be conditionalized on in the normal way. I defend this position by applying it to various cases where such evidence is found: I defend the Halfer position in Sleeping Beauty, and I defend the Doomsday Argument and the Fine-Tuning Argument.
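The abstract's January/February case can be made concrete with a small sketch. This is not from the dissertation itself, just a minimal illustration of standard conditionalization: credences are a probability distribution over worlds, and updating on evidence zeroes out the worlds the evidence rules out and renormalizes the rest. Because renormalization can never move any probability onto a world that already has credence 0, an agent certain it is January can never come to believe it is February by conditionalizing – which is the problem the abstract flags.

```python
def conditionalize(credences, evidence):
    """Bayesian conditionalization: given credences (a dict mapping worlds to
    probabilities) and evidence (the set of worlds compatible with what was
    learned), zero out worlds outside the evidence and renormalize."""
    total = sum(p for w, p in credences.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0)
            for w, p in credences.items()}

# You are certain it is January: credence 1 in "January", 0 in "February".
beliefs = {"January": 1.0, "February": 0.0}

# Conditionalizing on any evidence compatible with "January" leaves the
# credence in "February" at 0 - certainty is never revised away.
updated = conditionalize(beliefs, {"January", "February"})
assert updated["February"] == 0.0
```

Modelling the passage of time therefore requires a different kind of update (the abstract's "belief mutation"), which shifts credence between self-locating possibilities rather than merely ruling possibilities out.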
PhilPapers/Archive ID
BRABAS
Upload history
First archival date: 2013-06-25
Latest version: 1 (2018-03-22)
Added to PP index
2009-01-28