Bayesianism And Self-Locating Beliefs

Dissertation, Stanford University (2007)

Abstract

How should we update our beliefs when we learn new evidence? Bayesian confirmation theory provides a widely accepted and well-understood answer: we should conditionalize. But this theory has a problem with self-locating beliefs, beliefs that tell you where you are in the world, as opposed to what the world is like. To see the problem, consider your current belief that it is January. You might be absolutely, 100%, sure that it is January. But you will soon believe it is February. This type of belief change cannot be modelled by conditionalization. We need new principles of belief change for this kind of case, which I call belief mutation. In part 1, I defend the Relevance-Limiting Thesis, which says that a change in a purely self-locating belief of the kind that results in belief mutation should not shift your degree of belief in a non-self-locating belief, which can change only by conditionalization. My method is to give detailed analyses of the puzzles that threaten this thesis: Duplication, Sleeping Beauty, and The Prisoner. This also requires giving my own theory of observation selection effects. In part 2, I argue that when self-locating evidence is learnt from a position of uncertainty, it should be conditionalized on in the normal way. I defend this position by applying it to various cases where such evidence is found. I defend the Halfer position in Sleeping Beauty, and I defend the Doomsday Argument and the Fine-Tuning Argument.
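The formal point behind the January example can be made explicit (this is a standard sketch of conditionalization, not the dissertation's own formulation). Conditionalizing on evidence E sets the new credence in a hypothesis H to

    P_new(H) = P(H | E) = P(H & E) / P(E)

If P(H) = 1, then P(H & E) = P(E), so P(H | E) = 1 for any evidence E with P(E) > 0. A credence of 1 can therefore never be lowered by conditionalization, yet the credence that it is January must fall from 1 to 0 as February arrives. This is why such belief change calls for a different mechanism.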
