Abstract
Suppose you think that whether you believe some proposition A at some future time t might have a causal influence on whether A is true. For instance, maybe you think a woman can read your mind, and either (1) you think she will snap her fingers shortly after t if and only if you believe at t that she will, or (2) you think she will snap her fingers shortly after t if and only if you don't believe at t that she will. Let A be the proposition that she snaps her fingers shortly after t. In case (1), theoretical rationality seems to leave it open whether you should believe A or not. Perhaps, for all it has to say, you could just directly choose whether to believe A. David Velleman seems to be committed to something close to that, but his view has been unpopular. In case (2), you seem to face a theoretical dilemma: a situation in which any attitude you adopt toward A will be self-undermining in a way that makes you irrational. Such theoretical dilemmas ought to be impossible, just as genuine moral dilemmas ought to be impossible, but it is surprisingly hard to show that they are (perhaps because they aren't). I study cases analogous to (1) and (2) in a probabilistic framework in which degrees of belief, rather than all-or-nothing beliefs, are taken as basic. My principal conclusions are that Velleman's view is closer to the truth than it is generally thought to be, that theoretical dilemmas of the kind in case (2) arise only for hyperidealised agents unlike ourselves, and that there are related cases, which can arise for agents like us, that are very disturbing but might not quite amount to theoretical dilemmas.
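The two cases can be given a schematic formal gloss; the biconditionals below are only an illustrative rendering of the scenarios just described, not the paper's official formulation, and the operator B_t is shorthand introduced here for "you believe at t that".

\[
\text{Case (1):}\quad A \leftrightarrow B_t(A)
\qquad\qquad
\text{Case (2):}\quad A \leftrightarrow \neg B_t(A)
\]

where A is the proposition that she snaps her fingers shortly after t. Case (1) has two stable resolutions (believe A and she snaps; don't believe A and she doesn't), whereas case (2) has none, which is what makes it a candidate theoretical dilemma.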