Abstract
Standard arguments for Bayesian conditionalization rely on assumptions that many epistemologists have criticized as too strong: (i) that conditionalizers must be logically infallible, which rules out the possibility of rational logical learning, and (ii) that what is learned with certainty must be true (factivity). In this paper, we give a new factivity-free argument for the superconditionalization norm in a personal possibility framework that allows agents to learn empirical and logical falsehoods. We then ask how the resulting framework should be interpreted. Does it still model norms of rationality, or something else, or nothing useful at all? We discuss five ways of interpreting our results, three that embrace them and two that reject them. We find one of each kind wanting and leave readers to choose among the remaining three.