Conditionalization Does Not Maximize Expected Accuracy

Mind 126 (504):1155-1187 (2017)

Abstract

Greaves and Wallace argue that conditionalization maximizes expected accuracy. In this paper I show that their result only applies to a restricted range of cases. I then show that the update procedure that maximizes expected accuracy in general is one in which, upon learning P, we conditionalize, not on P, but on the proposition that we learned P. After proving this result, I provide further generalizations and show that much of the accuracy-first epistemology program is committed to KK-like iteration principles and to the existence of a class of propositions that rational agents will be certain of if and only if they are true.
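To fix ideas, here is a minimal sketch of the contrast described in the abstract; the notation (Cr for the credence function, L_P for the proposition that the agent learned P) is introduced here for illustration and is not taken from the paper itself.

% Notation Cr, L_P is illustrative, not the paper's own.
% Standard conditionalization: upon learning P, update by
\[ Cr_{\mathrm{new}}(\cdot) = Cr_{\mathrm{old}}(\cdot \mid P). \]
% The rule the abstract describes as maximizing expected accuracy in general:
% conditionalize instead on the proposition L_P that one learned P,
\[ Cr_{\mathrm{new}}(\cdot) = Cr_{\mathrm{old}}(\cdot \mid L_P). \]

The two rules coincide in cases where the agent is antecedently certain that she learns P if and only if P is true, which is one way of seeing why the abstract ties the result to KK-like iteration principles.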

Author's Profile

Miriam Schoenfield
University of Texas at Austin
