Abstract
Suppose we learn that we have a poor track record in forming beliefs rationally, or that a brilliant colleague thinks that we believe P irrationally. Does such input require us to revise those beliefs whose rationality is in question? When we gain information suggesting that our beliefs are irrational, we are in one of two general cases. In the first case, we made no error and our beliefs are rational; the input to the contrary is then misleading. In the second case, we indeed believe irrationally, and our original evidence already requires us to fix our mistake; the input to that effect is then normatively superfluous. Thus, we know that information suggesting that our beliefs are irrational is either misleading or superfluous. This, I submit, renders the input incapable of justifying belief revision, even though we do not know which of the two it is.