Abstract
Approximate coherentism suggests that imperfectly rational agents should hold approximately coherent credences. This norm is intended as a generalization of ordinary coherence. I argue that it may be unable to play this role, by considering how it applies to learning experiences. While it is unclear how imperfectly rational agents should revise their beliefs, I suggest that one plausible route is Bayesian updating. However, Bayesian updating can take an incoherent agent from relatively more coherent credences to relatively less coherent credences, depending on which data are observed. Thus, comparative rationality judgments among incoherent agents are unduly sensitive to luck.
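
To make the comparative claim concrete, the following is a small illustrative sketch, not drawn from the paper itself. It assumes a three-world model, one particular mildly incoherent prior, Euclidean distance to the nearest probability function as the incoherence measure, and conditionalization as the update rule; all of these choices are assumptions made for illustration. Under these assumptions, the same prior becomes perfectly coherent after one observation and less coherent after another.

```python
# Toy sketch (not from the paper): three worlds, credences over every
# contingent proposition, conditionalization as the update rule, and
# incoherence measured as Euclidean distance to the nearest coherent
# (probabilistically induced) credence vector.
import itertools

import numpy as np
from scipy.optimize import minimize

WORLDS = (1, 2, 3)
# All contingent propositions: non-empty proper subsets of the world set.
PROPS = [frozenset(s) for r in (1, 2)
         for s in itertools.combinations(WORLDS, r)]

# A mildly incoherent prior: the atoms sum to 1, but the credence in {1, 2}
# exceeds the sum of the credences in {1} and {2}.
PRIOR = {
    frozenset(): 0.0,
    frozenset({1}): 0.3, frozenset({2}): 0.3, frozenset({3}): 0.4,
    frozenset({1, 2}): 0.7,  # additivity would require 0.6
    frozenset({1, 3}): 0.7,
    frozenset({2, 3}): 0.7,
}

def induced(p):
    """Credence vector over PROPS induced by an atom-level probability p."""
    atom = dict(zip(WORLDS, p))
    return np.array([sum(atom[w] for w in prop) for prop in PROPS])

def incoherence(cred):
    """Euclidean distance from cred to the nearest coherent credence vector."""
    target = np.array([cred[prop] for prop in PROPS])
    res = minimize(lambda p: np.sum((induced(p) - target) ** 2),
                   x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
                   constraints={'type': 'eq', 'fun': lambda p: p.sum() - 1},
                   method='SLSQP')
    return float(np.sqrt(res.fun))

def conditionalize(cred, evidence):
    """Bayesian updating of (possibly incoherent) credences: c_E(X) = c(X & E) / c(E)."""
    return {prop: cred[prop & evidence] / cred[evidence] for prop in cred}

print(f"prior incoherence:               {incoherence(PRIOR):.3f}")
for E in (frozenset({1, 2}), frozenset({1, 3})):
    posterior = conditionalize(PRIOR, E)
    print(f"posterior after learning {set(E)}: {incoherence(posterior):.3f}")
# With this prior, learning {1, 3} yields a perfectly coherent posterior,
# while learning {1, 2} yields a posterior farther from coherence than the
# prior was: the direction of the change depends on which datum arrives.
```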