Knowledge: A Human Interest Story


Over the years I’ve written many papers defending an idiosyncratic version of interest-relative epistemology. This book collects and updates the views I’ve expressed in those papers. Interest-relative epistemologies all start in roughly the same way. A big part of what makes knowledge important is that it rationalises action. But for almost anything we purportedly know, there is some action that it wouldn’t rationalise. I know what I had for breakfast, but I wouldn’t take a bet at billion to one odds about it. Knowledge has practical limits.

The first idiosyncratic feature of my version of interest-relative epistemology is how those limits are identified. Other interest-relative philosophers typically say that the limits have to do with stakes: in high stakes situations, knowledge goes away. That’s no part of my view. I think knowledge goes away in long odds situations. High stakes situations are almost always long odds situations, for reasons to do with the declining marginal utility of money. But the converse isn’t true. On my view, knowledge often goes away in cases where it is trivial to check before acting. This idea, that interests matter in long odds cases, and not just in high stakes cases, is the main constant in what I’ve written on interest-relativity over the years.

But there are three other respects in which the view I set out and defend in this book is very different from the view I set out in older papers. I used to identify the practical limits on knowledge with cases where relying on the purported knowledge would get the wrong answer. I focused, that is, on the outputs of inquiry: knowledge goes away if relying on it would lead one to make a mistake. I now think I was looking at the wrong end of inquiry. Knowledge goes away if the thinker starts conducting an inquiry where the purported knowledge is an inappropriate starting point, and inappropriate for the special reason that it might be false.
Now one way we can tell that something is a bad starting point is that starting there will mean we end up at the wrong place. But it’s not the only way. Sometimes a bad starting point will lead to the right conclusion for the wrong reasons. As the Nyāya philosophers argued, rational inquiry starts with knowledge. If it would be irrational to start an inquiry with a particular belief, that belief isn’t knowledge.

Not all inquiries are practical inquiries, but many are. And practical inquiries will usually be at the center of attention in this book. But what is someone trying to figure out when they conduct a practical inquiry? I used to think that they were trying to figure out which option maximised expected utility, and, to a first approximation, I identified knowledge with those things one could conditionalise on without changing the option that maximised expected utility. As noted in the previous paragraph, I no longer think that we can identify knowledge with what doesn’t change our verdicts. But more importantly, I no longer think that expected utility maximisation is as central to practical inquiry as I once did. There are theoretical reasons from game theory that raise doubts about expected utility maximisation. Weak dominance reasoning is part of our theory of rational choice, and can’t be modelled as expected utility maximisation. Perhaps some kinds of equilibrium seeking are parts of practical inquiry, and can’t be modelled as expected utility maximisation either. But there are also very practical reasons to think that practical inquiry doesn’t aim at expected utility maximisation. When there are a lot of very similar options - think about selecting a can from a supermarket shelf - and it’s more trouble than it’s worth to figure out which of them maximises expected utility, it’s best to ignore the differences between them and just pick. As I’ll argue in Chapter 6, this makes a big difference to how interests and knowledge interact.
In the version of interest-relativity that I’m defending here, everything in epistemology is interest-relative. Knowledge, rational belief, and evidence are all interest-relative. But they are all interest-relative in slightly different ways. The main aim here is to defend the interest-relativity of knowledge. A common objection to interest-relative theories of knowledge is that they can’t be extended into theories of all the things we care about in epistemology. Here I try to meet that challenge. The way I do so is a little messy. It would be nice if there were some part of epistemology that’s interest-invariant, as I used to think, or if all the interest-relative notions were interest-relative in the same way, as other interest-relative epistemologists argue. For better or worse, that’s not the view I’m defending. Interests matter throughout epistemology, and we just have to go case by case to figure out how and why they matter.

Author's Profile

Brian Weatherson
University of Michigan, Ann Arbor

