In “The Deontological Conception of Epistemic Justification,” a by now classical article, William Alston argues that epistemic justification should not be spelled out in terms of obligations, permissions, praise, blame, and responsibility. His main argument for this thesis runs as follows:

(P1) We are obligated to believe certain propositions only if we have sufficient voluntary control over our beliefs.

(P2) We do not have sufficient voluntary control over our beliefs.

(C) We do not have doxastic obligations.Footnote 1

And, obviously, if epistemic justification is to be spelled out in terms of doxastic obligations and we do not have such obligations, then the deontological conception of epistemic justification is in trouble. For the deontological conception of epistemic justification would then imply that we never believe justifiedly and that is widely agreed to be false.Footnote 2 It even seems that the very idea that we bear doxastic responsibility is problematic if Alston’s argument is convincing, for how can we be responsible for our beliefs if we never have an obligation (not) to believe a particular proposition?

Before we consider how philosophers have responded to this argument, let me say a few words on control. By ‘voluntary control’ Alston means intentional control. One has intentional control over something if one can choose to do it and one can choose not to do it or, in other words, if one can decide to do it and one can decide not to do it. Slightly more precisely, Alston’s view seems to be that one has voluntary control over φ-ing if and only if one can φ as the result of an intention to φ and one can ~φ as the result of an intention to ~φ. Thus, it seems that in normal circumstances I have voluntary control over what I eat for breakfast, what I say to my colleagues, how much time I spend on my work, how I behave when I drive my car, and so forth.Footnote 3 For in all these cases there is something which I can decide to do and decide not to do. According to Alston, it is clear that we lack this kind of control over the vast majority of our beliefs: I cannot change my beliefs simply as the result of an intention to do so in the way I can choose to say something to a colleague or in the way I can choose to spend an hour writing a letter. I believe that I had two slices of bread for breakfast, that I am now in my study, that it is a sunny day, and so on, but I have no idea how I could now abandon these beliefs. It seems that my beliefs are not under my voluntary control.

Several strategies have been devised to overturn Alston’s argument. First, some philosophers have denied that we lack control over our beliefs. Some of them argue that we can intentionally indirectly acquire or maintain beliefs.Footnote 4 Others grant that we lack intentional control over our beliefs, but argue that we do have compatibilist doxastic control (roughly, responsiveness to evidence) and that such control suffices for doxastic responsibility.Footnote 5 Second, some have granted that we lack control over our beliefs, but have also argued that we can nevertheless influence what we believe in virtue of our control over actions and omissions that influence what we believe, such as deeply considering a proposition, gathering evidence, and working on our intellectual virtues and vices.Footnote 6 Third, some have denied that doxastic control or influence is necessary for doxastic responsibility: we can be responsible for our beliefs even if there is no way that we can control or influence what we believe. They have interpreted doxastic obligations as role oughts, epistemic ideals, ought-to-be’s that imply ought-to-do’s, or doxastic demands.Footnote 7

In this article, I argue against a fourth kind of response to Alston’s argument, one that has been advocated by several philosophers, but which has not yet been subjected to criticism in the literature. I call it the Belief Policies Response (BPR). According to BPR, we lack voluntary control over our beliefs, but we can nonetheless be responsible for our beliefs in virtue of our belief-policies. Belief-policies are norms about what one should believe and what one should not believe in certain evidential or non-evidential circumstances. The position originates with Paul Helm.Footnote 8 Several philosophers, such as Pascal Engel and Ronney Mourad, have embraced Helm’s version of BPR.Footnote 9 Other philosophers defend versions of BPR that differ on certain points from that of Paul Helm. According to Buckareff, for instance, doxastic responsibility can be grounded in our control over our higher-order acceptances concerning our beliefs.Footnote 10 And according to Heller, we are responsible for our beliefs in virtue of our responsibility for our epistemic natures, that is, our desires to form beliefs in accordance with certain dispositions rather than others.Footnote 11 In this paper I provide an argument that counts against all these different versions of BPR. In doing so, I take Paul Helm’s position as an example, since his view is the most detailed version of BPR that we find in the literature. However, I also show how the argument can easily be extended so that it counts against any version of BPR.

It is, of course, crucial for the issue at hand to get a firm grip on what a belief-policy is. Here are some examples of belief-policies that Helm mentions:

(a) One should believe in accordance with one’s evidence (p. 64).

(b) When there is a clash of evidence, then except in the case of well-attested and properly understood divine revelation, belief regarding matters of fact ought to be proportioned to the real probabilities (p. 88).

(c) When the balance of evidence is against p, but it is desired that p is true, then one must believe that p in order to make it true (p. 102).

(d) One must believe those propositions which are presented as a result of the application of acceptable scientific methods, and one may believe any propositions which are consistent with those propositions and which are not excluded by other parts of one’s overall belief-policy (p. 137).

Belief-policies, then, are norms that tell us what (not) to believe in certain evidential or non-evidential circumstances. Now, can we be a bit more precise on the ontology of belief-policies: what sort of thing are they? Unfortunately, Helm is quite ambiguous on this point. He calls belief-policies “ways of forming beliefs” (p. 9), “projects” (p. 58), “strategies” (p. 58), “programmes” (p. 58), “standards” (pp. 5, 78), and even “beliefs” (pp. 66, 115). He says explicitly, though, that one’s evidence underdetermines which belief-policy one should accept. Since non-evidential factors should also be taken into account, most belief-policies are under our voluntary control, so that we are responsible for them. But what belief-policy we accept often makes a significant difference to what we believe. For instance, in a situation in which one has strong scientific evidence against p but in which it is morally desirable to believe that p, belief-policy (c) will tell you to believe that p, whereas belief-policy (d) will forbid believing that p in those circumstances. I think it is, therefore, fair to say that Helm’s position and positions similar to it, such as those of Buckareff and Heller, boil down to the following two theses:

(T1) The adoption of belief-policies is under our voluntary control.

(T2) Belief-policies make a significant difference to what we believe.

Obviously, belief-policies can explain why we bear doxastic responsibility only if these two theses are true: as Helm admits, if we have no control over our belief-policies, then we are not responsible for them, and if belief-policies do not make a difference to the beliefs we hold, then they are irrelevant to doxastic responsibility.

My argument against the strategy of trying to meet Alston’s argument by explaining doxastic responsibility in terms of belief-policies takes the form of a dilemma. The advantage of this objection is that, if it is convincing, it counts not only against Helm’s particular view, but against any approach that grounds doxastic responsibility in belief-policies, such as Buckareff’s and Heller’s positions. The dilemma is the following: either belief-policies are beliefs or they are not. If they are, then (T1) is false. If they are not, then (T2) is false. Let me explain.

First, imagine that belief-policies are themselves beliefs, beliefs as to what one should (not) believe in particular (non-)evidential circumstances. If this is correct, then (T2) will at least sometimes be true. For instance, if I believe that I should only believe some proposition if I have sufficient scientific evidence for it, but find myself with insufficient scientific evidence for p upon considering p, then that will normally suffice for me to abandon my belief that p. And that is because my belief about what I should (not) believe will be part of the evidence which determines whether or not I believe that p. If I have a different belief-policy, say, that I should not believe a proposition unless I have some kind of evidence for it—whether scientific or not—then I may very well come to believe that p upon considering whether p.

The problem with this first horn of the dilemma is that if belief-policies are themselves beliefs, then (T1) seems false, that is, it seems that the adoption of belief-policies is not under our voluntary control. As Helm and most other philosophers admit, beliefs are not under our voluntary control. In fact, the involuntariness of belief was the very reason to attempt to explain doxastic responsibility in terms of belief-policies. This should make us very suspicious of opting for the first horn. We can only opt for the first horn if we have good reason to think that belief-policies are crucially different from our other beliefs and that because of that difference they are under our voluntary control. This seems to be what Helm has in mind. After all, he says that our evidence crucially underdetermines which belief-policy to adopt (p. 58) and that shortage of time sometimes requires us to take a decision on a belief-policy despite having insufficient evidence in favor of or against it (p. 119).

I think that there are several problems with this approach. First, if belief-policies are beliefs that are under our voluntary control, then it is not clear why other beliefs are not under our control. For, it is often the case that our evidence underdetermines whether we should believe or disbelieve a proposition and that shortage of time requires us to take the truth or the falsehood of some proposition for granted. If taking the truth of a proposition for granted is somehow sufficient for believing a proposition, then many of our beliefs will be under our voluntary control. But then we do not need to appeal to belief-policies in order to ground doxastic responsibility. We could simply say that we bear doxastic responsibility in virtue of the fact that under time-pressure we sometimes have to go with one proposition rather than another and that that suffices for belief.

One could, of course, adopt some weaker notion of ‘voluntary control’, on which it is not required that one can intentionally adopt a belief-policy in order for it to be under one’s control. On such a weaker notion of control, it is only required that one’s adoption of a belief-policy is reason-responsive, that is, that one would have adopted a different belief-policy if one had good reasons to do so.Footnote 12 The problem with this response is that if such reason-responsiveness concerning one’s belief-policy suffices for having voluntary control over one’s belief-policy, where that belief-policy is itself a belief, then there is no reason not to think that something similar applies to other beliefs of ours, that is, that reason-responsiveness suffices for our beliefs to be under our control. But then this approach boils down to an already existing solution to the argument from doxastic involuntarism, namely doxastic compatibilism, as defended, for instance, by Sharon Ryan and Matthias Steup (I already mentioned this approach above). This would make an approach in terms of belief-policies redundant.

Second, it is not at all clear why one has to adopt some belief-policy or other if one has insufficient evidence in favor of it or against it. If one has insufficient evidence one way or the other, it seems that what one should do is suspend judgment on whether that belief-policy is correct. But do we not need belief-policies in order to form beliefs? If belief-policies are themselves beliefs, then I think we do not need them in order to form beliefs. For most of the beliefs that I form, I do not first question whether thus believing would fit my evidence or whether it would suit my practical purposes. I believe that I had two slices of bread for breakfast this morning and that it is a sunny day today, but I have not thought about whether these beliefs fit my evidence or whether they meet other norms about what to believe. As to more complex beliefs, such as political beliefs, they do not seem to require belief-policies either. Even if I do not know whether believing some proposition p about a political issue is correct in the sense that it fits my evidence or helps to reach certain practical goals, it can still simply seem to me that p is true or I can still think that p is true and, hence, believe that p. Belief-policies, then, do not seem necessary for forming beliefs. But if belief-policies are not necessary for forming beliefs and if we have insufficient evidence concerning some belief-policy, then we can simply suspend judgment on whether that belief-policy is correct. In fact, it seems that if one has insufficient evidence on the correctness of a belief-policy, one will have as little control over believing or disbelieving that policy as one has over believing or disbelieving other propositions: if one is sufficiently rational, one will immediately find oneself suspending judgment on whether or not that belief-policy is correct.

Finally, if belief-policies are beliefs that we can simply choose to have for non-evidential reasons, say, because beliefs of a particular kind will make us happier, then such belief-policies will not make a difference to our beliefs. Thus, (T2) will be false. For, upon considering whether or not to believe that p, I will realize that I have adopted my belief-policy for pragmatic reasons, such as the fact that beliefs of a particular kind make me happier. I cannot, however, believe some proposition p if my belief-policy indicates that I should believe that p, while knowing that I have chosen that belief-policy for a non-epistemic (not truth-oriented) reason, unless I am deeply irrational. I may believe that I will be happier or that my chances of survival are greater if I believe that I am healthy, or that God exists, or that God does not exist, or that my family loves me, and so forth, but if I find myself with convincing evidence for the fact that I am not healthy, or that God does not exist, or that God does exist, or that my family does not love me, I would not know how to fail to believe these propositions. If, on the other hand, what counts in one’s adopting a belief-policy is merely one’s evidence, then it seems that belief-policies cannot be under our voluntary control: what we believe about what we should believe is then simply a result of the evidence that we find ourselves with. Thus, in that case (T1) will be false.

Second, imagine that belief-policies are not themselves beliefs. Then, what are they? I can think of two things here. First, one might think that belief-policies are acceptances or assumptions, as, for instance, Andrei Buckareff seems to do. One accepts a proposition if and only if one decides to adopt it as a starting point, to assume its truth both in action and in theoretical reasoning.Footnote 13 Second, one could think of belief-policies as desires, as Heller seems to do. Let us start with belief-policies as acceptances. I think that would save (T1), for what we accept, that is, what we adopt as an assumption in the way that we act or the way that we reason, is clearly something that is under our voluntary control. If I do not know what to believe about the existence of God and other religious doctrines, I can nevertheless decide to act as if I am a Christian, as Blaise Pascal famously pointed out. And even if I disbelieve that the theories of quantum mechanics and general relativity are both true, since I know that they are incompatible, I can decide to take them for granted in my scientific work.

Unfortunately, this option does not save (T2). Of course, voluntarily adopting a particular belief-policy makes some difference to what we believe: it presumably makes a difference as to whether or not we believe that we have adopted that policy. But that is not the kind of difference that we are after: we are looking for a difference that adopting a belief-policy makes to a substantial body of our beliefs, so that belief-policies can ground doxastic responsibility in a sufficiently wide variety of cases. It seems, however, that if belief-policies are acceptances, then they cannot make such a difference.

Imagine, for instance, that I adopt the belief-policy of not believing anything that is not supported by scientific research. Imagine that I do not have sufficient evidence for this claim to really believe it, but that I deem it advantageous for my prestige among my academic peers to adopt this policy. I, therefore, accept rather than believe this particular policy. The next morning, I read in the newspaper that more than 9,000 people died in an earthquake. Since I believe that such reports are based on reliable testimony, I immediately believe that more than 9,000 people died in an earthquake. Then, however, I realize that I do not have scientific evidence in favor of this claim: it is merely a newspaper report, based on the authorities’ testimony. Since I accept rather than believe that I should not believe anything unless it is based upon scientific evidence, that acceptance itself does not go into my evidence base and does not count against my belief. Since I believe that the newspaper’s report is reliable, my having accepted the relevant belief-policy will not make a difference to what I believe. It will most probably make a difference to what I write down or to what I say to my colleagues, but that is an altogether different matter. If, on the other hand, I adopt a belief-policy for merely evidential (epistemic) reasons, then it is hard to see how I could fail to also believe that policy (cases of deep irrationality might be an exception to this). But then we will run into the problems that we identified in our discussion of the first horn of the dilemma.

What if we think of belief-policies as desires, as Heller seems to do? For instance, one has the belief-policy of believing that p only if one has sufficient evidence for p if and only if one desires or wants to believe that p only if one has sufficient evidence for p. Now, it seems that if one desires to believe that p only if one has sufficient evidence for p, then one will also accept that one should believe that p only if one has sufficient evidence for p. But maybe not. Maybe one can desire to believe that p only if one has sufficient evidence for p and still not accept that one should believe that p only if one has sufficient evidence for p, for instance, because one realizes that this desire is irrational or otherwise defective. One could, of course, reply that in such a case one does not have an all-things-considered desire to believe that p only if one has sufficient evidence for p, but I will leave this discussion to those who think that belief-policies can save doxastic responsibility from Alston’s argument. Instead, let us ask whether BPR is tenable if belief-policies are desires, whether or not desiring that p entails accepting that p.

I think the answer has to be negative. First, one might wonder whether (T1) is true if belief-policies are desires, that is, whether or not we have sufficient control over our desires. At least many of our desires do not seem to be under our voluntary control. If there is a piece of chocolate in front of me and I smell it, but I do not desire to eat it (because I dislike chocolate), I would not know what to do in order to come to desire to eat that piece of chocolate. If I desire that Syria becomes a democracy, then I would not know what to do in order to no longer desire that Syria becomes a democracy. The same seems true for belief-policies. I do not desire to hold beliefs only if they are the outcome of scientific research and I would not know what to do in order to acquire that desire. I may try to come to have a particular belief-policy, understood as a desire, and I might occasionally be successful. But it is at least controversial whether occasional success is sufficient for voluntary control and it seems that for many desires, such attempts are unlikely to succeed. Second, and even more importantly, it seems that if belief-policies are desires, then they do not make a difference to our beliefs and, hence, cannot explain how we bear doxastic responsibility. Imagine that I come to desire that I hold only beliefs that are the results of academic research. I could desire that, but I would be unable to resist all sorts of beliefs that are not based on scientific research, such as my beliefs about the past, beliefs about my surroundings, and so forth. Moreover, to claim that our desires can make a difference to our beliefs would be to claim that we can voluntarily or intentionally control our beliefs after all. But then belief-policies are redundant; this strategy would boil down to a specific version of doxastic voluntarism, that is, the thesis that we can control our beliefs.

Hence, if one embraces the second horn of the dilemma, belief-policies cannot ground doxastic responsibility either. It follows that any attempt to meet Alston’s argument that grounds doxastic responsibility in belief-policies is doomed to fail.

I do not want to suggest that belief-policies—whatever precisely they are—cannot change over time. Both if they are beliefs and if they are acceptances or desires, they can. If they are beliefs, they will change due to new evidence that one acquires in the course of one’s life. If they are acceptances, then one will voluntarily change certain of one’s belief-policies in the light of new evidential and especially non-evidential factors. And if they are desires, then they might change due to new evidential or non-evidential considerations. The problems remain the same, though: if belief-policies are beliefs, then we seem to lack control over them and if they are acceptances or desires, then they do not seem to make a significant difference to the beliefs we hold.

Let me summarize what I have argued for in this short paper. Alston’s argument against the deontological conception of epistemic justification may or may not be sound—I have argued neither against nor in favor of its premises or its validity. I have argued that if there is a convincing way of meeting Alston’s argument, then, contrary to what some philosophers think, it will not be that of grounding doxastic responsibility in belief-policies. If we want to save the deontological conception of epistemic justification or, more generally, the idea that we bear doxastic responsibility, we will have to find another way to do so.Footnote 14