Introduction

The contributors to this special issue read Delusions and Other Irrational Beliefs and provided insightful and constructive criticism of its main theses. I am grateful for the opportunity to revisit and clarify some of my arguments in the light of their commentaries.

In the book, I argued that there is a tendency towards idealising beliefs in the philosophy of mind. The psychological literature invites us, instead, to consider that beliefs are often badly integrated with other beliefs, unsupported by evidence, resistant to change, and behaviourally inefficacious. Once we accept that everyday beliefs can be irrational in these ways, it is a short step to maintain that there is continuity between everyday beliefs and clinical delusions. Clinical delusions are typically irrational to a greater extent or irrational across more dimensions than non-delusional beliefs, but they are irrational in roughly the same way. Although all the commentators seem to agree that beliefs should not be idealised and find the discussion of widespread irrationality in everyday cognition useful, they remain largely unconvinced by my take on the status of delusions. For different reasons, they claim that it is not helpful to think of delusions as beliefs. Thus, the status of delusions is what I shall focus on here, making a further attempt at persuading my critics. I shall defend a modest position. I fully recognise that the doxastic view of delusions does not tell us everything we want to know about delusions, but I maintain that it is at least as good as the alternative views and, in some respects, preferable.

In “Delusions: Beliefs or In-between States?”, I shall comment on two proposals: that delusions are more like perceptual illusions than they are like beliefs; and that delusions are belief states ‘gone half mad’. According to both proposals, most delusions have an ‘in-between’ status. They are either in between perceptual illusions and doxastic states, or they are failed attempts at believing. These are carefully laid out and attractive positions, as they start from a desire to understand the peculiarities of delusions and aim to account for the apparent gap between typical beliefs and textbook cases of delusions.

In “Delusions and Folk Psychology”, I shall consider the view that the nature of delusions cannot be satisfactorily accounted for within the simplistic descriptive framework of folk psychology. One consideration is that we need to recognise different types of beliefs and identify delusions with one such type rather than describe them all as generic beliefs. Another consideration is that scientific psychology and neuroscience are already showing the limitations of our current folk conceptions of the mind. Thus, to try to fit delusions within folk-psychological discourse is an outdated project. A third consideration, which would vindicate the continuing importance of folk psychology, is that by ascribing beliefs we do not simply describe people’s behaviour but regulate it. Thus, we cannot stretch folk-psychological notions to include all delusions, as most delusions clearly violate the principles which warrant the ascription of beliefs.

I cannot hope to do full justice to the arguments I summarise in “Delusions: Beliefs or In-between States?” and “Delusions and Folk Psychology”, but I shall argue that the basic notion of belief the modest doxasticist has in mind is still useful. No doubt, it is in need of revision in the light of our scientific understanding of the mind and it is intrinsically tied to a whole set of normative notions, but the same can be said for the other folk-psychological, epistemological or phenomenological notions used to account for the nature of delusions, such as perceptual illusions, imaginings, alternative realities, experiences, and acceptances. The proposals put forward by the commentators do not offer an account of the nature of delusions that is entirely independent of, or necessarily more explanatory than, standard doxastic accounts. And to be able to say that, by and large, delusions are beliefs still has theoretical and pragmatic benefits over the relevant alternatives.

Finally, in “Concluding Remarks: The Continuity Thesis”, I shall defend the continuity thesis. Some of the commentators worry that by identifying delusions with irrational beliefs I trivialise delusions. But claiming that delusions are on a continuum with widespread instances of irrational belief does not imply that delusions are a trivial phenomenon. A good theory of the nature of delusions should be informed both by advances in cognitive neuropsychology and by evidence gathered through clinical encounters in psychiatry. Whether the available scientific and clinical evidence supports the doxastic account of delusions is debatable, as my critics know all too well, but it powerfully indicates that there is no radical break between normal and abnormal cognition.

Delusions: Beliefs or In-between States?

In the traditional literature on the nature of delusions, the phenomenological and metacognitive accounts are not particularly appealing as genuine alternatives to the doxastic account of delusions. Phenomenologists argue that at the very core of delusions there is no strange or irrational belief, but a powerful experience that affects one’s entire conception of reality (e.g., [1]). Metacognitivists argue that delusions are acts of imagination which one misinterprets and attributes to oneself as beliefs (e.g., [2]). Both positions highlight important aspects of the phenomenon of clinical delusions, but they can be seen as complementing rather than replacing the doxastic account. This is because neither the phenomenological nor the metacognitive account rules out that the person with delusions has relevant belief states that interact (more or less successfully) with her other cognitive and affective states and with her behaviour. Proponents of these accounts do not deny that the phenomenon of delusions involves strange or irrational beliefs, but they object to the excessive focus on belief states, they suggest that delusions are not formed in the same way as belief states, and, typically, they emphasise the discontinuity rather than the continuity between the phenomenon of delusions and that of irrational beliefs that are more ‘mundane’ in content. In particular, phenomenologists invite us not to forget the experiential and emotional aspects of delusional thought and behaviour, and metacognitivists suggest a hypothesis about the formation of delusions that highlights their similarities with acts of imagination rather than with beliefs.

In this issue’s commentaries, authors propose novel hypotheses about the nature of delusions, and each of these proposals deserves a more in-depth examination than I can offer here. Hohwy and Rajan argue that there is a strong analogy between delusions and illusions, and Schwitzgebel and Tumulty, in their individual contributions, account for delusions as ‘beliefs gone half mad’ or as ‘not quite beliefs’. Although the proposals differ significantly in content and motivation, both object to the standard doxastic account, and both significantly advance the debate on the nature of delusions by suggesting that delusions have an in-between status.

Aren’t Delusions Just Like Perceptual Illusions?

According to Hohwy and Rajan, delusions are analogous to perceptual illusions and are the result of faulty perceptual inference. These claims are based on three arguments: (1) just like illusions, delusions are unrevisable; (2) just like illusions, delusions can have varying levels of circumscription; (3) in people with delusions reasoning competence is intact but reasoning performance is impaired due to deficient inputs.

In the book and elsewhere I have dealt very minimally with theories of delusion formation, for practical and theoretical reasons. First, advances in the field of cognitive neuropsychology are rapid, and the status of even the most influential theories of delusion formation on the market is fluid. This makes it really hard to adjudicate between competing hypotheses. Second, it is debatable whether the fine details of the process of delusion formation can determine what the status of delusions is. If we can clearly distinguish the mechanisms responsible for the formation of different types of mental states, and discover that delusions are formed in the same way as a particular type of mental state, then we have evidence that delusions can be legitimately considered mental states of that type. But this cannot be the whole story: for each type of mental state several causal paths are possible, and the effects of the mental state on other mental states and on behaviour are at least as important in the identification of the type of mental state in question as the causal path. Cannot something start out as an imagining and then become a belief, or vice versa? That is why in the book I focus on the role that delusions play in the mental economy of the people reporting them and on whether the surface (epistemic) features of delusions are shared by typical cases of belief. Here again, I shall not discuss those arguments in favour of the analogy between delusions and perceptual illusions that are based on aetiological considerations. Rather, I shall ask whether the features that Hohwy and Rajan attribute to both delusions and perceptual illusions are shared by beliefs too. I am not looking for a victory of the doxastic account over the competing perceptual account—I shall settle for a tie.

Let’s start with the argument that delusions (just like perceptual illusions) are not revisable.

Many illusions are unrevisable. No matter how many times one measures the Müller-Lyer lines with a ruler, one cannot revise the perceptual inference that they are of unequal lengths. This is similar to delusions against which normal reality testing is powerless. If the unrevisability of illusions is due to some kind of cognitive impenetrability specific to low level sensory processes, then the same may be the case for delusional content [3].

The unrevisability of delusions is considered one of their most distinctive features. Delusions are typically resistant to counterevidence, more so than ordinary beliefs, but it is probably an exaggeration to say that “normal reality testing is powerless against delusions”. Three sources of information recommend caution: the consideration of clinical case studies; the evidence on the effectiveness of cognitive probing (often in the context of cognitive behavioural therapy) in the treatment of delusions; and the reports of people who successfully manage their delusions. We know of some cases reported in the psychological literature (e.g., [4]) in which certain forms of cognitive probing bring some delusions to an end, although the process is often a slow and gradual one. Even when the delusion is not entirely abandoned, the conviction in the content of the delusion can be undermined by challenging the coherence of maintaining the delusion while endorsing other attitudes that fail to support it. The emerging, though still not conclusive, evidence on the success of cognitive behavioural therapy in the treatment of delusions when combined with effective medication (e.g., [5]) raises some further doubts about unrevisability as such. This form of therapy often involves encouraging people to adopt a more critical attitude towards the content of their delusions, and to consider alternative explanations for their baffling experiences. Moreover, in the first-person accounts of delusions published in the Schizophrenia Bulletin, it is striking how people seem to resort to the techniques of critical thinking to distance themselves from their delusional experiences (e.g., [6]), as if they had somehow ‘internalised’ the challenges they had been subject to in their encounters with clinical psychiatrists, or had developed an original way of coping that consists in doubting the content of their delusional states on the basis of considerations of plausibility and coherence.

I began to figure out a way to undo my delusions. I reasoned that if some of my thoughts were disturbed, I could use my unaffected mind to think myself well again. A delusion is a false belief one believes strongly and does not question. Yet one does not want to be misled by or base one’s life on false beliefs. Therefore, it is important to be open to exploring the possibility that each paranoid belief may be a delusion. […] I began designing a “four-step system” to question, recognize, counterargue, and replace delusions. ([6], page 549)

Naturally, there are extremely stubborn delusions to which this type of evidence does not apply. But overall, in the case of delusions, revision is unlikely and arduous rather than impossible. This does not undermine Hohwy and Rajan’s claim that delusions are to some extent cognitively impenetrable: we still need to explain why delusions are harder to revise than the typical belief, and the resistance to counterevidence observed in delusions does support the analogy between delusions and perceptual illusions.

Let’s move on to the idea that delusions can be more or less circumscribed.

Once we realise an illusion is in fact just an illusion it is possible to circumscribe it to some degree such that it does not infect other internal models. For example, we do not revise our overall models of the capabilities of animate and inanimate matter when we experience the ventriloquist illusion… Some delusions also have a degree of circumscription, such as the case of mirrored self-misidentification where the patient merely avoids mirrors rather than is utterly freaked by the presence of a strange lookalike in the mirror [3].

Other illusions are not circumscribed… Similarly, some delusional perceptual inference infects the wider belief system and agency … There will probably be different causes for such differing degrees of integration, but many delusions seem to begin with sensory malfunction in sensory domains for which it is difficult to apply concrete reality testing methods [3].

Hohwy and Rajan observe how some delusions remain circumscribed and some “infect” the whole belief system. This is a very interesting phenomenon that cannot be easily mapped onto specific types of delusions: it is not the case (as one might expect) that monothematic delusions are always circumscribed, and polythematic ones are always elaborated. Delusions of misidentification (such as Capgras) can be very circumscribed. For instance, a man who believes that his wife has been replaced by an impostor can be friendly and even flirty with the alleged impostor and do nothing to retrieve the ‘original’ wife. But other instances of the same type of delusion can show a high level of integration between the delusion and the person’s other attitudes and behaviour. For instance, some people with Capgras delusion can be hostile, even violent and aggressive, towards the alleged impostor. The same variability can be observed in delusions of mirrored self-misidentification (which is the example used by Hohwy and Rajan). In some cases, it is obvious that the person covertly recognises the stranger in the mirror as herself—the stranger is even named after her. In other cases, the person experiences first surprise and then intense distress at the thought of a stranger in the house [7].

In order to assess the analogy between illusions and delusions with respect to circumscription, some more information would be useful. For instance, is it common for very similar illusory experiences to be circumscribed in some subjects and not in others? On what does circumscription depend in the case of perceptual illusions? Depending on the answers to these questions, the phenomenon of circumscription could further support an analogy between illusions and delusions.

Finally, let’s consider the claim that in people with delusions reasoning competence is intact but reasoning performance is impaired (just like in people with perceptual illusions). Hohwy and Rajan believe that identifying the cause of delusion with a failure in reasoning competence is important to my project of arguing for the doxastic account of delusions.

It is easier to defend the position that delusions are of a kind with other irrational beliefs if they are generated by a reasoning competence failure than if they, in contrast to other irrational beliefs, are generated by deficient sensory processing plus intact reasoning competence. The choice of alliances in this debate is thus relevant for how we evaluate the conclusions of DOIB [3].

Should we commit to the idea that for any irrational belief we need to postulate a failure in reasoning competence rather than a failure in reasoning performance? Unless we do, the hypothesis that delusions are generated by “deficient sensory processing plus intact reasoning competence” is perfectly compatible with the doxastic account of delusions. Hohwy and Rajan’s hypothesis does undermine some aetiological accounts of delusions which consider a reasoning competence failure as a necessary condition for delusion formation. But it does not (per se) undermine the claim that delusions are belief states. Hohwy and Rajan’s real target seems to be the two-factor theory of delusion formation, not the doxastic view of delusions in general.

The two-factor theory is problematic for a variety of reasons. It posits a domain general deficit of reasoning competence so it predicts that patients should have widespread delusions and yet patients with monothematic delusions do not. It also predicts that delusions are constantly present, instead of being, as seems to be the case, more dynamically shifting states [3].

I have some sympathy for the two-factor theory, as the authors realise, but, as it stands, it does have some limitations in explanatory power, and it is not the only approach to delusion formation that is compatible with the doxastic account. Even if we were to buy all the arguments against the two-factor theory, we would not have an argument against the doxastic account. As it happens, both charges against the two-factor theory in the passage above are a bit quick. First, the fact that there is a reasoning deficit may make it difficult to explain why some delusions start out as confined to one specific theme, but makes it easier to explain why these delusions often tend to spread. People who initially believe that their spouse has been replaced by an impostor may end up thinking that their entire family, or village, is made of replicas. Other delusions are subject to similar “spreading” over time. Second, the fluctuation of conviction in the delusional content is something that the nature of the delusional experience can help explain, and can also be accounted for by reference to the affective, motivational and contextual factors which are responsible for the fluctuation of conviction in the content of everyday beliefs (see [8]).

In sum, what is Hohwy and Rajan’s take on the status of delusions?

[I]t is not a given that they are irrational beliefs, on a par with other irrational beliefs. Some delusions could very well be better understood as irrational perceptual inferences [3].

The proposal is that delusions are not of a kind with other irrational beliefs, because they are not caused by a reasoning competence failure. But this is based on the assumption that all irrational beliefs derive from a failure of reasoning competence, and no good reason has been offered to believe that this is the case. Notice that the doxastic account is not challenged by the description of delusions the authors favour. When it comes to resistance to counterevidence, belief persistence is an incredibly common phenomenon, and not just among motivated beliefs. This suggests that we should not talk about resistance to counterevidence as a distinctive feature of all delusions. Rather, we should say that typically delusions resist counterevidence to a greater extent than ordinary beliefs. When I dismiss evidence against a theory of mine, but I do not so easily dismiss evidence against the rival theory, I am being irrational in much the same way as a person with a stubborn delusion. She will explain away apparently conflicting evidence and find (sometimes implausible) reasons to uphold the delusion, but the possibility of revision, or at least of reducing her conviction in the delusion, is not completely ruled out. A similar point can be made about circumscription. Beliefs also have varying levels of circumscription, as the much discussed problem of the compartmentalisation of the mind in epistemology shows (e.g., [9, 10]), and in this respect they do not significantly differ from delusions or perceptual illusions.

Hohwy and Rajan’s positive conclusion is that delusions are the result of perceptual inference, but in a framework where the difference between perceptual and doxastic states is downplayed. For this reason, I do not see the aetiological account of delusions defended by Hohwy and Rajan as a challenge to the doxastic account.

The difference between belief and perception lies in the time scale of the represented processes and their degree of invariance or perspective independence. There is no further special difference between them and the issue of rationality applies equally to perception and belief. From this perspective it is therefore easy to see how perception can be irrational [3].

On the whole, Hohwy and Rajan’s well-argued position on the causes and nature of delusions is not damning for the doxastic account. Although the analogy between delusions and perceptual illusions is convincing and enlightening, it does not put much pressure on the view that delusions are belief states. On the basis of surface features, the analogy between delusions and ordinary beliefs is at least as convincing as the analogy between delusions and perceptual illusions. Aetiological considerations could motivate a distinction between the status of beliefs and that of perceptual illusions, but Hohwy and Rajan themselves recognise that the gap between beliefs and illusions in their account is narrow: both can be the result of perceptual inference, and both can be irrational.

Aren’t Delusions ‘Beliefs Gone Half Mad’?

Schwitzgebel and Tumulty notice a potential tension in the doxastic account of delusions. On the one hand, I defend the view that delusions are beliefs, and the title of the book also implies that delusions are irrational beliefs. On the other hand, I concede that some delusional phenomena do not necessarily involve belief states—here the commentators refer to the discussion of disowned thoughts, whose content is not endorsed by the subjects reporting those thoughts (see also [11, 12]).

Through virtually the whole book, Bortolotti presents herself as defending the view that delusions are beliefs against the view that they are not beliefs, without—it seems to me—much recognition of the possibility that at least some of them might be vague, in-betweenish cases, in some respects belief-like and in other respects not-very-belief-like. However, near the end of the book, Bortolotti comes close to endorsing the in-between approach [13].

When she discusses the activity of offering (what seem to the subject to be) good current reasons for belief, Bortolotti explicitly allows that someone doing that badly enough won’t count as doing it at all, and hence couldn’t usefully be ascribed the relevant belief [14].

According to Schwitzgebel, there is a solution to this apparent tension, a solution that in his view I have briefly considered but underestimated in the book. The solution is to think that being a belief is a property that comes in degrees: this is a version of the “sliding scale”, an approach to belief ascription endorsed by Cherniak in his work on minimal rationality [15] and described by Stich [16, 17] as an alternative to the rigidity of some interpretations of the intentional stance [18]. According to Tumulty, I should have acknowledged that when delusions fail to satisfy folk-psychological norms, describing them as beliefs is just a convenient shorthand.

I quickly set aside the sliding scale in chapter one of the book. Let me revisit the proposal now, in the light of Schwitzgebel’s suggestion that my doxastic account of delusions might have more in common with it than I was happy to concede. When we ask whether non-human animals have beliefs, or whether a delusion is a belief, we might think that a ‘yes-or-no’ answer is unsatisfactory. The sliding scale offers a sensible approach: the mental state in question can be considered as more or less of a belief depending on the extent to which its features overlap with features of typical belief states. In this way, it is possible to emphasise the similarities between beliefs and ‘proto-beliefs’ (as the mental representations of non-human animals are sometimes called), or between beliefs and delusions, without neglecting their important differences. Thanks to the sliding scale, both similarities and differences are reflected in the status of these mental states, an in-between status.

There is no doubt that the sliding scale is an attractive option, especially when we compare it to other accounts of belief ascription. But I have some reservations about it. One is about the basic assumptions underlying the account. The sliding scale does not imply a rejection of the rationality constraint on belief ascription. The “typical features” that a state needs to exhibit to count as a belief tend to coincide with what interpretationists consider standards of rationality for beliefs. The idea that there is a necessary connection between being rational and being ascribed beliefs is still at the foundation of the sliding scale. But instead of there being a threshold of rationality that all mental states need to satisfy in order to qualify as beliefs, the sliding scale proposes that being a belief is a matter of degree. Thus, we could say that a representational state guiding the behaviour of a dog with a limited conceptual apparatus, or a delusion reported by a person with Capgras syndrome which is circumscribed and rarely acted upon, is only partially a belief; it is a belief in so far as it satisfies the norms of rationality that we expect beliefs to satisfy (e.g., respecting some inferential connections, guiding action in the relevant circumstances, etc.).

As one of the main objectives in the book is to challenge the existence of a necessary link between rationality and belief ascription, the assumption behind the sliding scale was not something I was prepared to accept: no interesting notion of rationality can be seen as the mark of the intentionality of belief states (this also explains why I do not end up claiming, with Tumulty, that when rational and other norms are violated, belief ascription is just a convenient shorthand; more about this in “Delusions and Folk Psychology”). Compared with the intentional stance, the sliding scale is a more psychologically plausible and flexible approach to belief ascription, but it does not explicitly resist the view that rationality is what determines whether something is a belief. Although there may be reasons to think that some mental states are genuinely in-between, and are not full beliefs, these reasons should not be dictated by whether the mental states in question satisfy norms of rationality.

In the book I suggest that whether the subject manifests some endorsement of the content of a certain mental state is an important criterion to establish whether the mental state is a belief. Thus, I am happy to concede that some delusional phenomena are not belief states (e.g., inserted thoughts that are not endorsed at all or delusional moods) or that they are borderline cases (e.g., badly endorsed delusions). However, most delusions, typical delusions, are sufficiently endorsed to be beliefs, even if they do not satisfy the standards for rational beliefs. The key point is that it is not rationality per se which dictates which mental states are beliefs. The main reason for believing that rationality does not play this central role is that, if we conclude that delusions are not beliefs because of their irrationality, then we need to conclude the same about what we take to be typical cases of belief which also fail to satisfy norms of rationality (this is what I call in the book the ‘double-standards objection’).

Schwitzgebel suggests another approach to the sliding scale option that does not depend on mental states meeting standards of rationality—his view is that belief states need to have belief-like effects.

Beliefs can arise in any old weird way, but—if they are to be beliefs—they cannot have just any old effects. They must have, broadly speaking, belief-like effects; the person in that state must be disposed to act and react, to behave, to feel, and to cognize in the way characteristic of a normal believer-that-P [13]

Since match to a functional profile is a matter of degree, it seems natural to suppose that possession of belief will also, at least sometimes, be a matter of degree [13]

This approach is one that I find very plausible. It partially overlaps with the criterion I propose in the book: one’s endorsement of the content of a mental state is central to whether one’s state is a belief. Such endorsement is usually transparent in behavioural manifestations: in reporting the belief with conviction, in providing reasons in support of the content of the belief, and in acting on the belief in the relevant circumstances. But let me briefly consider some objections against even an enlightened sliding scale approach. The sliding scale might deliver good results in terms of allowing us to discriminate between mental states with more or less belief-like features, but it becomes impractical if we think that a lot hangs on whether an individual is ascribed beliefs.

Suppose we think that only individuals with beliefs and desires (and other similarly complex intentional states) are entitled to a certain form of moral consideration (e.g., because their possession of intentional states indicates that they also have morally relevant interests). Then, the ascription of partial beliefs does not help. Debates about whether non-human animals have beliefs have important repercussions on issues concerning moral status, and debates about whether people with delusions genuinely believe what they say inform claims about their capacity for autonomy and responsibility, about the appropriateness of treatment, and about the potential suspension of rights.

There are many ways to address this issue, which would need to be assessed in much more detail than I can offer here. We may decide that there is no tight connection between intentional-state ascription and debates about moral consideration and autonomy. Alternatively, we may decide that questions about moral consideration and autonomy should also be answered by appealing to degrees: a person with partial beliefs may be only partially autonomous with respect to the decisions and actions motivated by those mental states. All I suggest is that the perceived connection between being ascribed beliefs and being given moral consideration, or being attributed autonomy and moral responsibility, which is made in legal and policy frameworks and in lay conceptions, deserves to be taken into account. If, after philosophical inquiry, we discover that such a connection is justified, then a framework where partial beliefs are ascribed is less practical than a framework according to which there is a minimal requirement for the ascription of beliefs, and ‘belief’ is an on-off notion rather than a graded one. In other words, if we have a minimal notion of belief, and a minimal notion of autonomy and moral consideration to go with it, this might make certain debates more straightforward at least on paper, although of course it would still be very difficult to judge whether a certain individual engaging in a certain form of behaviour meets the minimal standards that apply to ‘believers’.

The other worry is how to flesh out the sliding scale account. The original sliding scale proposes that mental states are either full or partial beliefs depending on how well they meet the standards of rationality that apply to beliefs. In Schwitzgebel’s version, the claim is that delusions have only some of the effects that beliefs usually have, and thus they are neither entirely successful beliefs nor something other than beliefs, but something in between. Schwitzgebel also talks about some predicates being vague and needing contextual information, and draws an analogy between predicates such as “believing that p” (where the person reporting that p does not manifest the right dispositions) and “being tall”. Tumulty seems to offer us a choice: mental states that are not quite beliefs can be described as other types of intentional states (e.g., imaginings) or as more-or-less close approximations to beliefs.

In in-between cases of canonically vague predicates like “tall”, the appropriateness of ascribing the predicate varies contextually, and often the best approach is to refuse to either simply ascribe or simply deny the predicate but rather to specify more detail (e.g., “well, he’s five foot eleven inches”); so too, I would argue, in in-between cases of belief. [13]

What about cases in which the subject’s state plays some but not all of the relevant functional role, has some but not all of the appropriate belief-ish causes and effects? Delusions would appear to be just such a case [13].

Is it possible, then, that cases of delusion are, at least sometimes (when the functional role or dispositional profile is weird enough), cases in an in-betweenish gray zone—not quite belief and not quite failure to believe? [13]

When a subject fails to fit the dispositional profile for believing that p, we have a choice. We can decide it is best to view her as mostly fitting a different profile (e.g. for imagining that p, or for believing that not-p). Alternatively, we can decide that it is best to view her as fitting poorly the profile for believing that p. In the latter case, we are not pressed to find any non-doxastic account of her state. We can simply say that she’s not-quite-believing that p, or believing-badly that p [14].

What delusions are is not spelt out in any more detail. But Schwitzgebel’s position on in-between believing in general is cashed out in his 2001 paper [19], where examples of apparent beliefs that do not give rise to the expected dispositions are described and discussed. If I assert with sincerity and conviction that nurses are poisoning me, whilst nonetheless eating the food they hand me, my delusion is a not-entirely-successful attempt at believing that nurses are poisoning me and qualifies for the in-between status.

There seem to be different versions of the sliding scale on the market. Here are two interpretations of the sliding scale as applied to delusions. One is that delusions are belief states with indeterminate content. If b does not meet the standards of rationality for beliefs, or does not have the right functional profile, then b is a belief but its content cannot be determined with precision and thus it cannot be used in explaining and predicting the behaviour of the person reporting the belief. Back to the example above, it is indeterminate whether I believe that nurses are poisoning me.

The alternative position is that delusions are hybrid states. If b does not meet the standards of rationality for beliefs, or does not have the right functional profile, then b is partially a belief and partially something else. This is compatible with b having determinate content: I may be half-believing and half-imagining that nurses are poisoning me. In the context of delusions, it has been suggested that delusions are in-between beliefs and imaginings, or between beliefs and desires (see [20] and the recent debate featuring Reimer, Bayne, and Graham in the December 2010 issue of Philosophy, Psychiatry and Psychology [21–23]).

In his commentary, Murphy (this issue) raises a point that is at the basis of hybrid accounts, that beliefs are not the only intentional states that can explain and drive behaviour, and so delusions can be described equally well as beliefs and as imaginings. He does not argue for a hybrid account, but concludes that defenders and critics of the doxastic account of delusions reach a stand-off.

[A] state of mind can indeed fail the tests of rationality while explaining behavior, but such states are not beliefs. Imaginings (Currie 2000) are one such state: I can behave in a way that makes it look like I believe that aliens are after me even if I am only imagining, not believing, that aliens are after me. On this view there are lots of intentional states that can drive behavior, even in ways that make sense of the behavior as an expression of the state of mind [24].

Imaginings can cause action, but the circumstances in which they do are typically different from those in which beliefs cause action. Moreover, beliefs tend to cause different types of action from imaginings. If I imagine that there are aliens chasing after me, I can experience the relevant emotions—feel fear and adrenaline mixed together—and react to this imagined situation in much the same way as when I enter the fictional reality of a well-made action movie and empathise with the struggles of the lead character. This form of behaviour will affect my real life (I can instinctively recoil from an image that reminds me of the imagined threat), but it will not last much longer than the time of the pretence (or the movie), and it will at some point clash with my awareness of the unreality of the scenario.

If I genuinely believe that there are aliens after me, the consequences for my behaviour will be typically deeper, longer-lasting and further-reaching. People who suffer from delusions of persecution are obviously distressed by the perceived threat to the point of transforming their lives, often in a costly and disruptive way. They can move home a number of times to escape neighbouring spies, change city and job to avoid meeting hostile aliens, or cut off contact with family and friends to spare them the intrusive and ill-meaning interference of government agencies [25]. This form of behaviour—which is motivated by the content of their delusions and is accompanied by a series of less radical but equally significant safety behaviours [26]—is not the behaviour of someone who is in the grip of her imagination, is absorbed in a movie, or daydreams. It is the behaviour of someone who believes that something genuinely dangerous is out there, threatening her. In most cases, people with delusions behave as people who believe the content of their delusions: they assert delusions with conviction, they defend them with tentative arguments, they act on them and they infer other beliefs from their delusional states.

The rejoinder is familiar by now from the discussion of Hohwy and Rajan’s and Schwitzgebel’s and Tumulty’s views. What about those delusions that are not so consistently held, come and go, or fail to motivate any relevant form of behaviour? When I say that nurses are poisoning me but eat their food nonetheless, am I not fitting the profile of the person who is imagining a threatening situation rather than that of a person who is genuinely worried about the hospital food? Here I think the ‘stand-off’ Murphy talks about can be avoided if we recognise that delusions are generally behaviourally effective, but can fail to guide action due to phenomena that are anything but rare in the psychiatric disorders that manifest with delusions, such as schizophrenia, dementia, and delusional disorders. These may include meta-representational deficits, conflicting attitudes, co-morbidity with depression, and fluctuations in motivation caused by changes in affect (e.g., poverty of action, avolition, flat affect, emotional disturbances). Action that would follow some delusions with bizarre content can also be inhibited by features of the physical and social environment surrounding the agent (see [8] for more details).

With the worry about a possible stand-off in the background, can I confidently claim that the standard doxastic account is preferable to the sliding scale approach? My impression is that the sliding scale approach is to be preferred to a doxastic account according to which (1) delusions are beliefs, and (2) the notion of belief appealed to is strictly regimented by either norms of rationality for beliefs or by the dictates of the typical functional profile of belief states. Such a doxastic account would fail to characterise delusions (and most everyday beliefs) correctly. But if the notion of belief advocated is a basic one, which permits some failures of rationality and some departure from the textbook functional profile of well-behaved beliefs, then I see no obvious advantages in adopting the sliding scale.

The modest doxasticist can describe as beliefs both the representational states guiding Fido in chasing the squirrel up the oak tree, and the delusional states of the hospital patient who suspects that nurses are poisoning her, but eats their food nonetheless. The modest doxasticist does not need to be a revolutionary either—she fleshes out the folk-psychological notion of a subject’s beliefs as states that bear meaningful relationships to the subject’s other intentional states, are sensitive to evidence, can lead to action in some of the relevant circumstances, and can be supported with reasons if they are the type of beliefs that can be the object of justification and deliberation. Although some other intentional states share some of the features above, I still think that the combination of those features is sufficient to demarcate belief-like states from other intentional states and avoid the stand-off.

Delusions and Folk Psychology

In the commentaries by Murphy, Frankish and Tumulty the idea that delusions are beliefs is challenged on the basis of considerations about the role, nature, and future prospects of folk psychology.

Murphy argues that delusions are diagnosed precisely when folk psychology fails to account for people’s behaviour. Thus, treating delusions as beliefs can make us lose sight of what is distinctive about them. In order to press this point, he compares delusions with self-deceptions and culturally relative beliefs. Self-deception does make sense within folk psychology, as we can understand how people are motivated to believe something that is likely to be false. Beliefs that are strange from our own perspective, but are embedded in a form of life or a cultural framework, can also be explained. But delusions are more puzzling.

Frankish suggests that the simplistic notion of belief used by interpretationists and by myself needs to be better qualified given the often hidden complexities of folk psychology and the recent advances in the scientific study of belief states. The proposed qualification takes the form of a distinction between types of beliefs. Frankish talks about level 1 and level 2 beliefs, and maintains that such a distinction is implicit in our folk-psychological practices. In his account, when delusions are doxastic states, they are like acceptances or policies (level 2 beliefs) which are consciously formed and reported, but not always manifested in behaviour.

Tumulty stresses the regulative function of folk psychology. In a very rich and stimulating commentary, she puts forward a view of folk psychology as a way of exercising control over ourselves and others. In her picture, norms dictated by rationality and the functional profile of well-behaved beliefs are essential to the success of belief ascription, not because people need to obey such norms in order for beliefs to be ascribed to them, but because ascribing beliefs to them disposes them to obey the norms. Thus, for Tumulty, it is a mistake to divorce belief ascription from rationality.

Aren’t There Different Types of Beliefs?

In the book I argue that there are different types of beliefs, and I characterise what is in common among them in very broad terms. I notice how ideological and religious beliefs can have a more relaxed relationship with some forms of empirical evidence than scientific beliefs, and how opinions are more likely to be defended with reasons than perceptual beliefs.

That said, I do not commit to a particular metaphysical view of what beliefs are. I compare the notion of belief to a fictional character in a fairy tale, like Cinderella. There may not be any girl who was mistreated by her step-sisters and who, after attending the royal ball, lost one of her shoes and ended up marrying a prince. But there are girls out there who are subject to abuse in their families and find freedom, love, and happiness later in life. Similarly, beliefs as we conceive of them may not exist outside the fiction of folk-psychological discourse. But there must be some respectable psychological kind whose features approximate those of beliefs and can help explain some of the phenomena that we now explain by appealing to beliefs and to their relationship with other intentional states, with evidence and with action.

In their insightful commentaries, Murphy and Frankish ask legitimate questions. Is it worth spending time on the folk-psychological notion of belief, which is unlikely to denote a natural kind and is likely to be replaced in a mature science of the mind? Isn’t a general, one-size-fits-all notion of belief also unlikely to make good sense of our folk-psychological practices? Couldn’t we distinguish between types of beliefs with different features and then ask whether delusions are beliefs of a particular type?

Perhaps the very diverse causes of belief that folk thought recognizes is evidence that belief is not a natural kind, and that a mature psychology should recognize a number of different sorts of intentional state with different relations to each other and to behavior. Perhaps we need to distinguish deliberative states that do meet the rationality constraints as a special kind of intentional state [24].

I want to introduce a basic distinction between two types of belief, which is latent in folk psychology. On the one hand, we ascribe beliefs to a wide variety of creatures and artefacts on the basis of their unreflective, nonverbal behaviour and without assuming that the attitudes we ascribe can be introspected, are under personal control, or are functionally discrete. On the other hand, we also use the term ‘belief’ in a more restricted way, for a state that is available to consciousness, is controlled, and can be selectively employed in reasoning and decision making [27].

It is difficult to resist the appeal of these suggestions, as any philosopher of mind is aware that the differences that can be found among the very heterogeneous mental states we call ‘beliefs’ are the main reason why a list of necessary and sufficient conditions for beliefs is so elusive. Beliefs differ in formation process, content, duration, accessibility, susceptibility to justification and revision, integration with other intentional states, manifestability, and so on. What I am still not sure about is whether it is worth clustering these undeniable differences into types or levels of beliefs.

For instance, Frankish claims that behavioural dispositions are not open to introspection, are not explicitly deliberated about, and are graded, whereas acceptances are conscious, controlled, and binary. Murphy proposes that we distinguish the rational outputs of deliberation from other forms of beliefs which are much more vulnerable to irrationality. Both suggestions are plausible, but my concern is that they will not solve the problem of having mental states that deviate from the standard (whether this is presented as a functional profile, as a set of norms of rationality, or as necessary and sufficient conditions for a type of belief). The problem is that the proposed clusterings are to some extent arbitrary. Some behavioural dispositions are open to introspection, and some acceptances may not be explicitly deliberated about. Frankish also talks about how policies and acceptances can give rise in time to level 1 beliefs, rendering the picture extremely fluid. Similarly, pace Murphy, irrationality does not spare the outputs of deliberation, whereas some behavioural dispositions we have not consciously adopted or reflected upon can be perfectly in line with other beliefs, well-supported by evidence, and consistently manifested in behaviour. If this level of flexibility between levels or types of beliefs is acknowledged, then we lose the motivation to codify beliefs into types. We can say that many delusions are more like acceptances than they are like behavioural dispositions, but we have not gained much in explanatory power. We have refined our conceptual resources and noticed how some features of beliefs often go together, and this may help us provide a better descriptive account of delusions. But I do not see how the new description can undermine or support the doxastic account of delusions.

It is another matter if we take these prima facie differences among beliefs to justify a more radical conclusion (which I take to be Murphy’s aim): that belief is not a natural kind and is just a placeholder, soon to be replaced not by one but by a set of more scientifically respectable notions. In this case, the initial observation that there are differences among types of beliefs leads to scepticism about the folk-psychological framework as a whole. There are certainly good reasons for this scepticism, but I want to throw two questions into the debate. One is whether delusions give us a special reason to challenge the unitary nature of belief states. Clearly, there are differences among ordinary beliefs which equally need explaining. Delusions are in good company when they stretch the domain of application of the concept ‘belief’, and some of the examples provided by Murphy (motivated and ideological beliefs) are as different from the ideal or standard belief (e.g., the rational one or the one with the right functional profile) as delusions are. (More about this in “Concluding Remarks: The Continuity Thesis”.)

The other question is what we ought to do while we wait for scientific psychology and neuroscience to hand us a new mental vocabulary. As I suggested earlier, folk-psychological notions such as beliefs, desires, and intentions pervade our understanding of minded beings and are central to the systematization of our moral intuitions. It is essential to flag inconsistencies in these notions and challenge some of their uses, but it would be difficult to do without them altogether at this stage. Murphy says: “Bortolotti’s arguments […] may not serve as a foundation for a developed science of abnormal intentional states” [24]. True. But what shall we say about delusions, self-deception, etc. while the science of abnormal intentional states reaches maturity? And how are we going to develop such a science if not by gradually revising our existing conceptual framework?

In the book I was attempting to offer reasons to divorce the intentionality of belief states from rationality, and to provide a minimal account of belief that could capture a general phenomenon without obscuring the obvious differences between types of belief. This is one way (a moderately revisionist way) to look forward. Surely there are more revolutionary approaches, and it is possible that these can offer a better foundation for the science of abnormal intentional states. But to my knowledge they cannot yet solve the puzzle of delusions, or even explain what delusions are to better effect than modest doxasticism.

What if Folk Psychology is Regulative?

Tumulty is the most sympathetic of all the commentators to one of the main objectives in the book, as she seems genuinely convinced that there is continuity between so-called irrational beliefs and delusions. But she is also the least sympathetic to the other main objective, that is, the attempt to distinguish the criteria for ascribing beliefs from the criteria for rationality. The difference between Tumulty’s position and mine is that while I generously hand the status of beliefs to both delusions and more mundane instances of irrational thought, she is tempted to deny it to both. We can conveniently talk about beliefs, Tumulty says, but for something to be a belief certain norms need to be satisfied: “when a subject violates a norm that shapes an important part of the dispositional profile for a belief, she fails fully to have the belief in question” [14].

Why are norms so central to our folk-psychological practices? According to Tumulty, belief ascription has a regulative (as opposed to merely descriptive) function: we mould people’s behaviour by ascribing beliefs to them.

[M]any subjects presenting with delusions don’t seem to be exercising much virtual control of the kind that would make a belief-ascription appropriate. In talking about subjects like that, then, our ascriptions of belief may be merely convenient short-hand. The ascriptions signal that these subjects possess a certain cluster of dispositions, relevant in this conversational context, but don’t commit us further on the questions of how belief-like their behavior will be in general, or in the future, or in dramatically different contexts [14].

According to Tumulty, without reference to the norms that govern the ascription of beliefs it would be difficult to tell beliefs apart from other intentional states, and to characterise adequately the difference between them. Preserving the link between rationality (and other norms) and belief ascription is a way to differentiate beliefs from other intentional states that share some of their features. Telling beliefs apart from other intentional states is an important project, but I think it can and should be done without appealing to norms of rationality (as I argued in “Aren’t Delusions ‘Beliefs Gone Half Mad’?”). Even irrational or badly behaved beliefs play a significantly different role in people’s mental economies from that of imaginings and desires—the key issue seems to be the capacity that a subject has to endorse the content of the report she makes or the thought she entertains. Where that capacity is radically compromised, the basis for ascribing beliefs is undermined.

One way in which I attempt to develop this idea in the book is by reference to the construction of self-narratives. Beliefs that are endorsed tend to be woven into stories that not only serve to impose coherence on past experiences, but also shape future behaviour and give direction to one’s life, creating a dynamic concept of oneself as an agent. Not surprisingly, Tumulty appreciates the reference to self-narratives in this context, as it goes hand in hand with some recognition of the regulative function of folk psychology. As narrators, we exercise control over our own mental states. We select which ones are important enough to be integrated in the story and slowly build a picture of ourselves as actors that we are happy with. This process of self-creation (see also [28]) mirrors the ‘moulding’ that we are often responsible for when we are interpreters of others. If people report something with conviction, we expect them to defend their claim with reasons and to act in a way that does not conflict with their report. Such expectations have the potential to affect the way in which other people behave. When we construct a self-narrative, we turn the same form of regulative attention onto ourselves.

Tumulty and I are in perfect agreement on this way of presenting the issue, but we seem to differ with respect to the question whether people with delusions can count as ‘believers’ and whether they can integrate delusions in their self-narratives. I would say they can, and Tumulty would say they cannot. In most circumstances, people with delusions do not lose their capacity for constructing a self-narrative, and they seem to be able to include many aspects of their delusional experiences and thought into a picture of themselves that guides their future actions [29]. It is precisely the capacity for constructing a delusional narrative, that is, a narrative in which the delusion is well-integrated and often plays a dominant role, that sometimes makes it really hard to get rid of the delusion. When a delusion is doubted and challenged, a whole narrative identity comes close to collapse, generating understandable anxiety and depression.

The narratives constructed by people with delusions are not necessarily good narratives, where ‘goodness’ depends on the extent to which they correspond to real-life events and on their internal coherence. Gerrans [30] describes people with delusions as unreliable autobiographers, because their self-narratives may diverge substantially from the narratives that a third person would construct of them, and because they may be characterised by internal tension. People with delusions may give excessive importance to some events due to disturbances of salience, include distorted or fabricated memories in the narrative as a result of confabulation, or construct narratives that lack internal coherence due to conflict between their delusion and some of their other beliefs.

Just as I care about the distinction between having a belief and having a rational belief, so I care about the distinction between constructing a self-narrative and constructing a good self-narrative. For the most part, people with delusions have beliefs and construct self-narratives. They often have irrational beliefs and they often construct bad narratives (people without delusions do that too). There are exceptions: for instance, in people at advanced stages of dementia, access to autobiographical memories might be so seriously compromised that it prevents both the integration of delusions into a self-narrative and the construction of a narrative identity. This has implications for their capacity for autonomous agency [29] and may be one of the contexts in which the concepts of ‘in-between believing’ or ‘not-quite-belief’ become useful.

In conclusion, the distinction between rational and irrational beliefs and the distinction between good and bad self-narratives do not threaten the conception of folk psychology as a regulative exercise. Moulding is at work both when we interpret others and when we try to make sense of who we are and what is important to us. Sometimes it works, and people take what Tumulty calls ‘corrective action’ or respond to what Davidson called ‘Socratic tutoring’; sometimes it fails. As I argued in “Aren’t Delusions Just Like Perceptual Illusions?”, the moulding we expect from our attributions of belief does not always, or even typically, fail when we direct our attention towards people with delusions.

Concluding Remarks: The Continuity Thesis

The picture I put forward in the book, and have developed further here in response to my generous critics, is one of continuity between delusions and other irrational beliefs. I find this picture extremely attractive for a variety of reasons, but some of the commentators have expressed concerns about it. In particular, the worry is that, by stressing the continuity between delusions and everyday irrational beliefs (prejudices, superstitions, etc.), delusions get trivialised.

Murphy [24] maintains that delusions “are attributed when we run out of the explanatory resources provided to us by our folk understandings of how the mind works.” I am not sure this is true of the general phenomenon of delusions. The content of delusions of jealousy and persecution, both very common delusions, is often anything but bizarre. Reports are often so mundane that accurate diagnosis becomes problematic and must rely on whether other psychotic symptoms accompany them. It is not that strange for me to come to believe that my partner is cheating on me, or that my sister is boycotting my efforts to get my mum to appreciate what I do. Many widespread delusions have nothing incomprehensible about them. They are extreme versions of the motivated beliefs, or beliefs based on biased evidence, that we all have some experience of.

There are, of course, much more bizarre delusions. But although their content is puzzling, the way in which people talk about their delusions, the tentative explanations they offer for them, and the behaviour these delusions motivate often fit perfectly well with “our folk understanding of how the mind works”. If I believe I am dead, I stop eating and lie motionless in bed. If I believe that President Obama has a secret crush on me and my friends find this implausible, then I may try to persuade them by explaining that Obama noticed me at a demonstration a few years ago and has been trying to communicate with me ever since.

Obviously, there are differences between a jealous person and a person with delusions of jealousy, just as there are differences between someone who is slightly paranoid and someone with delusions of persecution. These differences are not necessarily manifested in the epistemic features of the relevant reports. It may be irrational to believe in the unfaithfulness of one’s partner in the absence of robust evidence, but this ‘irrationality’ does not demarcate delusions from everyday beliefs. Rather, the differences are apparent in social functioning and levels of well-being. Hohwy and Rajan argue for a forensic aspect of delusions, on which delusions are distinguished from irrational beliefs of a more mundane sort by poor functioning and by impairments of autonomous agency and moral responsibility.

[W]e are most prepared to attribute delusions and initiate clinical arrangements when there is impairment to decision-making, autonomy and responsibility. In fact this seems to be the watershed between delusions and other delusion-like states, whether we think these states are beliefs or some other kind of mental state [3].

I wholeheartedly agree with Hohwy and Rajan that poor functioning is an important (and often neglected) aspect of delusions, but I would distance myself from the widespread assumption that autonomy and moral responsibility are necessarily compromised in people with delusions. As I suggested in “What if Folk Psychology is Regulative?”, not all people with delusions lose the capacity to develop self-narratives and, arguably, this is one of the capacities that underlie autonomy, as it provides a sense of self that can shape future decisions and behaviour. Some have defended the view that psychological well-being ensues from self-narratives that are coherent and fairly accurate representations of autobiographical events (e.g., [31]). In psychiatric disorders accompanied by far-fetched delusions that are integrated into the self-narrative, a gap opens up between the story and reality. This gap compromises social functioning, as the way in which the agent sees herself can be significantly different from the way in which others see her, and her account of key events might also diverge from the accounts provided by others. Whereas the presence of delusions does not necessarily imply a failure of autonomy and moral responsibility, it can compromise agents’ success in shaping their future in a way that is conducive to their well-being. It can also justify attributions of reduced accountability for actions that are motivated by the delusional beliefs or for actions that would not have been performed but for the delusion.

Delusions remain a fascinating and largely mysterious phenomenon for clinical psychiatrists, philosophers, and cognitive scientists alike. The modest doxasticist cannot hope to shed light on all aspects of delusions, and here I have largely remained silent on aetiological issues and on debates about diagnosis and treatment. All I have attempted to show is that, when it comes to a reasoned account of the nature of delusions, the modest doxasticist is no worse off than her more fashionable opponents. We can build complexity and explanatory power into a doxastic account of delusions, as long as we resist the temptation to idealise beliefs and give up the project of demarcating delusions from everyday irrational beliefs on exclusively epistemic grounds.