I argue that inference can tolerate forms of self-ignorance and that these cases of inference undermine canonical models of inference on which inferrers have to appreciate (or purport to appreciate) the support provided by the premises for the conclusion. I propose an alternative model of inference that belongs to a family of rational responses in which the subject cannot pinpoint exactly what she is responding to or why, where this kind of self-ignorance does nothing to undermine the intelligence of the response.
Recent epistemology of modality has seen a growing trend towards metaphysics-first approaches. By contrast, this paper offers a more philosophically modest account of justifying modal claims, focusing on the practices of scientific modal inferences. Two ways of making such inferences are identified and analyzed: actualist-manipulationist modality (AM) and relative modality (RM). In AM, what is observed to be or not to be the case, in actuality or under manipulations, allows us to make modal inferences. AM-based inferences are fallible, but the same holds for practically all empirical inquiry. In RM, modal inferences are evaluated relative to what is kept fixed in a system, such as a theory or a model. RM-based inferences are more certain but framework-dependent. While elements of both AM and RM can be found in some existing accounts of modality, it is worth highlighting them in their own right and isolating their features for closer scrutiny. This helps to establish their respective epistemologies, free from some strong philosophical assumptions often attached to them in the literature. We close by showing how combining these two routes yields a view that accounts for a rich variety of modal inferences in science.
Is imagination a source of knowledge? Timothy Williamson has recently argued that our imaginative capacities can yield knowledge of a variety of matters, spanning from everyday practical matters to logic and set theory. Furthermore, imagination for Williamson plays a similar epistemic role in cognitive processes that we would traditionally classify as either a priori or a posteriori, which he takes to indicate that the distinction itself is shallow and epistemologically fruitless. In this chapter, I aim to defend the a priori–a posteriori distinction from Williamson’s challenge by questioning his account of imagination. I distinguish two notions of imagination at play in Williamson’s account – sensory vs. belief-like imagination – and show that both face empirical and normative issues. Sensory imagination seems neither necessary nor sufficient for knowledge, whereas belief-like imagination isn’t adequately disentangled from inference. Additionally, Williamson’s examples are ad hoc and don’t generalize. I conclude that Williamson’s case against the a priori–a posteriori distinction is unconvincing, and so is the thesis that imagination is an epistemic source.
Inferences from the absence of evidence are common in ordinary speech, but when used in scientific argumentation they are usually considered deficient or outright false. Yet, as demonstrated here with the help of various examples, archaeologists frequently use inference and reasoning from absence, often allowing it a status on par with inferences from tangible evidence. This discrepancy has not been examined so far. The article analyses it by drawing on philosophical discussions concerning the validity of inference from absence, using probabilistic models that were originally developed to show that such inferences are weak and inconclusive. The analysis reveals that inference from absence can indeed be justified in many important situations of archaeological research, such as excavations carried out to explore the past existence and time-span of sedentary human habitation. The justification is closely related to the fact that archaeology explores the human past via its material remains. The same analysis points to instances where inference from absence can have comparable validity in other historical sciences, and to research questions in which archaeological inference from absence will be problematic or totally unwarranted.
Christian apologists, like William Lane Craig and Stephen T. Davis, argue that belief in Jesus’ resurrection is reasonable because it provides the best explanation of the available evidence. In this article, I refute that thesis. To do so, I lay out how the logic of inference to the best explanation (IBE) operates, including what good explanations must be and do by definition, and then apply IBE to the issue at hand. Multiple explanations—including (what I will call) The Resurrection Hypothesis, The Lie Hypothesis, The Coma Hypothesis, The Imposter Hypothesis, and The Legend Hypothesis—will be considered. While I will not attempt to rank them all from worst to best, what I will reveal is how and why The Legend Hypothesis is unquestionably the best explanation, and The Resurrection Hypothesis is undeniably the worst. Consequently, not only is Craig and Davis’ conclusion mistaken, but belief in the literal resurrection of Jesus is irrational. In presenting this argument, I do not take myself to be breaking new ground; Robert Cavin and Carlos Colombetti have already presented a Bayesian refutation of Craig and Davis’ arguments. But I do take myself to be presenting an argument that the average person (and philosopher) can follow. It is my goal for the average person (and philosopher) to be able to clearly understand how and why the hypothesis “God supernaturally raised Jesus from the dead” fails utterly as an explanation of the evidence that Christian apologists cite for Jesus’ resurrection.
I will argue that a person is causally responsible for believing what she does. Through inference, she can sustain and change her perspective on the world. When she draws an inference, she causes herself to keep or to change her take on things. In a literal sense, she makes up her own mind as to how things are. And, I will suggest, she can do this voluntarily. It is in part because she is causally responsible for believing what she does that there are things that she ought to believe, and that what she believes can be to her credit or discredit. I won’t pursue these ethical matters here, but will focus instead on the metaphysics that underpins them.
It is traditionally thought that metaphorical utterances constitute a special—nonliteral—kind of departure from lexical constraints on meaning. Dan Sperber and Deirdre Wilson have been forcefully arguing against this: according to them, relevance theory’s comprehension/interpretation procedure for metaphorical utterances does not require details specific to metaphor (or nonliteral discourse); instead, the same type of comprehension procedure as that in place for literal utterances covers metaphors as well. One of Sperber and Wilson’s central reasons for holding this is that metaphorical utterances occupy one end of a continuum that includes literal, loose and hyperbolic utterances with no sharp boundaries in between them. Call this the continuum argument about interpreting metaphors. My aim is to show that this continuum argument doesn’t work. For if it were to work, it would have an unwanted consequence: it could be converted into a continuum argument about interpreting linguistic errors, including slips of the tongue, of which malaprops are a special case. In particular, based on the premise that the literal–loose–metaphorical continuum extends to malaprops also, we could conclude that the relevance-theoretic comprehension procedure for malaprops does not require details specific to linguistic errors, that is, details beyond those already in place for interpreting literal utterances. Given that we have good reason to reject this conclusion, we also have good reason to rethink the conclusion of the continuum argument about interpreting metaphors and consider what additional (metaphor-specific) details—about the role of constraints due to what is lexically encoded by the words used—might be added to relevance-theoretic comprehension procedures.
This chapter explores the idea that causal inference is warranted if and only if the mechanism underlying the inferred causal association is identified. This mechanistic stance is discernible in the epidemiological literature, and in the strategies adopted by epidemiologists seeking to establish causal hypotheses. But the exact opposite methodology is also discernible, the black box stance, which asserts that epidemiologists can and should make causal inferences on the basis of their evidence, without worrying about the mechanisms that might underlie their hypotheses. I argue that the mechanistic stance is indeed a bad methodology for causal inference. However, I detach and defend a mechanistic interpretation of causal generalisations in epidemiology as existence claims about underlying mechanisms.
We argue that a modified version of Mill’s method of agreement can strongly confirm causal generalizations. This mode of causal inference implicates the explanatory virtues of mechanism, analogy, consilience, and simplicity, and we identify it as a species of Inference to the Best Explanation (IBE). Since rational causal inference provides normative guidance, IBE is not a heuristic for Bayesian rationality. We give it an objective Bayesian formalization, one that has no need of principles of indifference and yields responses to the Voltaire objection, van Fraassen’s Bad Lot objection, and John Norton’s recent objection to IBE.
In this article, I will provide a critical overview of the form of non-deductive reasoning commonly known as “Inference to the Best Explanation” (IBE). Roughly speaking, according to IBE, we ought to infer the hypothesis that provides the best explanation of our evidence. In section 2, I survey some contemporary formulations of IBE and highlight some of its putative applications. In section 3, I distinguish IBE from C.S. Peirce’s notion of abduction. After underlining some of the essential elements of IBE, the rest of the article is organized around an examination of various problems that IBE confronts, along with some extant attempts to address these problems. In section 4, I consider the question of when a fact requires an explanation, since presumably IBE applies only in cases where some explanation is called for. In section 5, I consider the difficult question of how we ought to understand IBE in light of the fact that among philosophers, there is significant disagreement about what constitutes an explanation. In section 6, I consider different strategies for justifying the truth-conduciveness of the explanatory virtues, e.g., simplicity, unification, scope, etc., criteria which play an indispensable role in any given application of IBE. In section 7, I survey some of the most recent literature on IBE, much of which consists of investigations of the status of IBE from the standpoint of the Bayesian philosophy of science.
This essay advances and develops a dynamic conception of inference rules and uses it to reexamine a long-standing problem about logical inference raised by Lewis Carroll’s regress.
What is an inference? Logicians and philosophers have proposed various conceptions of inference. I shall first highlight seven features that contribute to distinguish these conceptions. I shall then compare three conceptions to see which of them best explains the special force that compels us to accept the conclusion of an inference, if we accept its premises.
What are the prospects (if any) for a virtue-theoretic account of inference? This paper compares three options. Firstly, assess each argument individually in terms of the virtues of the participants. Secondly, make the capacity for cogent inference itself a virtue. Thirdly, recapture a standard treatment of cogency by accounting for each of its components in terms of more familiar virtues. The three approaches are contrasted and their strengths and weaknesses assessed.
Defenders of Inference to the Best Explanation claim that explanatory factors should play an important role in empirical inference. They disagree, however, about how exactly to formulate this role. In particular, they disagree about whether to formulate IBE as an inference rule for full beliefs or for degrees of belief, as well as how a rule for degrees of belief should relate to Bayesianism. In this essay I advance a new argument against non-Bayesian versions of IBE. My argument focuses on cases in which we are concerned with multiple levels of explanation of some phenomenon. I show that in many such cases, following IBE as an inference rule for full beliefs leads to deductively inconsistent beliefs, and following IBE as a non-Bayesian updating rule for degrees of belief leads to probabilistically incoherent degrees of belief.
This paper deals with the question of agency and intentionality in the context of the free-energy principle. The free-energy principle is a system-theoretic framework for understanding living self-organizing systems and how they relate to their environments. I will first sketch the main philosophical positions in the literature: a rationalist Helmholtzian interpretation (Hohwy 2013; Clark 2013), a cybernetic interpretation (Seth 2015b) and the enactive affordance-based interpretation (Bruineberg and Rietveld 2014; Bruineberg et al. 2016), and will then show how agency and intentionality are construed differently on these different philosophical interpretations. I will then argue that a purely Helmholtzian interpretation is limited, in that it can only account for agency in the context of perceptual inference. The cybernetic account cannot give a full account of action, since purposiveness is accounted for only to the extent that it pertains to the control of homeostatic essential variables. I will then argue that the enactive affordance-based account attempts to provide a broader account of purposive action without presupposing goals and intentions coming from outside of the theory. In the second part of the paper, I will discuss how each of these three interpretations conceives of the sense of agency and intentionality in different ways.
This thesis brings together two concerns. The first is the nature of inference—what it is to infer—where inference is understood as a distinctive kind of conscious and self-conscious occurrence. The second concern is the possibility of doxastic agency. To be capable of doxastic agency is to be such that one is capable of directly exercising agency over one’s beliefs. It is to be capable of exercising agency over one’s beliefs in a way which does not amount to mere self-manipulation. Subjects who can exercise doxastic agency can settle questions for themselves. A challenge to the possibility of doxastic agency stems from the fact that we cannot believe or come to believe “at will”, where this in turn seems to be so because belief “aims at truth”. It must be explained how we are capable of doxastic agency despite the fact that we cannot believe or come to believe at will. On the orthodox ‘causalist’ conception of inference, for an inference to occur is for one act of acceptance to cause another in some specifiable “right way”. This conception of inference prevents its advocates from adequately seeing how reasoning could be a means to exercise doxastic agency, as it is natural to think it is. Suppose, for instance, that one reasons and concludes by inferring, where one’s inference yields belief in what one infers. Such an inference cannot be performed at will. We cannot infer at will when inference yields belief any more than we can believe or come to believe at will. When it comes to understanding the extent to which one could be exercising agency in such a case, the causalist conception of inference suggests that we must look to the causal history of one’s concluding act of acceptance, the nature of the act’s being determined by the way in which it is caused. What results is a picture on which such reasoning as a whole cannot be action. We are at best capable of actions of a kind which lead causally to belief fixation through “mental ballistics”.
The causalist account of inference, I argue, is in fact either inadequate or unmotivated. It either fails to accommodate the self-consciousness of inference or is not best placed to play the very explanatory role which it is put forward to play. On the alternative I develop, when one infers, one’s inference is the conscious event which is one’s act of accepting that which one is inferring. The act’s being an inference is determined, not by the way it is caused, but by the self-knowledge which it constitutively involves. This corrected understanding of inference renders the move from the challenge to the possibility of doxastic agency to the above ballistics picture no longer tempting. It also yields an account of how we are capable of exercising doxastic agency by reasoning despite being unable to believe or come to believe at will. In order to see how such reasoning could amount to the exercise of doxastic agency, it needs to be conceived of appropriately. I suggest that paradigm reasoning which potentially amounts to the exercise of doxastic agency ought to be conceived of as primarily epistemic agency—agency the aim of which is knowledge. With inference conceived as suggested, I argue, it can be seen how engaging in such reasoning can just be successfully exercising such agency.
Consider the following three claims. (i) There are no truths of the form ‘p and ~p’. (ii) No one holds a belief of the form ‘p and ~p’. (iii) No one holds any pairs of beliefs of the form {p, ~p}. Irad Kimhi has recently argued, in effect, that each of these claims holds and holds with metaphysical necessity. Furthermore, he maintains that they are ultimately not distinct claims at all, but the same claim formulated in different ways. I find his argument suggestive, if not entirely transparent. I do think there is at least an important kernel of truth even in (iii), and that (i) ultimately explains what’s right about the other two. Consciousness of an impossibility makes belief in the obtaining of the corresponding state of affairs an impossibility. Interestingly, an appreciation of this fact brings into view a novel conception of inference, according to which it consists in the consciousness of necessity. This essay outlines and defends this position. A central element of the defense is that it reveals how reasoners satisfy what Paul Boghossian calls the Taking Condition and do so without engendering regress.
The paper is dedicated to particular cases of interaction and mutual impact between philosophy and cognitive science. Thus, philosophical preconditions in the middle of the 20th century shaped the newly born cognitive science as mainly based on conceptual and propositional representations and syntactic inference. Further developments towards neural networks and statistical representations did not change the prejudice much: many still believe that network models must be complemented with some extra tools that would account for properly human cognitive traits. I address some real implemented connectionist models that show how the ‘new associationism’ of the neural network approach may not only surpass Humean limitations but also realistically explain abstraction, inference and prediction. I then dwell on Predictive Processing theories in a little more detail to demonstrate that sophisticated statistical tools applied to a biologically realist ontology may not only provide solutions to scientific problems or integrate different cognitive paradigms but also offer some philosophical insights. To conclude, I touch on a certain parallelism between Predictive Processing and philosophical inferentialism as presented by Robert Brandom.
The paper offers an account of inference. The account underwrites the idea that inference requires that the reasoner takes her premises to support her conclusion. I reject views according to which such ‘takings’ are intuitions or beliefs. I sketch an alternative view on which inferring consists in attaching what I call ‘inferential force’ to a structured collection of contents.
It is well known that there are at least two sorts of cases where one should not prefer a direct inference based on a narrower reference class, in particular: cases where the narrower reference class is gerrymandered, and cases where one lacks an evidential basis for forming a precise-valued frequency judgment for the narrower reference class. I here propose (1) that the preceding exceptions exhaust the circumstances where one should not prefer direct inference based on a narrower reference class, and (2) that minimal frequency information for a narrower (non-gerrymandered) reference class is sufficient to yield the defeat of a direct inference for a broader reference class. By the application of a method for inferring relatively informative expected frequencies, I argue that the latter claim does not result in an overly incredulous approach to direct inference. The method introduced here permits one to infer a relatively informative expected frequency for a reference class R', given frequency information for a superset of R' and/or frequency information for a sample drawn from R'.
De Ray argues that relying on inference to the best explanation (IBE) requires the metaphysical belief that most phenomena have explanations. I object that, instead, the metaphysical belief requires the use of IBE. De Ray uses IBE himself to establish theism, the view that God is the cause of the metaphysical belief, and thus he has the burden of establishing the metaphysical belief independently of using IBE. Naturalism, the view that the world is the cause of the metaphysical belief, is preferable to theism, contrary to what de Ray thinks.
What is the connection between justification and the kind of consequence relations that are studied by logic? In this essay, I shall try to provide an answer, by proposing a general conception of the kind of inference that counts as justified or rational.
This chapter argues that the only tenable unconscious-inference theories of cognitive achievement are ones that employ a theory-internal technical notion of representation, but that once we give cash-value definitions of the relevant notions of representation and inference, there is little left of the ordinary notion of representation. We suggest that the real value of talk of unconscious inferences lies in (a) their heuristic utility in helping us to make fruitful predictions, such as about illusions, and (b) their providing a high-level description of the functional organization of subpersonal faculties that makes clear how they equip an agent to navigate its environment and pursue its goals.
In this paper we propose to present from a new perspective some loci communes of traditional logic. More exactly, we intend to show that some hypothetico-disjunctive inferences (i.e. the complex constructive dilemma, the complex destructive dilemma, the simple constructive dilemma, the simple destructive dilemma) and two hypothetico-categorical inferences (namely modus ponendo-ponens and modus tollendo-tollens) particularize two more abstract inferential structures: the constructive n-lemma and the destructive n-lemma.
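The generalization claimed here can be checked mechanically. The following sketch is my own encoding, not the authors' formalism: it verifies by exhaustive truth tables that the constructive n-lemma is a tautology for small n, with modus ponendo-ponens as the n = 1 case and the complex constructive dilemma as the n = 2 case.

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

def constructive_n_lemma_valid(n):
    """Check exhaustively that the constructive n-lemma is valid:
    from (p1 -> q1), ..., (pn -> qn) and (p1 or ... or pn),
    infer (q1 or ... or qn)."""
    for vals in product([False, True], repeat=2 * n):
        ps, qs = vals[:n], vals[n:]
        premises = all(implies(p, q) for p, q in zip(ps, qs)) and any(ps)
        if premises and not any(qs):
            return False  # counterexample: premises true, conclusion false
    return True

# n = 1 is modus ponendo-ponens; n = 2 is the complex constructive dilemma.
print(all(constructive_n_lemma_valid(n) for n in range(1, 5)))  # prints: True
```

An analogous check, with contraposed conditionals and negated consequents in the premises, would cover modus tollendo-tollens and the destructive n-lemma.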
Arguing for mathematical realism on the basis of Field’s explanationist version of the Quine–Putnam Indispensability argument, Alan Baker has recently claimed to have found an instance of a genuine mathematical explanation of a physical phenomenon. While I agree that Baker presents a very interesting example in which mathematics plays an essential explanatory role, I show that this example, and the argument built upon it, begs the question against the mathematical nominalist.
The overwhelming majority of those who theorize about implicit biases posit that these biases are caused by some sort of association. However, what exactly this claim amounts to is rarely specified. In this paper, I distinguish between different understandings of association, and I argue that the crucial senses of association for elucidating implicit bias are the cognitive structure and mental process senses. A hypothesis is subsequently derived: if associations really underpin implicit biases, then implicit biases should be modulated by counterconditioning or extinction but should not be modulated by rational argumentation or logical interventions. This hypothesis is false; implicit biases are not predicated on any associative structures or associative processes but instead arise because of unconscious propositionally structured beliefs. I conclude by discussing how the case study of implicit bias illuminates problems with popular dual-process models of cognitive architecture.
Some arguments include imperative clauses. For example: ‘Buy me a drink; you can’t buy me that drink unless you go to the bar; so, go to the bar!’ How should we build a logic that predicts which of these arguments are good? Because imperatives aren’t truth-apt and so don’t stand in relations of truth preservation, this technical question gives rise to a foundational one: What would be the subject matter of this logic? I argue that declaratives are used to produce beliefs, imperatives are used to produce intentions, and beliefs and intentions are subject to rational requirements. An argument will strike us as valid when anyone whose mental state satisfies the premises is rationally required to satisfy the conclusion. For example, the above argument reflects the principle that it is irrational not to intend what one takes to be the necessary means to one’s intended ends. I argue that all intuitively good patterns of imperative inference can be explained using off-the-shelf formulations of our rational requirements. I then develop a formal-semantic theory embodying this view that predicts a range of data, including free-choice effects and Ross’s paradox. The resulting theory shows one way that our aspirations to rational agency can be discerned in the patterns of our speech, and is a case study in how the philosophy of language and the philosophy of action can be mutually illuminating.
"Correlation is not causation" is one of the mantras of the sciences—a cautionary warning especially to fields like epidemiology and pharmacology where the seduction of compelling correlations naturally leads to causal hypotheses. The standard view from the epistemology of causation is that to tell whether one correlated variable is causing the other, one needs to intervene on the system—the best sort of intervention being a trial that is both randomized and controlled. In this paper, we argue that some purely correlational data contains information that allows us to draw causal inferences: statistical noise. Methods for extracting causal knowledge from noise provide us with an alternative to randomized controlled trials that allows us to reach causal conclusions from purely correlational data.
This monograph is an in-depth and engaging discourse on the deeply cognitive roots of the human scientific quest. The process of making scientific inferences is continuous with the day-to-day inferential activity of individuals, and is predominantly inductive in nature. Inductive inference, which is fallible, exploratory, and open-ended, is of essential relevance in our incessant efforts at making sense of a complex and uncertain world around us, and covers a vast range of cognitive activities, among which scientific exploration constitutes the pinnacle. Inductive inference has a personal aspect to it, being rooted in the cognitive unconscious of individuals, which has recently been found to be of paramount importance in a wide range of complex cognitive processes. One other major aspect of the process of inference making, including the making of scientific inferences, is the role of a vast web of beliefs lodged in the human mind, as well as of a huge repertoire of heuristics, which constitute an important component of ‘unconscious intelligence’. Finally, human cognitive activity depends in large measure on emotions and affects, which operate mostly at an unconscious level. Of special importance in scientific inferential activity is the process of hypothesis making, which is examined in this book, along with the above aspects of inductive inference, at considerable depth. The book focuses on the inadequacy of the viewpoint of naive realism in understanding the contextual nature of scientific theories, where a cumulative progress towards an ultimate truth about Nature appears to be too simplistic a generalization. It poses a critique of the commonly perceived image of science, which is seen as the last word in logic and objectivity, the latter in the double sense of being independent of individual psychological propensities and, at the same time, approaching a correct understanding of the workings of a mind-independent nature.
Adopting the naturalist point of view, it examines the essential tension between the cognitive endeavours of individuals and scientific communities, immersed in belief systems and cultures, on the one hand, and the engagement with a mind-independent reality on the other. In the end, science emerges as an interpretation of nature, which is perceived by us only contextually, as successively emerging cross-sections of limited scope and extent. Successive waves of theory building in science appear as episodic and kaleidoscopic changes in perspective as certain in-built borders are crossed, rather than as a cumulative progress towards some ultimate truth. While written in an informal and conversational style, the book raises a number of deep and intriguing questions located at the interface of cognitive psychology and philosophy of science, meant for both the general lay reader and the specialist. Of particular interest is the way it explores the role of belief (aided by emotions and affects) in making the process of inductive inference possible since belief is a subtle though all-pervasive cognitive factor not adequately investigated in the current literature.
In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it’s defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.
Seungbae Park argues that Bas van Fraassen’s rejection of inference to the best explanation (IBE) is problematic for his contextual theory of explanation because van Fraassen uses IBE to support the contextual theory. This paper provides a defense of van Fraassen’s views from Park’s objections. I point out three weaknesses of Park’s objection against van Fraassen. First, van Fraassen may be perfectly content to accept the implications that Park claims to follow from his views. Second, even if van Fraassen rejects IBE he may still endorse particular arguments that instantiate IBE. Third, van Fraassen does not, in fact, use IBE to support his contextual theory.
We argue in Roche and Sober (2013) that explanatoriness is evidentially irrelevant in that Pr(H | O & EXPL) = Pr(H | O), where H is a hypothesis, O is an observation, and EXPL is the proposition that if H and O were true, then H would explain O. This is a “screening-off” thesis. Here we clarify that thesis, reply to criticisms advanced by Lange (2017), consider alternative formulations of Inference to the Best Explanation, discuss a strengthened screening-off thesis, and consider how it bears on the claim that unification is evidentially relevant.
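The screening-off equality Pr(H | O & EXPL) = Pr(H | O) can be checked on a toy joint distribution. The sketch below uses entirely hypothetical numbers and builds EXPL so that it depends on H only through O, which is the structural condition under which screening-off holds:

```python
from itertools import product

# Toy joint distribution over H (hypothesis), O (observation), E (EXPL).
# All numbers are hypothetical. E depends only on O, so conditioning on
# E in addition to O should leave the probability of H unchanged.
p_h = {True: 0.3, False: 0.7}           # prior Pr(H)
p_o_given_h = {True: 0.8, False: 0.2}   # Pr(O | H)
p_e_given_o = {True: 0.9, False: 0.1}   # Pr(E | O), independent of H given O

joint = {}
for h, o, e in product([True, False], repeat=3):
    joint[(h, o, e)] = (p_h[h]
                        * (p_o_given_h[h] if o else 1 - p_o_given_h[h])
                        * (p_e_given_o[o] if e else 1 - p_e_given_o[o]))

def cond_h(given):
    """Pr(H=True | the variables fixed in `given`), by summing the joint."""
    num = sum(p for (h, o, e), p in joint.items()
              if h and all({'h': h, 'o': o, 'e': e}[k] == v
                           for k, v in given.items()))
    den = sum(p for (h, o, e), p in joint.items()
              if all({'h': h, 'o': o, 'e': e}[k] == v
                     for k, v in given.items()))
    return num / den

pr_h_given_o = cond_h({'o': True})
pr_h_given_o_and_e = cond_h({'o': True, 'e': True})
print(abs(pr_h_given_o - pr_h_given_o_and_e) < 1e-12)  # screening-off holds
```

Running this prints `True`: adding EXPL to the condition moves nothing, exactly as the screening-off thesis claims for this kind of structure.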
Explanation is asymmetric: if A explains B, then B does not explain A. Traditionally, the asymmetry of explanation was thought to favor causal accounts of explanation over their rivals, such as those that take explanations to be inferences. In this paper, we develop a new inferential approach to explanation that outperforms causal approaches in accounting for the asymmetry of explanation.
Many classically valid meta-inferences fail in a standard supervaluationist framework. This allegedly prevents supervaluationism from offering an account of good deductive reasoning. We provide a proof system for supervaluationist logic which includes supervaluationistically acceptable versions of the classical meta-inferences. The proof system emerges naturally by thinking of truth as licensing assertion, falsity as licensing negative assertion, and lack of truth-value as licensing rejection and weak assertion. Moreover, the proof system respects well-known criteria for the admissibility of inference rules. Thus, supervaluationists can provide an account of good deductive reasoning. Our proof system also brings to light how one can revise the standard supervaluationist framework to make room for higher-order vagueness. We prove that the resulting logic is sound and complete with respect to the consequence relation that preserves truth in a model of the non-normal modal logic _NT_. Finally, we extend our approach to a first-order setting and show that supervaluationism can treat vagueness in the same way at every order. The failure of conditional proof and other meta-inferences is a crucial ingredient in this treatment and hence should be embraced, not lamented.
Conspiracy theories are typically thought to be examples of irrational beliefs, and thus unlikely to be warranted. However, recent work in philosophy has challenged the claim that belief in conspiracy theories is irrational, showing that in a range of cases such belief is warranted. Even so, it is still often said that conspiracy theories are unlikely relative to non-conspiratorial explanations which account for the same phenomena. Such arguments turn out to rest on how we define what gets counted both as a ‘conspiracy’ and as a ‘conspiracy theory’, and these definitional assumptions are shaky. It is therefore not clear that conspiracy theories are prima facie unlikely, and so the claim that such theories do not typically appear in our accounts of the best explanations for particular kinds of events needs to be reevaluated.
Delusional beliefs have sometimes been considered as rational inferences from abnormal experiences. We explore this idea in more detail, making the following points. Firstly, the abnormalities of cognition which initially prompt the entertaining of a delusional belief are not always conscious, and since we prefer to restrict the term “experience” to consciousness, we refer to “abnormal data” rather than “abnormal experience”. Secondly, we argue that in relation to many delusions (we consider eight) one can clearly identify what the abnormal cognitive data are which prompted the delusion and what the neuropsychological impairment is which is responsible for the occurrence of these data; but one can equally clearly point to cases where this impairment is present but the delusion is not. So the impairment is not sufficient for delusion to occur. A second cognitive impairment, one which impairs the ability to evaluate beliefs, must also be present. Thirdly (and this is the main thrust of our chapter), we consider in detail the nature of the inference that leads from the abnormal data to the belief. This is not deductive inference and it is not inference by enumerative induction; it is abductive inference. We offer a Bayesian account of abductive inference and apply it to the explanation of delusional belief.
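A Bayesian treatment of abductive inference of the kind this abstract describes can be sketched as a posterior comparison over candidate explanations of the abnormal data. The hypotheses and all probability values below are illustrative assumptions for a Capgras-style case, not figures from the chapter itself:

```python
# Minimal sketch of Bayesian abduction: choose the hypothesis with the
# highest posterior given the abnormal data D. All numbers hypothetical.
def posterior(prior, likelihood):
    """Normalized Bayes update over a dict of mutually exclusive hypotheses."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# D: absent autonomic response to a familiar face (Capgras-style data).
prior = {'impostor': 0.001, 'spouse': 0.999}       # Pr(H): impostor very unlikely
likelihood = {'impostor': 0.95, 'spouse': 0.0001}  # Pr(D | H): D expected if impostor

post = posterior(prior, likelihood)
best = max(post, key=post.get)  # abductive choice: best posterior explanation
# A strong enough likelihood ratio can outweigh even a very low prior, which
# is one way the step from abnormal data to belief can look inferentially apt.
```

With these (made-up) numbers the 'impostor' hypothesis ends up with a posterior above 0.9 despite its tiny prior, illustrating why the abductive step alone cannot be the whole story and why a second, belief-evaluation impairment is invoked.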
This peer-reviewed paper intervenes in debates relating to overarching themes that impact upon mass media studies, communication theory, and theories of cognition more generally. In particular, the paper discusses issues involving how our ordinary psychological thinking relates to norms of rationality (and how these latter are conceived). In essence, I argue against a dominant approach, taken by Christopher Peacocke, on which rationality can be grounded in the possession of certain concepts. The article makes a new contribution to the field by arguing against the dominant approach on two grounds: (a) it fails to distinguish between true and false normative commitments; (b) it is empirically unsound. In response, I briefly offer suggestions towards an alternative, and psychologically tractable, account of rational commitment. Presentations of earlier drafts of the paper were given at a seminar at the Centre for Study of Mind and Nature, Oslo, as well as the conference of the European Society for Philosophy and Psychology.
This chapter presents a typology of the different kinds of inductive inferences we can draw from our evidence, based on the explanatory relationship between evidence and conclusion. Drawing on the literature on graphical models of explanation, I divide inductive inferences into (a) downwards inferences, which proceed from cause to effect, (b) upwards inferences, which proceed from effect to cause, and (c) sideways inferences, which proceed first from effect to cause and then from that cause to an additional effect. I further distinguish between direct and indirect forms of downwards and upwards inferences. I then show how we can subsume canonical forms of inductive inference mentioned in the literature, such as inference to the best explanation, enumerative induction, and analogical inference, under this typology. Along the way, I explore connections with probability and confirmation, epistemic defeat, the relation between abduction and enumerative induction, the compatibility of IBE and Bayesianism, and theories of epistemic justification.
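The three directions of inference can be illustrated on a minimal graphical model with one cause C and two effects E1 and E2 (all probabilities below are hypothetical): a sideways inference from E1 to E2 is an upwards step by Bayes' theorem followed by a downwards step through the updated cause.

```python
# Toy graphical model C -> E1, C -> E2. All numbers hypothetical.
p_c = 0.2                       # Pr(C)
p_e1 = {True: 0.9, False: 0.1}  # Pr(E1 | C=True), Pr(E1 | C=False)
p_e2 = {True: 0.8, False: 0.2}  # Pr(E2 | C=True), Pr(E2 | C=False)

# Upwards inference (effect to cause): Pr(C | E1) by Bayes' theorem.
p_e1_marg = p_c * p_e1[True] + (1 - p_c) * p_e1[False]
p_c_given_e1 = p_c * p_e1[True] / p_e1_marg

# Downwards step from the updated cause (cause to further effect):
# Pr(E2 | E1), using the fact that E2 is independent of E1 given C.
p_e2_given_e1 = (p_c_given_e1 * p_e2[True]
                 + (1 - p_c_given_e1) * p_e2[False])

# Baseline Pr(E2) for comparison: observing E1 raises the probability of E2
# only via their common cause, which is the sideways pattern.
p_e2_marg = p_c * p_e2[True] + (1 - p_c) * p_e2[False]
```

Here observing E1 raises Pr(C) from 0.2 to about 0.69, which in turn raises Pr(E2) from 0.32 to about 0.62: the two-step structure the typology isolates.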
Stereotypes shape inferences in philosophical thought, political discourse, and everyday life. These inferences are routinely made when thinkers engage in language comprehension or production: we make them whenever we hear, read, or formulate stories, reports, philosophical case-descriptions, or premises of arguments – on virtually any topic. These inferences are largely automatic: largely unconscious, non-intentional, and effortless. Accordingly, they shape our thought in ways we can properly understand only by complementing traditional forms of philosophical analysis with experimental methods from psycholinguistics. This paper seeks, first, to bring out the wider philosophical relevance of stereotypical inference, well beyond familiar topics like gender and race. Second, we wish to provide philosophers with a toolkit to experimentally study these ubiquitous inferences and the intuitions they may generate. The paper explains what stereotypes are and why they matter to current and traditional concerns in philosophy – experimental, analytic, and applied. It then assembles a psycholinguistic toolkit and demonstrates through two studies how questionnaire-based measures can be combined with process measures to garner evidence for specific stereotypical inferences and to study when they ‘go through’ and influence our thinking.
The major competing statistical paradigms share a common remarkable but unremarked thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively “subjective” and “objective” interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an “objective” probability.
How are inferences to design affected when one makes the (plausible) assumption that the universe is spatially infinite? I will show that arguments for the existence of God based on the improbable development of life don’t go through. I will also show that the model of design inferences promulgated by William Dembski is flawed. My model for design inferences has the (desirable) consequence that there are circumstances where a seeming miracle can count as evidence for the existence of God, even if one would expect that type of event to occur naturalistically in a spatially infinite universe.
This invited article is a response to the paper “Quantum Misuse in Psychic Literature,” by Jack A. Mroczkowski and Alexis P. Malozemoff, published in this issue of the Journal of Near-Death Studies. Whereas I sympathize with Mroczkowski’s and Malozemoff’s cause and goals, and I recognize the problem they attempted to tackle, I argue that their criticisms often overshoot the mark and end up adding to the confusion. I address nine specific technical points that Mroczkowski and Malozemoff accused popular writers in the fields of health care and parapsychology of misunderstanding and misrepresenting. I argue that, by and large—and contrary to Mroczkowski’s and Malozemoff’s claims—the statements made by these writers are often reasonable and generally consistent with the current state of play in foundations of quantum mechanics.
The notions of inference and default are used in pragmatics with different meanings, resulting in theoretical disputes that emphasize the differences between the various pragmatic approaches. This paper aims to show how the terminological and theoretical differences concerning the two aforementioned terms result from considering inference and default from different points of view and levels of analysis. Such differences risk making dialogue between the theories extremely difficult. However, at a functional level of analysis the different theories, definitions, and approaches to interpretation can be compared and integrated. At this level, the standardization of pragmatic inferences can be regarded as the development of a specific type of presumptions, used to draw prima facie interpretations.
Inference to the Best Explanation (IBE) advises reasoners to infer exactly one explanation. This uniqueness claim apparently binds us when it comes to “conjunctive explanations”: distinct explanations that are nonetheless explanatorily better together than apart. To confront this worry, explanationists qualify their statement of IBE, stipulating that this inference form only adjudicates between competing hypotheses. However, a closer look into the nature of competition reveals problems for this qualified account. Given the most common explication of competition, the qualification artificially and radically constrains IBE’s domain of applicability. Given a more subtle, recent explication of competition, the qualification no longer provides a compelling treatment of conjunctive explanations. In light of these results, I suggest a different strategy for accommodating conjunctive explanations: instead of modifying the form of IBE, I propose a new way of thinking about the structure of IBE’s lot of considered hypotheses.
Some assertions give rise to the acquaintance inference: the inference that the speaker is acquainted with some individual. Discussion of the acquaintance inference has previously focused on assertions about aesthetic matters and personal tastes (e.g. 'The cake is tasty'), but it also arises with reports about how things seem (e.g. 'Tom seems like he's cooking'). 'Seem'-reports give rise to puzzling acquaintance behavior, with no analogue in the previously discussed domains. In particular, these reports call for a distinction between the specific acquaintance inference (that the speaker is acquainted with a specific individual) and the general acquaintance inference (that the speaker is acquainted with something or other of relevance). We frame a novel empirical generalization – the specific-with-stage-level generalization – that systematizes the observed behavior in terms of the semantics of the embedded 'like'-clause. We present supporting experimental work, and explain why the generalization makes sense given the evidential role of 'seem'-reports. Finally, we discuss the relevance of this result for extant proposals about the semantics of 'seem'-reports. More modestly, it fills a gap in previous theories by identifying which reports get which of two possible interpretations; more radically, it suggests a revision of the kind of explanation that should be given for the acquaintance behavior in question.
In Lehrer’s case of the superstitious lawyer, a lawyer possesses conclusive evidence for his client’s innocence, and he appreciates that the evidence is conclusive, but the evidence is causally inert with respect to his belief in his client’s innocence. This case has divided epistemologists ever since Lehrer originally proposed it in his argument against causal analyses of knowledge. Some have taken the claim that the lawyer bases his belief on the evidence as a data point for our theories to accommodate, while others have denied that the lawyer has knowledge, or that he bases his belief on the evidence. In this paper, we move the dialectic forward by arguing that the superstitious lawyer genuinely infers his client’s innocence from the evidence. To show that the lawyer’s inference is genuine, we defend a version of a doxastic construal of the ‘taking’ condition on inference. We also provide a pared-down superstitious-lawyer-style case, which displays the key features of the original case without its complicated and distracting features. Interestingly, although we argue that the lawyer’s belief is based on his good evidence, and is also plausibly doxastically justified, we do not argue that the lawyer knows that his client is innocent.
ABSTRACT: A traditional objection to inferentialism states that not all inferences can be meaning-constitutive, and therefore inferentialism has to comprise an analytic-synthetic distinction. In response, Peregrin argues that meaning is a matter of inferential rules, and only the subset of all valid inferences for which there is a widely shared corrective behaviour corresponds to rules and so determines meaning. Unfortunately, Peregrin does not discuss what counts as “widely shared”. In this paper, I argue for the empirical plausibility of Peregrin’s proposal. The aim is to show that we can find examples of meaning-constitutive linguistic action which sustain Peregrin’s response. The idea is supported by examples of meaning modulation. If Peregrin is right, then we should be able to find specific meaning modulations in which a new meaning is publicly available and modulated in such a way that it has the potential to be widely shared. I believe that binding modulations – a specific type of meaning modulation – satisfy this condition.