The overwhelming majority of those who theorize about implicit biases posit that these biases are caused by some sort of association. However, what exactly this claim amounts to is rarely specified. In this paper, I distinguish between different understandings of association, and I argue that the crucial senses of association for elucidating implicit bias are the cognitive structure and mental process senses. A hypothesis is subsequently derived: if associations really underpin implicit biases, then implicit biases should be modulated by counterconditioning or extinction but should not be modulated by rational argumentation or logical interventions. This hypothesis is false; implicit biases are not predicated on any associative structures or associative processes but instead arise because of unconscious propositionally structured beliefs. I conclude by discussing how the case study of implicit bias illuminates problems with popular dual-process models of cognitive architecture.
Defenders of Inference to the Best Explanation claim that explanatory factors should play an important role in empirical inference. They disagree, however, about how exactly to formulate this role. In particular, they disagree about whether to formulate IBE as an inference rule for full beliefs or for degrees of belief, as well as how a rule for degrees of belief should relate to Bayesianism. In this essay I advance a new argument against non-Bayesian versions of IBE. My argument focuses on cases in which we are concerned with multiple levels of explanation of some phenomenon. I show that in many such cases, following IBE as an inference rule for full beliefs leads to deductively inconsistent beliefs, and following IBE as a non-Bayesian updating rule for degrees of belief leads to probabilistically incoherent degrees of belief.
Consider the following three claims. (i) There are no truths of the form ‘p and ~p’. (ii) No one holds a belief of the form ‘p and ~p’. (iii) No one holds any pairs of beliefs of the form {p, ~p}. Irad Kimhi has recently argued, in effect, that each of these claims holds and holds with metaphysical necessity. Furthermore, he maintains that they are ultimately not distinct claims at all, but the same claim formulated in different ways. I find his argument suggestive, if not entirely transparent. I do think there is at least an important kernel of truth even in (iii), and that (i) ultimately explains what’s right about the other two. Consciousness of an impossibility makes belief in the obtaining of the corresponding state of affairs an impossibility. Interestingly, an appreciation of this fact brings into view a novel conception of inference, according to which it consists in the consciousness of necessity. This essay outlines and defends this position. A central element of the defense is that it reveals how reasoners satisfy what Paul Boghossian calls the Taking Condition and do so without engendering regress.
What is the connection between justification and the kind of consequence relations that are studied by logic? In this essay, I shall try to provide an answer, by proposing a general conception of the kind of inference that counts as justified or rational.
I argue that inference can tolerate forms of self-ignorance and that these cases of inference undermine canonical models of inference on which inferrers have to appreciate (or purport to appreciate) the support provided by the premises for the conclusion. I propose an alternative model of inference that belongs to a family of rational responses in which the subject cannot pinpoint exactly what she is responding to or why, where this kind of self-ignorance does nothing to undermine the intelligence of the response.
Some arguments include imperative clauses. For example: ‘Buy me a drink; you can’t buy me that drink unless you go to the bar; so, go to the bar!’ How should we build a logic that predicts which of these arguments are good? Because imperatives aren’t truth apt and so don’t stand in relations of truth preservation, this technical question gives rise to a foundational one: What would be the subject matter of this logic? I argue that declaratives are used to produce beliefs, imperatives are used to produce intentions, and beliefs and intentions are subject to rational requirements. An argument will strike us as valid when anyone whose mental state satisfies the premises is rationally required to satisfy the conclusion. For example, the above argument reflects the principle that it is irrational not to intend what one takes to be the necessary means to one’s intended ends. I argue that all intuitively good patterns of imperative inference can be explained using off-the-shelf formulations of our rational requirements. I then develop a formal-semantic theory embodying this view that predicts a range of data, including free-choice effects and Ross’s paradox. The resulting theory shows one way that our aspirations to rational agency can be discerned in the patterns of our speech, and is a case study in how the philosophy of language and the philosophy of action can be mutually illuminating.
The paper offers an account of inference. The account underwrites the idea that inference requires that the reasoner takes her premises to support her conclusion. I reject views according to which such ‘takings’ are intuitions or beliefs. I sketch an alternative view on which inferring consists in attaching what I call ‘inferential force’ to a structured collection of contents.
Conspiracy theories are typically thought to be examples of irrational beliefs, and thus unlikely to be warranted. Recent work in philosophy has challenged the claim that belief in conspiracy theories is irrational, showing that in a range of cases such belief is warranted. Even so, it is still often said that conspiracy theories are unlikely relative to non-conspiratorial explanations which account for the same phenomena. Such arguments turn out to rest upon how we define what gets counted both as a ‘conspiracy’ and as a ‘conspiracy theory’, and upon shaky assumptions. It turns out that it is not clear that conspiracy theories are prima facie unlikely, and so the claim that such theories do not typically appear in our accounts of the best explanations for particular kinds of events needs to be reevaluated.
Delusional beliefs have sometimes been considered as rational inferences from abnormal experiences. We explore this idea in more detail, making the following points. Firstly, the abnormalities of cognition which initially prompt the entertaining of a delusional belief are not always conscious and, since we prefer to restrict the term “experience” to consciousness, we refer to “abnormal data” rather than “abnormal experience”. Secondly, we argue that in relation to many delusions (we consider eight) one can clearly identify what the abnormal cognitive data are which prompted the delusion and what the neuropsychological impairment is which is responsible for the occurrence of these data; but one can equally clearly point to cases where this impairment is present but the delusion is not. So the impairment is not sufficient for delusion to occur. A second cognitive impairment, one which impairs the ability to evaluate beliefs, must also be present. Thirdly (and this is the main thrust of our chapter) we consider in detail what the nature of the inference is that leads from the abnormal data to the belief. This is not deductive inference and it is not inference by enumerative induction; it is abductive inference. We offer a Bayesian account of abductive inference and apply it to the explanation of delusional belief.
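The shape of the Bayesian account of abductive inference described in that chapter can be sketched in a few lines of Python. The hypotheses and every number below are hypothetical placeholders, not values from the chapter; the point is only the form of the computation, P(H|D) ∝ P(D|H)·P(H):

```python
# Minimal Bayesian abduction sketch: which hypothesis best explains
# the abnormal datum D (e.g., a missing autonomic response to a familiar face)?
# Priors and likelihoods are illustrative numbers only.

priors = {
    "impostor": 0.001,   # H1: "this person is an impostor" (Capgras-style)
    "stranger": 0.01,    # H2: "this person is a stranger"
}
likelihoods = {          # P(D | H): how well each hypothesis predicts D
    "impostor": 0.9,
    "stranger": 0.01,
}

# Posterior P(H | D) is proportional to P(D | H) * P(H),
# normalized over the candidate set.
unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: v / total for h, v in unnormalized.items()}

best = max(posteriors, key=posteriors.get)
print(best, round(posteriors[best], 3))
```

Even a hypothesis with a tiny prior can dominate once the likelihood term is large enough, which is one way of seeing why a further, belief-evaluation impairment is needed to explain why most people with the abnormal data do not adopt the delusion.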
Seungbae Park argues that Bas van Fraassen’s rejection of inference to the best explanation (IBE) is problematic for his contextual theory of explanation because van Fraassen uses IBE to support the contextual theory. This paper provides a defense of van Fraassen’s views from Park’s objections. I point out three weaknesses of Park’s objection against van Fraassen. First, van Fraassen may be perfectly content to accept the implications that Park claims to follow from his views. Second, even if van Fraassen rejects IBE he may still endorse particular arguments that instantiate IBE. Third, van Fraassen does not, in fact, use IBE to support his contextual theory.
What are the prospects (if any) for a virtue-theoretic account of inference? This paper compares three options. Firstly, assess each argument individually in terms of the virtues of the participants. Secondly, make the capacity for cogent inference itself a virtue. Thirdly, recapture a standard treatment of cogency by accounting for each of its components in terms of more familiar virtues. The three approaches are contrasted and their strengths and weaknesses assessed.
"Correlation is not causation" is one of the mantras of the sciences—a cautionary warning especially to fields like epidemiology and pharmacology where the seduction of compelling correlations naturally leads to causal hypotheses. The standard view from the epistemology of causation is that to tell whether one correlated variable is causing the other, one needs to intervene on the system—the best sort of intervention being a trial that is both randomized and controlled. In this paper, we argue that some purely correlational data contains information that allows us to draw causal inferences: statistical noise. Methods for extracting causal knowledge from noise provide us with an alternative to randomized controlled trials that allows us to reach causal conclusions from purely correlational data.
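One concrete family of techniques in this spirit, offered here only as an illustrative sketch and not necessarily the method these authors defend, is the additive-noise (LiNGAM-style) approach: when the effect is the cause plus independent non-Gaussian noise, the regression residual behaves like independent noise only when we regress in the true causal direction:

```python
import numpy as np

# Simulate X -> Y with non-Gaussian (uniform) noise; purely observational data.
rng = np.random.default_rng(0)
n = 20000
X = rng.uniform(-1, 1, n)          # cause
E = rng.uniform(-0.5, 0.5, n)      # noise, independent of X
Y = 2 * X + E                      # effect

def residual(pred, target):
    """OLS residual of target regressed on pred (with intercept)."""
    slope, intercept = np.polyfit(pred, target, 1)
    return target - (slope * pred + intercept)

def dependence(pred, target):
    """Correlation of squared residual with squared predictor: near zero
    only if the residual's spread does not vary with the predictor."""
    r = residual(pred, target)
    return np.corrcoef(r**2, pred**2)[0, 1]

c_fwd = dependence(X, Y)   # fit Y on X: residual ~ E, independent of X
c_bwd = dependence(Y, X)   # fit X on Y: residual still depends on Y
print(abs(c_fwd) < abs(c_bwd))  # True: the noise pattern favors X -> Y
```

The asymmetry in the residuals, not the correlation itself, is what carries the causal information: both regressions fit equally well, but only the backward one leaves a residual whose distribution varies with the predictor.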
This chapter explores the idea that causal inference is warranted if and only if the mechanism underlying the inferred causal association is identified. This mechanistic stance is discernible in the epidemiological literature, and in the strategies adopted by epidemiologists seeking to establish causal hypotheses. But the exact opposite methodology is also discernible, the black box stance, which asserts that epidemiologists can and should make causal inferences on the basis of their evidence, without worrying about the mechanisms that might underlie their hypotheses. I argue that the mechanistic stance is indeed a bad methodology for causal inference. However, I detach and defend a mechanistic interpretation of causal generalisations in epidemiology as existence claims about underlying mechanisms.
Stereotypes shape inferences in philosophical thought, political discourse, and everyday life. These inferences are routinely made when thinkers engage in language comprehension or production: We make them whenever we hear, read, or formulate stories, reports, philosophical case-descriptions, or premises of arguments – on virtually any topic. These inferences are largely automatic: largely unconscious, non-intentional, and effortless. Accordingly, they shape our thought in ways we can properly understand only by complementing traditional forms of philosophical analysis with experimental methods from psycholinguistics. This paper seeks, first, to bring out the wider philosophical relevance of stereotypical inference, well beyond familiar topics like gender and race. Second, we wish to provide philosophers with a toolkit to experimentally study these ubiquitous inferences and what intuitions they may generate. This paper explains what stereotypes are, and why they matter to current and traditional concerns in philosophy – experimental, analytic, and applied. It then assembles a psycholinguistic toolkit and demonstrates through two studies how questionnaire-based measures can be combined with process measures to garner evidence for specific stereotypical inferences and study when they ‘go through’ and influence our thinking.
Is imagination a source of knowledge? Timothy Williamson has recently argued that our imaginative capacities can yield knowledge of a variety of matters, spanning from everyday practical matters to logic and set theory. Furthermore, imagination for Williamson plays a similar epistemic role in cognitive processes that we would traditionally classify as either a priori or a posteriori, which he takes to indicate that the distinction itself is shallow and epistemologically fruitless. In this chapter, I aim to defend the a priori-a posteriori distinction from Williamson’s challenge by questioning his account of imagination. I distinguish two notions of imagination at play in Williamson’s account – sensory vs. belief-like imagination – and show that both face empirical and normative issues. Sensory imagination seems neither necessary nor sufficient for knowledge, whereas belief-like imagination isn’t adequately disentangled from inference. Additionally, Williamson’s examples are ad hoc and don’t generalize. I conclude that Williamson’s case against the a priori-a posteriori distinction is unconvincing, and so is the thesis that imagination is an epistemic source.
Explanation is asymmetric: if A explains B, then B does not explain A. Traditionally, the asymmetry of explanation was thought to favor causal accounts of explanation over their rivals, such as those that take explanations to be inferences. In this paper, we develop a new inferential approach to explanation that outperforms causal approaches in accounting for the asymmetry of explanation.
I will argue that a person is causally responsible for believing what she does. Through inference, she can sustain and change her perspective on the world. When she draws an inference, she causes herself to keep or to change her take on things. In a literal sense, she makes up her own mind as to how things are. And, I will suggest, she can do this voluntarily. It is in part because she is causally responsible for believing what she does that there are things that she ought to believe, and that what she believes can be to her credit or discredit. I won’t pursue these ethical matters here, but will focus instead on the metaphysics that underpin them.
In this article, I will provide a critical overview of the form of non-deductive reasoning commonly known as “Inference to the Best Explanation” (IBE). Roughly speaking, according to IBE, we ought to infer the hypothesis that provides the best explanation of our evidence. In section 2, I survey some contemporary formulations of IBE and highlight some of its putative applications. In section 3, I distinguish IBE from C.S. Peirce’s notion of abduction. After underlining some of the essential elements of IBE, the rest of the entry is organized around an examination of various problems that IBE confronts, along with some extant attempts to address these problems. In section 4, I consider the question of when a fact requires an explanation, since presumably IBE applies only in cases where some explanation is called for. In section 5, I consider the difficult question of how we ought to understand IBE in light of the fact that among philosophers, there is significant disagreement about what constitutes an explanation. In section 6, I consider different strategies for justifying the truth-conduciveness of the explanatory virtues, e.g., simplicity, unification, scope, etc., criteria which play an indispensable role in any given application of IBE. In section 7, I survey some of the most recent literature on IBE, much of which consists of investigations of the status of IBE from the standpoint of the Bayesian philosophy of science.
Many classically valid meta-inferences fail in a standard supervaluationist framework. This allegedly prevents supervaluationism from offering an account of good deductive reasoning. We provide a proof system for supervaluationist logic which includes supervaluationistically acceptable versions of the classical meta-inferences. The proof system emerges naturally by thinking of truth as licensing assertion, falsity as licensing negative assertion and lack of truth-value as licensing rejection and weak assertion. Moreover, the proof system respects well-known criteria for the admissibility of inference rules. Thus, supervaluationists can provide an account of good deductive reasoning. Our proof system moreover brings to light how one can revise the standard supervaluationist framework to make room for higher-order vagueness. We prove that the resulting logic is sound and complete with respect to the consequence relation that preserves truth in a model of the non-normal modal logic NT. Finally, we extend our approach to a first-order setting and show that supervaluationism can treat vagueness in the same way at every order. The failure of conditional proof and other meta-inferences is a crucial ingredient in this treatment and hence should be embraced, not lamented.
This paper deals with the question of agency and intentionality in the context of the free-energy principle. The free-energy principle is a system-theoretic framework for understanding living self-organizing systems and how they relate to their environments. I will first sketch the main philosophical positions in the literature: a rationalist Helmholtzian interpretation (Hohwy 2013; Clark 2013), a cybernetic interpretation (Seth 2015b) and the enactive affordance-based interpretation (Bruineberg and Rietveld 2014; Bruineberg et al. 2016) and will then show how agency and intentionality are construed differently on these different philosophical interpretations. I will then argue that a purely Helmholtzian interpretation is limited, in that it can account for agency only in the context of perceptual inference. The cybernetic account cannot give a full account of action, since purposiveness is accounted for only to the extent that it pertains to the control of homeostatic essential variables. I will then argue that the enactive affordance-based account attempts to provide a broader account of purposive action without presupposing goals and intentions coming from outside of the theory. In the second part of the paper, I will discuss how each of these three interpretations conceives of agency and intentionality in different ways.
Inferences from the absence of evidence to something are common in ordinary speech, but when used in scientific argumentation they are usually considered deficient or outright false. Yet, as demonstrated here with the help of various examples, archaeologists frequently use inference and reasoning from absence, often allowing it a status on par with inferences from tangible evidence. This discrepancy has not been examined so far. The article analyses it by drawing on philosophical discussions concerning the validity of inference from absence, using probabilistic models that were originally developed to show that such inferences are weak and inconclusive. The analysis reveals that inference from absence can indeed be justified in many important situations of archaeological research, such as excavations carried out to explore the past existence and time-span of sedentary human habitation. The justification is closely related to the fact that archaeology explores the human past via its material remains. The same analysis points to instances where inference from absence can have comparable validity in other historical sciences, and to research questions in which archaeological inference from absence will be problematic or totally unwarranted.
This chapter presents a typology of the different kinds of inductive inferences we can draw from our evidence, based on the explanatory relationship between evidence and conclusion. Drawing on the literature on graphical models of explanation, I divide inductive inferences into (a) downwards inferences, which proceed from cause to effect, (b) upwards inferences, which proceed from effect to cause, and (c) sideways inferences, which proceed first from effect to cause and then from that cause to an additional effect. I further distinguish between direct and indirect forms of downwards and upwards inferences. I then show how we can subsume canonical forms of inductive inference mentioned in the literature, such as inference to the best explanation, enumerative induction, and analogical inference, under this typology. Along the way, I explore connections with probability and confirmation, epistemic defeat, the relation between abduction and enumerative induction, the compatibility of IBE and Bayesianism, and theories of epistemic justification.
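The three directions of inference can be made concrete on a toy causal structure with one cause and two effects. The structure and all probabilities below are invented for illustration and are not drawn from the chapter:

```python
# Toy causal structure: C -> E1, C -> E2, with E1 and E2 independent given C.
p_c = 0.3                       # P(C): prior probability of the cause
p_e1 = {True: 0.8, False: 0.1}  # P(E1 | C) and P(E1 | not-C)
p_e2 = {True: 0.7, False: 0.2}  # P(E2 | C) and P(E2 | not-C)

# Downwards (cause to effect): P(E1), marginalizing over the cause.
p_e1_marg = p_e1[True] * p_c + p_e1[False] * (1 - p_c)

# Upwards (effect to cause): P(C | E1), by Bayes' theorem.
p_c_given_e1 = p_e1[True] * p_c / p_e1_marg

# Sideways (effect to cause to sibling effect): P(E2 | E1),
# routing the information through the common cause.
p_e2_given_e1 = (p_e2[True] * p_c_given_e1
                 + p_e2[False] * (1 - p_c_given_e1))

print(round(p_e1_marg, 3), round(p_c_given_e1, 3), round(p_e2_given_e1, 3))
```

The sideways computation makes the dependence on the upwards step explicit: learning E1 raises the probability of C, which in turn raises the probability of its other effect E2.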
An influential suggestion about the relationship between Bayesianism and inference to the best explanation holds that IBE functions as a heuristic to approximate Bayesian reasoning. While this view promises to unify Bayesianism and IBE in a very attractive manner, important elements of the view have not yet been spelled out in detail. I present and argue for a heuristic conception of IBE on which IBE serves primarily to locate the most probable available explanatory hypothesis to serve as a working hypothesis in an agent’s further investigations. Along the way, I criticize what I consider to be an overly ambitious conception of the heuristic role of IBE, according to which IBE serves as a guide to absolute probability values. My own conception, by contrast, requires only that IBE can function as a guide to the comparative probability values of available hypotheses. This is shown to be a much more realistic role for IBE given the nature and limitations of the explanatory considerations with which IBE operates.
We argue that a modified version of Mill’s method of agreement can strongly confirm causal generalizations. This mode of causal inference implicates the explanatory virtues of mechanism, analogy, consilience, and simplicity, and we identify it as a species of Inference to the Best Explanation (IBE). Since rational causal inference provides normative guidance, IBE is not a heuristic for Bayesian rationality. We give it an objective Bayesian formalization, one that has no need of principles of indifference and yields responses to the Voltaire objection, van Fraassen’s Bad Lot objection, and John Norton’s recent objection to IBE.
In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse. In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it’s defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.
Arguing for mathematical realism on the basis of Field’s explanationist version of the Quine–Putnam Indispensability argument, Alan Baker has recently claimed to have found an instance of a genuine mathematical explanation of a physical phenomenon. While I agree that Baker presents a very interesting example in which mathematics plays an essential explanatory role, I show that this example, and the argument built upon it, begs the question against the mathematical nominalist.
This essay advances and develops a dynamic conception of inference rules and uses it to reexamine a long-standing problem about logical inference raised by Lewis Carroll’s regress.
Inference to the Best Explanation (IBE) is widely criticized for being an unreliable form of ampliative inference – partly because the explanatory hypotheses we have considered at a given time may all be false, and partly because there is an asymmetry between the comparative judgment on which an IBE is based and the absolute verdict that IBE is meant to license. In this paper, I present a further reason to doubt the epistemic merits of IBE and argue that it motivates moving to an inferential pattern in which IBE emerges as a degenerate limiting case. Since this inferential pattern is structurally similar to an argumentative strategy known as Inferential Robustness Analysis (IRA), it effectively combines the most attractive features of IBE and IRA into a unified approach to non-deductive inference.
De Ray argues that relying on inference to the best explanation (IBE) requires the metaphysical belief that most phenomena have explanations. I object that, instead, the metaphysical belief requires the use of IBE. De Ray himself uses IBE to establish the theistic claim that God is the cause of the metaphysical belief, and thus he has the burden of establishing the metaphysical belief independently of using IBE. The naturalistic claim that the world is the cause of the metaphysical belief is preferable to theism, contrary to what de Ray thinks.
This thesis brings together two concerns. The first is the nature of inference—what it is to infer—where inference is understood as a distinctive kind of conscious and self-conscious occurrence. The second concern is the possibility of doxastic agency. To be capable of doxastic agency is to be such that one is capable of directly exercising agency over one’s beliefs. It is to be capable of exercising agency over one’s beliefs in a way which does not amount to mere self-manipulation. Subjects who can exercise doxastic agency can settle questions for themselves. A challenge to the possibility of doxastic agency stems from the fact that we cannot believe or come to believe “at will”, where this in turn seems to be so because belief “aims at truth”. It must be explained how we are capable of doxastic agency despite the fact that we cannot believe or come to believe at will. On the orthodox ‘causalist’ conception of inference, for an inference to occur is for one act of acceptance to cause another in some specifiable “right way”. This conception of inference prevents its advocates from adequately seeing how reasoning could be a means to exercise doxastic agency, as it is natural to think it is. Suppose, for instance, that one reasons and concludes by inferring, where one’s inference yields belief in what one infers. Such an inference cannot be performed at will. We cannot infer at will when inference yields belief any more than we can believe or come to believe at will. When it comes to understanding the extent to which one could be exercising agency in such a case, the causalist conception of inference suggests that we must look to the causal history of one’s concluding act of acceptance, the nature of the act’s being determined by the way in which it is caused. What results is a picture on which such reasoning as a whole cannot be action. We are at best capable of actions of a kind which lead causally to belief fixation through “mental ballistics”.
The causalist account of inference, I argue, is in fact either inadequate or unmotivated. It either fails to accommodate the self-consciousness of inference or is not best placed to play the very explanatory role which it is put forward to play. On the alternative I develop, when one infers, one’s inference is the conscious event which is one’s act of accepting that which one is inferring. The act’s being an inference is determined, not by the way it is caused, but by the self-knowledge which it constitutively involves. This corrected understanding of inference renders the move from the challenge to the possibility of doxastic agency to the above ballistics picture no longer tempting. It also yields an account of how we are capable of exercising doxastic agency by reasoning despite being unable to believe or come to believe at will. In order to see how such reasoning could amount to the exercise of doxastic agency it needs to be conceived of appropriately. I suggest that paradigm reasoning which potentially amounts to the exercise of doxastic agency ought to be conceived of as primarily epistemic agency—agency the aim of which is knowledge. With inference conceived as suggested, I argue, it can be seen how engaging in such reasoning can just be to successfully exercise such agency.
What is an inference? Logicians and philosophers have proposed various conceptions of inference. I shall first highlight seven features that contribute to distinguish these conceptions. I shall then compare three conceptions to see which of them best explains the special force that compels us to accept the conclusion of an inference, if we accept its premises.
Proponents of the explanatory indispensability argument for mathematical platonism maintain that claims about mathematical entities play an essential explanatory role in some of our best scientific explanations. They infer that the existence of mathematical entities is supported by way of inference to the best explanation from empirical phenomena and therefore that there are the same sort of empirical grounds for believing in mathematical entities as there are for believing in concrete unobservables such as quarks. I object that this inference depends on a false view of how abductive considerations mediate the transfer of empirical support. More specifically, I argue that even if inference to the best explanation is cogent, and claims about mathematical entities play an essential explanatory role in some of our best scientific explanations, it doesn’t follow that the empirical phenomena that license those explanations also provide empirical support for the claim that mathematical entities exist.
True beliefs and truth-preserving inferences are, in some sense, good beliefs and good inferences. When an inference is valid though, it is not merely truth-preserving, but truth-preserving in all cases. This motivates my question: I consider a Modus Ponens inference, and I ask what its validity in particular contributes to the explanation of why the inference is, in any sense, a good inference. I consider the question under three different definitions of ‘case’, and hence of ‘validity’: the orthodox definition given in terms of interpretations or models, a metaphysical definition given in terms of possible worlds, and a substitutional definition defended by Quine. I argue that the orthodox notion is poorly suited to explain what's good about a Modus Ponens inference. I argue that there is something good that is explained by a certain kind of truth across possible worlds, but the explanation is not provided by metaphysical validity in particular; nothing of value is explained by truth across all possible worlds. Finally, I argue that the substitutional notion of validity allows us to correctly explain what is good about a valid inference.
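The orthodox, interpretation-based notion of validity at issue here can be checked mechanically for the propositional Modus Ponens inference by surveying all four truth-value assignments. This sketch illustrates only the target notion of validity, not the paper's argument about what that notion explains:

```python
from itertools import product

def implies(a, b):
    """Material conditional: false only when a is true and b is false."""
    return (not a) or b

def mp_valid():
    """Modus Ponens is valid iff the conclusion q is true in every
    interpretation in which both premises (p, and p -> q) are true."""
    return all(
        q
        for p, q in product([True, False], repeat=2)
        if p and implies(p, q)
    )

print(mp_valid())  # True: no interpretation makes the premises true and q false
```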
While we can judge and believe things by merely accepting testimony, we cannot make inferences by merely accepting testimony. A good theory of inference should explain this. The theories that are best suited to explain this fact seem to be theories that accept a so-called intuitional construal of Boghossian’s Taking Condition.
Recent epistemology of modality has seen a growing trend towards metaphysics-first approaches. Contrastingly, this paper offers a more philosophically modest account of justifying modal claims, focusing on the practices of scientific modal inferences. Two ways of making such inferences are identified and analyzed: actualist-manipulationist modality and relative modality. In AM, what is observed to be or not to be the case in actuality or under manipulations allows us to make modal inferences. AM-based inferences are fallible, but the same holds for practically all empirical inquiry. In RM, modal inferences are evaluated relative to what is kept fixed in a system, like a theory or a model. RM-based inferences are more certain but framework-dependent. While elements from both AM and RM can be found in some existing accounts of modality, it is worth highlighting them in their own right and isolating their features for closer scrutiny. This helps to establish their relevant epistemologies that are free from some strong philosophical assumptions often attached to them in the literature. We close by showing how combining these two routes amounts to a view that accounts for a rich variety of modal inferences in science.
Current debates surrounding the virtues and shortcomings of randomization are symptomatic of a lack of appreciation of the fact that causation can be inferred by two distinct inference methods, each requiring its own, specific experimental design. There is a non-statistical type of inference associated with controlled experiments in basic biomedical research; and a statistical variety associated with randomized controlled trials in clinical research. I argue that the main difference between the two hinges on the satisfaction of the comparability requirement, which is in turn dictated by the nature of the objects of study, namely homogeneous or heterogeneous populations of biological systems. Among other things, this entails that the objection according to which randomized experiments fail to provide better evidence for causation because randomization cannot guarantee comparability is mistaken.
Inferentialism is a theory in the philosophy of language which claims that the meanings of expressions are constituted by inferential roles or relations. Instead of a traditional model-theoretic semantics, it naturally lends itself to a proof-theoretic semantics, where meaning is understood in terms of inference rules within a proof system. Most work in proof-theoretic semantics has focused on logical constants, with comparatively little work on the semantics of non-logical vocabulary. Drawing on Robert Brandom’s notion of material inference and Greg Restall’s bilateralist interpretation of the multiple conclusion sequent calculus, I present a proof-theoretic semantics for atomic sentences and their constituent names and predicates. The resulting system has several interesting features: (1) the rules are harmonious and stable; (2) the rules create a structure analogous to familiar model-theoretic semantics; and (3) the semantics is compositional, in that the rules for atomic sentences are determined by those for their constituent names and predicates.
This invited article is a response to the paper “Quantum Misuse in Psychic Literature,” by Jack A. Mroczkowski and Alexis P. Malozemoff, published in this issue of the Journal of Near-Death Studies. While I sympathize with Mroczkowski’s and Malozemoff’s cause and goals, and I recognize the problem they attempted to tackle, I argue that their criticisms often overshoot the mark and end up adding to the confusion. I address nine specific technical points that Mroczkowski and Malozemoff accused popular writers in the fields of health care and parapsychology of misunderstanding and misrepresenting. I argue that, by and large—and contrary to Mroczkowski’s and Malozemoff’s claims—the statements made by these writers are often reasonable and generally consistent with the current state of play in foundations of quantum mechanics.
The idea that knowledge can be extended by inference from what is known seems highly plausible. Yet, as shown by familiar preface-paradox and lottery-type cases, the possibility of aggregating uncertainty casts doubt on its tenability. We show that these considerations go much further than previously recognized and significantly restrict the kinds of closure ordinary theories of knowledge can endorse. Meeting the challenge of uncertainty aggregation requires either restricting knowledge-extending inferences to single premises, or eliminating epistemic uncertainty in known premises. The first strategy, while effective, retains little of the original idea—conclusions even of modus ponens inferences from known premises are not always known. We then look at the second strategy, inspecting the most elaborate and promising attempt to secure the epistemic role of basic inferences, namely Timothy Williamson’s safety theory of knowledge. We argue that while it indeed has the merit of allowing basic inferences such as modus ponens to extend knowledge, Williamson’s theory faces formidable difficulties. These difficulties, moreover, arise from the very feature responsible for its virtue: the infallibilism of knowledge.
In this paper I adduce a new argument in support of the claim that IBE is an autonomous form of inference, based on a familiar yet surprisingly under-discussed problem for Hume’s theory of induction. I then use some insights thereby gleaned to argue that induction is really IBE, and draw some normative conclusions.
The notions of inference and default are used in pragmatics with different meanings, resulting in theoretical disputes that emphasize the differences between the various pragmatic approaches. This paper aims to show how the terminological and theoretical differences concerning these two terms result from considering inference and default from different points of view and levels of analysis. Such differences risk making dialogue between the theories extremely difficult. However, at a functional level of analysis the different theories, definitions, and approaches to interpretation can be compared and integrated. At this level, the standardization of pragmatic inferences can be regarded as the development of a specific type of presumptions, used to draw prima-facie interpretations.
This monograph is an in-depth and engaging discourse on the deeply cognitive roots of the human scientific quest. The process of making scientific inferences is continuous with the day-to-day inferential activity of individuals, and is predominantly inductive in nature. Inductive inference, which is fallible, exploratory, and open-ended, is of essential relevance in our incessant efforts at making sense of a complex and uncertain world around us, and covers a vast range of cognitive activities, among which scientific exploration constitutes the pinnacle. Inductive inference has a personal aspect to it, being rooted in the cognitive unconscious of individuals, which has recently been found to be of paramount importance in a wide range of complex cognitive processes. One other major aspect of the process of inference making, including the making of scientific inferences, is the role of a vast web of beliefs lodged in the human mind, as well as of a huge repertoire of heuristics, which constitute an important component of ‘unconscious intelligence’. Finally, human cognitive activity depends in large measure on emotions and affects, which operate mostly at an unconscious level. Of special importance in scientific inferential activity is the process of hypothesis making, which is examined in this book, along with the above aspects of inductive inference, at considerable depth. The book focuses on the inadequacy of the viewpoint of naive realism in understanding the contextual nature of scientific theories, where a cumulative progress towards an ultimate truth about Nature appears to be too simplistic a generalization. It poses a critique of the commonly perceived image of science, which is seen as the last word in logic and objectivity, the latter in the double sense of being independent of individual psychological propensities and, at the same time, approaching a correct understanding of the workings of a mind-independent nature.
Adopting the naturalist point of view, it examines the essential tension between the cognitive endeavours of individuals and scientific communities, immersed in belief systems and cultures, on the one hand, and the engagement with a mind-independent reality on the other. In the end, science emerges as an interpretation of nature, which is perceived by us only contextually, as successively emerging cross-sections of limited scope and extent. Successive waves of theory building in science appear as episodic and kaleidoscopic changes in perspective as certain in-built borders are crossed, rather than as a cumulative progress towards some ultimate truth. While written in an informal and conversational style, the book raises a number of deep and intriguing questions located at the interface of cognitive psychology and philosophy of science, meant for both the general lay reader and the specialist. Of particular interest is the way it explores the role of belief (aided by emotions and affects) in making the process of inductive inference possible, since belief is a subtle though all-pervasive cognitive factor not adequately investigated in the current literature.
Behavior oftentimes allows for many possible interpretations in terms of mental states, such as goals, beliefs, desires, and intentions. Reasoning about the relation between behavior and mental states is therefore considered to be an effortful process. We argue that people use simple strategies to deal with the high cognitive demands of mental-state inference. To test this hypothesis, we developed a computational cognitive model, which was able to simulate previous empirical findings: In two-player games, people apply simple strategies at first. They only start revising their strategies when these do not pay off. The model could simulate these findings by recursively attributing its own problem-solving skills to the other player, thus increasing the complexity of its own inferences. The model was validated by means of a comparison with findings from a developmental study in which the children demonstrated similar strategic developments.
Discussions about the nature of essence and about the inference problem for non-Humean theories of nomic modality have largely proceeded independently of each other. In this article I argue that the right conclusions to draw about the inference problem actually depend significantly on how best to understand the nature of essence. In particular, I argue that this conclusion holds for the version of the inference problem developed and defended by Alexander Bird. I argue that Bird’s own argument that this problem is fatal for David Armstrong’s influential theory of the laws of nature but not for dispositional essentialism is seriously flawed. In place of this argument, I develop an argument that whether Bird’s inference problem raises serious difficulties for Armstrong’s theory depends on the answers to substantial questions about how best to understand essence. The key consequence is that considerations about the nature of essence have significant, underappreciated implications for Armstrong’s theory.
In this paper, I explore a question about deductive reasoning: why am I in a position to immediately infer some deductive consequences of what I know, but not others? I show why the question cannot be answered in the most natural ways, in particular in Descartes’s way. I then go on to introduce a new approach to answering the question, an approach inspired by Hume’s view of inductive reasoning.
I examine and resolve an exegetical dichotomy between two main interpretations of Peirce’s theory of abduction, namely, the Generative Interpretation and the Pursuitworthiness Interpretation. According to the former, abduction is the instinctive process of generating explanatory hypotheses through a mental faculty called insight. According to the latter, abduction is a rule-governed procedure for determining the relative pursuitworthiness of available hypotheses and adopting the worthiest one for further investigation—such as empirical tests—based on economic considerations. It is shown that the Generative Interpretation is inconsistent with a fundamental fact of logic for Peirce—i.e., abduction is a kind of inference—and the Pursuitworthiness Interpretation is flawed and inconsistent with Peirce’s naturalistic explanation for the possibility of science and his view about the limitations of classical scientific method. Changing the exegetical locus classicus from the logical form of abduction to insight and economy of research, I argue for the Unified Interpretation, according to which abduction includes both instinctive hypothesis generation and rule-governed hypothesis ranking. I show that the Unified Interpretation is immune to the objections successfully raised against the Generative and the Pursuitworthiness interpretations.
It is well known that there are, at least, two sorts of cases where one should not prefer a direct inference based on a narrower reference class, in particular: cases where the narrower reference class is gerrymandered, and cases where one lacks an evidential basis for forming a precise-valued frequency judgment for the narrower reference class. I here propose (1) that the preceding exceptions exhaust the circumstances where one should not prefer direct inference based on a narrower reference class, and (2) that minimal frequency information for a narrower (non-gerrymandered) reference class is sufficient to yield the defeat of a direct inference for a broader reference class. By the application of a method for inferring relatively informative expected frequencies, I argue that the latter claim does not result in an overly incredulous approach to direct inference. The method introduced here permits one to infer a relatively informative expected frequency for a reference class R', given frequency information for a superset of R' and/or frequency information for a sample drawn from R'.
This chapter argues that the only tenable unconscious inferences theories of cognitive achievement are ones that employ a theory-internal technical notion of representation, but that once we give cash-value definitions of the relevant notions of representation and inference, there is little left of the ordinary notion of representation. We suggest that the real value of talk of unconscious inferences lies in (a) their heuristic utility in helping us to make fruitful predictions, such as about illusions, and (b) their providing a high-level description of the functional organization of subpersonal faculties that makes clear how they equip an agent to navigate its environment and pursue its goals.
In previous work, we studied four well known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems.