A certain type of inference rule in modal logics, generalizing Gabbay's Irreflexivity rule, is introduced, and some general completeness results are proved for modal logics axiomatized with such rules.
In the theory of meaning, it is common to contrast truth-conditional theories of meaning with theories which identify the meaning of an expression with its use. One rather exact version of the somewhat vague use-theoretic picture is the view that the standard rules of inference determine the meanings of logical constants. Often this idea also functions as a paradigm for more general use-theoretic approaches to meaning. In particular, the idea plays a key role in the anti-realist program of Dummett and his followers. In the theory of truth, a key distinction is now made between substantial theories and minimalist or deflationist views. According to the former, truth is a genuine substantial property of the truth-bearers, whereas according to the latter, truth does not have any deeper essence, but all that can be said about truth is contained in T-sentences (sentences having the form: ‘P’ is true if and only if P). There is no necessary analytic connection between the above theories of meaning and truth, but they nevertheless have some connections. Realists often favour some kind of truth-conditional theory of meaning and a substantial theory of truth (in particular, the correspondence theory). Minimalists and deflationists on truth characteristically advocate the use theory of meaning (e.g. Horwich). Semantic anti-realism (e.g. Dummett, Prawitz) forms an interesting middle case: its starting point is the use theory of meaning, but it usually accepts a substantial view of truth, namely that truth is to be equated with verifiability or warranted assertability. When truth is so understood, it is also possible to accept the idea that meaning is closely related to truth-conditions, and hence the conflict between use theories and truth-conditional theories in a sense disappears on this view.
A background assumption of this paper is that the repertoire of inference schemes available to humanity is not fixed, but subject to change as new schemes are invented or refined and as old ones become obsolete or are abandoned. This is particularly visible in areas like health and environmental sciences, where enormous societal investment has been made in finding ways to reach more dependable conclusions. Computational modeling of argumentation, at least for the discourse in expert fields, will require the possibility of modeling change in a stock of schemes that may be applied to generate conclusions from data. We examine the Randomized Clinical Trial, an inference scheme established within medical science in the mid-20th century, and show that its successful defense by means of practical reasoning allowed for its black-boxing as an inference scheme that generates (and warrants belief in) conclusions about the effects of medical treatments. Modeling the use of a scheme is well understood; here we focus on modeling how the scheme comes to be established so that it is available for use.
In this paper I argue that pluralism at the level of logical systems requires a certain monism at the meta-logical level, and so, in a sense, there cannot be pluralism all the way down. The adequate alternative logical systems bottom out in a shared basic meta-logic, and as such, logical pluralism is limited. I argue that the content of this basic meta-logic must include the analogue of the logical rules Modus Ponens (MP) and Universal Instantiation (UI). I show this through a detailed analysis of the ‘adoption problem’, which manifests something special about MP and UI. It appears that MP and UI underwrite the very nature of a logical rule of inference, due to all rules of inference being conditional and universal in their structure. As such, all logical rules presuppose MP and UI, making MP and UI self-governing, basic, unadoptable, and (most relevantly to logical pluralism) required in the meta-logic for the adequacy of any logical system.
In attempting to form rational personal probabilities by direct inference, it is usually assumed that one should prefer frequency information concerning more specific reference classes. While the preceding assumption is intuitively plausible, little energy has been expended in explaining why it should be accepted. In the present article, I address this omission by showing that, among the principled policies that may be used in setting one’s personal probabilities, the policy of making direct inferences with a preference for frequency information for more specific reference classes yields personal probabilities whose accuracy is optimal, according to all proper scoring rules, in situations where all of the relevant frequency information is point-valued. Assuming that frequency information for narrower reference classes is preferred when the relevant frequency statements are point-valued, a dilemma arises when choosing whether to make a direct inference based upon relatively precise-valued frequency information for a broad reference class, R, or upon relatively imprecise-valued frequency information for a more specific reference class, R*. I address such cases by showing that it is often possible to make a precise-valued frequency judgment regarding R* based on precise-valued frequency information for R, using standard principles of direct inference. Having made such a frequency judgment, the dilemma of choosing between the two direct inferences is removed, and one may proceed by using the precise-valued frequency estimate for the more specific reference class as a premise for direct inference.
This essay advances and develops a dynamic conception of inference rules and uses it to reexamine a long-standing problem about logical inference raised by Lewis Carroll’s regress.
Byrne offers a novel interpretation of the idea that the mind is transparent to its possessor, and that one knows one’s own mind by looking out at the world. This paper argues that his attempts to extend this picture of self-knowledge force him to sacrifice the theoretical parsimony he presents as the primary virtue of his account. The paper concludes by discussing two general problems transparency accounts of self-knowledge must address.
It has been common wisdom for centuries that scientific inference cannot be deductive; if it is inference at all, it must be a distinctive kind of inductive inference. According to demonstrative theories of induction, however, important scientific inferences are not inductive in the sense of requiring ampliative inference rules at all. Rather, they are deductive inferences with sufficiently strong premises. General considerations about inferences suffice to show that there is no difference in justification between an inference construed demonstratively or ampliatively. The inductive risk may be shouldered by premises or rules, but it cannot be shirked. Demonstrative theories of induction might, nevertheless, better describe scientific practice. And there may be good methodological reasons for constructing our inferences one way rather than the other. By exploring the limits of these possible advantages, I argue that scientific inference is neither of essence deductive nor of essence inductive.
Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions interpreted as expressing high conditional probabilities. In the present article, we investigate four prominent LP systems (namely, systems O, P, Z, and QC) by means of computer simulations. The results reported here extend our previous work in this area, and evaluate the four systems in terms of the expected utility of the dispositions to act that derive from the conclusions that the systems license. In addition to conforming to the dominant paradigm for assessing the rationality of actions and decisions, our present evaluation complements our previous work, since our previous evaluation may have been too severe in its assessment of inferences to false and uninformative conclusions. In the end, our new results provide additional support for the conclusion that (of the four systems considered) inference by system Z offers the best balance of error avoidance and inferential power. Our new results also suggest that improved performance could be achieved by a modest strengthening of system Z.
There are several arguments for internalist theses concerning our justification to employ rules of inference (and belief-forming methods more generally). In this paper, I discuss three such arguments – one based on simple cases, one based on a general conception of epistemic responsibility, and one based on our intuitive reactions to skeptical scenarios. I argue that none of these arguments is successful. Along the way, I argue that there are belief-forming methods that thinkers are epistemically entitled to employ – that is, thinkers are justified in employing the methods as basic in their thought but are not so justified by virtue of believing that the methods are reliable or otherwise have some positive normative status. Finally, I briefly discuss three candidate accounts of epistemic entitlement to belief-forming methods.
The scientific understanding of cognition and consciousness is currently hampered by the lack of rigorous and universally accepted definitions that permit comparative studies. This paper proposes new functional and unambiguous definitions for cognition and consciousness in order to provide clearly defined boundaries within which general theories of cognition and consciousness may be developed. The proposed definitions are built upon the construction and manipulation of reality representation, decision making and learning, and are scoped in terms of an underlying logical structure. It is argued that the representation of reality also necessitates the concept of absence and the capacity to perform transitive inference. Explicit predictions relating to these new definitions, along with possible ways to test them, are also described and discussed.
Lewis Carroll’s 1895 paper “Achilles and the Tortoise” showed that we need a distinction between rules of inference and premises. We cannot, on pain of regress, treat all rules simply as further premises in an argument. But Carroll’s paper doesn’t say very much about what rules there must be. Indeed, it is consistent with what Carroll says there to think that the only rule is →-elimination. You might think that modern Bayesians, who seem to think that the only rule of inference they need is conditionalisation, have taken just this lesson from Carroll. But obviously nothing in Carroll’s argument rules out there being other rules as well.
We are justified in employing the rule of inference Modus Ponens (or one much like it) as basic in our reasoning. By contrast, we are not justified in employing a rule of inference that permits inferring to some difficult mathematical theorem from the relevant axioms in a single step. Such an inferential step is intuitively “too large” to count as justified. What accounts for this difference? In this paper, I canvass several possible explanations. I argue that the most promising approach is to appeal to features like usefulness or indispensability to important or required cognitive projects. On the resulting view, whether an inferential step counts as large or small depends on the importance of the relevant rule of inference in our thought.
In their book EVALUATING CRITICAL THINKING, Stephen Norris and Robert Ennis say: “Although it is tempting to think that certain [unstated] assumptions are logically necessary for an argument or position, they are not. So do not ask for them.” Numerous writers of introductory logic texts as well as various highly visible standardized tests (e.g., the LSAT and GRE) presume that the Norris/Ennis view is wrong; the presumption is that many arguments have (unstated) necessary assumptions and that readers and test takers can reasonably be expected to identify such assumptions. This paper proposes and defends criteria for determining necessary assumptions of arguments. Both theoretical and empirical considerations are brought to bear.
For deductive reasoning to be justified, it must be guaranteed to preserve truth from premises to conclusion; and for it to be useful to us, it must be capable of informing us of something. How can we capture this notion of information content, whilst respecting the fact that the content of the premises, if true, already secures the truth of the conclusion? This is the problem I address here. I begin by considering and rejecting several accounts of informational content. I then develop an account on which informational contents are indeterminate in their membership. This allows there to be cases in which it is indeterminate whether a given deduction is informative. Nevertheless, on the picture I present, there are determinate cases of informative (and determinate cases of uninformative) inferences. I argue that the model I offer is the best way for an account of content to respect the meaning of the logical constants and the inference rules associated with them without collapsing into a classical picture of content, unable to account for informative deductive inferences.
My goal is to illuminate truth-making by way of illuminating the relation of making. My strategy is not to ask what making is, in the hope of a metaphysical theory about its nature. It's rather to look first to the language of making. The metaphor behind making refers to agency. It would be absurd to suggest that claims about making are claims about agency. It is not absurd, however, to propose that the concept of making somehow emerges from some feature to do with agency. That's the contention to be explored in this paper. The way to do this is through expressivism. Truth-making claims, and making-claims generally, are claims in which we express mental states linked to our manipulation of concepts, like truth. In particular, they express dispositions to undertake derivations using inference rules, in which introduction rules have a specific role. I then show how this theory explains our intuitions about truth's asymmetric dependence on being.
In a 2002 article Stewart Cohen advances the “bootstrapping problem” for what he calls “basic justification theories,” and in a 2010 follow-up he offers a solution to the problem, exploiting the idea that suppositional reasoning may be used with defeasible as well as with deductive inference rules. To curtail the form of bootstrapping permitted by basic justification theories, Cohen insists that subjects must know their perceptual faculties are reliable before perception can give them knowledge. But how is such knowledge of reliability to be acquired if not through perception itself? Cohen proposes that such knowledge may be acquired a priori through suppositional reasoning. I argue that his strategy runs afoul of a plausible view about how epistemic principles function; in brief, I argue that one must actually satisfy the antecedent of an epistemic principle, not merely suppose that one does, to acquire any justification by its means – even justification for a merely conditional proposition.
There are many domains about which we think we are reliable. When there is prima facie reason to believe that there is no satisfying explanation of our reliability about a domain given our background views about the world, this generates a challenge to our reliability about the domain or to our background views. This is what is often called the reliability challenge for the domain. In previous work, I discussed the reliability challenges for logic and for deductive inference. I argued for four main claims: First, there are reliability challenges for logic and for deduction. Second, these reliability challenges cannot be answered merely by providing an explanation of how it is that we have the logical beliefs and employ the deductive rules that we do. Third, we can explain our reliability about logic by appealing to our reliability about deduction. Fourth, there is a good prospect for providing an evolutionary explanation of the reliability of our deductive reasoning. In recent years, a number of arguments have appeared in the literature that can be applied against one or more of these four theses. In this paper, I respond to some of these arguments. In particular, I discuss arguments by Paul Horwich, Jack Woods, Dan Baras, Justin Clarke-Doane, and Hartry Field.
This essay (a revised version of my undergraduate honors thesis at Stanford) constructs a theory of analogy as it applies to argumentation and reasoning, especially as used in fields such as philosophy and law. The word analogy has been used in different senses, which the essay defines. The theory developed herein applies to analogia rationis, or analogical reasoning. Building on the framework of situation theory, a type of logical relation called determination is defined. This determination relation solves a puzzle about analogy in the context of logical argument, namely, whether an analogous situation contributes anything logically over and above what could be inferred from the application of prior knowledge to a present situation. Scholars of reasoning have often claimed that analogical arguments are never logically valid, and that they therefore lack cogency. However, when the right type of determination structure exists, it is possible to prove that projecting a conclusion inferred by analogy onto the situation about which one is reasoning is both valid and non-redundant. Various other properties and consequences of the determination relation are also proven. Some analogical arguments are based on principles such as similarity, which are not logically valid. The theory therefore provides us with a way to distinguish between legitimate and illegitimate arguments. It also provides an alternative to procedures based on the assessment of similarity for constructing analogies in artificial intelligence systems.
ABSTRACT: This paper provides a naturalistic account of inference. We posit that the core of inference is constituted by bare inferential transitions (BITs), transitions between discursive mental representations guided by rules built into the architecture of cognitive systems. In further developing the concept of BITs, we provide an account of what Boghossian [2014] calls ‘taking’—that is, the appreciation of the rule that guides an inferential transition. We argue that BITs are sufficient for implicit taking, and then, to analyse explicit taking, we posit rich inferential transitions, which are transitions that the subject is disposed to endorse.
We analyze the logical form of the domain knowledge that grounds analogical inferences and generalizations from a single instance. The form of the assumptions which justify analogies is given schematically as the "determination rule", so called because it expresses the relation of one set of variables determining the values of another set. The determination relation is a logical generalization of the different types of dependency relations defined in database theory. Specifically, we define determination as a relation between schemata of first order logic that have two kinds of free variables: (1) object variables and (2) what we call "polar" variables, which hold the place of truth values. Determination rules facilitate sound rule inference and valid conclusions projected by analogy from single instances, without implying what the conclusion should be prior to an inspection of the instance. They also provide a way to specify what information is sufficiently relevant to decide a question, prior to knowledge of the answer to the question.
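The core idea of a determination relation can be sketched computationally. The following is a minimal illustration of our own (not the paper's formalism): a set of attributes P determines an attribute Q over a body of cases if any two cases agreeing on P also agree on Q; when that holds, a single observed instance licenses projecting Q onto a new case by analogy. All names and the example data here are hypothetical.

```python
def determines(cases, p_keys, q_key):
    """True if the attributes in p_keys jointly determine q_key across cases."""
    seen = {}
    for case in cases:
        signature = tuple(case[k] for k in p_keys)
        if signature in seen and seen[signature] != case[q_key]:
            return False  # two cases agree on P but differ on Q
        seen[signature] = case[q_key]
    return True

def project_by_analogy(source, target, p_keys, q_key):
    """If source and target agree on all P-attributes, project source's Q."""
    if all(source[k] == target[k] for k in p_keys):
        return source[q_key]
    return None  # the analogy does not apply

# Hypothetical illustration: nationality determines native language.
cases = [
    {"nationality": "BR", "language": "Portuguese"},
    {"nationality": "FR", "language": "French"},
]
assert determines(cases, ["nationality"], "language")
assert project_by_analogy(cases[1], {"nationality": "FR"},
                          ["nationality"], "language") == "French"
```

Given the determination assumption, the projected conclusion is valid yet non-redundant in the abstract's sense: it could not have been read off the background knowledge without inspecting the single source instance.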
The “Game of the Rule” is easy enough: I give you the beginning of a sequence of numbers (say) and you have to figure out how the sequence continues, to uncover the rule by means of which the sequence is generated. The game depends on two obvious constraints, namely (1) that the initial segment uniquely identify the sequence, and (2) that the sequence be non-random. As it turns out, neither constraint can fully be met, among other reasons because the relevant notion of randomness is either vacuous or undecidable. This may not be a problem when we play for fun. It is, however, a serious problem when it comes to playing the game for real, i.e., when the player to issue the initial segment is not one of us but the world out there, the sequence consisting not of numbers (say) but of the events that make up our history. Moreover, when we play for fun we know exactly what initial segment to focus on, but when we play for real we don’t even know that. This is the core difficulty in the philosophy of the inductive sciences.
This paper is a response to Anthony Brueckner's critique of my essay "The Self-Defeating Character of Skepticism," which appeared in Philosophy and Phenomenological Research in 1992. In this reply I contend that the three main avenues by which one might plausibly account for one's self-awareness are unavailable to an individual who is restricted to the skeptic's epistemic ground rules. First, all-encompassing doubt about the world cancels our "external" epistemic access via perception to ourselves as material individuals in the world. Second, one does not have direct epistemic access to one's substantial self through introspection, since the self as such is not a proper object of inner awareness. Third, we cannot claim, as Descartes did, that we have indirect epistemic access to the substantial self by inference from the occurrence of experiences. The summary conclusion for which I argue is that, if we are to account for our self-knowledge, we cannot adopt the purely subjective epistemological stance that is at the heart of global skepticism.
Replacing Truth. Kevin Scharp - 2007 - Inquiry: An Interdisciplinary Journal of Philosophy 50 (6): 606–621.
Of the dozens of purported solutions to the liar paradox published in the past fifty years, the vast majority are "traditional" in the sense that they reject one of the premises or inference rules that are used to derive the paradoxical conclusion. Over the years, however, several philosophers have developed an alternative to the traditional approaches; according to them, our very competence with the concept of truth leads us to accept that the reasoning used to derive the paradox is sound. That is, our conceptual competence leads us into inconsistency. I call this alternative the inconsistency approach to the liar. Although this approach has many positive features, I argue that several of the well-developed versions of it that have appeared recently are unacceptable. In particular, they do not recognize that if truth is an inconsistent concept, then we should replace it with new concepts that do the work of truth without giving rise to paradoxes. I outline an inconsistency approach to the liar paradox that satisfies this condition.
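The derivation the abstract alludes to can be stated compactly in a standard textbook reconstruction (ours, not Scharp's own formulation), using the T-schema and classical inference rules:

```latex
% Standard liar derivation: let $\lambda$ be a sentence such that
% $\lambda \leftrightarrow \neg T\langle\lambda\rangle$ (by diagonalization).
\begin{align*}
&1.\quad T\langle\lambda\rangle \leftrightarrow \lambda
  && \text{T-schema instance} \\
&2.\quad \lambda \leftrightarrow \neg T\langle\lambda\rangle
  && \text{diagonalization} \\
&3.\quad T\langle\lambda\rangle \leftrightarrow \neg T\langle\lambda\rangle
  && \text{from 1, 2, transitivity of } \leftrightarrow \\
&4.\quad \bot
  && \text{from 3, classical logic}
\end{align*}
```

A traditional solution rejects a premise (step 1 or 2) or one of the classical rules used at steps 3–4; the inconsistency approach instead holds that competence with the concept of truth commits us to every step.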
Contemporary reasoning about health is infused with the work products of experts, and expert reasoning about health itself is an active site for invention and design. Building on Toulmin’s largely undeveloped ideas on field-dependence, we argue that expert fields can develop new inference rules that, together with the backing they require, become accepted ways of drawing and defending conclusions. The new inference rules themselves function as warrants, and we introduce the term “warranting device” to refer to an assembly of the rule plus whatever material, procedural, and institutional resources are required to assure its dependability. We present a case study on the Cochrane Review, a new method for synthesizing evidence across large numbers of scientific studies. After reviewing the evolution and current structure of the device, we discuss the distinctive kinds of critical questions that may be raised around Cochrane Reviews, both within the expert field and beyond. Although Toulmin’s theory of field-dependence is often criticized for its relativism, we find that, as a matter of practical fact, field-specific warrants do not enjoy immunity from external critique. On the contrary, they can be opened to evaluation and critique from any interested perspective.
A dynamic semantics for iffy oughts offers an attractive alternative to the folklore that Chisholm's paradox enforces an unhappy choice between the intuitive inference rules of factual and deontic detachment. The first part of the story told here shows how a dynamic theory about ifs and oughts gives rise to a nonmonotonic perspective on deontic discourse and reasoning that elegantly removes the air of paradox from Chisholm's puzzle without sacrificing either of the two detachment principles. The second part of the story showcases two bonus applications of the framework suggested here: it offers a response to Forrester's gentle murder paradox and avoids Kolodny and MacFarlane's miners paradox about deontic reasoning under epistemic uncertainty. A comparison between the dynamic semantic proposal made in this paper and a more conservative approach combining a static semantics with a dynamic pragmatics is provided.
This paper addresses a family of issues surrounding the biological phenomenon of resistance and its representation in realist ontologies. The treatments of resistance terms in various existing ontologies are examined and found to be either overly narrow, internally inconsistent, or otherwise problematic. We propose a more coherent characterization of resistance in terms of what we shall call blocking dispositions, which are collections of mutually coordinated dispositions of such a sort that they cannot undergo simultaneous realization within a single bearer. A definition of ‘protective resistance’ is proposed for use in the Infectious Disease Ontology (IDO) and we show how this definition can be used to characterize the antibiotic resistance in Methicillin-Resistant Staphylococcus aureus (MRSA). The ontological relations between entities in our MRSA case study are used alongside a series of logical inference rules to illustrate logical reasoning about resistance. A description logic representation of blocking dispositions is also provided. We demonstrate that our characterization of resistance is sufficiently general to cover two other cases of resistance in the infectious disease domain involving HIV and malaria.
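The general mechanism of applying inference rules over asserted ontology relations can be sketched with a small forward-chaining engine. This is a schematic illustration of our own; the fact and rule names are hypothetical placeholders loosely inspired by the MRSA example, not IDO's actual axioms.

```python
def forward_chain(facts, rules):
    """Apply if-then rules to a set of facts until no new fact is derivable.

    Each rule is a pair (premises, conclusion) where premises is a frozenset
    of facts that must all hold for the conclusion to be added.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical asserted relations and a hypothetical inference rule.
facts = {
    "bears(MRSA, blocking_disposition)",
    "blocks(blocking_disposition, methicillin_binding)",
}
rules = [
    (frozenset({"bears(MRSA, blocking_disposition)",
                "blocks(blocking_disposition, methicillin_binding)"}),
     "resists(MRSA, methicillin)"),
]
derived = forward_chain(facts, rules)
assert "resists(MRSA, methicillin)" in derived
```

The design point is simply that the resistance conclusion is never asserted directly; it is derived from the bearer and blocking relations by the rule, which is how such inference rules let an ontology support automated reasoning.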
What the world needs now is another theory of vagueness. Not because the old theories are useless. Quite the contrary, the old theories provide many of the materials we need to construct the truest theory of vagueness ever seen. The theory shall be similar in motivation to supervaluationism, but more akin to many-valued theories in conceptualisation. What I take from the many-valued theories is the idea that some sentences can be truer than others. But I say very different things about the ordering over sentences this relation generates. I say it is not a linear ordering, so it cannot be represented by the real numbers. I also argue that since there is higher-order vagueness, any mapping between sentences and mathematical objects is bound to be inappropriate. This is no cause for regret; we can say all we want to say by using the comparative ‘truer than’ without mapping it onto some mathematical objects. From supervaluationism I take the idea that we can keep classical logic without keeping the familiar bivalent semantics for classical logic. But my preservation of classical logic is more comprehensive than is normally permitted by supervaluationism, for I preserve classical inference rules as well as classical sequents. And I do this without relying on the concept of acceptable precisifications as an unexplained explainer. The world does not need another guide to varieties of theories of vagueness, especially since Timothy Williamson (1994) and Rosanna Keefe (2000) have already provided quite good guides. I assume throughout familiarity with popular theories of vagueness.
The question of whether the Pyrrhonist adheres to certain logical principles, criteria of justification, and inference rules is of central importance for the study of Pyrrhonism. Its significance lies in the fact that, whereas the Pyrrhonist describes his philosophical stance and argues against the Dogmatists by means of what may be considered a rational discourse, adherence to any such principles, criteria, and rules does not seem compatible with the radical character of his skepticism. Hence, if the Pyrrhonist does endorse them, one must conclude that he is inconsistent in his outlook. Despite its import, the question under consideration has not received, in the vast literature on Pyrrhonism of the past three decades, all the attention it deserves. In the present paper, I do not propose to provide a full examination of the Pyrrhonist’s attitude towards rationality, but to focus on the question of whether he endorses the law of non-contradiction (LNC). However, I will also briefly tackle the question of the Pyrrhonist’s outlook on both the canons of rational justification at work in the so-called Five Modes of Agrippa and the logical rules of inference. In addition, given that the LNC is deemed a fundamental principle of rationality, determining the Pyrrhonist’s attitude towards it will allow us to understand his general attitude towards rationality.
ABSTRACT: A detailed presentation of the Stoic theory of arguments, including truth-value changes of arguments, Stoic syllogistic, Stoic indemonstrable arguments, Stoic inference rules (themata), including cut rules and antilogism, argumental deduction, elements of relevance logic in Stoic syllogistic, the question of the completeness of Stoic logic, and Stoic arguments valid in the specific sense, e.g. "Dio says it is day. But Dio speaks truly. Therefore it is day." A more formal and more detailed account of the Stoic theory of deduction can be found in S. Bobzien, Stoic Syllogistic, OSAP 1996.
This essay presents a philosophical and computational theory of the representation of de re, de dicto, nested, and quasi-indexical belief reports expressed in natural language. The propositional Semantic Network Processing System (SNePS) is used for representing and reasoning about these reports. In particular, quasi-indicators (indexical expressions occurring in intentional contexts and representing uses of indicators by another speaker) pose problems for natural-language representation and reasoning systems, because--unlike pure indicators--they cannot be replaced by coreferential NPs without changing the meaning of the embedding sentence. Therefore, the referent of the quasi-indicator must be represented in such a way that no invalid coreferential claims are entailed. The importance of quasi-indicators is discussed, and it is shown that all four of the above categories of belief reports can be handled by a single representational technique using belief spaces containing intensional entities. Inference rules and belief-revision techniques for the system are also examined.
It is widely taken that the first-order part of Frege's Begriffsschrift is complete. However, there does not seem to have been a formal verification of this received claim. The general concern is that Frege's system is one axiom short in the first-order predicate calculus compared to what is by now the standard first-order theory. Yet Frege has one extra inference rule in his system. The question, then, is whether Frege's first-order calculus is still deductively sufficient as far as first-order completeness is concerned. In this short note we confirm that the missing axiom is derivable from his stated axioms and inference rules, and hence the logic system in the Begriffsschrift is indeed first-order complete.
Applying good inductive rules inside the scope of suppositions leads to implausible results. I argue it is a mistake to think that inductive rules of inference behave anything like 'inference rules' in natural deduction systems. And this implies that it isn't always true that good arguments can be run 'off-line' to gain a priori knowledge of conditional conclusions.
It is one thing for a given proposition to follow or not to follow from a given set of propositions, and it is quite another thing for it to be shown either that the given proposition follows or that it does not follow.* Using a formal deduction to show that a conclusion follows and using a countermodel to show that a conclusion does not follow are both traditional practices recognized by Aristotle and used down through the history of logic. These practices presuppose, respectively, a criterion of validity and a criterion of invalidity, each of which has been extended and refined by modern logicians: deductions are studied in formal syntax (proof theory) and countermodels are studied in formal semantics (model theory). The purpose of this paper is to compare these two criteria to the corresponding criteria employed in Boole's first logical work, The Mathematical Analysis of Logic (1847). In particular, this paper presents a detailed study of the relevant metalogical passages and an analysis of Boole's symbolic derivations. It is well known, of course, that Boole's logical analysis of compound terms (involving 'not', 'and', 'or', 'except', etc.) contributed to the enlargement of the class of propositions and arguments formally treatable in logic. The present study shows, in addition, that Boole made significant contributions to the study of deductive reasoning. He identified the role of logical axioms (as opposed to inference rules) in formal deductions, and he conceived of the idea of an axiomatic deductive system (which yields logical truths by itself and which yields consequences when applied to arbitrary premises). Nevertheless, surprisingly, Boole's attempt to implement his idea of an axiomatic deductive system involved striking omissions: Boole does not use his own formal deductions to establish validity.
Boole does give symbolic derivations, several of which are vitiated by "Boole's Solutions Fallacy": the fallacy of supposing that a solution to an equation is necessarily a logical consequence of the equation. This fallacy seems to have led Boole to confuse equational calculi (i.e., methods for generating solutions) with deduction procedures (i.e., methods for generating consequences). The methodological confusion is closely related to the fact, shown in detail below, that Boole had adopted an unsound criterion of validity. It is also shown that Boole totally ignored the countermodel criterion of invalidity. Careful examination of the text does not reveal with certainty a test for invalidity which was adopted by Boole. However, we have isolated a test that he seems to use in this way, and we show that this test is ineffectual in the sense that it does not serve to identify invalid arguments. We go beyond the simple goal stated above. Besides comparing Boole's earliest criteria of validity and invalidity with those traditionally (and still generally) employed, this paper also investigates the framework and details of The Mathematical Analysis of Logic.
There are two foundational, but not fully developed, ideas in paraconsistency, namely, the duality between paraconsistent and intuitionistic paradigms, and the introduction of logical operators that express meta-logical notions in the object language. The aim of this paper is to show how these two ideas can be adequately accomplished by the Logics of Formal Inconsistency (LFIs) and by the Logics of Formal Undeterminedness (LFUs). LFIs recover the validity of the principle of explosion in a paraconsistent scenario, while LFUs recover the validity of the principle of excluded middle in a paracomplete scenario. We introduce definitions of duality between inference rules and connectives that allow comparing rules and connectives that belong to different logics. Two formal systems are studied, the logics mbC and mbD, that display the duality between paraconsistency and paracompleteness as a duality between inference rules added to a common core; in the case studied here, this common core is classical positive propositional logic (CPL+). The logics mbC and mbD are equipped with recovery operators that restore classical logic for, respectively, consistent and determined propositions. These two logics are then combined, obtaining a pair of logics of formal inconsistency and undeterminedness (LFIUs), namely, mbCD and mbCDE. The logic mbCDE exhibits some nice duality properties. Besides, it is simultaneously paraconsistent and paracomplete, and able to recover the principles of excluded middle and explosion at once. The last sections offer an algebraic account for such logics by adapting the swap-structures semantics framework of the LFIs to the LFUs. This semantics highlights some subtle aspects of these logics, and allows us to prove decidability by means of finite non-deterministic matrices.
James Van Cleve raises some objections to my attempt to solve the bootstrapping problem for what I call "basic justification theories." I argue that given the inference rules endorsed by basic justification theorists, we are a priori (propositionally) justified in believing that perception is reliable. This blocks the bootstrapping result.
A graph-theoretic account of fibring of logics is developed, capitalizing on the interleaving characteristics of fibring at the linguistic, semantic and proof levels. Fibring of two signatures is seen as a multi-graph (m-graph) where the nodes and the m-edges include the sorts and the constructors of the signatures at hand. Fibring of two models is a multi-graph (m-graph) where the nodes and the m-edges are the values and the operations in the models, respectively. Fibring of two deductive systems is an m-graph whose nodes are language expressions and whose m-edges represent the inference rules of the two original systems. The sobriety of the approach is confirmed by proving that all the fibring notions are universal constructions. This graph-theoretic view is general enough to accommodate very different fibrings of propositional-based logics, encompassing logics with non-deterministic semantics, logics with an algebraic semantics, logics with partial semantics and substructural logics, among others. Soundness and weak completeness are proved to be preserved under very general conditions. Strong completeness is also shown to be preserved under tighter conditions. In this setting, the collapsing problem appearing in several combinations of logic systems can be avoided.
Our online interaction with information-systems may well provide the largest arena of formal logical reasoning in the world today. Presented here is a critique of the foundations of Logic, in which the metaphysical assumptions of such 'closed world' reasoning are contrasted with those of traditional logic. Closed worlds mostly employ a syntactic alternative to formal language, namely recording data in files. Whilst this may be unfamiliar as logical syntax, it is argued here that propositions are expressed by data stored in files which are essentially non-linguistic and so cannot be expressed by simple formulae F(a), with the inference rules normally used in Logic. Hence, the syntax of data may be said to define a fundamentally new kind of logical form for simple propositions. In this way, the logic of closed systems is shown to be non-classical, differing from traditional logic in its truth-conditions, inferences and metaphysics. This paper will be concerned mainly with how the reference and certain inferences in such a closed system differ metaphysically from classical logic.
It is shown that a complete axiomatization of classical non-tautologies can be obtained by taking F (falsehood) as the sole axiom along with the two inference rules: (i) if A is a substitution instance of B, then A |– B; and (ii) if A is obtained from B by replacement of equivalent sentences, then A |– B (counting as equivalent the pairs {T, ~F}, {F, F&F}, {F, F&T}, {F, T&F}, {T, T&T}). Since the set of tautologies is also specifiable by purely syntactic means, the resulting picture gives an improved syntactic account of classical sentential logic. The picture can then be completed by considering related systems that prove adequate to specify the set of contradictions, the set of non-contradictions, and the set of contingencies respectively.
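The division of formulas that this abstract describes can be illustrated semantically: classical sentential logic is decidable, so a brute-force truth-table search separates tautologies from non-tautologies (and contradictions from contingencies). The sketch below is purely illustrative and does not implement the paper's F-based calculus; it uses Python's own Boolean connectives (`and`, `or`, `not`) as the formula syntax, an assumption made here for readability.

```python
import itertools
import re

# Reserved words of the formula syntax; everything else alphabetic is an atom.
KEYWORDS = {"and", "or", "not", "True", "False"}

def atoms(formula):
    """Collect the propositional atoms occurring in a formula string."""
    return sorted(set(re.findall(r"[A-Za-z]+", formula)) - KEYWORDS)

def is_tautology(formula):
    """Check truth under every valuation of the atoms (brute force)."""
    names = atoms(formula)
    for values in itertools.product([True, False], repeat=len(names)):
        env = dict(zip(names, values))
        if not eval(formula, {"__builtins__": {}}, env):
            return False  # falsifying valuation found: a non-tautology
    return True

# 'p or not p' is a tautology; 'p and not p' is a contradiction and
# 'p or q' is contingent, so both count as non-tautologies.
assert is_tautology("p or not p")
assert not is_tautology("p and not p")
assert not is_tautology("p or q")
```

A contradiction checker is the mirror image (the formula is false under every valuation), and a formula is contingent exactly when it is neither a tautology nor a contradiction, matching the four sets the abstract mentions.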
PARC is an "appended numeral" system of natural deduction that I learned as an undergraduate and have taught for many years. Despite its considerable pedagogical strengths, PARC appears never to have been published. The system features explicit "tracking" of premises and assumptions throughout a derivation, the collapsing of indirect proofs into conditional proofs, and a very simple set of quantificational rules without the long list of exceptions that bedevil students learning existential instantiation and universal generalization. The system can be used with any Copi-style set of inference rules, so it is quite adaptable to many mainstream symbolic logic textbooks. Consequently, PARC may be especially attractive to logic teachers who find Jaskowski/Gentzen-style introduction/elimination rules to be far less "natural" than Copi-style rules. The PARC system is also keyboard-friendly in comparison to the widely adopted Jaskowski-style graphical subproof systems of natural deduction, viz., Fitch diagrams and Copi "bent arrow" diagrams.
The overwhelming majority of those who theorize about implicit biases posit that these biases are caused by some sort of association. However, what exactly this claim amounts to is rarely specified. In this paper, I distinguish between different understandings of association, and I argue that the crucial senses of association for elucidating implicit bias are the cognitive structure and mental process senses. A hypothesis is subsequently derived: if associations really underpin implicit biases, then implicit biases should be modulated by counterconditioning or extinction but should not be modulated by rational argumentation or logical interventions. This hypothesis is false; implicit biases are not predicated on any associative structures or associative processes but instead arise because of unconscious propositionally structured beliefs. I conclude by discussing how the case study of implicit bias illuminates problems with popular dual-process models of cognitive architecture.
Consider the following three claims. (i) There are no truths of the form 'p and ~p'. (ii) No one holds a belief of the form 'p and ~p'. (iii) No one holds any pairs of beliefs of the form {p, ~p}. Irad Kimhi has recently argued, in effect, that each of these claims holds and holds with metaphysical necessity. Furthermore, he maintains that they are ultimately not distinct claims at all, but the same claim formulated in different ways. I find his argument suggestive, if not entirely transparent. I do think there is at least an important kernel of truth even in (iii), and that (i) ultimately explains what's right about the other two. Consciousness of an impossibility makes belief in the obtaining of the corresponding state of affairs an impossibility. Interestingly, an appreciation of this fact brings into view a novel conception of inference, according to which it consists in the consciousness of necessity. This essay outlines and defends this position. A central element of the defense is that it reveals how reasoners satisfy what Paul Boghossian calls the Taking Condition and do so without engendering regress.
Inference to the Best Explanation (IBE) advises reasoners to infer exactly one explanation. This uniqueness claim apparently binds us when it comes to "conjunctive explanations," distinct explanations that are nonetheless explanatorily better together than apart. To confront this worry, explanationists qualify their statement of IBE, stipulating that this inference form only adjudicates between competing hypotheses. However, a closer look into the nature of competition reveals problems for this qualified account. Given the most common explication of competition, this qualification artificially and radically constrains IBE's domain of applicability. Using a more subtle, recent explication of competition, this qualification no longer provides a compelling treatment of conjunctive explanations. In light of these results, I suggest a different strategy for accommodating conjunctive explanations. Instead of modifying the form of IBE, I suggest a new way of thinking about the structure of IBE's lot of considered hypotheses.
Anti-exceptionalism about logic takes logic to be, as the name suggests, unexceptional. Rather, in naturalist fashion, the anti-exceptionalist takes logic to be continuous with science, and considers logical theories to be adoptable and revisable accordingly. On the other hand, the Adoption Problem aims to show that there is something special about logic that sets it apart from scientific theories, such that it cannot be adopted in the way the anti-exceptionalist proposes. In this paper I assess the damage the Adoption Problem causes for anti-exceptionalism, and show that it is problematic for exceptionalist positions too. My diagnosis of why the Adoption Problem affects both positions is that the self-governance of basic logical rules of inference prevents them from being adoptable, regardless of whether logic is exceptional or not.
How can we say no less, no more about the conditional than what is needed? From a logical analysis of necessary and sufficient conditions (Section 1), we argue that a stronger account of the conditional can be obtained in two steps: firstly, by recalling its historical roots inside modal logic and set theory (Section 2); secondly, by revising the meaning of logical values, thereby getting rid of the paradoxes of material implication whilst showing the bivalent roots of the conditional as a speech act based on affirmations and rejections (Section 3). Finally, the two main inference rules for the conditional, viz. Modus Ponens and Modus Tollens, are reassessed through a broader definition of logical consequence that encompasses both a normal relation of truth propagation and a weaker relation of falsity non-propagation from premises to conclusion (Section 3).
Defenders of Inference to the Best Explanation claim that explanatory factors should play an important role in empirical inference. They disagree, however, about how exactly to formulate this role. In particular, they disagree about whether to formulate IBE as an inference rule for full beliefs or for degrees of belief, as well as how a rule for degrees of belief should relate to Bayesianism. In this essay I advance a new argument against non-Bayesian versions of IBE. My argument focuses on cases in which we are concerned with multiple levels of explanation of some phenomenon. I show that in many such cases, following IBE as an inference rule for full beliefs leads to deductively inconsistent beliefs, and following IBE as a non-Bayesian updating rule for degrees of belief leads to probabilistically incoherent degrees of belief.
Many philosophers think that games like chess, languages like English, and speech acts like assertion are constituted by rules. Lots of others disagree. To argue over this productively, it would first be useful to know what it would be for these things to be rule-constituted. Searle famously claimed in Speech Acts that rules constitute things in the sense that they make possible the performance of actions related to those things (Searle 1969). On this view, rules constitute games, languages, and speech acts in the sense that they make possible playing them, speaking them and performing them. This raises the question what it is to perform rule-constituted actions (e.g. play, speak, assert) and the question what makes constitutive rules distinctive such that only they make possible the performance of new actions (e.g. playing). In this paper I will criticize Searle's answers to these questions. However, my main aim is to develop a better view, explain how it works in the case of each of games, language, and assertion, and illustrate its appeal by showing how it enables rule-based views of these things to respond to various objections.
At its strongest, Hume's problem of induction denies the existence of any well-justified assumptionless inductive inference rule. At its weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether we are in a situation where the VC theorem can be applied for the purpose we want it to.
I argue that the accounts of inference recently presented (in this journal) by Paul Boghossian, John Broome, and Crispin Wright are unsatisfactory. I proceed in two steps: First, in Sects. 1 and 2, I argue that we should not accept what Boghossian calls the "Taking Condition on inference" as a condition of adequacy for accounts of inference. I present a different condition of adequacy and argue that it is superior to the one offered by Boghossian. More precisely, I point out that there is an analog of Moore's Paradox for inference; and I suggest that explaining this phenomenon is a condition of adequacy for accounts of inference. Boghossian's Taking Condition derives its plausibility from the fact that it apparently explains the analog of Moore's Paradox. Second, in Sect. 3, I show that neither Boghossian's, nor Broome's, nor Wright's account of inference meets my condition of adequacy. I distinguish two kinds of mistake one is likely to make if one does not focus on my condition of adequacy; and I argue that all three—Boghossian, Broome, and Wright—make at least one of these mistakes.
I argue that inference can tolerate forms of self-ignorance and that these cases of inference undermine canonical models of inference on which inferrers have to appreciate (or purport to appreciate) the support provided by the premises for the conclusion. I propose an alternative model of inference that belongs to a family of rational responses in which the subject cannot pinpoint exactly what she is responding to or why, where this kind of self-ignorance does nothing to undermine the intelligence of the response.