This paper proposes substitutional definitions of logical truth and consequence in terms of relative interpretations that are extensionally equivalent to the model-theoretic definitions for any relational first-order language. Our philosophical motivation for considering substitutional definitions is the hope of simplifying the meta-theory of logical consequence. We discuss to what extent our definitions can contribute to that.
When some P implies some Q, this should have some impact on what attitudes we take to P and Q. In other words: logical consequence has a normative import. I use this idea, recently explored by a number of scholars, as a stepping stone to a bolder view: that relations of logical consequence can be identified with norms on our propositional attitudes, or at least that our talk of logical consequence can be explained in terms of such norms. I investigate the prospects of such a cognitive norm account of logical consequence. I go over the challenges involved in finding a plausible bridge principle connecting logical consequence to cognitive norms, in particular a biconditional principle that gives us not only necessary but sufficient conditions for logical consequence in terms of norms on propositional attitudes. Then, on the assumption that an adequate norm can be found, I consider what the philosophical merits of such a cognitive norm account would be, and what theoretical commitments it would generate.
The investigation into logical form and structure of natural sciences and mathematics covers a significant part of contemporary philosophy. In contrast to this, the metatheory of normative theories is a slowly developing research area in spite of its great predecessors, such as Aristotle, who discovered the sui generis character of practical logic, or Hume, who posed the “is-ought” problem. The intrinsic reason for this situation lies in the complex nature of practical logic. The metatheory of normative educational philosophy and theory inherits all the difficulties inherent in the general metatheory but has also significantly contributed to its advancement. In particular, the discussion on its mixed normative-descriptive character and complex composition has remained an important part of research in educational philosophy and theory. The two points seem to be indisputable. First, the content of educational philosophy and theory is a complex one, connecting different disciplines. Second, these disciplines are integrated within the logical form of practical inference or means-end reasoning. On the other hand, the character of consequence relation in this field, although generally recognized as specific, represents an unresolved problem, a solution of which requires a sophisticated logical theory and promises to influence the self-understanding of educational philosophy and theory.
Logical pluralism is the view that there is more than one correct logic. This very general characterization gives rise to a whole family of positions. I argue that not all of them are stable. The main argument in the paper is inspired by considerations known as the “collapse problem”, and it aims at the most popular form of logical pluralism advocated by JC Beall and Greg Restall. I argue that there is a more general argument available that challenges all variants of logical pluralism that meet the following three conditions: that there are at least two correct logical systems characterized in terms of different consequence relations, that there is some sort of rivalry among the correct logics, and that logical consequence is normative. The hypothesis I argue for amounts to a conditional claim: if a position satisfies all these conditions, then that position is unstable in the sense that it collapses into competing positions.
This paper discusses the history of the confusion and controversies over whether the definition of consequence presented in the 11-page 1936 Tarski consequence-definition paper is based on a monistic fixed-universe framework, like Begriffsschrift and Principia Mathematica. Monistic fixed-universe frameworks, common in pre-WWII logic, keep the range of the individual variables fixed as the class of all individuals. The contrary alternative is that the definition is predicated on a pluralistic multiple-universe framework, like the 1931 Gödel incompleteness paper. A pluralistic multiple-universe framework recognizes multiple universes of discourse serving as different ranges of the individual variables in different interpretations, as in post-WWII model theory. In the early 1960s, many logicians held, mistakenly, as we show, the “contrary alternative” that Tarski 1936 had already adopted a Gödel-type, pluralistic, multiple-universe framework. We explain that Tarski had not yet shifted out of the monistic, Frege–Russell, fixed-universe paradigm. We further argue that between his Principia-influenced pre-WWII Warsaw period and his model-theoretic post-WWII Berkeley period, Tarski's philosophy underwent many other radical changes.
This interesting and imaginative monograph is based on the author’s PhD dissertation supervised by Saul Kripke. It is dedicated to Timothy Smiley, whose interpretation of PRIOR ANALYTICS informs its approach. As suggested by its title, this short work demonstrates conclusively that Aristotle’s syllogistic is a suitable vehicle for fruitful discussion of contemporary issues in logical theory. Aristotle’s syllogistic is represented by Corcoran’s 1972 reconstruction. The review studies Lear’s treatment of Aristotle’s logic, his appreciation of the Corcoran-Smiley paradigm, and his understanding of modern logical theory. In the process Corcoran and Scanlan present new, previously unpublished results. Corcoran regards this review as an important contribution to contemporary study of PRIOR ANALYTICS: both the book and the review deserve to be better known.
ABSTRACT: This 1974 paper builds on our 1969 paper (Corcoran-Weaver [2]). Here we present three (modal, sentential) logics which may be thought of as partial systematizations of the semantic and deductive properties of a sentence operator which expresses certain kinds of necessity. The logical truths [sc. tautologies] of these three logics coincide with one another and with those of standard formalizations of Lewis's S5. These logics, when regarded as logistic systems (cf. Corcoran [1], p. 154), are seen to be equivalent; but, when regarded as consequence systems (ibid., p. 157), one diverges from the others in a fashion which suggests that two standard measures of semantic complexity may not be as closely linked as previously thought. This 1974 paper uses the linear notation for natural deduction presented in [2]: each two-dimensional deduction is represented by a unique one-dimensional string of characters, thus obviating the need for two-dimensional trees, tableaux, lists, and the like, and thereby facilitating electronic communication of natural deductions. The 1969 paper presents a (modal, sentential) logic which may be thought of as a partial systematization of the semantic and deductive properties of a sentence operator which expresses certain kinds of necessity. The logical truths [sc. tautologies] of this logic coincide with those of standard formalizations of Lewis’s S4. Among the paper's innovations is its treatment of modal logic in the setting of natural deduction systems, as opposed to axiomatic systems. The authors apologize for the now obsolete terminology. For example, these papers speak of “a proof of a sentence from a set of premises” where today “a deduction of a sentence from a set of premises” would be preferable. 1. Corcoran, John. 1969. Three Logical Theories, Philosophy of Science 36, 153–77. 2. Corcoran, John and George Weaver. 1969.
Logical Consequence in Modal Logic: Natural Deduction in S5, Notre Dame Journal of Formal Logic 10, 370–84. MR0249278 (40 #2524). 3. Weaver, George and John Corcoran. 1974. Logical Consequence in Modal Logic: Some Semantic Systems for S4, Notre Dame Journal of Formal Logic 15, 370–78. MR0351765 (50 #4253).
An interesting question is whether deflationism about truth (and falsity) extends to related properties and relations on truthbearers. Lionel Shapiro (2011) answers affirmatively by arguing that a certain deflationism about truth is as plausible as an analogous version of deflationism about logical consequence. I argue that the argument fails on two counts. First, it trivializes to any relation between truthbearers, including substantive ones; in other words, his argument can be used to establish that deflationism about truth is as plausible as deflationism about an arbitrary sentential relation. Second, the alleged analogy between the arguments for deflationism about truth and deflationism about consequence fails. Along the way I consider what implications the failure of the equiplausibility thesis has for deflationism about falsity.
Gómez-Torrente’s papers have made important contributions to vindicating Tarski’s model-theoretic account of the logical properties in the face of Etchemendy’s criticisms. However, at some points his vindication depends on interpreting the Tarskian account as purportedly modally deflationary, i.e., as not intended to capture the intuitive modal element in the logical properties: that logical consequence is (epistemic or alethic) necessary truth-preservation. Here it is argued that the views expressed in Tarski’s seminal work do not support this modally deflationary interpretation, even if Tarski himself was sceptical about modalities.
This paper discusses critically what simulation models of the evolution of cooperation can possibly prove by examining Axelrod’s “Evolution of Cooperation” and the modeling tradition it has inspired. Hardly any of the many simulation models of the evolution of cooperation in this tradition have been applicable empirically. Axelrod’s role model suggested a research design that seemingly allowed one to draw general conclusions from simulation models even if the mechanisms that drive the simulation could not be identified empirically. But this research design was fundamentally flawed, because it is not possible to draw general empirical conclusions from theoretical simulations. At best such simulations can claim to prove logical possibilities, i.e. they prove that certain phenomena are possible as the consequence of the modeling assumptions built into the simulation, but not that they are possible or can be expected to occur in reality. I suggest several requirements under which proofs of logical possibilities can nevertheless be considered useful. Sadly, most Axelrod-style simulations do not meet these requirements. I contrast this with Schelling’s neighborhood segregation model, the core mechanism of which can be retraced empirically.
There is a profound, but frequently ignored, relationship between the classical conception of logical consequence and the material implication. The former repeats the patterns of the latter, but with a wider modal reach. This relationship suggests that there should also be a connection between the notion of logical consequence and the conditional connective of any given logical system. This implies, among other things, that it is incoherent to propose alternatives to the material implication while maintaining the classical conception of logical consequence. The other important implication is that we need to posit different conceptions of logical consequence that are consistent with different theories of conditionals in order to evaluate their relative merits from new and unexplored angles. As a pilot study of this research program, we evaluate two new notions of logical consequence motivated by conditional-assertion theory and possible world theories. Those alternatives are compared unfavourably with the classical conception of logical consequence.
In this work, we propose a definition of logical consequence based on the relation between the quantity of information present in a particular set of formulae and a particular formula. As a starting point, we use Shannon’s quantitative notion of information, founded on the concepts of logarithmic function and probability value. We first consider some of the basic elements of an axiomatic probability theory, and then construct a probabilistic semantics for languages of classical propositional logic. We define the quantity of information for the formulae of these languages and introduce the concept of informational logical consequence, identifying some important results, among them: certain arguments that have traditionally been considered valid, such as modus ponens, are not valid from the informational perspective; the logic underlying informational logical consequence is not classical, and is at the least paraconsistent sensu lato; informational logical consequence is not a Tarskian logical consequence.
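The Shannon-style starting point described in this abstract can be sketched in a few lines. The sketch below assumes a uniform probability distribution over the valuations of two atoms; the paper's own axiomatic probability theory and its definition of informational consequence are not reproduced here, only the measure of information for formulae:

```python
import math
from itertools import product

# All valuations of two atoms p, q (uniform distribution -- an
# illustrative assumption, not the paper's general semantics).
VALUATIONS = list(product([True, False], repeat=2))

def prob(formula):
    """Probability of a formula: the fraction of valuations satisfying it."""
    return sum(1 for v in VALUATIONS if formula(*v)) / len(VALUATIONS)

def info(formula):
    """Shannon-style quantity of information: -log2 of the probability."""
    return -math.log2(prob(formula))

atom_p = lambda p, q: p
atom_q = lambda p, q: q
p_implies_q = lambda p, q: (not p) or q            # material conditional
both_premises = lambda p, q: p and ((not p) or q)  # p together with p -> q

assert info(atom_p) == 1.0                    # P = 1/2, so 1 bit
assert round(info(p_implies_q), 3) == 0.415   # P = 3/4
assert info(both_premises) == 2.0             # jointly equivalent to p & q
assert info(atom_q) == 1.0
```

The asserted values illustrate how information behaves over the modus ponens premises and conclusion under this toy semantics; how the paper weighs such quantities to define informational consequence is left to the paper itself.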
This paper discusses critically what simulation models of the evolution of cooperation can possibly prove by examining Axelrod’s “Evolution of Cooperation” (1984) and the modeling tradition it has inspired. Hardly any of the many simulation models in this tradition have been applicable empirically. Axelrod’s role model suggested a research design that seemingly allowed one to draw general conclusions from simulation models even if the mechanisms that drive the simulation could not be identified empirically. But this research design was fundamentally flawed. At best such simulations can claim to prove logical possibilities, i.e. they prove that certain phenomena are possible as the consequence of the modeling assumptions built into the simulation, but not that they are possible or can be expected to occur in reality. I suggest several requirements under which proofs of logical possibilities can nevertheless be considered useful. Sadly, most Axelrod-style simulations do not meet these requirements. It would be better not to use this kind of simulation at all.
The problem analysed in this paper is whether we can gain knowledge by using valid inferences, and how we can explain this process from a model-theoretic perspective. According to the paradox of inference (Cohen & Nagel 1936/1998, 173), it is logically impossible for an inference to be both valid and its conclusion to possess novelty with respect to the premises. I argue in this paper that valid inference has an epistemic significance, i.e., it can be used by an agent to enlarge his knowledge, and this significance can be accounted for in model-theoretic terms. I will argue first that the paradox is based on an equivocation, namely, it arises because logical containment, i.e., logical implication, is identified with epistemological containment, i.e., the knowledge of the premises entails the knowledge of the conclusion. Second, I will argue that a truth-conditional theory of meaning has the necessary resources to explain the epistemic significance of valid inferences. I will explain this epistemic significance starting from Carnap’s semantic theory of meaning and Tarski’s notion of satisfaction. In this way I will counter Prawitz’s (2012b) claim that a truth-conditional theory of meaning is not able to account for the legitimacy of valid inferences, i.e., their epistemic significance.
The starting point of this paper concerns the apparent difference between what we might call absolute truth and truth in a model, following Donald Davidson. The notion of absolute truth is the one familiar from Tarski’s T-schema: ‘Snow is white’ is true if and only if snow is white. Instead of being a property of sentences as absolute truth appears to be, truth in a model, that is, relative truth, is evaluated in terms of the relation between sentences and models. I wish to examine the apparent dual nature of logical truth (without dwelling on Davidson), and suggest that we are dealing with a distinction between a metaphysical and a linguistic interpretation of truth. I take my cue from John Etchemendy, who suggests that absolute truth could be considered as being equivalent to truth in the ‘right model’, i.e., the model that corresponds with the world. However, the notion of ‘model’ is not entirely appropriate here as it is closely associated with relative truth. Instead, I propose that the metaphysical interpretation of truth may be illustrated in modal terms, by metaphysical modality in particular. One of the tasks that I will undertake in this paper is to develop this modal interpretation, partly building on my previous work on the metaphysical interpretation of the law of non-contradiction (Tahko 2009). After an explication of the metaphysical interpretation of logical truth, a brief study of how this interpretation connects with some recent important themes in philosophical logic follows. In particular, I discuss logical pluralism and propose an understanding of pluralism from the point of view of the metaphysical interpretation.
Putnam, Hilary. Philosophy of logic. Harper Essays in Philosophy. Harper Torchbooks, No. TB 1544. Harper & Row, Publishers, New York-London, 1971. v+76 pp. The author of this book has made highly regarded contributions to mathematics, to philosophy of logic and to philosophy of science, and in this book he brings his ideas in these three areas to bear on the traditional philosophic problem of materialism versus (objective) idealism. The book assumes that contemporary science (mathematical and physical) is largely correct as far as it goes, or at least that it is rational to believe in it. The main thesis of the book is that consistent acceptance of contemporary science requires the acceptance of some sort of Platonistic idealism affirming the existence of abstract, non-temporal, non-material, non-mental entities (numbers, scientific laws, mathematical formulas, etc.). The author is thus in direct opposition to the extreme materialism which had dominated philosophy of science in the first three quarters of this century. The book can be especially recommended to the scientifically literate, general reader whose acquaintance with these areas is limited to the earlier literature, from the period when it had been assumed that empiricistic materialism was the only philosophy compatible with a scientific outlook. To this group the book presents an eye-opening challenge fulfilling the author’s intention of “shaking up preconceptions and stimulating further discussion”.
Work on the nature and scope of formal logic has focused unduly on the distinction between logical and extra-logical vocabulary; which argument forms a logical theory countenances depends not only on its stock of logical terms, but also on its range of grammatical categories and modes of composition. Furthermore, there is a sense in which logical terms are unnecessary. Alexandra Zinke has recently pointed out that propositional logic can be done without logical terms. By defining a logical-term-free language with the full expressive power of first-order logic with identity, I show that this is true of logic more generally. Furthermore, having, in a logical theory, non-trivial valid forms that do not involve logical terms is not merely a technical possibility. As the case of adverbs shows, issues about the range of argument forms logic should countenance can quite naturally arise in such a way that they do not turn on whether we countenance certain terms as logical.
ABSTRACT Clinicians, researchers and the informed public have come to view addiction as a brain disease. However, in nature even extreme events often reflect normal processes, for instance the principles of plate tectonics explain earthquakes as well as the gradual changes in the face of the earth. In the same way, excessive drug use is predicted by general principles of choice. One of the implications of this result is that drugs do not turn addicts into compulsive drug users; they retain the capacity to say ‘no’. In support of the logical implications of the choice theory approach to addiction, research reveals that most addicts quit using drugs by about age 30, that most quit without professional help, that the correlates of quitting are the correlates of decision making, and, according to the most recent epidemiological evidence, the probability of quitting remains constant over time and independent of the onset of dependence. This last result implies that, after an initial period of heavy drug use, remission is independent of any further exposure to drugs. In short, there is much empirical support for the claim that addiction emerges as a function of the rules of everyday choice.
Logical realism is a view about the metaphysical status of logic. Common to most if not all the views captured by the label ‘logical realism’ is that logical facts are mind- and language-independent. But that does not tell us anything about the nature of logical facts or about our epistemic access to them. The goal of this paper is to outline and systematize the different ways that logical realism could be entertained and to examine some of the challenges that these views face. It will be suggested that logical realism is best understood as a metaphysical view about the logical structure of the world, but this raises an important question: does logical realism collapse into standard metaphysical realism? It will be argued that this result can be accommodated, even if it cannot be altogether avoided.
According to a theorem recently proved in the theory of logical aggregation, any nonconstant social judgment function that satisfies independence of irrelevant alternatives (IIA) is dictatorial. We show that the strong and not very plausible IIA condition can be replaced with a minimal independence assumption plus a Pareto-like condition. This new version of the impossibility theorem likens it to Arrow’s and arguably enhances its paradoxical value.
The theory of imperatives is philosophically relevant since, in building it, some of the long-standing problems need to be addressed, and presumably some new ones are waiting to be discovered. The relevance of the theory of imperatives for philosophical research is remarkable, but usually recognized only within the field of practical philosophy. Nevertheless, the emphasis can be put on problems of theoretical philosophy. Proper understanding of imperatives is likely to raise doubts about some of our deeply entrenched and tacit presumptions. In philosophy of language it is the presumption that declaratives provide the paradigm for sentence form; in philosophy of science it is the belief that theory construction is independent from the language practice; in logic it is the conviction that logical meaning relations are constituted out of logical terminology; in ontology it is the view that language use is free from ontological commitments. The list is not exhaustive; it includes only those presumptions that this paper concerns.
The identity theory’s rise to prominence in analytic philosophy of mind during the late 1950s and early 1960s is widely seen as a watershed in the development of physicalism, in the sense that whereas logical behaviourism proposed analytic and a priori ascertainable identities between the meanings of mental and physical-behavioural concepts, the identity theory proposed synthetic and a posteriori knowable identities between mental and physical properties. While this watershed does exist, the standard account of it is misleading, as it is founded in erroneous intensional misreadings of the logical positivists’—especially Carnap’s—extensional notions of translation and meaning, as well as misinterpretations of the positivists’ shift from the strong thesis of translation-physicalism to the weaker and more liberal notion of reduction-physicalism that occurred in the Unity of Science programme. After setting the historical record straight, the essay traces the first truly modern identity theory to Schlick’s pre-positivist views circa 1920 and goes on to explore its further development in Feigl, arguing that the fundamental difference between the Schlick-Feigl identity theory and the more familiar and influential Place-Smart-Armstrong identity theory has resurfaced in the deep and seemingly unbridgeable gulf in contemporary philosophy of consciousness between inflationary mentalism and deflationary physicalism.
The main goal of this paper is to investigate what explanatory resources Robert Brandom’s distinction between acknowledged and consequential commitments affords in relation to the problem of logical omniscience. With this distinction the importance of the doxastic perspective under consideration for the relationship between logic and norms of reasoning is emphasized, and it becomes possible to handle a number of problematic cases discussed in the literature without thereby incurring a commitment to revisionism about logic. One such case in particular is the preface paradox, which will receive an extensive treatment. As we shall see, the problem of logical omniscience not only arises within theories based on deductive logic, but also within the recent paradigm shift in psychology of reasoning. So dealing with this problem is important not only for philosophical purposes but also from a psychological perspective.
Substructural logics and their application to logical and semantic paradoxes have been extensively studied, but non-reflexive systems have been somewhat neglected. Here, we aim to (at least partly) fill this lacuna, by presenting a non-reflexive logic and theory of naïve consequence (and truth). We also investigate the semantics and the proof-theory of the system. Finally, we develop a compositional theory of truth (and consequence) in our non-reflexive framework.
According to the truth-functional analysis of conditions, to be ‘necessary for’ and ‘sufficient for’ are converse relations. From this, it follows that to be ‘necessary and sufficient for’ is a symmetric relation, that is, that if P is a necessary and sufficient condition for Q, then Q is a necessary and sufficient condition for P. This view is contrary to common sense. In this paper, I point out that it is also contrary to a widely accepted ontological view of conditions, according to which if P is a necessary and sufficient condition for Q, then Q is in no sense a condition for P; it is a mere consequence of P.
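The symmetry claim that this abstract attributes to the truth-functional analysis can be verified mechanically. A minimal sketch, treating sufficiency as the material conditional and necessity as its converse:

```python
from itertools import product

def implies(a, b):
    # Material conditional: false only when a is true and b is false.
    return (not a) or b

def nec_and_suff(p, q):
    """P is necessary and sufficient for Q, truth-functionally:
    sufficiency is P -> Q; necessity is the converse, Q -> P."""
    return implies(p, q) and implies(q, p)

# Under this analysis the relation is just the biconditional, hence
# symmetric: swapping the relata never changes the truth value.
assert all(nec_and_suff(p, q) == nec_and_suff(q, p)
           for p, q in product([True, False], repeat=2))
```

This only exhibits the truth-functional result the paper starts from; the paper's point is that the resulting symmetry clashes with the common-sense and ontological views of conditions.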
I discuss Greg Restall’s attempt to generate an account of logical consequence from the incoherence of certain packages of assertions and denials. I take up his justification of the cut rule and argue that, in order to avoid counterexamples to cut, he needs, at least, to introduce a notion of logical form. I then suggest a few problems that will arise for his account if a notion of logical form is assumed. I close by sketching what I take to be the most natural minimal way of distinguishing content and form and suggest further problems arising for this route.
This special issue collects together nine new essays on logical consequence: the relation obtaining between the premises and the conclusion of a logically valid argument. The present paper is a partial, and opinionated, introduction to the contemporary debate on the topic. We focus on two influential accounts of consequence, the model-theoretic and the proof-theoretic, and on the seeming platitude that valid arguments necessarily preserve truth. We briefly discuss the main objections these accounts face, as well as Hartry Field’s contention that such objections show consequence to be a primitive, indefinable notion, and that we must reject the claim that valid arguments necessarily preserve truth. We suggest that the accounts in question have the resources to meet the objections standardly thought to herald their demise and make two main claims: (i) that consequence, as opposed to logical consequence, is the epistemologically significant relation philosophers should be mainly interested in; and (ii) that consequence is a paradoxical notion if truth is.
This study concerns logical systems considered as theories. By searching for the problems which the traditionally given systems may reasonably be intended to solve, we clarify the rationales for the adequacy criteria commonly applied to logical systems. From this point of view there appear to be three basic types of logical systems: those concerned with logical truth; those concerned with logical truth and with logical consequence; and those concerned with deduction per se as well as with logical truth and logical consequence. Adequacy criteria for systems of the first two types include: effectiveness, soundness, completeness, Post completeness, "strong soundness" and strong completeness. Consideration of a logical system as a theory of deduction leads us to attempt to formulate two adequacy criteria for systems of proofs. The first deals with the concept of rigor or "gaplessness" in proofs. The second is a completeness condition for a system of proofs. An historical note at the end of the paper suggests a remarkable parallel between the above hierarchy of systems and the actual historical development of this area of logic.
Philosophy of biology is often said to have emerged in the last third of the twentieth century. Prior to this time, it has been alleged that the only authors who engaged philosophically with the life sciences were either logical empiricists who sought to impose the explanatory ideals of the physical sciences onto biology, or vitalists who invoked mystical agencies in an attempt to ward off the threat of physicochemical reduction. These schools paid little attention to actual biological science, and as a result philosophy of biology languished in a state of futility for much of the twentieth century. The situation, we are told, only began to change in the late 1960s and early 1970s, when a new generation of researchers began to focus on problems internal to biology, leading to the consolidation of the discipline. In this paper we challenge this widely accepted narrative of the history of philosophy of biology. We do so by arguing that the most important tradition within early twentieth-century philosophy of biology was neither logical empiricism nor vitalism, but the organicist movement that flourished between the First and Second World Wars. We show that the organicist corpus is thematically and methodologically continuous with the contemporary literature in order to discredit the view that early work in the philosophy of biology was unproductive, and we emphasize the desirability of integrating the historical and contemporary conversations into a single, unified discourse.
A truth-preservation fallacy is using the concept of truth-preservation where some other concept is needed. For example, in certain contexts saying that consequences can be deduced from premises using truth-preserving deduction rules is a fallacy if it suggests that all truth-preserving rules are consequence-preserving. The arithmetic additive-associativity rule that yields 6 = (3 + (2 + 1)) from 6 = ((3 + 2) + 1) is truth-preserving but not consequence-preserving. As noted in James Gasser’s dissertation, Leibniz has been criticized for using that rule in attempting to show that arithmetic equations are consequences of definitions. A system of deductions is truth-preserving if each of its deductions having true premises has a true conclusion—and consequence-preserving if, for any given set of sentences, each deduction having premises that are consequences of that set has a conclusion that is a consequence of that set. Consequence-preserving amounts to: in each of its deductions the conclusion is a consequence of the premises. The same definitions apply to deduction rules considered as systems of deductions. Every consequence-preserving system is truth-preserving. It is not as well-known that the converse fails: not every truth-preserving system is consequence-preserving. Likewise for rules: not every truth-preserving rule is consequence-preserving. There are many famous examples. In ordinary first-order Peano-Arithmetic, the induction rule yields the conclusion ‘every number x is such that: x is zero or x is a successor’—which is not a consequence of the null set—from two tautological premises, which are consequences of the null set, of course. The arithmetic induction rule is truth-preserving but not consequence-preserving. Truth-preserving rules that are not consequence-preserving are non-logical or extra-logical rules.
Such rules are unacceptable to persons espousing traditional truth-and-consequence conceptions of demonstration: a demonstration shows its conclusion is true by showing that its conclusion is a consequence of premises already known to be true. The 1965 Preface in Benson Mates (1972, vii) contains the first occurrence of truth-preservation fallacies in the book.
The main task in this paper is to detail and investigate Carnap’s conception of a “linguistic framework” (LF). On this basis, we will see whether Carnap’s dichotomies, such as the analytic-synthetic distinction, are to be construed as absolute/fundamental dichotomies or merely as relative dichotomies. I argue for a novel interpretation of Carnap’s conception of an LF and, on that basis, will show that, according to Carnap, all the dichotomies to be discussed are relative dichotomies; they depend on conventional decisions concerning the logical syntax of the LF. Thus, all of the dichotomies directly hinge on the conception of the LF. The LF’s logical structure, in turn, is an immediate consequence of adopting the linguistic doctrine of logical truths. As we will see, no appeal to any of these distinctions is necessary in establishing an LF and all of its components. I will also draw attention to the differences between what Carnap labels a “way of speaking”, “language”, and “artificial language”. Consequently, I will briefly conclude that none of Quine’s major objections address the main points of Carnap’s theory.
This article discusses the logical form of action sentences with particular attention to the role of adverbial modification, reviewing and extending the event analysis of action sentences.
The “sign of consequence” is a notation for propositional logic that Peirce invented in 1886 and used at least until 1894. It replaced the “copula of inclusion”, which he had been using since 1870.
At first sight, Husserl’s writings seem not to have had any influence on linguistic research, nor does what the German philosopher wrote about language seem to be worth a place in the history of linguistics. The purpose of the paper is precisely to contest this view, by reassessing both the position and the role of Husserl’s early masterpiece — the Logical Investigations — within the history of linguistics. To this end, I will focus mainly on the third (On the theory of wholes and parts) and fourth (The distinction between independent and non-independent meanings) Investigations, paying special attention to Husserl’s mereology and to the idea of a general pure grammar. The paper tries to situate the third and fourth Logical Investigations within the general context of late nineteenth-century and early twentieth-century linguistics and furthermore attempts to show the historical and theoretical importance of the Logical Investigations for the birth and the development of one of the most important linguistic “schools” of the twentieth century, namely structural linguistics.
The main objective of the paper is to provide a conceptual apparatus for a general logical theory of language communication. The aim of the paper is to outline a formal-logical theory of language in which the concepts of the phenomenon of language communication and of language communication in general are defined and some conditions for their adequacy are formulated. The theory explicates the key notions of contemporary syntax, semantics, and pragmatics. The theory is formalized on two levels: token-level and type-level. As such, it takes into account the dual – token and type – ontological character of linguistic entities. The basic notions of the theory – language communication, meaning, and interpretation – are introduced on the second, type-level of formalization, and their definitions require the prior formalization of some of the notions introduced on the first, token-level, among others the notion of an act of communication. Owing to the theory, it is possible to address the problems of adequacy both of empirical acts of communication and of language communication in general. All the conditions of adequacy of communication discussed in the presented paper are valid for one-way communication (sender-recipient); nevertheless, they can also apply to the reverse direction of language communication (recipient-sender). Therefore, they concern the problem of two-way understanding in language communication.
Monists say that the nature of truth is invariant, whichever sentence you consider; pluralists say that the nature of truth varies between different sets of sentences. The orthodoxy is that logic and logical form favour monism: there must be a single property that is preserved in any valid inference; and any truth-functional complex must be true in the same way as its components. The orthodoxy, I argue, is mistaken. Logic and logical form impose only structural constraints on a metaphysics of truth. Monistic theories are not guaranteed to satisfy these constraints, and there is a pluralistic theory that does so.
In this paper, we argue that a distinction ought to be drawn between two ways in which a given world might be logically impossible. First, a world w might be impossible because the laws that hold at w are different from those that hold at some other world (say the actual world). Second, a world w might be impossible because the laws of logic that hold in some world (say the actual world) are violated at w. We develop a novel way of modelling logical possibility that makes room for both kinds of logical impossibility. Doing so has interesting implications for the relationship between logical possibility and other kinds of possibility (for example, metaphysical possibility) and implications for the necessity or contingency of the laws of logic.
The traditional possible-worlds model of belief describes agents as ‘logically omniscient’ in the sense that they believe all logical consequences of what they believe, including all logical truths. This is widely considered a problem if we want to reason about the epistemic lives of non-ideal agents who—much like ordinary human beings—are logically competent, but not logically omniscient. A popular strategy for avoiding logical omniscience centers around the use of impossible worlds: worlds that, in one way or another, violate the laws of logic. In this paper, we argue that existing impossible-worlds models of belief fail to describe agents who are both logically non-omniscient and logically competent. To model such agents, we argue, we need to ‘dynamize’ the impossible-worlds framework in a way that allows us to capture not only what agents believe, but also what they are able to infer from what they believe. In light of this diagnosis, we go on to develop the formal details of a dynamic impossible-worlds framework, and show that it successfully models agents who are both logically non-omniscient and logically competent.
This paper claims that there is no such thing as the correct answer to the question of what logical form is: two significantly different notions of logical form are needed to fulfil two major theoretical roles that pertain respectively to logic and semantics. The first part of the paper outlines the thesis that a unique notion of logical form fulfils both roles, and argues that the alleged best candidate for making it true is unsuited for one of the two roles. The second part spells out a considerably different notion which is free from that problem, although it does not fit the other role. As will be suggested, each of the two notions suits at most one role, so the uniqueness thesis is ungrounded.
Information-theoretic approaches to formal logic analyse the "common intuitive" concept of propositional implication (or argumental validity) in terms of information content of propositions and sets of propositions: one given proposition implies a second if the former contains all of the information contained by the latter; an argument is valid if the conclusion contains no information beyond that of the premise-set. This paper locates information-theoretic approaches historically, philosophically and pragmatically. Advantages and disadvantages are identified by examining such approaches in themselves and by contrasting them with standard transformation-theoretic approaches. Transformation-theoretic approaches analyse validity (and thus implication) in terms of transformations that map one argument onto another: a given argument is valid if no transformation carries it onto an argument with all true premises and false conclusion. Model-theoretic, set-theoretic, and substitution-theoretic approaches, which dominate current literature, can be construed as transformation-theoretic, as can the so-called possible-worlds approaches. Ontic and epistemic presuppositions of both types of approaches are considered. Attention is given to the question of whether our historically cumulative experience applying logic is better explained from a purely information-theoretic perspective or from a purely transformation-theoretic perspective or whether apparent conflicts between the two types of approaches need to be reconciled in order to forge a new type of approach that recognizes their basic complementarity.
This paper reconstructs the American reception of logical positivism in the early 1930s. I argue that Moritz Schlick (who had visiting positions at Stanford and Berkeley between 1929 and 1932) and Herbert Feigl (who visited Harvard in the 1930-31 academic year) played a crucial role in promoting the *Wissenschaftliche Weltauffassung*, years before members of the Vienna Circle, the Berlin Group, and the Lvov-Warsaw school would seek refuge in the United States. Building on archive material from the Wiener Kreis Archiv, the Harvard University Archives, and the Herbert Feigl Papers, as well as a large number of publications in American philosophy journals from the early 1930s, I reconstruct the subtle transformation of the American philosophical landscape in the years immediately preceding the European exodus. I argue that (1) American philosophical discussions about meaning and significance and (2) internal dynamics in the Vienna Circle between 1929 and 1931 significantly impacted the way in which US philosophers came to perceive logical positivism.
The received view has it that Hans Reichenbach and his friends in the Berlin Group worked closely with the more prominent Vienna Circle. In the wake of this view, Reichenbach was often treated as a logical positivist – despite the fact that he decisively opposed it. In this chapter we follow another thread. We shall show the “third man” – besides Reichenbach and Walter Dubislav – of the Berlin Group, Kurt Grelling, as a man who could grasp the academic trends of the time faster than anybody else, who was better informed about logic and philosophy of nature than his two prominent colleagues and thus could better delineate, although tentatively, central threads of research of the Berlin Group. Grelling did this on several occasions, but most ostensibly in the last years of his life when he was focused on problems of formal ontology. On the basis of this analysis, we shall see that in the early 1920s, Reichenbach too was led by a project in ontology of science that he elaborated together with the psychologist Kurt Lewin. Moreover, Reichenbach’s later philosophy of nature was also shaped by this project. We present this direction of philosophy of science as a “road less travelled” which, however, if revived, can point to a new direction that will more closely connect philosophy and science.
2nd edition. The theory of logical consequence is central in modern logic and its applications. However, it is mostly dispersed across an abundance of papers that are often difficult to access, and it is rarely treated with applications in mind. This book collects the most fundamental aspects of this theory and offers the reader the basics of its applications in computer science, artificial intelligence, and cognitive science, to name but the most important fields where this notion finds its many applications.
It is a received view that Kant’s formal logic (or what he calls “pure general logic”) is thoroughly intensional. On this view, even the notion of logical extension must be understood solely in terms of the concepts that are subordinate to a given concept. I grant that the subordination relation among concepts is an important theme in Kant’s logical doctrine of concepts. But I argue that it is both possible and important to ascribe to Kant an objectual notion of logical extension according to which the extension of a concept is the multitude of objects falling under it. I begin by defending this ascription in response to three reasons that are commonly invoked against it. First, I explain that this ascription is compatible with Kant’s philosophical reflections on the nature and boundary of a formal logic. Second, I show that the objectual notion of extension I ascribe to Kant can be traced back to many of the early modern works of logic with which he was more or less familiar. Third, I argue that such a notion of extension makes perfect sense of a pivotal principle in Kant’s logic, namely the principle that the quantity of a concept’s extension is inversely proportional to that of its intension. In the process, I tease out two important features of the Kantian objectual notion of logical extension in terms of which it markedly differs from the modern one. First, on the modern notion the extension of a concept is the sum of the objects actually falling under it; on the Kantian notion, by contrast, the extension of a concept consists of the multitude of possible objects—not in the metaphysical sense of possibility, though—to which a concept applies in virtue of being a general representation. While the quantity of the former extension is finite, that of the latter is infinite—as is reflected in Kant’s use of a plane-geometrical figure (e.g., circle, square), which is a continuum as opposed to a discretum, to represent the extension in question.
Second, on the modern notion of extension, a concept that signifies exactly one object has a one-member extension; on the Kantian notion, however, such a concept has no extension at all—for a concept is taken to have extension only if it signifies a multitude of things. This feature of logical extension is manifested in Kant’s claim that a singular concept (or a concept in its singular use) can, for lack of extension, be figuratively represented only by a point—as opposed to an extended figure like circle, which is reserved for a general concept (or a concept in its general use). Precisely on account of these two features, the Kantian objectual extension proves vital to Kant’s theory of logical quantification (in universal, particular and singular judgments, respectively) and to his view regarding the formal truth of analytic judgments.
The suggestion of Logical Quanta (LQ) is a bidirectional synthesis of the theory of logos of Maximus the Confessor and the philosophical interpretation of quantum mechanics. The result of such a synthesis is an enrichment of the ontology of classical mechanics that enables us to have a unified view and an explanatory frame of the whole cosmos. It also enables us to overcome the Cartesian duality both in biology and in the interaction of body and mind. Finally, one can reconstruct a new understanding of religion.
Husserl introduces a phenomenological concept called “motivation” early in the First Investigation of his magnum opus, the Logical Investigations. The importance of this concept has been overlooked since Husserl passes over it rather quickly on his way to an analysis of the meaningful nature of expression. I argue, however, that motivation is essential to Husserl’s overall project, even if it is not essential for defining expression in the First Investigation. For Husserl, motivation is a relation between mental acts whereby the content of one act makes some further meaningful content probable. I explicate the nature of this relation in terms of “evidentiary weight” and differentiate it from Husserl’s notion of Evidenz, often translated as “self-evidence”. I elucidate the importance of motivation in Husserl’s overall phenomenological project by focusing on his analyses of thing-perception and empathy. Through these examples, we can better understand the continuity between the Logical Investigations and Husserl’s later work.