This paper discusses proof-theoretic semantics, the project of specifying the meanings of the logical constants in terms of rules of inference governing them. I concentrate on Michael Dummett’s and Dag Prawitz’ philosophical motivations and give precise characterisations of the crucial notions of harmony and stability, placed in the context of proving normalisation results in systems of natural deduction. I point out a problem for defining the meaning of negation in this framework and prospects for an account of the meanings of modal operators in terms of rules of inference.
This paper considers proof-theoretic semantics for necessity within Dummett's and Prawitz's framework. Inspired by a system of Pfenning's and Davies's, the language of intuitionist logic is extended by a higher-order operator which captures a notion of validity. A notion of relative necessity is defined in terms of it, which expresses a necessary connection between the assumptions and the conclusion of a deduction.
The paper briefly surveys the sentential proof-theoretic semantics for a fragment of English. Then, appealing to a version of Frege’s context-principle (specified to fit type-logical grammar), a method is presented for deriving proof-theoretic meanings for sub-sentential phrases, down to lexical units (words). The sentential meaning is decomposed according to the function-argument structure as determined by the type-logical grammar. In doing so, the paper presents a novel proof-theoretic interpretation of simple types, replacing Montague’s model-theoretic type interpretation (in arbitrary Henkin models). The domains of derivations are collections of derivations in the associated “dedicated” natural-deduction proof-system, and functions therein (with no appeal to models, truth-values and elements of a domain). The compositionality of the semantics is analyzed.
Dummett’s justification procedures are revisited. They are used as background for the discussion of some conceptual and technical issues in proof-theoretic semantics, especially the role played by assumptions in proof-theoretic definitions of validity.
This paper deals with a collection of concerns that, over a period of time, led the author away from the Routley–Meyer semantics, and towards proof-theoretic approaches to relevant logics, and indeed to the weak relevant logic MC of meaning containment.
In the proof-theoretic semantics approach to meaning, harmony, requiring a balance between introduction-rules (I-rules) and elimination-rules (E-rules) within a meaning-conferring natural-deduction proof-system, is a central notion. In this paper, we consider two notions of harmony that were proposed in the literature: 1. GE-harmony, requiring a certain form of the E-rules, given the form of the I-rules. 2. Local intrinsic harmony, requiring the existence of certain transformations of derivations, known as reduction and expansion. We propose a construction of the E-rules (in GE-form) from given I-rules, and prove that the constructed rules also satisfy local intrinsic harmony. The construction is based on a classification of I-rules, and constitutes an implementation of Gentzen’s (and Prawitz’s) remark that E-rules can be “read off” I-rules.
I argue for a kind of logical pluralism on the basis of a difficulty with defining the meaning of negation in the framework of Dummett's and Prawitz' proof-theoretic semantics.
We examine the proof-theoretic verificationist justification procedure proposed by Dummett. After some scrutiny, two distinct interpretations with respect to bases are advanced: the independent and the dependent interpretation. We argue that both are unacceptable as a semantics for propositional intuitionistic logic.
We present epistemic multilateral logic, a general logical framework for reasoning involving epistemic modality. Standard bilateral systems use propositional formulae marked with signs for assertion and rejection. Epistemic multilateral logic extends standard bilateral systems with a sign for the speech act of weak assertion (Incurvati and Schlöder 2019) and an operator for epistemic modality. We prove that epistemic multilateral logic is sound and complete with respect to the modal logic S5 modulo an appropriate translation. The logical framework developed provides the basis for a novel, proof-theoretic approach to the study of epistemic modality. To demonstrate the fruitfulness of the approach, we show how the framework allows us to reconcile classical logic with the contradictoriness of so-called Yalcin sentences and to distinguish between various inference patterns on the basis of the epistemic properties they preserve.
Alberto Coffa used the phrase "the Copernican turn in semantics" to denote a revolutionary transformation of philosophical views about the connection between the meanings of words and the acceptability of sentences and arguments containing those words. According to the new conception resulting from the Copernican turn, here called "the Copernican view", rules of use are constitutive of the meanings of words. This view has been linked with two doctrines: (A) the instances of meaning-constitutive rules are analytically and a priori true or valid; (B) to grasp a meaning is to accept its rules. The pros and cons of different versions of the Copernican view, ascribable to Wittgenstein, Carnap, Gentzen, Dummett, Prawitz, Boghossian and other authors, will be weighed. A new version will be proposed, which implies neither (A) nor (B).
Donald Davidson was one of the most influential philosophers of the last half of the 20th century, especially in the theory of meaning and in the philosophy of mind and action. In this paper, I concentrate on a field-shaping proposal of Davidson’s in the theory of meaning, arguably his most influential, namely, that insight into meaning may be best pursued by a bit of indirection, by showing how appropriate knowledge of a finitely axiomatized truth theory for a language can put one in a position both to interpret the utterance of any sentence of the language and to see how its semantically primitive constituents together with their mode of combination determine its meaning (Davidson 1965, 1967, 1970, 1973a). This project has come to be known as truth-theoretic semantics. My aim in this paper is to render the best account I can of the goals and methods of truth-theoretic semantics, to defend it against some objections, and to identify its limitations. Although I believe that the project I describe conforms to the main idea that Davidson had, my aim is not primarily Davidson exegesis. I want to get on the table an approach to compositional semantics for natural languages, inspired by Davidson, but extended and developed, which I think does about as much along those lines as any theory could. I believe it is Davidson’s project, and I defend this in detail elsewhere (Ludwig 2015; Lepore and Ludwig 2005, 2007a, 2007b, 2011). But I want to develop and defend the project while also exploring its limitations, without getting entangled in exegetical questions.
Philosophers are divided on whether the proof- or truth-theoretic approach to logic is more fruitful. The paper demonstrates the considerable explanatory power of a truth-based approach to logic by showing that and how it can provide (i) an explanatory characterization, both semantic and proof-theoretical, of logical inference, (ii) an explanatory criterion for logical constants and operators, (iii) an explanatory account of logic’s role (function) in knowledge, as well as explanations of (iv) the characteristic features of logic (formality, strong modal force, generality, topic neutrality, basicness, and (quasi-)apriority), (v) the veridicality of logic and its applicability to science, (vi) the normativity of logic, (vii) error, revision, and expansion in/of logic, and (viii) the relation between logic and mathematics. The high explanatory power of the truth-theoretic approach does not rule out an equal or even higher explanatory power of the proof-theoretic approach. But to the extent that the truth-theoretic approach is shown to be highly explanatory, it sets a standard for other approaches to logic, including the proof-theoretic approach.
The fundamental assumption of Dummett’s and Prawitz’ proof-theoretic justification of deduction is that ‘if we have a valid argument for a complex statement, we can construct a valid argument for it which finishes with an application of one of the introduction rules governing its principal operator’. I argue that the assumption is flawed in this general version, but should be restricted, not to apply to arguments in general, but only to proofs. I also argue that Dummett’s and Prawitz’ project of providing a logical basis for metaphysics only relies on the restricted assumption.
In this dissertation, we shall investigate whether Tennant's criterion for paradoxicality (TCP) can be a correct criterion for genuine paradoxes and whether the requirement of a normal derivation (RND) can be a proof-theoretic solution to the paradoxes. Tennant’s criterion has two types of counterexamples. The one is a case which raises the problem of overgeneration, in which TCP renders a non-paradoxical derivation paradoxical. The other is one which generates the problem of undergeneration, in which TCP makes a genuinely paradoxical derivation come out non-paradoxical. Chapter 2 deals with the problem of undergeneration and Chapter 3 concerns the problem of overgeneration. Chapter 2 argues that Tennant’s diagnosis of the counterexample which applies the CR-rule and causes the undergeneration problem is not correct, and presents a solution to the problem of undergeneration. Chapter 3 argues that Tennant’s diagnosis of the counterexample raising the overgeneration problem is wrong and provides a solution to the problem. Finally, Chapter 4 addresses what should be explicated in order for RND to be a proof-theoretic solution to the paradoxes.
Semantics plays a role in grammar in at least three guises. (A) Linguists seek to account for speakers’ knowledge of what linguistic expressions mean. This goal is typically achieved by assigning a model-theoretic interpretation in a compositional fashion. For example, No whale flies is true if and only if the intersection of the sets of whales and fliers is empty in the model. (B) Linguists seek to account for the ability of speakers to make various inferences based on semantic knowledge. For example, No whale flies entails No blue whale flies and No whale flies high. (C) The well-formedness of a variety of syntactic constructions depends on morpho-syntactic features with a semantic flavor. For example, Under no circumstances would a whale fly is grammatical, whereas Under some circumstances would a whale fly is not, corresponding to the downward vs. upward monotonic features of the preposed phrases.
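The model-theoretic clause in (A) and the monotonicity pattern in (B) can be sketched in a few lines of Python. The model and the individual names here are illustrative assumptions, not taken from the paper:

```python
# A toy extensional model: predicate denotations are sets of individuals.
whales = {"moby", "willy"}
blue_whales = {"moby"}   # assumed subset of whales
fliers = set()           # nothing in this model flies

def no(restrictor, scope):
    """'No R S' is true iff the intersection of R and S is empty."""
    return restrictor.isdisjoint(scope)

# "No whale flies" is true in this model:
print(no(whales, fliers))        # True

# Downward monotonicity in the restrictor: whenever blue_whales is a
# subset of whales, "No whale flies" entails "No blue whale flies".
assert blue_whales <= whales
print(no(blue_whales, fliers))   # True
```

The entailment in (B) falls out of the set-theoretic clause: shrinking the restrictor of *no* can never turn an empty intersection into a non-empty one.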
The focus of this paper is Dummett's meaning-theoretical arguments against classical logic based on considerations about the meaning of negation. Using Dummettian principles, I shall outline three such arguments, of increasing strength, and show that they are unsuccessful by giving responses to each argument on behalf of the classical logician. What is crucial is that in responding to these arguments a classicist need not challenge any of the basic assumptions of Dummett's outlook on the theory of meaning. In particular, I shall grant Dummett his general bias towards verificationism, encapsulated in the slogan 'meaning is use'. The second general assumption I see no need to question is Dummett's particular breed of molecularism. Some of Dummett's assumptions will have to be given up, if classical logic is to be vindicated in his meaning-theoretical framework. A major result of this paper will be that the meaning of negation cannot be defined by rules of inference in the Dummettian framework.
Ian Rumfitt has proposed systems of bilateral logic for primitive speech acts of assertion and denial, with the purpose of ‘exploring the possibility of specifying the classically intended senses for the connectives in terms of their deductive use’ (810f). Rumfitt formalises two systems of bilateral logic and gives two arguments for their classical nature. I assess both arguments and conclude that only one system satisfies the meaning-theoretical requirements Rumfitt imposes in his arguments. I then formalise an intuitionist system of bilateral logic which also meets those requirements. Thus Rumfitt cannot claim that only classical bilateral rules of inference succeed in imparting a coherent sense onto the connectives. My system can be extended to classical logic by adding the intuitionistically unacceptable half of a structural rule Rumfitt uses to codify the relation between assertion and denial. Thus there is a clear sense in which, in the bilateral framework, the difference between classicism and intuitionism is not one of the rules of inference governing negation, but rather one of the relation between assertion and denial.
Many prominent writers on the philosophy of logic, including Michael Dummett, Dag Prawitz, and Neil Tennant, have held that the introduction and elimination rules of a logical connective must be ‘in harmony’ if the connective is to possess a sense. This Harmony Thesis has been used to justify the choice of logic: in particular, supposed violations of it by the classical rules for negation have been the basis for arguments for switching from classical to intuitionistic logic. The Thesis has also had an influence on the philosophy of language: some prominent writers in that area, notably Dummett and Robert Brandom, have taken it to be a special case of a more general requirement that the grounds for asserting a statement must cohere with its consequences. This essay considers various ways of making the Harmony Thesis precise and scrutinizes the most influential arguments for it. The verdict is negative: all the extant arguments for the Thesis are weak, and no version of it is remotely plausible.
[...] I will only investigate [Austin's] claims as challenges to present-day model theoretic semantics. My main point will be to draw a sharp line between the semantic and pragmatic aspects of performatives and thereby discover a gap in Austin’s treatment. This will in my view naturally lead to the proposal in Section 2, that is, to treating performatives as denoting changes in intensional models. The rest of Section 2 will be concerned with the status of felicity conditions and a tentative extension of Montague’s PTQ.
This paper considers Rumfitt’s bilateral classical logic (BCL), which is proposed to counter Dummett’s challenge to classical logic. First, agreeing with several authors, we argue that Rumfitt’s notion of harmony, used to justify logical rules in a purely proof-theoretical manner, is not sufficient to justify the coordination rules of BCL purely proof-theoretically. For the central part of this paper, we propose a notion of proof-theoretical validity similar to Prawitz’s for BCL and prove that BCL is sound and complete with respect to this notion of validity. The major difficulty in defining validity for BCL is that the validity of a positive formula +A appears to depend on that of the negative formula −A, and vice versa. Thus, a straightforward inductive definition does not work because of this circular dependence. However, the Knaster–Tarski fixed point theorem can resolve this circularity. Finally, we discuss the philosophical relevance of our work, in particular the impact of the use of the fixed point theorem and the issue of decidability.
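The fixed-point construction mentioned in the abstract can be sketched as follows; this is a reconstruction under assumed notation, not the paper's exact definitions. Order candidate assignments of validity to signed formulas by inclusion, so that they form a complete lattice, and let the mutually dependent positive and negative clauses jointly induce a single operator:

```latex
% Candidate assignments V are sets of signed formulas (+A or -A),
% ordered by inclusion: a complete lattice.
\[
  F(V) \;=\; \{\, +A \mid A \text{ is valid given the $-$part of } V \,\}
  \;\cup\;
  \{\, -A \mid A \text{ is valid given the $+$part of } V \,\}
\]
% If F is monotone, the Knaster--Tarski theorem yields a least fixed point
\[
  V^{*} \;=\; \bigcap \,\{\, V \mid F(V) \subseteq V \,\}
\]
```

At the fixed point, positive and negative validity are settled simultaneously, which is how the apparent circularity of the naive inductive definition is dissolved.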
Since 1976 Hilary Putnam has on many occasions proposed an argument, founded on some model-theoretic results, to the effect that any philosophical programme whose purpose is to naturalize semantics would fail to account for an important feature of every natural language, the determinacy of reference. Here, after having presented the argument, I will suggest that it does not work, because it simply assumes what it should prove, namely that we cannot extend the metatheory: Putnam appears to think that all we may determinately say about the relations between words and entities in the world is what the model theory tells us, but he has never offered justifications for that. At the end of the article, I will discuss the apparently reliable intuition that seems to me to be at the root of the argument, namely that, given a formal theory, there is an infinite number of ways of connecting it to, or of projecting it onto, the world. I will suggest that we should resist this intuition, because it rests on a very doubtful notion of world, which assumes that for any class of objects there is a corresponding property.
A graph-theoretic account of fibring of logics is developed, capitalizing on the interleaving characteristics of fibring at the linguistic, semantic and proof levels. Fibring of two signatures is seen as a multi-graph (m-graph) where the nodes and the m-edges include the sorts and the constructors of the signatures at hand. Fibring of two models is an m-graph where the nodes and the m-edges are the values and the operations in the models, respectively. Fibring of two deductive systems is an m-graph whose nodes are language expressions and the m-edges represent the inference rules of the two original systems. The sobriety of the approach is confirmed by proving that all the fibring notions are universal constructions. This graph-theoretic view is general enough to accommodate very different fibrings of propositional based logics encompassing logics with non-deterministic semantics, logics with an algebraic semantics, logics with partial semantics and substructural logics, among others. Soundness and weak completeness are proved to be preserved under very general conditions. Strong completeness is also shown to be preserved under tighter conditions. In this setting, the collapsing problem appearing in several combinations of logic systems can be avoided.
A new proof style adequate for modal logics is defined from the polynomial ring calculus. The new semantics not only expresses truth conditions of modal formulas by means of polynomials, but also permits deductions to be performed through polynomial handling. This paper also investigates relationships among the PRC defined here, the algebraic semantics for modal logics, equational logics, the Dijkstra–Scholten equational-proof style, and rewriting systems. The method proposed is thoroughly exemplified for S5, and can be easily extended to other modal logics.
The paper studies a cluster of systems for fully disquotational truth based on the restriction of initial sequents. Unlike well-known alternative approaches, such systems display both a simple and intuitive model theory and remarkable proof-theoretic properties. We start by showing that, due to a strong form of invertibility of the truth rules, cut is eliminable in the systems via a standard strategy supplemented by a suitable measure of the number of applications of truth rules to formulas in derivations. Next, we notice that cut remains eliminable when suitable arithmetical axioms are added to the system. Finally, we establish a direct link between cut-free derivability in infinitary formulations of the systems considered and fixed-point semantics. Noticeably, unlike what happens with other background logics, such links are established without imposing any restriction on the premisses of the truth rules.
This paper shows how to derive nested calculi from labelled calculi for propositional intuitionistic logic and first-order intuitionistic logic with constant domains, thus connecting the general results for labelled calculi with the more refined formalism of nested sequents. The extraction of nested calculi from labelled calculi obtains via considerations pertaining to the elimination of structural rules in labelled derivations. Each aspect of the extraction process is motivated and detailed, showing that each nested calculus inherits favorable proof-theoretic properties from its associated labelled calculus.
Can one combine Davidsonian semantics with a deflationary conception of truth? Williams argues, contra a common worry, that Davidsonian semantics does not require truth-talk to play an explanatory role. Horisk replies that, in any event, the expressive role of truth-talk that Williams emphasizes disqualifies deflationary accounts—at least extant varieties—from combination with Davidsonian semantics. She argues, in particular, that this is so for Quine's disquotationalism, Horwich's minimalism, and Brandom's prosententialism. I argue that Horisk fails to establish her claim in all three cases. This involves clarifying Quine’s understanding of a purely referential occurrence; explaining how Davidsonians can avail themselves of a syntactic treatment of lexical ambiguity; and correcting a common misreading of Brandom (answering along the way an objection offered by Künne as well).
While non-classical theories of truth that take truth to be transparent have some obvious advantages over any classical theory that evidently must take it as non-transparent, several authors have recently argued that there's also a big disadvantage of non-classical theories as compared to their “external” classical counterparts: proof-theoretic strength. While conceding the relevance of this, the paper argues that there is a natural way to beef up extant internal theories so as to remove their proof-theoretic disadvantage. It is suggested that the resulting internal theories should seem preferable to their external counterparts.
The aim of this paper is to provide an intuitive semantics for systems of justification logic which allows us to cope with the distinction between implicit and explicit justifiers. The paper is subdivided into three sections. In the first one, the distinction between implicit and explicit justifiers is presented and connected with a proof-theoretic distinction between two ways of interpreting sequences of sentences; that is, as sequences of axioms in a certain set and as sequences of proofs constructed from that set of axioms. In the second section, a basic system of justification logic for implicit and explicit justifiers is analyzed and some significant facts about it are proved. In the final section, an adequate semantics is proposed, and the system is proved to be sound and complete with respect to it.
Philosophers of language have drawn on metamathematical results in varied ways. Extensionalist philosophers have been particularly impressed with two, not unrelated, facts: the existence, due to Frege/Tarski, of a certain sort of semantics, and the seeming absence of intensional contexts from mathematical discourse. The philosophical import of these facts is at best murky. Extensionalists will emphasize the success and clarity of the model theoretic semantics; others will emphasize the relative poverty of the mathematical idiom; still others will question the aptness of the standard extensional semantics for mathematics. In this paper I investigate some implications of the Gödel Second Incompleteness Theorem for these positions. I argue that the realm of mathematics, proof theory in particular, has been a breeding ground for intensionality and that satisfactory intensional semantic theories are implicit in certain rigorous technical accounts.
Both in formal and computational natural language semantics, the classical correspondence view of meaning – and, more specifically, the view that the meaning of a declarative sentence coincides with its truth conditions – is widely held. Truth (in the world or a situation) plays the role of the given, and meaning is analysed in terms of it. Both language and the world feature in this perspective on meaning, but language users are conspicuously absent. In contrast, the inferentialist semantics that Robert Brandom proposes in his magisterial book ‘Making It Explicit’ puts the language user centre stage. According to his theory of meaning, the utterance of a sentence is meaningful in as far as it is a move by a language user in a game of giving and asking for reasons (with reasons underwritten by a notion of good inferences). In this paper, I propose a proof-theoretic formalisation of the game of giving and asking for reasons that lends itself to computer implementation. In the current proposal, I flesh out an account of defeasible inferences, a variety of inferences which play a pivotal role in ordinary (and scientific) language use.
In this paper, we defend Davidson's program in truth-theoretical semantics against recent criticisms by Scott Soames. We argue that Soames has misunderstood Davidson's project, that in consequence his criticisms miss the mark, that appeal to meanings as entities in the alternative approach that Soames favors does no work, and that the approach is no advance over truth-theoretic semantics.
Assertoric sentences are sentences which admit of truth or falsity. Non-assertoric sentences, imperatives and interrogatives, have long been a source of difficulty for the view that a theory of truth for a natural language can serve as the core of a theory of meaning. The trouble for truth-theoretic semantics posed by non-assertoric sentences is that, prima facie, it does not make sense to say that imperatives, such as 'Cut your hair', or interrogatives, such as 'What time is it?', are true or false. Thus, the vehicle for giving the meaning of a sentence by using an interpretive truth theory, the T-sentence, is apparently unavailable for non-assertoric sentences. This paper shows how to incorporate non-assertoric sentences into a theory of meaning that gives central place to an interpretive truth theory for the language, without, however, reducing the non-assertorics to assertorics, or treating their utterances as semantically equivalent to one or more utterances of assertoric sentences. Four proposals for how to incorporate non-assertoric sentences into a broadly truth-theoretic semantics are reviewed. The proposals fall into two classes, those that attempt to explain the meaning of non-assertoric sentences solely by appeal to truth conditions, and those that attempt to explain the meaning of non-assertoric sentences by appeal to compliance conditions, which can be treated as one variety of fulfillment conditions for sentences of which truth conditions are another variety. The paper argues that none of the extant approaches is successful, but develops a version of the generalized fulfillment approach which avoids the difficulties of previous approaches and still exhibits a truth theory as the central component of a compositional meaning theory for all sentences of natural language.
A sorites argument is a symptom of the vagueness of the predicate with which it is constructed. A vague predicate admits of at least one dimension of variation (and typically more than one) in its intended range along which we are at a loss when to say the predicate ceases to apply, though we start out confident that it does. It is this feature of them that the sorites arguments exploit. Exactly how is part of the subject of this paper. The majority of philosophers writing on vagueness take it to be a kind of semantic phenomenon. If we are right, they are correct in this assumption, which is surely the default position, but they have not so far provided a satisfactory account of the implications of this or a satisfactory diagnosis of the sorites arguments. Other philosophers have urged more exotic responses, which range from the view that the fault lies not in our language, but in the world, which they propose to be populated with vague objects which our semantics precisely reflects, to the view that the world and language are both perfectly in order, but that the fault lies with our knowledge of the properties of the words we use (epistemicism). In contrast to the exotica to which some philosophers have found themselves driven in an attempt to respond to the sorites puzzles, we undertake a defense of the commonsense view that vague terms are semantically vague. Our strategy is to take a fresh look at the phenomenon of vagueness. Rather than attempting to adjudicate between different extant theories, we begin with certain pre-theoretic intuitions about vague terms, and a default position on classical logic. The aim is to see whether (i) a natural story can be told which will explain the vagueness phenomenon and the puzzling nature of soritical arguments, and, in the course of this, to see whether (ii) there arises any compelling pressure to give up the natural stance.
We conclude that there is a simple and natural story to be told, and we tell it, and that there is no good reason to abandon our intuitively compelling starting point. The importance of the strategy lies in its dialectical structure. Not all positions on vagueness are on a par. Some are so incredible that even their defenders think of them as positions of last resort, positions to which we must be driven by the power of philosophical argument. We aim to show that there is no pressure to adopt these incredible positions, obviating the need to respond to them directly. If we are right, semantic vagueness is neither surprising, nor threatening. It provides no reason to suppose that the logic of natural languages is not classical or to give up any independently plausible principle of bivalence. Properly understood, it provides us with a satisfying diagnosis of the sorites argumentation. It would be rash to claim to have any completely novel view about a topic so well worked as vagueness. But we believe that the subject, though ancient, still retains its power to inform and challenge us. In particular, we will argue that taking seriously the central phenomenon of predicate vagueness—the “boundarylessness” of vague predicates—on the commonsense assumption that vagueness is semantic, leads ineluctably to the view that no sentences containing vague expressions (henceforth ‘vague sentences’) are truth-evaluable. This runs counter to much of the literature on vagueness, which commonly assumes that, though some applications of vague predicates to objects fail to be truth-evaluable, in clear positive and negative cases vague sentences are unproblematically true or false. It is clarity on this, and related points, that removes the puzzles associated with vagueness, and helps us to a satisfying diagnosis of why the sorites arguments both seem compelling and yet so obviously a bit of trickery. 
We give a proof that semantically vague predicates neither apply nor fail-to-apply to anything, and that consequently it is a mistake to diagnose sorites arguments, as is commonly done, by attempting to locate in them a false premise. Sorites arguments are not sound, but not unsound either. We offer an explanation of their appeal, and defend our position against a variety of worries that might arise about it. The plan of the paper is as follows. We first introduce an important distinction in terms of which we characterize what has gone wrong with vague predicates. We characterize what we believe to be our natural starting point in thinking about the phenomenon of vagueness, from which only a powerful argument should move us, and then trace out the consequences of accepting this starting point. We consider the charge that among the consequences of semantic vagueness are that we must give up classical logic and the principle of bivalence, which has figured prominently in arguments for epistemicism. We argue there are no such consequences of our view: neither the view that the logic of natural languages is classical, nor any plausible principle of bivalence, need be given up. Next, we offer a diagnosis of what has gone wrong in sorites arguments on the basis of our account. We then present an argument to show that our account must be accepted on pain of embracing (in one way or another) the epistemic view of “vagueness”, i.e., of denying that there are any semantically vague terms at all. Next, we discuss some worries that may arise about the intelligibility of our linguistic practices if our account is correct. We argue none of these worries should force us from our intuitive starting point. Finally, we cast a quick glance at other forms of semantic incompleteness.
The starting point for this paper is a critical discussion of claims of psychological reality articulated within Borg’s (forth.) minimal semantics and Carpintero’s (2007) character*-semantics. It has been proposed, for independent reasons, that their respective accounts can accommodate, or at least avoid, the challenge from psychological evidence. I outline their respective motivations, suggesting various shortcomings in their efforts to preserve the virtues of an uncontaminated semantics in the face of the psychological objection (I-II), and try to make the case that, at least for a theory of utterance comprehension, a truth-conditional pragmatic stance is far preferable. An alternative from a relevance-theoretic perspective is offered in terms of mutual adjustment between truth-conditional content and implicature(s), arguing that many “free” pragmatic processes are needed to uncover the truth-conditional content, which can then warrant the expected implicature(s) (III). I finally illustrate the difficulties their accounts have in predicting the correct order of interpretation in cases of ironic metaphor, i.e. metaphor is computed first, as part of truth-conditional content, while irony is inferentially grounded in metaphorical content (IV).
A Mathematical Review by John Corcoran, SUNY/Buffalo -/- Macbeth, Danielle. Diagrammatic reasoning in Frege's Begriffsschrift. Synthese 186 (2012), no. 1, 289–314. ABSTRACT This review begins with two quotations from the paper: its abstract and the first paragraph of the conclusion. The point of the quotations is to make clear by the “give-them-enough-rope” strategy how murky, incompetent, and badly written the paper is. I know I am asking a lot, but I have to ask you to read the quoted passages—aloud if possible. Don’t miss the silly attempt to recycle Kant’s quip “Concepts without intuitions are empty; intuitions without concepts are blind”. What the paper was aiming at includes the absurdity: “Proofs without definitions are empty; definitions without proofs are, if not blind, then dumb.” But the author even bollixed this. The editor didn’t even notice. The copy-editor missed it. And the author’s proof-reading did not catch it. In order not to torment you I will quote the sentence as it appears: “In a slogan: proofs without definitions are empty, merely the aimless manipulation of signs according to rules; and definitions without proofs are, if no blind, then dumb.”[sic] The rest of my review discusses the paper’s astounding misattribution to contemporary logicians of the information-theoretic approach. This approach was cruelly trashed by Quine in his 1970 Philosophy of Logic, and thereafter ignored by every text I know of. The paper under review attributes generally to modern philosophers and logicians views that were never espoused by any of the prominent logicians—such as Hilbert, Gödel, Tarski, Church, and Quine—apparently in an attempt to distance them from Frege: the focus of the article. On page 310 we find the following paragraph. “In our logics it is assumed that inference potential is given by truth-conditions. Hence, we think, deduction can be nothing more than a matter of making explicit information that is already contained in one’s premises.
If the deduction is valid then the information contained in the conclusion must be contained already in the premises; if that information is not contained already in the premises […], then the argument cannot be valid.” Although the paper is meticulous in citing supporting literature for less questionable points, no references are given for this. In fact, the view that deduction is the making explicit of information that is only implicit in premises has not been espoused in any standard symbolic logic book. It has only recently been articulated by a small number of philosophical logicians from a younger generation, for example, in the prize-winning essay by J. Sagüillo, Methodological practice and complementary concepts of logical consequence: Tarski’s model-theoretic consequence and Corcoran’s information-theoretic consequence, History and Philosophy of Logic, 30 (2009), pp. 21–48. The paper omits definitions of key terms including ‘ampliative’, ‘explicatory’, ‘inference potential’, ‘truth-condition’, and ‘information’. The definition of prime number on page 292 is as follows: “To say that a number is prime is to say that it is not divisible without remainder by another number”. This would make one the only prime number. The paper being reviewed had the benefit of two anonymous referees who contributed “very helpful comments on an earlier draft”. Could these anonymous referees have read the paper? -/- J. Corcoran, U of Buffalo, SUNY -/- PS By the way, if anyone has a paper that has been turned down by other journals, any journal that would publish something like this might be worth trying.
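The review's arithmetic point can be checked mechanically. The sketch below (a hypothetical illustration of the reviewer's objection, not code from either paper) implements the quoted definition literally: a number counts as "prime" just in case it is not divisible without remainder by another number. Since every n ≥ 2 is divisible by 1, which is "another number", only 1 survives.

```python
# Literal reading of the reviewed paper's definition of "prime":
# n is "prime" iff no number other than n itself divides n evenly.
def prime_by_quoted_definition(n):
    return all(n % d != 0 for d in range(1, n + 1) if d != n)

# Every n >= 2 is divisible by 1 ("another number"), so only 1 qualifies.
print([n for n in range(1, 20) if prime_by_quoted_definition(n)])  # prints [1]
```

The correct definition would restrict the divisors to numbers other than 1 and n itself, exactly the qualification the quoted sentence omits.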
Our aim in the present paper is to investigate, from the standpoint of truth-theoretic semantics, English tense, temporal designators and quantifiers, and other expressions we use to relate ourselves and other things to the temporal order. Truth-theoretic semantics provides a particularly illuminating standpoint from which to discuss issues about the semantics of tense, and their relation to thoughts at, and about, times. Tense and temporal modifiers contribute systematically to the conditions under which the sentences we utter are true or false. A Tarski-style truth-theoretic semantics, by requiring explicitly represented truth conditions, helps to sharpen questions about the function of tense, and to deepen our insight into the contribution the tenses and temporal modifiers make to what we say by using them.
At the heart of semantics in the 20th century is Frege’s distinction between sense and force. This is the idea that the content of a self-standing utterance of a sentence S can be divided into two components. One part, the sense, is the proposition that S’s linguistic meaning and context associates with it as its semantic interpretation. The second component is S’s illocutionary force. Illocutionary forces correspond to the three basic kinds of sentential speech acts: assertions, orders, and questions. Forces are then kinds of acts in which propositions are deployed with certain purposes. I sketch a speech-act theoretic semantics in which that distinction does not hold. Instead of propositions and forces, the theory proposes proto-illocutionary acts and illocutionary acts. The orthodox notion of a proposition plays no role in the framework, which is a good thing, since that notion is deeply problematic. The framework also shows how expressionists, who embrace a sophisticated speech-act framework, face no Frege-Geach embedding problem, since the latter assumes the Sense/Force distinction.
Roughly, a proof of a theorem is “pure” if it draws only on what is “close” or “intrinsic” to that theorem. Mathematicians employ a variety of terms to identify pure proofs, saying that a pure proof is one that avoids what is “extrinsic,” “extraneous,” “distant,” “remote,” “alien,” or “foreign” to the problem or theorem under investigation. In the background of these attributions is the view that there is a distance measure (or a variety of such measures) between mathematical statements and proofs. Mathematicians have paid little attention to specifying such distance measures precisely because in practice certain methods of proof have seemed self-evidently impure by design: think for instance of analytic geometry and analytic number theory. By contrast, mathematicians have paid considerable attention to whether such impurities are a good thing or to be avoided, and some have claimed that they are valuable because generally impure proofs are simpler than pure proofs. This article is an investigation of this claim, formulated more precisely by proof-theoretic means. After assembling evidence from proof theory that may be thought to support this claim, we will argue that on the contrary this evidence does not support the claim.
Gaisi Takeuti (1926–2017) is one of the most distinguished logicians in proof theory after Hilbert and Gentzen. He extensively extended Hilbert's program in the sense that he formulated a sequent calculus for higher-order logic extending Gentzen's, conjectured that cut-elimination holds for it (Takeuti's conjecture), and obtained several stunning results in the 1950s–60s towards the solution of his conjecture. Though he has been known chiefly as a great mathematician, he wrote many papers in English and Japanese in which he expressed his philosophical thoughts. In particular, he used several keywords such as "active intuition" and "self-reflection" from Nishida's philosophy. In this paper, we aim to describe a general outline of our project to investigate Takeuti's philosophy of mathematics. In particular, after briefly reviewing Takeuti's proof-theoretic results, we describe some key elements in Takeuti's texts. By explaining these texts, we point out the connection between Takeuti's proof theory and Nishida's philosophy and explain the future goals of our project.
This document presents a Gentzen-style deductive calculus and proves that it is complete with respect to a 3-valued semantics for a language with quantifiers. The semantics resembles the strong Kleene semantics with respect to conjunction, disjunction and negation. The completeness proof for the sentential fragment fills in the details of a proof sketched in Arnon Avron (2003). The extension to quantifiers is original but uses standard techniques.
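For readers unfamiliar with the strong Kleene scheme the abstract mentions, its truth functions for conjunction, disjunction, and negation can be sketched as follows. This is the standard presentation with 0 for false, 0.5 for "neither", and 1 for true; the numeric encoding is an illustrative choice, not the paper's notation.

```python
# Strong Kleene truth functions over the values 0 (false), 0.5 (neither), 1 (true).
def neg(a):
    return 1 - a      # negation swaps true and false, and fixes 0.5

def conj(a, b):
    return min(a, b)  # conjunction takes the minimum value

def disj(a, b):
    return max(a, b)  # disjunction takes the maximum value

# Characteristic cases: a gap spreads through conjunction with a truth,
# but a false conjunct or a true disjunct settles the value classically.
print(conj(1, 0.5))   # prints 0.5
print(conj(0, 0.5))   # prints 0
print(disj(1, 0.5))   # prints 1
print(neg(0.5))       # prints 0.5
```

On this scheme the quantifiers of the paper's first-order extension behave like infinitary conjunction and disjunction, i.e. as minimum and maximum over the instances.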
Nearly a decade has passed since Grove gave a semantics for the AGM postulates. The semantics, called sphere semantics, provided a new perspective on the area of study, and has been widely used in the context of theory or belief change. However, the soundness proof that Grove gives in his paper contains an error. In this note, we will point this out and give two ways of repairing it.
In this paper we give some formal examples of ideas developed by Penco in two papers on the tension inside Frege's notion of sense (see Penco 2003). The paper attempts to reconcile the tension between semantic and cognitive aspects of sense through the idea of sense as proof or procedure – not as an alternative to the idea of sense as truth condition, but as complementary to it (as sometimes happens in the old tradition of procedural semantics).
It is regrettably common for theorists to attempt to characterize the Humean dictum that one can’t get an ‘ought’ from an ‘is’ just in broadly logical terms. We here address an important new class of such approaches which appeal to model-theoretic machinery. Our complaint about these recent attempts is that they interfere with substantive debates about the nature of the ethical. This problem, developed in detail for Daniel Singer’s and Gillian Russell and Greg Restall’s accounts of Hume’s dictum, is of a general type arising for the use of model-theoretic structures in cashing out substantive philosophical claims: the question of whether an abstract model-theoretic structure successfully interprets something often involves taking a stand on non-trivial issues surrounding the thing. In the particular case of Hume’s dictum, given reasonable conceptual or metaphysical claims about the ethical, Singer’s and Russell and Restall’s accounts treat obviously ethical claims as descriptive and vice versa. Consequently, their model-theoretic characterizations of Hume’s dictum are not metaethically neutral. This encourages skepticism about whether model-theoretic machinery suffices to provide an illuminating distinction between the ethical and the descriptive.
A generative grammar for a language L generates one or more syntactic structures for each sentence of L and interprets those structures both phonologically and semantically. A widely accepted assumption in generative linguistics dating from the mid-60s, the Generative Grammar Hypothesis (GGH), is that the ability of a speaker to understand sentences of her language requires her to have tacit knowledge of a generative grammar of it, and the task of linguistic semantics in those early days was taken to be that of specifying the form that the semantic component of a generative grammar must take. Then in the 70s linguistic semantics took a curious turn. Without rejecting GGH, linguists turned away from the task of characterizing the semantic component of a generative grammar to pursue instead the Montague-inspired project of providing for natural languages the same kind of model-theoretic semantics that logicians devise for the artificial languages of formal systems of logic, and “formal semantics” continues to dominate semantics in linguistics. This essay argues that the sort of compositional meaning theory that would verify GGH would not only be quite different from the theories formal semanticists construct, but would be a more fundamental theory that supersedes those theories in that it would explain why they are true when they are true, but their truth wouldn’t explain its truth. Formal semantics has undoubtedly made important contributions to our understanding of such phenomena as anaphora and quantification, but semantics in linguistics is supposed to be the study of meaning.
This means that the formal semanticist can’t be unconcerned that the kind of semantic theory for a natural language that interests her has no place in a theory of linguistic competence; for if GGH is correct, then the more fundamental semantic theory is the compositional meaning theory that is the semantic component of the internally represented generative grammar, and if that is so, then linguistic semantics has so far ignored what really ought to be its primary concern.
When speakers utter conflicting moral sentences, it seems clear that they disagree. It has often been suggested that the fact that the speakers disagree gives us evidence for a claim about the semantics of the sentences they are uttering. Specifically, it has been suggested that the existence of the disagreement gives us reason to infer that there must be an incompatibility between the contents of these sentences. This inference then plays a key role in a now-standard argument against certain theories in moral semantics. In this paper, we introduce new evidence that bears on this debate. We show that there are moral conflict cases in which people are inclined to say both that the two speakers disagree and that it is not the case that at least one of them must be saying something incorrect. We then explore how we might understand such disagreements. As a proof of concept, we sketch an account of the concept of disagreement and an independently motivated theory of moral semantics which, together, explain the possibility of such cases.
Most theories of slurs fall into one of two families: those which understand slurring terms to involve special descriptive/informational content (however conveyed), and those which understand them to encode special emotive/expressive content. Our view is that both offer essential insights, but that part of what sets slurs apart is use-theoretic content. In particular, we urge that slurring words belong at the intersection of a number of categories in a sociolinguistic register taxonomy, one that usually includes [+slang] and [+vulgar] and always includes [-polite] and [+derogatory]. Thus, e.g., what distinguishes ‘Chinese’ from ‘chink’ is neither a peculiar sort of descriptive nor emotional content, but rather the fact that ‘chink’ is lexically marked as belonging to different registers than ‘Chinese’. It is, moreover, partly such facts that make slurring ethically unacceptable.
In this paper, I shall consider the challenge that Quine posed in 1947 to the advocates of quantified modal logic to provide an explanation, or interpretation, of modal notions that is intuitively clear, allows “quantifying in”, and does not presuppose mysterious intensional entities. The modal concepts that Quine and his contemporaries, e.g. Carnap and Ruth Barcan Marcus, were primarily concerned with in the 1940s were the notions of (broadly) logical, or analytical, necessity and possibility, rather than the metaphysical modalities that have since become popular, largely due to the influence of Kripke. In the 1950s modal logicians responded to Quine’s challenge by providing quantified modal logic with model-theoretic semantics of various types. In doing so they also, explicitly or implicitly, addressed Quine’s interpretation problem. Here I shall consider the approaches developed by Carnap in the late 1940s, and by Kanger, Hintikka, Montague, and Kripke in the 1950s, and discuss to what extent these approaches were successful in meeting Quine’s doubts about the intelligibility of quantified modal logic.
This article begins by distinguishing force and mood. Then it lays out desiderata on a successful account. It sketches as background the program of truth-theoretic semantics. Next, it surveys assimilation approaches and argues that they are inadequate. Then it shows how the fulfillment-conditional approach can be applied to imperatives, interrogatives, molecular sentences containing them, and quantification into mood markers. Next, it considers briefly the recent set of propositions approach to the semantics of interrogatives and exclamatives. Finally, it shows how to integrate exclamatives and optatives into a framework similar to the fulfillment approach.
In this paper, we outline an approach to giving a compositional, extensional truth-theoretic semantics for what have traditionally been seen as opaque sentential contexts, one which does not require quantifying over intensional entities of any kind and meets standard objections to such accounts. The account we present aims to meet the following desiderata on a semantic theory T for opaque contexts: (D1) T can be formulated in a first-order extensional language; (D2) T does not require quantification over intensional entities (i.e., meanings, propositions, properties, relations, or the like) in its treatment of opaque contexts; (D3) T captures the entailment relations that hold in virtue of form between sentences in the language for which it is a theory; (D4) T has a finite number of axioms. If the approach outlined here is correct, it resolves a longstanding complex of problems in metaphysics, the philosophy of mind and the philosophy of language.
Takeuti and Titani have introduced and investigated a logic they called intuitionistic fuzzy logic. This logic is characterized as the first-order Gödel logic based on the truth value set [0,1]. The logic is known to be axiomatizable, but no deduction system amenable to proof-theoretic, and hence computational, treatment has been known. Such a system is presented here, based on previous work on hypersequent calculi for propositional Gödel logics by Avron. It is shown that the system is sound and complete, and allows cut-elimination. A question by Takano regarding the eliminability of the Takeuti-Titani density rule is answered affirmatively.
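The truth value set [0,1] mentioned in the abstract comes with the standard Gödel truth functions. The sketch below gives the usual definitions (a standard presentation, not taken from the paper's hypersequent calculus): conjunction and disjunction are minimum and maximum, and implication is 1 when the antecedent's value does not exceed the consequent's, and the consequent's value otherwise.

```python
# Goedel logic on [0,1]: conjunction is min, disjunction is max;
# implication a -> b is 1 when a <= b and b otherwise; negation is a -> 0.
def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

def impl(a, b):
    return 1.0 if a <= b else b

def neg(a):
    return impl(a, 0.0)

# In the first-order case, the universal quantifier is interpreted by the
# infimum, and the existential quantifier by the supremum, of instance values.
print(impl(0.3, 0.7))  # prints 1.0
print(impl(0.7, 0.3))  # prints 0.3
print(neg(0.5))        # prints 0.0
```

Note that negation is crisp: every nonzero value is mapped to 0, and 0 to 1, which is one way the logic diverges from Łukasiewicz-style fuzzy logics.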