In recent years there has been a revitalised interest in non-classical solutions to the semantic paradoxes. In this paper I show that a number of logics are susceptible to a strengthened version of Curry's paradox. This can be adapted to provide a proof-theoretic analysis of the omega-inconsistency in Łukasiewicz's continuum-valued logic, allowing us to better evaluate which logics are suitable for a naïve truth theory. On this basis I identify two natural subsystems of Łukasiewicz logic which individually, but not jointly, lack the problematic feature.
In 1942 Haskell B. Curry presented what is now called Curry's paradox, which can be found in a logic independently of its stand on negation. In recent years there has been a revitalised interest in non-classical solutions to the semantic paradoxes. In this article a non-classical resolution of Curry's paradox and Shaw-Kwei's paradox is proposed that does not reject any contraction postulate. In addition, the relevant paraconsistent logics Č_n^#, 1 ≤ n < ω, in fact provide an effective way of circumventing the triviality of da Costa's paraconsistent set theories NF_n^C.
Nontransitive responses to the validity Curry paradox face a dilemma that was recently formulated by Barrio, Rosenblatt and Tajer. It seems that, in the nontransitive logic ST enriched with a validity predicate, either you cannot prove that all derivable metarules preserve validity, or you can prove that instances of Cut that are not admissible in the logic preserve validity. I respond on behalf of the nontransitive approach. The paper argues, first, that we should reject the detachment principle for naive validity. Secondly, I show how to add a validity predicate to ST while avoiding the dilemma.
Beall and Murzi (2013: 143–165) introduce an object-linguistic predicate for naïve validity, governed by intuitive principles that are inconsistent with the classical structural rules. As a consequence, they suggest that revisionary approaches to semantic paradox must be substructural. In response to Beall and Murzi, Field (2017: 1–19) has argued that naïve validity principles do not admit of a coherent reading and that, for this reason, a non-classical solution to the semantic paradoxes need not be substructural. The aim of this paper is to respond to Field's objections and to point to a coherent notion of validity which underwrites a coherent reading of Beall and Murzi's principles: grounded validity. The notion, first introduced by Nicolai and Rossi, is a generalisation of Kripke's notion of grounded truth, and yields an irreflexive logic. While we do not advocate the adoption of a substructural logic, we take the notion of naïve validity to be a legitimate semantic notion that points to genuine expressive limitations of fully structural revisionary approaches.
Any theory of truth must find a way around Curry's paradox, and there are well-known ways to do so. This paper concerns an apparently analogous paradox, about validity rather than truth, which JC Beall and Julien Murzi call the v-Curry. They argue that there are reasons to want a common solution to it and the standard Curry paradox, and that this rules out the solutions to the latter offered by most "naive truth theorists." To this end they recommend a radical solution to both paradoxes, involving a substructural logic, in particular, one without structural contraction. In this paper I argue that substructuralism is unnecessary. Diagnosing the "v-Curry" is complicated because of a multiplicity of readings of the principles it relies on. But these principles are not analogous to the principles of naive truth, and taken together, there is no reading of them that should have much appeal to anyone who has absorbed the morals of both the ordinary Curry paradox and the second incompleteness theorem.
Curry's paradox for "if... then..." concerns the paradoxical features of sentences of the form "If this very sentence is true, then 2+2=5". Standard inference principles lead us to the conclusion that such conditionals have true consequents: so, for example, 2+2=5 after all. There has been a lot of technical work done on formal options for blocking Curry paradoxes while only compromising a little on the various central principles of logic and meaning that are under threat. Once we have a sense of the technical options, though, a philosophical choice remains. When dealing with puzzles in the logic of conditionals, a natural place to turn is independently motivated semantic theories of the behaviour of "if... then...". This paper argues that the closest-worlds approach outlined in Nolan 1997 offers a philosophically satisfying reason to deny conditional proof and so block the paradoxical Curry reasoning, and can give the verdict that standard Curry conditionals are false, along with related "contraction conditionals".
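The Curry reasoning this abstract refers to can be sketched in a standard natural-deduction presentation (a generic reconstruction, not taken from the paper itself). Let $C$ be the self-referential sentence defined so that $C \leftrightarrow (T(C) \to 2+2=5)$, where $T$ is the truth predicate:

```latex
\begin{align*}
1.\;& T(C) && \text{assumption, for conditional proof}\\
2.\;& T(C) \to (2+2=5) && \text{from 1, by T-elimination and the definition of } C\\
3.\;& 2+2=5 && \text{from 1, 2 by modus ponens (the assumption is used twice: contraction)}\\
4.\;& T(C) \to (2+2=5) && \text{conditional proof, discharging 1}\\
5.\;& T(C) && \text{from 4, by the definition of } C \text{ and T-introduction}\\
6.\;& 2+2=5 && \text{from 4, 5 by modus ponens}
\end{align*}
```

Blocking the derivation requires giving up something at steps 1–4: the paper's closest-worlds diagnosis targets the conditional-proof step (4), while substructural approaches target the double use of the assumption at step 3.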
This paper presents a range of new triviality proofs pertaining to naïve truth theory formulated in paraconsistent relevant logics. It is shown that excluded middle together with various permutation principles such as A → (B → C) ⊩ B → (A → C) trivialize naïve truth theory. The paper also provides some new triviality proofs which utilize the axioms ((A → B) ∧ (B → C)) → (A → C) and (A → ¬A) → ¬A, the fusion connective and the Ackermann constant. An overview of various ways to formulate Leibniz's law in non-classical logics and two new triviality proofs for naïve set theory are also provided.
For semantic inferentialists, the basic semantic concept is validity. An inferentialist theory of meaning should offer an account of the meaning of "valid." If one tries to add a validity predicate to one's object language, however, one runs into problems like the v-Curry paradox. In previous work, I presented a validity predicate for a non-transitive logic that can adequately capture its own meta-inferences. Unfortunately, in that system, one cannot show of any inference that it is invalid. Here I extend the system so that it can capture invalidities.
Stephen Barker presents a novel approach to solving semantic paradoxes, including the Liar and its variants and Curry's paradox. His approach is based around the concept of alethic undecidability. His approach, if successful, renders futile all attempts to assign semantic properties to the paradoxical sentences, whilst leaving classical logic fully intact. And, according to Barker, even the T-scheme remains valid, for validity is not undermined by undecidable instances. Barker's approach is innovative and worthy of further consideration, particularly by those of us who aim to find a solution without logical revisionism. As it stands, however, the approach is unsuccessful, as I shall demonstrate below.
The semantic paradoxes are a family of arguments, including the liar paradox, Curry's paradox, Grelling's paradox of heterologicality, Richard's and Berry's paradoxes of definability, and others, which have two things in common: first, they make essential use of such semantic concepts as those of truth, satisfaction, reference, and definition; second, they seem to be very good arguments until we see that their conclusions are contradictory or absurd. These arguments raise serious doubts concerning the coherence of the concepts involved. This article will offer an introduction to some of the main theories that have been proposed to solve the paradoxes and avert those doubts. Also included is a brief history of the semantic paradoxes from Eubulides to Tarski and Curry.
The aim of the paper is to analyze Priest's dialetheic solution to Curry's paradox. It has been shown that a solution that refutes ABS, accepts MPP and consequently refutes CP meets some difficulties. Here I concentrate on just one difficulty: one obtains the validity of MPP only by using FA in the metalanguage, a rule that is invalid for a dialetheist.
This paper targets a series of potential issues for the discussion of, and modal resolution to, the alethic paradoxes advanced by Scharp (2013). I aim, then, to provide a novel, epistemicist treatment of the alethic paradoxes. In response to Curry's paradox, the epistemicist solution that I advance enables the retention of both classical logic and the traditional rules for the alethic predicate: truth-elimination and truth-introduction. By availing of epistemic modal logic, the epistemicist approach further permits a descriptively adequate explanation of the indeterminacy that is exhibited by epistemic states concerning liar-paradoxical sentences.
The perhaps most important criticism of the nontransitive approach to semantic paradoxes is that it cannot truthfully express exactly which metarules preserve validity. I argue that this criticism overlooks that the admissibility of metarules cannot be expressed in any logic that allows us to formulate validity-Curry sentences and that is formulated in a classical metalanguage. Hence, the criticism applies to all approaches that do their metatheory in classical logic. If we do the metatheory of nontransitive logics in a nontransitive logic, however, there is no reason to think that the argument behind the criticism goes through. In general, asking a logic to express its own admissible metarules may not be a good idea.
Within the (Haskell Curry) notion of a formal system we complete Tarski's formal correctness, ∀x (True(x) ↔ ⊢ x), and use this finally formalized notion of truth to refute his own Undefinability Theorem (based on the Liar Paradox), the Liar Paradox, and the (Panu Raatikainen) essence of the conclusion of the 1931 Incompleteness Theorem.
The ‘syntax’ and ‘combinatorics’ of my title are what Curry (1961) referred to as phenogrammatics and tectogrammatics respectively. Tectogrammatics is concerned with the abstract combinatorial structure of the grammar and directly informs semantics, while phenogrammatics deals with concrete operations on syntactic data structures such as trees or strings. In a series of previous papers (Muskens, 2001a; Muskens, 2001b; Muskens, 2003) I have argued for an architecture of the grammar in which finite sequences of lambda terms are the basic data structures, pairs of terms (syntax, semantics) for example. These sequences then combine with the help of simple generalizations of the usual abstraction and application operations. This theory, which I call Lambda Grammars and which is closely related to the independently formulated theory of Abstract Categorial Grammars (de Groote, 2001; de Groote, 2002), in fact is an implementation of Curry's ideas: the level of tectogrammar is encoded by the sequences of lambda-terms and their ways of combination, while the syntactic terms in those sequences constitute the phenogrammatical level. In de Groote's formulation of the theory, tectogrammar is the level of abstract terms, while phenogrammar is the level of object terms.
Motivated by H. Curry’s well-known objection and by a proposal of L. Henkin, this article introduces the positive tableaux, a form of tableau calculus without refutation based upon the idea of implicational triviality. The completeness of the method is proven, which establishes a new decision procedure for the (classical) positive propositional logic. We also introduce the concept of paratriviality in order to contribute to the question of paradoxes and limitations imposed by the behavior of classical implication.
This work studies some problems connected to the role of negation in logic, treating the positive fragments of propositional calculus in order to deal with two main questions: the proof of the completeness theorems in systems lacking negation, and the puzzle raised by positive paradoxes like the well-known argument of Haskell Curry. We study the constructive completeness method proposed by Leon Henkin for classical fragments endowed with implication, and advance some reasons explaining what makes it difficult to extend this constructive method to non-classical fragments equipped with weaker implications (that avoid Curry's objection). This is the case, for example, of Jan Łukasiewicz's n-valued logics and Wilhelm Ackermann's logic of restricted implication. Besides such problems, both Henkin's method and the triviality phenomenon enable us to propose a new positive tableau proof system which uses only positive meta-linguistic resources, and to motivate a new discussion concerning the role of negation in logic, proposing the concept of paratriviality. In this way, some relations between positive reasoning and infinity, the possibilities to obtain a first-order positive logic, as well as the philosophical connection between truth and meaning, are discussed from a conceptual point of view.
This dissertation concerns the foundations of epistemic modality. I examine the nature of epistemic modality, when the modal operator is interpreted as concerning both apriority and conceivability, as well as states of knowledge and belief. The dissertation demonstrates how phenomenal consciousness and gradational possible-worlds models in Bayesian perceptual psychology relate to epistemic modal space. The dissertation demonstrates, then, how epistemic modality relates to the computational theory of mind; metaphysical modality; deontic modality; logical modality; the types of mathematical modality; to the epistemic status of undecidable propositions and abstraction principles in the philosophy of mathematics; to the apriori-aposteriori distinction; to the modal profile of rational propositional intuition; and to the types of intention, when the latter is interpreted as a modal mental state. Examining the nature of epistemic logic itself, I develop a novel approach to conditions of self-knowledge in the setting of the modal μ-calculus, as well as novel epistemicist solutions to Curry's, the liar, and the knowability paradoxes. Solutions to previously intransigent issues concerning the first-person concept; the distinction between fundamental and derivative truths; and the unity of intention and its role in decision theory, are developed along the way.
Combinatory logic (Curry and Feys 1958) is a "variable-free" alternative to the lambda calculus. The two have the same expressive power but build their expressions differently. "Variable-free" semantics is, more precisely, "free of variable binding": it has no operation like abstraction that turns a free variable into a bound one; it uses combinators—operations on functions—instead. For the general linguistic motivation of this approach, see the works of Steedman, Szabolcsi, and Jacobson, among others. The standard view in linguistics is that reflexive and personal pronouns are free variables that get bound by an antecedent through some coindexing mechanism. In variable-free semantics the same task is performed by some combinator that identifies two arguments of the function it operates on (a duplicator). This combinator may be built into the lexical semantics of the pronoun, into that of the antecedent, or it may be a free-floating operation applicable to predicates or larger chunks of texts, i.e. a typeshifter. This note is concerned with the case of cross-sentential anaphora. It adopts Hepple's and Jacobson's interpretation of pronouns as identity maps and asks how this can be extended to the cross-sentential case, assuming the dynamic semantic view of anaphora. It first outlines the possibility of interpreting indefinites that antecede non-c-commanded pronouns as existential quantifiers enriched with a duplicator. Then it argues that it is preferable to use the duplicator as a type-shifter that applies "on the fly". The proposal has consequences for two central ingredients of the classical dynamic semantic treatment: it does away with abstraction over assignments and with treating indefinites as inherently existentially quantified. However, cross-sentential anaphora remains a matter of binding, and the idea of propositions as context change potentials is retained.
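The duplicator idea in this abstract can be illustrated with a small sketch (the toy lexicon and names below are mine, not the paper's): Curry's W combinator identifies two argument slots of a curried function, which is how variable-free semantics interprets a reflexive without any bound variable.

```typescript
// Toy model: two entities and a curried two-place predicate.
type Entity = "John" | "Mary";

// praise(agent)(patient): in this toy lexicon, John praises everyone.
const praise = (agent: Entity) => (_patient: Entity): boolean =>
  agent === "John";

// The duplicator (Curry's W combinator): W f x = f x x.
// It collapses two argument slots into one, mimicking reflexive binding.
const dup = <A, B>(f: (x: A) => (y: A) => B) => (x: A): B => f(x)(x);

// "John praises himself": the reflexive contributes dup, not a variable.
const johnPraisesHimself = dup(praise)("John");
console.log(johnPraisesHimself); // prints true
```

The same `dup` operation can equally be attached to the antecedent or used as a free-floating type-shifter, as the note discusses; the combinator itself is the same in each case.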
In this paper we discuss a new perspective on the syntax-semantics interface. Semantics, in this new set-up, is not ‘read off’ from Logical Forms as in mainstream approaches to generative grammar. Nor is it assigned to syntactic proofs using a Curry-Howard correspondence as in versions of the Lambek Calculus, or read off from f-structures using Linear Logic as in Lexical-Functional Grammar (LFG, Kaplan & Bresnan [9]). All such approaches are based on the idea that syntactic objects (trees, proofs, f-structures) are somehow prior and that semantics must be parasitic on those syntactic objects. We challenge this idea and develop a grammar in which syntax and semantics are treated in a strictly parallel fashion. The grammar will have many ideas in common with the (converging) frameworks of categorial grammar and LFG, but its treatment of the syntax-semantics interface is radically different. Also, although the meaning component of the grammar is a version of Montague semantics and although there are obvious affinities between Montague's conception of grammar and the work presented here, the grammar is not compositional, in the sense that composition of meaning need not follow surface structure.
I here present and defend what I call the Triviality Theory of Truth, to be understood in analogy with Matti Eklund's Inconsistency Theory of Truth. A specific formulation of the theory is defended and compared with alternatives found in the literature. A number of objections against the proposed notion of meaning-constitutivity are discussed and held inconclusive. The main focus, however, is on the problem, discussed at length by Gupta and Belnap, that speakers do not accept epistemically neutral conclusions of Curry derivations. I first argue that the facts about speakers' reactions to such Curry derivations do not constitute a problem for the Triviality Theory specifically. Rather, they follow from independent, uncontroversial facts. I then propose a solution which coheres with the theory as I understand it. Finally, I consider a normative reading of their objection and offer a response.
Tarski's Undefinability of Truth Theorem comes in two versions: that no consistent theory which interprets Robinson's Arithmetic (Q) can prove all instances of the T-Scheme and hence define truth; and that no such theory, if sound, can even express truth. In this note, I prove corresponding limitative results for validity. While Peano Arithmetic already has the resources to define a predicate expressing logical validity, as Jeff Ketland has recently pointed out (2012, Validity as a primitive. Analysis 72: 421-30), no theory which interprets Q and is closed under the standard structural rules can either define or express validity, on pain of triviality. The results put pressure on the widespread view that there is an asymmetry between truth and validity, viz. that while the former cannot be defined within the language, the latter can. I argue that Vann McGee's and Hartry Field's arguments for the asymmetry view are problematic.
As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century, a chase with as many twists and turns as a thriller. At the end of the story is a new principle for designing programming languages that will guide computers into the 21st century. For my money, Gentzen's natural deduction and Church's lambda calculus are on a par with Einstein's relativity and Dirac's quantum physics for elegance and insight. And the maths are a lot simpler. I want to show you the essence of these ideas. I'll need a few symbols, but not too many, and I'll explain as I go along. To simplify, I'll present the story as we understand it now, with some asides to fill in the history. First, I'll introduce Gentzen's natural deduction, a formalism for proofs. Next, I'll introduce Church's lambda calculus, a formalism for programs. Then I'll explain why proofs and programs are really the same thing, and how simplifying a proof corresponds to executing a program. Finally, I'll conclude with a look at how these principles are being applied to design a new generation of programming languages, particularly mobile code for the Internet.
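The proofs-as-programs correspondence the essay describes can be shown concretely in any typed functional language (a generic illustration, not drawn from the essay itself): under Curry-Howard, a conjunction is a pair type, an implication is a function type, and a natural-deduction proof of (A ∧ B) → (B ∧ A) is literally a program that swaps a pair.

```typescript
// A proof of (A /\ B) -> (B /\ A) is a program swapping a pair's components.
const swapProof = <A, B>([a, b]: [A, B]): [B, A] => [b, a];

// Implication elimination (modus ponens) corresponds to function application.
const modusPonens = <A, B>(f: (x: A) => B, a: A): B => f(a);

// Running the program corresponds to normalising the proof.
console.log(swapProof(["A", "B"]));
```

That `swapProof` type-checks is the correspondence in miniature: the type is the proposition, the term is the proof, and evaluation is proof simplification.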
Many psychologists studying lay belief attribution and behavior explanation cite Donald Davidson in support of their assumption that people construe beliefs as inner causes. But Davidson's influential argument is unsound; there are no objective grounds for the intuition that the folk construe beliefs as inner causes that produce behavior. Indeed, recent experimental work by Ian Apperly, Bertram Malle, Henry Wellman, and Tania Lombrozo provides an empirical framework that accords well with Gilbert Ryle's alternative thesis that the folk construe beliefs as patterns of living that contextualize behavior.
Descartes held the following view of declarative memory: to remember is to reconstruct an idea that you intellectually recognize as a reconstruction. Descartes countenanced two overarching varieties of declarative memory. To have an intellectual memory is to intellectually reconstruct a universal idea that you recognize as a reconstruction, and to have a sensory memory is to neurophysiologically reconstruct a particular idea that you recognize as a reconstruction. Sensory remembering is thus a capacity of neither ghosts nor machines, but only of human beings qua mind-body unions. This interpretation unifies Descartes's various remarks (and conspicuous silences) about remembering, from the 1628 Rules for the Direction of the Mind through the suppressed-in-1633 Treatise of Man to the 1649 Passions of the Soul. It also rebuts a prevailing thesis in the current secondary literature—that Cartesian critters can remember—while incorporating the textual evidence for that thesis—Descartes's detailed descriptions of the corporeal mechanisms that construct sensory memories.
This article offers an interpretation of Descartes's method of doubt. It wields an examination of Descartes's pedagogy—as exemplified by The Search for Truth as well as the Meditations—to make the case for the sincerity (as opposed to artificiality) of the doubts engendered by the First Meditation. Descartes was vigilant about balancing the need to use his method of doubt to achieve absolute certainty with the need to compensate for the various foibles of his scholastic and unschooled readers. Nevertheless, Descartes endeavored to instill willful, context-independent, universal doubt across his readership. If all goes well, readers of the Meditations are like method actors; the Meditator is the character they are meant to bring to life, via the method of meditating on reasons for doubt. The article concludes with the suggestion that Descartes was the same kind of skeptic as the early Academic skeptics Arcesilaus and Carneades.