This article is a translation of a paper published in Polish ("Alfred Tarski - człowiek, który zdefiniował prawdę") in Ruch Filozoficzny 4 (4) (2007). It is a personal memoir of Alfred Tarski, based on my stay in Berkeley and a visit to Alfred Tarski's house at the invitation of Janusz Tarski.
Alfred Tarski (1901--1983) is widely regarded as one of the two giants of twentieth-century logic and also as one of the four greatest logicians of all time (Aristotle, Frege and Gödel being the other three). Of the four, Tarski was the most prolific as a logician. The four volumes of his collected papers, which exclude most of his 19 monographs, span over 2500 pages. Aristotle's writings are comparable in volume, but most of the Aristotelian corpus is not about logic, whereas virtually everything written by Tarski concerns logic more or less directly. There is no doubt that Tarski wrote more on logic than any other author; he started publishing on logic in 1921 at the age of 20 and continued until his death at the age of 82. Two of his works appeared posthumously [Hist. Philos. Logic 7 (1986), no. 2, 143--154; MR0868748 (88b:03010); Tarski and Givant, A formalization of set theory without variables, Amer. Math. Soc., Providence, RI, 1987; MR0920815 (89g:03012)]. Tarski's voluminous writings were widely scattered in numerous journals, some quite rare. It has been extremely difficult to study the development of Tarski's thought and to trace the interconnections and interdependence of his various papers. Thanks to the present collection all this has changed, and it is likely that the increased accessibility of Tarski's papers will have the effect of increasing Tarski's already enormous influence.
Alfred Tarski was one of the greatest logicians of the twentieth century. His influence comes not merely through his own work but from the legion of students who pursued his projects, both in Poland and at Berkeley. This chapter focuses on three key areas of Tarski's research, beginning with his groundbreaking studies of the concept of truth. Tarski's work led to the creation of the area of mathematical logic known as model theory and prefigured semantic approaches in the philosophy of language and philosophical logic, such as Kripke's possible-worlds semantics for modal logic. We also examine the paradoxical decomposition of the sphere known as the Banach–Tarski paradox. Finally, we examine Tarski's work on decidable and undecidable theories, which he carried out in collaboration with students such as Mostowski, Presburger, Robinson and others.
Alfred Tarski seems to endorse a partial conception of truth, the T-schema, which he believes might be clarified by the application of empirical methods, specifically citing the experimental results of Arne Næss (1938a). The aim of this paper is to argue that Næss’ empirical work confirmed Tarski’s semantic conception of truth, among other conceptions. In the first part, I lay out the case for believing that Tarski’s T-schema, while not the formal and generalizable Convention T, provides a partial account of truth that may be buttressed by an examination of the ordinary person’s views of truth. Then, I address a concern raised by Tarski’s contemporaries who saw Næss’ results as refuting Tarski’s semantic conception. Following that, I summarize Næss’ results. Finally, I contend with a few objections suggesting that a strict interpretation of Næss’ results might recommend an overturning of Tarski’s theory.
This paper discusses the history of the confusion and controversies over whether the definition of consequence presented in the 11-page 1936 Tarski consequence-definition paper is based on a monistic fixed-universe framework, like Begriffsschrift and Principia Mathematica. Monistic fixed-universe frameworks, common in pre-WWII logic, keep the range of the individual variables fixed as the class of all individuals. The contrary alternative is that the definition is predicated on a pluralistic multiple-universe framework, like the 1931 Gödel incompleteness paper. A pluralistic multiple-universe framework recognizes multiple universes of discourse serving as different ranges of the individual variables in different interpretations, as in post-WWII model theory. In the early 1960s, many logicians held, mistakenly as we show, the 'contrary alternative' that Tarski 1936 had already adopted a Gödel-type, pluralistic, multiple-universe framework. We explain that Tarski had not yet shifted out of the monistic, Frege–Russell, fixed-universe paradigm. We further argue that between his Principia-influenced pre-WWII Warsaw period and his model-theoretic post-WWII Berkeley period, Tarski's philosophy underwent many other radical changes.
Tarski’s Convention T—presenting his notion of adequate definition of truth (sic)—contains two conditions: alpha and beta. Alpha requires that all instances of a certain T Schema be provable. Beta requires in effect the provability of ‘every truth is a sentence’. Beta formally recognizes the fact, repeatedly emphasized by Tarski, that sentences (devoid of free variable occurrences)—as opposed to pre-sentences (having free occurrences of variables)—exhaust the range of significance of ‘is true’. In Tarski’s preferred usage, it is part of the meaning of ‘true’ that attribution of being true to a given thing presupposes the thing is a sentence. Beta’s importance is further highlighted by the fact that alpha can be satisfied using the recursively definable concept of being satisfied by every infinite sequence, which Tarski explicitly rejects. Moreover, in Definition 23, the famous truth-definition, Tarski supplements “being satisfied by every infinite sequence” by adding the condition “being a sentence”. Even where truth is undefinable and treated by Tarski axiomatically, he adds as an explicit axiom a sentence to the effect that every truth is a sentence. Surprisingly, the sentence just before the presentation of Convention T seems to imply that alpha alone might be sufficient. Even more surprising is the sentence just after Convention T saying beta “is not essential”. Why include a condition if it is not essential? Tarski says nothing about this dissonance. Considering the broader context, the Polish original, the German translation from which the English was derived, and other sources, we attempt to determine what Tarski might have intended by the two troubling sentences which, as they stand, are contrary to the spirit, if not the letter, of several other passages in Tarski’s corpus.
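For readers unfamiliar with the schema at issue in condition alpha, the standard form of the T Schema can be sketched as follows; the snow-is-white instance is the familiar textbook example, not one drawn from the abstract above.

```latex
% T Schema: for each sentence S of the object language, the
% metalanguage must prove the biconditional obtained by putting
% a name of S on the left and S itself (or its translation) on
% the right:
\[
  \mathrm{True}(\ulcorner S \urcorner) \;\leftrightarrow\; S
\]
% Canonical instance:
%   'Snow is white' is true if and only if snow is white.
```

Condition beta, as the abstract explains, adds on top of these biconditionals the further requirement that ‘every truth is a sentence’ be provable.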
Hilary Putnam's famous arguments criticizing Tarski's theory of truth are evaluated. It is argued that they do not succeed in undermining Tarski's approach. One of the arguments is based on the problematic idea of a false instance of the T-schema. The other ignores various issues essential to Tarski's setting, such as the language-relativity of the truth definition.
Equality and identity. Bulletin of Symbolic Logic 19 (2013) 255–6. (Coauthor: Anthony Ramnauth) Also see https://www.academia.edu/s/a6bf02aaab

This article uses ‘equals’ [‘is equal to’] and ‘is’ [‘is identical to’, ‘is one and the same as’] as they are used in ordinary exact English. In a logically perfect language the oxymoron ‘the numbers 3 and 2+1 are the same number’ could not be said. Likewise, ‘the number 3 and the number 2+1 are one number’ is just as bad from a logical point of view. In normal English these two sentences are idiomatically taken to express the true proposition that ‘the number 3 is the number 2+1’. Another idiomatic convention that interferes with clarity about equality and identity occurs in discussion of numbers: it is usual to write ‘3 equals 2+1’ when “3 is 2+1” is meant. When ‘3 equals 2+1’ is written there is a suggestion that 3 is not exactly the same number as 2+1 but that they merely have the same value. This becomes clear when we say that two of the sides of a triangle are equal if the two angles they subtend are equal or have the same measure.

Acknowledgements: Robert Barnes, Mark Brown, Jack Foran, Ivor Grattan-Guinness, Forest Hansen, David Hitchcock, Spaulding Hoffman, Calvin Jongsma, Justin Legault, Joaquin Miller, Tania Miller, and Wyman Park.

► JOHN CORCORAN AND ANTHONY RAMNAUTH, Equality and identity. Philosophy, University at Buffalo, Buffalo, NY 14260-4150, USA. E-mail: corcoran@buffalo.edu

The two halves of one line are equal but not identical [one and the same]. Otherwise the line would have only one half! Every line equals infinitely many other lines, but no line is [identical to] any other line—taking ‘identical’ strictly here and below. Knowing that two lines equaling a third are equal is useful; the condition “two lines equaling a third” often holds. In fact, any two sides of an equilateral triangle are equal to the remaining side!

But could knowing that two lines being [identical to] a third are identical be useful? The antecedent condition “two things identical to a third” never holds, nor does the consequent condition “two things being identical”. If two things were identical to a third, they would be the third and thus not be two things but only one. The plural predicate ‘are equal’ as in ‘All diameters of a given circle are equal’ is useful and natural. ‘Are identical’ as in ‘All centers of a given circle are identical’ is awkward or worse; it suggests that a circle has multiple centers. Substituting equals for equals [replacing one of two equals by the other] makes sense. Substituting identicals for identicals is empty—a thing is identical only to itself; substituting one thing for itself leaves that thing alone, does nothing. There are as many types of equality as magnitudes: angles, lines, planes, solids, times, etc. Each admits unit magnitudes. And each such equality analyzes as identity of magnitude: two lines are equal [in length] if the one’s length is identical to the other’s. Tarski [1] hardly mentioned equality-identity distinctions (pp. 54–63). His discussion begins:

Among the logical concepts […], the concept of IDENTITY or EQUALITY […] has the greatest importance.

Not until page 62 is there an equality-identity distinction. His only “notion of equality”, if such it is, is geometrical congruence—having the same size and shape—an equivalence relation not admitting any unit. Does anyone but Tarski ever say ‘this triangle is equal to that’ to mean that the first is congruent to the second? What would motivate him to say such a thing? This lecture treats the history and philosophy of equality-identity distinctions.

[1] ALFRED TARSKI, Introduction to Logic, Dover, New York, 1995.

[This is expanded from the printed abstract.]
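The abstract's analysis of magnitude-equality as identity of magnitude can be put in symbols; the length-function notation below is my own illustrative gloss, not Corcoran and Ramnauth's.

```latex
% Equality of line segments analyzed as identity of their lengths:
% segments a and b are equal (in length) iff the length of a is
% one and the same magnitude as the length of b.
\[
  a \approx b \;:\Longleftrightarrow\; \mathrm{length}(a) = \mathrm{length}(b)
\]
% Equality so defined is an equivalence relation. Identity of the
% segments themselves is strictly stronger: a = b entails
% a \approx b, but two distinct segments can be equal in length --
% the "two halves of one line" of the abstract.
```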
Corcoran’s 27 entries in the 1999 second edition of Robert Audi’s Cambridge Dictionary of Philosophy [Cambridge: Cambridge UP].

ancestral, axiomatic method, borderline case, categoricity, Church (Alonzo), conditional, convention T, converse (outer and inner), corresponding conditional, degenerate case, domain, De Morgan, ellipsis, laws of thought, limiting case, logical form, logical subject, material adequacy, mathematical analysis, omega, proof by recursion, recursive function theory, scheme, scope, Tarski (Alfred), tautology, universe of discourse.

The entire work is available online free at more than one website. Paste the whole URL. http://archive.org/stream/RobertiAudi_The.Cambridge.Dictionary.of.Philosophy/Robert.Audi_The.Cambridge.Dictionary.of.Philosophy

The 2015 third edition will be available soon. Before you think of buying it, read some reviews on Amazon and read reviews of its competition: for example, my review of the 2008 Oxford Companion to Philosophy, History and Philosophy of Logic, 29:3, 291–292. URL: http://dx.doi.org/10.1080/01445340701300429

Some of the entries have already been found to be flawed. For example, Tarski’s expression ‘materially adequate’ was misinterpreted in at least one article and was misused in another where ‘materially correct’ should have been used. The discussion provides an opportunity to bring more flaws to light.

Acknowledgements: Each of these entries was presented at meetings of The Buffalo Logic Dictionary Project sponsored by The Buffalo Logic Colloquium. The members of the colloquium read drafts before the meetings and were generous with corrections, objections, and suggestions. Usually one 90-minute meeting was devoted to one entry, although some entries, for example “axiomatic method”, took more than one meeting. Moreover, about half of the entries are rewrites of similarly named entries in the 1995 first edition.
Besides the help received from people in Buffalo, help from elsewhere was received by email. We gratefully acknowledge the following: José Miguel Sagüillo, John Zeis, Stewart Shapiro, Davis Plache, Joseph Ernst, Richard Hull, Concha Martinez, Laura Arcila, James Gasser, Barry Smith, Randall Dipert, Stanley Ziewacz, Gerald Rising, Leonard Jacuzzo, George Boger, William Demopolous, David Hitchcock, John Dawson, Daniel Halpern, William Lawvere, John Kearns, Ky Herreid, Nicolas Goodman, William Parry, Charles Lambros, Harvey Friedman, George Weaver, Hughes Leblanc, James Munz, Herbert Bohnert, Robert Tragesser, David Levin, Sriram Nambiar, and others.
Alfred Tarski was a nominalist. But he published almost nothing on his nominalist views, and until recently the only sources scholars had for studying Tarski’s nominalism were conversational reports from his friends and colleagues. However, a recently-discovered archival resource provides the most detailed information yet about Tarski’s nominalism. Tarski spent the academic year 1940-41 at Harvard, along with many of the leading lights of scientific philosophy: Carnap, Quine, Hempel, Goodman, and (for the fall semester) Russell. This group met frequently to discuss logical and philosophical topics of shared interest. At these meetings, Carnap took dictation notes, which are now stored in the Archives of Scientific Philosophy. Interestingly, and somewhat surprisingly, the plurality of notes covers a proposal Tarski presents for a nominalist language of unified science. This chapter addresses the following questions about this project. What, precisely, is Tarski’s nominalist position? What rationales are given for Tarski’s nominalist stance—and are these rationales defensible? Finally, how is Tarskian nominalism of 1941 related to current nominalist projects?
Many commentators on Alfred Tarski have, following Hartry Field, claimed that Tarski's truth-definition was motivated by physicalism—the doctrine that all facts, including semantic facts, must be reducible to physical facts. I claim, instead, that Tarski did not aim to reduce semantic facts to physical ones. Thus, Field's criticism that Tarski's truth-definition fails to fulfill physicalist ambitions does not reveal Tarski to be inconsistent, since Tarski's goal is not to vindicate physicalism. I argue that Tarski's only published remarks that speak approvingly of physicalism were written in unusual circumstances: Tarski was likely attempting to appease an audience of physicalists that he viewed as hostile to his ideas. In later sections I develop positive accounts of: (1) Tarski's reduction of semantic concepts; (2) Tarski's motivation to develop formal semantics in the particular way he does; and (3) the role physicalism plays in Tarski's thought.
In the early 20th century, scepticism was common among philosophers about the very meaningfulness of the notion of truth – and of the related notions of denotation, definition etc. (i.e., what Tarski called semantical concepts). Awareness was growing of the various logical paradoxes and anomalies arising from these concepts. In addition, more philosophical reasons were being given for this aversion. The atmosphere changed dramatically with Alfred Tarski’s path-breaking contribution. What Tarski did was to show that, assuming that the syntax of the object language is specified exactly enough, and that the metatheory has a certain amount of set-theoretic power, one can explicitly define truth in the object language. And what can be explicitly defined can be eliminated. It follows that the defined concept cannot give rise to any inconsistencies (that is, paradoxes). This gave new respectability to the concept of truth and related notions. Nevertheless, philosophers’ judgements on the nature and philosophical relevance of Tarski’s work have varied. It is my aim here to review and evaluate some threads in this debate.
We discuss misinformation about “the liar antinomy” with special reference to Tarski’s 1933 truth-definition paper [1]. Lies are speech-acts, not merely sentences or propositions. Roughly, lies are statements of propositions not believed by their speakers. Speakers who state their false beliefs are often not lying. And speakers who state true propositions that they don’t believe are often lying—regardless of whether the non-belief is disbelief. Persons who state propositions on which they have no opinion are lying as much as those who state propositions they believe to be false. Not all lies are statements of false propositions—some lies are true; some have no truth-value. People who only occasionally lie are not liars: roughly, liars repeatedly and habitually lie. Some half-truths are statements intended to mislead even though the speakers “interpret” the sentences used as expressing true propositions. Others are statements of propositions believed by the speakers to be questionable but without revealing their supposed problematic nature. The two “formulations” of “the antinomy of the liar” in [1], pp. 157–8 and 161–2, have nothing to do with lying or liars. The first focuses on an “expression” Tarski calls ‘c’, namely the following.

c is not a true sentence

The second focuses on another “expression”, also called ‘c’, namely the following.

for all p, if c is identical with the sentence ‘p’, then not p

Without argumentation or even discussion, Tarski implies that these strange “expressions” are English sentences.

[1] Alfred Tarski, The concept of truth in formalized languages, pp. 152–278, Logic, Semantics, Metamathematics, papers from 1923 to 1938, ed. John Corcoran, Hackett, Indianapolis 1983.

https://www.academia.edu/12525833/Sentence_Proposition_Judgment_Statement_and_Fact_Speaking_about_the_Written_English_Used_in_Logic
Reid, Constance. Hilbert (a Biography). Reviewed by Corcoran in Philosophy of Science 39 (1972), 106–08. Constance Reid was an insider of the Berkeley–Stanford logic circle. Her San Francisco home was in Ashbury Heights, near the homes of logicians such as Dana Scott and John Corcoran. Her sister Julia Robinson was one of the top mathematical logicians of her generation, as was Julia’s husband Raphael Robinson, for whom Robinson Arithmetic was named. Julia was a Tarski PhD and, in recognition of a distinguished career, was elected President of the American Mathematical Society. https://en.wikipedia.org/wiki/Julia_Robinson http://www.awm-math.org/noetherbrochure/Robinson82.html
In his essay ‘Wang’s Paradox’, Crispin Wright proposes a solution to the Sorites Paradox (in particular, the form of it he calls the ‘Paradox of Sharp Boundaries’) that involves adopting intuitionistic logic when reasoning with vague predicates. He does not give a semantic theory which accounts for the validity of intuitionistic logic (and the invalidity of stronger logics) in that area. The present essay tentatively makes good the deficiency. By applying a theorem of Tarski, it shows that intuitionistic logic is the strongest logic that may be applied, given certain semantic assumptions about vague predicates. The essay ends with an inconclusive discussion of whether those semantic assumptions should be accepted.
Formalizing Euclid’s first axiom. Bulletin of Symbolic Logic 20 (2014) 404–5. (Coauthor: Daniel Novotný)

Euclid [fl. 300 BCE] divides his basic principles into what came to be called ‘postulates’ and ‘axioms’—two words that are synonyms today but which are commonly used to translate Greek words meant by Euclid as contrasting terms.

Euclid’s postulates are specifically geometric: they concern geometric magnitudes, shapes, figures, etc.—nothing else. The first: “to draw a line from any point to any point”; the last: the parallel postulate.

Euclid’s axioms are general principles of magnitude: they concern geometric magnitudes and magnitudes of other kinds as well, even numbers. The first is often translated “Things that equal the same thing equal one another”.

There are other differences that are or might become important.

Aristotle [fl. 350 BCE] meticulously separated his basic principles [archai, singular archê] according to subject matter: geometrical, arithmetic, astronomical, etc. However, he made no distinction that can be assimilated to Euclid’s postulate/axiom distinction.

Today we divide basic principles into non-logical [topic-specific] and logical [topic-neutral], but this too is not the same as Euclid’s distinction. In this regard it is important to be cognizant of the difference between equality and identity—a distinction often crudely ignored by modern logicians. Tarski is a rare exception. The four angles of a rectangle are equal to—not identical to—one another; the size of one angle of a rectangle is identical to the size of any other of its angles. No two angles are identical to each other.

The sentence ‘Things that equal the same thing equal one another’ contains no occurrence of the word ‘magnitude’. This paper considers the problem of formalizing the proposition Euclid intended as a principle of magnitudes while being faithful to its logical form and to its information content.
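To make the formalization problem concrete, here is one standard first-pass rendering of Euclid's first axiom; it is an illustrative sketch, not the formalization the paper itself proposes.

```latex
% "Things that equal the same thing equal one another", with the
% variables x, y, z ranging over magnitudes and \approx read as
% 'equals' (equality, not identity):
\[
  \forall x\,\forall y\,\forall z\,
  \bigl( (x \approx z \wedge y \approx z) \rightarrow x \approx y \bigr)
\]
% The difficulty the abstract raises: the English sentence contains
% no occurrence of 'magnitude', so restricting the variables' range
% to magnitudes is already a substantive interpretive choice.
```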
In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).
Mark Wilson argues that the standard categorizations of "Theory T thinking"—logic-centered conceptions of scientific organization (canonized via logical empiricists in the mid-twentieth century)—dampen the understanding and appreciation of the strategic subtleties working within science. By "Theory T thinking," we mean the simplistic methodology in which mathematical science allegedly supplies ‘processes’ that parallel nature's own in a tidily isomorphic fashion, wherein "Theory T’s" feigned rigor and methodological dogmas advance inadequate discrimination that fails to distinguish between explanatory structures that are architecturally distinct. One of Wilson's main goals is to reverse such premature exclusions, and thus early on Wilson returns to John Locke's original physical concerns regarding material science and the congeries of descriptive concerns involved in capturing the varied phenomena (i.e., cohesion, elasticity, fracture, and the transmission of coherent work) encountered amongst ordinary solids like wood and steel. Of course, Wilson methodologically updates such a purview by appealing to multiscalar techniques of modern computing, drawing from Robert Batterman's work on the greediness of scales and Jim Woodward's insights on causation.
CORCORAN REVIEWS THE 4 VOLUMES OF TARSKI’S COLLECTED PAPERS.
Prior Analytics by the Greek philosopher Aristotle (384–322 BCE) and Laws of Thought by the English mathematician George Boole (1815–1864) are the two most important surviving original logical works from before the advent of modern logic. This article has a single goal: to compare Aristotle’s system with the system that Boole constructed over twenty-two centuries later intending to extend and perfect what Aristotle had started. This comparison merits an article in itself. Accordingly, this article does not discuss many other historically and philosophically important aspects of Boole’s book, e.g. his confused attempt to apply differential calculus to logic, his misguided effort to make his system of ‘class logic’ serve as a kind of ‘truth-functional logic’, his now almost forgotten foray into probability theory, or his blindness to the fact that a truth-functional combination of equations that follows from a given truth-functional combination of equations need not follow truth-functionally. One of the main conclusions is that Boole’s contribution widened logic and changed its nature to such an extent that he fully deserves to share with Aristotle the status of being a founding figure in logic. By setting forth in clear and systematic fashion the basic methods for establishing validity and for establishing invalidity, Aristotle became the founder of logic as formal epistemology. By making the first unmistakable steps toward opening logic to the study of ‘laws of thought’—tautologies and laws such as excluded middle and non-contradiction—Boole became the founder of logic as formal ontology.
Thirteen meanings of 'implication' are described and compared. Among them are relations that have been called: logical implication, material implication, deductive implication, formal implication, enthymemic implication, and factual implication. In a given context, implication is the homogeneous two-place relation expressed by the relation verb 'implies'. For heuristic and expository reasons this article skirts many crucial issues, including use-mention, the nature of the entities that imply and are implied, and the processes by which knowledge of these relations is achieved. This paper is better thought of as an early stage of a dialogue than as a definitive treatise.
Information-theoretic approaches to formal logic analyze the "common intuitive" concepts of implication, consequence, and validity in terms of information content of propositions and sets of propositions: one given proposition implies a second if the former contains all of the information contained by the latter; one given proposition is a consequence of a second if the latter contains all of the information contained by the former; an argument is valid if the conclusion contains no information beyond that of the premise-set. This paper locates information-theoretic approaches historically, philosophically, and pragmatically. Advantages and disadvantages are identified by examining such approaches in themselves and by contrasting them with standard transformation-theoretic approaches. Transformation-theoretic approaches analyze validity (and thus implication) in terms of transformations that map one argument onto another: a given argument is valid if no transformation carries it onto an argument with all true premises and false conclusion. Model-theoretic, set-theoretic, and substitution-theoretic approaches, which dominate current literature, can be construed as transformation-theoretic, as can the so-called possible-worlds approaches. Ontic and epistemic presuppositions of both types of approaches are considered. Attention is given to the question of whether our historically cumulative experience applying logic is better explained from a purely information-theoretic perspective or from a purely transformation-theoretic perspective or whether apparent conflicts between the two types of approaches need to be reconciled in order to forge a new type of approach that recognizes their basic complementarity.
It is one thing for a given proposition to follow or to not follow from a given set of propositions and it is quite another thing for it to be shown either that the given proposition follows or that it does not follow. Using a formal deduction to show that a conclusion follows and using a countermodel to show that a conclusion does not follow are both traditional practices recognized by Aristotle and used down through the history of logic. These practices presuppose, respectively, a criterion of validity and a criterion of invalidity, each of which has been extended and refined by modern logicians: deductions are studied in formal syntax (proof theory) and countermodels are studied in formal semantics (model theory). The purpose of this paper is to compare these two criteria to the corresponding criteria employed in Boole’s first logical work, The Mathematical Analysis of Logic (1847). In particular, this paper presents a detailed study of the relevant metalogical passages and an analysis of Boole’s symbolic derivations. It is well known, of course, that Boole’s logical analysis of compound terms (involving ‘not’, ‘and’, ‘or’, ‘except’, etc.) contributed to the enlargement of the class of propositions and arguments formally treatable in logic. The present study shows, in addition, that Boole made significant contributions to the study of deductive reasoning. He identified the role of logical axioms (as opposed to inference rules) in formal deductions, and he conceived of the idea of an axiomatic deductive system (which yields logical truths by itself and which yields consequences when applied to arbitrary premises). Nevertheless, surprisingly, Boole’s attempt to implement his idea of an axiomatic deductive system involved striking omissions: Boole does not use his own formal deductions to establish validity.
Boole does give symbolic derivations, several of which are vitiated by “Boole’s Solutions Fallacy”: the fallacy of supposing that a solution to an equation is necessarily a logical consequence of the equation. This fallacy seems to have led Boole to confuse equational calculi (i.e., methods for generating solutions) with deduction procedures (i.e., methods for generating consequences). The methodological confusion is closely related to the fact, shown in detail below, that Boole had adopted an unsound criterion of validity. It is also shown that Boole totally ignored the countermodel criterion of invalidity. Careful examination of the text does not reveal with certainty a test for invalidity which was adopted by Boole. However, we have isolated a test that he seems to use in this way and we show that this test is ineffectual in the sense that it does not serve to identify invalid arguments. We go beyond the simple goal stated above. Besides comparing Boole’s earliest criteria of validity and invalidity with those traditionally (and still generally) employed, this paper also investigates the framework and details of The Mathematical Analysis of Logic.
This review concludes that if the authors know what mathematical logic is, they have not shared their knowledge with the readers. This highly praised book is replete with errors and incoherence.
Corcoran, J. 2007. Psychologism. American Philosophy: an Encyclopedia. Eds. John Lachs and Robert Talisse. New York: Routledge. Pages 628-9. Psychologism with respect to a given branch of knowledge, in the broadest neutral sense, is the view that the branch is ultimately reducible to, or at least is essentially dependent on, psychology. The parallel with logicism is incomplete. Logicism with respect to a given branch of knowledge is the view that the branch is ultimately reducible to logic. Every branch of knowledge depends on logic. Psychologism is found in several fields including history, political science, economics, ethics, epistemology, linguistics, aesthetics, mathematics, and logic. Logicism is found mainly in branches of mathematics: number theory, analysis, and, more rarely, geometry. Although the ambiguous term ‘psychologism’ has senses with entirely descriptive connotations, it is widely used in senses that are derogatory, usually by people who disapprove of psychologism. No writers with any appreciation of this point will label their own views psychologistic. The term ‘scientism’ is similar in that it too has both pejorative and descriptive senses, but its descriptive senses are rarely used any more. It is almost a law of linguistics that negative connotations tend to drive out the neutral and the positive. Dictionaries sometimes mark both words with a usage label such as “Usually disparaging”. In this article, the word is used descriptively, mainly because there are many psychologistic views that are perfectly respectable and even endorsed by people who would be offended to have their views labeled psychologism. A person who subscribes to logicism is called a logicist, but there is no standard word for a person who subscribes to psychologism. ‘Psychologist’, though it occurs in this sense, is not suitable. ‘Psychologician’, with stress on the second syllable as in ‘psychologist’, has been proposed.
In the last century, some of the most prominent forms of psychologism pertained to logic; the rest of this article treats only such forms. Psychologism in logic is very “natural”. After all, logic studies reasoning, which is done by the mind, whose nature and functioning are studied in psychology—using the word ‘psychology’ in its broadest etymological sense.
Corcoran, J. 2007. Syntactics. American Philosophy: an Encyclopedia. Eds. John Lachs and Robert Talisse. New York: Routledge. pp. 745-6. Syntactics, semantics, and pragmatics are the three levels of investigation into semiotics, the comprehensive study of systems of communication, as described in 1938 by the American philosopher Charles Morris (1903-1979). Syntactics studies signs themselves and their interrelations in abstraction from their meanings and from their uses and users. Semantics studies signs in relation to their meanings, but still in abstraction from their uses and users. Pragmatics studies signs as meaningful entities used in various ways by humans. Taking current written English as the system of communication under investigation, it is a matter of syntactics that the two four-character strings ‘tact’ and ‘tics’ both occur in the ten-character string ‘syntactics’. It is a matter of semantics that the ten-character string ‘syntactics’ has only one sense and, in that sense, denotes a branch of semiotics. It is a matter of pragmatics that the ten-character string ‘syntactics’ was not used as an English word before 1937 and that it is sometimes confused with the much older six-character string ‘syntax’. Syntactics is the simplest and most abstract branch of semiotics. At the same time, it is the most basic: pragmatics presupposes semantics and syntactics; semantics presupposes syntactics. The basic terms of syntactics include ‘character’ (alphabetic letters, numeric digits, and punctuation marks), ‘string’ (a sign composed of a concatenation of characters), and ‘occur’ (as ‘t’ and ‘c’ both occur twice in ‘syntactics’). However, perhaps the most basic terms of syntactics are ‘type’ and ‘token’ in the senses introduced by Charles Sanders Peirce (1839-1914), America’s greatest logician, who could be considered the grandfather of syntactics, if not the father. These are explained below.
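The purely syntactic notions the entry illustrates, one string occurring in another, and a character occurring a certain number of times, can be sketched in a few lines of Python. The helper names below are my own, not terminology from Morris or the encyclopedia entry.

```python
def occurs_in(part: str, whole: str) -> bool:
    """A string occurs in another iff it appears as a contiguous substring."""
    return part in whole

def occurrence_count(char: str, string: str) -> int:
    """How many times a one-character string occurs in a string."""
    return string.count(char)

# The examples from the entry: 'tact' and 'tics' both occur in the
# ten-character string 'syntactics', and 't' and 'c' each occur twice in it.
assert occurs_in("tact", "syntactics")
assert occurs_in("tics", "syntactics")
assert occurrence_count("t", "syntactics") == 2
assert occurrence_count("c", "syntactics") == 2
```

Note that these checks involve only the strings themselves, never their meanings or uses, which is exactly what distinguishes syntactics from semantics and pragmatics.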
One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that all consistent formal systems containing arithmetic include sentences that can neither be proved nor disproved within the system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantical theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted.

Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I will argue that this problem is best approached once we think of mathematics as a full human phenomenon, and not just as consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantical truth mirrors this relation accurately.
However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that, while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective. Against this, I argue that all truly deflationist philosophical theories lead to the arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for a reference of pre-formal mathematics, and the expansion of Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to change the need for Tarskian truth.

The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. Of the alternative approaches, in this work I focus mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleenean logic will also be shown to fail in a similar fashion.
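The Shapiro–Ketland line of argument mentioned in this abstract can be sketched schematically. The notation below is mine, simplifying the published arguments: G is the Gödel sentence of Peano Arithmetic (PA), and CT is a compositional Tarskian truth theory with induction extended to the truth predicate.

```latex
% Goedel's first incompleteness theorem: G is undecidable in PA
% (assuming PA is consistent):
\[
  \mathrm{PA} \nvdash G, \qquad \mathrm{PA} \nvdash \lnot G .
\]
% Expanding PA with Tarskian truth proves the global reflection
% principle, hence the consistency of PA, hence G itself:
\[
  \mathrm{PA} + \mathrm{CT} \vdash \forall \varphi \,\bigl( \mathrm{Pr}_{\mathrm{PA}}(\varphi) \rightarrow T(\varphi) \bigr),
  \qquad
  \mathrm{PA} + \mathrm{CT} \vdash \mathrm{Con}(\mathrm{PA}),
  \qquad
  \mathrm{PA} + \mathrm{CT} \vdash G .
\]
```

This is the sense in which the truth-theoretic expansion "establishes" the Gödel sentence, and it is what Tennant's soundness-principle alternative is meant to achieve without a substantial truth predicate.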
In this manuscript, published here for the first time, Tarski explores the concept of logical notion. He draws on Klein's Erlanger Programm to locate the logical notions of ordinary geometry as those invariant under all transformations of space. Generalizing, he explicates the concept of logical notion of an arbitrary discipline.
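The generalized criterion Tarski arrives at in this manuscript is usually reconstructed as permutation invariance; the formulation and notation below are mine, not a quotation from the text.

```latex
% Tarski's invariance criterion, as standardly reconstructed:
% a notion N over a domain D is logical iff it is preserved by
% every one-one transformation of D onto itself.
\[
  N \text{ is logical over } D
  \iff
  f[N] = N \ \text{ for every bijection } f\colon D \to D .
\]
```

Klein's Erlanger Programm classifies geometries by their invariance groups; Tarski's move is to take the limiting case, invariance under all bijections, as the mark of the logical.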
This book treats ancient logic: the logic that originated in Greece with Aristotle and the Stoics, mainly in the hundred-year period beginning about 350 BCE. Ancient logic was never completely ignored by modern logic from its Boolean origin in the middle 1800s: it was prominent in Boole’s writings and it was mentioned by Frege and by Hilbert. Nevertheless, the first century of mathematical logic did not take it seriously enough to study the ancient logic texts. A renaissance in ancient logic studies occurred in the early 1950s with the publication of the landmark Aristotle’s Syllogistic by Jan Łukasiewicz, Oxford UP 1951, 2nd ed. 1957. Despite its title, it treats the logic of the Stoics as well as that of Aristotle. Łukasiewicz was a distinguished mathematical logician. He had created many-valued logic and the parenthesis-free prefix notation known as Polish notation. He co-authored with Alfred Tarski an important paper on the metatheory of propositional logic, and he was one of Tarski’s three main teachers at the University of Warsaw. Łukasiewicz’s stature was just short of that of the giants: Aristotle, Boole, Frege, Tarski, and Gödel. No mathematical logician of his caliber had ever before quoted the actual teachings of ancient logicians.

Not only did Łukasiewicz inject fresh hypotheses, new concepts, and imaginative modern perspectives into the field; his enormous prestige and that of the Warsaw School of Logic reflected on the whole field of ancient logic studies. Suddenly, this previously somewhat dormant and obscure field became active and gained in respectability and importance in the eyes of logicians, mathematicians, linguists, analytic philosophers, and historians. Next to Aristotle himself and perhaps the Stoic logician Chrysippus, Łukasiewicz is the most prominent figure in ancient logic studies. A huge literature traces its origins to Łukasiewicz.
This book, Ancient Logic and Its Modern Interpretations, is based on the 1973 Buffalo Symposium on Modernist Interpretations of Ancient Logic, the first conference devoted entirely to critical assessment of the state of ancient logic studies.
It might come as a surprise to someone who has only a superficial knowledge of Donald Davidson’s philosophy that he has claimed literary language to be ‘a prime test of the adequacy of any view on the nature of language’.1 The claim, however, captures well the transformation that has happened in Davidson’s thinking on language since he began in the 1960s to develop a truth-conditional semantic theory for natural languages along the lines of Alfred Tarski’s semantic conception of truth. About twenty years afterwards, this project was replaced with a view that highlights the flexible nature of language and, in consequence, the importance of the speaker’s intentions for a theory of meaning, culminating in Davidson’s staggering claim that ‘there is no such thing as a language’.
Theories of truthmaking have been introduced quite recently in epistemology. Having little to do with truth serums or truth drugs, their concern is to define truth in terms of a certain relation between truthbearers and truthmakers. These theories attempt to remedy what is supposed to be lacking in classical theories of truth, especially in Alfred Tarski’s semantic theory.
Cosmic Justice Hypotheses. This applied-logic lecture builds on [1], arguing that character traits fostered by logic serve clarity and understanding in ethics, confirming hopeful views of Alfred Tarski [2, Preface, and personal communication]. Hypotheses, in one strict usage, are propositions not known to be true and not known to be false or, more loosely, propositions so considered for discussion purposes [1, p. 38]. Logic studies hypotheses by determining their implications (propositions they imply) and their implicants (propositions that imply them). Logic also studies hypotheses by seeing how variations affect implications and implicants. People versed in logical methods are more inclined to enjoy working with hypotheses and less inclined to dismiss them or to accept them without sufficient evidence. Cosmic Justice Hypotheses (CJHs), such as “in the fullness of time every act will be rewarded or punished in exact proportion to its goodness or badness”, have been entertained by intelligent thinkers. Absolute CJHs (ACJHs) imply that it is pointless to make sacrifices, make pilgrimages, or ask divine forgiveness: once acts are done, doers must ready themselves for the inevitable payback, since the cosmos works inexorably toward justice. Ceteris Paribus CJHs (CPCJHs), on the other hand, such as “in the fullness of time every act will be rewarded or punished in exact proportion to its goodness or badness—other things being equal”, leave room for exceptions. For example, some people subscribing to Ceteris Paribus CJHs think that certain bad acts can be performed with impunity as long as certain procedures are carried out before, simultaneously with, or even after the acts. Belief in Ceteris Paribus CJHs has been exploited by unscrupulous “spiritual leaders” who claim the power to grant exceptions. In opposition to belief in CPCJHs are CJHs that hold belief in CPCJHs to be inherently wrong and subject to punishment.
Other variants of CJHs are Cumulative Cosmic Justice Hypotheses, such as “in the fullness of time every person will be rewarded or punished in exact proportion to the net goodness or badness of their acts”. Still other variants include the Hereditary Cumulative Cosmic Justice Hypotheses, such as “in the fullness of time every person will be rewarded or punished in exact proportion to the net goodness or badness of their ancestors’ acts”. [1] John Corcoran, Inseparability of Logic and Ethics, Free Inquiry, S. 1989, pp. 37-40. [2] Alfred Tarski, Introduction to Logic, Dover, 1995.
The aim of this dissertation is to offer and defend a correspondence theory of truth. I begin by critically examining the coherence, pragmatic, simple, redundancy, disquotational, minimal, and prosentential theories of truth. Special attention is paid to several versions of disquotationalism, whose plausibility has led to its fairly constant support since the pioneering work of Alfred Tarski, through that of W. V. Quine, and recently in the work of Paul Horwich. I argue that none of these theories meets the correspondence intuition---that a true sentence or proposition in some way corresponds to reality---despite the explicit claims by each to capture this intuition. I distinguish six versions of the correspondence theory, defend two against traditional objections standardly taken as decisive against them, and show, plainly, that these two theories capture the correspondence intuition. Due to the importance of meeting this intuition, only these two theories stand a chance of being a satisfactory theory of truth. I argue that the version of the correspondence theory incorporating a simple semantic representation relation is preferable to its rival, for which the representation relation is complex. I present and argue for a novel version of this correspondence theory according to which truth is a correspondence property sensitive to semantic context. One consequence of this context-sensitivity is that an ungrounded sentence does not express a proposition. In addition to accounting for the similarity between the Liar and Truth-Teller sentences, this theory of truth is immune to the Liar Paradox, including empirical versions. It is argued that the Liar Paradox is devastating to all of the other theories above, and even to formal theories of truth designed to solve it, such as the revision and vagueness theories.
Customized versions of the Liar Paradox besetting this theory are handled by its context-sensitivity, and by enforcing the distinction between truth and truth value. This same pair of considerations also yields solutions to Löb's Paradox and Grelling's Paradox. Arguments similar to those given in defense of this correspondence theory show that, with one minor alteration, Kripke's fixed point theory may be used to model this correspondence notion of truth.
In Heidegger’s Being and Time certain concepts are discussed which are central to the ontological constitution of Dasein. This paper demonstrates the interesting manner in which some of these concepts can be used in a reading of T.S. Eliot’s The Love Song of J. Alfred Prufrock. A comparative analysis is performed, explicating the relevant Heideggerian terms and then relating them to Eliot’s poem. In this way strong parallels are revealed between the two men’s respective thoughts and distinct modernist sensibilities. Prufrock, the protagonist of the poem, and the world he inhabits illustrate poetically concepts such as authenticity, inauthenticity, the ‘they’, idle talk and angst, which Heidegger develops in Being and Time.
Continuing Franz Boas' work to establish anthropology as an academic discipline in the US at the turn of the twentieth century, Alfred L. Kroeber re-defined culture as a phenomenon sui generis. To achieve this he asked geneticists to enter into a coalition against hereditarian thoughts prevalent at that time in the US. The goal was to create space for anthropology as a separate discipline within academia, distinct from other disciplines. To this end he crossed the boundary separating anthropology from biology in order to secure the boundary. His notion of culture, closely bound to the concept of heredity, saw it as independent of biological heredity (culture as superorganic) but at the same time as a heredity of another sort. The paper intends to summarise the shifting boundaries of anthropology at the beginning of the twentieth century, and to present Kroeber's ideas on culture, with a focus on how the changing landscape of concepts of heredity influenced his views. The historical case serves to illustrate two general conclusions: that the concept of culture played and plays different roles in explaining human existence; and that genetics and the concept of Weismannian hard inheritance did not have an unambiguous unidirectional historical effect on the vogue for hereditarianism at that time; on the contrary, it helped to establish culture in Kroeber's sense, culture as independent of heredity.
Tarski’s pioneering work on truth has been thought by some to motivate a robust, correspondence-style theory of truth, and by others to motivate a deflationary attitude toward truth. I argue that Tarski’s work suggests neither; if it motivates any contemporary theory of truth, it motivates conceptual primitivism, the view that truth is a fundamental, indefinable concept. After outlining conceptual primitivism and Tarski’s theory of truth, I show how the two approaches to truth share much in common. While Tarski does not explicitly accept primitivism, the view is open to him, and fits better with his formal work on truth than do correspondence or deflationary theories. Primitivists, in turn, may rely on Tarski’s insights in motivating their own perspective on truth. I conclude by showing how viewing Tarski through the primitivist lens provides a fresh response to some familiar charges from Putnam and Etchemendy.
This paper describes Tarski’s project of rehabilitating the notion of truth, previously considered dubious by many philosophers. The project was realized by providing a formal truth definition, which does not employ any problematic concept.
Through a rereading of Alfred Schütz's essay "La ejecución musical conjunta" ("Making Music Together"), this article addresses the idea of intersubjectivity as attunement in social relations, exploring the elements the author identifies, such as the temporal dimension, the face-to-face relation, and synchronization with the Other. This opens a way of understanding intersubjective processes from the perspective of the recognition of alterity (Levinas) as constitutive of intersubjectivity, and raises the question of whether it is possible, in contexts of exclusion, to move toward attunement between society and the State.
Homo deceptus is a book that brings together new ideas on language, consciousness and physics into a comprehensive theory that unifies science and philosophy in a different kind of Theory of Everything. The subject of how we are to make sense of the world is addressed in a structured and ordered manner, which starts with a recognition that scientific truths are constructed within a linguistic framework. The author argues that an epistemic foundation of natural language must be understood before laying claim to any notion of reality. This foundation begins with Ludwig Wittgenstein’s Tractatus Logico-Philosophicus and the relationship of language to formal logic. Ultimately, we arrive at an answer to the question of why people believe the things they do. This is effectively a modification of Alfred Tarski’s semantic theory of truth. The second major issue addressed is the ‘dreaded’ Hard Problem of Consciousness as first stated by David Chalmers in 1995. The solution is found in the unification of consciousness, information theory and notions of physicalism. The physical world is shown to be an isomorphic representation of the phenomenological conscious experience. New concepts in understanding how language operates help to explain why this relationship has been so difficult to appreciate. The inclusion of concepts from information theory shows how a digital mechanics resolves heretofore conflicting theories in physics, cognitive science and linguistics. Scientific orthodoxy is supported, but viewed in a different light. Mainstream science is not challenged, but findings are interpreted in a manner that unifies consciousness without contradiction. Digital mechanics and formal systems of logic play central roles in combining language, consciousness and the physical world into a unified theory where all can be understood within a single consistent framework.
People often talk to others about their personal past. These discussions are inherently selective. Selective retrieval of memories in the course of a conversation may induce forgetting of unmentioned but related memories for both speakers and listeners (Cuc, Koppel, & Hirst, 2007). Cuc et al. (2007) defined the forgetting on the part of the speaker as within-individual retrieval-induced forgetting (WI-RIF) and the forgetting on the part of the listener as socially shared retrieval-induced forgetting (SS-RIF). However, if the forgetting associated with WI-RIF and SS-RIF is to be taken seriously as a mechanism that shapes both individual and shared memories, this mechanism must be demonstrated with meaningful material and in ecologically valid groups. In our first 2 experiments we extended SS-RIF from unemotional, experimenter-contrived material to the emotional and unemotional autobiographical memories of strangers (Experiment 1) and intimate couples (Experiment 2) when merely overhearing the speaker selectively practice memories. We then extended these results to the context of a free-flowing conversation (Experiments 3 and 4). In all 4 experiments we found WI-RIF and SS-RIF regardless of the emotional valence or individual ownership of the memories. We discuss our findings in terms of the role of conversational silence in shaping both our personal and shared pasts.
The generalized conclusion of the Tarski and Gödel proofs, that all formal systems of greater expressive power than arithmetic necessarily have undecidable sentences, is not the immutable truth that Tarski made it out to be; it holds only on the basis of his starting assumptions.

When we reexamine these starting assumptions from the perspective of the philosophy of logic, we find that there are alternative ways that formal systems can be defined that make undecidability inexpressible in all of these formal systems.
Both Tarski and Gödel “prove” that provability can diverge from truth. When we boil their claim down to its simplest possible essence, it is really the claim that valid inference from true premises might not always derive a true consequence. This is obviously impossible.
In this paper the importance of Tarski's truth definition is evaluated as a productive resource for criticizing Nietzsche's nihilistic view and any pragmatic understanding of truth.
This book offers a theoretical investigation into the general problem of reality as a multiplicity of ‘finite provinces of meaning’, as developed in the work of Alfred Schutz. A critical introduction to Schutz’s sociology of multiple realities as well as a sympathetic re-reading and reconstruction of his project, Experiencing Multiple Realities traces the genesis and implications of this concept in Schutz’s writings before presenting an analysis of various ways in which it can shed light on major sociological problems, such as social action, social time, social space, identity, or narrativity.