This paper considers the conditions under which the method of diagrams can accommodate non-classical logics, in particular the spatial representation of non-bivalent negation. It does so with two purposes in mind: to review the main concepts involved in the definition of logical negation, and to explain the epistemological obstacles to the introduction of non-classical negations within diagrammatic logic.
It is argued, on the basis of ideas derived from Wittgenstein's Tractatus and Husserl's Logical Investigations, that the formal comprehends more than the logical. More specifically: that there exist certain formal-ontological constants (part, whole, overlapping, etc.) which do not fall within the province of logic. A two-dimensional directly depicting language is developed for the representation of the constants of formal ontology, and means are provided for the extension of this language to enable the representation of certain materially necessary relations. The paper concludes with a discussion of the relationship between formal logic, formal ontology, and mathematics.
A Mathematical Review by John Corcoran, SUNY/Buffalo

Macbeth, Danielle. Diagrammatic reasoning in Frege's Begriffsschrift. Synthese 186 (2012), no. 1, 289–314.

ABSTRACT This review begins with two quotations from the paper: its abstract and the first paragraph of the conclusion. The point of the quotations is to make clear by the “give-them-enough-rope” strategy how murky, incompetent, and badly written the paper is. I know I am asking a lot, but I have to ask you to read the quoted passages—aloud if possible. Don’t miss the silly attempt to recycle Kant’s quip “Concepts without intuitions are empty; intuitions without concepts are blind”. What the paper was aiming at includes the absurdity: “Proofs without definitions are empty; definitions without proofs are, if not blind, then dumb.” But the author even bollixed this. The editor didn’t even notice. The copy-editor missed it. And the author’s proof-reading did not catch it. In order not to torment you I will quote the sentence as it appears: “In a slogan: proofs without definitions are empty, merely the aimless manipulation of signs according to rules; and definitions without proofs are, if no blind, then dumb.” [sic] The rest of my review discusses the paper’s astounding misattribution to contemporary logicians of the information-theoretic approach. This approach was cruelly trashed by Quine in his 1970 Philosophy of Logic, and thereafter ignored by every text I know of. The paper under review attributes generally to modern philosophers and logicians views that were never espoused by any of the prominent logicians—such as Hilbert, Gödel, Tarski, Church, and Quine—apparently in an attempt to distance them from Frege: the focus of the article. On page 310 we find the following paragraph. “In our logics it is assumed that inference potential is given by truth-conditions. Hence, we think, deduction can be nothing more than a matter of making explicit information that is already contained in one’s premises. If the deduction is valid then the information contained in the conclusion must be contained already in the premises; if that information is not contained already in the premises […], then the argument cannot be valid.” Although the paper is meticulous in citing supporting literature for less questionable points, no references are given for this. In fact, the view that deduction is the making explicit of information that is only implicit in premises has not been espoused in any standard symbolic logic book. It has only recently been articulated by a small number of philosophical logicians of a younger generation, for example, in the prize-winning essay by J. Sagüillo, Methodological practice and complementary concepts of logical consequence: Tarski’s model-theoretic consequence and Corcoran’s information-theoretic consequence, History and Philosophy of Logic, 30 (2009), pp. 21–48. The paper omits definitions of key terms including ‘ampliative’, ‘explicatory’, ‘inference potential’, ‘truth-condition’, and ‘information’. The definition of prime number on page 292 is as follows: “To say that a number is prime is to say that it is not divisible without remainder by another number”. This would make one the only prime number. The paper being reviewed had the benefit of two anonymous referees who contributed “very helpful comments on an earlier draft”. Could these anonymous referees have read the paper?

J. Corcoran, U of Buffalo, SUNY

PS By the way, if anyone has a paper that has been turned down by other journals, any journal that would publish something like this might be worth trying.
This book is written for those who wish to learn some basic principles of formal logic but, more importantly, to learn some easy methods for unpicking arguments and assessing their truth and validity. The first section explains the ideas behind traditional logic, which was formed well over two thousand years ago by the ancient Greeks. Terms such as ‘categorical syllogism’, ‘premise’, ‘deduction’ and ‘validity’ may appear at first sight to be inscrutable, but will easily be understood with examples bringing the subjects to life. Traditionally, Venn diagrams have been employed to test arguments. These are very useful, but their application is limited and they are not open to quantification. The mid-section of this book introduces a methodology that makes the analysis of arguments accessible with the use of a new form of diagram, modified from those of the mathematician Leonhard Euler. These new diagrammatic methods will be employed to demonstrate an addition to the basic form of syllogism, including a refined definition of the terms ‘most’ and ‘some’ within propositions. This may seem a little obscure at the moment, but one will readily apprehend these new methods and principles of a more modern logic.
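The kind of diagrammatic test described here can also be mimicked computationally. The sketch below is a purely illustrative aside rather than anything from the book: it brute-forces every interpretation of a syllogism's three terms over a small universe, which is essentially what shading regions of a Venn diagram verifies at a glance. The names `valid`, `barbara` and `und_mid` are invented for the example.

```python
from itertools import combinations

def subsets(universe):
    """All subsets of a small universe, as frozensets."""
    return [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

def all_are(a, b):
    """'All A are B' holds iff A is a subset of B."""
    return a <= b

def valid(form, universe=(0, 1, 2)):
    """Check a syllogistic form in every model over the universe."""
    subs = subsets(universe)
    for s in subs:
        for m in subs:
            for p in subs:
                premises, conclusion = form(s, m, p)
                if all(premises) and not conclusion:
                    return False  # countermodel found
    return True

# Barbara: All M are P, All S are M / All S are P -- valid
barbara = lambda s, m, p: ([all_are(m, p), all_are(s, m)], all_are(s, p))
# Undistributed middle: All P are M, All S are M / All S are P -- invalid
und_mid = lambda s, m, p: ([all_are(p, m), all_are(s, m)], all_are(s, p))

print(valid(barbara))   # True
print(valid(und_mid))   # False
```

A failed check only needs one countermodel, so a three-element universe suffices to refute the undistributed-middle form; an exhaustive check over small universes is a refutation procedure rather than a general proof of validity.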
Charles Peirce's diagrammatic logic — the Existential Graphs — is presented as a tool for illuminating how we know necessity, in answer to Benacerraf's famous challenge that most ‘semantics for mathematics’ do not ‘fit an acceptable epistemology’. It is suggested that necessary reasoning is in essence a recognition that a certain structure has the particular structure that it has. This means that, contra Hume and his contemporary heirs, necessity is observable. One just needs to pay attention, not merely to individual things but to how those things are related in larger structures, certain aspects of which relations force certain other aspects to be a certain way.
Recent work in formal philosophy has concentrated overwhelmingly on the logical problems pertaining to epistemic shortfall - which is to say on the various ways in which partial and sometimes incorrect information may be stored and processed. A directly depicting language, in contrast, would reflect a condition of epistemic perfection. It would enable us to construct representations not of our knowledge but of the structures of reality itself, in much the way that chemical diagrams allow the representation (at a certain level of abstractness) of the structures of molecules of different sorts. A diagram of such a language would be true if that which it sets out to depict exists in reality, i.e. if the structural relations between the names (and other bits and pieces in the diagram) map structural relations among the corresponding objects in the world. Otherwise it would be false. All of this should, of course, be perfectly familiar. (See, for example, Aristotle, Metaphysics, 1027 b 22, 1051 b 32ff.) The present paper seeks to go further than its predecessors, however, in offering a detailed account of the syntax of a working universal characteristic and of the ways in which it might be used.
The present dissertation examines Carrollian logic through a reconstruction of its syllogistic theory. Lewis Carroll was one of the figures chiefly responsible for the dissemination of logic during the nineteenth century, but most of his logical writings remained unknown until a posthumous publication of 1977. The reconstruction of the Carrollian syllogistic theory is based on a comparison of the author's two books on logic, "The Game of Logic" and "Symbolic Logic". The analysis of Carrollian syllogistics starts from a study of the historical context in which logic developed and of the developments of syllogistics prior to the author's contribution. Situated in the historical period of algebraic logic, Carrollian syllogistics is characterized as a conservative extension of Aristotelian syllogistics; its main innovation is the use of negative terms and the introduction of a diagrammatic method suitable for representing them. The diagrammatic method of Carrollian syllogistics presents advances over the methods of Euler and Venn. The use of negative terms also requires a redefinition of the notion of syllogism, simplifying and expanding the range of arguments amenable to logical treatment. Carroll uses not four but only three categorical propositions in his syllogistic, with an interpretation of existential presuppositions congruent with a syntactic-existential reading. Carrollian syllogistics employs some techniques found in the work of the algebraists of logic, and also makes the same confusions between the notions of "class" and "member" that were common in the period. Convinced of the social utility of logic and dedicated to popularizing it, Carroll prioritized the creation of new didactics for the teaching of logic in his works, which include his diagrammatic method of solving syllogisms. Carroll made only scant remarks on his own conception of logic. Based on the scattered remarks found throughout his work and on his constant claim of the social utility of logic, it is suggested that Carroll is close to the so-called pragmatic position, which considers logic an instrument for the regulation of discourse.
Logicians commonly speak in a relatively undifferentiated way about pre-Euler diagrams. The thesis of this paper, however, is that there were three periods in the early modern era in which Euler-type diagrams (line diagrams as well as circle diagrams) were expansively used. Expansive periods are characterized by continuity, and regressive periods by discontinuity: while an ongoing awareness of the use of Euler-type diagrams obtained within an expansive period, after a subsequent phase of regression the entire knowledge about the systematic application and the history of Euler-type diagrams was lost. I will argue that the first expansive period lasted from Vives (1531) to Alsted (1614). The second period began around 1660 with Weigel and ended in 1712 with Lange. The third period of expansion started around 1760 with the works of Ploucquet, Euler and Lambert. Finally, it is shown that Euler-type diagrams became popular in the debate about intuition which took place in the 1790s between Leibnizians and Kantians. The article is thus limited to the historical periodization between 1530 and 1800.
In the visual representation of ontologies, in particular of part-whole relationships, it is customary to use graph theory as the representational background. We claim here that the standard graph-based approach has a number of limitations, and we propose instead a new representation of part-whole structures for ontologies, and describe the results of experiments designed to show the effectiveness of this new proposal, especially as concerns reduction of visual complexity. The proposal is developed to serve visualization of ontologies conformant to the Basic Formal Ontology, but it can be used also for more general applications, particularly in the biomedical domain.
The discussions which follow rest on a distinction, first expounded by Husserl, between formal logic and formal ontology. The former concerns itself with (formal) meaning-structures; the latter with formal structures amongst objects and their parts. The paper attempts to show how, when formal ontological considerations are brought into play, contemporary extensionalist theories of part and whole, and above all the mereology of Leśniewski, can be generalised to embrace not only relations between concrete objects and object-pieces, but also relations between what we shall call dependent parts or moments. A two-dimensional formal language is canvassed for the resultant ontological theory, a language which owes more to the tradition of Euler, Boole and Venn than to the quantifier-centred languages which have predominated amongst analytic philosophers since the time of Frege and Russell. Analytic philosophical arguments against moments, and against the entire project of a formal ontology, are considered and rejected. The paper concludes with a brief account of some applications of the theory presented.
In recent years, academics and educators have begun to use software mapping tools for a number of education-related purposes. Typically, the tools are used to help impart critical and analytical skills to students, to enable students to see relationships between concepts, and also as a method of assessment. The common feature of all these tools is the use of diagrammatic relationships of various kinds in preference to written or verbal descriptions. Pictures and structured diagrams are thought to be more comprehensible than just words, and a clearer way to illustrate understanding of complex topics. Variants of these tools are available under different names: “concept mapping”, “mind mapping” and “argument mapping”. Sometimes these terms are used synonymously. However, as this paper will demonstrate, there are clear differences in each of these mapping tools. This paper offers an outline of the various types of tool available and their advantages and disadvantages. It argues that the choice of mapping tool largely depends on the purpose or aim for which the tool is used and that the tools may well be converging to offer educators as yet unrealised and potentially complementary functions.
This paper argues that the theory of structured propositions is not undermined by the Russell-Myhill paradox. I develop a theory of structured propositions in which the Russell-Myhill paradox doesn't arise: the theory does not involve ramification or compromises to the underlying logic, but rather rejects common assumptions, encoded in the notation of the $\lambda$-calculus, about what properties and relations can be built. I argue that the structuralist had independent reasons to reject these underlying assumptions. The theory is given both a diagrammatic representation, and a logical representation in a novel language. In the latter half of the paper I turn to some technical questions concerning the treatment of quantification, and demonstrate various equivalences between the diagrammatic and logical representations, and a fragment of the $\lambda$-calculus.
Necessity is a touchstone issue in the thought of Charles Peirce, not least because his pragmatist account of meaning relies upon modal terms. We here offer an overview of Peirce’s highly original and multi-faceted take on the matter. We begin by considering how a self-avowed pragmatist and fallibilist can even talk about necessary truth. We then outline the source of Peirce’s theory of representation in his three categories of Firstness, Secondness and Thirdness (monadic, dyadic and triadic relations). These have modal purport insofar as the first category corresponds to possibility, the second to mechanical necessity and the third to a kind of semantic or intentional necessity. We then turn to Peirce’s explicit modal epistemology and show how it began as information-relative, with different modalities (e.g. logical, physical, practical) distinguished in terms of respective ‘designated states of information’, and shifted later in his life towards a more robust realism founded in direct perception of ideas in their relations. We then turn to Peirce’s formal logic, focusing on his diagrammatic system of Existential Graphs, where he did his most serious logical research. Finally we discuss Peirce’s modal metaphysics and its implications for determinism and realism about universals.
Reism or concretism are the labels for a position in ontology and semantics that is represented by various philosophers. As Kazimierz Ajdukiewicz and Jan Woleński have shown, there are two dimensions with which the abstract expression of reism can be made concrete: the ontological dimension of reism says that only things exist; the semantic dimension of reism says that all concepts must be reduced to concrete terms in order to be meaningful. In this paper we argue for the following two theses: (1) Arthur Schopenhauer advocated a reistic philosophy of language which says that all concepts must ultimately be based on concrete intuition in order to be meaningful. (2) In his semantics, Schopenhauer developed a theory of logic diagrams that can be interpreted by modern means in order to concretize the abstract position of reism. Thus we are not only enhancing Jan Woleński’s list of well-known reists, but we are also adding a diagrammatic dimension to concretism, represented by Schopenhauer.
The aim of this article is to investigate the roles of commutative diagrams (CDs) in a specific mathematical domain, and to unveil the reasons underlying their effectiveness as a mathematical notation; this will be done through a case study. It will be shown that CDs do not depict spatial relations, but represent mathematical structures. CDs will be interpreted as a hybrid notation that goes beyond the traditional bipartition of mathematical representations into diagrammatic and linguistic. It will be argued that one of the reasons why CDs form a good notation is that they are highly mathematically tractable: experts can obtain valid results by ‘calculating’ with CDs. These calculations take the form of ‘diagram chases’. In order to draw inferences, experts move algebraic elements around the diagrams. It will be argued that these diagrams are dynamic. It is thanks to their dynamicity that CDs can externalize the relevant reasoning and allow experts to draw conclusions directly by manipulating them. Lastly, it will be shown that CDs play essential roles in the context of proof as well as in other phases of the mathematical enterprise, such as discovery and conjecture formation.
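What a 'diagram chase' verifies can be shown with a toy example on finite sets: compose the maps along the two paths around a square and check that they agree on every element of the starting set. This is only a hedged illustration of what commutativity amounts to, not the category-theoretic notation the article analyses; the function names are invented for the example.

```python
def compose(f, g):
    """(f . g)(x) = f(g(x))"""
    return lambda x: f(g(x))

def commutes(path1, path2, domain):
    """Two composite paths around a diagram agree on the whole domain."""
    return all(path1(x) == path2(x) for x in domain)

# A square of maps between small sets of integers:
#        f
#   A ------> B
#   |         |
# h |         | g
#   v         v
#   C ------> D
#        k
f = lambda x: x + 1      # A -> B
g = lambda x: 2 * x      # B -> D
h = lambda x: 2 * x      # A -> C
k = lambda x: x + 2      # C -> D

A = [0, 1, 2]
print(commutes(compose(g, f), compose(k, h), A))  # True: 2*(x+1) == 2*x + 2
```

In an actual diagram chase the moved elements are abstract (elements of modules, say, with exactness assumptions doing the work), but the bookkeeping is the same: equality of composites along alternative paths.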
Commonsense reasoning is one of the main open problems in the field of Artificial Intelligence (AI), while, on the other hand, it seems to be a very intuitive and default reasoning mode in humans and other animals. In this talk, we discuss the different paradigms that have been developed in AI and Computational Cognitive Science to deal with this problem (ranging from logic-based methods to diagrammatic-based ones). In particular, we discuss - via two different case studies concerning commonsense categorization and knowledge invention tasks - how cognitively inspired heuristics can help (both in terms of efficiency and efficacy) in the realization of intelligent artificial systems able to reason in a human-like fashion, with results comparable to human-level performances.
Just before the Scientific Revolution, there was a "Mathematical Revolution", heavily based on geometrical and machine diagrams. The "faculty of imagination" (now called scientific visualization) was developed to allow 3D understanding of planetary motion, human anatomy and the workings of machines. 1543 saw the publication of the heavily geometrical work of Copernicus and Vesalius, as well as the first Italian translation of Euclid.
In recent years, semiotics has become an innovative theoretical framework in mathematics education. The purpose of this article is to show that semiotics can be used to explain learning as a process of experimenting with and communicating about one's own representations of mathematical problems. As a paradigmatic example, we apply a Peircean semiotic framework to answer the question of how students learned the concept of "distribution" in a statistics course by "diagrammatic reasoning" and by developing "hypostatic abstractions," that is, by forming new mathematical objects which can be used as means for communication and further reasoning. Peirce's semiotic terminology is used as an alternative for notions such as modeling, symbolizing, and reification. We will show that it is a precise instrument of analysis with regard to the complexity of learning and of communication in the mathematics classroom.
Charles S. Peirce’s semiotics uniquely divides signs into: i) symbols, which pick out their objects by arbitrary convention or habit, ii) indices, which pick out their objects by unmediated ‘pointing’, and iii) icons, which pick out their objects by resembling them (as Peirce put it: an icon’s parts are related in the same way that the objects represented by those parts are themselves related). Thus representing structure is one of the icon’s greatest strengths. It is argued that the implications of scaffolding education iconically are profound: for providing learners with a navigable road-map of a subject matter, for enabling them to see further connections of their own in what is taught, and for supporting meaningful active learning. Potential objections that iconic teaching is excessively entertaining and overly susceptible to misleading rhetorical manipulation are addressed.
In the graphical representation of ontologies, it is customary to use graph theory as the representational background. We claim here that the standard graph-based approach has a number of limitations. We focus on a problem in the graph-based representation of ontologies in complex domains such as biomedicine, engineering and manufacturing: the lack of mereotopological representation. Based on this limitation, we propose a diagrammatic way to represent an entity's structure and the various forms of mereotopological relationships between entities.
We argue that the extant evidence for Stoic logic provides all the elements required for a variable-free theory of multiple generality, including a number of remarkably modern features that straddle logic and semantics, such as the understanding of one- and two-place predicates as functions, the canonical formulation of universals as quantified conditionals, a straightforward relation between elements of propositional and first-order logic, and the roles of anaphora and rigid order in the regimented sentences that express multiply general propositions. We consider and reinterpret some ancient texts that have been neglected in the context of Stoic universal and existential propositions and offer new explanations of some puzzling features in Stoic logic. Our results confirm that Stoic logic surpasses Aristotle’s with regard to multiple generality, and are a reminder that focusing on multiple generality through the lens of Frege-inspired variable-binding quantifier theory may hamper our understanding and appreciation of pre-Fregean theories of multiple generality.
A natural suggestion and increasingly popular account of how to revise our logical beliefs treats revision of logic analogously to the revision of scientific theories. I investigate this approach and argue that simple applications of abductive methodology to logic result in revision-cycles, developing a detailed case study of an actual dispute with this property. This is problematic if we take abductive methodology to provide justification for revising our logical framework. I then generalize the case study, pointing to similarities with more recent and popular heterodox logics such as naïve logics of truth. I use this discussion to motivate a constraint—logical partisanhood—on the uses of such methodology: roughly, both the proposed alternative and our actual background logic must be able to agree that moving to the alternative logic is no worse than staying put.
Philosophy of biology is often said to have emerged in the last third of the twentieth century. Prior to this time, it has been alleged that the only authors who engaged philosophically with the life sciences were either logical empiricists who sought to impose the explanatory ideals of the physical sciences onto biology, or vitalists who invoked mystical agencies in an attempt to ward off the threat of physicochemical reduction. These schools paid little attention to actual biological science, and as a result philosophy of biology languished in a state of futility for much of the twentieth century. The situation, we are told, only began to change in the late 1960s and early 1970s, when a new generation of researchers began to focus on problems internal to biology, leading to the consolidation of the discipline. In this paper we challenge this widely accepted narrative of the history of philosophy of biology. We do so by arguing that the most important tradition within early twentieth-century philosophy of biology was neither logical empiricism nor vitalism, but the organicist movement that flourished between the First and Second World Wars. We show that the organicist corpus is thematically and methodologically continuous with the contemporary literature in order to discredit the view that early work in the philosophy of biology was unproductive, and we emphasize the desirability of integrating the historical and contemporary conversations into a single, unified discourse.
In this article, I outline a logic of design of a system as a specific kind of conceptual logic of the design of the model of a system, that is, the blueprint that provides information about the system to be created. In section two, I introduce the method of levels of abstraction as a modelling tool borrowed from computer science. In section three, I use this method to clarify two main conceptual logics of information inherited from modernity: Kant’s transcendental logic of conditions of possibility of a system, and Hegel’s dialectical logic of conditions of in/stability of a system. Both conceptual logics of information analyse structural properties of given systems. Strictly speaking, neither is a conceptual logic of information about the conditions of feasibility of a system, that is, neither is a logic of information as a logic of design. So, in section four, I outline this third conceptual logic of information and then interpret the conceptual logic of design as a logic of requirements, by introducing the relation of “sufficientisation”. In the conclusion, I argue that the logic of requirements is exactly what we need in order to make sense of, and buttress, a constructionist approach to knowledge.
We present a framework for epistemic logic, modeling the logical aspects of System 1 and System 2 cognitive processes, as per dual process theories of reasoning. The framework combines non-normal worlds semantics with the techniques of Dynamic Epistemic Logic. It models non-logically-omniscient, but moderately rational agents: their System 1 makes fast sense of incoming information by integrating it on the basis of their background knowledge and beliefs. Their System 2 allows them to slowly, step-wise unpack some of the logical consequences of such knowledge and beliefs, by paying a cognitive cost. The framework is applied to three instances of limited rationality, widely discussed in cognitive psychology: Stereotypical Thinking, the Framing Effect, and the Anchoring Effect.
What is a logical constant? The question is addressed in the tradition of Tarski's definition of logical operations as operations which are invariant under permutation. The paper introduces a general setting in which invariance criteria for logical operations can be compared and argues for invariance under potential isomorphism as the most natural characterization of logical operations.
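Tarski's permutation-invariance criterion can be checked by hand on a small domain. In the hedged sketch below, an operation taking a subset of the domain to a truth value counts as logical if its value never changes when a permutation of the domain is applied to its argument: the existential quantifier passes, while a predicate mentioning a particular individual fails. The helper names are invented for the example.

```python
from itertools import combinations, permutations

domain = (0, 1, 2)
all_subsets = [frozenset(c) for r in range(len(domain) + 1)
               for c in combinations(domain, r)]

def image(perm, s):
    """Apply a permutation (given as a dict) to every element of a subset."""
    return frozenset(perm[x] for x in s)

def invariant(op):
    """op(pi[S]) == op(S) for every permutation pi and every subset S."""
    for p in permutations(domain):
        perm = dict(zip(domain, p))
        if any(op(image(perm, s)) != op(s) for s in all_subsets):
            return False
    return True

exists     = lambda s: len(s) > 0   # "something is S" -- purely structural
contains_0 = lambda s: 0 in s       # "the individual 0 is S" -- not structural

print(invariant(exists))       # True
print(invariant(contains_0))   # False
```

The paper's own proposal, invariance under potential isomorphism, refines this idea across domains; the finite single-domain check above illustrates only the classical Tarskian starting point.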
A logic is called 'paraconsistent' if it rejects the rule called 'ex contradictione quodlibet', according to which any conclusion follows from inconsistent premises. While logicians have proposed many technically developed paraconsistent logical systems, and contemporary philosophers like Graham Priest have advanced the view that some contradictions can be true and advocated a paraconsistent logic to deal with them, until recently these systems have been little understood by philosophers. This book presents a comprehensive overview of paraconsistent logical systems to change this situation. The book includes almost every major author currently working in the field. The papers are on the cutting edge of the literature: some discuss current debates and others present important new ideas. The editors have avoided papers about the technical details of paraconsistent logic, concentrating instead on works that discuss more 'big picture' ideas. Different treatments of paradoxes take centre stage in many of the papers, but there are also several papers on how to interpret paraconsistent logic and some on how it can be applied to the philosophy of mathematics, the philosophy of language, and metaphysics.
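How ex contradictione quodlibet fails in a paraconsistent setting can be seen in a three-valued truth-table check. The sketch below uses Priest's logic LP (values true, both and false, with true and both designated) purely as an illustration; it is not drawn from the book, and the helper names are invented.

```python
from itertools import product

F, B, T = 0, 1, 2          # truth order: false < both < true
DESIGNATED = {T, B}        # "at least true"

def neg(a):
    return {T: F, B: B, F: T}[a]

def conj(a, b):
    return min(a, b)       # conjunction is minimum in the truth order

def entails(premise, conclusion, n_atoms=2):
    """LP-validity: every valuation that designates the premise
    also designates the conclusion."""
    for v in product((T, B, F), repeat=n_atoms):
        if premise(v) in DESIGNATED and conclusion(v) not in DESIGNATED:
            return False, v  # countermodel
    return True, None

# Ex contradictione quodlibet: does A & not-A entail an arbitrary B?
ecq, counter = entails(lambda v: conj(v[0], neg(v[0])), lambda v: v[1])
print(ecq)       # False
print(counter)   # (B, F): A takes the value 'both', B is plain false
```

The countermodel is exactly the dialetheist's scenario: when A is both true and false, the contradiction A & not-A is (at least) true, yet an unrelated falsehood B is not.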
Logic arguably plays a role in the normativity of reasoning. In particular, there are plausible norms of belief/disbelief whose antecedents are constituted by claims about what follows from what. But is logic also relevant to the normativity of agnostic attitudes? The question here is whether logical entailment also puts constraints on what kinds of things one can suspend judgment about. In this paper I address that question and I give a positive answer to it. In particular, I advance two logical norms of agnosticism, where the first one allows us to assess situations in which the subject is agnostic about the conclusion of a valid argument and the second one allows us to assess situations in which the subject is agnostic about one of the premises of a valid argument.
In spite of its significance for everyday and philosophical discourse, the explanatory connective has not received much treatment in the philosophy of logic. The present paper develops a logic for this connective, based on systematic connections between it and the truth-functional connectives.
There has been a recent surge of work on deontic modality within philosophy of language. This work has put the deontic logic tradition in contact with natural language semantics, resulting in a significant increase in sophistication on both ends. This chapter surveys the main motivations, achievements, and prospects of this work.
In this paper I will develop a view about the semantics of imperatives, which I term Modal Noncognitivism, on which imperatives might be said to have truth conditions (dispositionally, anyway), but on which it does not make sense to see them as expressing propositions (hence does not make sense to ascribe to them truth or falsity). This view stands against “Cognitivist” accounts of the semantics of imperatives, on which imperatives are claimed to express propositions, which are then enlisted in explanations of the relevant logico-semantic phenomena. It also stands against the major competitors to Cognitivist accounts—all of which are non-truth-conditional and, as a result, fail to provide satisfying explanations of the fundamental semantic characteristics of imperatives (or so I argue). The view of imperatives I defend here improves on various treatments of imperatives on the market in giving an empirically and theoretically adequate account of their semantics and logic. It yields explanations of a wide range of semantic and logical phenomena about imperatives—explanations that are, I argue, at least as satisfying as the sorts of explanations of semantic and logical phenomena familiar from truth-conditional semantics. But it accomplishes this while defending the notion—which is, I argue, substantially correct—that imperatives could not have propositions, or truth conditions, as their meanings.
Logical information theory is the quantitative version of the logic of partitions just as logical probability theory is the quantitative version of the dual Boolean logic of subsets. The resulting notion of information is about distinctions, differences and distinguishability and is formalized using the distinctions of a partition. All the definitions of simple, joint, conditional and mutual entropy of Shannon information theory are derived by a uniform transformation from the corresponding definitions at the logical level. The purpose of this paper is to give the direct generalization to quantum logical information theory that similarly focuses on the pairs of eigenstates distinguished by an observable, i.e., qudits of an observable. The fundamental theorem for quantum logical entropy and measurement establishes a direct quantitative connection between the increase in quantum logical entropy due to a projective measurement and the eigenstates that are distinguished by the measurement. Both the classical and quantum versions of logical entropy have simple interpretations as “two-draw” probabilities for distinctions. The conclusion is that quantum logical entropy is the simple and natural notion of information for quantum information theory focusing on the distinguishing of quantum states.
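The classical "two-draw" characterization mentioned in the abstract above can be sketched in a few lines. The following is a minimal illustration (not code from the paper): for a partition of an n-element set with equiprobable points, logical entropy is 1 minus the sum of squared block probabilities, which equals the probability that two independent draws land in distinct blocks. The function names and the example partition are my own.

```python
from itertools import product

def logical_entropy(partition, n):
    """Logical entropy h(pi) = 1 - sum of squared block probabilities,
    for a partition of {0, ..., n-1} with equiprobable points."""
    assert sorted(x for block in partition for x in block) == list(range(n))
    return 1 - sum((len(block) / n) ** 2 for block in partition)

def two_draw(partition, n):
    """Cross-check: probability that two independent equiprobable draws
    fall in different blocks (i.e., are 'distinguished' by the partition)."""
    block_of = {x: i for i, block in enumerate(partition) for x in block}
    dits = sum(1 for x, y in product(range(n), repeat=2)
               if block_of[x] != block_of[y])
    return dits / n ** 2

p = [[0, 1], [2], [3]]            # a partition of {0, 1, 2, 3}
print(logical_entropy(p, 4))      # 1 - (4 + 1 + 1)/16 = 0.625
print(two_draw(p, 4))             # 0.625: the two characterizations agree
```

The agreement of the two functions is exactly the "two-draw probability" reading of logical entropy; the quantum generalization in the paper replaces blocks of a partition with eigenstates distinguished by an observable.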
Gaining information can be modelled as a narrowing of epistemic space. Intuitively, becoming informed that such-and-such is the case rules out certain scenarios or would-be possibilities. Chalmers’s account of epistemic space treats it as a space of a priori possibility and so has trouble dealing with the information which we intuitively feel can be gained from logical inference. I propose a more inclusive notion of epistemic space, based on Priest’s notion of open worlds, yet which contains only those epistemic scenarios which are not obviously impossible. Whether something is obvious is not always a determinate matter, and so the resulting picture is of an epistemic space with fuzzy boundaries.
We investigate an enrichment of the propositional modal language ℒ with a "universal" modality ■ having semantics x ⊧ ■φ iff ∀y(y ⊧ φ), and a countable set of "names" - a special kind of propositional variable ranging over singleton sets of worlds. The obtained language ℒc proves to have great expressive power. It is equivalent with respect to modal definability to another enrichment ℒ(⍯) of ℒ, where ⍯ is an additional modality with the semantics x ⊧ ⍯φ iff ∀y(y ≠ x → y ⊧ φ). Model-theoretic characterizations of modal definability in these languages are obtained. Further we consider deductive systems in ℒc. Strong completeness of the normal ℒc logics is proved with respect to models in which all worlds are named. Every ℒc-logic axiomatized by formulae containing only names (but not propositional variables) is proved to be strongly frame-complete. Problems concerning transfer of properties ([in]completeness, filtration, finite model property etc.) from ℒ to ℒc are discussed. Finally, further perspectives for names in multimodal environment are briefly sketched.
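The two semantic clauses quoted in the abstract above (the universal modality, and the modality true at x iff φ holds at every world other than x) are easy to animate on a finite Kripke model. The following toy evaluator is my own illustration, not the authors' code; the model and formula encoding are assumptions.

```python
def holds(model, x, phi):
    """Evaluate formula phi at world x of model = (worlds, R, V).
    Formulas are tuples: ('p', name), ('not', f), ('and', f, g),
    ('box', f)  - ordinary relational modality via R,
    ('univ', f) - universal modality: f holds at every world,
    ('diff', f) - f holds at every world distinct from x."""
    worlds, R, V = model
    op = phi[0]
    if op == 'p':
        return x in V[phi[1]]
    if op == 'not':
        return not holds(model, x, phi[1])
    if op == 'and':
        return holds(model, x, phi[1]) and holds(model, x, phi[2])
    if op == 'box':
        return all(holds(model, y, phi[1]) for y in worlds if (x, y) in R)
    if op == 'univ':
        return all(holds(model, y, phi[1]) for y in worlds)
    if op == 'diff':
        return all(holds(model, y, phi[1]) for y in worlds if y != x)
    raise ValueError(op)

# A three-world model in which p fails only at world 2.
M = ({1, 2, 3}, {(1, 2), (2, 3)}, {'p': {1, 3}})
print(holds(M, 1, ('univ', ('p', 'p'))))  # False: p fails at world 2
print(holds(M, 2, ('diff', ('p', 'p'))))  # True: p holds everywhere except 2
```

Note that ■φ is equivalent to φ ∧ ⍯φ on this semantics, which is the kind of interdefinability underlying the equivalence of the two enrichments discussed in the abstract.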
This paper discusses three relevant logics that obey Component Homogeneity - a principle that Goddard and Routley introduce in their project of a logic of significance. The paper establishes two main results. First, it establishes a general characterization result for two families of logics that obey Component Homogeneity - that is, we provide a set of necessary and sufficient conditions for their consequence relations. From this, we derive characterization results for S*fde, dS*fde, and crossS*fde. Second, the paper establishes complete sequent calculi for S*fde, dS*fde, and crossS*fde. Among the other accomplishments of the paper, we generalize the semantics from Bochvar, Hallden, Deutsch and Daniels, we provide a general recipe to define containment logics, and we explore the single-premise/single-conclusion fragment of S*fde, dS*fde, and crossS*fde, and the connections between crossS*fde and the logic Eq of equality by Epstein. Also, we present S*fde as a relevant logic of meaninglessness that follows the main philosophical tenets of Goddard and Routley, and we briefly examine three further systems that are closely related to our main logics. Finally, we discuss Routley's criticism of containment logic in light of our results, and overview some open issues.
This book has three main parts. The first, longer, part is a reprint of the author's Deviant Logic, which initially appeared as a book by itself in 1974. The second and third parts include reprints of five papers originally published between 1973 and 1980. Three of them focus on the nature and justification of deductive reasoning, which are also a major concern of Deviant Logic. The other two are on fuzzy logic, and make up for a major omission of Deviant Logic.
The result of combining classical quantificational logic with modal logic proves necessitism – the claim that necessarily everything is necessarily identical to something. This problem is reflected in the purely quantificational theory by theorems such as ∃x t=x; it is a theorem, for example, that something is identical to Timothy Williamson. The standard way to avoid these consequences is to weaken the theory of quantification to a certain kind of free logic. However, it has often been noted that in order to specify the truth conditions of certain sentences involving constants or variables that don’t denote, one has to apparently quantify over things that are not identical to anything. In this paper I defend a contingentist, non-Meinongian metaphysics within a positive free logic. I argue that although certain names and free variables do not actually refer to anything, in each case there might have been something they actually refer to, allowing one to interpret the contingentist claims without quantifying over mere possibilia.
This book treats ancient logic: the logic that originated in Greece with Aristotle and the Stoics, mainly in the hundred-year period beginning about 350 BCE. Ancient logic was never completely ignored by modern logic from its Boolean origin in the middle 1800s: it was prominent in Boole’s writings and it was mentioned by Frege and by Hilbert. Nevertheless, the first century of mathematical logic did not take it seriously enough to study the ancient logic texts. A renaissance in ancient logic studies occurred in the early 1950s with the publication of the landmark Aristotle’s Syllogistic by Jan Łukasiewicz (Oxford UP 1951, 2nd ed. 1957). Despite its title, it treats the logic of the Stoics as well as that of Aristotle. Łukasiewicz was a distinguished mathematical logician. He had created many-valued logic and the parenthesis-free prefix notation known as Polish notation. He co-authored an important paper on the metatheory of propositional logic with Alfred Tarski, and he was one of Tarski’s three main teachers at the University of Warsaw. Łukasiewicz’s stature was just short of that of the giants: Aristotle, Boole, Frege, Tarski, and Gödel. No mathematical logician of his caliber had ever before quoted the actual teachings of ancient logicians. Not only did Łukasiewicz inject fresh hypotheses, new concepts, and imaginative modern perspectives into the field; his enormous prestige and that of the Warsaw School of Logic also reflected on the whole field of ancient logic studies. Suddenly, this previously somewhat dormant and obscure field became active and gained in respectability and importance in the eyes of logicians, mathematicians, linguists, analytic philosophers, and historians. Next to Aristotle himself and perhaps the Stoic logician Chrysippus, Łukasiewicz is the most prominent figure in ancient logic studies. A huge literature traces its origins to Łukasiewicz.
This book, Ancient Logic and Its Modern Interpretations, is based on the 1973 Buffalo Symposium on Modernist Interpretations of Ancient Logic, the first conference devoted entirely to critical assessment of the state of ancient logic studies.
We explore the view that Frege's puzzle is a source of straightforward counterexamples to Leibniz's law. Taking this seriously requires us to revise the classical logic of quantifiers and identity; we work out the options, in the context of higher-order logic. The logics we arrive at provide the resources for a straightforward semantics of attitude reports that is consistent with the Millian thesis that the meaning of a name is just the thing it stands for. We provide models to show that some of these logics are non-degenerate.
Recent work in formal semantics suggests that the language system includes not only a structure building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth‐conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the ‘logicality of language’, accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counter‐examples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is ‘blind’ to open class terms to the extent that different tokens of the same term are treated as if independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and can only be paired with non‐classical—indeed quite exotic—kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, can explain why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can pursue versions of the logicality of language in frameworks compatible with the view that the language system is not radically modular vis‐à‐vis its open class terms and employs a deductive system that is basically classical.
One logic or many? I say—many. Or rather, I say there is one logic for each way of specifying the class of all possible circumstances, or models, i.e., all ways of interpreting a given language. But because there is no unique way of doing this, I say there is no unique logic except in a relative sense. Indeed, given any two competing logical theories T1 and T2 (in the same language) one could always consider their common core, T, and settle on that theory. So, given any language L, one could settle on the minimal logic T0 corresponding to the common core shared by all competitors. That would be a way of resisting relativism, as long as one is willing to redraw the bounds of logic accordingly. However, such a minimal theory T0 may be empty if the syntax of L contains no special ingredients the interpretation of which is independent of the specification of the relevant L-models. And generally—I argue—this is indeed the case.
Many philosophers take purportedly logical cases of ground to be obvious cases, and indeed such cases have been used to motivate the existence and importance of ground. I argue against this. I do so by motivating two kinds of semantic determination relations. Intuitions of logical ground track these semantic relations. Moreover, our knowledge of semantics for first-order logic can explain why we have such intuitions. And, I argue, neither semantic relation can be a species of ground even on a quite broad conception of what ground is. Hence, without a positive argument for taking so-called ‘logical ground’ to be something distinct from a semantic determination relation, we should cease treating logical cases as cases of ground.
This original research hypothesises that the most fundamental building blocks of logical descriptions of cognitive (or knowledge) agents are expressible in terms of their conceptions of the world. This article conceptually and logically analyses agents’ conceptions in order to offer a constructivist-based logical model for terminological knowledge. The most significant characteristic of [terminological] knowing is that there are strong interrelationships between terminological knowledge and the individualistic constructed, and to-be-constructed, models of knowledge. Correspondingly, I conceptually and logically analyse conception expressions based on terminological knowledge, and I show how terminological knowledge may reasonably be assumed to be constructed based on the agents’ conceptions of the world. The focus of my model is on terminological knowledge structures, which may find applications in such diverse fields as the Semantic Web and educational/learning systems.
The logic of indicative conditionals remains the topic of deep and intractable philosophical disagreement. I show that two influential epistemic norms—the Lockean theory of belief and the Ramsey test for conditional belief—are jointly sufficient to ground a powerful new argument for a particular conception of the logic of indicative conditionals. Specifically, the argument demonstrates, contrary to the received historical narrative, that there is a real sense in which Stalnaker’s semantics for the indicative did succeed in capturing the logic of the Ramseyan indicative conditional.
Epistemic two-dimensional semantics is a theory in the philosophy of language that provides an account of meaning which is sensitive to the distinction between necessity and apriority. While this theory is usually presented in an informal manner, I take some steps in formalizing it in this paper. To do so, I define a semantics for a propositional modal logic with operators for the modalities of necessity, actuality, and apriority that captures the relevant ideas of epistemic two-dimensional semantics. I also describe some properties of the logic that are interesting from a philosophical perspective, and apply it to the so-called nesting problem.
An exact truthmaker for A is a state which, as well as guaranteeing A’s truth, is wholly relevant to it. States with parts irrelevant to whether A is true do not count as exact truthmakers for A. Giving semantics in this way produces a very unusual consequence relation, on which conjunctions do not entail their conjuncts. This feature makes the resulting logic highly unusual. In this paper, we set out formal semantics for exact truthmaking and characterise the resulting notion of entailment, showing that it is compact and decidable. We then investigate the effect of various restrictions on the semantics. We also formulate a sequent-style proof system for exact entailment and give soundness and completeness results.
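The striking failure noted in the abstract above (conjunctions not entailing their conjuncts) can be seen concretely in a toy state space. The sketch below is my own illustration under simplifying assumptions, not the paper's formal semantics: states are sets of atomic "bits", fusion is union, and exact entailment is rendered as "every exact verifier of the premise is an exact verifier of the conclusion".

```python
from itertools import product

def verifiers(V, phi):
    """Exact verifiers of phi. V maps atoms to sets of states (frozensets);
    fusion of states is union. The exact verifiers of A & B are exactly the
    fusions s | t of an exact verifier s of A with an exact verifier t of B."""
    if phi[0] == 'atom':
        return set(V[phi[1]])
    if phi[0] == 'and':
        return {s | t for s, t in
                product(verifiers(V, phi[1]), verifiers(V, phi[2]))}
    raise ValueError(phi[0])

def exactly_entails(V, phi, psi):
    """One natural rendering of exact entailment: every exact verifier
    of phi is also an exact verifier of psi."""
    return verifiers(V, phi) <= verifiers(V, psi)

a, b = frozenset({'a'}), frozenset({'b'})
V = {'A': {a}, 'B': {b}}
conj = ('and', ('atom', 'A'), ('atom', 'B'))
print(exactly_entails(V, conj, ('atom', 'A')))  # False
```

The fused state {a, b} exactly verifies A & B, but it has a part (b) irrelevant to A, so it is not an exact verifier of A - which is why the conjunction fails to exactly entail its conjunct on this semantics.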