A natural and increasingly popular account of how to revise our logical beliefs treats revision of logic analogously to the revision of scientific theories. I investigate this approach and argue that simple applications of abductive methodology to logic result in revision-cycles, developing a detailed case study of an actual dispute with this property. This is problematic if we take abductive methodology to provide justification for revising our logical framework. I then generalize the case study, pointing to similarities with more recent and popular heterodox logics such as naïve logics of truth. I use this discussion to motivate a constraint—logical partisanhood—on the uses of such methodology: roughly, both the proposed alternative and our actual background logic must be able to agree that moving to the alternative logic is no worse than staying put.
We present a framework for epistemic logic, modeling the logical aspects of System 1 and System 2 cognitive processes, as per dual process theories of reasoning. The framework combines non-normal worlds semantics with the techniques of Dynamic Epistemic Logic. It models non-logically-omniscient, but moderately rational agents: their System 1 makes fast sense of incoming information by integrating it on the basis of their background knowledge and beliefs. Their System 2 allows them to slowly, step-wise unpack some of the logical consequences of such knowledge and beliefs, by paying a cognitive cost. The framework is applied to three instances of limited rationality, widely discussed in cognitive psychology: Stereotypical Thinking, the Framing Effect, and the Anchoring Effect.
Does rationality require logical omniscience? Our best formal theories of rationality imply that it does, but our ordinary evaluations of rationality seem to suggest otherwise. This paper aims to resolve the tension by arguing that our ordinary evaluations of rationality are not only consistent with the thesis that rationality requires logical omniscience, but also provide a compelling rationale for accepting this thesis in the first place. This paper also defends an account of apriori justification for logical beliefs that is designed to explain the rational requirement of logical omniscience. On this account, apriori justification for beliefs about logic has its source in logical facts, rather than psychological facts about experience, reasoning, or understanding. This account has important consequences for the epistemic role of experience in the logical domain. In a slogan, the epistemic role of experience in the apriori domain is not a justifying role, but rather an enabling and disabling role.
In this paper I will develop a view about the semantics of imperatives, which I term Modal Noncognitivism, on which imperatives might be said to have truth conditions (dispositionally, anyway), but on which it does not make sense to see them as expressing propositions (hence does not make sense to ascribe to them truth or falsity). This view stands against “Cognitivist” accounts of the semantics of imperatives, on which imperatives are claimed to express propositions, which are then enlisted in explanations of the relevant logico-semantic phenomena. It also stands against the major competitors to Cognitivist accounts—all of which are non-truth-conditional and, as a result, fail to provide satisfying explanations of the fundamental semantic characteristics of imperatives (or so I argue). The view of imperatives I defend here improves on various treatments of imperatives on the market in giving an empirically and theoretically adequate account of their semantics and logic. It yields explanations of a wide range of semantic and logical phenomena about imperatives—explanations that are, I argue, at least as satisfying as the sorts of explanations of semantic and logical phenomena familiar from truth-conditional semantics. But it accomplishes this while defending the notion—which is, I argue, substantially correct—that imperatives could not have propositions, or truth conditions, as their meanings.
According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight of Hacking (1967: 311–325), I develop a version of Bayesianism that permits logical ignorance. This includes: an account of the synchronic norms that govern a logically ignorant individual at any given time; an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence; and an account of when logical ignorance is irrational and when it isn’t. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.
Logic arguably plays a role in the normativity of reasoning. In particular, there are plausible norms of belief/disbelief whose antecedents are constituted by claims about what follows from what. But is logic also relevant to the normativity of agnostic attitudes? The question here is whether logical entailment also puts constraints on what kinds of things one can suspend judgment about. In this paper I address that question and I give a positive answer to it. In particular, I advance two logical norms of agnosticism, where the first one allows us to assess situations in which the subject is agnostic about the conclusion of a valid argument and the second one allows us to assess situations in which the subject is agnostic about one of the premises of a valid argument.
It is argued, on the basis of ideas derived from Wittgenstein's Tractatus and Husserl's Logical Investigations, that the formal comprehends more than the logical. More specifically: that there exist certain formal-ontological constants (part, whole, overlapping, etc.) which do not fall within the province of logic. A two-dimensional directly depicting language is developed for the representation of the constants of formal ontology, and means are provided for the extension of this language to enable the representation of certain materially necessary relations. The paper concludes with a discussion of the relationship between formal logic, formal ontology, and mathematics.
Philosophy of biology is often said to have emerged in the last third of the twentieth century. Prior to this time, it has been alleged that the only authors who engaged philosophically with the life sciences were either logical empiricists who sought to impose the explanatory ideals of the physical sciences onto biology, or vitalists who invoked mystical agencies in an attempt to ward off the threat of physicochemical reduction. These schools paid little attention to actual biological science, and as a result philosophy of biology languished in a state of futility for much of the twentieth century. The situation, we are told, only began to change in the late 1960s and early 1970s, when a new generation of researchers began to focus on problems internal to biology, leading to the consolidation of the discipline. In this paper we challenge this widely accepted narrative of the history of philosophy of biology. We do so by arguing that the most important tradition within early twentieth-century philosophy of biology was neither logical empiricism nor vitalism, but the organicist movement that flourished between the First and Second World Wars. We show that the organicist corpus is thematically and methodologically continuous with the contemporary literature in order to discredit the view that early work in the philosophy of biology was unproductive, and we emphasize the desirability of integrating the historical and contemporary conversations into a single, unified discourse.
Recent work in formal semantics suggests that the language system includes not only a structure building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth-conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the `logicality of language', accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counter-examples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is `blind' to open class terms to the extent that different tokens of the same term are treated as if independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and can only be paired with non-classical---indeed quite exotic---kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, can explain why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can pursue versions of the logicality of language in frameworks compatible with the view that the language system is not radically modular vis-à-vis its open class terms and employs a deductive system that is basically classical.
Theories of epistemic justification are commonly assessed by exploring their predictions about particular hypothetical cases – predictions as to whether justification is present or absent in this or that case. With a few exceptions, it is much less common for theories of epistemic justification to be assessed by exploring their predictions about logical principles. The exceptions are a handful of ‘closure’ principles, which have received a lot of attention, and which certain theories of justification are well known to invalidate. But these closure principles are only a small sample of the logical principles that we might consider. In this paper, I will outline four further logical principles that plausibly hold for justification and two which plausibly do not. While my primary aim is just to put these principles forward, I will use them to evaluate some different approaches to justification and (tentatively) conclude that a ‘normic’ theory of justification best captures its logic.
In spite of its significance for everyday and philosophical discourse, the explanatory connective has not received much treatment in the philosophy of logic. The present paper develops a logic for this connective based on systematic connections between it and the truth-functional connectives.
Logical Indefinites. Jack Woods - 2014 - Logique et Analyse (Special Issue edited by Julien Murzi and Massimiliano Carrara) 227: 277-307.
I argue that we can and should extend Tarski's model-theoretic criterion of logicality to cover indefinite expressions like Hilbert's ɛ operator, Russell's indefinite description operator η, and abstraction operators like 'the number of'. I draw on this extension to discuss the logical status of both abstraction operators and abstraction principles.
Many philosophers take purportedly logical cases of ground to be obvious cases, and indeed such cases have been used to motivate the existence and importance of ground. I argue against this. I do so by motivating two kinds of semantic determination relations. Intuitions of logical ground track these semantic relations. Moreover, our knowledge of semantics for first-order logic can explain why we have such intuitions. And, I argue, neither semantic relation can be a species of ground even on a quite broad conception of what ground is. Hence, without a positive argument for taking so-called ‘logical ground’ to be something distinct from a semantic determination relation, we should cease treating logical cases as cases of ground.
In this paper I argue that pluralism at the level of logical systems requires a certain monism at the meta-logical level, and so, in a sense, there cannot be pluralism all the way down. The adequate alternative logical systems bottom out in a shared basic meta-logic, and as such, logical pluralism is limited. I argue that the content of this basic meta-logic must include the analogue of the logical rules Modus Ponens and Universal Instantiation. I show this through a detailed analysis of the ‘adoption problem’, which manifests something special about MP and UI. It appears that MP and UI underwrite the very nature of a logical rule of inference, due to all rules of inference being conditional and universal in their structure. As such, all logical rules presuppose MP and UI, making MP and UI self-governing, basic, unadoptable, and required in the meta-logic for the adequacy of any logical system.
We investigate an enrichment of the propositional modal language ℒ with a "universal" modality ■ having semantics x ⊧ ■φ iff ∀y(y ⊧ φ), and a countable set of "names" - a special kind of propositional variable ranging over singleton sets of worlds. The obtained language ℒ_c proves to have a great expressive power. It is equivalent with respect to modal definability to another enrichment ℒ(⍯) of ℒ, where ⍯ is an additional modality with the semantics x ⊧ ⍯φ iff ∀y(y ≠ x → y ⊧ φ). Model-theoretic characterizations of modal definability in these languages are obtained. Further we consider deductive systems in ℒ_c. Strong completeness of the normal ℒ_c logics is proved with respect to models in which all worlds are named. Every ℒ_c-logic axiomatized by formulae containing only names (but not propositional variables) is proved to be strongly frame-complete. Problems concerning transfer of properties ([in]completeness, filtration, finite model property, etc.) from ℒ to ℒ_c are discussed. Finally, further perspectives for names in multimodal environments are briefly sketched.
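The universal and difference modalities defined above have simple finite-model semantics that can be sketched directly. The following is a minimal illustration of my own (the tuple-based formula encoding is an arbitrary choice, not the paper's notation):

```python
# Sketch: evaluating the universal modality ('univ', phi) and the
# difference modality ('diff', phi) over a finite set of worlds W,
# with a valuation V mapping proposition letters to the worlds where
# they hold. Formulas are nested tuples, e.g. ('not', ('p',)).

def holds(world, formula, W, V):
    """Return True iff `formula` is true at `world`."""
    op = formula[0]
    if op == 'not':
        return not holds(world, formula[1], W, V)
    if op == 'and':
        return holds(world, formula[1], W, V) and holds(world, formula[2], W, V)
    if op == 'univ':   # x |= [u]phi  iff  phi holds at every world y
        return all(holds(y, formula[1], W, V) for y in W)
    if op == 'diff':   # x |= [d]phi  iff  phi holds at every world y != x
        return all(holds(y, formula[1], W, V) for y in W if y != world)
    return world in V[op]  # proposition letter

W = {0, 1, 2}
V = {'p': {0, 1, 2}, 'q': {0}}
print(holds(0, ('univ', ('p',)), W, V))  # True: p holds everywhere
print(holds(1, ('diff', ('q',)), W, V))  # False: q fails at world 2
```

Note that the universal modality ignores any accessibility relation entirely, which is why the sketch needs no relation at all: only the set of worlds matters.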
In this article, I outline a logic of design of a system as a specific kind of conceptual logic of the design of the model of a system, that is, the blueprint that provides information about the system to be created. In section two, I introduce the method of levels of abstraction as a modelling tool borrowed from computer science. In section three, I use this method to clarify two main conceptual logics of information inherited from modernity: Kant’s transcendental logic of conditions of possibility of a system, and Hegel’s dialectical logic of conditions of in/stability of a system. Both conceptual logics of information analyse structural properties of given systems. Strictly speaking, neither is a conceptual logic of information about the conditions of feasibility of a system, that is, neither is a logic of information as a logic of design. So, in section four, I outline this third conceptual logic of information and then interpret the conceptual logic of design as a logic of requirements, by introducing the relation of “sufficientisation”. In the conclusion, I argue that the logic of requirements is exactly what we need in order to make sense of, and buttress, a constructionist approach to knowledge.
One logic or many? I say—many. Or rather, I say there is one logic for each way of specifying the class of all possible circumstances, or models, i.e., all ways of interpreting a given language. But because there is no unique way of doing this, I say there is no unique logic except in a relative sense. Indeed, given any two competing logical theories T1 and T2 (in the same language) one could always consider their common core, T, and settle on that theory. So, given any language L, one could settle on the minimal logic T0 corresponding to the common core shared by all competitors. That would be a way of resisting relativism, as long as one is willing to redraw the bounds of logic accordingly. However, such a minimal theory T0 may be empty if the syntax of L contains no special ingredients the interpretation of which is independent of the specification of the relevant L-models. And generally—I argue—this is indeed the case.
The result of combining classical quantificational logic with modal logic proves necessitism – the claim that necessarily everything is necessarily identical to something. This problem is reflected in the purely quantificational theory by theorems such as ∃x t=x; it is a theorem, for example, that something is identical to Timothy Williamson. The standard way to avoid these consequences is to weaken the theory of quantification to a certain kind of free logic. However, it has often been noted that in order to specify the truth conditions of certain sentences involving constants or variables that don’t denote, one has to apparently quantify over things that are not identical to anything. In this paper I defend a contingentist, non-Meinongian metaphysics within a positive free logic. I argue that although certain names and free variables do not actually refer to anything, in each case there might have been something they actually refer to, allowing one to interpret the contingentist claims without quantifying over mere possibilia.
We argue that the extant evidence for Stoic logic provides all the elements required for a variable-free theory of multiple generality, including a number of remarkably modern features that straddle logic and semantics, such as the understanding of one- and two-place predicates as functions, the canonical formulation of universals as quantified conditionals, a straightforward relation between elements of propositional and first-order logic, and the roles of anaphora and rigid order in the regimented sentences that express multiply general propositions. We consider and reinterpret some ancient texts that have been neglected in the context of Stoic universal and existential propositions and offer new explanations of some puzzling features in Stoic logic. Our results confirm that Stoic logic surpasses Aristotle’s with regard to multiple generality, and are a reminder that focusing on multiple generality through the lens of Frege-inspired variable-binding quantifier theory may hamper our understanding and appreciation of pre-Fregean theories of multiple generality.
The current resurgence of interest in cognition and in the nature of cognitive processing has brought with it also a renewed interest in the early work of Husserl, which contains one of the most sustained attempts to come to grips with the problems of logic from a cognitive point of view. Logic, for Husserl, is a theory of science; but it is a theory which takes seriously the idea that scientific theories are constituted by the mental acts of cognitive subjects. The present essay begins with an exposition of Husserl's act-based conception of what a science is, and goes on to consider his account of the role of linguistic meanings, of the ontology of scientific objects, and of evidence and truth. The essay concentrates almost exclusively on the Logical Investigations of 1900/01. This is not only because this work, which is surely Husserl's single most important masterpiece, has been overshadowed first of all by his Ideas I and then later by the Crisis. It is also because the Investigations contain, in a peculiarly clear and pregnant form, a whole panoply of ideas on logic and cognitive theory which either simply disappeared in Husserl's own later writings or became obfuscated by an admixture of that great mystery which is 'transcendental phenomenology'.
We explore the view that Frege's puzzle is a source of straightforward counterexamples to Leibniz's law. Taking this seriously requires us to revise the classical logic of quantifiers and identity; we work out the options, in the context of higher-order logic. The logics we arrive at provide the resources for a straightforward semantics of attitude reports that is consistent with the Millian thesis that the meaning of a name is just the thing it stands for. We provide models to show that some of these logics are non-degenerate.
What is a logical constant? The question is addressed in the tradition of Tarski's definition of logical operations as operations which are invariant under permutation. The paper introduces a general setting in which invariance criteria for logical operations can be compared and argues for invariance under potential isomorphism as the most natural characterization of logical operations.
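Tarski's permutation-invariance criterion mentioned above can be made concrete on a small finite domain: an operation on subsets counts as logical just in case it commutes with every permutation of the domain. The following sketch is my own illustration (the example operations `complement` and `contains_zero` are hypothetical, not from the paper):

```python
# Brute-force permutation-invariance test on a three-element domain D.
# An operation op: subsets of D -> subsets of D is invariant iff
# pi(op(S)) == op(pi(S)) for every permutation pi and every subset S.

from itertools import chain, combinations, permutations

D = [0, 1, 2]

def subsets(xs):
    return [set(s) for s in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def invariant(op):
    for perm in permutations(D):
        pi = dict(zip(D, perm))
        for S in subsets(D):
            if {pi[x] for x in op(S)} != op({pi[x] for x in S}):
                return False
    return True

complement = lambda S: set(D) - S                  # commutes with every permutation
contains_zero = lambda S: S if 0 in S else set()   # "mentions" the element 0
print(invariant(complement))     # True: complement is permutation-invariant
print(invariant(contains_zero))  # False: fails under a permutation moving 0
```

The second operation fails precisely because it distinguishes a particular individual, which is the intuition the invariance criterion is meant to capture: logical operations are blind to the identity of objects.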
Epistemic two-dimensional semantics is a theory in the philosophy of language that provides an account of meaning which is sensitive to the distinction between necessity and apriority. While this theory is usually presented in an informal manner, I take some steps in formalizing it in this paper. To do so, I define a semantics for a propositional modal logic with operators for the modalities of necessity, actuality, and apriority that captures the relevant ideas of epistemic two-dimensional semantics. I also describe some properties of the logic that are interesting from a philosophical perspective, and apply it to the so-called nesting problem.
Prior to Kripke's seminal work on the semantics of modal logic, McKinsey offered an alternative interpretation of the necessity operator, inspired by the Bolzano-Tarski notion of logical truth. According to this interpretation, `it is necessary that A' is true just in case every sentence with the same logical form as A is true. In our paper, we investigate this interpretation of the modal operator, resolving some technical questions, and relating it to the logical interpretation of modality and some views in modal metaphysics. In particular, we present a hitherto unpublished solution to problems 41 and 42 from Friedman's 102 problems, which uses a different method of proof from the solution presented in Tadeusz Prucnal's paper.
Buddhist philosophers have developed a rich tradition of logic. The material that forms this tradition, however, is hardly discussed or even known outside it. This article presents some of that material in a manner that is accessible to contemporary logicians and philosophers of logic, and sets agendas for a global philosophy of logic.
One of the open problems in the philosophy of information is whether there is an information logic (IL), different from epistemic (EL) and doxastic logic (DL), which formalises the relation “a is informed that p” (Iap) satisfactorily. In this paper, the problem is solved by arguing that the axiom schemata of the normal modal logic (NML) KTB (also known as B or Br or Brouwer’s system) are well suited to formalise the relation of “being informed”. After having shown that IL can be constructed as an informational reading of KTB, four consequences of a KTB-based IL are explored: information overload; the veridicality thesis (Iap → p); the relation between IL and EL; and the Kp → Bp principle or entailment property, according to which knowledge implies belief. Although these issues are discussed later in the article, they are the motivations behind the development of IL.
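Since KTB is the modal logic of reflexive and symmetric frames, the behaviour of its characteristic axioms, T (□p → p, the veridicality thesis under the informational reading) and B (p → □◇p), can be checked by brute force on a small frame. This is a hedged sketch of my own; the particular frame is an arbitrary choice, not from the paper:

```python
# Check that on a reflexive, symmetric frame every instance of
# T (box p -> p) and B (p -> box dia p) holds, by enumerating all
# valuations of a single letter p as subsets S of the worlds W.

from itertools import chain, combinations

W = [0, 1, 2]
R = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0), (1, 2), (2, 1)}  # reflexive, symmetric

def box(S):   # worlds where "box p" holds, given p true exactly on S
    return {x for x in W if all(y in S for y in W if (x, y) in R)}

def dia(S):   # worlds where "dia p" holds
    return {x for x in W if any((x, y) in R and y in S for y in W)}

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

t_valid = all(box(set(S)) <= set(S) for S in powerset(W))       # box p -> p
b_valid = all(set(S) <= box(dia(set(S))) for S in powerset(W))  # p -> box dia p
print(t_valid, b_valid)  # True True
```

Reflexivity is what makes T come out valid (every world is its own accessible world), and symmetry does the same for B; dropping either pair from R would produce countermodels.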
Logical information theory is the quantitative version of the logic of partitions just as logical probability theory is the quantitative version of the dual Boolean logic of subsets. The resulting notion of information is about distinctions, differences and distinguishability and is formalized using the distinctions of a partition. All the definitions of simple, joint, conditional and mutual entropy of Shannon information theory are derived by a uniform transformation from the corresponding definitions at the logical level. The purpose of this paper is to give the direct generalization to quantum logical information theory that similarly focuses on the pairs of eigenstates distinguished by an observable, i.e., qudits of an observable. The fundamental theorem for quantum logical entropy and measurement establishes a direct quantitative connection between the increase in quantum logical entropy due to a projective measurement and the eigenstates that are distinguished by the measurement. Both the classical and quantum versions of logical entropy have simple interpretations as “two-draw” probabilities for distinctions. The conclusion is that quantum logical entropy is the simple and natural notion of information for quantum information theory focusing on the distinguishing of quantum states.
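The "two-draw" reading of classical logical entropy admits a direct computation: for a partition π of a finite set U with equiprobable elements, h(π) = 1 − Σ_B (|B|/|U|)², the probability that two independent draws from U land in different blocks, i.e., are distinguished by π. A minimal sketch of my own (the partitions are illustrative examples):

```python
# Logical entropy of a partition, computed exactly with rationals:
# h(pi) = 1 - sum over blocks B of (|B|/|U|)^2.

from fractions import Fraction

def logical_entropy(partition):
    n = sum(len(block) for block in partition)
    return 1 - sum(Fraction(len(block), n) ** 2 for block in partition)

# U = {1,2,3,4} split into two equal blocks: half of all ordered pairs
# of draws cross a block boundary, so h = 1/2.
print(logical_entropy([{1, 2}, {3, 4}]))      # 1/2
print(logical_entropy([{1}, {2}, {3}, {4}]))  # 3/4 (discrete partition)
print(logical_entropy([{1, 2, 3, 4}]))        # 0 (blob: no distinctions)
```

The two extreme cases mirror the theory: the discrete partition makes the most distinctions possible, while the one-block "blob" makes none and so carries zero logical information.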
There has been a recent surge of work on deontic modality within philosophy of language. This work has put the deontic logic tradition in contact with natural language semantics, resulting in a significant increase in sophistication on both ends. This chapter surveys the main motivations, achievements, and prospects of this work.
In this manuscript, published here for the first time, Tarski explores the concept of logical notion. He draws on Klein's Erlanger Programm to locate the logical notions of ordinary geometry as those invariant under all transformations of space. Generalizing, he explicates the concept of logical notion of an arbitrary discipline.
Those who conceive logic as a science have generally favoured one of two alternative conceptions as to what the subject-matter of this science ought to be. On the one hand is the nowadays somewhat old-fashioned-seeming view of logic as the science of judgment, or of thinking or reasoning activities in general. On the other hand is the view of logic as a science of ideal meanings, 'thoughts', or 'propositions in themselves'. There is, however, a third alternative conception, which enjoyed only a brief flowering in the years leading up to the First World War, but whose lingering presence can be detected in the background of more recent ontologising trends in logic, as for example in the 'situation semantics' of Barwise and Perry. This third conception sees logic as a science of special objects called 'Sachverhalte' or 'states of affairs'. A view of this sort is present in simplified form in the works of Meinong, but it received its definitive formulation in the writings of Adolf Reinach, a student of Husserl who is otherwise noteworthy for having anticipated, in a monograph of 1913, large chunks of what later became known as the theory of speech acts.
In explaining the notion of a fundamental property or relation, metaphysicians will often draw an analogy with languages. The fundamental properties and relations stand to reality as the primitive predicates and relations stand to a language: the smallest set of vocabulary God would need in order to write the “book of the world.” This paper attempts to make good on this metaphor. To that end, a modality is introduced that, put informally, stands to propositions as logical truth stands to sentences. The resulting theory, formulated in higher-order logic, also vindicates the Humean idea that fundamental properties and relations are freely recombinable and a variant of the structural idea that propositions can be decomposed into their fundamental constituents via logical operations. Indeed, it is seen that, although these ideas are seemingly distinct, they are not independent, and fall out of a natural and general theory about the granularity of reality.
This book treats ancient logic: the logic developed in Greece by Aristotle and the Stoics, mainly in the hundred-year period beginning about 350 BCE. Ancient logic was never completely ignored by modern logic from its Boolean origin in the middle 1800s: it was prominent in Boole’s writings and it was mentioned by Frege and by Hilbert. Nevertheless, the first century of mathematical logic did not take it seriously enough to study the ancient logic texts. A renaissance in ancient logic studies occurred in the early 1950s with the publication of the landmark Aristotle’s Syllogistic by Jan Łukasiewicz (Oxford UP 1951, 2nd ed. 1957). Despite its title, it treats the logic of the Stoics as well as that of Aristotle. Łukasiewicz was a distinguished mathematical logician. He had created many-valued logic and the parenthesis-free prefix notation known as Polish notation. He co-authored with Alfred Tarski an important paper on the metatheory of propositional logic, and he was one of Tarski’s three main teachers at the University of Warsaw. Łukasiewicz’s stature was just short of that of the giants: Aristotle, Boole, Frege, Tarski, and Gödel. No mathematical logician of his caliber had ever before quoted the actual teachings of ancient logicians. Not only did Łukasiewicz inject fresh hypotheses, new concepts, and imaginative modern perspectives into the field; his enormous prestige and that of the Warsaw School of Logic reflected on the whole field of ancient logic studies. Suddenly, this previously somewhat dormant and obscure field became active and gained in respectability and importance in the eyes of logicians, mathematicians, linguists, analytic philosophers, and historians. Next to Aristotle himself and perhaps the Stoic logician Chrysippus, Łukasiewicz is the most prominent figure in ancient logic studies. A huge literature traces its origins to Łukasiewicz.
This volume, Ancient Logic and Its Modern Interpretations, is based on the 1973 Buffalo Symposium on Modernist Interpretations of Ancient Logic, the first conference devoted entirely to critical assessment of the state of ancient logic studies.
In the paper, an original formal-logical conception of the syntactic and semantic (intensional and extensional) senses of expressions of any language L is outlined. Syntax and the bi-level, intensional and extensional, semantics of language L are characterized categorically: in the spirit of some of Husserl’s ideas of pure grammar, the Leśniewski–Ajdukiewicz theory of syntactic/semantic categories, and in accordance with Frege’s ontological canons, Bocheński’s famous motto that syntax mirrors ontology, and some ideas of Suszko: language should be a linguistic scheme of ontological reality and simultaneously a tool of its cognition. In the logical conception of language L, its expressions should satisfy some general conditions of language adequacy. The adequacy ensures their unambiguous syntactic and semantic senses and their mutual syntactic and semantic compatibility, a correspondence guaranteed by the acceptance of a postulate of categorial compatibility of the syntactic and semantic categories of expressions of L. From this postulate, three principles of compositionality follow: one syntactic and two semantic, already known to Frege. They are treated as conditions of the homomorphism of the partial algebra of L into the algebraic models of L: syntactic, intensional, and extensional. In the paper, they are applied to some expressions with quantifiers. Language adequacy connected with the logical senses described in the logical conception of language L is, of course, an idealization, but only expressions with high degrees of precision of their senses, after due justification, may become theorems of science.
Sentences about logic are often used to show that certain embedding expressions are hyperintensional. Yet it is not clear how to regiment “logic talk” in the object language so that it can be compositionally embedded under such expressions. In this paper, I develop a formal system called hyperlogic that is designed to do just that. I provide a hyperintensional semantics for hyperlogic that doesn’t appeal to logically impossible worlds, as traditionally understood, but instead uses a shiftable parameter that determines the interpretation of the logical connectives. I argue this semantics compares favorably to the more common impossible worlds semantics, which faces difficulties interpreting propositionally quantified logic talk.
Gaining information can be modelled as a narrowing of epistemic space. Intuitively, becoming informed that such-and-such is the case rules out certain scenarios or would-be possibilities. Chalmers’s account of epistemic space treats it as a space of a priori possibility, and so has trouble dealing with the information which we intuitively feel can be gained from logical inference. I propose a more inclusive notion of epistemic space, based on Priest’s notion of open worlds, which contains only those epistemic scenarios which are not obviously impossible. Whether something is obvious is not always a determinate matter, and so the resulting picture is of an epistemic space with fuzzy boundaries.
What does it mean for the laws of logic to fail? My task in this paper is to answer this question. I use the resources that Routley/Sylvan developed with his collaborators for the semantics of relevant logics to explain a world where the laws of logic fail. I claim that the non-normal worlds that Routley/Sylvan introduced are exactly such worlds. To disambiguate different kinds of impossible worlds, I call such worlds logically impossible worlds. At a logically impossible world, the laws of logic fail. In this paper, I provide a definition of logically impossible worlds. I then show that there is nothing strange about admitting such worlds.
Paraconsistent logics are logical systems that reject the classical principle, usually dubbed Explosion, that a contradiction implies everything. However, the received view about paraconsistency focuses only on the inferential version of Explosion, which is concerned with formulae, thereby overlooking other possible accounts. In this paper, we propose to focus, additionally, on a meta-inferential version of Explosion, one concerned with inferences or sequents. In doing so, we offer a new characterization of paraconsistency by means of which a logic is paraconsistent if it invalidates either the inferential or the meta-inferential notion of Explosion. We show the non-triviality of this criterion by discussing a number of logics: on the one hand, logics which validate or invalidate both versions of Explosion, such as classical logic and Asenjo–Priest’s 3-valued logic LP; on the other hand, logics which validate one version of Explosion but not the other, such as the substructural logics TS and ST, introduced by Malinowski and by Cobreros, Egré, Ripley and van Rooij, which are obtained via Malinowski’s and Frankowski’s q- and p-matrices, respectively.
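As a brief illustration (my own sketch, not drawn from the paper), the inferential version of Explosion can be checked by brute force over many-valued valuations. Classical logic validates it vacuously, since A and not-A are never both true, while LP, whose third value ("both") is designated and is its own negation, provides a countermodel:

```python
from itertools import product

# Truth values: 1 = true, 0.5 = both (LP's glut value), 0 = false.
# A value is "designated" if valid inference must preserve it.
# Negation maps a value v to 1 - v in both classical logic and LP.
def neg(v):
    return 1 - v

def explosion_valid(values, designated):
    """Check whether A, not-A entail B over all valuations of A and B."""
    for a, b in product(values, repeat=2):
        if a in designated and neg(a) in designated and b not in designated:
            return False  # countermodel: premises designated, conclusion not
    return True

print(explosion_valid({0, 1}, {1}))            # classical logic: True
print(explosion_valid({0, 0.5, 1}, {0.5, 1}))  # LP: False (take A = 0.5, B = 0)
```

The LP countermodel assigns A the glut value, so both A and not-A are designated while B is simply false; this is exactly the sense in which LP invalidates the inferential notion of Explosion.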
Buddhist philosophers have investigated the techniques and methodologies of debate and argumentation, which are important aspects of Buddhist intellectual life. This was particularly the case in India, where Buddhism and Buddhist philosophy originated. But these investigations have also engaged philosophers in China, Japan, Korea and Tibet, and many other parts of the world that have been influenced by Buddhism and Buddhist philosophy. Several elements of the Buddhist tradition of philosophy are thought to be part of this investigation.

There are interesting reasoning patterns discernible in the writings of Buddhist philosophers. For instance, the Mādhyamika philosopher Nāgārjuna presents arguments for the emptiness of all things in terms of the catuṣkoṭi (four ‘corners’: roughly speaking, true, false, both, neither). There are also the Indian vāda (debate) literature and the Tibetan bsdus grwa (collected topics) texts that list techniques of debate.

The main interest here is the tradition of Buddhist philosophy, sometimes referred to as Pramāṇavāda, whose central figures are Dignāga (approx. 480–540 CE) and Dharmakīrti (6th–7th century CE). This philosophy is understood to be the Buddhist school of logic-epistemology. ‘Pramāṇavāda’ is not a doxographical term traditionally used to refer to a recognised school of Buddhist philosophy; it is a conventional term that is sometimes used in modern literature. Nevertheless, in what follows, we think of it as a tradition within Buddhist philosophy, and ‘Buddhist logic’ refers to what is developed in this tradition.

Buddhist logicians have systematically analysed the kind of reasoning involved in acquiring knowledge. They hold that there are valid ways to reason that are productive of knowledge. In what follows, some of the main elements of their analyses will be described.
Note, however, that, while exegetical studies of Buddhist texts are important, we must step back from them and consider what is involved in taking a Buddhist approach to logic in light of modern formal logic. This analysis will be done against the backdrop of the contemporary literature on logic and related subjects in philosophy, to make sense of the logical studies by Buddhist logicians.
My first paper on the Is/Ought issue. The young Arthur Prior endorsed the Autonomy of Ethics, in the form of Hume’s No-Ought-From-Is (NOFI), but the later Prior developed a seemingly devastating counter-argument. I defend Prior's earlier logical thesis (albeit in a modified form) against his later self. However, it is important to distinguish between three versions of the Autonomy of Ethics: Ontological, Semantic and Logical. Ontological Autonomy is the thesis that moral judgments, to be true, must answer to a realm of sui generis non-natural PROPERTIES. Semantic Autonomy insists on a realm of sui generis non-natural PREDICATES which do not mean the same as any natural counterparts. Logical Autonomy maintains that moral conclusions cannot be derived from non-moral premises with the aid of logic alone. Logical Autonomy does not entail Semantic Autonomy and Semantic Autonomy does not entail Ontological Autonomy. But, given some plausible assumptions, Ontological Autonomy entails Semantic Autonomy, and given the conservativeness of logic – the idea that in a valid argument you don’t get out what you haven’t put in – Semantic Autonomy entails Logical Autonomy. So if Logical Autonomy is false – as Prior appears to prove – then Semantic and Ontological Autonomy would appear to be false too! I develop a version of Logical Autonomy (or NOFI) and vindicate it against Prior’s counterexamples, which are also counterexamples to the conservativeness of logic as traditionally conceived. The key concept here is an idea derived in part from Quine: that of INFERENCE-RELATIVE VACUITY. I prove that you cannot derive conclusions in which the moral terms appear non-vacuously from premises from which they are absent. But this is because you cannot derive conclusions in which ANY (non-logical) terms appear non-vacuously from premises from which they are absent. Thus NOFI or Logical Autonomy comes out as an instance of the conservativeness of logic.
This means that the reverse entailment that I have suggested turns out to be a mistake. The falsehood of Logical Autonomy would not entail either the falsehood of Semantic Autonomy or the falsehood of Ontological Autonomy, since Semantic Autonomy only entails Logical Autonomy with the aid of the conservativeness of logic, of which Logical Autonomy is simply an instance. Thus NOFI or Logical Autonomy is vindicated, but it turns out to be a less world-shattering thesis than some have supposed. It provides no support for either non-cognitivism or non-naturalism.
A logic is called 'paraconsistent' if it rejects the rule called 'ex contradictione quodlibet', according to which any conclusion follows from inconsistent premises. While logicians have proposed many technically developed paraconsistent logical systems, and contemporary philosophers like Graham Priest have advanced the view that some contradictions can be true and advocated a paraconsistent logic to deal with them, until recent times these systems have been little understood by philosophers. This book presents a comprehensive overview of paraconsistent logical systems to change this situation. The book includes almost every major author currently working in the field. The papers are at the cutting edge of the literature: some discuss current debates while others present important new ideas. The editors have avoided papers about technical details of paraconsistent logic, concentrating instead upon works that discuss more 'big picture' ideas. Different treatments of paradoxes take centre stage in many of the papers, but there are also several papers on how to interpret paraconsistent logic and some on how it can be applied to philosophy of mathematics, the philosophy of language, and metaphysics.
This original research hypothesises that the most fundamental building blocks of logical descriptions of cognitive, or knowledge, agents are expressible based on their conceptions (of the world). This article conceptually and logically analyses agents’ conceptions in order to offer a constructivist-based logical model for terminological knowledge. The most significant characteristic of [terminological] knowing is that there are strong interrelationships between terminological knowledge and the individualistically constructed, and to-be-constructed, models of knowledge. Correspondingly, I conceptually and logically analyse conception expressions based on terminological knowledge, and I show how terminological knowledge may reasonably be assumed to be constructed based on the agents’ conceptions of the world. The focus of my model is on terminological knowledge structures, which may find applications in such diverse fields as the Semantic Web and educational/learning systems.
Imagine a dog tracing a scent to a crossroads, sniffing all but one of the exits, and then proceeding down the last without further examination. According to Sextus Empiricus, Chrysippus argued that the dog effectively employs disjunctive syllogism, concluding that since the quarry left no trace on the other paths, it must have taken the last. The story has been retold many times, with at least four different morals: (1) dogs use logic, so they are as clever as humans; (2) dogs use logic, so using logic is nothing special; (3) dogs reason well enough without logic; (4) dogs reason better for not having logic. This paper traces the history of Chrysippus's dog, from antiquity up to its discussion by relevance logicians in the twentieth century.
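The dog's inference, an extended disjunctive syllogism, can be verified mechanically. The following brute-force check over classical valuations (my own sketch, not part of the paper) confirms that the crossroads argument is valid:

```python
from itertools import product

# A brute-force validity check over classical two-valued valuations.
def valid(premises, conclusion, n_atoms):
    for vals in product([False, True], repeat=n_atoms):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # countermodel found
    return True

# The crossroads: the quarry took path p, q, or r; no scent on p or q.
premises = [
    lambda p, q, r: p or q or r,   # it went down one of the three paths
    lambda p, q, r: not p,         # no trace on the first path
    lambda p, q, r: not q,         # no trace on the second path
]
conclusion = lambda p, q, r: r     # so it took the last path

print(valid(premises, conclusion, 3))  # True
```

Since no valuation makes all three premises true while the conclusion is false, the dog's leap down the last path is classically impeccable, whatever one makes of the four morals above.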
Deontic logic is devoted to the study of logical properties of normative predicates such as permission, obligation and prohibition. Since it is usual to apply these predicates to actions, many deontic logicians have proposed formalisms where actions and action combinators are present. Some standard action combinators are action conjunction, choice between actions and not doing a given action. These combinators resemble boolean operators, and therefore the theory of boolean algebra offers a well-known mathematical framework to study the properties of the classic deontic operators when applied to actions. In his seminal work, Segerberg uses constructions coming from boolean algebras to formalize the usual deontic notions. Segerberg’s work provided the initial step to understand logical properties of deontic operators when they are applied to actions. In recent years, other authors have proposed related logics. In this chapter we introduce Segerberg’s work, study related formalisms and investigate further challenges in this area.
The Bounds of Logic presents a new philosophical theory of the scope and nature of logic based on critical analysis of the principles underlying modern Tarskian logic and inspired by mathematical and linguistic developments. Extracting central philosophical ideas from Tarski’s early work in semantics, Sher questions whether these are fully realized by the standard first-order system. The answer lays the foundation for a new, broader conception of logic. By generally characterizing logical terms, Sher establishes a fundamental result in semantics. Her development of the notion of logicality for quantifiers and her work on branching are of great importance for linguistics. Sher outlines the boundaries of the new logic and points out some of the philosophical ramifications of the new view of logic for such issues as the logicist thesis, ontological commitment, the role of mathematics in logic, and the metaphysical underpinning of logic. She proposes a constructive definition of logical terms, reexamines and extends the notion of branching quantification, and discusses various linguistic issues and applications.
The reasoning process of analogy is characterized by a strict interdependence between a process of abstraction of a common feature and the transfer of an attribute of the Analogue to the Primary Subject. The first reasoning step is regarded as an abstraction of a generic characteristic that is relevant for the attribution of the predicate. The abstracted feature can be considered from a logic-semantic perspective as a functional genus, in the sense that it is contextually essential for the attribution of the predicate, i.e. that it is pragmatically fundamental (i.e. relevant) for the predication, or rather the achievement of the communicative intention. While the transfer of the predicate from the Analogue to the analogical genus and from the genus to the Primary Subject is guaranteed by the maxims (or rules of inference) governing the genus-species relation, the connection between the genus and the predicate can be complex, characterized by various types of reasoning patterns. The relevance relation can hide implicit arguments, such as an implicit argument from classification, an evaluation based on values, consequences or rules, a causal relation, or an argument from practical reasoning.
According to traditional logical expressivism, logical operators allow speakers to explicitly endorse claims that are already implicitly endorsed in their discursive practice — endorsed in virtue of that practice’s having instituted certain logical relations. Here, I propose a different version of logical expressivism, according to which the expressive role of logical operators is explained without invoking logical relations at all, but instead in terms of the expression of discursive-practical attitudes. In defense of this alternative, I present a deflationary account of the expressive role of vocabulary by which we ascribe logical relations.
When discussing Logical Pluralism several critics argue that such an open-minded position is untenable. The key to this conclusion is that, given a number of widely accepted assumptions, the pluralist view collapses into Logical Monism. In this paper we show that the arguments usually employed to arrive at this conclusion do not work. The main reason for this is the existence of certain substructural logics which have the same set of valid inferences as Classical Logic—although they are, in a clear sense, non-identical to it. We argue that this phenomenon can be generalized, given the existence of logics which coincide with Classical Logic regarding a number of metainferential levels—although they are, again, clearly different systems. We claim this highlights the need to arrive at a more refined version of the Collapse Argument, which we discuss at the end of the paper.