An important part of the Unified Medical Language System (UMLS) is its Semantic Network, consisting of 134 Semantic Types connected to each other by edges formed by one or more of 54 distinct Relation Types. For many purposes, however, this Network is overly complex, and various groups have therefore attempted to simplify it. Here we take this work further by simplifying the relations which involve three Semantic Types: Diagnostic Procedure, Laboratory Procedure, and Therapeutic or Preventive Procedure. We define operators which can be used to generate terms instantiating types from this selected set when applied to terms designating certain other Semantic Types, including almost all the terms specifying clinical tasks. Usage of such operators thus provides a useful and economical way of specifying clinical tasks. The operators allow us to define a mapping between those types within the UMLS which do not represent clinical tasks and those which do. This mapping then provides a basis for an ontology of clinical tasks that can be used in the formulation of computer-interpretable clinical guideline models.
There are some who defend a view of vagueness according to which there are intrinsically vague objects or attributes in reality. Here, in contrast, we defend a view of vagueness as a semantic property of names and predicates. All entities are crisp, on this view, but there are, for each vague name, multiple portions of reality that are equally good candidates for being its referent, and, for each vague predicate, multiple classes of objects that are equally good candidates for being its extension. We provide a new formulation of these ideas in terms of a theory of granular partitions. We show that this theory provides a general framework within which we can understand the relation between vague terms and concepts on the one hand and correlated portions of reality on the other. We also sketch how it might be possible to formulate within this framework a theory of vagueness which dispenses with the notion of truth-value gaps and other artifacts of more familiar approaches.
An important problem in machine learning is that when the number of labels n > 2, it is very difficult to construct and optimize a group of learning functions, and we want the optimized learning functions to remain useful when the prior distribution P(x) (where x is an instance) changes. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory consists of a group of truth functions or membership functions. In comparison with the likelihood functions, Bayesian posteriors, and logistic functions used by popular methods, membership functions can be more conveniently used as learning functions without the above problem. In LBI, every label's learning is independent. For multilabel learning, we can directly obtain a group of optimized membership functions from a sufficiently large labeled sample, without preparing different samples for different labels. A group of CM algorithms is developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions on a two-dimensional feature space, 2-3 iterations can make the mutual information between the three classes and three labels surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved into the CM-EM algorithm, which can outperform the EM algorithm when mixture ratios are imbalanced or local convergence exists. The CM iteration algorithm needs to be combined with neural networks for MMI classifications on high-dimensional feature spaces. LBI needs further study toward the unification of statistics and logic.
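The Maximum Mutual Information objective mentioned in this abstract is a standard information-theoretic quantity. As a minimal, self-contained illustration (plain Python, not the paper's CM algorithms), the mutual information between classes and labels can be computed from a joint probability table:

```python
from math import log2

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint probability table,
    given as a list of rows: joint[i][j] = P(class i, label j)."""
    px = [sum(row) for row in joint]          # class marginal
    py = [sum(col) for col in zip(*joint)]    # label marginal
    return sum(
        p * log2(p / (px[i] * py[j]))
        for i, row in enumerate(joint)
        for j, p in enumerate(row)
        if p > 0  # 0 * log 0 = 0 by convention
    )

# Labels that identify three equiprobable classes perfectly:
# I(X;Y) = log2(3) ≈ 1.585 bits, the entropy of the class marginal.
perfect = [[1/3, 0, 0], [0, 1/3, 0], [0, 0, 1/3]]
print(mutual_information(perfect))

# Labels independent of the classes: I(X;Y) = 0.
independent = [[1/9] * 3 for _ in range(3)]
print(mutual_information(independent))
```

The CM algorithms described in the abstract iteratively adjust the classification partition so as to drive this quantity toward its maximum.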
A computer can come to understand natural language the same way Helen Keller did: by using “syntactic semantics”—a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller’s experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, the essay analyzes Keller’s belief that learning that “everything has a name” was the key to her success, enabling her to “partition” her mental concepts into mental representations of: words, objects, and the naming relations between them. It next looks at Herbert Terrace’s theory of naming, which is akin to Keller’s, and which only humans are supposed to be capable of. The essay suggests that computers at least, and perhaps non-human primates, are also capable of this kind of naming.
Observe that complement questions can be either directly or indirectly conjoined, but they can only be indirectly disjoined.
• What theories of questions and coordination predict this difference?
• Look at Partition theory (Groenendijk & Stokhof 1984) and Inquisitive Semantics (Groenendijk & Roelofsen 2009, Ciardelli et al. 2012).
This paper investigates the prospects for a semantic theory that treats disjunction as a modal operator. Potential motivation for such a theory comes from the way in which modals embed within disjunctions. After reviewing some of the relevant data, I go on to distinguish a variety of modal theories of disjunction. I analyze these theories by considering pairs of conflicting desiderata, highlighting some of the tradeoffs they must face.
The paper surveys the currently available axiomatizations of common belief (CB) and common knowledge (CK) by means of modal propositional logics. (Throughout, knowledge, whether individual or common, is defined as true belief.) Section 1 introduces the formal method of axiomatization followed by epistemic logicians, especially the syntax-semantics distinction, and the notion of a soundness and completeness theorem. Section 2 explains the syntactical concepts, while briefly discussing their motivations. Two standard semantic constructions, Kripke structures and neighbourhood structures, are introduced in Sections 3 and 4, respectively. It is recalled that Aumann's partitional model of CK is a particular case of a definition in terms of Kripke structures. The paper also restates the well-known fact that Kripke structures can be regarded as particular cases of neighbourhood structures. Section 3 reviews the soundness and completeness theorems proved w.r.t. the former structures by Fagin, Halpern, Moses and Vardi, as well as related results by Lismont. Section 4 reviews the corresponding theorems derived w.r.t. the latter structures by Lismont and Mongin. A general conclusion of the paper is that the axiomatization of CB does not require as strong systems of individual belief as was originally thought; only monotonicity has thus far proved indispensable. Section 5 explains another consequence of general relevance: despite the "infinitary" nature of CB, the axiom systems of this paper admit of effective decision procedures, i.e., they are decidable in the logician's sense.
Roman Suszko said that “Obviously, any multiplication of logical values is a mad idea and, in fact, Łukasiewicz did not actualize it.” The aim of the present paper is to qualify this ‘obvious’ statement through a number of logical and philosophical writings by Professor Jan Woleński, all focusing on the nature of truth-values and their multiple uses in philosophy. It results in a reconstruction of such an abstract object, doing justice to what Suszko held to be a ‘mad’ project within a generalized logic of judgments. Four main issues raised by Woleński will be considered to test the insightfulness of such generalized truth-values, namely: the principle of bivalence, the logic of scepticism, the coherence theory of truth, and nothingness.
It has been recently argued that the well-known square of opposition is a gathering that can be reduced to a one-dimensional figure, an ordered line segment of positive and negative integers [3]. However, one-dimensionality leads to some difficulties once the structure of opposed terms extends to more complex sets. An alternative algebraic semantics is proposed to solve the problem of dimensionality in a systematic way, namely: partition (or bitstring) semantics. Finally, an alternative geometry yields a new and unique pattern of oppositions that proceeds with colored diagrams and an increasing set of bitstrings.
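Bitstring semantics of the kind invoked here can be made concrete. Assuming the usual three-block partition {every P is Q, some but not every P is Q, no P is Q} from the bitstring literature (this encoding and the relation tests are standard conventions, not details taken from this paper), the corners of the square become three-bit strings and the opposition relations reduce to bitwise tests:

```python
# Bitstring semantics for the square of opposition over the partition
# {every P is Q, some but not every P is Q, no P is Q} -> 3-bit strings.
A = 0b100    # "all":      true only in the first block
E = 0b001    # "no":       true only in the third block
I = 0b110    # "some":     true in the first two blocks
O = 0b011    # "not all":  true in the last two blocks
TOP = 0b111  # the tautology over this partition

def relation(x, y):
    """Classify the opposition relation between two bitstrings."""
    if x != y and x & y == x:            # x's blocks are contained in y's
        return "subalternation"          # x entails y
    if x & y == 0 and x | y == TOP:
        return "contradiction"           # never true together, never false together
    if x & y == 0:
        return "contrariety"             # never true together, can be false together
    if x | y == TOP:
        return "subcontrariety"          # can be true together, never false together
    return "unconnected"

print(relation(A, O))  # contradiction
print(relation(A, E))  # contrariety
print(relation(I, O))  # subcontrariety
print(relation(A, I))  # subalternation
```

Larger oppositional structures use the same tests over longer bitstrings, which is why the approach scales where the one-dimensional line segment does not.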
Since the early 1980s, there has been a debate in the semantics literature pertaining to whether wh-interrogatives can be directly disjoined, as main clauses and as complements. Those who held that the direct disjunction of wh-interrogatives was in conflict with certain theoretical considerations proposed that they could be disjoined indirectly. Indirect disjunction proceeds by first lifting both wh-interrogatives and then disjoining them; it assigns matrix-level scope to OR. As we will see, the notorious theoretical need for indirect disjunction has by now disappeared. But the factual question remains. Are wh-complements disjoined directly or indirectly? What is the fact of the matter? This paper argues that the case for indirect disjunction remains reasonably strong.
Languages vary in their semantic partitioning of the world. This has led to speculation that language might shape basic cognitive processes. Spatial cognition has been an area of research in which linguistic relativity – the effect of language on thought – has both been proposed and rejected. Prior studies have been inconclusive, lacking experimental rigor or appropriate research design. Lacking detailed ethnographic knowledge as well as failing to pay attention to intralanguage variations, these studies often fall short of defining an appropriate concept of language, culture, and cognition. Our study constitutes the first research exploring (1) individuals speaking different languages yet living (for generations) in the same immediate environment and (2) systematic intralanguage variation. Results show that language does not shape spatial cognition and plays at best the secondary role of foregrounding alternative possibilities for encoding spatial arrangements.
The logical basis for information theory is the newly developed logic of partitions that is dual to the usual Boolean logic of subsets. The key concept is a "distinction" of a partition, an ordered pair of elements in distinct blocks of the partition. The logical concept of entropy based on partition logic is the normalized counting measure of the set of distinctions of a partition on a finite set--just as the usual logical notion of probability based on the Boolean logic of subsets is the normalized counting measure of the subsets (events). Thus logical entropy is a measure on the set of ordered pairs, and all the compound notions of entropy (joint entropy, conditional entropy, and mutual information) arise in the usual way from the measure (e.g., the inclusion-exclusion principle)--just like the corresponding notions of probability. The usual Shannon entropy of a partition is developed by replacing the normalized count of distinctions (dits) by the average number of binary partitions (bits) necessary to make all the distinctions of the partition.
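The definition just given is directly computable: the logical entropy of a partition is the fraction of ordered pairs of elements that the partition distinguishes, which equals 1 minus the sum of squared block proportions. A small sketch checking that the two forms agree:

```python
from fractions import Fraction

def logical_entropy(partition, universe):
    """Logical entropy h(pi): the normalized count of distinctions,
    i.e. ordered pairs (u, v) whose elements lie in distinct blocks."""
    n = len(universe)
    dits = sum(
        1
        for u in universe
        for v in universe
        for B in partition
        for C in partition
        if B is not C and u in B and v in C
    )
    return Fraction(dits, n * n)

def logical_entropy_formula(partition, universe):
    """Equivalent closed form: h(pi) = 1 - sum of (|B|/|U|)^2 over blocks B."""
    n = len(universe)
    return 1 - sum(Fraction(len(B), n) ** 2 for B in partition)

U = {1, 2, 3, 4}
pi = [{1, 2}, {3}, {4}]
print(logical_entropy(pi, U))          # 5/8: 10 of the 16 ordered pairs are distinctions
print(logical_entropy_formula(pi, U))  # 5/8 again, via the closed form
```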
Our approach is based on a tri-partite method of integrating psychodynamic hypotheses, cognitive subliminal processes, and psychophysiological alpha power measures. We present ten social phobic subjects with three individually selected groups of words representing unconscious conflict, conscious symptom experience, and Osgood Semantic negative valence words used as a control word group. The unconscious conflict and conscious symptom words, presented subliminally and supraliminally, act as primes preceding the conscious symptom and control words presented as supraliminal targets. With alpha power as a marker of inhibitory brain activity, we show that unconscious conflict primes, only when presented subliminally, have a unique inhibitory effect on conscious symptom targets. This effect is absent when the unconscious conflict primes are presented supraliminally, or when the target is the control words. Unconscious conflict prime effects were found to correlate with a measure of repressiveness in a similar previous study (Shevrin et al., 1992, 1996). Conscious symptom primes have no inhibitory effect when presented subliminally. Inhibitory effects with conscious symptom primes are present, but only when the primes are supraliminal, and they did not correlate with repressiveness in a previous study (Shevrin et al., 1992, 1996). We conclude that while the inhibition following supraliminal conscious symptom primes is due to conscious threat bias, the inhibition following subliminal unconscious conflict primes provides a neurological blueprint for dynamic repression: it is only activated subliminally by an individual's unconscious conflict and has an inhibitory effect specific only to the conscious symptom. These novel findings constitute neuroscientific evidence for the psychoanalytic concepts of unconscious conflict and repression, while extending neuroscience theory and methods into the realm of personal, psychological meaning.
Abstract: We propose a view of vagueness as a semantic property of names and predicates. All entities are crisp, on this semantic view, but there are, for each vague name, multiple portions of reality that are equally good candidates for being its referent, and, for each vague predicate, multiple classes of objects that are equally good candidates for being its extension. We provide a new formulation of these ideas in terms of a theory of granular partitions. We show that this theory provides a general framework within which we can understand the relation between vague terms and concepts and the corresponding crisp portions of reality. We also sketch how it might be possible to formulate within this framework a theory of vagueness which dispenses with the notion of truth-value gaps and other artifacts of more familiar approaches. Central to our approach is the idea that judgments about reality involve in every case (1) a separation of reality into foreground and background of attention and (2) the feature of granularity. On this basis we attempt to show that even vague judgments made in naturally occurring contexts are not marked by truth-value indeterminacy. We distinguish, in addition to crisp granular partitions, also vague partitions, and reference partitions, and we explain the role of the latter in the context of judgments that involve vagueness. We conclude by showing how reference partitions provide an effective means by which judging subjects are able to temper the vagueness of their judgments by means of approximations.
Hugh MacColl is commonly seen as a pioneer of modal and many-valued logic, given his introduction of modalities that go beyond plain truth and falsehood. But a closer examination shows that such a legacy is debatable and should take into account the way in which these modalities proceeded. We argue that, while MacColl devised a modal logic in the broad sense of the word, he did not give rise to a many-valued logic in the strict sense. Rather, his logic is similar to a “non-Fregean logic”: an algebraic logic that partitions the semantic classes of truth and falsehood into subclasses but does not extend the range of truth-values.
I observe that the “concept-generator” theory of Percus and Sauerland (2003), Anand (2006), and Charlow and Sharvit (2014) does not predict an intuitive true interpretation of the sentence “Plato did not believe that Hesperus was Phosphorus”. In response, I present a simple theory of attitude reports which employs a fine-grained semantics for names, according to which names which intuitively name the same thing may have distinct compositional semantic values. This simple theory solves the problem with the concept-generator theory, but, as I go on to show, it has problems of its own. I present three examples which the concept-generator theory can accommodate, but the simple fine-grained theory cannot. These examples motivate the full theory of the paper, which combines the basic ideas behind the concept-generator theory with a fine-grained semantics for names. The examples themselves are of interest independently of my theory: two of them constrain the original concept-generator theory more tightly than previously discussed examples had.
According to the singular conception of reality, there are objects and there are singular properties, i.e. properties that are instantiated by objects separately. It has been argued that semantic considerations about plurals give us reasons to embrace a plural conception of reality. This is the view that, in addition to singular properties, there are plural properties, i.e. properties that are instantiated jointly by many objects. In this article, I propose and defend a novel semantic account of plurals which dispenses with plural properties and thus undermines the semantic argument in favor of the plural conception of reality.
We have a variety of different ways of dividing up, classifying, mapping, sorting and listing the objects in reality. The theory of granular partitions presented here seeks to provide a general and unified basis for understanding such phenomena in formal terms that is more realistic than existing alternatives. Our theory has two orthogonal parts: the first is a theory of classification; it provides an account of partitions as cells and subcells; the second is a theory of reference or intentionality; it provides an account of how cells and subcells relate to objects in reality. We define a notion of well-formedness for partitions, and we give an account of what it means for a partition to project onto objects in reality. We continue by classifying partitions along three axes: (a) in terms of the degree of correspondence between partition cells and objects in reality; (b) in terms of the degree to which a partition represents the mereological structure of the domain it is projected onto; and (c) in terms of the degree of completeness with which a partition represents this domain.
Classical logic is usually interpreted as the logic of propositions. But from Boole's original development up to modern categorical logic, there has always been the alternative interpretation of classical logic as the logic of subsets of any given (nonempty) universe set. Partitions on a universe set are dual to subsets of a universe set in the sense of the reverse-the-arrows category-theoretic duality--which is reflected in the duality between quotient objects and subobjects throughout algebra. Hence the idea arises of a dual logic of partitions. That dual logic is described here. Partition logic is at the same mathematical level as subset logic since models for both are constructed from (partitions on or subsets of) arbitrary unstructured sets with no ordering relations, compatibility or accessibility relations, or topologies on the sets. Just as Boole developed logical finite probability theory as a quantitative treatment of subset logic, applying the analogous mathematical steps to partition logic yields a logical notion of entropy so that information theory can be refounded on partition logic. But the biggest application is that when partition logic and the accompanying logical information theory are "lifted" to complex vector spaces, then the mathematical framework of quantum mechanics is obtained. Partition logic models indefiniteness (i.e., numerical attributes on a set become more definite as the inverse-image partition becomes more refined) while subset logic models the definiteness of classical physics (an entity either definitely has a property or definitely does not). Hence partition logic provides the backstory so the old idea of "objective indefiniteness" in QM can be fleshed out to a full interpretation of quantum mechanics.
In virtue of what does a sign have meaning? This is the question raised by Wittgenstein's rule-following considerations. Semantic dispositionalism is a (type of) theory that purports to answer this question. The present paper argues that semantic dispositionalism faces a heretofore unnoticed problem, one that ultimately comes down to its reliance on unanalyzed notions of repeated types of signs. In the context of responding to the rule-following paradox—and offering a putative solution to it—this amounts to simply assuming a solution to the problem in one domain and using it to solve the same problem in another. Given, moreover, the level at which the rule-following paradox undercuts dispositionalism—the level of the notion of a sign's repetition—the objections made to the view also rule out causal/informational theories of meaning as well as communitarian/assertion-theoretic ones as potential solutions to the rule-following paradox.
Written as a comment on Crispin Wright's "Vagueness: A Fifth Column Approach", this paper defends a form of supervaluationism against Wright's criticisms. Along the way, however, it takes up the question of what is really wrong with Epistemicism, how the appeal of the Sorites ought properly to be understood, and why Contextualist accounts of vagueness won't do.
I argue that semantics is the study of the proprietary database of a centrally inaccessible and informationally encapsulated input–output system. This system’s role is to encode and decode partial and defeasible evidence of what speakers are saying. Since information about nonlinguistic context is therefore outside the purview of semantic processing, a sentence’s semantic value is not its content but a partial and defeasible constraint on what it can be used to say. I show how to translate this thesis into a detailed compositional-semantic theory based on the influential framework of Heim and Kratzer. This approach situates semantics within an independently motivated account of human cognitive architecture and reveals the semantics–pragmatics interface to be grounded in the underlying interface between modular and central systems.
Modern categorical logic as well as the Kripke and topological models of intuitionistic logic suggest that the interpretation of ordinary “propositional” logic should in general be the logic of subsets of a given universe set. Partitions on a set are dual to subsets of a set in the sense of the category-theoretic duality of epimorphisms and monomorphisms—which is reflected in the duality between quotient objects and subobjects throughout algebra. If “propositional” logic is thus seen as the logic of subsets of a universe set, then the question naturally arises of a dual logic of partitions on a universe set. This paper is an introduction to that logic of partitions dual to classical subset logic. The paper goes from basic concepts up through the correctness and completeness theorems for a tableau system of partition logic.
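The duality can be illustrated with the basic operations of partition logic. A minimal sketch (the join as common refinement and the refinement order are standard textbook notions; the tableau system itself is not reproduced here):

```python
def join(pi, sigma):
    """Join (common refinement) of two partitions of the same set:
    its blocks are the nonempty intersections of a block of pi
    with a block of sigma."""
    return [B & C for B in pi for C in sigma if B & C]

def refines(pi, sigma):
    """pi refines sigma iff every block of pi lies inside some block of
    sigma (the partial order underlying partition logic)."""
    return all(any(B <= C for C in sigma) for B in pi)

U = {1, 2, 3, 4}
pi = [{1, 2}, {3, 4}]
sigma = [{1, 3}, {2, 4}]

common = join(pi, sigma)
print(common)  # four singleton blocks: the discrete partition on U
print(refines(common, pi), refines(common, sigma))  # True True
print(refines(pi, sigma))  # False: neither partition refines the other
```

Where subset logic orders subsets by inclusion, partition logic orders partitions by refinement; the join above is the least partition refining both arguments.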
The epistemic modal auxiliaries must and might are vehicles for expressing the force with which a proposition follows from some body of evidence or information. Standard approaches model these operators using quantificational modal logic, but probabilistic approaches are becoming increasingly influential. According to a traditional view, must is a maximally strong epistemic operator and might is a bare possibility one. A competing account—popular amongst proponents of a probabilistic turn—says that, given a body of evidence, must φ entails that Pr(φ) is high but non-maximal and might φ that Pr(φ) is significantly greater than 0. Drawing on several observations concerning the behavior of must, might and similar epistemic operators in evidential contexts, deductive inferences, downplaying and retraction scenarios, and expressions of epistemic tension, I argue that those two influential accounts have systematic descriptive shortcomings. To better make sense of their complex behavior, I propose instead a broadly Kratzerian account according to which must φ entails that Pr(φ) = 1 and might φ that Pr(φ) > 0, given a body of evidence and a set of normality assumptions about the world. From this perspective, must and might are vehicles for expressing a common mode of reasoning whereby we draw inferences from specific bits of evidence against a rich set of background assumptions—some of which we represent as defeasible—which capture our general expectations about the world. I will show that the predictions of this Kratzerian account can be substantially refined once it is combined with a specific yet independently motivated ‘grammatical’ approach to the computation of scalar implicatures. Finally, I discuss some implications of these results for more general discussions concerning the empirical and theoretical motivation to adopt a probabilistic semantic framework.
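The entailments attributed to the Kratzerian account (must φ requires Pr(φ) = 1, might φ requires Pr(φ) > 0, relative to evidence plus normality assumptions) can be rendered as a toy possible-worlds model. This is an illustrative sketch of those stated truth conditions only, not the paper's full system; the world labels and the rain proposition are invented for the example:

```python
def must(phi, evidence, normal):
    """must phi: phi holds at every live world compatible with both the
    evidence and the normality assumptions, i.e. Pr(phi) = 1 on that set."""
    return all(phi(w) for w in evidence & normal)

def might(phi, evidence, normal):
    """might phi: phi holds at some normal live world, i.e. Pr(phi) > 0."""
    return any(phi(w) for w in evidence & normal)

# Toy scenario: the evidence (wet streets) leaves three worlds open, but
# w3 (a street-cleaning truck) is excluded by the normality assumptions.
evidence = {"w1", "w2", "w3"}   # worlds compatible with the evidence
normal = {"w1", "w2"}           # worlds satisfying the normality assumptions
rained = lambda w: w in {"w1", "w2"}

print(must(rained, evidence, normal))                    # True
print(might(lambda w: not rained(w), evidence, normal))  # False
```

Note the duality the sketch preserves: must φ holds exactly when might ¬φ fails, once both are relativized to the same evidence and normality assumptions.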
Classical physics and quantum physics suggest two metaphysical types of reality: the classical notion of an objectively definite reality with properties "all the way down," and the quantum notion of an objectively indefinite type of reality. The problem of interpreting quantum mechanics (QM) is essentially the problem of making sense out of an objectively indefinite reality. These two types of reality can be respectively associated with the two mathematical concepts of subsets and quotient sets (or partitions) which are category-theoretically dual to one another and which are developed in two mathematical logics, the usual Boolean logic of subsets and the more recent logic of partitions. Our sense-making strategy is "follow the math" by showing how the logic and mathematics of set partitions can be transported in a natural way to Hilbert spaces where it yields the mathematical machinery of QM--which shows that the mathematical framework of QM is a type of logical system over ℂ. And then we show how the machinery of QM can be transported the other way down to the set-like vector spaces over ℤ₂, showing how the classical logical finite probability calculus (in a "non-commutative" version) is a type of "quantum mechanics" over ℤ₂, i.e., over sets. In this way, we try to make sense out of objective indefiniteness and thus to interpret quantum mechanics.
I develop and defend a truthmaker semantics for the relevant logic R. The approach begins with a simple philosophical idea and develops it in various directions, so as to build a technically adequate relevant semantics. The central philosophical idea is that truths are true in virtue of specific states. Developing the idea formally results in a semantics on which truthmakers are relevant to what they make true. A very natural notion of conditionality is added, giving us relevant implication. I then investigate ways to add conjunction, disjunction, and negation; and I discuss how to justify contraposition and excluded middle within a truthmaker semantics.
In this paper we propose a formal theory of partitions (ways of dividing up or sorting or mapping reality) and we show how the theory can be applied in the geospatial domain. We characterize partitions at two levels: as systems of cells (theory A), and in terms of their projective relation to reality (theory B). We lay down conditions of well-formedness for partitions and we define what it means for partitions to project truly onto reality. We continue by classifying well-formed partitions along three axes: (a) degree of correspondence between partition cells and objects in reality; (b) degree to which a partition represents the mereological structure of the domain it is projected onto; and (c) degree of completeness and exhaustiveness with which a partition represents reality. This classification is used to characterize three types of partitions that play an important role in spatial information science: cadastral partitions, categorical coverages, and the partitions involved in folk categorizations of the geospatial domain.
Externalism is the thesis that the contents of intentional states and speech acts are not determined by the way the subjects of those states or acts are internally. It is a widely accepted but not entirely uncontroversial thesis. Among such theses in philosophy, externalism is notable for owing the assent it commands almost entirely to thought experiments, especially to variants of Hilary Putnam's famous Twin Earth scenario. This paper presents a thought experiment-free argument for externalism. It shows that externalism is a deductive consequence of a pair of widely accepted principles whose relevance to the issue has hitherto gone unnoticed.
My paper examines the popular idea, defended by Kripke, that meaning is an essentially normative notion. I consider four common versions of this idea and suggest that none of them can be supported, either because the alleged normativity has nothing to do with normativity or because it cannot plausibly be said that meaning is normative in the sense suggested. I argue that contrary to received opinion, we don’t need normativity to secure the possibility of meaning. I conclude by considering the repercussions of rejecting semantic normativity on three central issues: justification, communication, and naturalism.
We discuss a well-known puzzle about the lexicalization of logical operators in natural language, in particular connectives and quantifiers. Of the many logically possible operators, only a few appear in the lexicon of natural languages: the connectives in English, for example, are conjunction and, disjunction or, and negated disjunction nor; the lexical quantifiers are all, some and no. The logically possible nand and nall are not expressed by lexical entries in English, nor in any natural language. Moreover, the lexicalized operators are all upward or downward monotone, an observation known as the Monotonicity Universal. We propose a logical explanation of lexical gaps and of the Monotonicity Universal, based on the dynamic behaviour of connectives and quantifiers. We define update potentials for logical operators as procedures to modify the context, under the assumption that an update by φ depends on the logical form of φ and on the speech act performed: assertion or rejection. We conjecture that the adequacy of update potentials determines the limits of lexicalizability for logical operators in natural language. Finally, we show that in this framework the Monotonicity Universal follows from the logical properties of the updates that correspond to each operator.
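Monotonicity of the kind the Monotonicity Universal concerns can be checked mechanically on a finite domain: a determiner Q is upward (downward) monotone in its scope when Q(A, B) and B ⊆ B′ (respectively B′ ⊆ B) imply Q(A, B′). The brute-force checker below verifies this for the lexicalized quantifiers, and also shows that the unlexicalized nall is itself downward monotone, so monotonicity alone cannot explain that lexical gap, which is part of what motivates the dynamic proposal. This is an illustration of the definitions, not the paper's formal framework:

```python
from itertools import combinations

DOMAIN = frozenset(range(4))

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def monotone(q, direction):
    """Check monotonicity of a determiner q(restrictor, scope) in its scope,
    exhaustively over the finite DOMAIN."""
    for A in subsets(DOMAIN):
        for B in subsets(DOMAIN):
            for B2 in subsets(DOMAIN):
                grows = B <= B2 if direction == "up" else B2 <= B
                if grows and q(A, B) and not q(A, B2):
                    return False
    return True

all_q = lambda A, B: A <= B        # "all"
some = lambda A, B: bool(A & B)    # "some"
no = lambda A, B: not (A & B)      # "no"
nall = lambda A, B: not (A <= B)   # unlexicalized "nall" (= "not all")

print(monotone(all_q, "up"), monotone(some, "up"), monotone(no, "down"))
# nall is also monotone, so the gap needs a different explanation:
print(monotone(nall, "down"))
```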
Externalism is widely endorsed within contemporary philosophy of mind and language. Despite this, it is far from clear how the externalist thesis should be construed and, indeed, why we should accept it. In this entry I distinguish and examine three central types of externalism: what I call foundational externalism, externalist semantics, and psychological externalism. I suggest that the most plausible version of externalism is not in fact a very radical thesis and does not have any terribly interesting implications for philosophy of mind, whereas the more radical and interesting versions of externalism are quite difficult to support.
In this paper, we present a new semantic framework designed to capture a distinctly cognitive or epistemic notion of meaning akin to Fregean senses. Traditional Carnapian intensions are too coarse-grained for this purpose: they fail to draw semantic distinctions between sentences that, from a Fregean perspective, differ in meaning. This has led some philosophers to introduce more fine-grained hyperintensions that allow us to draw semantic distinctions among co-intensional sentences. But the hyperintensional strategy has a flip-side: it risks drawing semantic distinctions between sentences that, from a Fregean perspective, do not differ in meaning. This is what we call the ‘new problem’ of hyperintensionality to distinguish it from the ‘old problem’ that faced the intensional theory. We show that our semantic framework offers a joint solution to both these problems by virtue of satisfying a version of Frege’s so-called ‘equipollence principle’ for sense individuation. Frege’s principle, we argue, not only captures the semantic intuitions that give rise to the old and the new problem of hyperintensionality, but also points the way to an independently motivated solution to both problems.
Philosophers have spilled a lot of ink over the past few years exploring the nature and significance of grounding. Kit Fine has made several seminal contributions to this discussion, including an exact treatment of the formal features of grounding [Fine, 2012a]. He has specified a language in which grounding claims may be expressed, proposed a system of axioms which capture the relevant formal features, and offered a semantics which interprets the language. Unfortunately, the semantics Fine offers faces a number of problems. In this paper, I review the problems and offer an alternative that avoids them. I offer a semantics for the pure logic of ground that is motivated by ideas already present in the grounding literature, and for which a natural axiomatization capturing central formal features of grounding is sound and complete. I also show how the semantics I offer avoids the problems faced by Fine’s semantics.
The article investigates the significance of the so-called phenomenon of apparent faultless disagreement for debates about the semantics of taste discourse. Two kinds of description of the phenomenon are proposed. The first ensures that faultless disagreement raises a distinctive philosophical challenge; yet, it is argued that Contextualist, Realist and Relativist semantic theories do not account for this description. The second, by contrast, makes the phenomenon irrelevant for the problem of what the right semantics of taste discourse should be. Lastly, the following dilemma is assessed: either faultless disagreement provides strong evidence against semantic theories; or its significance should be considerably downplayed.
Semantic information is usually supposed to satisfy the veridicality thesis: p qualifies as semantic information only if p is true. However, what it means for semantic information to be true is often left implicit, with correspondentist interpretations representing the most popular, default option. The article develops an alternative approach, namely a correctness theory of truth (CTT) for semantic information. This is meant as a contribution not only to the philosophy of information but also to the philosophical debate on the nature of truth. After the introduction, in Section 2, semantic information is shown to be translatable into propositional semantic information (i). In Section 3, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction and a purpose. This polarisation is normalised in Section 4, where [Q + R] is transformed into a Boolean question and its relative yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In Sections 5 and 6, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates Q by verifying and validating it (in the computer science’s sense of “verification” and “validation”); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer science’s sense of “proxy”) and (5) proximal access to m commutes with the distal access to s (in the category theory’s sense of “commutation”); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer science’s technical sense of the term) m enables one to read/write (access) s. Section 7 provides some further clarifications about CTT, in the light of semantic paradoxes. Section 8 draws a general conclusion about the nature of CTT as a theory for systems designers, not just systems users. In the course of the article all technical expressions from computer science are explained.
This paper considers a now familiar argument that the ubiquity of context-dependence threatens the project of natural language semantics, at least as that project has usually been conceived: as concerning itself with ‘what is said’ by an utterance of a given sentence. I argue in response that the ‘anti-semantic’ argument equivocates at a crucial point and, therefore, that we need not choose between semantic minimalism, truth-conditional pragmatism, and the like. Rather, we must abandon the idea, familiar from Kaplan and others, that utterances express propositions ‘relative to contexts’ and replace it with the Strawsonian idea that speakers express propositions by making utterances in contexts. The argument for this claim consists in a detailed investigation of the particular case of demonstratives, which I argue demand such a Strawsonian treatment. I then respond to several objections, the most important of which allege that the Strawsonian account somehow undermines the project of natural language semantics, or threatens the semantics-pragmatics distinction. Please note that the paper posted here is an extended version of what was published.
A semantics of pictorial representation should provide an account of how pictorial signs are associated with the contents they express. Unlike the familiar semantics of spoken languages, this problem has a distinctively spatial cast for depiction. Pictures themselves are two-dimensional artifacts, and their contents take the form of pictorial spaces, perspectival arrangements of objects and properties in three dimensions. A basic challenge is to explain how pictures are associated with the particular pictorial spaces they express. Inspiration here comes from recent proposals that analyze depiction in terms of geometrical projection. In this essay, I will argue that, for a central class of pictures, the projection-based theory of depiction provides the best explanation for how pictures express pictorial spaces, while rival perceptual and resemblance theories fall short. Since the composition of pictorial space is itself the basis for all other aspects of pictorial content, the proposal provides a natural foundation for further pictorial semantics.
In his “Bridging mainstream and formal ontology”, Augusto (2021) gives an excellent analysis of Dietrich von Freiberg’s idea of using causality as a partitioning principle for upper ontologies. For this, Dietrich’s notion of extrinsic principles is crucial. The question of whether causation can, and indeed should, be used as a partitioning principle for ontologies is discussed using mathematics and physics as examples.
This paper presents a semantical analysis of the weak Kleene logics Kw3 and PWK from the tradition of Bochvar and Halldén. These are three-valued logics in which a formula takes the third value if at least one of its components does. The paper establishes two main results: a characterisation result for the relation of logical consequence in PWK – that is, we individuate necessary and sufficient conditions for a set.
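The defining feature the abstract mentions – that a formula takes the third value whenever any of its components does – makes that value “infectious”. A minimal sketch of the weak Kleene truth tables, assuming nothing from the paper itself (the value labels and function names are illustrative):

```python
# Weak Kleene connectives: the third value U is "infectious" --
# any compound with a U component evaluates to U; otherwise the
# connectives behave classically.
T, F, U = "t", "f", "u"

def wk_neg(a):
    # negation: swaps t and f, leaves the third value fixed
    return U if a == U else (F if a == T else T)

def wk_and(a, b):
    # conjunction: U if either input is U, else classical "and"
    if U in (a, b):
        return U
    return T if (a == T and b == T) else F

def wk_or(a, b):
    # disjunction: U if either input is U, else classical "or"
    if U in (a, b):
        return U
    return T if (a == T or b == T) else F
```

On these same tables, Kw3 and PWK standardly differ only in which values count as designated: Kw3 designates t alone, whereas PWK designates both t and the third value, which is what makes PWK paraconsistent.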
This paper gives an outline of truthmaker semantics for natural language against the background of standard possible-worlds semantics. It develops a truthmaker semantics for attitude reports and deontic modals based on an ontology of attitudinal and modal objects and on a semantic function of clauses as predicates of such objects. It also presents new motivations for 'object-based truthmaker semantics' from intensional transitive verbs such as ‘need’, ‘look for’, ‘own’, and ‘buy’ and gives an outline of their semantics. This paper is a commissioned 'target' article, with commentaries by W. Davis, B. Arsenijevic, K. Moulton, K. Liefke, M. Kaufmann, R. Matthews, P. Portner and A. Rubinstein, P. Elliott, G. Ramchand and my reply.
There is no consensus yet on the definition of semantic information. This paper contributes to the current debate by criticising and revising the Standard Definition of semantic Information (SDI) as meaningful data, in favour of the Dretske-Grice approach: meaningful and well-formed data constitute semantic information only if they also qualify as contingently truthful. After a brief introduction, SDI is criticised for providing necessary but insufficient conditions for the definition of semantic information. SDI is incorrect because truth-values do not supervene on semantic information, and misinformation is not a type of semantic information but pseudo-information, that is, not semantic information at all. This is shown by arguing that none of the reasons for interpreting misinformation as a type of semantic information is convincing, whilst there are compelling reasons to treat it as pseudo-information. As a consequence, SDI is revised to include a necessary truth-condition. The last section summarises the main results of the paper and indicates some interesting areas of application of the revised definition.
I provide an analysis of sentences of the form ‘To be F is to be G’ in terms of exact truth-maker semantics—an approach that identifies the meanings of sentences with the states of the world directly responsible for their truth-values. Roughly, I argue that these sentences hold just in case that which makes something F is that which makes it G. This approach is hyperintensional, and possesses desirable logical and modal features. These sentences are reflexive, transitive and symmetric, and, if they are true, then they are necessarily true, and it is necessary that all and only Fs are Gs. I close by defining an asymmetric and irreflexive notion of analysis in terms of the reflexive and symmetric one.
The paper looks at the semantics and ontology of dispositions in the light of recent work on the subject. Objections to the simple conditionals apparently entailed by disposition statements are met by replacing them with so-called 'reduction sentences' and some implications of this are explored. The usual distinction between categorical and dispositional properties is criticised and the relation between dispositions and their bases examined. Applying this discussion to two typical cases leads to the conclusion that fragility is not a real property and that, while both temperature and its bases are, this does not generate any problem of overdetermination.
The aim of this paper is to reinterpret success semantics, a theory of mental content, according to which the content of a belief is fixed by the success conditions of some actions based on this belief. After arguing that in its present form, success semantics is vulnerable to decisive objections, I examine the possibilities of salvaging the core of this proposal. More specifically, I propose that the content of some very simple, but very important, mental states, the immediate mental antecedents of action, can be explained in this manner.
Expressivists about epistemic modals deny that ‘Jane might be late’ canonically serves to express the speaker’s acceptance of a certain propositional content. Instead, they hold that it expresses a lack of acceptance. Prominent expressivists embrace pragmatic expressivism: the doxastic property expressed by a declarative is not helpfully identified with that sentence’s compositional semantic value. Against this, we defend semantic expressivism about epistemic modals: the semantic value of a declarative from this domain is the property of doxastic attitudes it canonically serves to express. In support, we synthesize data from the critical literature on expressivism—largely reflecting interactions between modals and disjunctions—and present a semantic expressivism that readily predicts the data. This contrasts with salient competitors, including: pragmatic expressivism based on domain semantics or dynamic semantics; semantic expressivism à la Moss [2015]; and the bounded relational semantics of Mandelkern [2019].
The article addresses the problem of how semantic information can be upgraded to knowledge. The introductory section explains the technical terminology and the relevant background. Section 2 argues that, for semantic information to be upgraded to knowledge, it is necessary and sufficient to be embedded in a network of questions and answers that correctly accounts for it. Section 3 shows that an information flow network of type A fulfils such a requirement, by warranting that the erotetic deficit, characterising the target semantic information t by default, is correctly satisfied by the information flow of correct answers provided by an informational source s. Section 4 illustrates some of the major advantages of such a Network Theory of Account (NTA) and clears the ground of a few potential difficulties. Section 5 clarifies why NTA and an informational analysis of knowledge, according to which knowledge is accounted semantic information, is not subject to Gettier-type counterexamples. A concluding section briefly summarises the results obtained.
We present a minimum message length (MML) framework for trajectory partitioning by point selection, and use it to automatically select the tolerance parameter ε for Douglas-Peucker partitioning, adapting to local trajectory complexity. By examining a range of ε for synthetic and real trajectories, it is easy to see that the best ε does vary by trajectory, and that the MML encoding makes sensible choices and is robust against Gaussian noise. We use it to explore the identification of micro-activities within a longer trajectory. This MML metric is comparable to the TRACLUS metric – and shares the constraint of abstracting only by omission of points – but is a true lossless encoding. Such encoding has several theoretical advantages – particularly with very small segments (high frame rates) – but actual performance interacts strongly with the search algorithm. Both differ from unconstrained piecewise linear approximations, including other MML formulations.
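The MML criterion described here selects the tolerance ε automatically; the underlying Douglas-Peucker step that ε feeds into can be sketched as follows. This is a standard textbook implementation, not the authors' code: it keeps the point farthest from the chord between the segment's endpoints whenever that distance exceeds ε, and recurses on both halves.

```python
import math

def perp_dist(p, a, b):
    # Perpendicular distance from point p to the chord through a and b.
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    seg = math.hypot(dx, dy)
    if seg == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / seg

def douglas_peucker(points, eps):
    # Recursive simplification: retain the farthest interior point
    # if it lies more than eps from the endpoint chord, else drop
    # all interior points. Abstracts only by omission of points.
    if len(points) < 3:
        return list(points)
    i_max, d_max = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > d_max:
            i_max, d_max = i, d
    if d_max <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i_max + 1], eps)
    right = douglas_peucker(points[i_max:], eps)
    return left[:-1] + right  # avoid duplicating the split point
```

For example, a flat trajectory with one sharp spike keeps the spike and drops the near-collinear jitter when ε sits between the two scales; the MML framework's contribution is choosing that ε per segment rather than globally.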
There is a prevalent notion among cognitive scientists and philosophers of mind that computers are merely formal symbol manipulators, performing the actions they do solely on the basis of the syntactic properties of the symbols they manipulate. This view of computers has allowed some philosophers to divorce semantics from computational explanations. Semantic content, then, becomes something one adds to computational explanations to get psychological explanations. Other philosophers, such as Stephen Stich, have taken a stronger view, advocating doing away with semantics entirely. This paper argues that a correct account of computation requires us to attribute content to computational processes in order to explain which functions are being computed. This entails that computational psychology must countenance mental representations. Since anti-semantic positions are incompatible with computational psychology thus construed, they ought to be rejected. Lastly, I argue that in an important sense, computers are not formal symbol manipulators.