What is nature, and what could it be? These and related questions are fundamental to how we think about and act upon nature. This textbook offers a historical-systematic and at the same time practice-oriented introduction to the philosophy of nature and its central concepts. It brings the plural character of the perception of nature into philosophical focus and is also well suited to self-study.
Most epistemologists hold that knowledge entails belief. However, proponents of this claim rarely offer a positive argument in support of it. Rather, they tend to treat the view as obvious and assert that there are no convincing counterexamples. We find this strategy to be problematic. We do not find the standard view obvious, and moreover, we think there are cases in which it is intuitively plausible that a subject knows some proposition P without—or at least without determinately—believing that P. Accordingly, we present five plausible examples of knowledge without (determinate) belief, and we present empirical evidence suggesting that our intuitions about these scenarios are not atypical.
Statements about the behavior of biochemical entities (e.g., about the interaction between two proteins) abound in the literature on molecular biology and are increasingly becoming the targets of information extraction and text mining techniques. We show that an accurate analysis of the semantics of such statements reveals a number of ambiguities that have to be taken into account in the practice of biomedical ontology engineering: such statements can not only be understood as event-reporting statements, but also as ascriptions of dispositions or tendencies that may or may not refer to collectives of interacting molecules or even to collectives of interaction events.
This paper embeds the core part of Discourse Representation Theory in the classical theory of types plus a few simple axioms that allow the theory to express key facts about variables and assignments on the object level of the logic. It is shown how the embedding can be used to combine core analyses of natural language phenomena in Discourse Representation Theory with analyses that can be obtained in Montague Semantics.
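To make the objects being embedded concrete, here is a minimal Haskell sketch of DRSs and their sequential merge. The data type is an illustrative simplification of standard DRT; the names and encoding are mine, not the paper's type-logical embedding.

```haskell
-- A minimal sketch of Discourse Representation Structures (DRSs):
-- a DRS pairs a list of discourse referents with a list of conditions.
type Ref = String

data Cond
  = Pred String [Ref]   -- e.g. Pred "farmer" ["x"]
  | Eq Ref Ref          -- x = y
  | Neg DRS             -- negated sub-DRS
  | Impl DRS DRS        -- "every"-style duplex condition
  deriving Show

data DRS = DRS [Ref] [Cond]
  deriving Show

-- Sequential merge of two DRSs, as used when processing a text
-- sentence by sentence: referents and conditions accumulate.
merge :: DRS -> DRS -> DRS
merge (DRS r1 c1) (DRS r2 c2) = DRS (r1 ++ r2) (c1 ++ c2)

-- "A farmer owns a donkey."
example :: DRS
example = merge (DRS ["x"] [Pred "farmer" ["x"]])
                (DRS ["y"] [Pred "donkey" ["y"], Pred "owns" ["x", "y"]])
```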
During much of the past century, it was widely believed that phonemes--the human speech sounds that constitute words--have no inherent semantic meaning, and that the relationship between a combination of phonemes (a word) and its referent is simply arbitrary. Although recent work has challenged this picture by revealing psychological associations between certain phonemes and particular semantic contents, the precise mechanisms underlying these associations have not been fully elucidated. Here we provide novel evidence that certain phonemes have an inherent, non-arbitrary emotional quality. Moreover, we show that the perceived emotional valence of certain phoneme combinations depends on a specific acoustic feature--namely, the dynamic shift within the phonemes' first two frequency components. These data suggest a phoneme-relevant acoustic property influencing the communication of emotion in humans, and provide further evidence against previously held assumptions regarding the structure of human language. This finding has potential applications for a variety of social, educational, clinical, and marketing contexts.
In this paper it is shown how the DRT (Discourse Representation Theory) treatment of temporal anaphora can be formalized within a version of Montague Semantics that is based on classical type logic.
The paper shows how ideas that explain the sense of an expression as a method or algorithm for finding its reference, foreshadowed in Frege’s dictum that sense is the way in which a referent is given, can be formalized on the basis of the ideas in Thomason (1980). To this end, the function that sends propositions to truth values or sets of possible worlds in Thomason (1980) must be replaced by a relation, and the meaning postulates governing the behaviour of this relation must be given in the form of a logic program. The resulting system not only throws light on the properties of sense and their relation to computation, but also shows circular behaviour if some ingredients of the Liar Paradox are added. The connection is natural, as algorithms can be inherently circular and the Liar is explained as expressing one of those. Many ideas in the present paper are closely related to those in Moschovakis (1994), but receive a considerably lighter formalization.
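The circularity at issue can be made vivid in any language that allows self-referential definitions. The fragment below is only an analogy of mine, not the paper's logic-programming formalization: construed as an algorithm, the sense of the Liar sentence never finishes computing a reference.

```haskell
-- An analogy for the Liar as an inherently circular algorithm
-- (not the paper's own formalization): a truth value defined
-- in terms of its own negation.
liar :: Bool
liar = not liar
-- The procedure that would compute the reference of 'liar' calls
-- itself without progress, so no truth value is ever returned.

main :: IO ()
main = print liar  -- diverges (detected as <<loop>> by GHC's runtime)
```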
A logic is called higher order if it allows for quantification over higher order objects, such as functions of individuals, relations between individuals, functions of functions, relations between functions, etc. Higher order logic began with Frege, was formalized in Russell [46] and Whitehead and Russell [52] early in the previous century, and received its canonical formulation in Church [14]. While classical type theory has long since been overshadowed by set theory as a foundation of mathematics, recent decades have shown remarkable comebacks in the fields of mechanized reasoning (see, e.g., Benzmüller …).
In this paper we define intensional models for the classical theory of types, thus arriving at an intensional type logic ITL. Intensional models generalize Henkin's general models and have a natural definition. As a class they do not validate the axiom of Extensionality. We give a cut-free sequent calculus for type theory and show completeness of this calculus with respect to the class of intensional models via a model existence theorem. After this we turn our attention to applications. Firstly, it is argued that, since ITL is truly intensional, it can be used to model ascriptions of propositional attitude without predicting logical omniscience. In order to illustrate this a small fragment of English is defined and provided with an ITL semantics. Secondly, it is shown that ITL models contain certain objects that can be identified with possible worlds. Essential elements of modal logic become available within classical type theory once the axiom of Extensionality is given up.
In this paper we consider the theory of predicate logics in which the principle of Bivalence or the principle of Non-Contradiction or both fail. Such logics are partial or paraconsistent or both. We consider sequent calculi for these logics and prove Model Existence. For L4, the most general logic under consideration, we also prove a version of the Craig-Lyndon Interpolation Theorem. The paper shows that many techniques used for classical predicate logic generalise to partial and paraconsistent logics once the right set-up is chosen. Our logic L4 has a semantics that also underlies Belnap’s [4] and is related to the logic of bilattices. L4 is in focus most of the time, but it is also shown how results obtained for L4 can be transferred to several variants.
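For orientation, here is a sketch of the standard Belnap-style four-valued semantics that such logics build on (my illustration, not the paper's own presentation): each value records whether a sentence has been told true and whether it has been told false. Bivalence fails because a sentence may be told neither; Non-Contradiction fails because it may be told both.

```haskell
-- Belnap's four values as pairs (told true, told false):
--   N = Four False False  -- neither (Bivalence fails here)
--   T = Four True  False  -- true only
--   F = Four False True   -- false only
--   B = Four True  True   -- both (Non-Contradiction fails here)
data Four = Four { toldTrue :: Bool, toldFalse :: Bool }
  deriving (Eq, Show)

notF :: Four -> Four
notF (Four t f) = Four f t      -- negation swaps the two dimensions

andF, orF :: Four -> Four -> Four
andF (Four t1 f1) (Four t2 f2) = Four (t1 && t2) (f1 || f2)
orF  (Four t1 f1) (Four t2 f2) = Four (t1 || t2) (f1 && f2)

-- Entailment at a single valuation: preservation of being told true.
entails :: Four -> Four -> Bool
entails p q = not (toldTrue p) || toldTrue q
```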
In this paper we discuss a new perspective on the syntax-semantics interface. Semantics, in this new set-up, is not ‘read off’ from Logical Forms as in mainstream approaches to generative grammar. Nor is it assigned to syntactic proofs using a Curry-Howard correspondence as in versions of the Lambek Calculus, or read off from f-structures using Linear Logic as in Lexical-Functional Grammar (LFG, Kaplan & Bresnan [9]). All such approaches are based on the idea that syntactic objects (trees, proofs, f-structures) are somehow prior and that semantics must be parasitic on those syntactic objects. We challenge this idea and develop a grammar in which syntax and semantics are treated in a strictly parallel fashion. The grammar will have many ideas in common with the (converging) frameworks of categorial grammar and LFG, but its treatment of the syntax-semantics interface is radically different. Also, although the meaning component of the grammar is a version of Montague semantics and although there are obvious affinities between Montague’s conception of grammar and the work presented here, the grammar is not compositional, in the sense that composition of meaning need not follow surface structure.
The pivotal role of the relation part-of in the description of living organisms is widely acknowledged. Organisms are open systems, which means that in contradistinction to mechanical artifacts they are characterized by a continuous flow and exchange of matter. A closer analysis of the spatial relations in biological organisms reveals that the decision as to whether a given particular is part-of a second particular or whether it is only contained-in the second particular is often controversial. We here propose a rule-based approach which allows us to decide on the basis of well-defined criteria which of the two relations holds between two anatomical objects, given that one spatially includes the other. We discuss the advantages and limitations of this approach, using concrete examples from human anatomy.
The neurosciences not only challenge assumptions about the mind’s place in the natural world but also urge us to reconsider its role in the normative world. Based on mind-brain dualism, the law affords only one-sided protection: it systematically protects bodies and brains, but only fragmentarily minds and mental states. The fundamental question of in what ways people may legitimately change the mental states of others is largely unexplored in legal thinking. With novel technologies to both intervene into minds and detect mental activity, the law should, we suggest, introduce stand-alone protection for the inner sphere of persons. We shall address some metaphysical questions concerning physical and mental harm and demonstrate gaps in current doctrines, especially in regard to manipulative interferences with decision-making processes. We then outline some reasons for the law to recognize a human right to mental liberty and propose elements of a novel criminal offence proscribing severe interventions into other minds.
The paper develops Lambda Grammars, a form of categorial grammar that, unlike other categorial formalisms, is non-directional. Linguistic signs are represented as sequences of lambda terms and are combined with the help of linear combinators.
Vector models of language are based on the contextual aspects of words and how they co-occur in text. Truth conditional models focus on the logical aspects of language, the denotations of phrases, and their compositional properties. In the latter approach the denotation of a sentence determines its truth conditions and can be taken to be a truth value, a set of possible worlds, a context change potential, or similar. In this short paper, we develop a vector semantics for language based (...) on the simply typed lambda calculus. Our semantics uses techniques familiar from the truth conditional tradition and is based on a form of dynamic interpretation inspired by Heim's context updates. (shrink)
This paper introduces λ-grammar, a form of categorial grammar that has much in common with LFG. Like other forms of categorial grammar, λ-grammars are multi-dimensional and their components are combined in a strictly parallel fashion. Grammatical representations are combined with the help of linear combinators, closed pure λ-terms in which each abstractor binds exactly one variable. Mathematically this is equivalent to employing linear logic, in use in LFG for semantic composition, but the method seems more practicable.
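The strictly parallel mode of combination can be illustrated with a toy two-dimensional version; this simplification is mine, with plain rather than linear application and hypothetical lexical entries, not the paper's formalism.

```haskell
-- A sign with two dimensions: a phenogrammatical (string) component
-- and a semantic component. Real λ-grammars use sequences of lambda
-- terms combined by linear combinators; this record is a toy version.
data Sign a b = Sign { phen :: a, den :: b }

-- Combination applies both dimensions in parallel.
combine :: Sign (a -> b) (c -> d) -> Sign a c -> Sign b d
combine (Sign f g) (Sign x y) = Sign (f x) (g y)

type E = String                 -- a stand-in domain of entities

john :: Sign String E
john = Sign "John" "j"

sleeps :: Sign (String -> String) (E -> String)
sleeps = Sign (++ " sleeps") (\x -> "sleep(" ++ x ++ ")")

-- combine sleeps john  ==  Sign "John sleeps" "sleep(j)"
```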
There are two kinds of semantic theories of anaphora. Some, such as Heim’s File Change Semantics, Groenendijk and Stokhof’s Dynamic Predicate Logic, or Muskens’ Compositional DRT (CDRT), seem to require full coindexing of anaphora and their antecedents prior to interpretation. Others, such as Kamp’s Discourse Representation Theory (DRT), do not require this coindexing and seem to have an important advantage here. In this squib I will sketch a procedure that the first group of theories may help themselves to so that they can interleave interpretation and coindexing in DRT’s way.
The desideratum of semantic interoperability has been intensively discussed in medical informatics circles in recent years. Originally, experts assumed that this issue could be sufficiently addressed by insisting simply on the application of shared clinical terminologies or clinical information models. However, the use of the term ‘ontology’ has been steadily increasing more recently. We discuss criteria for distinguishing clinical ontologies from clinical terminologies and information models. Then, we briefly present the role clinical ontologies play in two multicentric research projects. Finally, we discuss the interactions between these different kinds of knowledge representation artifacts and the stakeholders involved in developing interoperational real-world clinical applications. We provide ontology engineering examples from two EU-funded projects.
The ‘syntax’ and ‘combinatorics’ of my title are what Curry (1961) referred to as phenogrammatics and tectogrammatics respectively. Tectogrammatics is concerned with the abstract combinatorial structure of the grammar and directly informs semantics, while phenogrammatics deals with concrete operations on syntactic data structures such as trees or strings. In a series of previous papers (Muskens, 2001a; Muskens, 2001b; Muskens, 2003) I have argued for an architecture of the grammar in which finite sequences of lambda terms are the basic data structures, pairs of terms ⟨syntax, semantics⟩ for example. These sequences then combine with the help of simple generalizations of the usual abstraction and application operations. This theory, which I call Lambda Grammars and which is closely related to the independently formulated theory of Abstract Categorial Grammars (de Groote, 2001; de Groote, 2002), in fact is an implementation of Curry’s ideas: the level of tectogrammar is encoded by the sequences of lambda-terms and their ways of combination, while the syntactic terms in those sequences constitute the phenogrammatical level. In de Groote’s formulation of the theory, tectogrammar is the level of abstract terms, while phenogrammar is the level of object terms.
We present Logical Description Grammar (LDG), a model of grammar and the syntax-semantics interface based on descriptions in elementary logic. A description may simultaneously describe the syntactic structure and the semantics of a natural language expression, i.e., the describing logic talks about the trees and about the truth-conditions of the language described. Logical Description Grammars offer a natural way of dealing with underspecification in natural language syntax and semantics. If a logical description (up to isomorphism) has exactly one tree plus truth-conditions as a model, it completely specifies that grammatical object. More common is the situation, corresponding to underspecification, in which there is more than one model. A situation in which there are no models corresponds to an ungrammatical input.
In their paper ‘Nothing but the Truth’, Andreas Pietz and Umberto Rivieccio present Exactly True Logic (ETL), an interesting variation upon the four-valued logic for first-degree entailment FDE that was given by Belnap and Dunn in the 1970s. Pietz & Rivieccio provide this logic with a Hilbert-style axiomatisation and write that finding a nice sequent calculus for the logic will presumably not be easy. But a sequent calculus can be given and in this paper we will show that a calculus for the Belnap-Dunn logic we have defined earlier can in fact be reused for the purpose of characterising ETL, provided a small alteration is made—initial assignments of signs to the sentences of a sequent to be proved must be different from those used for characterising FDE. While Pietz & Rivieccio define ETL on the language of classical propositional logic, we also study its consequence relation on an extension of this language that is functionally complete for the underlying four truth values. On this extension the calculus gets a multiple-tree character—two proof trees may be needed to establish one proof.
Vector models of language are based on the contextual aspects of language, the distributions of words and how they co-occur in text. Truth conditional models focus on the logical aspects of language, compositional properties of words and how they compose to form sentences. In the truth conditional approach, the denotation of a sentence determines its truth conditions, which can be taken to be a truth value, a set of possible worlds, a context change potential, or similar. In the vector models, the degree of co-occurrence of words in context determines how similar the meanings of words are. In this paper, we put these two models together and develop a vector semantics for language based on the simply typed lambda calculus models of natural language. We provide two types of vector semantics: a static one that uses techniques familiar from the truth conditional tradition and a dynamic one based on a form of dynamic interpretation inspired by Heim’s context change potentials. We show how the dynamic model can be applied to entailment between a corpus and a sentence and provide examples.
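One way to see how lambda-calculus composition transfers to vectors is to read semantic function types as linear maps, so that applying an adjective to a noun becomes a matrix-vector product. The sketch below, with invented numbers and a hypothetical two-word lexicon, is an illustration in this spirit rather than the paper's construction.

```haskell
type Vec = [Double]
type Mat = [[Double]]           -- one row per output dimension

-- Application at the vector level: a linear map applied to a vector.
appM :: Mat -> Vec -> Vec
appM m v = [sum (zipWith (*) row v) | row <- m]

-- Toy lexicon over a 2-dimensional space (invented numbers):
cat :: Vec
cat = [0.9, 0.1]

black :: Mat                    -- an adjective as a linear map
black = [[1.0, 0.0],
         [0.5, 0.5]]

blackCat :: Vec
blackCat = appM black cat       -- == [0.9, 0.5]
```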
In this paper it is shown how simple texts that can be parsed in a Lambek Categorial Grammar can also automatically be provided with a semantics in the form of a Discourse Representation Structure in the sense of Kamp [1981]. The assignment of meanings to texts uses the Curry-Howard-Van Benthem correspondence.
In this paper we introduce a Gentzen calculus for (a functionally complete variant of) Belnap's logic in which establishing the provability of a sequent in general requires two proof trees, one establishing that whenever all premises are true some conclusion is true and one that guarantees the falsity of at least one premise if all conclusions are false. The calculus can also be put to use in proving that one statement necessarily approximates another, where necessary approximation is a natural dual of entailment. The calculus, and its tableau variant, not only capture the classical connectives, but also the ‘information’ connectives of four-valued Belnap logics. This answers a question by Avron.
In their recent paper ‘Bi-facial truth: a case for generalized truth values’, Zaitsev and Shramko [7] distinguish between an ontological and an epistemic interpretation of classical truth values. By taking the Cartesian product of the two disjoint sets of values thus obtained, they arrive at four generalized truth values and consider two “semi-classical negations” on them. The resulting semantics is used to define three novel logics which are closely related to Belnap’s well-known four-valued logic. A syntactic characterization of these logics is left for further work. In this paper, based on our previous work on a functionally complete extension of Belnap’s logic, we present a sound and complete tableau calculus for these logics. It crucially exploits the Cartesian nature of the four values, which is reflected in the fact that each proof consists of two tableaux. The bi-facial notion of truth of Z&S is thus augmented with a bi-facial notion of proof. We also provide translations between the logics for semi-classical negation and classical logic and show that an argument is valid in a logic for semi-classical negation just in case its translation is valid in classical logic.
Ambiguities in natural language can multiply so fast that no person or machine can be expected to process a text of even moderate length by enumerating all possible disambiguations. A sentence containing $n$ scope bearing elements which are freely permutable will have $n!$ readings, if there are no other, say lexical or syntactic, sources of ambiguity. A series of $m$ such sentences would lead to $(n!)^m$ possibilities. All in all the growth of possibilities will be so fast that generating readings first and testing their acceptability afterwards will not be feasible.

This insight has led a series of researchers to adopt a level of representation at which ambiguities remain unresolved. The idea here is not to generate and test many possible interpretations but to first generate one ‘underspecified’ representation which in a sense represents all its complete specifications and then use whatever information is available to further specify the result.

One central hypothesis in the paper will be that the relation between an underspecified representation and its full representations is not so much the relation between one structure and a set of other structures but is in fact the relation between a description (a set of logical sentences) and its models.
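The arithmetic is easy to check; the two-line function below (mine, for illustration) computes the count and shows how quickly enumeration becomes hopeless.

```haskell
-- Number of readings of a text of m sentences, each with n freely
-- permutable scope-bearing elements: (n!)^m.
readings :: Integer -> Integer -> Integer
readings n m = product [1..n] ^ m

-- readings 3 1 ==           6
-- readings 3 5 ==        7776
-- readings 5 5 == 24883200000
```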
This paper argues for the idea that in describing language we should follow Haskell Curry in distinguishing between the structure of an expression and its appearance or manifestation. It is explained how making this distinction obviates the need for directed types in type-theoretic grammars and a simple grammatical formalism is sketched in which representations at all levels are lambda terms. The lambda term representing the abstract structure of an expression is homomorphically translated to a lambda term representing its manifestation, but also to a lambda term representing its semantics.
In this paper we give an analytic tableau calculus PL16 for a functionally complete extension of Shramko and Wansing’s logic. The calculus is based on signed formulas and a single set of tableau rules is involved in axiomatising each of the four entailment relations ⊧t, ⊧f, ⊧i, and ⊧ under consideration—the differences only residing in initial assignments of signs to formulas. Proving that two sets of formulas are in one of the first three entailment relations will in general require developing four tableaux, while proving that they are in the ⊧ relation may require six.
In this paper it is shown how a formal theory of interpretation in Montague’s style can be reconciled with a view of meaning as a social construct. We sketch a formal theory in which agents can have their own theory of interpretation and in which groups can have common theories of interpretation. Frege solved the problem of how different persons can have access to the same proposition by placing the proposition in a Platonic realm, independent of all language users but accessible to all of them. Here we explore the alternative of letting meaning be socially constructed. The meaning of a sentence is accessible to each member of a linguistic community because the way the sentence is to be interpreted is common knowledge among the members of that community. Misunderstandings can arise when the semantic knowledge of two or more individuals is not completely in sync.
Statements about the behavior of biological entities, e.g. about the interaction between two proteins, abound in the literature on molecular biology and are increasingly becoming the targets of information extraction and text mining techniques. We show that an accurate analysis of the semantics of such statements reveals a number of ambiguities that must be taken into account in the practice of biomedical ontology engineering. Several competing formalizations are proposed. Emphasis is laid on the discussion of biological dispositions.
The Foundational Model of Anatomy (FMA) is a map of the human body. Like maps of other sorts – including the map-like representations we find in familiar anatomical atlases – it is a representation of a certain portion of spatial reality as it exists at a certain (idealized) instant of time. But unlike other maps, the FMA comes in the form of a sophisticated ontology of its object domain, comprising some 1.5 million statements of anatomical relations among some 70,000 anatomical kinds. It is further distinguished from other maps in that it represents not some specific portion of spatial reality (say: Leeds in 1996), but rather the generalized or idealized spatial reality associated with a generalized or idealized human being at some generalized or idealized instant of time. It will be our concern in what follows to outline the approach to ontology that is represented by the FMA and to argue that it can serve as the basis for a new type of anatomical information science. We also draw some implications for our understanding of spatial reasoning and spatial ontologies in general.
This paper uses classical logic for a simultaneous description of the syntax and semantics of a fragment of English and it is argued that such an approach to natural language allows procedural aspects of linguistic theory to get a purely declarative formulation. In particular, it will be shown how certain construction rules in Discourse Representation Theory, such as the rule that indefinites create new discourse referents and definites pick up an existing referent, can be formulated declaratively if logic is used as a metalanguage for English. In this case the declarative aspects of a rule are highlighted when we focus on the model theory of the description language while a procedural perspective is obtained when its proof theory is concentrated on. Themes of interest are Discourse Representation Theory, resolution of anaphora, resolution of presuppositions, and underspecification.
This essay articulates and defends Aristotle’s argument in Politics 7.4 that there is a rational limit to the size of the political community. Aristotle argues that size can negatively affect the ability of an organized being to attain its proper end. After examining the metaphysical grounds for this principle in both natural beings and artifacts, we defend Aristotle’s extension of the principle to the polis. He argues that the state is in the relevant sense an organism, one whose primary end is to make good reasons available to individuals and promote them as choiceworthy. The size of a polis can affect its ability to perform this function, since growth promotes anonymity among citizens, which in turn frustrates the familiarity between citizens required for the exercise of distributive and restorative justice as well as political prudence. This paper suggests several ways in which Aristotle’s argument, if sound, is important for contemporary issues in moral psychology, Rawlsian political philosophy, and big data analytics.
According to the Thomistic tradition, the Principle of Totality (TPoT) articulates a secondary principle of natural law which guides the exercise of human ownership or dominium over creation. In its general signification, TPoT is a principle of distributive justice determining the right ordering of wholes to their parts. In the medical field it is traditionally understood as entailing an absolute prohibition of bodily mutilation as irrational and immoral, and an imperfect obligation to use the parts of one’s body for the perfection of the bodily whole. TPoT is thus a key element of the system of principles within which an individual exercises her right to life: it helps specify the nature, scope and limits of those actions by which an agent permissibly acts in order to preserve her life. While the Thomistic tradition and the Catholic Church have drawn clear conclusions from the principle regarding, e.g., direct sterilization and non-therapeutic experimentation on human subjects, less attention has been given to the implications of TPoT for non-therapeutic procedures that may positively impact biological functioning or supra-biological goals–that is, for human “enhancement.” We will explore the degree to which TPoT non-univocally guides our use of both artifacts and bodies. We will argue that a careful analysis of these distinct kinds of totalities suggests that the application of TPoT to artifacts and bodies is strongly isomorphic, which is what tempts advocates of the Principle of Autonomy to invalidly infer the absolute dominium of the individual over her body. The inference is invalid because this isomorphism also includes a principle of intrinsic value whose function is to resist the instrumentalization of both artifacts and bodies in some contexts; we are not even related to artifacts as advocates of absolute autonomy believe we are, let alone to our bodies. Rather, the limits of human dominium are determined by the nature and finalities, inherent or acquired, of the objects in question, and it will be argued that articulating these limits raises important, understudied, and fascinating questions about the permissibility of various kinds of human enhancement.
We propose a typology of representational artifacts for health care and the life sciences, and associate this typology with different kinds of formal and logical ontology, drawing conclusions about the strengths and limitations of ontologies based on different kinds of logical resources, while keeping the focus on description logics. We consider four types of domain representation: (i) lexico-semantic representation, (ii) representation of entity types, (iii) representation of background knowledge, and (iv) representation of individuals. We advocate a clear distinction between these four kinds of representation, in order to provide a more rational basis for the use of ontologies and related artifacts in advancing data integration and the interoperability of the associated reasoning systems. We stress that only a small portion of the scientifically relevant facts in domains such as biomedicine can be adequately represented by formal ontologies, as long as the latter are conceived as representations of entity types. In particular, attempts to encode default or probabilistic knowledge by means of ontologies so conceived are bound to produce unintended and erroneous models.
The discussion of civilizations and the Golden Rules is centrally a linguistic project, one meant to find an adequate meaning and, by means of it, a possible reference. Reinhard Matern seeks and develops a criterion for distinguishing civilized from uncivilized societies, drawing on the Golden Rules that have arisen worldwide, which he cites in the plural because the transmitted formulations differ in their concrete wording. It is not the differences, however, but what they have in common that interests him on the way to a general criterion.

It was primarily in the first millennium BCE that these proverbial wisdoms were written down; nothing is known about their origins. In contrast to formalized retributive law, the Golden Rules permit a fundamentally different approach to human behavior, one oriented towards equal treatment. This comparatively modern attitude makes it possible to form a general criterion. Empirically, however, equal treatment has still not been realized in today's societies, and civilization has therefore not yet been attained.

The book closes with an outlook that seeks to broaden the perspective in light of future developments: the integration of humanoid robots.
As health systems around the world turn towards highly distributed, specialized and cooperative structures to increase quality and safety of care as well as efficiency and efficacy of delivery processes, there is a growing need for supporting communication and collaboration of all parties involved with advanced ICT solutions. The Electronic Health Record (EHR) provides the information platform which is maturing towards the eHealth core application. To meet the requirements for sustainable, semantically interoperable, and trustworthy EHR solutions, different standards and different national strategies have been established. The workshop summarizes the requirements for such advanced EHR systems and their underlying architecture, presents different strategies and solutions advocated by corresponding protagonists, discusses pros and cons as well as harmonization and migration strategies for those approaches. It particularly highlights a turn towards ontology-driven architectures.
The Battle of the Little Bighorn in 1876 marked the beginning of the end of the conflict between the U.S. military and the various Native American tribes west of the Mississippi River. Historians have given us various ideas of why Lieutenant Colonel Custer met with defeat. But none have noted, in connection with the November 3rd “secret meeting” between Grant and his generals, a movement of troops away from the Black Hills even before decisions were supposedly made to no longer keep miners out of that sacred land. When we study attitude and orders in conjunction with what we know about these events, the idea emerges that the government knew that it couldn’t get the Indians to break the Fort Laramie Treaty unless they were attacked. Here, then, is a presentation of the possibility of deliberate defeat by the U.S. government and its military in order to take the Black Hills.
Research has so far neglected both Horkheimer's and Adorno's views on the history of language and their connections with Jewish theologies, which together form the basis of the philosophies of history within the 'Dialectic of Enlightenment'; the present study makes up for these omissions.

Matern offers a detailed but highly concentrated discussion in the context of ethnographic, philological, and theological research. This considerable effort is required in order (a) to bring out possible references on the part of Horkheimer and Adorno, and (b) to obtain a basis for adequate interpretations. The result of this research is that the two authors sketch distinguishable apocalyptic-messianic theologies which demonstrably stand in the context of messianic currents within the Jewish Kabbalah.
Logic has its roots in the study of valid argument, but while traditional logicians worked with natural language directly, modern approaches first translate natural arguments into an artificial language. The reason for this step is that some artificial languages now have very well developed inferential systems. There is no doubt that this is a great advantage in general, but for the study of natural reasoning it is a drawback that the original linguistic forms get lost in translation. An alternative approach would be to develop a general theory of the natural logic behind human reasoning and human information processing by studying formal logics that operate directly on linguistic representations. That this is possible we will try to make plausible in this paper. It will turn out that one level of representation, that of Logical Form, can meaningfully be identified with the language of an existing and well-understood logic, a restricted form of the theory of types. It is not difficult to devise inference systems for this language, and it is thus possible to study reasoning systems that are based directly on language.
Standard approaches to proper names, based on Kripke's views, hold that the semantic values of expressions are (set-theoretic) functions from possible worlds to extensions and that names are rigid designators, i.e. that their values are constant functions from worlds to entities. The difficulties with these approaches are well-known and in this paper we develop an alternative. Based on earlier work on a higher order logic that is truly intensional in the sense that it does not validate the axiom scheme of Extensionality, we develop a simple theory of names in which Kripke's intuitions concerning rigidity are accounted for, but the more unpalatable consequences of standard implementations of his theory are avoided. The logic uses Frege's distinction between sense and reference and while it accepts the rigidity of names it rejects the view that names have direct reference. Names have constant denotations across possible worlds, but the semantic value of a name is not determined by its denotation.
In mathematical languages and in predicate logic coreferential terms can be interchanged in any sentence without altering the truth value of that sentence. Replacing 3 + 5 by 12 − 4 in any formula of arithmetic will never lead from truth to falsity or from falsity to truth. But natural languages are different in this respect. While in some contexts it is always allowed to interchange coreferential terms, other contexts do not admit this. An example of the first sort of context is ‘___ likes bananas’: for any two coreferential noun phrases A and B the sentence ‘A likes bananas’ is true if and only if ‘B likes bananas’ is. A context that does not allow intersubstitution of coreferents is ‘The Ancients knew that ___ appears at dawn’. If we fill the hole with the noun phrase ‘the Morning Star’ we get the true (1a), while if we plug in ‘the Evening Star’ we get the false (1b). Yet ‘the Morning Star’ and ‘the Evening Star’ both refer to the planet Venus and are thus coreferential.
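The extensional half of this contrast is exactly what pure functional languages enforce as referential transparency. The following illustration (mine, not from the paper) shows that co-denoting arithmetic terms are interchangeable under any function whatsoever, which is the guarantee that attitude contexts in natural language lack.

```haskell
-- In a pure language, expressions with the same value are
-- interchangeable in every context: the analogue of "extensional"
-- contexts like '___ likes bananas'.
f :: Int -> Int
f x = x * x + 1

check :: Bool
check = f (3 + 5) == f (12 - 4)   -- True, and would be for any f

-- Attitude contexts like 'The Ancients knew that ___ appears at dawn'
-- are precisely the contexts for which natural language offers no
-- such substitution guarantee.
```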
An attractive way to model the relation between an underspecified syntactic representation and its completions is to let the underspecified representation correspond to a logical description and the completions to the models of that description. This approach, which underlies the Description Theory of Marcus et al. 1983, has been integrated in Vijay-Shanker 1992 with a pure unification approach to Lexicalized Tree-Adjoining Grammars (Joshi et al. 1975, Schabes 1990). We generalize Description Theory by integrating semantic information; that is, we propose to tackle both syntactic and semantic underspecification using descriptions.
Kant said that existence is not a predicate and Russell agreed, arguing that a sentence such as ‘The king of France exists’, which seems to attribute existence to the king of France, really has a logical form that is not reflected in the surface structure of the sentence at all. While the surface form of the sentence consists of a subject and a predicate, the underlying logical form, according to Russell, is the formula ∃x∀y(Ky ↔ y = x), where Ky stands for ‘y is a king of France’. This formula obviously has no subject-predicate form and in fact has no single constituent that corresponds to the verb phrase ‘exists’ in the surface sentence. The importance of Russell’s analysis becomes clear when we consider ‘The king of France does not exist’. If this sentence attributed non-existence to the king it would entail that there is someone who does not exist, just as ‘Mary doesn’t like bananas’ entails that there is someone who doesn’t like bananas. Thus the idea that all sentences have subject-predicate form has led some philosophers to the view that there are objects that lack existence. This embarrassing position can be avoided once Russell’s analysis is accepted: if ‘The king of France does not exist’ is formalised as the negation of the formula above, no unwanted consequences follow.
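For concreteness, the two Russellian formalizations at issue can be displayed side by side (standard textbook renderings, with Ky for ‘y is a king of France’):

```latex
% 'The king of France exists' (Russell's analysis):
\exists x\, \forall y\, (Ky \leftrightarrow y = x)

% 'The king of France does not exist' -- the negation of the whole
% formula, containing no constituent that predicates non-existence
% of any object:
\neg \exists x\, \forall y\, (Ky \leftrightarrow y = x)
```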