Vector models of language are based on the contextual aspects of language, the distributions of words and how they co-occur in text. Truth conditional models focus on the logical aspects of language, compositional properties of words and how they compose to form sentences. In the truth conditional approach, the denotation of a sentence determines its truth conditions, which can be taken to be a truth value, a set of possible worlds, a context change potential, or similar. In the vector models, the degree of co-occurrence of words in context determines how similar the meanings of words are. In this paper, we put these two models together and develop a vector semantics for language based on the simply typed lambda calculus models of natural language. We provide two types of vector semantics: a static one that uses techniques familiar from the truth conditional tradition and a dynamic one based on a form of dynamic interpretation inspired by Heim’s context change potentials. We show how the dynamic model can be applied to entailment between a corpus and a sentence and provide examples.
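The distributional idea in this abstract (word meaning as a co-occurrence vector, similarity as the cosine between vectors) can be sketched in a few lines of Python; the toy counts and context words below are invented for illustration and are not taken from the paper:

```python
import math

def cosine(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical co-occurrence counts with the context words
# ("drink", "bark", "run"):
cat = [1, 3, 5]
dog = [1, 4, 6]
car = [0, 0, 2]

# Words with similar distributions come out as similar in meaning:
assert cosine(cat, dog) > cosine(cat, car)
```

This is only the static, similarity-based half of the picture; the paper's contribution is to combine such vectors with lambda-calculus composition.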
In this paper we discuss a new perspective on the syntax-semantics interface. Semantics, in this new set-up, is not ‘read off’ from Logical Forms as in mainstream approaches to generative grammar. Nor is it assigned to syntactic proofs using a Curry-Howard correspondence as in versions of the Lambek Calculus, or read off from f-structures using Linear Logic as in Lexical-Functional Grammar (LFG, Kaplan & Bresnan [9]). All such approaches are based on the idea that syntactic objects (trees, proofs, f-structures) are somehow prior and that semantics must be parasitic on those syntactic objects. We challenge this idea and develop a grammar in which syntax and semantics are treated in a strictly parallel fashion. The grammar will have many ideas in common with the (converging) frameworks of categorial grammar and LFG, but its treatment of the syntax-semantics interface is radically different. Also, although the meaning component of the grammar is a version of Montague semantics and although there are obvious affinities between Montague’s conception of grammar and the work presented here, the grammar is not compositional, in the sense that composition of meaning need not follow surface structure.
Vector models of language are based on the contextual aspects of words and how they co-occur in text. Truth conditional models focus on the logical aspects of language, the denotations of phrases, and their compositional properties. In the latter approach the denotation of a sentence determines its truth conditions and can be taken to be a truth value, a set of possible worlds, a context change potential, or similar. In this short paper, we develop a vector semantics for language based on the simply typed lambda calculus. Our semantics uses techniques familiar from the truth conditional tradition and is based on a form of dynamic interpretation inspired by Heim's context updates.
As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century, a chase with as many twists and turns as a thriller. At the end of the story is a new principle for designing programming languages that will guide computers into the 21st century.

For my money, Gentzen’s natural deduction and Church’s lambda calculus are on a par with Einstein’s relativity and Dirac’s quantum physics for elegance and insight. And the maths are a lot simpler. I want to show you the essence of these ideas. I’ll need a few symbols, but not too many, and I’ll explain as I go along.

To simplify, I’ll present the story as we understand it now, with some asides to fill in the history. First, I’ll introduce Gentzen’s natural deduction, a formalism for proofs. Next, I’ll introduce Church’s lambda calculus, a formalism for programs. Then I’ll explain why proofs and programs are really the same thing, and how simplifying a proof corresponds to executing a program. Finally, I’ll conclude with a look at how these principles are being applied to design a new generation of programming languages, particularly mobile code for the Internet.
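The correspondence the essay describes, that simplifying a proof is executing a program, can be glimpsed in a minimal Python sketch; the encoding of proofs as functions and pairs is ours, not a quotation from the essay:

```python
# Under the proofs-as-programs reading, a natural-deduction proof of
# (A and B) -> A is a program: conjunction elimination is projection.
fst = lambda p: p[0]                # inhabits (A ∧ B) ⊃ A

# Conjunction introduction is pairing.
pair = lambda a: lambda b: (a, b)   # inhabits A ⊃ (B ⊃ (A ∧ B))

# Building a pair and immediately projecting from it is a "detour" in
# the proof; removing the detour (proof simplification) is exactly
# beta-reduction, i.e. running the program.
assert fst(pair(1)(2)) == 1
```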
Textbook on Gödel’s incompleteness theorems and computability theory, based on the Open Logic Project. Covers recursive function theory, arithmetization of syntax, the first and second incompleteness theorems, models of arithmetic, second-order logic, and the lambda calculus.
The paper develops Lambda Grammars, a form of categorial grammar that, unlike other categorial formalisms, is non-directional. Linguistic signs are represented as sequences of lambda terms and are combined with the help of linear combinators.
Combinatory logic (Curry and Feys 1958) is a “variable-free” alternative to the lambda calculus. The two have the same expressive power but build their expressions differently. “Variable-free” semantics is, more precisely, “free of variable binding”: it has no operation like abstraction that turns a free variable into a bound one; it uses combinators—operations on functions—instead. For the general linguistic motivation of this approach, see the works of Steedman, Szabolcsi, and Jacobson, among others. The standard view in linguistics is that reflexive and personal pronouns are free variables that get bound by an antecedent through some coindexing mechanism. In variable-free semantics the same task is performed by some combinator that identifies two arguments of the function it operates on (a duplicator). This combinator may be built into the lexical semantics of the pronoun, into that of the antecedent, or it may be a free-floating operation applicable to predicates or larger chunks of text, i.e. a type-shifter. This note is concerned with the case of cross-sentential anaphora. It adopts Hepple’s and Jacobson’s interpretation of pronouns as identity maps and asks how this can be extended to the cross-sentential case, assuming the dynamic semantic view of anaphora. It first outlines the possibility of interpreting indefinites that antecede non-c-commanded pronouns as existential quantifiers enriched with a duplicator. Then it argues that it is preferable to use the duplicator as a type-shifter that applies “on the fly”. The proposal has consequences for two central ingredients of the classical dynamic semantic treatment: it does away with abstraction over assignments and with treating indefinites as inherently existentially quantified. However, cross-sentential anaphora remains a matter of binding, and the idea of propositions as context change potentials is retained.
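The duplicator the note describes can be made concrete in a short Python sketch; the combinator is the classical W, and `saw` is a hypothetical toy denotation invented here for illustration:

```python
# The duplicator W identifies two arguments of the curried function it
# operates on: W f x = f x x.
W = lambda f: lambda x: f(x)(x)

# A toy curried binary predicate (hypothetical denotation for "saw"):
facts = {("mary", "mary"), ("mary", "john")}
saw = lambda subject: lambda obj: (subject, obj) in facts

# "saw herself" as W(saw): a unary predicate, with no free variable
# getting bound by a coindexing mechanism.
saw_herself = W(saw)
assert saw_herself("mary") is True
assert saw_herself("john") is False
```

The same identification-of-arguments move, applied "on the fly" as a type-shifter, is what the note proposes to extend to cross-sentential anaphora.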
This paper is the twin of (Duží and Jespersen, in submission), which provides a logical rule for transparent quantification into hyperpropositional contexts de dicto, as in: Mary believes that the Evening Star is a planet; therefore, there is a concept c such that Mary believes that what c conceptualizes is a planet. Here we provide two logical rules for transparent quantification into hyperpropositional contexts de re. (As a by-product, we also offer rules for possible-world propositional contexts.) One rule validates this inference: Mary believes of the Evening Star that it is a planet; therefore, there is an x such that Mary believes of x that it is a planet. The other rule validates this inference: the Evening Star is such that it is believed by Mary to be a planet; therefore, there is an x such that x is believed by Mary to be a planet. Issues unique to the de re variant include partiality and existential presupposition, substitutivity of co-referential (as opposed to co-denoting or synonymous) terms, anaphora, and active vs. passive voice. The validity of quantifying-in presupposes an extensional logic of hyperintensions preserving transparency and compositionality in hyperintensional contexts. This requires raising the bar for what qualifies as co-denotation or equivalence in extensional contexts. Our logic is Tichý’s Transparent Intensional Logic. The syntax of TIL is the typed lambda calculus; its highly expressive semantics is based on a procedural redefinition of, inter alia, functional abstraction and application. The two non-standard features we need are a hyperintension (called Trivialization) that presents other hyperintensions and a four-place substitution function (called Sub) defined over hyperintensions.
Although Kurt Gödel does not figure prominently in the history of computability theory, he exerted a significant influence on some of the founders of the field, both through his published work and through personal interaction. In particular, Gödel’s 1931 paper on incompleteness and the methods developed therein were important for the early development of recursive function theory and the lambda calculus at the hands of Church, Kleene, and Rosser. Church and his students studied Gödel 1931, and Gödel taught a seminar at Princeton in 1934. Seen in historical context, Gödel was an important catalyst for the emergence of computability theory in the mid 1930s.
I present and discuss three previously unpublished manuscripts written by Bertrand Russell in 1903, not included with similar manuscripts in Volume 4 of his Collected Papers. One is a one-page list of basic principles for his “functional theory” of May 1903, in which Russell partly anticipated the later Lambda Calculus. The next, catalogued under the title “Proof That No Function Takes All Values”, largely explores the status of Cantor’s proof that there is no greatest cardinal number in the variation of the functional theory holding that only some but not all complexes can be analyzed into function and argument. The final manuscript, “Meaning and Denotation”, examines how his pre-1905 distinction between meaning and denotation is to be understood with respect to functions and their arguments. In them, Russell seems to endorse an extensional view of functions not endorsed in other works prior to the 1920s. All three manuscripts illustrate the close connection between his work on the logical paradoxes and his work on the theory of meaning.
The main objective of this PhD Thesis is to present a method of obtaining strong normalization via natural ordinal, which is applicable to natural deduction systems and typed lambda calculus. The method includes (a) the definition of a numerical assignment that associates each derivation (or lambda term) with a natural number and (b) the proof that this assignment decreases with reductions of maximal formulas (or redexes). Moreover, because the numerical assignment coincides with the length of a specific reduction sequence (the worst reduction sequence), it is the least upper bound on the length of reduction sequences. The main commitment of the introduced method is that it is constructive and elementary, obtained only by analyzing structural and combinatorial properties of derivations and lambda terms, without appeal to any sophisticated mathematical tool. Together with the exposition of the method, a comparative study is presented of some articles in the literature that also obtain strong normalization by means we can identify with natural ordinal methods. Among them we highlight Howard [1968], which performs an ordinal analysis of Gödel’s Dialectica interpretation for intuitionistic first-order arithmetic. We reveal a fact about this article not noted by the author himself: a syntactic proof of the strong normalization theorem for the typed lambda calculus λ⊃ is a consequence of its results. This would be the first strong normalization proof in the literature. (Written in Portuguese.)
This paper argues that the theory of structured propositions is not undermined by the Russell-Myhill paradox. I develop a theory of structured propositions in which the Russell-Myhill paradox doesn't arise: the theory does not involve ramification or compromises to the underlying logic, but rather rejects common assumptions, encoded in the notation of the $\lambda$-calculus, about what properties and relations can be built. I argue that the structuralist had independent reasons to reject these underlying assumptions. The theory is given both a diagrammatic representation, and a logical representation in a novel language. In the latter half of the paper I turn to some technical questions concerning the treatment of quantification, and demonstrate various equivalences between the diagrammatic and logical representations, and a fragment of the $\lambda$-calculus.
The purpose of this article is to present several immediate consequences of introducing a new constant called Lambda, representing the object “nothing” or “void”, into a standard set theory. The use of Lambda will appear natural thanks to its role as a condition of possibility of sets. On a conceptual level, the use of Lambda leads to a legitimation of the empty set and to a redefinition of the notion of set. It also brings out clearly the distinction between the empty set, the nothing, and the ur-elements. On a technical level, we introduce the notion of pre-element and we suggest a formal definition of the nothing, distinct from that of the null class. Among other results, we obtain a relative resolution of the anomaly of the intersection of a family free of sets and the possibility of building the empty set from “nothing”. The theory is presented with equi-consistency results. On both conceptual and technical levels, the introduction of Lambda leads to a resolution of Russell's puzzle of the null class.
In this paper we introduce a Gentzen calculus for (a functionally complete variant of) Belnap's logic in which establishing the provability of a sequent in general requires two proof trees: one establishing that whenever all premises are true some conclusion is true, and one that guarantees the falsity of at least one premise if all conclusions are false. The calculus can also be put to use in proving that one statement necessarily approximates another, where necessary approximation is a natural dual of entailment. The calculus, and its tableau variant, not only capture the classical connectives, but also the ‘information’ connectives of four-valued Belnap logics. This answers a question by Avron.
In recent years, the effort to formalize erotetic inferences (i.e., inferences to and from questions) has become a central concern for those working in erotetic logic. However, few have sought to formulate a proof theory for these inferences. To fill this lacuna, we construct a calculus for (classes of) sequents that are sound and complete for two species of erotetic inferences studied by Inferential Erotetic Logic (IEL): erotetic evocation and regular erotetic implication. While an attempt has been made to axiomatize the former in a sequent system, there is currently no proof theory for the latter. Moreover, the extant axiomatization of erotetic evocation fails to capture its defeasible character and provides no rules for introducing or eliminating question-forming operators. In contrast, our calculus encodes defeasibility conditions on sequents and provides rules governing the introduction and elimination of erotetic formulas. We demonstrate that an elimination theorem holds for a version of the cut rule that applies to both declarative and erotetic formulas and that the rules for the axiomatic account of question evocation in IEL are admissible in our system.
The particular subject of this article is the very first sentence of Aristotle’s Metaphysics book Lambda: what does it really mean? I would stick to the most generous sense: (Aristotelian) theoria is about substance. Indeed, it has often been held that Lambda ignores the so-called focal meaning and shows a remarkably rough stage of Aristotle’s conception of prime philosophy. By contrast, in this light, the very incipit of Lambda appears to testify to Aristotle’s concern with an ontological foundation of theoretical wisdom as such, which Lambda shares with the “central books” of the Metaphysics, Zeta in particular, in a no less coherent and possibly more advanced form.
In their paper Nothing but the Truth Andreas Pietz and Umberto Rivieccio present Exactly True Logic, an interesting variation upon the four-valued logic for first-degree entailment FDE that was given by Belnap and Dunn in the 1970s. Pietz & Rivieccio provide this logic with a Hilbert-style axiomatisation and write that finding a nice sequent calculus for the logic will presumably not be easy. But a sequent calculus can be given, and in this paper we will show that a calculus for the Belnap-Dunn logic we have defined earlier can in fact be reused for the purpose of characterising ETL, provided a small alteration is made—initial assignments of signs to the sentences of a sequent to be proved must be different from those used for characterising FDE. While Pietz & Rivieccio define ETL on the language of classical propositional logic, we also study its consequence relation on an extension of this language that is functionally complete for the underlying four truth values. On this extension the calculus gets a multiple-tree character—two proof trees may be needed to establish one proof.
Hilbert's ε-calculus is based on an extension of the language of predicate logic by a term-forming operator εx. Two fundamental results about the ε-calculus, the first and second epsilon theorem, play a rôle similar to that which the cut-elimination theorem plays in sequent calculus. In particular, Herbrand's Theorem is a consequence of the epsilon theorems. The paper investigates the epsilon theorems and the complexity of the elimination procedure underlying their proof, as well as the length of Herbrand disjunctions of existential theorems obtained by this elimination procedure.
This paper examines systematically which features of a life story (or history) make it good for the subject herself - not aesthetically or morally good, but prudentially good. The tentative narrative calculus presented claims that the prudential narrative value of an event is a function of the extent to which it contributes to her concurrent and non-concurrent goals, the value of those goals, and the degree to which success in reaching the goals is deserved in virtue of exercising agency. The narrative value of a life is a simple sum of the values of the individual events that comprise it. I claim that this view best explains and supports common intuitions about the significance of the shape of a life.
I am presenting a sequent calculus that extends a nonmonotonic consequence relation over an atomic language to a logically complex language. The system is in line with two guiding philosophical ideas: (i) logical inferentialism and (ii) logical expressivism. The extension defined by the sequent rules is conservative. The conditional tracks the consequence relation and negation tracks incoherence. Besides the ordinary propositional connectives, the sequent calculus introduces a new kind of modal operator that marks implications that hold monotonically. Transitivity fails, but for good reasons. Intuitionism and classical logic can easily be recovered from the system.
To explore the extent of embeddability of Leibnizian infinitesimal calculus in first-order logic (FOL) and modern frameworks, we propose to set aside ontological issues and focus on procedural questions. This would enable an account of Leibnizian procedures in a framework limited to FOL with a small number of additional ingredients, such as the relation of infinite proximity. If, as we argue here, FOL is indeed suitable for developing modern proxies for the inferential moves found in Leibnizian infinitesimal calculus, then modern infinitesimal frameworks are more appropriate for interpreting Leibnizian infinitesimal calculus than modern Weierstrassian ones.
Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene gave a semantic account of partial functions using a three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a tableau calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
A new proof style adequate for modal logics is defined from the polynomial ring calculus. The new semantics not only expresses truth conditions of modal formulas by means of polynomials, but also permits deductions to be performed through polynomial manipulation. This paper also investigates relationships among the PRC defined here, the algebraic semantics for modal logics, equational logics, the Dijkstra-Scholten equational-proof style, and rewriting systems. The method proposed is thoroughly exemplified for S5, and can be easily extended to other modal logics.
The derivative is a basic concept of differential calculus. However, if we calculate the derivative as change in distance over change in time, the result at any instant is 0/0, which seems meaningless. Hence, Newton and Leibniz used the limit to determine the derivative. Their method is valid in practice, but it is not easy to accept intuitively. Thus, this article describes a novel method of differential calculus based on the double contradiction, which is easier to accept intuitively. Next, the geometrical meaning of the double contradiction is considered as follows. A tangent at a point on a convex curve is approximated. Then, the slope of the tangent at the point is sandwiched by two kinds of lines. The first kind of line crosses the curve at the original point and a point to the right of it. The second kind of line crosses the curve at the original point and a point to the left of it. Then, the double contradiction can be applied, and the slope of the tangent is determined as a single value. Finally, the meaning of this method for the foundations of mathematics is considered. We reflect on Dehaene’s notion that the foundation of mathematics is based on intuitions, which evolve independently. Hence, there may be gaps between intuitions. In fact, the Ancient Greeks identified an inconsistency between arithmetic and geometry. However, Eudoxus developed the theory of proportion, which is equivalent to the Dedekind cut. This allows the approximation of an irrational number by rational numbers as precisely as desired. Simultaneously, we can define the irrational number by the double contradiction, although its existence is not guaranteed. Further, the area of a curved figure is approximated and defined by rectilinear figures using the double contradiction.
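The sandwiching of the tangent slope by right-hand and left-hand secant lines can be checked numerically; a minimal Python sketch using the convex curve y = x² (our choice of example, not the article's):

```python
def secant_slope(f, x0, x1):
    """Slope of the line through (x0, f(x0)) and (x1, f(x1))."""
    return (f(x1) - f(x0)) / (x1 - x0)

f = lambda x: x * x   # a convex curve; the tangent slope at x0 is 2*x0
x0 = 1.0

# Secants through a point to the right lie above the tangent slope;
# secants through a point to the left lie below it.
right = secant_slope(f, x0, x0 + 0.01)   # slightly above 2.0
left = secant_slope(f, x0, x0 - 0.01)    # slightly below 2.0
assert left < 2.0 < right
```

Shrinking the offset squeezes both secant slopes toward the single value 2.0, which is the slope the double contradiction pins down.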
John Venn has the “uneasy suspicion” that the stagnation in mathematical logic between J. H. Lambert and George Boole was due to Kant’s “disastrous effect on logical method,” namely the “strictest preservation [of logic] from mathematical encroachment.” Kant’s actual position is more nuanced, however. In this chapter, I tease out the nuances by examining his use of Leonhard Euler’s circles and comparing it with Euler’s own use. I do so in light of the developments in logical calculus from G. W. Leibniz to Lambert and Gottfried Ploucquet. While Kant is evidently open to using mathematical tools in logic, his main concern is to clarify what mathematical tools can be used to achieve. For without such clarification, all efforts at introducing mathematical tools into logic would be blind, if not a complete waste of time. In the end, Kant would stress, the means provided by formal logic at best help us to express and order what we already know in some sense. No matter how much mathematical notations may enhance the precision of this function of formal logic, they do not change the fact that no truths can, strictly speaking, be revealed or established by means of those notations.
The formalisation of Natural Language arguments in a formal language close to it in syntax has been a central aim of Moss’s Natural Logic. I examine how the Quantified Argument Calculus (Quarc) can handle the inferences Moss has considered. I show that they can be incorporated in existing versions of Quarc or in straightforward extensions of it, all within sound and complete systems. Moreover, Quarc is closer in some respects to Natural Language than are Moss’s systems – for instance, it does not use negative nouns. The process also sheds light on formal properties and presuppositions of some of the inferences it formalises. Directions for future work are outlined.
Building on the work of Peter Hinst and Geo Siegwart, we develop a pragmatised natural deduction calculus, i.e. a natural deduction calculus that incorporates illocutionary operators at the formal level, and prove its adequacy. In contrast to other linear calculi of natural deduction, derivations in this calculus are sequences of object-language sentences which do not require graphical or other means of commentary in order to keep track of assumptions or to indicate subproofs. (Translation of our German paper "Ein Redehandlungskalkül. Ein pragmatisierter Kalkül des natürlichen Schließens nebst Metatheorie"; available online at http://philpapers.org/rec/CORERE.)
The purpose of this paper is to outline an alternative approach to introductory logic courses. Traditional logic courses usually focus on the method of natural deduction or introduce predicate calculus as a system. These approaches complicate the process of learning different techniques for dealing with categorical and hypothetical syllogisms, such as alternate notations or alternate forms of analyzing syllogisms. The author's approach takes up observations made by Dijkstra and assimilates them into a reasoning process based on modified notations. The author's model adopts a notation that addresses the essentials of a problem while remaining easily manipulated to serve other analytic frameworks. The author also discusses the pedagogical benefits of incorporating the model into introductory logic classes for topics ranging from syllogisms to predicate calculus. Since this method emphasizes the development of a clear and manipulable notation, students can worry less about issues of translation, can spend more energy solving problems in the terms in which they are expressed, and are better able to think in abstract terms.
It is fair to say that Georg Wilhelm Friedrich Hegel's philosophy of mathematics and his interpretation of the calculus in particular have not been popular topics of conversation since the early part of the twentieth century. Changes in mathematics in the late nineteenth century, the new set-theoretical approach to understanding its foundations, and the rise of a sympathetic philosophical logic have all conspired to give prior philosophies of mathematics (including Hegel's) the untimely appearance of naïveté. The common view was expressed by Bertrand Russell:

The great [mathematicians] of the seventeenth and eighteenth centuries were so much impressed by the results of their new methods that they did not trouble to examine their foundations. Although their arguments were fallacious, a special Providence saw to it that their conclusions were more or less true. Hegel fastened upon the obscurities in the foundations of mathematics, turned them into dialectical contradictions, and resolved them by nonsensical syntheses. . . . The resulting puzzles [of mathematics] were all cleared up during the nineteenth century, not by heroic philosophical doctrines such as that of Kant or that of Hegel, but by patient attention to detail (1956, 368–69).
All first-order Gödel logics G_V with globalization operator based on truth value sets V ⊆ [0,1] where 0 and 1 lie in the perfect kernel of V are axiomatized by Ciabattoni’s hypersequent calculus HGIF.
In Hegel ou Spinoza, Pierre Macherey challenges the influence of Hegel’s reading of Spinoza by stressing the degree to which Spinoza eludes the grasp of the Hegelian dialectical progression of the history of philosophy. He argues that Hegel provides a defensive misreading of Spinoza, and that he had to “misread him” in order to maintain his subjective idealism. The suggestion is that Spinoza’s philosophy represents not a moment that can simply be sublated and subsumed within the dialectical progression of the history of philosophy, but rather an alternative point of view for the development of a philosophy that overcomes Hegelian idealism. Gilles Deleuze also considers Spinoza’s philosophy to resist the totalising effects of the dialectic. Indeed, Deleuze demonstrates, by means of Spinoza, that a more complex philosophy antedates Hegel’s, which cannot be supplanted by it. Spinoza therefore becomes a significant figure in Deleuze’s project of tracing an alternative lineage in the history of philosophy, which, by distancing itself from Hegelian idealism, culminates in the construction of a philosophy of difference. It is Spinoza’s role in this project that will be demonstrated in this paper by differentiating Deleuze’s interpretation of the geometrical example of Spinoza’s Letter XII (on the problem of the infinite) in Expressionism in Philosophy: Spinoza from that which Hegel presents in the Science of Logic.
The paper is an introduction to geometric algebra and geometric calculus for those with a knowledge of undergraduate mathematics. No knowledge of physics is required. The section Further Study lists many papers available on the web.
We provide a logical matrix semantics and a Gentzen-style sequent calculus for the first-degree entailments valid in W. T. Parry's logic of Analytic Implication. We achieve the former by introducing a logical matrix closely related to that inducing paracomplete weak Kleene logic, and the latter by presenting a calculus where the initial sequents and the left and right rules for negation are subject to linguistic constraints.
A simple interpretation of quantity calculus is given. Quantities are described as functions from objects, states or processes (or some combination of them) into numbers that satisfy the mutual measurability property. Quantity calculus is based on a notational simplification of the concept of quantity. A key element of the notational simplification is that we treat units as intentionally unspecified numbers that are measures of exactly specified objects, states or processes. This interpretation of quantity calculus combines all the advantages of calculating with numerical values (since the values of quantities are numbers, we can do with them everything we do with numbers) and all the advantages of calculating with classically conceived quantities (the calculus is invariant to the choice of units and has built-in dimensional analysis). This also shows that the whole metaphysics of the common concept of quantities and their magnitudes is irrelevant to quantity calculus. As an application of this interpretation of quantity calculus, an easy proof of the dimensional homogeneity of physical laws is given.
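The paper's central move, treating a unit as an intentionally unspecified number, can be sketched in a few lines of Python; the function and variable names here are ours, chosen for illustration:

```python
# A quantity is just value * unit, where the unit is a number we never
# pin down. Ratios of quantities of the same kind are then pure numbers,
# independent of which value the unit happens to take.
def length(value, unit):
    return value * unit

for metre in (1.0, 3.7, 100.0):      # any choice of the unspecified number
    foot = 0.3048 * metre            # a derived unit: fixed ratio to metre
    table = length(2.0, metre)
    door = length(6.0, foot)
    ratio = table / door             # invariant under the choice of metre
    assert abs(ratio - 2.0 / (6.0 * 0.3048)) < 1e-9
```

The invariance of `ratio` across choices of `metre` is a toy instance of the unit-invariance the abstract claims for the calculus as a whole.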
This paper shows how the classical finite probability theory (with equiprobable outcomes) can be reinterpreted and recast as the quantum probability calculus of a pedagogical or "toy" model of quantum mechanics over sets (QM/sets). There are two parts. The notion of an "event" is reinterpreted from being an epistemological state of indefiniteness to being an objective state of indefiniteness. And the mathematical framework of finite probability theory is recast as the quantum probability calculus for QM/sets. The point is (...) not to clarify finite probability theory but to elucidate quantum mechanics itself by seeing some of its quantum features in a classical setting.
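The classical half of the story, finite probability with equiprobable outcomes, can be sketched directly. The function names are illustrative, and the "projection" reading of conditioning is only a loose gloss on the QM/sets analogy described in the abstract.

```python
from fractions import Fraction

def prob(event, outcomes):
    """Classical finite probability with equiprobable outcomes."""
    return Fraction(len(event & outcomes), len(outcomes))

def prob_given(event, condition, outcomes):
    """Conditioning as restriction ('projection') onto the condition set,
    the rough QM/sets analogue of state reduction after a measurement."""
    return prob(event & condition, outcomes) / prob(condition, outcomes)

U = {1, 2, 3, 4, 5, 6}           # one fair die
even = {2, 4, 6}
high = {4, 5, 6}
print(prob(even, U))              # 1/2
print(prob_given(even, high, U))  # 2/3
```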
The infinitesimal calculus developed by Leibniz in the second half of the seventeenth century had, as was to be expected, many adherents but also important critics. One would think that four centuries after it was presented in the journals, academies and societies of the period, there would be little left to say about it; yet anyone who approaches Leibniz's calculus (as happened to me some time ago) can easily see that the debate surrounding the Leibnizian calculus (...) has crossed the temporal boundaries of the seventeenth and eighteenth centuries and remains alive, at least in part and on certain specific points. This is somewhat unsettling, among other reasons because it implies that the Leibnizian calculus must be revisited in order to understand why it is still being debated today. The purpose of this article is not to present the main theses of the calculus, nor to defend it, much less to develop a critique of its mathematical, epistemic or ontological foundations. My aim is more modest: I intend to show, first, two of the earliest objections raised against the infinitesimal calculus, made by two of Leibniz's contemporaries, namely Rolle and Nieuwentijt; and then, drawing on the above, to present two current commentators who address the calculus. The goal is to illustrate the state of the question, that is, where the debate on the infinitesimal calculus now stands and what commentators currently focus on when discussing it, since in doing so (sometimes even unintentionally) they keep open the debate on the merits and shortcomings of the infinitesimal method.
This paper argues for the idea that in describing language we should follow Haskell Curry in distinguishing between the structure of an expression and its appearance or manifestation. It is explained how making this distinction obviates the need for directed types in type-theoretic grammars, and a simple grammatical formalism is sketched in which representations at all levels are lambda terms. The lambda term representing the abstract structure of an expression is homomorphically translated to a lambda term (...) representing its manifestation, but also to a lambda term representing its semantics.
The ‘syntax’ and ‘combinatorics’ of my title are what Curry (1961) referred to as phenogrammatics and tectogrammatics respectively. Tectogrammatics is concerned with the abstract combinatorial structure of the grammar and directly informs semantics, while phenogrammatics deals with concrete operations on syntactic data structures such as trees or strings. In a series of previous papers (Muskens, 2001a; Muskens, 2001b; Muskens, 2003) I have argued for an architecture of the grammar in which finite sequences of lambda terms are the basic data (...) structures, pairs of terms (syntax, semantics) for example. These sequences then combine with the help of simple generalizations of the usual abstraction and application operations. This theory, which I call Lambda Grammars and which is closely related to the independently formulated theory of Abstract Categorial Grammars (de Groote, 2001; de Groote, 2002), is in fact an implementation of Curry’s ideas: the level of tectogrammar is encoded by the sequences of lambda terms and their ways of combination, while the syntactic terms in those sequences constitute the phenogrammatical level. In de Groote’s formulation of the theory, tectogrammar is the level of abstract terms, while phenogrammar is the level of object terms.
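A toy illustration of the pairs-of-terms idea, assuming nothing beyond what the abstract states: each lexical entry is a pair of a phenogrammatical component and a semantic component, and a single application operation acts on both in parallel, so syntax and semantics are built in tandem. The lexicon and names below are hypothetical, not Muskens's actual notation.

```python
# Hypothetical two-item lexicon: each entry pairs a pheno term (a string,
# or a function on strings) with a semantic term.
john = ("John", "j")
sleeps = (lambda s: s + " sleeps",   # pheno: attach the verb to its subject
          lambda x: f"sleep({x})")   # sem: apply the predicate to the argument

def apply_pair(fun_pair, arg_pair):
    """Pointwise application: the tectogrammatical combination step acts
    on the pheno and sem components simultaneously."""
    f_pheno, f_sem = fun_pair
    a_pheno, a_sem = arg_pair
    return (f_pheno(a_pheno), f_sem(a_sem))

sentence = apply_pair(sleeps, john)
print(sentence)  # ('John sleeps', 'sleep(j)')
```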
The aim of this article is to free Aristotle's Metaphysics, especially book XII (Lambda), from some metaphysical and theological presuppositions by detecting their inappropriate conceptual framework, one which was once progressive but now holds an obsolete position. Ousia, being (not substance, a much later concept, construed to solve other problems than Aristotle's), stands for a question, not for an answer. Book Lambda develops a highly speculative argument for this question. The famous noesis noeseos says that empirical being and knowledge (...) is the realization of a noetic structure (noesis), laid down in our most fundamental opinions about being.
By pure calculus of names we mean a quantifier-free theory, based on the classical propositional calculus, which defines predicates known from Aristotle’s syllogistic and Leśniewski’s Ontology. For a large fragment of the theory, decision procedures defined by a combination of simple syntactic operations and models in two-membered domains can be used. We compare the system which employs ‘ε’ as the only specific term with the system enriched with the functors of Syllogistic. In the former, we do not need an (...) empty name in the model, so we are able to construct a 3-valued matrix, while for the latter, for which an empty name is necessary, the respective matrices are 4-valued.
We introduce an effective translation from proofs in the display calculus to proofs in the labelled calculus in the context of tense logics. We identify the labelled calculus proofs in the image of this translation as those built from labelled sequents whose underlying directed graph possesses certain properties. For the basic normal tense logic Kt, the image is shown to be the set of all proofs in the labelled calculus G3Kt.