A complete axiomatic system CTL$_{rp}$ is introduced for a temporal logic for finitely branching $\omega^+$-trees in a temporal language extended with so-called reference pointers. Syntactic and semantic interpretations are constructed for the branching time computation tree logic CTL$^{*}$ into CTL$_{rp}$. In particular, that yields a complete axiomatization for the translations of all valid CTL$^{*}$-formulae. Thus, the temporal logic with reference pointers is brought forward as a simpler (with no path quantifiers), but in a way more expressive, medium for reasoning about branching time.
The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature in distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is extrinsic to formal procedures as such, and the intended interpretation of syntax makes no difference to the execution of an algorithm. So the unique 'content' postulated by RTM is superfluous to the formal procedures of CTM. And once these procedures are implemented in a physical mechanism, it is exclusively the causal properties of the physical mechanism that are responsible for all aspects of the system's behaviour. So once again, postulated content is rendered superfluous. To the extent that semantic content may appear to play a role in behaviour, it must be syntactically encoded within the system, and just as in a standard computational artefact, so too with the human mind/brain - it's pure syntax all the way down to the level of physical implementation. Hence 'content' is at most a convenient meta-level gloss, projected from the outside by human theorists, which itself can play no role in cognitive processing.
An introductory textbook on metalogic. It covers naive set theory, first-order logic, sequent calculus and natural deduction, the completeness, compactness, and Löwenheim-Skolem theorems, Turing machines, and the undecidability of the halting problem and of first-order logic. The audience is undergraduate students with some background in formal logic.
The paper shows how ideas that explain the sense of an expression as a method or algorithm for finding its reference, foreshadowed in Frege’s dictum that sense is the way in which a referent is given, can be formalized on the basis of the ideas in Thomason (1980). To this end, the function that sends propositions to truth values or sets of possible worlds in Thomason (1980) must be replaced by a relation, and the meaning postulates governing the behaviour of this relation must be given in the form of a logic program. The resulting system not only throws light on the properties of sense and their relation to computation, but also shows circular behaviour if some ingredients of the Liar Paradox are added. The connection is natural, as algorithms can be inherently circular and the Liar is explained as expressing one of those. Many ideas in the present paper are closely related to those in Moschovakis (1994), but receive a considerably lighter formalization.
Throughout this paper, we try to show how and why our mathematical framework seems inappropriate for solving problems in the Theory of Computation. More exactly, the concept of turning back in time in paradoxes causes inconsistency in the modeling of the concept of time in some semantic situations. As we see in the first chapter, by introducing a version of the “Unexpected Hanging Paradox”, we first attempt to open a new explanation for some paradoxes. In the second step, by applying this paradox, it is demonstrated that any formalized system for the Theory of Computation based on Classical Logic and the Turing Model of Computation leads us to a contradiction. We conclude that our mathematical framework is inappropriate for the Theory of Computation. Furthermore, the result provides a reason why many problems in Complexity Theory resist solution. (This work was completed on 2017-05-02, posted on viXra on 2017-05-14, and presented at UNILOG 2018, Vichy.)
This paper contends that Stoic logic (i.e. Stoic analysis) deserves more attention from contemporary logicians. It sets out how, compared with contemporary propositional calculi, Stoic analysis is closest to methods of backward proof search for Gentzen-inspired substructural sequent logics, as they have been developed in logic programming and structural proof theory, and produces its proof search calculus in tree form. It shows how multiple similarities to Gentzen sequent systems combine with intriguing dissimilarities that may enrich contemporary discussion. Much of Stoic logic appears surprisingly modern: a recursively formulated syntax with some truth-functional propositional operators; analogues to cut rules, axiom schemata and Gentzen’s negation-introduction rules; an implicit variable-sharing principle and deliberate rejection of Thinning and avoidance of paradoxes of implication. These latter features mark the system out as a relevance logic, where the absence of duals for its left and right introduction rules puts it in the vicinity of McCall’s connexive logic. Methodologically, the choice of meticulously formulated meta-logical rules in lieu of axiom and inference schemata absorbs some structural rules and results in an economical, precise and elegant system that values decidability over completeness.
“Second-order Logic” in Anderson, C.A. and Zeleny, M., Eds. Logic, Meaning, and Computation: Essays in Memory of Alonzo Church. Dordrecht: Kluwer, 2001. Pp. 61–76. Abstract. This expository article focuses on the fundamental differences between second-order logic and first-order logic. It is written entirely in ordinary English without logical symbols. It employs second-order propositions and second-order reasoning in a natural way to illustrate the fact that second-order logic is actually a familiar part of our traditional intuitive logical framework and that it is not an artificial formalism created by specialists for technical purposes. To illustrate some of the main relationships between second-order logic and first-order logic, this paper introduces basic logic, a kind of zero-order logic, which is more rudimentary than first-order and which is transcended by first-order in the same way that first-order is transcended by second-order. The heuristic effectiveness and the historical importance of second-order logic are reviewed in the context of the contemporary debate over the legitimacy of second-order logic. Rejection of second-order logic is viewed as radical: an incipient paradigm shift involving radical repudiation of a part of our scientific tradition, a tradition that is defended by classical logicians. But it is also viewed as reactionary: as being analogous to the reactionary repudiation of symbolic logic by supporters of “Aristotelian” traditional logic. But even if “genuine” logic comes to be regarded as excluding second-order reasoning, which seems less likely today than fifty years ago, its effectiveness as a heuristic instrument will remain and its importance for understanding the history of logic and mathematics will not be diminished. Second-order logic may someday be gone, but it will never be forgotten. Technical formalisms have been avoided entirely in an effort to reach a wide audience, but every effort has been made to limit the inevitable sacrifice of rigor. People who do not know second-order logic cannot understand the modern debate over its legitimacy and they are cut off from the heuristic advantages of second-order logic. And, what may be worse, they are cut off from an understanding of the history of logic and thus are constrained to have distorted views of the nature of the subject. As Aristotle first said, we do not understand a discipline until we have seen its development. It is a truism that a person's conceptions of what a discipline is and of what it can become are predicated on their conception of what it has been.
We reconsider the pragmatic interpretation of intuitionistic logic [21], regarded as a logic of assertions and their justifications, and its relations with classical logic. We recall an extension of this approach to a logic dealing with assertions and obligations, related by a notion of causal implication [14, 45]. We focus on the extension to co-intuitionistic logic, seen as a logic of hypotheses [8, 9, 13], and on polarized bi-intuitionistic logic as a logic of assertions and conjectures: looking at the S4 modal translation, we give a definition of a system AHL of bi-intuitionistic logic that correctly represents the duality between intuitionistic and co-intuitionistic logic, correcting a mistake in previous work [7, 10]. A computational interpretation of co-intuitionism as a distributed calculus of coroutines is then used to give an operational interpretation of subtraction. Work on linear co-intuitionism is then recalled, a linear calculus of co-intuitionistic coroutines is defined, and a probabilistic interpretation of linear co-intuitionism is given as in [9]. Also we remark that by extending the language of intuitionistic logic we can express the notion of expectation, an assertion that in all situations the truth of p is possible, and that in a logic of expectations the law of double negation holds. Similarly, extending co-intuitionistic logic, we can express the notion of conjecture that p, defined as a hypothesis that in some situation the truth of p is epistemically necessary.
I have read many recent discussions of the limits of computation and of the universe as computer, hoping to find some comments on the amazing work of the polymath physicist and decision theorist David Wolpert, but have not found a single citation, and so I present this very brief summary. Wolpert proved some stunning impossibility or incompleteness theorems (1992 to 2008 - see arxiv dot org) on the limits to inference (computation) that are so general they are independent of the device doing the computation, and even independent of the laws of physics, so they apply to computers, physics and human behavior. They use Cantor diagonalization, the liar paradox and worldlines to provide what may be the ultimate theorem in Turing Machine Theory, and seemingly provide insights into impossibility, incompleteness, the limits of computation, and the universe as computer, in all possible universes and for all beings or mechanisms, yielding, among other things, a non-quantum-mechanical uncertainty principle and a proof of monotheism. There are obvious connections to the classic work of Chaitin, Solomonoff, Komolgarov and Wittgenstein, and to the notion that no program (and thus no device) can generate a sequence (or device) with greater complexity than it possesses. One might say this body of work implies atheism, since no entity can be more complex than the physical universe, and from the Wittgensteinian viewpoint 'more complex' is meaningless (has no conditions of satisfaction, i.e., truth-maker or test). Even a 'God' (i.e., a 'device' with unlimited time/space and energy) cannot determine whether a given 'number' is 'random', nor is there a certain way to show that a given 'formula', 'theorem' or 'sentence' or 'device' (all of these being complex language games) is part of a particular 'system'. Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle' 2nd ed (2019). Those interested in more of my writings may see 'Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019' 2nd ed (2019) and 'Suicidal Utopian Delusions in the 21st Century' 4th ed (2019).
This is the first of a two-volume work combining two fundamental components of contemporary computing into classical deductive computing, a powerful form of computation, highly adequate for programming and automated theorem proving, which, in turn, have fundamental applications in areas of high complexity and/or high security such as mathematical proof, software specification and verification, and expert systems. Deductive computation is concerned with truth-preservation: This is the essence of the satisfiability problem, or SAT, the central computational problem in computability and complexity theory. The Turing machine provides the classical version of this theory—classical computing—with its standard model, which is physically concretized—and thus spatio-temporally limited and restricted—in the von Neumann, or digital, computer. Although a number of new technological applications require classical deductive computation with non-classical logics, many key technologies still do well—or exclusively, for that matter—with classical logic. In this first volume, we elaborate on classical deductive computing with classical logic. The objective of the main text is to provide the reader with a thorough elaboration on both classical computing and classical deduction with the classical first-order predicate calculus with a view to computational implementations. As a complement to the mathematical-based exposition of the topics we offer the reader a very large selection of exercises. This selection aims not only at practice of the discussed material, but also at creative approaches to problems, for both discussed and novel contents, as well as at research into further relevant topics.
The classical theory of computation does not represent an adequate model of reality for simulation in the social sciences. The aim of this paper is to construct a methodological perspective that is able to conciliate the formal and empirical logic of program verification in computer science, with the interpretative and multiparadigmatic logic of the social sciences. We attempt to evaluate whether social simulation implies an additional perspective about the way one can understand the concepts of program and computation. We demonstrate that the logic of social simulation implies at least two distinct types of program verifications that reflect an epistemological distinction in the kind of knowledge one can have about programs. Computer programs seem to possess a causal capability (Fetzer, 1999) and an intentional capability that scientific theories seem not to possess. This distinction is associated with two types of program verification, which we call empirical and intentional verification. We demonstrate, by this means, that computational phenomena are also intentional phenomena, and that such is particularly manifest in agent-based social simulation. Ascertaining the credibility of results in social simulation requires a focus on the identification of a new category of knowledge we can have about computer programs. This knowledge should be considered an outcome of an experimental exercise, albeit not empirical, acquired within a context of limited consensus. The perspective of intentional computation seems to be the only one possible to reflect the multiparadigmatic character of social science in terms of agent-based computational social science. We contribute, additionally, to the clarification of several questions that are found in the methodological perspectives of the discipline, such as the computational nature, the logic of program scalability, and the multiparadigmatic character of agent-based simulation in the social sciences.
It is well known that systems of action deontic logic emerging from a standard analysis of permission in terms of the possibility of doing an action without incurring a violation of the law are subject to paradoxes. In general, paradoxes are acknowledged as such if we have intuitions telling us that things should be different. The aim of this paper is to introduce a paradox-free deontic action system by (i) identifying the basic intuitions leading to the emergence of the paradoxes and (ii) exploiting these intuitions in order to develop a consistent deontic framework, where it can be shown why some phenomena seem to be paradoxical and why they are not so if interpreted in a correct way.
The previously introduced algorithm SQEMA computes first-order frame equivalents for modal formulae and also proves their canonicity. Here we extend SQEMA with an additional rule based on a recursive version of Ackermann's lemma, which enables the algorithm to compute local frame equivalents of modal formulae in the extension of first-order logic with monadic least fixed points (MFFO). This computation operates by transforming input formulae into locally frame-equivalent ones in the pure fragment of the hybrid mu-calculus. In particular, we prove that the recursive extension of SQEMA succeeds on the class of 'recursive formulae'. We also show that a certain version of this algorithm guarantees the canonicity of the formulae on which it succeeds.
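The abstract above turns on Ackermann's lemma, so for orientation here is the standard (non-recursive) statement of that lemma as it is usually given in the correspondence-theory literature; this is background material, not the recursive rule introduced in the paper itself. If the predicate variable $P$ does not occur in $A$ and occurs only negatively in $B(P)$, then
\[
\exists P\,\bigl[\forall \bar{x}\,\bigl(A(\bar{x}) \rightarrow P(\bar{x})\bigr) \wedge B(P)\bigr] \;\equiv\; B(P := A),
\]
i.e., the second-order quantifier is eliminated by substituting $A$ for $P$. Roughly speaking, the recursive version referred to in the abstract replaces this direct substitution by a least fixed-point definition, which is why the target language is first-order logic with monadic least fixed points.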
In this paper the propositional logic LTop is introduced, as an extension of classical propositional logic by adding a paraconsistent negation. This logic has a very natural interpretation in terms of topological models. The logic LTop is nothing more than an alternative presentation of modal logic S4, but in the language of a paraconsistent logic. Moreover, LTop is a logic of formal inconsistency in which the consistency and inconsistency operators have a nice topological interpretation. This constitutes a new proof of S4 as being "the logic of topological spaces", but now under the perspective of paraconsistency.
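For readers wondering what "the logic of topological spaces" amounts to concretely, the standard topological semantics for S4 (which, according to the abstract, LTop repackages in paraconsistent terms) interprets the modalities by interior and closure; this is textbook material rather than a detail taken from the paper:
\[
\llbracket \Box A \rrbracket \;=\; \mathrm{Int}\,\llbracket A \rrbracket, \qquad \llbracket \Diamond A \rrbracket \;=\; \mathrm{Cl}\,\llbracket A \rrbracket,
\]
so the S4 axioms $\Box A \rightarrow A$ and $\Box A \rightarrow \Box\Box A$ mirror the facts that $\mathrm{Int}\,X \subseteq X$ and that $\mathrm{Int}$ is idempotent.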
As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century, a chase with as many twists and turns as a thriller. At the end of the story is a new principle for designing programming languages that will guide computers into the 21st century. For my money, Gentzen’s natural deduction and Church’s lambda calculus are on a par with Einstein’s relativity and Dirac’s quantum physics for elegance and insight. And the maths are a lot simpler. I want to show you the essence of these ideas. I’ll need a few symbols, but not too many, and I’ll explain as I go along. To simplify, I’ll present the story as we understand it now, with some asides to fill in the history. First, I’ll introduce Gentzen’s natural deduction, a formalism for proofs. Next, I’ll introduce Church’s lambda calculus, a formalism for programs. Then I’ll explain why proofs and programs are really the same thing, and how simplifying a proof corresponds to executing a program. Finally, I’ll conclude with a look at how these principles are being applied to design a new generation of programming languages, particularly mobile code for the Internet.
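To make the proofs-as-programs idea a little more tangible, here is a minimal Haskell sketch of the Curry-Howard reading described above; it is my own illustration, not code from the lecture. Propositions become types, proofs become terms, and normalising a proof is running the program.

    -- Proofs as programs: a proposition is a type, a proof is a term of
    -- that type, and simplifying the proof is evaluating the program.

    -- Conjunction introduction and elimination are pairing and projection:
    andIntro :: a -> b -> (a, b)
    andIntro x y = (x, y)

    andElimL :: (a, b) -> a
    andElimL (x, _) = x

    -- A natural deduction proof of (A and B) -> (B and A) is a function:
    swap :: (a, b) -> (b, a)
    swap (x, y) = (y, x)

    -- Implication elimination (modus ponens) is function application;
    -- evaluating 'swap (p, q)' to '(q, p)' mirrors normalising the proof.
    main :: IO ()
    main = print (swap (1 :: Int, "two"))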
This paper will present two contributions to teaching introductory logic. The first contribution is an alternative tree proof method that differs from the traditional one-sided tree method. The second contribution combines this tree system with an index system to produce a user-friendly tree method for sentential modal logic.
In their paper Nothing but the Truth, Andreas Pietz and Umberto Rivieccio present Exactly True Logic, an interesting variation upon the four-valued logic for first-degree entailment FDE that was given by Belnap and Dunn in the 1970s. Pietz & Rivieccio provide this logic with a Hilbert-style axiomatisation and write that finding a nice sequent calculus for the logic will presumably not be easy. But a sequent calculus can be given, and in this paper we will show that a calculus for the Belnap-Dunn logic we have defined earlier can in fact be reused for the purpose of characterising ETL, provided a small alteration is made—initial assignments of signs to the sentences of a sequent to be proved must be different from those used for characterising FDE. While Pietz & Rivieccio define ETL on the language of classical propositional logic, we also study its consequence relation on an extension of this language that is functionally complete for the underlying four truth values. On this extension the calculus gets a multiple-tree character—two proof trees may be needed to establish one proof.
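As a quick aid to readers unfamiliar with the four values in play, the Haskell sketch below encodes the standard Belnap-Dunn truth tables and the difference in designated values that, as the abstract explains, separates FDE from Exactly True Logic. The tables are the usual textbook ones; nothing here reproduces the signed sequent calculus of the paper.

    -- The Belnap-Dunn values: (T)rue only, (B)oth, (N)either, (F)alse only.
    data Four = T | B | N | F deriving (Eq, Show)

    -- Standard FDE negation and conjunction (meet in the truth order,
    -- where F < B < T and F < N < T, with B and N incomparable).
    neg :: Four -> Four
    neg T = F
    neg F = T
    neg v = v          -- B and N are fixed points of negation

    conj :: Four -> Four -> Four
    conj F _ = F
    conj _ F = F
    conj T v = v
    conj v T = v
    conj B N = F       -- meet of the two incomparable middle values
    conj N B = F
    conj v _ = v       -- B & B = B, N & N = N

    -- FDE designates {T, B}; Exactly True Logic designates only {T}.
    designatedFDE, designatedETL :: Four -> Bool
    designatedFDE v = v == T || v == B
    designatedETL v = v == T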
The recent debate on hypercomputation has raised new questions both on the computational abilities of quantum systems and on the role of the Church-Turing Thesis in Physics. We propose here the idea of “effective physical process” as the essentially physical notion of computation. By using the Bohm and Hiley active information concept, we analyze the differences between the standard form (quantum gates) and the non-standard one (adiabatic and morphogenetic) of Quantum Computing, and we point out how its Super-Turing potentialities derive from an incomputable information source, in accordance with Bell’s constraints. Provided we give up the formal concept of “universality”, the possibility of realizing quantum oracles becomes reachable. In this way computation is led back to the logic of the physical world.
The articles in this volume present a selection of works from the Symposium on Natural/Unconventional Computing at the AISB/IACAP (British Society for the Study of Artificial Intelligence and the Simulation of Behaviour and The International Association for Computing and Philosophy) World Congress 2012, held at the University of Birmingham, celebrating the Turing centenary. This book is about nature considered as the totality of physical existence, the universe. By physical we mean all phenomena - objects and processes - that are possible to detect either directly by our senses or via instruments. Historically, there have been many ways of describing the universe (cosmic egg, cosmic tree, theistic universe, mechanistic universe), and a particularly prominent contemporary approach is the computational universe.
I use modal logic and transfinite set theory to define metaphysical foundations for a general theory of computation. A possible universe is a certain kind of situation; a situation is a set of facts. An algorithm is a certain kind of inductively defined property. A machine is a series of situations that instantiates an algorithm in a certain way. There are finite as well as transfinite algorithms and machines of any degree of complexity (e.g., Turing and super-Turing machines and more). There are physically and metaphysically possible machines. There is an iterative hierarchy of logically possible machines in the iterative hierarchy of sets. Some algorithms are such that machines that instantiate them are minds. So there is an iterative hierarchy of finitely and transfinitely complex minds.
Searle’s Chinese Room Argument (CRA) has been the object of great interest in the philosophy of mind, artificial intelligence and cognitive science since its initial presentation in ‘Minds, Brains and Programs’ in 1980. It is by no means an overstatement to assert that it has been a main focus of attention for philosophers and computer scientists of many stripes. It is then especially interesting to note that relatively little has been said about the detailed logic of the argument, whatever significance Searle intended CRA to have. The problem with the CRA is that it involves a very strong modal claim, the truth of which is both unproved and highly questionable. So it will be argued here that the CRA does not prove what it was intended to prove.
An analysis of symbolic and sub-symbolic models for studying cognitive processes is proposed here, centered on the notions of emergence and logical openness. The theory of logical openness connects the physics of system/environment relationships to the system's informational structure. In this theory, cognitive models can be ordered according to a hierarchy of complexity depending on their degree of logical openness, and their descriptive limits are correlated with the Gödel-Turing theorems on formal systems. Symbolic models with low logical openness describe cognition by means of semantics which fix the system/environment relationship, while sub-symbolic ones with high logical openness tend to capture its evolving dynamics. An observer is defined as a system with high logical openness. In conclusion, the characteristic processes of intrinsic emergence typical of "bio-logic" - the emergence of new codes - require an alternative model to Turing computation: the natural or bio-morphic computation whose essential features we outline here.
In the 17th century, Hobbes stated that we reason by addition and subtraction. Historians of logic note that Hobbes thought of reasoning as “a ‘species of computation’” but point out that “his writing contains in fact no attempt to work out such a project.” Though Leibniz mentions the plus/minus character of the positive and negative copulas, neither he nor Hobbes say anything about a plus/minus character of other common logical words that drive our deductive judgments, words like ‘some’, ‘all’, ‘if’, and ‘and’, each of which actually turns out to have an oppositive character that allows us, “in our silent reasoning,” to ignore its literal meaning and to reckon with it as one reckons with a plus or a minus operator in elementary algebra or arithmetic. These ‘logical constants’ of natural language figure crucially in our everyday reasoning. Because Hobbes and Leibniz did not identify them as the plus and minus words we reason with, their insight into what goes on in ‘ratiocination’ did not provide a guide for a research program that could develop a +/- logic that actually describes how we reason deductively. I will argue that such a +/- logic provides a way back from modern predicate logic—the logic of quantifiers and bound variables that is now ‘standard logic’—to an Aristotelian term logic of natural language that had been the millennial standard logic.
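A worked example in the Sommers/Englebretsen style of algebraic term logic may help convey what "reckoning with plus and minus" means here; the notation below is one common rendering, offered purely as an illustration rather than as the author's own system. Writing "All $M$ are $P$" as $-M + P$ and "Some $S$ are $M$" as $+S + M$, the syllogism Barbara becomes ordinary cancellation:
\[
\underbrace{(-M + P)}_{\text{All } M \text{ are } P} \;+\; \underbrace{(-S + M)}_{\text{All } S \text{ are } M} \;=\; -S + P \quad (\text{All } S \text{ are } P),
\]
since $-M$ and $+M$ cancel exactly as in elementary algebra; likewise $(+S + M) + (-M + P) = +S + P$ reconstructs Darii ("Some $S$ are $M$, all $M$ are $P$, so some $S$ are $P$").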
Forty-two years ago, Capra published “The Tao of Physics” (Capra, 1975). In this book (page 17) he writes: “The exploration of the atomic and subatomic world in the twentieth century has … necessitated a radical revision of many of our basic concepts” and that, unlike ‘classical’ physics, the sub-atomic and quantum “modern physics” shows resonances with Eastern thought and “leads us to a view of the world which is very similar to the views held by mystics of all ages and traditions.” This article stresses an analogous situation in biology with respect to a new theoretical approach for studying living systems, Integral Biomathics (IB), which also exhibits some resonances with Eastern thought. Building on earlier research in cybernetics and theoretical biology, IB has been developed since 2011 by over 100 scientists from a number of disciplines who have been exploring a substantial set of theoretical frameworks. From that effort, the need for a robust core model utilizing advanced mathematics and computation adequate for understanding the behavior of organisms as dynamic wholes was identified. To this end, the authors of this article have proposed WLIMES (Ehresmann and Simeonov, 2012), a formal theory for modeling living systems integrating both the Memory Evolutive Systems (Ehresmann and Vanbremeersch, 2007) and the Wandering Logic Intelligence (Simeonov, 2002b). Its principles will be recalled here with respect to their resonances with Eastern thought.
The major point in [1] chapter 2 is the following claim: “Any formalized system for the Theory of Computation based on Classical Logic and the Turing Model of Computation leads us to a contradiction.” So, in case we wish to save Classical Logic, we should change our computational model. As we see in chapter two, the mentioned contradiction is about and around the concept of time, as it is in the contradiction of the modified version of the paradox. It is natural to try to fabricate the paradox not with time but with some other linear ordering, or with the concept of space. Interestingly, the attempts to obtain a similar contradiction from other concepts, like space and linear ordering, fail. It is remarkable that the paradox has traditionally been considered either epistemological or logical, but on the new considerations the new version of the paradox should be considered either a logical or a physical paradox. Hence, in order to change our computational model, it is natural to change the concept of time, but how? We start from some models that are different from the classical one but are intuitively plausible. The idea of the model was to some extent introduced by Brouwer and Husserl [3]. This model does not refute the paradox, since the paradox and the associated contradiction would be repeated in this new model. The model is introduced in [2]. Here we give some more explanations.
We present Logical Description Grammar (LDG), a model of grammar and the syntax-semantics interface based on descriptions in elementary logic. A description may simultaneously describe the syntactic structure and the semantics of a natural language expression, i.e., the describing logic talks about the trees and about the truth-conditions of the language described. Logical Description Grammars offer a natural way of dealing with underspecification in natural language syntax and semantics. If a logical description (up to isomorphism) has exactly one tree plus truth-conditions as a model, it completely specifies that grammatical object. More common is the situation, corresponding to underspecification, in which there is more than one model. A situation in which there are no models corresponds to an ungrammatical input.
The master's thesis of Dr. Taraneh Javanbakht in philosophy, published at the Université du Québec à Montréal in 2016, includes her innovations in logic and cognitive sciences as well as some parts of her philosophical system, netism. Her logic, tendential logic, which she introduced and developed, and its application in cognitive science are also presented in the thesis.
The relationship between abstract formal procedures and the activities of actual physical systems has proved to be surprisingly subtle and controversial, and there are a number of competing accounts of when a physical system can be properly said to implement a mathematical formalism and hence perform a computation. I defend an account wherein computational descriptions of physical systems are high-level normative interpretations motivated by our pragmatic concerns. Furthermore, the criteria of utility and success vary according to our diverse purposes and pragmatic goals. Hence there is no independent or uniform fact of the matter, and I advance the ‘anti-realist’ conclusion that computational descriptions of physical systems are not founded upon deep ontological distinctions, but rather upon interest-relative human conventions. Hence physical computation is a ‘conventional’ rather than a ‘natural’ kind.
Although Fuzzy logic and Fuzzy Mathematics are a widespread subject with a vast literature, the use of Fuzzy notions like Fuzzy sets and Fuzzy numbers has been relatively rare in the concept of time; it can be seen mainly in Fuzzy time series. In addition, some attempts have been made at fuzzifying Turing Machines, but seemingly no need has been felt to fuzzify time itself. Throughout this article, we try to change this picture and show why it is helpful to consider the instants of time as Fuzzy numbers. In physics, though there are revolutionary ideas about time, like the B theories in contrast to the A theory, and though central concepts like space and momentum were revised long ago, time is treated classically in all well-known and established physical theories. Seemingly, we stick to the classical concept of time in all fields of science, and we have a vast inertia against changing it. Our goal in this article is to provide some grounds for why it is rational and reasonable to change and modify this picture. Here, the central point is the modified version of the “Unexpected Hanging” paradox as described in "Is classical Mathematics appropriate for theory of Computation". This modified version leads us to a contradiction, and on that basis it is shown there why some problems in the Theory of Computation have not been solved yet. To resolve the difficulties arising there, we have two choices: either “choosing” a new type of logic, like “para-consistent logic”, to tolerate contradiction, or changing and improving the concept of time and consequently modifying the “Turing Computational Model”. Throughout this paper, we select the second way, so as to preserve some aspects of Classical Logic. In chapter 2, by applying quantum mechanics and the Schrodinger equation, we compute the fuzzy number associated with time. This provides a new interpretation of Quantum Mechanics. More exactly, what we see here is a "Particle-Fuzzy time" interpretation of quantum mechanics, in contrast to some other interpretations, like the "Wave-Particle" interpretation. At the end, we propound a question about the possible resolution of a paradox in Physics: the contradiction between General Relativity and Quantum Mechanics.
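For concreteness, one common way to make "instants of time as fuzzy numbers" precise (a standard device from the fuzzy-set literature, not necessarily the construction used in the paper) is to replace a sharp instant $c$ by a triangular fuzzy number with membership function
\[
\mu_{c,w}(x) \;=\; \max\!\Bigl(0,\; 1 - \frac{|x - c|}{w}\Bigr),
\]
so that "the instant $c$" becomes a graded region of width $2w$ around $c$, and the classical point instant is recovered in the limit $w \to 0$.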
Physical Computation is the summation of Piccinini’s work on computation and mechanistic explanation over the past decade. It draws together material from papers published during that time, but also provides additional clarifications and restructuring that make this the definitive presentation of his mechanistic account of physical computation. This review will first give a brief summary of the account that Piccinini defends, followed by a chapter-by-chapter overview of the book, before finally discussing one aspect of the account in more critical detail.
In ‘Godel’s Way’ three eminent scientists discuss issues such as undecidability, incompleteness, randomness, computability and paraconsistency. I approach these issues from the Wittgensteinian viewpoint that there are two basic issues which have completely different solutions. There are the scientific or empirical issues, which are facts about the world that need to be investigated observationally, and philosophical issues as to how language can be used intelligibly (which include certain questions in mathematics and logic), which need to be decided by looking at how we actually use words in particular contexts. When we get clear about which language game we are playing, these topics are seen to be ordinary scientific and mathematical questions like any others. Wittgenstein’s insights have seldom been equaled and never surpassed and are as pertinent today as they were 80 years ago when he dictated the Blue and Brown Books. In spite of its failings—really a series of notes rather than a finished book—this is a unique source of the work of these three famous scholars who have been working at the bleeding edges of physics, math and philosophy for over half a century. Da Costa and Doria are cited by Wolpert (see below or my articles on Wolpert and my review of Yanofsky’s ‘The Outer Limits of Reason’) since they wrote on universal computation, and among his many accomplishments, Da Costa is a pioneer in paraconsistency. Those wishing a comprehensive up to date framework for human behavior from the modern two systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’ 3rd ed (2019), The Logical Structure of Human Behavior (2019), and Suicidal Utopian Delusions in the 21st Century 4th ed (2019).
The purpose of this paper is to argue against the claim that morphological computation is substantially different from other kinds of physical computation. I show that some (but not all) purported cases of morphological computation do not count as specifically computational, and that those that do are solely physical computational systems. These latter cases are not, however, specific enough: all computational systems, not only morphological ones, may (and sometimes should) be studied in various ways, including their energy efficiency, cost, reliability, and durability. Second, I critically analyze the notion of “offloading” computation to the morphology of an agent or robot, by showing that, literally, computation is sometimes not offloaded but simply avoided. Third, I point out that while the morphology of any agent is indicative of the environment that it is adapted to, or informative about that environment, it does not follow that every agent has access to its morphology as the model of its environment.
I argue that Stich's Syntactic Theory of Mind (STM) and a naturalistic narrow content functionalism run on a Language of Thought story have the same exact structure. I elaborate on the argument that narrow content functionalism is either irremediably holistic in a rather destructive sense, or else doesn't have the resources for individuating contents interpersonally. So I show that, contrary to his own advertisement, Stich's STM has exactly the same problems (like holism, vagueness, observer-relativity, etc.) that he claims plague content-based psychologies. So STM can't be any better than the Representational Theory of Mind (RTM) in its prospects for forming the foundations of a scientifically respectable psychology, whether or not RTM has the problems that Stich claims it does.
This paper connects information with computation and cognition via the concept of agents that appear at a variety of levels of organization of physical/chemical/cognitive systems – from elementary particles to atoms, molecules, life-like chemical systems, to cognitive systems starting with living cells, up to organisms and ecologies. In order to obtain this generalized framework, the concepts of information, computation and cognition are generalized. In this framework, nature can be seen as an informational structure with computational dynamics, where an (info-computational) agent is needed for the potential information of the world to actualize. Starting from the definition of information as the difference in one physical system that makes a difference in another physical system – which combines Bateson's and Hewitt's definitions – the argument is advanced for natural computation as a computational model of the dynamics of the physical world, where information processing is constantly going on, on a variety of levels of organization. This setting helps us to elucidate the relationships between computation, information, agency and cognition within a common conceptual framework, with special relevance for biology and robotics.
Here, by introducing a version of the “Unexpected Hanging paradox”, we first try to open a new way of explaining paradoxes similar to the liar paradox. Also, we will show that there is a semantic situation which no syntactical logical system can support. Finally, we propose a claim in the Theory of Computation about the consistency of this theory. One of the major claims is: the Theory of Computation and Classical Logic lead us to a contradiction.
In his entry on "Quantum Logic and Probability Theory" in the Stanford Encyclopedia of Philosophy, Alexander Wilce (2012) writes that "it is uncontroversial (though remarkable) that the formal apparatus of quantum mechanics reduces neatly to a generalization of classical probability in which the role played by a Boolean algebra of events in the latter is taken over by the 'quantum logic' of projection operators on a Hilbert space." For a long time, Patrick Suppes has opposed this view (see, for example, the papers collected in Suppes and Zanotti (1996)). Instead of changing the logic and moving from a Boolean algebra to a non-Boolean algebra, one can also 'save the phenomena' by weakening the axioms of probability theory and working instead with upper and lower probabilities. However, it is fair to say that despite Suppes' efforts, upper and lower probabilities are not particularly popular in physics or in the foundations of physics, at least so far. Instead, quantum logic is booming again, especially since quantum information and computation became hot topics. Interestingly, however, imprecise probabilities are becoming more and more popular in formal epistemology, as recent work by authors such as James Joyce (2010) and Roger White (2010) demonstrates.
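To make the contrast concrete, here is the usual lower/upper-probability setup the abstract alludes to; this is standard imprecise-probability material, with invented numbers for illustration. Given a set $\mathcal{P}$ of admissible probability measures, put
\[
\underline{P}(A) \;=\; \inf_{p \in \mathcal{P}} p(A), \qquad \overline{P}(A) \;=\; \sup_{p \in \mathcal{P}} p(A).
\]
These satisfy $\underline{P}(A) = 1 - \overline{P}(A^c)$, and finite additivity weakens to super-/sub-additivity: for disjoint $A$ and $B$, $\underline{P}(A \cup B) \ge \underline{P}(A) + \underline{P}(B)$ and $\overline{P}(A \cup B) \le \overline{P}(A) + \overline{P}(B)$. With, say, $\mathcal{P} = \{p_1, p_2\}$ and $p_1(A) = 0.2$, $p_2(A) = 0.6$, an agent's credence in $A$ is the interval $[0.2, 0.6]$ rather than a single number.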
The mainstream view in cognitive science is that computation lies at the basis of and explains cognition. Our analysis reveals that there is no compelling evidence or argument for thinking that brains compute. It makes the case for inverting the explanatory order proposed by the computational basis of cognition thesis. We give reasons to reverse the polarity of standard thinking on this topic, and ask how it is possible that computation, natural and artificial, might be based on cognition and not the other way around.
In a recent paper we have defined an analytic tableau calculus PL_16 for a functionally complete extension of Shramko and Wansing's logic based on the trilattice SIXTEEN_3. This calculus makes it possible to define syntactic entailment relations that capture central semantic relations of the logic---such as the relations |=_t, |=_f, and |=_i that each correspond to a lattice order in SIXTEEN_3; and |=, the intersection of |=_t and |=_f. It turns out that our method of characterising these semantic relations---as intersections of auxiliary relations that can be captured with the help of a single calculus---lends itself well to proving interpolation. All entailment relations just mentioned have the interpolation property, not only when they are defined with respect to a functionally complete language, but also in a range of cases where less expressive languages are considered. For example, we will show that |=, when restricted to L_{tf}, the language originally considered by Shramko and Wansing, enjoys interpolation. This answers a question that was recently posed by M. Takano.
The journal of Cognitive Computation is defined in part by the notion that biologically inspired computational accounts are at the heart of cognitive processes in both natural and artificial systems. Many studies of various important aspects of cognition (memory, observational learning, decision making, reward prediction learning, attention control, etc.) have been made by modelling the various experimental results using ever-more sophisticated computer programs. In this manner progressive inroads have been made into gaining a better understanding of the many components of cognition. Concomitantly in both science and science fiction the hope is periodically re-ignited that a manmade system can be engineered to be fully cognitive and conscious purely in virtue of its execution of an appropriate computer program. However, whilst the usefulness of the computational metaphor in many areas of psychology and neuroscience is clear, it has not gone unchallenged and in this article I will review a group of philosophical arguments that suggest either such unequivocal optimism in computationalism is misplaced—computation is neither necessary nor sufficient for cognition—or panpsychism (the belief that the physical universe is fundamentally composed of elements each of which is conscious) is true. I conclude by highlighting an alternative metaphor for cognitive processes based on communication and interaction.
Homo deceptus is a book that brings together new ideas on language, consciousness and physics into a comprehensive theory that unifies science and philosophy in a different kind of Theory of Everything. The subject of how we are to make sense of the world is addressed in a structured and ordered manner, which starts with a recognition that scientific truths are constructed within a linguistic framework. The author argues that an epistemic foundation of natural language must be understood before laying claim to any notion of reality. This foundation begins with Ludwig Wittgenstein’s Tractatus Logico-Philosophicus and the relationship of language to formal logic. Ultimately, we arrive at an answer to the question of why people believe the things they do. This is effectively a modification of Alfred Tarski’s semantic theory of truth. The second major issue addressed is the ‘dreaded’ Hard Problem of Consciousness as first stated by David Chalmers in 1995. The solution is found in the unification of consciousness, information theory and notions of physicalism. The physical world is shown to be an isomorphic representation of the phenomenological conscious experience. New concepts in understanding how language operates help to explain why this relationship has been so difficult to appreciate. The inclusion of concepts from information theory shows how a digital mechanics resolves heretofore conflicting theories in physics, cognitive science and linguistics. Scientific orthodoxy is supported, but viewed in a different light. Mainstream science is not challenged, but findings are interpreted in a manner that unifies consciousness without contradiction. Digital mechanics and formal systems of logic play central roles in combining language, consciousness and the physical world into a unified theory where all can be understood within a single consistent framework.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as ‘reservoir computing’, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: the question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
The Tree of Life has traditionally been understood to represent the history of species lineages. However, recently researchers have suggested that it might be better interpreted as representing the history of cellular lineages, sometimes called the Tree of Cells. This paper examines and evaluates reasons offered against this cellular interpretation of the Tree of Life. It argues that some such reasons are bad reasons, based either on a false attribution of essentialism, on a misunderstanding of the problem of lineage identity, or on a limited view of scientific representation. I suggest that debate about the Tree of Cells and other successors to the traditional Tree of Life should be formulated in terms of the purposes these representations may serve. In pursuing this strategy, we see that the Tree of Cells cannot serve one purpose suggested for it: as an explanation for the hierarchical nature of taxonomy. We then explore whether, instead, the tree may play an important role in the dynamic modeling of evolution. As highly-integrated complex systems, cells may influence which lineage components can successfully transfer into them and how they change once integrated. Only if they do in fact have a substantial role to play in this process might the Tree of Cells have some claim to be the Tree of Life.
Our commonsense notion of reality is supported by two critical assumptions for which we have little understanding: the conscious experience which underpins the observations integral to the scientific method, and language, which is the method by which all theories, scientific or otherwise, are communicated. This book examines both of these matters in detail and arrives at a new theoretical foundation for understanding how nature undertakes the task of building the universe. Creating Reality is a synthesis of Darwin’s The Origin of Species and Douglas Hofstadter’s Gödel, Escher, Bach (GEB). It is an intellectual journey that addresses the most profound questions facing science and philosophy today and delivers the greatest transformation in the way we view the world since Darwin’s masterpiece. The book is targeted at anyone up for the cerebral challenge of thinking deeply about how we make sense of the world and our existence. The book is thoroughly researched and referenced, drawing from the most highly credentialed sources. When we take stock of where we are in our understanding of nature, it seems that three important questions stand out for which we have few answers. More than just questions, they represent gaps in our comprehension of what makes the universe tick. These are the thematic focal points of this book: • What is the nature of belief? (An examination of truth as a function of language). • What is consciousness? • What is the relationship between mathematics and the physical world? The exploration of these three questions unravels the mystery behind the principal means by which we come to have knowledge of the world. So our journey begins by asking the more generic question: How do we come to know the world? The book makes no assumptions about this thing called ‘reality’ and takes a fresh look at the presuppositions underlying commonsense notions of reality.
Understanding computation as “a process of the dynamic change of information” leads us to look at the different types of computation and information. Computation of information does not exist alone by itself but is to be considered as part of a system that uses it for some given purpose. Information can be meaningless, like a thunderstorm noise; it can be meaningful, like an alert signal or the representation of a desired food. A thunderstorm noise contributes to the generation of meaningful information about coming rain. An alert signal has a meaning in that it allows a safety constraint to be satisfied. The representation of a desired food contributes to the satisfaction of some metabolic constraints of the organism. Computations on information and representations will differ in nature and in complexity, as the systems that link them have different constraints to satisfy. Animals have survival constraints to satisfy. Humans have many specific constraints in addition. And computers will compute what the designer and programmer ask for. We propose to analyze the different relations between information, meaning and representation by taking an evolutionary approach to the systems that link them. Such a bottom-up approach allows us to start with simple organisms and avoids an implicit focus on humans, which is the most complex and difficult case. To make available a common background usable for the many different cases, we use a systemic tool that defines the generation of meaningful information by and for a system submitted to a constraint [Menant, 2003]. This systemic tool allows us to position information, meaning and representations for systems relative to environmental entities in an evolutionary perspective. We begin by positioning the notions of information, meaning and representation and recall the characteristics of the Meaning Generator System (MGS), which links a system submitted to a constraint to its environment. We then use the MGS for animals and highlight the network nature of the interrelated meanings about an entity of the environment. This brings us to define the representation of an item for an agent as the network of meanings relative to that item for the agent. Such meaningful representations embed the agents in their environments and are far from Good Old Fashioned Artificial Intelligence type representations. The MGS approach is then applied to humans, with a limitation resulting from the unknown nature of human consciousness. Application of the MGS to artificial systems leads us to look for compatibilities with different levels of Artificial Intelligence (AI), like embodied-situated AI, the Guidance Theory of Representations, and enactive AI. Concerns relative to different types of autonomy and organic or artificial constraints are highlighted. We finish by summarizing the points addressed and by proposing some continuations.
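The Haskell toy below is my own reading of the meaning-generator idea sketched in the abstract, not Menant's formal definition: a system under a constraint generates a meaning from incoming information only when that information bears on the constraint, and the "representation" of an item is the bundle of meanings the item gives rise to for the agent. All names and cases are invented for illustration.

    -- Toy sketch of a Meaning Generator System (illustrative only).
    data Constraint = StaySafe | FindFood deriving (Eq, Show)
    data Info       = Noise | AlertSignal | FoodScent deriving (Eq, Show)

    -- A generated meaning links the information to the constraint it
    -- matters for and to the action it suggests.
    data Meaning = Meaning { about :: Info, for :: Constraint, action :: String }
      deriving Show

    -- Meaning is generated only when the information bears on the constraint.
    generateMeaning :: Constraint -> Info -> Maybe Meaning
    generateMeaning StaySafe AlertSignal = Just (Meaning AlertSignal StaySafe "flee")
    generateMeaning FindFood FoodScent   = Just (Meaning FoodScent FindFood "approach")
    generateMeaning _        _           = Nothing   -- meaningless for this system

    -- Per the abstract, a representation of an item for an agent is the
    -- network of meanings that item gives rise to for the agent.
    represent :: [Constraint] -> Info -> [Meaning]
    represent cs i = [ m | c <- cs, Just m <- [generateMeaning c i] ]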
In this paper, I defend a grammatical account of scalar implicatures. In particular, I submit new evidence in favor of the contextual blindness principle, assumed in recent versions of the grammatical account. I argue that mismatching scalar implicatures can be generated even when the restrictor of the universal quantifier in a universal alternative is contextually known to be empty. The crucial evidence consists of a hitherto unnoticed oddness asymmetry between formally analogous existential sentences with reference failure NPs. I conclude that the generation of mismatching scalar implicatures does not require contextual access.