The aim of this paper is to describe and analyze the epistemological justification of a proposal initially made by the biomathematician Robert Rosen in 1958. In this theoretical proposal, Rosen suggests using the mathematical concept of “category” and the correlative concept of “natural equivalence” in mathematical modeling applied to living beings. Our questions are the following: According to Rosen, to what extent does the mathematical notion of category give access to more “natural” formalisms in the modeling of living beings? Is the so-called “naturalness” of some kinds of equivalences (which the mathematical notion of category makes it possible to generalize and to put at the forefront) analogous to the naturalness of living systems? Rosen appears to answer “yes” and to ground this transfer of the concept of “natural equivalence” into biology on such an analogy. But this hypothesis, although fertile, remains debatable. Finally, this paper gives a brief account of the later evolution of Rosen’s arguments on this topic. In particular, it sheds light on the new role played by the notion of “category” in his more recent objections to the computational models that have pervaded almost every domain of biology since the 1990s.
Throughout this paper, we try to show how and why our mathematical framework seems inappropriate for solving problems in the Theory of Computation. More exactly, the concept of turning back in time, as it appears in certain paradoxes, causes inconsistency in the modeling of the concept of time in some semantic situations. In the first chapter, by introducing a version of the “Unexpected Hanging Paradox”, we attempt to open a new explanation for some paradoxes. In the second step, by applying this paradox, it is demonstrated that any formalized system for the Theory of Computation based on Classical Logic and the Turing Model of Computation leads us to a contradiction. We conclude that our mathematical framework is inappropriate for the Theory of Computation. Furthermore, the result provides a reason why many problems in Complexity Theory resist solution. (This work was completed on 2017-5-2, posted on vixra on 2017-5-14, and presented at Unilog 2018, Vichy.)
At first sight the Theory of Computation i) relies on a kind of mathematics based on the notion of potential infinity; ii) has a theoretical organization irreducible to an axiomatic one; rather, it is organized around solving a problem: “What is a computation?”; iii) makes essential use of doubly negated propositions of non-classical logic, in particular in the word expressions of the Church–Turing thesis; iv) includes ad absurdum proofs among its arguments. Under these aspects, it is like many other scientific theories, in particular the first theories of both mechanical machines and heat machines. A more accurate examination of the Theory of Computation shows a difference from the above-mentioned theories: it essentially includes an odd notion, “thesis”, to which no theorem corresponds. On the other hand, the arguments of each of the other theories conclude with a doubly negated predicate which then, by applying the inverse of the ‘negative’ translation, is translated into the corresponding affirmative predicate. By also taking into account three criticisms of the current Theory of Computation, a rational re-formulation of it is sketched out; to the Church–Turing thesis of the usual theory there corresponds a similar proposition, yet one connecting physical total computation functions with constructive mathematical total computation functions.
A practical viewpoint links reality, representation, and language to calculation through the concept of the Turing (1936) machine, the mathematical model of our computers. Given the Gödel incompleteness theorems (1931) and the insolvability of the so-called halting problem (Turing 1936; Church 1936) for a single classical Turing machine, one of the simplest hypotheses is to suggest completeness for a pair of them. That is consistent with the provability of completeness by means of two independent Peano arithmetics discussed in Section I. Many modifications of Turing machines, including quantum ones, are examined in Section II with respect to the halting problem and completeness, and the model of two independent Turing machines seems to generalize them. Then, that pair can be postulated as the formal definition of reality, which is therefore complete, unlike either machine standalone, which remains incomplete without its complementary counterpart. Representation is formally defined as a one-to-one mapping between the two Turing machines, and the set of all those mappings can be considered as “language”, therefore including metaphors as mappings different from representation. Section III investigates that formal relation of “reality”, “representation”, and “language” modeled by (at least two) Turing machines. The independence of (two) Turing machines is interpreted by means of game theory, and especially of the Nash equilibrium, in Section IV. Choice, and information as the quantity of choices, are involved. That approach seems to be equivalent to the one based on set theory and the concept of actual infinity in mathematics, while allowing practical implementations.
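Since the abstract turns on the insolvability of the halting problem, here is a minimal Python sketch of Turing's diagonal argument (our illustration, not the paper's construction; the names `halts` and `paradox` are hypothetical):

```python
# A minimal sketch of the diagonal argument behind the halting problem.
# `halts(prog, data)` is a *hypothetical* oracle returning True iff prog(data)
# terminates; the construction below shows no total computable oracle can exist.

def halts(prog, data):
    """Hypothetical halting oracle -- not actually implementable."""
    raise NotImplementedError("no total computable halting test exists")

def paradox(prog):
    # Loop forever exactly when the oracle says prog(prog) halts.
    if halts(prog, prog):
        while True:
            pass
    return "halted"

# Feeding paradox to itself yields a contradiction either way:
# if halts(paradox, paradox) is True, paradox(paradox) loops; if False, it halts.
```

The contradiction noted in the final comment is the whole point: no total, computable `halts` can exist for a single classical machine.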
The Turing machine is one of the simple abstract computational devices that can be used to investigate the limits of computability. In this paper, they are considered from several points of view that emphasize the importance and the relativity of mathematical languages used to describe the Turing machines. A deep investigation is performed on the interrelations between mechanical computations and their mathematical descriptions emerging when a human (the researcher) starts to describe a Turing machine (the object of the study) by different mathematical languages (the instruments of investigation). Together with traditional mathematical languages using such concepts as ‘enumerable sets’ and ‘continuum’, a new computational methodology allowing one to measure the number of elements of different infinite sets is used in this paper. It is shown how mathematical languages used to describe the machines limit our possibilities to observe them. In particular, notions of observable deterministic and non-deterministic Turing machines are introduced and conditions ensuring that the latter can be simulated by the former are established.
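For readers who want the bare mechanics behind this talk of Turing machines, here is a minimal deterministic simulator in Python (an illustrative sketch of the textbook model, not of the paper's "observable" machines; `run_tm` and the `flip` table are our own names):

```python
# A minimal deterministic Turing machine simulator. The transition table maps
# (state, symbol) -> (new_state, written_symbol, head_move).

def run_tm(delta, tape, state="q0", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))       # sparse tape indexed by integer cells
    head = 0
    for _ in range(max_steps):
        sym = tape.get(head, blank)
        if (state, sym) not in delta:  # no applicable rule: the machine halts
            break
        state, tape[head], move = delta[(state, sym)]
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells)

# Example: invert a binary string, moving right until a blank is reached.
flip = {("q0", "0"): ("q0", "1", "R"),
        ("q0", "1"): ("q0", "0", "R")}
print(run_tm(flip, "1011"))  # -> "0100"
```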
Review of Dowek, Gilles, Computation, Proof, Machine, Cambridge University Press, Cambridge, 2015. Translation of Les Métamorphoses du calcul, Le Pommier, Paris, 2007. Translation from the French by Pierre Guillot and Marion Roman.
According to the computational theory of mind (CTM), to think is to compute. But what is meant by the word 'compute'? The generally given answer is this: Every case of computing is a case of manipulating symbols, but not vice versa; a manipulation of symbols must be driven exclusively by the formal properties of those symbols if it is to qualify as a computation. In this paper, I will present the following argument. Words like 'form' and 'formal' are ambiguous, as they can refer to form in either the syntactic or the morphological sense. CTM fails on each disambiguation, and the arguments for CTM immediately cease to be compelling once we register that ambiguity. The terms 'mechanical' and 'automatic' are comparably ambiguous. Once these ambiguities are exposed, it turns out that there is no possibility of mechanizing thought, even if we confine ourselves to domains where all problems can be settled through decision-procedures. The impossibility of mechanizing thought thus has nothing to do with recherché mathematical theorems, such as those proven by Gödel and Rosser. A related point is that CTM involves, and is guilty of reinforcing, a misunderstanding of the concept of an algorithm.
The present volume is an introduction to the use of tools from computability theory and reverse mathematics to study combinatorial principles, in particular Ramsey's theorem and special cases such as Ramsey's theorem for pairs. It would serve as an excellent textbook for graduate students who have completed a course on computability theory.
For millennia, knowledge has eluded a precise definition. The industrialization of knowledge (IoK) and the associated proliferation of the so-called knowledge communities in the last few decades caused this state of affairs to deteriorate, namely by creating a trio composed of data, knowledge, and information (DIK) that is not unlike the aporia of the trinity in philosophy. This calls for a general theory of knowledge (ToK) that can work as a foundation for a science of knowledge (SoK) and additionally distinguishes knowledge from both data and information. In this paper, I attempt to sketch this generality via the establishing of both knowledge structures and knowledge systems that can then be adopted/adapted by the diverse communities for their respective knowledge technologies and practices. This is achieved by means of a formal, indeed mathematical, approach to epistemological matters, a.k.a. formal epistemology. The corresponding application focus is on knowledge systems implementable as computer programs.
Scientists use models to know the world. It is usually assumed that mathematicians doing pure mathematics do not. Mathematicians doing pure mathematics prove theorems about mathematical entities like sets, numbers, geometric figures, spaces, etc.; they compute various functions and solve equations. In this paper, I want to exhibit models built by mathematicians to study the fundamental components of spaces and, more generally, of mathematical forms. I focus on one area of mathematics where models occupy a central role, namely homotopy theory. I argue that mathematicians introduce genuine models and I offer a rough classification of these models.
This paper is concerned with the construction of theories of software systems yielding adequate predictions of their target systems’ computations. It is first argued that mathematical theories of programs are not able to provide predictions that are consistent with observed executions. Empirical theories of software systems are here introduced semantically, in terms of a hierarchy of computational models that are supplied by formal methods and testing techniques in computer science. Both deductive top-down and inductive bottom-up approaches to the discovery of semantic software theories are rejected, in favour of the abductive process of hypothesising and refining models at each level in the hierarchy until they become satisfactorily predictive. Empirical theories of computational systems are required to be modular, as modular are most software verification and testing activities. We argue that logical relations must thereby be defined among models representing different modules in a semantic theory of a modular software system. We argue that scientific structuralism is unable to define the module relations needed in software modular theories. The algebraic Theory of Institutions is finally introduced to specify the logical structure of modular semantic theories of computational systems.
Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper, it is called artificial evil (AE) and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE's concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent's action to be morally good or evil can be determined even in the absence of biologically sentient participants, thus allowing artificial agents not only to perpetrate evil (and for that matter good) but conversely to `receive' or `suffer from' it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner. The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective (though not vice versa). In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics.
The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature in distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is extrinsic to formal procedures as such, and the intended interpretation of syntax makes no difference to the execution of an algorithm. So the unique 'content' postulated by RTM is superfluous to the formal procedures of CTM. And once these procedures are implemented in a physical mechanism, it is exclusively the causal properties of the physical mechanism that are responsible for all aspects of the system's behaviour. So once again, postulated content is rendered superfluous. To the extent that semantic content may appear to play a role in behaviour, it must be syntactically encoded within the system, and just as in a standard computational artefact, so too with the human mind/brain - it's pure syntax all the way down to the level of physical implementation. Hence 'content' is at most a convenient meta-level gloss, projected from the outside by human theorists, which itself can play no role in cognitive processing.
General Relativity says gravity is a push caused by space-time's curvature. Combining General Relativity with E=mc2 results in distances being totally deleted from space-time/gravity by future technology, and in expansion or contraction of the universe as a whole being eliminated. The road to these conclusions has branches shining light on supersymmetry and superconductivity. This push of gravitational waves may be directed from intergalactic space towards galaxy centres, helping to hold galaxies together and also creating supermassive black holes. Together with the waves' possible production of "dark" matter in higher dimensions, there's ample reason to believe knowledge of gravitational waves has barely begun. Advanced waves are usually discarded by scientists because they're thought to violate the causality principle. Just as advanced waves are usually discarded, very few physicists or mathematicians will venture to ascribe a physical meaning to Wick rotation and "imaginary" time. Here, that maths (when joined with Mobius-strip and Klein-bottle topology) unifies space and time into one space-time, and allows construction of what may be called "imaginary computers". This research idea you're reading is not intended to be a formal theory presenting scientific jargon and mathematical formalism.
I have read many recent discussions of the limits of computation and the universe as computer, hoping to find some comments on the amazing work of polymath physicist and decision theorist David Wolpert, but have not found a single citation, and so I present this very brief summary. Wolpert proved some stunning impossibility or incompleteness theorems (1992 to 2008; see arxiv.org) on the limits to inference (computation) that are so general they are independent of the device doing the computation, and even independent of the laws of physics, so they apply across computers, physics, and human behavior. They use Cantor's diagonalization, the liar paradox and worldlines to provide what may be the ultimate theorem in Turing Machine Theory, and seemingly provide insights into impossibility, incompleteness, the limits of computation, and the universe as computer, in all possible universes and all beings or mechanisms, generating, among other things, a non-quantum-mechanical uncertainty principle and a proof of monotheism. There are obvious connections to the classic work of Chaitin, Solomonoff, Komolgarov and Wittgenstein and to the notion that no program (and thus no device) can generate a sequence (or device) with greater complexity than it possesses. One might say this body of work implies atheism, since no entity can be more complex than the physical universe, and from the Wittgensteinian viewpoint 'more complex' is meaningless (has no conditions of satisfaction, i.e., truth-maker or test). Even a 'God' (i.e., a 'device' with limitless time/space and energy) cannot determine whether a given 'number' is 'random', nor find a certain way to show that a given 'formula', 'theorem' or 'sentence' or 'device' (all these being complex language games) is part of a particular 'system'. Those wanting a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle' 2nd ed (2019). Those interested in more of my writings may see 'Talking Monkeys - Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2019' 2nd ed (2019) and 'Suicidal Utopian Delusions in the 21st Century' 4th ed (2019).
Saul Kripke once noted that there is a tight connection between computation and de re knowledge of whatever the computation acts upon. For example, the Euclidean algorithm can produce knowledge of which number is the greatest common divisor of two numbers. Arguably, algorithms operate directly on syntactic items, such as strings, and on numbers and the like only via how the numbers are represented. So we broach matters of notation. The purpose of this article is to explore the relationship between the notations acceptable for computation, the usual idealizations involved in theories of computability, flowing from Alan Turing’s monumental work, and de re propositional attitudes toward numbers and other mathematical objects.
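The Euclidean algorithm cited here is short enough to state as code; a standard Python rendering (ours, not the article's) makes the point vivid that the procedure acts on numerals while delivering knowledge about numbers:

```python
# Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b) until b = 0.
# The procedure manipulates decimal numerals, yet yields de re knowledge of
# *which* number is the greatest common divisor.

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # -> 21, the greatest common divisor
```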
Reverse mathematics studies which subsystems of second order arithmetic are equivalent to key theorems of ordinary, non-set-theoretic mathematics. The main philosophical application of reverse mathematics proposed thus far is foundational analysis, which explores the limits of different foundations for mathematics in a formally precise manner. This paper gives a detailed account of the motivations and methodology of foundational analysis, which have heretofore been largely left implicit in the practice. It then shows how this account can be fruitfully applied in the evaluation of major foundational approaches by a careful examination of two case studies: a partial realization of Hilbert’s program due to Simpson [1988], and predicativism in the extended form due to Feferman and Schütte. Shore [2010, 2013] proposes that equivalences in reverse mathematics be proved in the same way as inequivalences, namely by considering only omega-models of the systems in question. Shore refers to this approach as computational reverse mathematics. This paper shows that despite some attractive features, computational reverse mathematics is inappropriate for foundational analysis, for two major reasons. Firstly, the computable entailment relation employed in computational reverse mathematics does not preserve justification for the foundational programs above. Secondly, computable entailment is a Pi-1-1 complete relation, and hence employing it commits one to theoretical resources which outstrip those available within any foundational approach that is proof-theoretically weaker than Pi-1-1-CA0.
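For orientation, the subsystem Pi-1-1-CA0 mentioned at the end sits at the top of the standard "Big Five" hierarchy of reverse mathematics, which forms a strictly increasing chain in proof-theoretic strength (a textbook fact, not a claim of the paper):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% The "Big Five" subsystems of second-order arithmetic, strictly increasing in strength.
\[
  \mathsf{RCA}_0 \subsetneq \mathsf{WKL}_0 \subsetneq \mathsf{ACA}_0
    \subsetneq \mathsf{ATR}_0 \subsetneq \Pi^1_1\text{-}\mathsf{CA}_0
\]
\end{document}
```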
Information Theory, Evolution and The Origin of Life: The Origin and Evolution of Life as a Digital Message: How Life Resembles a Computer, Second Edition. Hubert P. Yockey, 2005, Cambridge University Press, Cambridge: 400 pages, index; hardcover, US $60.00; ISBN: 0-521-80293-8. "The reason that there are principles of biology that cannot be derived from the laws of physics and chemistry lies simply in the fact that the genetic information content of the genome for constructing even the simplest organisms is much larger than the information content of these laws." Yockey in his previous book (1992, 335). In this new book, Information Theory, Evolution and The Origin of Life, Hubert Yockey points out that the digital, segregated, and linear character of the genetic information system has a fundamental significance. If inheritance blended rather than segregated, Darwinian evolution would not occur. If inheritance were analog instead of digital, evolution would also be impossible, because it would be impossible to remove the effect of noise. In this way, life is guided by information, and so information is a central concept in molecular biology. The author presents a picture of how the main concepts of the genetic code were developed. He was able to show that, despite Francis Crick's belief that the Central Dogma is only a hypothesis, the Central Dogma of Francis Crick is a mathematical consequence of the redundant nature of the genetic code. The redundancy arises from the fact that the DNA and mRNA alphabet is formed by triplets of 4 nucleotides, so the number of letters (triplets) is 64, whereas the proteome alphabet has only 20 letters (20 amino acids), and so the translation from the larger alphabet to the smaller one is necessarily redundant. Except for Tryptophan and Methionine, all amino acids are coded by more than one triplet; therefore, it is undecidable which source code letter was actually sent from mRNA. This proof has a corollary stating that there are no such mathematical constraints for protein-protein communication. With this clarification, Yockey contributes to diminishing the widespread confusion related to such a central concept as the Central Dogma. Thus the Central Dogma prohibits the origin of life "proteins first." Proteins cannot be generated by "self-organization." Understanding this property of the Central Dogma will have a serious impact on research on the origin of life.
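The redundancy argument can be made concrete with a small fragment of the standard RNA codon table (our illustration, not the book's; only a handful of assignments are shown):

```python
# Illustrative fragment of the standard RNA codon table. Translation
# codon -> amino acid is a function, but its inverse is one-to-many, which is
# the redundancy behind Yockey's reading of the Central Dogma.
CODONS = {
    "AUG": "Met",  # the only codon for methionine
    "UGG": "Trp",  # the only codon for tryptophan
    "UUU": "Phe", "UUC": "Phe",
    "UUA": "Leu", "UUG": "Leu", "CUU": "Leu",
    "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
}

def reverse_translate(amino_acid):
    """All codons that could have produced a given amino acid."""
    return [c for c, aa in CODONS.items() if aa == amino_acid]

print(reverse_translate("Leu"))  # six candidates: the source codon is undecidable
print(reverse_translate("Met"))  # ['AUG']: one of the two exceptions noted above
```

Reverse translation is one-to-many for leucine but not for methionine or tryptophan, which is exactly the asymmetry the book exploits.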
2nd edition. Many-valued logics are those logics that have more than the two classical truth values, to wit, true and false; in fact, they can have from three to infinitely many truth values. This property, together with truth-functionality, provides a powerful formalism to reason in settings where classical logic, as well as other non-classical logics, is of no avail. Indeed, originally motivated by philosophical concerns, these logics soon proved relevant for a plethora of applications ranging from switching theory to cognitive modeling, and they are today in more demand than ever, due to the realization that inconsistency and vagueness in knowledge bases and information processes are not only inevitable and acceptable, but also perhaps welcome. The main modern applications of (any) logic are to be found in the digital computer, and we thus require practical knowledge of how to computerize (which also means automate) decisions (i.e. reasoning) in many-valued logics. This, in turn, necessitates a mathematical foundation for these logics. This book provides both this mathematical foundation and the practical knowledge in a rigorous, yet accessible, text, while at the same time situating these logics in the context of the satisfiability problem (SAT) and automated deduction. The main text is complemented with a large selection of exercises, a plus for the reader wishing to not only learn about, but also do something with, many-valued logics.
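Truth-functionality in the three-valued case can be shown in a few lines; the sketch below uses the strong Kleene / Łukasiewicz conventions over the values {0, 0.5, 1} (an illustrative example of ours, not an excerpt from the book):

```python
# Three-valued connectives: negation, conjunction and disjunction follow the
# strong Kleene tables; implication follows Lukasiewicz. Truth values are 0, 0.5, 1.

VALUES = (0, 0.5, 1)

def NOT(a):    return 1 - a
def AND(a, b): return min(a, b)
def OR(a, b):  return max(a, b)
def IMP(a, b): return min(1, 1 - a + b)   # Lukasiewicz implication

# Truth table for "a implies not-a" over the three values:
for a in VALUES:
    print(a, IMP(a, NOT(a)))
```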
Until recently, discussion of virtues in the philosophy of mathematics has been fleeting and fragmentary at best. But in the last few years this has begun to change. As virtue theory has grown ever more influential, not just in ethics where virtues may seem most at home, but particularly in epistemology and the philosophy of science, some philosophers have sought to push virtues out into unexpected areas, including mathematics and its philosophy. But there are some mathematicians already there, ready to meet them, who have explicitly invoked virtues in discussing what is necessary for a mathematician to succeed. In both ethics and epistemology, virtue theory tends to emphasize character virtues, the acquired excellences of people. But people are not the only sort of thing whose excellences may be identified as virtues. Theoretical virtues have attracted attention in the philosophy of science as components of an account of theory choice. Within the philosophy of mathematics, and mathematics itself, attention to virtues has emerged from a variety of disparate sources. Theoretical virtues have been put forward both to analyse the practice of proof and to justify axioms; intellectual virtues have found multiple applications in the epistemology of mathematics; and ethical virtues have been offered as a basis for understanding the social utility of mathematical practice. Indeed, some authors have advocated virtue epistemology as the correct epistemology for mathematics (and perhaps even as the basis for progress in the metaphysics of mathematics). This topical collection brings together several of the researchers who have begun to study mathematical practices from a virtue perspective with the intention of consolidating and encouraging this trend.
I have read many recent discussions of the limits of computation and the universe as computer, hoping to find some comments on the amazing work of polymath physicist and decision theorist David Wolpert, but have not found a single citation and so I present this very brief summary. Wolpert proved some stunning impossibility or incompleteness theorems (1992 to 2008; see arxiv.org) on the limits to inference (computation) that are so general they are independent of the device doing the computation, and even independent of the laws of physics, so they apply across computers, physics, and human behavior. They make use of Cantor's diagonalization, the liar paradox and worldlines to provide what may be the ultimate theorem in Turing Machine Theory, and seemingly provide insights into impossibility, incompleteness, the limits of computation, and the universe as computer, in all possible universes and all beings or mechanisms, generating, among other things, a non-quantum-mechanical uncertainty principle and a proof of monotheism. There are obvious connections to the classic work of Chaitin, Solomonoff, Komolgarov and Wittgenstein and to the notion that no program (and thus no device) can generate a sequence (or device) with greater complexity than it possesses. One might say this body of work implies atheism since there cannot be any entity more complex than the physical universe, and from the Wittgensteinian viewpoint, ‘more complex’ is meaningless (has no conditions of satisfaction, i.e., truth-maker or test). Even a ‘God’ (i.e., a ‘device’ with limitless time/space and energy) cannot determine whether a given ‘number’ is ‘random’, nor find a certain way to show that a given ‘formula’, ‘theorem’ or ‘sentence’ or ‘device’ (all these being complex language games) is part of a particular ‘system’. Those wishing a comprehensive up to date framework for human behavior from the modern two systems view may consult my article The Logical Structure of Philosophy, Psychology, Mind and Language as Revealed in Wittgenstein and Searle 59p (2016). For all my articles on Wittgenstein and Searle see my e-book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Wittgenstein and Searle’ 367p (2016). Those interested in all my writings in their most recent versions may consult my e-book ‘Philosophy, Human Nature and the Collapse of Civilization - Articles and Reviews 2006-2016’ 662p (2016). All of my papers and books have now been published in revised versions both in ebooks and in printed books: Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2017 (2017) https://www.amazon.com/dp/B071HVC7YP; The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle--Articles and Reviews 2006-2016 (2017) https://www.amazon.com/dp/B071P1RP1B; Suicidal Utopian Delusions in the 21st century: Philosophy, Human Nature and the Collapse of Civilization - Articles and Reviews 2006-2017 (2017) https://www.amazon.com/dp/B0711R5LGX.
The notion of computability is developed through the study of the behavior of a set of languages interpreted over the natural numbers which contain their own fully defined satisfaction predicate and whose only other vocabulary is limited to "0", individual variables, the successor function, the identity relation and operators for disjunction, conjunction, and existential quantification.
This paper is on Aristotle's conception of the continuum. It is argued that although Aristotle did not have the modern conception of real numbers, his account of the continuum does mirror the topology of the real number continuum in modern mathematics especially as seen in the work of Georg Cantor. Some differences are noted, particularly as regards Aristotle's conception of number and the modern conception of real numbers. The issue of whether Aristotle had the notion of open versus closed intervals is discussed. Finally, it is suggested that one reason there is a common structure between Aristotle's account of the continuum and that found in Cantor's definition of the real number continuum is that our intuitions about the continuum have their source in the experience of the real spatiotemporal world. A plea is made to consider Aristotle's abstractionist philosophy of mathematics anew.
Analysis is given of the Omega Point cosmology, an extensively peer-reviewed proof (i.e., mathematical theorem) published in leading physics journals by professor of physics and mathematics Frank J. Tipler, which demonstrates that in order for the known laws of physics to be mutually consistent, the universe must diverge to infinite computational power as it collapses into a final cosmological singularity, termed the Omega Point. The theorem is an intrinsic component of the Feynman-DeWitt-Weinberg quantum gravity/Standard Model Theory of Everything (TOE) describing and unifying all the forces in physics, which is itself also required by the known physical laws. With infinite computational resources, the dead can be resurrected--never to die again--via perfect computer emulation of the multiverse from its start at the Big Bang. Miracles are also physically allowed via electroweak quantum tunneling controlled by the Omega Point cosmological singularity. The Omega Point is a different aspect of the Big Bang cosmological singularity--the first cause--and the Omega Point has all the haecceities claimed for God in the traditional religions. From this analysis, conclusions are drawn regarding the social, ethical, economic and political implications of the Omega Point cosmology.
Hannes Leitgeb formulated eight norms for theories of truth in his paper [5]: `What Theories of Truth Should be Like (but Cannot be)'. We shall present in this paper a theory of truth for suitably constructed languages which contain the first-order language of set theory, and prove that it satisfies all those norms.
In this paper a class of languages which are formal enough for mathematical reasoning is introduced. Its languages are called mathematically agreeable (MA). Languages containing a given MA language L, and being sublanguages of L augmented by a monadic predicate, are constructed. A mathematical theory of truth (MTT for short) is formulated for some of those languages. MTT makes them fully interpreted MA languages which possess their own truth predicates. MTT is shown to conform well with the eight norms formulated for theories of truth in the paper 'What Theories of Truth Should be Like (but Cannot be)' by Hannes Leitgeb. MTT is also free from infinite regress, providing a proper framework to study the regress problem. The main tools used in proofs are Zermelo-Fraenkel (ZF) set theory and classical logic.
The relationship between abstract formal procedures and the activities of actual physical systems has proved to be surprisingly subtle and controversial, and there are a number of competing accounts of when a physical system can be properly said to implement a mathematical formalism and hence perform a computation. I defend an account wherein computational descriptions of physical systems are high-level normative interpretations motivated by our pragmatic concerns. Furthermore, the criteria of utility and success vary according to our diverse purposes and pragmatic goals. Hence there is no independent or uniform fact to the matter, and I advance the ‘anti-realist’ conclusion that computational descriptions of physical systems are not founded upon deep ontological distinctions, but rather upon interest-relative human conventions. Hence physical computation is a ‘conventional’ rather than a ‘natural’ kind.
Investigation into the sequence structure of the genetic code by means of an informatic approach is a real success story. The features of human language are also the object of investigation within the realm of formal language theories. They focus on the common rules of a universal grammar that lies behind all languages and determines the generation of syntactic structures. This universal grammar is a depiction of material reality, i.e., the hidden logical order of things and its relations determined by natural laws. Therefore mathematics is viewed not only as an appropriate tool to investigate human language and genetic code structures through computer science-based formal language theory but is itself a depiction of material reality. This confusion between language as a scientific tool to describe observations/experiences within cognitively constructed models and formal language as a direct depiction of material reality occurs not only in current approaches but was the central focus of the philosophy of science debate in the twentieth century, with rather unexpected results. This article recalls these results and their implications for more recent mathematical approaches that also attempt to explain the evolution of human language.
This dissertation examines aspects of the interplay between computing and scientific practice. The appropriate foundational framework for such an endeavour is real computability rather than classical computability theory. This is so because physical sciences, engineering, and applied mathematics mostly employ functions defined on continuous domains. But, contrary to the case of computation over the natural numbers, there is no universally accepted framework for real computation; rather, there are two incompatible approaches, computable analysis and the BSS model, both claiming to formalise algorithmic computation and to offer foundations for scientific computing. The dissertation consists of three parts. In the first part, we examine what notion of 'algorithmic computation' underlies each approach and how it is respectively formalised. It is argued that the very existence of the two rival frameworks indicates that 'algorithm' is not one unique concept in mathematics, but it is used in more than one way. We test this hypothesis for consistency with mathematical practice as well as with key foundational works that aim to define the term. As a result, new connections between certain subfields of mathematics and computer science are drawn, and a distinction between 'algorithms' and 'effective procedures' is proposed. In the second part, we focus on the second goal of the two rival approaches to real computation; namely, to provide foundations for scientific computing. We examine both frameworks in detail, what idealisations they employ, and how they relate to floating-point arithmetic systems used in real computers. We explore limitations and advantages of both frameworks, and answer questions about which one is preferable for computational modelling and which one for addressing general computability issues. In the third part, analog computing and its relation to analogue (physical) modelling in science are investigated. Based on some paradigmatic cases of the former, a certain view about the nature of computation is defended, and the indispensable role of representation in it is emphasized and accounted for. We also propose a novel account of the distinction between analog and digital computation and, based on it, we compare analog computational modelling to physical modelling. It is concluded that the two practices, despite their apparent similarities, are orthogonal.
Very plausibly, nothing can be a genuine computing system unless it meets an input-sensitivity requirement. Otherwise all sorts of objects, such as rocks or pails of water, can count as performing computations, even such as might suffice for mentality, thus threatening computationalism about the mind with panpsychism. Maudlin (J Philos 86:407–432, 1989) and Bishop (2002a, b) have argued, however, that such a requirement creates difficulties for computationalism about conscious experience, putting it in conflict with the very intuitive thesis that conscious experience supervenes on physical activity. Klein (Synthese 165:141–153, 2008) proposes a way for computationalists about experience to avoid panpsychism while still respecting the supervenience of experience on activity. I argue that his attempt to save computational theories of experience from Maudlin’s and Bishop’s critique fails.
Throughout what is now the more than 50-year history of the computer many theories have been advanced regarding the contribution this machine would make to changes both in the structure of society and in ways of thinking. Like other theories regarding the future, these should also be taken with a pinch of salt. The history of the development of computer technology contains many predictions which have failed to come true and many applications that have not been foreseen. While we must reserve judgment as to the question of the impact on the structure of society and human thought, there is no reason to wait for history when it comes to the question: what are the properties that could give the computer such far-reaching importance? The present book is intended as an answer to this question. The fact that this is a theoretical analysis is due to the nature of the subject. No other possibilities are available because such a description of the properties of the computer must be valid for any kind of application. An additional demand is that the description should be capable of providing an account of the properties which permit and limit these possible applications, just as it must make it possible to characterize a computer as distinct from a) other machines whether clocks, steam engines, thermostats, or mechanical and automatic calculating machines, b) other symbolic media whether printed, mechanical, or electronic and c) other symbolic languages whether ordinary languages, spoken or written, or formal languages. This triple limitation, however, (with regard to other machines, symbolic media and symbolic languages) raises a theoretical question as it implies a meeting between concepts of mechanical-deterministic systems, which stem from mathematical physics, and concepts of symbolic systems which stem from the description of symbolic activities common to the humanities. The relationship between science and the humanities has traditionally been seen from a dualistic perspective, as a relationship between two clearly separate subject areas, each studied on its own set of premises and using its own methods. In the present case, however, this perspective cannot be maintained since there is both a common subject area and a new - and specific - kind of interaction between physical and symbolic processes.
Some authors have begun to appeal directly to studies of argumentation in their analyses of mathematical practice. These include researchers from an impressively diverse range of disciplines: not only philosophy of mathematics and argumentation theory, but also psychology, education, and computer science. This introduction provides some background to their work.
We overview logical and computational explanations of the notion of tractability as applied in cognitive science. We start by introducing the basics of mathematical theories of complexity: computability theory, computational complexity theory, and descriptive complexity theory. Computational philosophy of mind often identifies mental algorithms with computable functions. However, with the development of programming practice it has become apparent that for some computable problems finding effective algorithms is hardly possible. Some problems need too much computational resource, e.g., time or memory, to be practically computable. Computational complexity theory is concerned with the amount of resources required for the execution of algorithms and, hence, the inherent difficulty of computational problems. An important goal of computational complexity theory is to categorize computational problems via complexity classes, and in particular, to identify efficiently solvable problems and draw a line between tractability and intractability. We survey how complexity can be used to study the computational plausibility of cognitive theories. We especially emphasize methodological and mathematical assumptions behind applying complexity theory in cognitive science. We pay special attention to examples of applying the logical and computational complexity toolbox in different domains of cognitive science. We focus mostly on theoretical and experimental research in psycholinguistics and social cognition.
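The gap between computability and tractability emphasized here is easy to illustrate: brute-force subset sum is a perfectly good algorithm, but its search space doubles with every added element (a standard textbook example, ours rather than the authors'):

```python
# Subset sum by exhaustive search: computable, but the number of candidate
# subsets is 2**len(numbers), so the procedure is practically unusable for large n.
from itertools import combinations

def subset_sum(numbers, target):
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 9, 8, 4, 5, 7], 15))   # -> (8, 7)
```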
Mathematical models are a well established tool in most natural sciences. Although models have been neglected by the philosophy of science for a long time, their epistemological status as a link between theory and reality is now fairly well understood. However, regarding the epistemological status of mathematical models in the social sciences, there still exists considerable unclarity. In my paper I argue that this results from specific challenges that mathematical models and especially computer simulations face in the social sciences. The most important difference between the social sciences and the natural sciences with respect to modeling is that powerful and well confirmed background theories (like Newtonian mechanics, quantum mechanics or the theory of relativity in physics) do not exist in the social sciences. Therefore, an epistemology of models that is formed on the role model of physics may not be appropriate for the social sciences. I discuss the challenges that modeling faces in the social sciences and point out their epistemological consequences. The most important consequences are that greater emphasis must be placed on empirical validation than on theoretical validation and that the relevance of purely theoretical simulations is strongly limited.
Does consciousness collapse the quantum wave function? This idea was taken seriously by John von Neumann and Eugene Wigner but is now widely dismissed. We develop the idea by combining a mathematical theory of consciousness (integrated information theory) with an account of quantum collapse dynamics (continuous spontaneous localization). Simple versions of the theory are falsified by the quantum Zeno effect, but more complex versions remain compatible with empirical evidence. In principle, versions of the theory can be tested by experiments with quantum computers. The upshot is not that consciousness-collapse interpretations are clearly correct, but that there is a research program here worth exploring.
In this paper, the issues of computability and constructivity in the mathematics of physics are discussed. The sorts of questions to be addressed are those which might be expressed, roughly, as: Are the mathematical foundations of our current theories unavoidably non-constructive? Or: Are the laws of physics computable?
The notion of implicit commitment has played a prominent role in recent works in logic and philosophy of mathematics. Although implicit commitment is often associated with highly technical studies, it remains so far an elusive notion. In particular, it is often claimed that the acceptance of a mathematical theory implicitly commits one to the acceptance of a Uniform Reflection Principle for it. However, philosophers agree that a satisfactory analysis of the transition from a theory to its reflection principle is still lacking. We provide an axiomatization of the minimal commitments implicit in the acceptance of a mathematical theory. The theory entails that the Uniform Reflection Principle is part of one's implicit commitments, and sheds light on the reason why this is so. We argue that the theory has interesting epistemological consequences in that it explains how justified belief in the axioms of a theory can be preserved to the corresponding reflection principle. The theory also improves on recent proposals for the analysis of implicit commitment based on truth or epistemic notions.
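For reference, the Uniform Reflection Principle for a theory T with provability predicate Pr_T is standardly rendered as the schema below, one instance per formula φ(x) (a textbook formulation, not a quotation from the paper):

```latex
\documentclass{article}
\usepackage{amssymb}
\begin{document}
% Uniform reflection for a theory T: one instance for each formula phi(x).
\[
  \mathrm{RFN}(T):\qquad
  \forall x\,\bigl(\mathrm{Pr}_{T}\bigl(\ulcorner \varphi(\dot{x}) \urcorner\bigr)
    \rightarrow \varphi(x)\bigr)
\]
\end{document}
```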
I argue that Stich's Syntactic Theory of Mind (STM) and a naturalistic narrow content functionalism run on a Language of Thought story have exactly the same structure. I elaborate on the argument that narrow content functionalism is either irremediably holistic in a rather destructive sense, or else doesn't have the resources for individuating contents interpersonally. So I show that, contrary to his own advertisement, Stich's STM has exactly the same problems (like holism, vagueness, observer-relativity, etc.) that he claims plague content-based psychologies. So STM can't be any better than the Representational Theory of Mind (RTM) in its prospects for forming the foundations of a scientifically respectable psychology, whether or not RTM has the problems that Stich claims it does.
In this paper, a number of traditional models related to percolation theory have been considered by means of a new computational methodology that does not use Cantor’s ideas and describes infinite and infinitesimal numbers in accordance with the principle ‘The part is less than the whole’. It gives the possibility to work with finite, infinite, and infinitesimal quantities numerically by using a new kind of computer, the Infinity Computer, introduced recently in [18]. The new approach does not contradict Cantor. In contrast, it can be viewed as an evolution of his deep ideas regarding the existence of different infinite numbers in a more applied way. Site percolation and gradient percolation have been studied by applying the new computational tools. It has been established that in an infinite system the phase transition point is not really a point, as it is in the traditional approach. In light of the new arithmetic it appears as a critical interval, rather than a critical point. Depending on the “microscope” we use, this interval could be regarded as a finite, infinite, or infinitesimally short interval. Using the new approach we observe that in the vicinity of the percolation threshold there are many different infinite clusters instead of the one infinite cluster that appears in the traditional consideration.
In a series of articles we try to show the need for a novel theory for the Theory of Computation based on considering time as a fuzzy concept. Time is a central concept in Physics. First, we were forced to consider some changes and modifications in the theories of Physics. In the second step, and throughout this article, we show the positive impact of this modification on the Theory of Computation and Complexity Theory, rebuilding it in a more successful and fruitful approach. We call this novel theory TC*.
This paper is a contribution to graded model theory, in the context of mathematical fuzzy logic. We study characterizations of classes of graded structures in terms of the syntactic form of their first-order axiomatization. We focus on classes given by universal and universal-existential sentences. In particular, we prove two amalgamation results using the technique of diagrams in the setting of structures valued on a finite MTL-algebra, from which analogues of the Łoś–Tarski and the Chang–Łoś–Suszko preservation theorems follow.
Much problem solving and learning research in math and science has focused on formal representations. Recently researchers have documented the use of unschooled strategies for solving daily problems -- informal strategies which can be as effective, and sometimes as sophisticated, as school-taught formalisms. Our research focuses on how formal and informal strategies interact in the process of doing and learning mathematics. We found that combining informal and formal strategies is more effective than single strategies. We provide a theoretical account of this multiple strategy effect and have begun to formulate this theory in an ACT-R computer model. We show why students may reach common impasses in the use of written algebra, and how subsequent or concurrent use of informal strategies leads to better problem-solving performance. Formal strategies facilitate computation because of their abstract and syntactic nature; however, abstraction can lead to nonsensical interpretations and conceptual errors. Reapplying the formal strategy will not repair such errors; switching to an informal one may. We explain the multiple strategy effect as a complementary relationship between the computational efficiency of formal strategies and the sense-making function of informal strategies.
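A toy rendering of the claimed complementarity, in Python (our own caricature, not the authors' ACT-R model): the formal routine computes quickly but blindly, and the informal substitute-and-check move is what catches a nonsensical answer.

```python
# Solve a*x + b = c two ways: by symbolic manipulation (with an optional
# simulated sign slip) and by the informal move of plugging the answer back in.

def solve_formally(a, b, c, slip=False):
    """Formal strategy; `slip` simulates a common sign error."""
    return (c + b) / a if slip else (c - b) / a

def informal_check(a, b, c, x):
    """Sense-making move: substitute the candidate into the original problem."""
    return a * x + b == c

a, b, c = 3, 4, 19
x = solve_formally(a, b, c, slip=True)   # slip yields x = 23/3
print(x, informal_check(a, b, c, x))     # the informal check rejects it
x = solve_formally(a, b, c)              # corrected formal answer: 5.0
print(x, informal_check(a, b, c, x))     # and the check accepts it
```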
In the last couple of years, a few seemingly independent debates on scientific explanation have emerged, with several key questions that take different forms in different areas. For example, the questions of what makes an explanation distinctly mathematical and whether there are any non-causal explanations in the sciences (i.e., explanations that don’t cite causes in the explanans) sometimes take the form of the question of what makes mathematical models explanatory, especially whether highly idealized models in science can be explanatory and in virtue of what they are explanatory. These questions raise further issues about counterfactuals, modality, and explanatory asymmetries: i.e., do mathematical and non-causal explanations support counterfactuals, and how ought we to understand explanatory asymmetries in non-causal explanations? Even though these are very common issues in the philosophy of physics and mathematics, they can be found in different guises in the philosophy of biology, where there is the statistical interpretation of the Modern Synthesis theory of evolution, according to which the post-Darwinian theory of natural selection explains evolutionary change by citing statistical properties of populations and not the causes of changes. These questions also arise in the philosophy of ecology or neuroscience in regard to the nature of topological explanations. The question here is whether the mathematical (or, more precisely, topological) properties in network models in biology, ecology, neuroscience, and computer science can be explanatory of physical phenomena, or whether they are just different ways to represent causal structures. The aim of this special issue is to unify all these debates around several overlapping questions. These questions are: are there genuinely or distinctively mathematical and non-causal explanations?; are all distinctively mathematical explanations also non-causal?; in virtue of what are they explanatory?; does the instantiation, implementation, or, in general, applicability of mathematical structures to a variety of phenomena and systems play any explanatory role? This special issue provides a platform for unifying the debates around several key issues and thus opens up avenues for a better understanding of mathematical and non-causal explanations in general; moreover, it will enable an even better understanding of the key issues within each of the debates.
This paper concerns “human symbolic output,” or strings of characters produced by humans in our various symbolic systems; e.g., sentences in a natural language, mathematical propositions, and so on. One can form a set that consists of all of the strings of characters that have been produced by at least one human up to any given moment in human history. We argue that at any particular moment in human history, even at moments in the distant future, this set is finite. But then, given fundamental results in recursion theory, the set will also be recursive, recursively enumerable, axiomatizable, and could be the output of a Turing machine. We then argue that it is impossible to produce a string of symbols that humans could possibly produce but no Turing machine could. Moreover, we show that any given string of symbols that we could produce could also be the output of a Turing machine. Our arguments have implications for Hilbert’s sixth problem and the possibility of axiomatizing particular sciences, they undermine at least two distinct arguments against the possibility of Artificial Intelligence, and they entail that expert systems that are the equals of human experts are possible, and so at least one of the goals of Artificial Intelligence can be realized, at least in principle.
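The recursion-theoretic step can be made vivid with a few lines of Python: membership in any finite set of strings is decided by table lookup, so such a set is trivially recursive and could be the output of some Turing machine (the corpus below is a stand-in for "all strings produced so far", not real data):

```python
# Any finite set of strings admits a total, terminating decision procedure
# (lookup) and a terminating enumeration, hence it is recursive and r.e.

CORPUS_SO_FAR = frozenset({
    "E = mc^2",
    "To be, or not to be",
    "SELECT * FROM users;",
})

def produced_by_humans_so_far(s: str) -> bool:
    return s in CORPUS_SO_FAR          # decides membership in finitely many steps

def enumerate_corpus():
    yield from sorted(CORPUS_SO_FAR)   # the same set, recursively enumerated

print(produced_by_humans_so_far("E = mc^2"))  # True
```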
A recently developed computational methodology for executing numerical calculations with infinities and infinitesimals is described in this paper. The approach developed has a pronounced applied character and is based on the principle “The part is less than the whole” introduced by the ancient Greeks. This principle is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The point of view on infinities and infinitesimals (and, in general, on Mathematics) presented in this paper makes strong use of physical ideas emphasizing the interrelations that hold between a mathematical object under observation and the tools used for this observation. It is shown how a new numeral system allowing one to express different infinite and infinitesimal quantities in a unique framework can be used for theoretical and computational purposes. Numerous examples dealing with infinite sets, divergent series, limits, and probability theory are given.
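As a rough illustration of how the principle is made numerical in this methodology (our paraphrase of standard examples from this literature, writing a circled 1, the "grossone" unit, for the number of elements of the natural numbers):

```latex
\documentclass{article}
\usepackage{amssymb}
% \gone stands for the "grossone": the number of elements of N (our notation).
\newcommand{\gone}{\mbox{\textcircled{\scriptsize 1}}}
\begin{document}
\[
  |\mathbb{N}| = \gone, \qquad
  |\{2,4,6,\dots\}| = \frac{\gone}{2}, \qquad
  |\mathbb{N}\setminus\{1\}| = \gone - 1 < \gone .
\]
\end{document}
```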
While Thomas Kuhn's theory of scientific revolutions does not specifically deal with validation, the validation of simulations can be related in various ways to Kuhn's theory: 1) Computer simulations are sometimes depicted as located between experiments and theoretical reasoning, thus potentially blurring the line between theory and empirical research. Does this require a new kind of research logic that is different from the classical paradigm which clearly distinguishes between theory and empirical observation? I argue that this is not the case. 2) Another typical feature of computer simulations is their being “motley” (Winsberg 2003) with respect to the various premises that enter into simulations. A possible consequence is that in case of failure it can become difficult to tell which of the premises is to blame. Could this issue be understood as fostering Kuhn's mild relativism with respect to theory choice? I argue that there is no need to worry about relativism with respect to computer simulations in particular. 3) The field of social simulations, in particular, still lacks a common understanding concerning the requirements of empirical validation of simulations. Does this mean that social simulations are still in a pre-scientific state in the sense of Kuhn? My conclusion is that, despite ongoing efforts to promote quality standards in this field, lack of proper validation is still a problem of many published simulation studies and that at least large parts of social simulations must be considered as pre-scientific.
The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents comes into focus. This chapter explores these issues, and from its results details a novel approach to meeting the given conditions in a simple architecture of information processing.
Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters'.