I have read many recent discussions of the limits of computation and the universe as computer, hoping to find some comments on the amazing work of polymath physicist and decision theorist David Wolpert, but have not found a single citation, and so I present this very brief summary. Wolpert proved some stunning impossibility or incompleteness theorems (1992 to 2008; see arxiv.org) on the limits to inference (computation) that are so general they are independent of the device doing the computation, and even independent of the laws of physics, so they apply across computers, physics, and human behavior. They make use of Cantor's diagonalization, the liar paradox and worldlines to provide what may be the ultimate theorem in Turing Machine Theory, and seemingly provide insights into impossibility, incompleteness, the limits of computation, and the universe as computer, in all possible universes and all beings or mechanisms, generating, among other things, a non-quantum mechanical uncertainty principle and a proof of monotheism. There are obvious connections to the classic work of Chaitin, Solomonoff, Kolmogorov and Wittgenstein and to the notion that no program (and thus no device) can generate a sequence (or device) with greater complexity than it possesses. One might say this body of work implies atheism, since there cannot be any entity more complex than the physical universe, and from the Wittgensteinian viewpoint ‘more complex’ is meaningless (has no conditions of satisfaction, i.e., truth-maker or test). Even a ‘God’ (i.e., a ‘device’ with limitless time/space and energy) cannot determine whether a given ‘number’ is ‘random’, nor find a certain way to show that a given ‘formula’, ‘theorem’, ‘sentence’ or ‘device’ (all these being complex language games) is part of a particular ‘system’.
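The notion that no program can generate a sequence of greater complexity than it possesses can be illustrated (though of course not proved) with a computable upper bound on Kolmogorov complexity. This is a minimal sketch in Python; zlib compression is my stand-in for a "short description", since true Kolmogorov complexity is uncomputable, and the two sample strings are my own toy choices:

```python
import hashlib
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Length of a zlib-compressed encoding of s: a computable upper
    bound on (not a value of) the uncomputable Kolmogorov complexity K(s)."""
    return len(zlib.compress(s, 9))

# A patterned string: describable by a program far shorter than itself.
patterned = b"ab" * 500

# An incompressible-looking string: 1000 bytes of chained SHA-256 output,
# deterministic but with no structure a general-purpose compressor can find.
chunks, seed = [], b"seed"
for _ in range(32):
    seed = hashlib.sha256(seed).digest()
    chunks.append(seed)
pseudo_random = b"".join(chunks)[:1000]

print(complexity_upper_bound(patterned))      # small: the pattern compresses
print(complexity_upper_bound(pseudo_random))  # near 1000: no visible structure
```

The gap between the two bounds is the point: a 1000-byte string with a short generating program has low complexity, while a structureless string of the same length does not compress at all.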
Those wishing a comprehensive, up-to-date framework for human behavior from the modern two-systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’ 2nd ed (2019) and ‘Suicidal Utopian Delusions in the 21st Century’ 4th ed (2019).
I give a detailed review of ‘The Outer Limits of Reason’ by Noson Yanofsky 403p (2013) from a unified perspective of Wittgenstein and evolutionary psychology. I indicate that the difficulty with such issues as paradox in language and math, incompleteness, undecidability, computability, the brain and the universe as computers, etc., all arise from the failure to look carefully at our use of language in the appropriate context, and hence the failure to separate issues of scientific fact from issues of how language works. I discuss Wittgenstein's views on incompleteness, paraconsistency and undecidability and the work of Wolpert on the limits to computation.

Those wishing a comprehensive, up-to-date account of Wittgenstein, Searle and their analysis of behavior from the modern two-systems view may consult my article ‘The Logical Structure of Philosophy, Psychology, Mind and Language as Revealed in Wittgenstein and Searle’ (2016). Those interested in all my writings in their most recent versions may download from this site my e-book ‘Philosophy, Human Nature and the Collapse of Civilization - Articles and Reviews 2006-2016’ 662p (2016).

All of my papers and books have now been published in revised versions both in ebooks and in printed books.

Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2017 (2017) https://www.amazon.com/dp/B071HVC7YP

The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle--Articles and Reviews 2006-2016 (2017) https://www.amazon.com/dp/B071P1RP1B

Suicidal Utopian Delusions in the 21st century: Philosophy, Human Nature and the Collapse of Civilization - Articles and Reviews 2006-2017 (2017) https://www.amazon.com/dp/B0711R5LGX
It is commonly thought that such topics as Impossibility, Incompleteness, Paraconsistency, Undecidability, Randomness, Computability, Paradox, Uncertainty and the Limits of Reason are disparate scientific, physical or mathematical issues having little or nothing in common. I suggest that they are largely standard philosophical problems (i.e., language games) which were resolved by Wittgenstein over 80 years ago.

Wittgenstein also demonstrated the fatal error in regarding mathematics or language or our behavior in general as a unitary, coherent, logical ‘system’, rather than as a motley of pieces assembled by the random processes of natural selection. “Gödel shows us an unclarity in the concept of ‘mathematics’, which is indicated by the fact that mathematics is taken to be a system,” and we can say (contra nearly everyone) that is all that Gödel and Chaitin show. Wittgenstein commented many times that ‘truth’ in math means axioms or the theorems derived from axioms, and ‘false’ means that one made a mistake in using the definitions, and this is utterly different from empirical matters where one applies a test. Wittgenstein often noted that to be acceptable as mathematics in the usual sense, a result must be usable in other proofs and must have real-world applications, but neither is the case with Gödel's Incompleteness. Since it cannot be proved in a consistent system (here Peano Arithmetic, but a much wider arena for Chaitin), it cannot be used in proofs and, unlike all the ‘rest’ of PA, it cannot be used in the real world either.
As Rodych notes, “…Wittgenstein holds that a formal calculus is only a mathematical calculus (i.e., a mathematical language-game) if it has an extra-systemic application in a system of contingent propositions (e.g., in ordinary counting and measuring or in physics) …” Another way to say this is that one needs a warrant to apply our normal use of words like ‘proof’, ‘proposition’, ‘true’, ‘incomplete’, ‘number’, and ‘mathematics’ to a result in the tangle of games created with ‘numbers’ and ‘plus’ and ‘minus’ signs etc., and with ‘Incompleteness’ this warrant is lacking. Rodych sums it up admirably: “On Wittgenstein’s account, there is no such thing as an incomplete mathematical calculus because ‘in mathematics, everything is algorithm [and syntax] and nothing is meaning [semantics]…”

I make some brief remarks which note the similarities of these ‘mathematical’ issues to economics, physics, game theory, and decision theory.

Those wishing further comments on philosophy and science from a Wittgensteinian two-systems-of-thought viewpoint may consult my other writings: Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019 3rd ed (2019), The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle 2nd ed (2019), Suicide by Democracy 4th ed (2019), The Logical Structure of Human Behavior (2019), The Logical Structure of Consciousness (2019), Understanding the Connections between Science, Philosophy, Psychology, Religion, Politics, and Economics, Suicidal Utopian Delusions in the 21st Century 5th ed (2019), Remarks on Impossibility, Incompleteness, Paraconsistency, Undecidability, Randomness, Computability, Paradox, Uncertainty and the Limits of Reason in Chaitin, Wittgenstein, Hofstadter, Wolpert, Doria, da Costa, Godel, Searle, Rodych, Berto, Floyd, Moyal-Sharrock and Yanofsky (2019), and The Logical Structure of Philosophy, Psychology, Sociology, Anthropology, Religion, Politics, Economics, Literature and History (2019).
I give a detailed review of ‘The Outer Limits of Reason’ by Noson Yanofsky from a unified perspective of Wittgenstein and evolutionary psychology. I indicate that the difficulty with such issues as paradox in language and math, incompleteness, undecidability, computability, the brain and the universe as computers, etc., all arise from the failure to look carefully at our use of language in the appropriate context, and hence the failure to separate issues of scientific fact from issues of how language works. I discuss Wittgenstein's views on incompleteness, paraconsistency and undecidability and the work of Wolpert on the limits to computation. To sum it up: The Universe According to Brooklyn---Good Science, Not So Good Philosophy.

Those wishing a comprehensive, up-to-date framework for human behavior from the modern two-systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’ 3rd ed (2019) and ‘Suicidal Utopian Delusions in the 21st Century’ 4th ed (2019).
It is commonly thought that Impossibility, Incompleteness, Paraconsistency, Undecidability, Randomness, Computability, Paradox, Uncertainty and the Limits of Reason are disparate scientific, physical or mathematical issues having little or nothing in common. I suggest that they are largely standard philosophical problems (i.e., language games) which were mostly resolved by Wittgenstein over 80 years ago.

“What we are ‘tempted to say’ in such a case is, of course, not philosophy, but it is its raw material. Thus, for example, what a mathematician is inclined to say about the objectivity and reality of mathematical facts, is not a philosophy of mathematics, but something for philosophical treatment.” Wittgenstein, PI 234

"Philosophers constantly see the method of science before their eyes and are irresistibly tempted to ask and answer questions in the way science does. This tendency is the real source of metaphysics and leads the philosopher into complete darkness." Wittgenstein

I provide a brief summary of some of the major findings of two of the most eminent students of behavior of modern times, Ludwig Wittgenstein and John Searle, on the logical structure of intentionality (mind, language, behavior), taking as my starting point Wittgenstein's fundamental discovery: that all truly ‘philosophical’ problems are the same (confusions about how to use language in a particular context), and so all solutions are the same: looking at how language can be used in the context at issue so that its truth conditions (Conditions of Satisfaction, or COS) are clear. The basic problem is that one can say anything, but one cannot mean (state clear COS for) any arbitrary utterance, and meaning is only possible in a very specific context.
I dissect some writings of a few of the major commentators on these issues from a Wittgensteinian viewpoint in the framework of the modern perspective of the two systems of thought (popularized as ‘thinking fast, thinking slow’), employing a new table of intentionality and new dual-systems nomenclature. I show that this is a powerful heuristic for describing the true nature of these putative scientific, physical or mathematical issues, which are really best approached as standard philosophical problems of how language is to be used (language games in Wittgenstein’s terminology).

It is my contention that the table of intentionality (rationality, mind, thought, language, personality etc.) that features prominently here describes more or less accurately, or at least serves as a heuristic for, how we think and behave, and so it encompasses not merely philosophy and psychology, but everything else (history, literature, mathematics, politics etc.). Note especially that intentionality and rationality as I (along with Searle, Wittgenstein and others) view them include both conscious, deliberative, linguistic System 2 and unconscious, automated, prelinguistic System 1 actions or reflexes.
Classical interpretations of Gödel's formal reasoning, and of his conclusions, imply that mathematical languages are essentially incomplete, in the sense that the truth of some arithmetical propositions of any formal mathematical language, under any interpretation, is both non-algorithmic and essentially unverifiable. However, a language of general scientific discourse, which intends to mathematically express, and unambiguously communicate, intuitive concepts that correspond to scientific investigations, cannot allow its mathematical propositions to be interpreted ambiguously. Such a language must, therefore, define mathematical truth verifiably. We consider a constructive interpretation of classical, Tarskian truth, and of Gödel's reasoning, under which any formal system of Peano Arithmetic (classically accepted as the foundation of all our mathematical languages) is verifiably complete in the above sense. We show how some paradoxical concepts of quantum mechanics can then be expressed, and interpreted, naturally under a constructive definition of mathematical truth.
Throughout this paper, we try to show how and why our mathematical framework seems inappropriate for solving problems in the Theory of Computation. More exactly, the concept of turning back in time in paradoxes causes inconsistency in the modeling of the concept of time in some semantic situations. As we see in the first chapter, by introducing a version of the “Unexpected Hanging Paradox”, we first attempt to open a new explanation for some paradoxes. In the second step, by applying this paradox, it is demonstrated that any formalized system for the Theory of Computation based on classical logic and the Turing model of computation leads us to a contradiction. We conclude that our mathematical framework is inappropriate for the Theory of Computation. Furthermore, the result provides a reason why many problems in Complexity Theory resist solution. (This work was completed on 2017-05-02, posted on viXra on 2017-05-14, and presented at UNILOG 2018, Vichy.)
This is the first of a two-volume work combining two fundamental components of contemporary computing into classical deductive computing, a powerful form of computation, highly adequate for programming and automated theorem proving, which, in turn, have fundamental applications in areas of high complexity and/or high security such as mathematical proof, software specification and verification, and expert systems. Deductive computation is concerned with truth-preservation: this is the essence of the satisfiability problem, or SAT, the central computational problem in computability and complexity theory. The Turing machine provides the classical version of this theory (classical computing) with its standard model, which is physically concretized, and thus spatio-temporally limited and restricted, in the von Neumann, or digital, computer. Although a number of new technological applications require classical deductive computation with non-classical logics, many key technologies still do well (or indeed work exclusively) with classical logic. In this first volume, we elaborate on classical deductive computing with classical logic. The objective of the main text is to provide the reader with a thorough elaboration on both classical computing and classical deduction with the classical first-order predicate calculus, with a view to computational implementations. As a complement to the mathematically based exposition of the topics, we offer the reader a very large selection of exercises. This selection aims not only at practice with the discussed material, but also at creative approaches to problems, for both discussed and novel contents, as well as at research into further relevant topics.
This paper argues for a view of free will that I will call the conceptual impossibility of the truth of free will error theory: the conceptual impossibility thesis. I will argue that, given the concept of free will we in fact deploy, it is impossible for our free will judgements (judgements regarding whether some action is free or not) to be systematically false. Since we do judge many of our actions to be free, it follows from the conceptual impossibility thesis that many of our actions are in fact free. Hence it follows that free will error theory (the view that no judgement of the form ‘action A was performed freely’ is true) is false. I will show that taking the conceptual impossibility thesis seriously helps make good sense of some seemingly inconsistent results in recent experimental philosophy work on determinism and our concept of free will. Further, I will present some reasons why we should expect to find similar results for every other factor we might have thought was important for free will.
I develop a theory of counterfactuals about relative computability, i.e. counterfactuals such as ‘If the validity problem were algorithmically decidable, then the halting problem would also be algorithmically decidable’, which is true, and ‘If the validity problem were algorithmically decidable, then arithmetical truth would also be algorithmically decidable’, which is false. These counterfactuals are counterpossibles, i.e. they have metaphysically impossible antecedents. They thus pose a challenge to the orthodoxy about counterfactuals, which would treat them as uniformly true. What’s more, I argue that these counterpossibles don’t just appear in the periphery of relative computability theory; instead they play an ineliminable role in the development of the theory. Finally, I present and discuss a model theory for these counterfactuals that is a straightforward extension of the familiar comparative similarity models.
This paper concerns “human symbolic output,” or strings of characters produced by humans in our various symbolic systems; e.g., sentences in a natural language, mathematical propositions, and so on. One can form a set that consists of all of the strings of characters that have been produced by at least one human up to any given moment in human history. We argue that at any particular moment in human history, even at moments in the distant future, this set is finite. But then, given fundamental results in recursion theory, the set will also be recursive, recursively enumerable, axiomatizable, and could be the output of a Turing machine. We then argue that it is impossible to produce a string of symbols that humans could possibly produce but no Turing machine could. Moreover, we show that any given string of symbols that we could produce could also be the output of a Turing machine. Our arguments have implications for Hilbert’s sixth problem and the possibility of axiomatizing particular sciences; they undermine at least two distinct arguments against the possibility of Artificial Intelligence; and they entail that expert systems that are the equals of human experts are possible, and so at least one of the goals of Artificial Intelligence can be realized, at least in principle.
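The recursion-theoretic step in the argument above (any finite set of strings is recursive) is the textbook observation that membership in a finite set can be decided by table lookup, a procedure that is total and always halts. A minimal sketch in Python, where the three-string corpus is my own toy stand-in for "all strings produced by humans up to some moment":

```python
# Any finite set of strings is decidable: membership reduces to table lookup.
# CORPUS is a made-up, three-element stand-in for the (finite) set of all
# strings humans have produced up to a given moment.
CORPUS = frozenset({"2+2=4", "E=mc^2", "to be or not to be"})

def is_human_output(s: str) -> bool:
    """A total, always-halting decision procedure for the finite set."""
    return s in CORPUS

print(is_human_output("E=mc^2"))    # True
print(is_human_output("qwx!!zzz"))  # False
```

The lookup table is the whole algorithm: no matter how large the finite corpus grows, the same construction yields a decider, which is why finiteness alone secures recursiveness.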
The paper investigates how the mathematical languages used to describe and to observe automatic computations influence the accuracy of the obtained results. In particular, we focus our attention on single- and multi-tape Turing machines, which are described and observed through the lens of a new mathematical language strongly based on three methodological ideas borrowed from physics and applied to mathematics, namely: the distinction between the object of an observation (here, a mathematical object) and the instrument used for this observation; the interrelations holding between the object and the tool used for the observation; and the accuracy of the observation as determined by the tool. Results of observations executed in the traditional and new languages are compared and discussed.
Although Kurt Gödel does not figure prominently in the history of computability theory, he exerted a significant influence on some of the founders of the field, both through his published work and through personal interaction. In particular, Gödel's 1931 paper on incompleteness and the methods developed therein were important for the early development of recursive function theory and the lambda calculus at the hands of Church, Kleene, and Rosser. Church and his students studied Gödel 1931, and Gödel taught a seminar at Princeton in 1934. Seen in historical context, Gödel was an important catalyst for the emergence of computability theory in the mid-1930s.
According to Field’s influential incompleteness objection, Tarski’s semantic theory of truth is unsatisfactory since the definition that forms its basis is incomplete in two distinct senses: (1) it is physicalistically inadequate, and for this reason, (2) it is conceptually deficient. In this paper, I defend the semantic theory of truth against the incompleteness objection by conceding (1) but rejecting (2). After arguing that Davidson and McDowell’s reply to the incompleteness objection fails to pass muster, I argue that, within the constraints of a non-reductive physicalism and a holism concerning the concepts of truth, reference and meaning, conceding Field’s physicalistic inadequacy conclusion while rejecting his conceptual deficiency conclusion is a promising reply to the incompleteness objection.
Various moral conundrums plague population ethics: the Non-Identity Problem, the Procreation Asymmetry, the Repugnant Conclusion, and more. I argue that the aforementioned moral conundrums have a structure neatly accounted for, and solved by, some ideas in computability theory. I introduce a mathematical model based on computability theory and show how previous arguments pertaining to these conundrums fit into the model. This paper proceeds as follows. First, I give a very brief survey of the history of computability theory in moral philosophy. Second, I follow various papers, and show how their arguments fit into, or don't fit into, our model. Third, I discuss the implications of our model for the question why the human race should or should not continue to exist. Finally, I show that our model ineluctably leads us to a Confucian moral principle.
This paper critically evaluates what it identifies as ‘the institutional theory of freedom’ developed within recent neo-Hegelian philosophy. While acknowledging the gains made against the Kantian theory of autonomy as detachment, it is argued that the institutional theory ultimately undermines the very meaning of practical agency. By tying agency to institutionally sustained recognition, it effectively excludes the exercise of practical reason geared toward emancipation from a settled normative order. Adorno's notion of autonomy as resistance is enlisted to develop an account of practical reason that is neither institutionally constrained nor without appropriate consideration of the historical location of the practical agent.
The four sections of this article are reactions to a few interconnected problems that Mario Bunge addresses in his The Sociology-Philosophy Connection, which can be seen as a continuation and summary of his two recent major volumes Finding Philosophy in Social Science and Social Science under Debate: A Philosophical Perspective. Bunge’s contribution to the philosophy of the social sciences has been sufficiently acclaimed. (See in particular two special issues of this journal dedicated to his social philosophy: "Systems and Mechanisms. A Symposium on Mario Bunge’s Philosophy of Social Science," Philosophy of the Social Sciences 34, nos. 2 and 3.) The author discusses therefore only those solutions in Bunge’s book that seem most problematic, namely, Bunge’s proposal to expel charlatans from universities; his treatment of social laws; his notions of mechanisms, "mechanismic explanation," and systemism; and his reading of Popper’s social philosophy. Key Words: theory • laws • mechanism • explanation • Popper
The notion of computability is developed through the study of the behavior of a set of languages interpreted over the natural numbers which contain their own fully defined satisfaction predicate and whose only other vocabulary is limited to "0", individual variables, the successor function, the identity relation and operators for disjunction, conjunction, and existential quantification.
Textbook on Gödel’s incompleteness theorems and computability theory, based on the Open Logic Project. Covers recursive function theory, arithmetization of syntax, the first and second incompleteness theorem, models of arithmetic, second-order logic, and the lambda calculus.
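The arithmetization of syntax covered in the textbook can be sketched with the classical prime-power coding: a string of symbols becomes a single number, recoverable by prime factorization. The six-symbol table below is an arbitrary toy choice of mine, not the book's:

```python
# Classical Goedel numbering: the i-th symbol with code c contributes a
# factor p_i**c, where p_i is the i-th prime; decoding is factorization.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5, "+": 6}
DECODE = {v: k for k, v in SYMBOLS.items()}

def primes(n):
    """First n primes by trial division (fine at this toy scale)."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def goedel_number(expr):
    g = 1
    for p, ch in zip(primes(len(expr)), expr):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g):
    out = []
    for p in primes(64):  # 64 symbols is plenty for these examples
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        if e == 0:
            break
        out.append(DECODE[e])
    return "".join(out)

expr = "S(0)=S(0)"
print(goedel_number(expr))          # a large composite number
print(decode(goedel_number(expr)))  # "S(0)=S(0)"
```

Because the coding is injective and mechanically invertible, statements about formulas become statements about numbers, which is the step that lets arithmetic talk about its own syntax.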
The problem of emergence in physical theories makes it necessary to build a general theory of the relationships between the observed system and the observing system. It can be shown that there exists a correspondence between classical systems and computational dynamics according to the Shannon-Turing model. A classical system is an informationally closed system with respect to the observer; this characterizes the emergent processes in classical physics as phenomenological emergence. In quantum systems, the analysis based on computation theory fails. It is here shown that a quantum system is an informationally open system with respect to the observer, able to exhibit processes of observational, radical emergence. Finally, we consider the role of computation in describing the physical world.
Ignited by Einstein and Bohr a century ago, the philosophical struggle about Reality is yet unfinished, with no signs of a swift resolution. Despite vast technological progress fueled by the iconic EPR paper (EPR), the intricate link between ontic and epistemic aspects of Quantum Theory (QT) has greatly hindered our grip on Reality and further progress in physical theory. Fallacies concealed by tortuous logical negations made EPR comprehension much harder than it could have been had Einstein written it himself in German. It is plagued with preconceptions about what a physical property is, the ‘Uncertainty Principle’, and the Principle of Locality. Numerous interpretations of QT vis-à-vis Reality exist and are keenly disputed. This is the first of a series of articles arguing for a physical interpretation called ‘The Ontic Probability Interpretation’ (TOPI). A gradual explanation of TOPI is given, intertwined with a meticulous logico-philosophical scrutiny of EPR. Part I focuses on the meaning of Einstein's ‘Incompleteness’ claim. A conceptual confusion, a preconception about Reality, and a flawed dichotomy are shown to be severe obstacles to the success of the EPR argument. Part II analyzes Einstein's ‘Incompleteness/Nonlocality Dilemma’. Future articles will further explain TOPI, demonstrating its soundness and potential for nurturing theoretical progress.
According to the computational theory of mind (CTM), to think is to compute. But what is meant by the word 'compute'? The generally given answer is this: every case of computing is a case of manipulating symbols, but not vice versa; a manipulation of symbols must be driven exclusively by the formal properties of those symbols if it is to qualify as a computation. In this paper, I will present the following argument. Words like 'form' and 'formal' are ambiguous, as they can refer to form in either the syntactic or the morphological sense. CTM fails on each disambiguation, and the arguments for CTM immediately cease to be compelling once we register that ambiguity. The terms 'mechanical' and 'automatic' are comparably ambiguous. Once these ambiguities are exposed, it turns out that there is no possibility of mechanizing thought, even if we confine ourselves to domains where all problems can be settled through decision-procedures. The impossibility of mechanizing thought thus has nothing to do with recherché mathematical theorems, such as those proven by Gödel and Rosser. A related point is that CTM involves, and is guilty of reinforcing, a misunderstanding of the concept of an algorithm.
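The phrase "domains where all problems can be settled through decision-procedures" fits propositional logic, whose validity problem is decided by exhaustive truth tables. A minimal sketch in Python; representing formulas as boolean functions is my own encoding choice, not the author's:

```python
from itertools import product

def is_valid(formula, n_vars: int) -> bool:
    """Truth-table decision procedure: a propositional formula over
    n_vars variables is valid iff it is true under every assignment."""
    return all(formula(*assignment)
               for assignment in product([False, True], repeat=n_vars))

# Excluded middle, p or not-p: valid.
print(is_valid(lambda p: p or not p, 1))  # True

# A bare variable p: satisfiable but not valid.
print(is_valid(lambda p: p, 1))  # False

# Modus-ponens shape ((p -> q) and p) -> q, with -> rendered as (not p or q): valid.
print(is_valid(lambda p, q: not ((not p or q) and p) or q, 2))  # True
```

The procedure is mechanical and always terminates (2^n assignments for n variables), which is exactly what makes propositional validity a decidable domain in the sense the abstract invokes.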
"There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact" (Mark Twain, Life on the Mississippi). This is a lovely book full of fascinating info on the evolution of physics and cosmology. Its main theme is how the idea of higher dimensional geometry created by Riemann, recently extended to 24 dimensions by string theory, has revolutionized our understanding of the universe. Everyone knows that Riemann created multidimensional geometry in 1854, but it is amazing to learn that he also was a physicist who believed that it held the key to explaining the fundamental laws of physics. Maxwell's equations did not exist then, and Riemann's untimely death at age 39 prevented his pursuit of these ideas. Both he and his British translator Clifford believed that magnetic and electric fields resulted from the bending of space in the 4th dimension, more than 50 years before Einstein! The fourth dimension became a standard subject in the popular media for the next 50 years, with several stories by HG Wells using it, and even Lenin wrote about it. The American mathematician Hinton had widely publicized his idea that light is a vibration in the 4th spatial dimension. Amazingly, physicists and most mathematicians forgot about it, and when Einstein was looking for the math needed to encompass general relativity 60 years later, he had never heard of Riemannian geometry. He spent 3 years trying to find the equations for general relativity, and only after a math friend told him about Riemann was he able to complete his work. Riemann's equations, with four dimensional metric tensors describing every point in space, were incorporated almost unchanged into relativity. And on and on it goes.
Since this review I have written a great deal on the language games of math and science, uncertainty, incompleteness, the limits of computation etc., so those interested should find them useful, since this volume, like most science, frequently wanders across the line into philosophy (scientism). Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’ 3rd ed (2019), ‘The Logical Structure of Human Behavior’ (2019), and ‘Suicidal Utopian Delusions in the 21st Century’ 4th ed (2019).
In ‘Godel’s Way’ three eminent scientists discuss issues such as undecidability, incompleteness, randomness, computability and paraconsistency. I approach these issues from the Wittgensteinian viewpoint that there are two basic issues which have completely different solutions. There are the scientific or empirical issues, which are facts about the world that need to be investigated observationally, and philosophical issues as to how language can be used intelligibly (which include certain questions in mathematics and logic), which need to be decided by looking at how we actually use words in particular contexts. When we get clear about which language game we are playing, these topics are seen to be ordinary scientific and mathematical questions like any others. Wittgenstein’s insights have seldom been equaled and never surpassed and are as pertinent today as they were 80 years ago when he dictated the Blue and Brown Books. In spite of its failings—really a series of notes rather than a finished book—this is a unique source of the work of these three famous scholars who have been working at the bleeding edges of physics, math and philosophy for over half a century. Da Costa and Doria are cited by Wolpert (see below or my articles on Wolpert and my review of Yanofsky’s ‘The Outer Limits of Reason’) since they wrote on universal computation, and among his many accomplishments, Da Costa is a pioneer in paraconsistency. Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’ 2nd ed (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’ 3rd ed (2019), ‘The Logical Structure of Human Behavior’ (2019), and ‘Suicidal Utopian Delusions in the 21st Century’ 4th ed (2019).
After pinpointing a conceptual confusion (TCC), a Reality preconception (TRP1), and a fallacious dichotomy (TFD), the famous EPR/EPRB argument for correlated ‘particles’ is studied in the light of the Ontic Probability Interpretation (TOPI) of Quantum Theory (QT). Another Reality preconception (TRP2) is identified, showing that EPR used and ignored QT predictions in a single paralogism. Employing TFD and TRP2, EPR unveiled a contradiction concealed in its premises. By removing nonlocality from QT’s Ontology by fiat, EPR preordained its incompleteness. The Petitio Principii fallacy was at work from the outset. Einstein surmised the solution to his incompleteness/nonlocality dilemma in 1949, but never abandoned his philosophical stance. It is concluded that there are no definitions of Reality: we have to accept that Reality may not conform to our prejudices and, if an otherwise successful theory predicts what we do not believe in, no gedankenexperiment will help because our biases may slither through. Only actual experiments could assist in solving Einstein’s dilemma, as proven in the last 50 years. Notwithstanding, EPR is one of the most influential papers in history and has sparked immense conceptual and technological progress. Future articles will further explain TOPI, demonstrating its soundness and potential for nurturing theoretical advance.
Philosophical questions about minds and computation need to focus squarely on the mathematical theory of Turing machines (TM's). Surrogate TM's such as computers or formal systems lack abilities that make Turing machines promising candidates for possessors of minds. Computers are only universal Turing machines (UTM's)—a conspicuous but unrepresentative subclass of TM. Formal systems are only static TM's, which do not receive inputs from external sources. The theory of TM computation clearly exposes the failings of two prominent critiques, Searle's Chinese room (1980) and arguments from Gödel's Incompleteness theorems (e.g., Lucas, 1961; Penrose, 1989), both of which fall short of addressing the complete TM model. Both UTM-computers and formal systems provide an unsound basis for debate. In particular, their special natures easily foster the misconception that computation entails intrinsically meaningless symbol manipulation. This common view is incorrect with respect to full-fledged TM's, which can process inputs non-formally, i.e., in a subjective and dynamically evolving fashion. To avoid a distorted understanding of the theory of computation, philosophical judgments and discussions should be grounded firmly upon the complete Turing machine model, the proper model for real computers.
According to a theorem recently proved in the theory of logical aggregation, any nonconstant social judgment function that satisfies independence of irrelevant alternatives (IIA) is dictatorial. We show that the strong and not very plausible IIA condition can be replaced with a minimal independence assumption plus a Pareto-like condition. This new version of the impossibility theorem likens it to Arrow’s and arguably enhances its paradoxical value.
Review of Dowek, Gilles, Computation, Proof, Machine, Cambridge University Press, Cambridge, 2015. Translation of Les Métamorphoses du calcul, Le Pommier, Paris, 2007. Translation from the French by Pierre Guillot and Marion Roman.
The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of the strength of the theory. I exhibit certain strong counterexamples and establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental.
This paper provides an introductory review of the theory of judgment aggregation. It introduces the paradoxes of majority voting that originally motivated the field, explains several key results on the impossibility of propositionwise judgment aggregation, presents a pedagogical proof of one of those results, discusses escape routes from the impossibility and relates judgment aggregation to some other salient aggregation problems, such as preference aggregation, abstract aggregation and probability aggregation. The present illustrative rather than exhaustive review is intended to give readers new to the field of judgment aggregation a sense of this rapidly growing research area.
'Computationalism' is a relatively vague term used to describe attempts to apply Turing's model of computation to phenomena outside its original purview: in modelling the human mind, in physics, mathematics, etc. Early versions of computationalism faced strong objections from many (and varied) quarters, from philosophers to practitioners of the aforementioned disciplines. Here we will not address the fundamental question of whether computational models are appropriate for describing some or all of the wide range of processes that they have been applied to, but will focus instead on whether 'renovated' versions of the 'new computationalism' shed any new light on or resolve previous tensions between proponents and skeptics. We find this, however, not to be the case, because the 'new computationalism' falls short by using limited versions of 'traditional computation', or proposing computational models that easily fall within the scope of Turing's original model, or else proffering versions of hypercomputation with its many pitfalls.
This chapter briefly reviews the present state of judgment aggregation theory and tentatively suggests a future direction for that theory. In the review, we start by emphasizing the difference between the doctrinal paradox and the discursive dilemma, two idealized examples which classically serve to motivate the theory, and then proceed to reconstruct it as a brand of logical theory, unlike in some other interpretations, using a single impossibility theorem as a key to its technical development. In the prospective part, having mentioned existing applications to social choice theory and computer science, which we do not discuss here, we consider a potential application to law and economics. This would be based on a deeper exploration of the doctrinal paradox and its relevance to the functioning of collegiate courts. On this topic, legal theorists have provided empirical observations and theoretical hints that judgment aggregation theorists would be in a position to clarify and further elaborate. As a general message, the chapter means to suggest that the future of judgment aggregation theory lies with its applications rather than its internal theoretical development.
This is a review of The Turing Guide (2017), written by Jack Copeland, Jonathan Bowen, Mark Sprevak, Robin Wilson, and others. The review includes a new sociological approach to the problem of computability in physics.
F.A. Hayek essentially quit economic theory and gave up the phenomena of industrial fluctuations as an explicit object of theoretical investigation following the publication of his last work in technical economics, 1941’s The Pure Theory of Capital. Nonetheless, several of Hayek’s more methodologically-oriented writings bear important implications for economic phenomena, especially those of industrial fluctuations. Decisions (usually, for Hayek, of a political nature) taken on the basis of a “pretence” of knowledge impede the operation of the price system’s belief-coordinating function and thereby contribute to episodes of economic disequilibrium. Moreover, this later account – which I call Hayek’s epistemic theory of industrial fluctuations – implies certain aspects of his earlier theory. The two accounts are connected in virtue of the role that ignorance and the limits of human knowledge play in each. Indeed, it turns out that – substantively, if not methodologically – Hayek’s early theory of the cycle is a special case of the more general epistemic account.
An analysis is proposed here of symbolic and sub-symbolic models for studying cognitive processes, centered on the notions of emergence and logical openness. The theory of logical openness connects the physics of system/environment relationships to the system's informational structure. In this theory, cognitive models can be ordered according to a hierarchy of complexity depending on their degree of logical openness, and their descriptive limits are correlated to the Gödel-Turing theorems on formal systems. The symbolic models with low logical openness describe cognition by means of semantics which fix the system/environment relationship, while the sub-symbolic ones with high logical openness tend to capture its evolving dynamics. An observer is defined as a system with high logical openness. In conclusion, the processes of intrinsic emergence characteristic of “bio-logic” - the emergence of new codes - require an alternative model to Turing computation, the natural or bio-morphic computation, whose essential features we outline here.
[Müller, Vincent C. (ed.) (2013), Philosophy and Theory of Artificial Intelligence (SAPERE, 5; Berlin: Springer), 429 pp.] Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
The present volume is an introduction to the use of tools from computability theory and reverse mathematics to study combinatorial principles, in particular Ramsey's theorem and special cases such as Ramsey's theorem for pairs. It would serve as an excellent textbook for graduate students who have completed a course on computability theory.
Two radically different views about time are possible. According to the first, the universe is three dimensional. It has a past and a future, but that does not mean it is spread out in time as it is spread out in the three dimensions of space. This view requires that there is an unambiguous, absolute, cosmic-wide "now" at each instant. According to the second view about time, the universe is four dimensional. It is spread out in both space and time - in space-time in short. Special and general relativity rule out the first view. There is, according to relativity theory, no such thing as an unambiguous, absolute cosmic-wide "now" at each instant. However, we have every reason to hold that both special and general relativity are false. Not only does the historical record tell us that physics advances from one false theory to another. Furthermore, elsewhere I have shown that we must interpret physics as having established physicalism - in so far as physics can ever establish anything theoretical. Physicalism, here, is to be interpreted as the thesis that the universe is such that some unified "theory of everything" is true. Granted physicalism, it follows immediately that any physical theory that is about a restricted range of phenomena only, cannot be true, whatever its empirical success may be. It follows that both special and general relativity are false. This does not mean of course that the implication of these two theories that there is no unambiguous cosmic-wide "now" at each instant is false. It still may be the case that the first view of time, indicated at the outset, is false. Are there grounds for holding that an unambiguous cosmic-wide "now" does exist, despite special and general relativity, both of which imply that it does not exist? There are such grounds.
Elsewhere I have argued that, in order to solve the quantum wave/particle problem and make sense of the quantum domain we need to interpret quantum theory as a fundamentally probabilistic theory, a theory which specifies how quantum entities - electrons, photons, atoms - interact with one another probabilistically. It is conceivable that this is correct, and the ultimate laws of the universe are probabilistic in character. If so, probabilistic transitions could define unambiguous, absolute cosmic-wide "nows" at each instant. It is entirely unsurprising that special and general relativity have nothing to say about the matter. Both theories are pre-quantum mechanical, classical theories, and general relativity in particular is deterministic. The universe may indeed be three dimensional, with a past and a future, but not spread out in four dimensional space-time, despite the fact that relativity theories appear to rule this out. These considerations, finally, have implications for views about the arrow of time and free will.
It is generally thought that impossibility, incompleteness, paraconsistency, undecidability, randomness, computability, paradox, uncertainty and the limits of reason are disparate scientific, physical or mathematical questions having little or nothing in common. I suggest that they are largely standard philosophical problems (i.e., language games) which were mostly resolved by Wittgenstein over 80 years ago. I provide a brief summary of some of the major findings of two of the most eminent students of behavior of modern times, Ludwig Wittgenstein and John Searle, on the logical structure of intentionality (mind, language, behavior), taking as my starting point Wittgenstein's fundamental discovery: that all truly 'philosophical' problems are the same, namely confusions about how to use language in a particular context, and so all solutions are the same, namely looking at how language can be used in the context at issue so that its truth conditions (conditions of satisfaction or COS) are clear. The basic problem is that one can say anything, but one cannot mean (state clear COS for) any arbitrary utterance, and meaning is only possible in a very specific context. I dissect some writings of some of the major commentators on these issues from a Wittgensteinian point of view within the framework of the modern two-systems perspective of thought (popularized as 'thinking fast, thinking slow'), using a new table of intentionality and a new dual-systems nomenclature.
I show that this is a powerful heuristic for describing the true nature of these putatively scientific, physical or mathematical questions, which are really best approached as standard philosophical problems of how language is to be used (language games in Wittgenstein's terminology).
It is often thought that impossibility, incompleteness, paraconsistency, undecidability, randomness, computability, paradox, uncertainty and the limits of reason are disparate scientific, physical or mathematical problems having little or nothing in common. I suggest that they are largely standard philosophical problems (i.e., language games) which were mostly resolved by Wittgenstein over 80 years ago. "What we 'are tempted to say' in such a case is, of course, not philosophy, but it is its raw material. Thus, for example, what a mathematician is inclined to say about the objectivity and reality of mathematical facts is not a philosophy of mathematics, but something for philosophical treatment." Wittgenstein PI 234. "Philosophers constantly see the method of science before their eyes and are irresistibly tempted to ask and answer questions in the way science does. This tendency is the real source of metaphysics and leads the philosopher into complete darkness." Wittgenstein. I provide a brief summary of some of the major findings of two of the most eminent students of behavior of modern times, Ludwig Wittgenstein and John Searle, on the logical structure of intentionality (mind, language, behavior), taking as my starting point Wittgenstein's fundamental discovery: that all truly 'philosophical' problems are the same, namely confusions about how to use language in a particular context, and so all solutions are the same, namely looking at how language can be used in the context at issue so that its truth conditions (conditions of satisfaction or COS) are clear. The basic problem is that one can say anything, but one cannot mean (state clear COS for) any arbitrary utterance, and meaning is only possible in a very specific context.
I dissect some writings of some of the major commentators on these issues from a Wittgensteinian viewpoint within the framework of the modern two-systems perspective of thought (popularized as 'thinking fast, thinking slow'), employing a new table of intentionality and a new dual-systems nomenclature. I show that this is a powerful heuristic for describing the true nature of these putatively scientific, physical or mathematical problems, which are really best approached as standard philosophical problems of how language is to be used (language games in Wittgenstein's terminology). It is my contention that the table of intentionality (rationality, mind, thought, language, personality etc.) that features prominently here describes more or less accurately, or at least serves as a heuristic for, how we think and behave, and so it encompasses not merely philosophy and psychology, but everything else (history, literature, mathematics, politics etc.). Note especially that intentionality and rationality as I (along with Searle, Wittgenstein and others) view them include both the conscious deliberative linguistic System 2 and the unconscious automated prelinguistic System 1 actions or reflexes.
The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature in distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is extrinsic to formal procedures as such, and the intended interpretation of syntax makes no difference to the execution of an algorithm. So the unique 'content' postulated by RTM is superfluous to the formal procedures of CTM. And once these procedures are implemented in a physical mechanism, it is exclusively the causal properties of the physical mechanism that are responsible for all aspects of the system's behaviour. So once again, postulated content is rendered superfluous. To the extent that semantic content may appear to play a role in behaviour, it must be syntactically encoded within the system, and just as in a standard computational artefact, so too with the human mind/brain - it's pure syntax all the way down to the level of physical implementation. Hence 'content' is at most a convenient meta-level gloss, projected from the outside by human theorists, which itself can play no role in cognitive processing.
Inspired by impossibility theorems of social choice theory, many democratic theorists have argued that aggregative forms of democracy cannot lend full democratic justification for the collective decisions reached. Hence, democratic theorists have turned their attention to deliberative democracy, according to which “outcomes are democratically legitimate if and only if they could be the object of a free and reasoned agreement among equals” (Cohen 1997a, 73). However, relatively little work has been done to offer a formal theory of democratic deliberation. This article helps fill that gap by offering a formal theory of three different modes of democratic deliberation: myopic discussion, constructive discussion, and debate. We show that myopic discussion suffers from indeterminacy of long run outcomes, while constructive discussion and debate are conclusive. Finally, unlike the other two modes of deliberation, debate is path independent and converges to a unique compromise position, irrespective of the initial status quo.
In this article, after presenting the basic idea of causal accounts of implementation and the problems they are supposed to solve, I sketch the model of computation preferred by Chalmers and argue that it is too limited to do full justice to computational theories in cognitive science. I also argue that it does not suffice to replace Chalmers’ favorite model with a better abstract model of computation; it is necessary to acknowledge the causal structure of physical computers that is not accommodated by the models used in computability theory. Additionally, an alternative mechanistic proposal is outlined.
I give a detailed review of ‘The Outer Limits of Reason’ by Noson Yanofsky from a unified perspective of Wittgenstein and evolutionary psychology. I indicate that the difficulty with issues such as paradox in language and mathematics, incompleteness, undecidability, computability, the brain and the universe as computers, etc., arises from the failure to look carefully at our use of language in the appropriate context, and hence the failure to separate issues of scientific fact from issues of how language works. I discuss Wittgenstein's views on incompleteness, paraconsistency and undecidability, and Wolpert's work on the limits of computation. To sum it up: the universe according to Brooklyn, good science, not such good philosophy. Those wishing a comprehensive up-to-date framework for human behavior from the modern two systems view may consult my books Talking Monkeys 3rd ed (2019), The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle 2nd ed (2019), Suicide by Democracy 4th ed (2019), The Logical Structure of Human Behavior (2019), The Logical Structure of Consciousness (2019), Understanding the Connections between Science, Philosophy, Psychology, Religion, Politics and Economics (2019), Suicidal Utopian Delusions in the 21st Century 5th ed (2019), and Remarks on Impossibility, Incompleteness, Paraconsistency, Undecidability, Randomness, Computability, Paradox and Uncertainty in Chaitin, Wittgenstein, Hofstadter, Wolpert, Doria, da Costa, Godel, Searle, Rodych, Berto, Floyd, Moyal-Sharrock and Yanofsky, and others.
The naive theory of properties states that for every condition there is a property instantiated by exactly the things which satisfy that condition. The naive theory of properties is inconsistent in classical logic, but there are many ways to obtain consistent naive theories of properties in nonclassical logics. The naive theory of classes adds to the naive theory of properties an extensionality rule or axiom, which states roughly that if two classes have exactly the same members, they are identical. In this paper we examine the prospects for obtaining a satisfactory naive theory of classes. We start from a result by Ross Brady, which demonstrates the consistency of something resembling a naive theory of classes. We generalize Brady’s result somewhat and extend it to a recent system developed by Andrew Bacon. All of the theories we prove consistent contain an extensionality rule or axiom. But we argue that given the background logics, the relevant extensionality principles are too weak. For example, in some of these theories, there are universal classes which are not declared coextensive. We elucidate some very modest demands on extensionality, designed to rule out this kind of pathology. But we close by proving that even these modest demands cannot be jointly satisfied. In light of this new impossibility result, the prospects for a naive theory of classes are bleak.
Some have argued that the possibility of faultless disagreement gives relativist semantic theories an important explanatory advantage over their absolutist and contextualist rivals. Here I combat this argument, focusing on the specific case of aesthetic discourse. My argument has two stages. First, I argue that while relativists may be able to account for the possibility of faultless aesthetic disagreement, they nevertheless face difficulty in accounting for the intuitive limits of faultless disagreement. Second, I develop a new non-relativist theory which can account for the full range of data regarding faultless disagreement. This view—‘Humean Absolutism’—integrates two of Hume’s central principles from Of the Standard of Taste into a truth-conditional framework, resulting in a non-bivalent theory of aesthetic truth. I argue that Humean Absolutism can underwrite the possibility of faultless disagreement whilst retaining reasonable limits around the phenomenon. I close by relating this positive account of faultless disagreement to broader issues concerning the cognitive role of truth-value gaps. NB: this is an unpublished paper and is no longer in progress.