Yuri Matiyasevich's theorem states that the set of all Diophantine equations which have a solution in non-negative integers is not recursive. Craig Smoryński's theorem states that the set of all Diophantine equations which have at most finitely many solutions in non-negative integers is not recursively enumerable. Let R be a subring of Q with or without 1. By H_{10}(R), we denote the problem of whether there exists an algorithm which, for any given Diophantine equation with integer coefficients, can decide whether or not the equation has a solution in R. We prove that a positive solution to H_{10}(R) implies that the set of all Diophantine equations with a finite number of solutions in R is recursively enumerable. We show the converse implication for every infinite set R \subseteq Q such that there exist computable functions \tau_1, \tau_2 : N \to Z which satisfy (\forall n \in N)\, \tau_2(n) \neq 0 \wedge \{\tau_1(n)/\tau_2(n) : n \in N\} = R. This implication for R = N guarantees that Smoryński's theorem follows from Matiyasevich's theorem. Harvey Friedman conjectures that the set of all polynomials of several variables with integer coefficients that have a rational solution is not recursive. He further conjectures that the set of all polynomials of several variables with integer coefficients that have only finitely many rational solutions is not recursively enumerable. These conjectures are equivalent by our results for R = Q.
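The hypothesis on R in the converse implication is concrete: for R = Q, one possible pair of computable functions \tau_1, \tau_2 is given by the standard diagonal enumeration of the rationals. A minimal sketch (the function name and details are mine, not the paper's):

```python
from fractions import Fraction

def tau(n):
    """Return a pair (tau1(n), tau2(n)) with tau2(n) != 0 such that
    {tau1(n)/tau2(n) : n in N} is exactly Q.
    Walks the diagonals d = |a| + b over numerators a in Z and
    denominators b >= 1."""
    pairs = []
    d = 1
    while len(pairs) <= n:
        for b in range(1, d + 1):
            a = d - b  # |numerator| on this diagonal
            if a == 0:
                pairs.append((0, b))
            else:
                pairs.append((a, b))
                pairs.append((-a, b))
        d += 1
    return pairs[n]
```

Since every rational p/q with q >= 1 lies on diagonal |p| + q and every denominator produced is at least 1, the pair satisfies exactly the shape of hypothesis the converse implication requires.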
Heinrich Behmann (1891-1970) obtained his Habilitation under David Hilbert in Göttingen in 1921 with a thesis on the decision problem. In his thesis, he solved - independently of Löwenheim and Skolem's earlier work - the decision problem for monadic second-order logic in a framework that combined elements of the algebra of logic and the newer axiomatic approach to logic then being developed in Göttingen. In a talk given in 1921, he outlined this solution, but also presented important programmatic remarks on the significance of the decision problem and of decision procedures more generally. The text of this talk as well as a partial English translation are included.
Reid, Constance. Hilbert (a Biography). Reviewed by Corcoran in Philosophy of Science 39 (1972), 106–08. -/- Constance Reid was an insider of the Berkeley-Stanford logic circle. Her San Francisco home was in Ashbury Heights near the homes of logicians such as Dana Scott and John Corcoran. Her sister Julia Robinson was one of the top mathematical logicians of her generation, as was Julia's husband Raphael Robinson, for whom Robinson Arithmetic was named. Julia was a Tarski PhD and, in recognition of a distinguished career, was elected President of the American Mathematical Society. https://en.wikipedia.org/wiki/Julia_Robinson http://www.awm-math.org/noetherbrochure/Robinson82.html.
Born's rule, which interprets the square of the wave function as the probability of obtaining a specific value in a measurement, has been accepted as a postulate in the foundations of quantum mechanics. Although there have been many attempts to derive this rule theoretically using different approaches such as the frequency operator approach, many-worlds theory, Bayesian probability and envariance, the literature shows that the arguments in each of these methods are circular. In view of the absence of a convincing theoretical proof, some researchers have recently carried out experiments to validate the rule up to the maximum possible accuracy using multi-order interference (Sinha et al., Science, 329, 418 [2010]). But a convincing analytical proof of Born's rule would make us understand the basic process responsible for the exact square dependency of probability on the wave function. In this paper, by generalizing the method of calculating probability in common experience into quantum mechanics, we prove Born's rule for the statistical interpretation of the wave function.
Introduction to mathematical logic, part 2. Textbook for students in mathematical logic and foundations of mathematics. Platonism, Intuition, Formalism. Axiomatic set theory. Around the Continuum Problem. Axiom of Determinacy. Large Cardinal Axioms. Ackermann's Set Theory. First order arithmetic. Hilbert's 10th problem. Incompleteness theorems. Consequences. Connected results: double incompleteness theorem, unsolvability of reasoning, theorem on the size of proofs, Diophantine incompleteness, Löb's theorem, consistent universal statements are provable, Berry's paradox, incompleteness and Chaitin's theorem. Around Ramsey's theorem.
Gaisi Takeuti (1926–2017) is one of the most distinguished logicians in proof theory after Hilbert and Gentzen. He extensively extended Hilbert's program in the sense that he formulated Gentzen's sequent calculus, conjectured that cut-elimination holds for it (Takeuti's conjecture), and obtained several stunning results in the 1950–60s towards the solution of his conjecture. Though he has been known chiefly as a great mathematician, he wrote many papers in English and Japanese where he expressed his philosophical thoughts. In particular, he used several keywords such as "active intuition" and "self-reflection" from Nishida's philosophy. In this paper, we aim to describe a general outline of our project to investigate Takeuti's philosophy of mathematics. In particular, after reviewing Takeuti's proof-theoretic results briefly, we describe some key elements in Takeuti's texts. By explaining these texts, we point out the connection between Takeuti's proof theory and Nishida's philosophy and explain the future goals of our project.
A Mathematical Review by John Corcoran, SUNY/Buffalo -/- Macbeth, Danielle Diagrammatic reasoning in Frege's Begriffsschrift. Synthese 186 (2012), no. 1, 289–314. ABSTRACT This review begins with two quotations from the paper: its abstract and the first paragraph of the conclusion. The point of the quotations is to make clear by the “give-them-enough-rope” strategy how murky, incompetent, and badly written the paper is. I know I am asking a lot, but I have to ask you to read the quoted passages—aloud if possible. Don’t miss the silly attempt to recycle Kant’s quip “Concepts without intuitions are empty; intuitions without concepts are blind”. What the paper was aiming at includes the absurdity: “Proofs without definitions are empty; definitions without proofs are, if not blind, then dumb.” But the author even bollixed this. The editor didn’t even notice. The copy-editor missed it. And the author’s proof-reading did not catch it. In order not to torment you I will quote the sentence as it appears: “In a slogan: proofs without definitions are empty, merely the aimless manipulation of signs according to rules; and definitions without proofs are, if no blind, then dumb.”[sic] The rest of my review discusses the paper’s astounding misattribution to contemporary logicians of the information-theoretic approach. This approach was cruelly trashed by Quine in his 1970 Philosophy of Logic, and thereafter ignored by every text I know of. The paper under review attributes generally to modern philosophers and logicians views that were never espoused by any of the prominent logicians—such as Hilbert, Gödel, Tarski, Church, and Quine—apparently in an attempt to distance them from Frege: the focus of the article. On page 310 we find the following paragraph. “In our logics it is assumed that inference potential is given by truth-conditions. Hence, we think, deduction can be nothing more than a matter of making explicit information that is already contained in one’s premises. 
If the deduction is valid then the information contained in the conclusion must be contained already in the premises; if that information is not contained already in the premises […], then the argument cannot be valid.” Although the paper is meticulous in citing supporting literature for less questionable points, no references are given for this. In fact, the view that deduction is the making explicit of information that is only implicit in premises has not been espoused in any standard symbolic logic book. It has only recently been articulated by a small number of philosophical logicians from a younger generation, for example, in the prize-winning essay by J. Sagüillo, Methodological practice and complementary concepts of logical consequence: Tarski’s model-theoretic consequence and Corcoran’s information-theoretic consequence, History and Philosophy of Logic, 30 (2009), pp. 21–48. The paper omits definitions of key terms including ‘ampliative’, ‘explicatory’, ‘inference potential’, ‘truth-condition’, and ‘information’. The definition of prime number on page 292 is as follows: “To say that a number is prime is to say that it is not divisible without remainder by another number”. This would make one be the only prime number. The paper being reviewed had the benefit of two anonymous referees who contributed “very helpful comments on an earlier draft”. Could these anonymous referees have read the paper? -/- J. Corcoran, U of Buffalo, SUNY -/- PS By the way, if anyone has a paper that has been turned down by other journals, any journal that would publish something like this might be worth trying.
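The reviewer's objection to the quoted definition of primality can be checked mechanically; a minimal sketch (the function names are mine):

```python
def prime_per_quoted_definition(n):
    # "not divisible without remainder by another number":
    # no d != n with n % d == 0.
    return all(n % d != 0 for d in range(1, n + 1) if d != n)

def prime_standard(n):
    # the usual definition: n > 1 with no divisor strictly between 1 and n
    return n > 1 and all(n % d != 0 for d in range(2, n))

# Every n > 1 is divisible by 1, which counts as "another number",
# so only 1 survives the quoted definition -- exactly the objection.
```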
Girolamo Saccheri (1667--1733) was an Italian Jesuit priest, scholastic philosopher, and mathematician. He earned a permanent place in the history of mathematics by discovering and rigorously deducing an elaborate chain of consequences of an axiom-set for what is now known as hyperbolic (or Lobachevskian) plane geometry. Reviewer's remarks: (1) On two pages of this book Saccheri refers to his previous and equally original book Logica demonstrativa (Turin, 1697) to which 14 of the 16 pages of the editor's "Introduction" are devoted. At the time of the first edition, 1920, the editor was apparently not acquainted with the secondary literature on Logica demonstrativa which continued to grow in the period preceding the second edition [see D. J. Struik, in Dictionary of scientific biography, Vol. 12, 55--57, Scribner's, New York, 1975]. Of special interest in this connection is a series of three articles by A. F. Emch [Scripta Math. 3 (1935), 51--60; Zbl 10, 386; ibid. 3 (1935), 143--152; Zbl 11, 193; ibid. 3 (1935), 221--333; Zbl 12, 98]. (2) It seems curious that modern writers believe that demonstration of the "nondeducibility" of the parallel postulate vindicates Euclid whereas at first Saccheri seems to have thought that demonstration of its "deducibility" is what would vindicate Euclid. Saccheri is perfectly clear in his commitment to the ancient (and now discredited) view that it is wrong to take as an "axiom" a proposition which is not a "primal verity", which is not "known through itself". So it would seem that Saccheri should think that he was convicting Euclid of error by deducing the parallel postulate. The resolution of this confusion is that Saccheri thought that he had proved, not merely that the parallel postulate was true, but that it was a "primal verity" and, thus, that Euclid was correct in taking it as an "axiom". As implausible as this claim about Saccheri may seem, the passage on p. 
237, lines 3--15, seems to admit of no other interpretation. Indeed, Emch takes it this way. (3) As has been noted by many others, Saccheri was fascinated, if not obsessed, by what may be called "reflexive indirect deductions", indirect deductions which show that a conclusion follows from given premises by a chain of reasoning beginning with the given premises augmented by the denial of the desired conclusion and ending with the conclusion itself. It is obvious, of course, that this is simply a species of ordinary indirect deduction; a conclusion follows from given premises if a contradiction is deducible from those given premises augmented by the denial of the conclusion---and it is immaterial whether the contradiction involves one of the premises, the denial of the conclusion, or even, as often happens, intermediate propositions distinct from the given premises and the denial of the conclusion. Saccheri seemed to think that a proposition proved in this way was deduced from its own denial and, thus, that its denial was self-contradictory (p. 207). Inference from this mistake to the idea that propositions proved in this way are "primal verities" would involve yet another confusion. The reviewer gratefully acknowledges extensive communication with his former doctoral students J. Gasser and M. Scanlan. ADDED March 14, 2015: (1) Wikipedia reports that many of Saccheri's ideas have a precedent in the 11th Century Persian polymath Omar Khayyám's Discussion of Difficulties in Euclid, a fact ignored in most Western sources until recently. It is unclear whether Saccheri had access to this work in translation, or developed his ideas independently. (2) This book is another exemplification of the huge difference between indirect deduction and indirect reduction. Indirect deduction requires making an assumption that is inconsistent with the premises previously adopted. This means that the reasoner must perform a certain mental act of assuming a certain proposition. 
In case the premises are all known truths, indirect deduction—which would then be indirect proof—requires the reasoner to assume a falsehood. This fact has been noted by several prominent mathematicians including Hardy, Hilbert, and Tarski. Indirect reduction requires no new assumption. Indirect reduction is simply a transformation of an argument in one form into another argument in a different form. In an indirect reduction one proposition in the old premise set is replaced by the contradictory opposite of the old conclusion and the new conclusion becomes the contradictory opposite of the replaced premise. Roughly and schematically, P,Q/R becomes P,~R/~Q or ~R,Q/~P. Saccheri’s work involved indirect deduction not indirect reduction. (3) The distinction between indirect deduction and indirect reduction has largely slipped through the cracks, the cracks between medieval-oriented logic and modern-oriented logic. The medievalists have a heavy investment in reduction and, though they have heard of deduction, they think that deduction is a form of reduction, or vice versa, or in some cases they think that the word ‘deduction’ is the modern way of referring to reduction. The modernists have no interest in reduction, i.e. in the process of transforming one argument into another having exactly the same number of premises. Modern logicians, like Aristotle, are concerned with deducing a single proposition from a set of propositions. Some focus on deducing a single proposition from the null set—something difficult to relate to reduction.
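The schematic transformation of P,Q/R into P,~R/~Q is itself a logically valid step, and can be checked in a proof assistant; a minimal Lean 4 sketch (the theorem name is mine):

```lean
-- Indirect reduction as a schema: from P, Q ⊢ R obtain P, ¬R ⊢ ¬Q.
theorem indirect_reduction {P Q R : Prop} (h : P → Q → R) :
    P → ¬R → ¬Q :=
  fun hp hnr hq => hnr (h hp hq)
```

The symmetric variant ~R,Q/~P is proved the same way with the roles of P and Q exchanged; note that, as the review stresses, this validates only the transformation of arguments, not the further claim that a proposition so proved is a "primal verity".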
In a quantum universe with a strong arrow of time, we postulate a low-entropy boundary condition to account for the temporal asymmetry. In this paper, I show that the Past Hypothesis also contains enough information to simplify the quantum ontology and define a unique initial condition in such a world. First, I introduce Density Matrix Realism, the thesis that the quantum universe is described by a fundamental density matrix that represents something objective. This stands in sharp contrast to Wave Function Realism, the thesis that the quantum universe is described by a wave function that represents something objective. Second, I suggest that the Past Hypothesis is sufficient to determine a unique and simple density matrix. This is achieved by what I call the Initial Projection Hypothesis: the initial density matrix of the universe is the normalized projection onto the special low-dimensional Hilbert space. Third, because the initial quantum state is unique and simple, we have a strong case for the Nomological Thesis: the initial quantum state of the universe is on a par with laws of nature. This new package of ideas has several interesting implications, including for the harmony between statistical mechanics and quantum mechanics, the dynamic unity of the universe and its subsystems, and the alleged conflict between Humean supervenience and quantum entanglement.
David Hilbert's finitistic standpoint is a conception of elementary number theory designed to answer the intuitionist doubts regarding the security and certainty of mathematics. Hilbert was unfortunately not exact in delineating what that viewpoint was, and Hilbert himself changed his usage of the term through the 1920s and 30s. The purpose of this paper is to outline what the main problems are in understanding Hilbert and Bernays on this issue, based on some publications by them which have so far received little attention, and on a number of philosophical reconstructions of the viewpoint (in particular, by Hand, Kitcher, and Tait).
We discuss the philosophical implications of formal results showing the consequences of adding the epsilon operator to intuitionistic predicate logic. These results are related to Diaconescu’s theorem, a result originating in topos theory that, translated to constructive set theory, says that the axiom of choice (an “existence principle”) implies the law of excluded middle (which purports to be a logical principle). As a logical choice principle, epsilon allows us to translate that result to a logical setting, where one can get an analogue of Diaconescu’s result, but also can disentangle the roles of certain other assumptions that are hidden in mathematical presentations. It is our view that these results have not received the attention they deserve: logicians are unlikely to read a discussion because the results considered are “already well known,” while the results are simultaneously unknown to philosophers who do not specialize in what most philosophers will regard as esoteric logics. This is a problem, since these results have important implications for and promise significant illumination of contemporary debates in metaphysics. The point of this paper is to make the nature of the results clear in a way accessible to philosophers who do not specialize in logic, and in a way that makes clear their implications for contemporary philosophical discussions. To make the latter point, we will focus on Dummettian discussions of realism and anti-realism. Keywords: epsilon, axiom of choice, metaphysics, intuitionistic logic, Dummett, realism, antirealism.
The central motivating idea behind the development of this work is the concept of prespace, a hypothetical structure that is postulated by some physicists to underlie the fabric of space or space-time. I consider how such a structure could relate to space and space-time, and the rest of reality as we know it, and the implications of the existence of this structure for quantum theory. Understanding how this structure could relate to space and to the rest of reality requires, I believe, that we consider how space itself relates to reality, and how other so-called "spaces" used in physics relate to reality. In chapter 2, I compare space and space-time to other spaces used in physics, such as configuration space, phase space and Hilbert space. I support what is known as the "property view" of space, opposing both the traditional views of space and space-time, substantivalism and relationism. I argue that all these spaces are property spaces. After examining the relationships of these spaces to causality, I argue that configuration space has, due to its role in quantum mechanics, a special status in the microscopic world similar to the status of position space in the macroscopic world. In chapter 3, prespace itself is considered. One way of approaching this structure is through the comparison of the prespace structure with a computational system, in particular to a cellular automaton, in which space or space-time and all other physical quantities are broken down into discrete units. I suggest that one way open for a prespace metaphysics can be found if physics is made fully discrete in this way. I suggest as a heuristic principle that the physical laws of our world are such that the computational cost of implementing those laws on an arbitrary computational system is minimized, adapting a heuristic principle of this type proposed by Feynman. 
In chapter 4, some of the ideas of the previous chapters are applied in an examination of the physics and metaphysics of quantum theory. I first discuss the "measurement problem" of quantum mechanics: this problem and its proposed solution are the primary subjects of chapter 4. It turns out that considering how quantum theory could be made fully discrete leads naturally to a suggestion of how standard linear quantum mechanics could be modified to give rise to a solution to the measurement problem. The computational heuristic principle reinforces the same solution. I call the modified quantum mechanics Critical Complexity Quantum Mechanics (CCQM). I compare CCQM with some of the other proposed solutions to the measurement problem, in particular the spontaneous localization model of Ghirardi, Rimini and Weber. Finally, in chapters 5 and 6, I argue that the measure of complexity of quantum mechanical states I introduce in CCQM also provides a new definition of entropy for quantum mechanics, and suggests a solution to the problem of providing an objective foundation for statistical mechanics, thermodynamics, and the arrow of time.
This paper concerns “human symbolic output,” or strings of characters produced by humans in our various symbolic systems; e.g., sentences in a natural language, mathematical propositions, and so on. One can form a set that consists of all of the strings of characters that have been produced by at least one human up to any given moment in human history. We argue that at any particular moment in human history, even at moments in the distant future, this set is finite. But then, given fundamental results in recursion theory, the set will also be recursive, recursively enumerable, axiomatizable, and could be the output of a Turing machine. We then argue that it is impossible to produce a string of symbols that humans could possibly produce but no Turing machine could. Moreover, we show that any given string of symbols that we could produce could also be the output of a Turing machine. Our arguments have implications for Hilbert’s sixth problem and the possibility of axiomatizing particular sciences; they undermine at least two distinct arguments against the possibility of Artificial Intelligence; and they entail that expert systems that are the equals of human experts are possible, so at least one of the goals of Artificial Intelligence can be realized, at least in principle.
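The recursion-theoretic step (any finite set of strings is recursive) amounts to the observation that a finite table is its own decision procedure; a minimal sketch (names mine, not the paper's):

```python
def characteristic(finite_set):
    """Return a decision procedure for membership in a finite set
    of strings: the finite table itself decides membership."""
    table = frozenset(finite_set)
    return lambda s: s in table
```

Since the lookup always terminates with a yes/no answer, the characteristic function is computable, and the set is therefore recursive (hence also recursively enumerable and axiomatizable).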
We show how removing faith-based beliefs in current philosophies of classical and constructive mathematics admits formal, evidence-based, definitions of constructive mathematics; of a constructively well-defined logic of a formal mathematical language; and of a constructively well-defined model of such a language. -/- We argue that, from an evidence-based perspective, classical approaches which follow Hilbert's formal definitions of quantification can be labelled `theistic'; whilst constructive approaches based on Brouwer's philosophy of Intuitionism can be labelled `atheistic'. -/- We then adopt what may be labelled a finitary, evidence-based, `agnostic' perspective and argue that Brouwerian atheism is merely a restricted perspective within the finitary agnostic perspective, whilst Hilbertian theism contradicts the finitary agnostic perspective. -/- We then consider the argument that Tarski's classic definitions permit an intelligence---whether human or mechanistic---to admit finitary, evidence-based, definitions of the satisfaction and truth of the atomic formulas of the first-order Peano Arithmetic PA over the domain N of the natural numbers in two, hitherto unsuspected and essentially different, ways. -/- We show that the two definitions correspond to two distinctly different---not necessarily evidence-based but complementary---assignments of satisfaction and truth to the compound formulas of PA over N. -/- We further show that the PA axioms are true over N, and that the PA rules of inference preserve truth over N, under both the complementary interpretations; and conclude some unsuspected constructive consequences of such complementarity for the foundations of mathematics, logic, philosophy, and the physical sciences.
Walter Dubislav (1895–1937) was a leading member of the Berlin Group for scientific philosophy. This “sister group” of the more famous Vienna Circle emerged around Hans Reichenbach’s seminars at the University of Berlin in 1927 and 1928. Dubislav was to collaborate with Reichenbach, an association that eventuated in their conjointly conducting university colloquia. Dubislav produced original work in philosophy of mathematics, logic, and science, consequently following David Hilbert’s axiomatic method. This brought him to defend formalism in these disciplines as well as to explore the problems of substantiating (Begründung) human knowledge. Dubislav also developed elements of general philosophy of science. Sadly, the political changes in Germany in 1933 proved ruinous to Dubislav. He published scarcely anything after Hitler came to power and in 1937 committed suicide under tragic circumstances. The intent here is to pass in review Dubislav’s philosophy of logic, mathematics, and science and so to shed light on some seminal yet hitherto largely neglected currents in the history of philosophy of science.
Hilbert’s program was an ambitious and wide-ranging project in the philosophy and foundations of mathematics. In order to “dispose of the foundational questions in mathematics once and for all,” Hilbert proposed a two-pronged approach in 1921: first, classical mathematics should be formalized in axiomatic systems; second, using only restricted, “finitary” means, one should give proofs of the consistency of these axiomatic systems. Although Gödel’s incompleteness theorems show that the program as originally conceived cannot be carried out, it had many partial successes, and generated important advances in logical theory and metatheory, both at the time and since. The article discusses the historical background and development of Hilbert’s program, its philosophical underpinnings and consequences, and its subsequent development and influences since the 1930s.
Jakob Friedrich Fries (1773-1843): A Philosophy of the Exact Sciences -/- Shortened version of the article of the same name in: Tabula Rasa. Jenenser magazine for critical thinking. 6th of November 1994 edition -/- 1. Biography -/- Jakob Friedrich Fries was born on the 23rd of August, 1773 in Barby on the Elbe. Because Fries' father had little time, on account of his journeying, he gave up both his sons, of whom Jakob Friedrich was the elder, to the Herrnhut Teaching Institution in Niesky in 1778. In autumn 1792, Fries entered the theological seminary in Niesky, where he remained for three years. There he (secretly) began to study Kant. The reading of Kant's works led Fries, for the first time, to a deep philosophical satisfaction. His enthusiasm for Kant is to be understood against the background that a considerable measure of Kant's philosophy is based on a firm foundation of what happens in an analogous manner in mathematics. -/- During this period he also read Heinrich Jacobi's novels, as well as works of the awakening classic German literature; in particular Friedrich Schiller's works. In 1795, Fries arrived at Leipzig University to study law. During his time in Leipzig he became acquainted with Fichte's philosophy. In autumn of the same year he moved to Jena to hear Fichte at first hand, but was soon disappointed. -/- During his first sojourn in Jena (1796), Fries got to know the chemist A. N. Scherer who was very influenced by the work of the chemist A. L. Lavoisier. Fries discovered, at Scherer's suggestion, the law of stoichiometric composition. Because he felt that his work still needed some time before completion, he withdrew as a private tutor to Zofingen (in Switzerland). There Fries worked on his main critical work, and studied Newton's "Philosophiae naturalis principia mathematica". He remained a lifelong admirer of Newton, whom he praised as a perfectionist of astronomy. 
Fries saw the final aim of his mathematical natural philosophy in the union of Newton's Principia with Kant's philosophy. -/- With the aim of qualifying as a lecturer, he returned to Jena in 1800. Now Fries was known from his independent writings, such as "Reinhold, Fichte and Schelling" (1st edition in 1803), and "Systems of Philosophy as an Evident Science" (1804). The relationship between G. W. F. Hegel and Fries did not develop favourably. Hegel speaks of "the leader of the superficial army", and elsewhere he remarks: "he is an extremely narrow-minded bragger". On the other hand, Fries also has an unfavourable take on Hegel. He writes of the "Redundancy of the Hegelistic dialectic" (1828). In his History of Philosophy (1837/40) he writes of Hegel, amongst other things: "Your way of philosophising seems just to give expression to nonsense in the shortest possible way". In this work, Fries appears to argue with Hegel in an objective manner, and expresses a positive attitude to his work. -/- In 1805, Fries was appointed professor for philosophy in Heidelberg. During his time in Heidelberg, he married Caroline Erdmann. He also sealed his friendships with W. M. L. de Wette and F. H. Jacobi. Jacobi was amongst the contemporaries who most impressed Fries during this period. In Heidelberg, Fries wrote, amongst other things, his three-volume main work New Critique of Reason (1807). -/- In 1816 Fries returned to Jena. When in 1817 the Wartburg festival took place, Fries was among the guests, and made a small speech. 1819 was the so-called "Great Year" for Fries: His wife Caroline died, and Karl Sand, a member of a student fraternity, and one of Fries' former students stabbed the author August von Kotzebue to death. Fries was punished with a philosophy teaching ban but still received a professorship for physics and mathematics. Only after a period of years, and under restrictions, was he again allowed to lecture on philosophy. 
From now on, Fries was excluded from political influence. For the rest of his life he devoted himself once again to philosophical and natural studies. During this period, he wrote "Mathematical Natural Philosophy" (1822) and the "History of Philosophy" (1837/40). -/- 2. Fries' Work Fries left an extensive body of work. A look at the subject areas he worked on makes us aware of the universality of his thinking. Amongst these subjects are: psychic anthropology, psychology, pure philosophy, logic, metaphysics, ethics, politics, religious philosophy, aesthetics, natural philosophy, mathematics, physics and medical subjects, to which, e.g., the text "Regarding the optical centre in the eye together with general remarks about the theory of seeing" (1839) bears witness. With popular philosophical writings like the novel "Julius and Evagoras" (1822), or the arabesque "Longing, and a Trip to the Middle of Nowhere" (1820), he tried to make his philosophy accessible to a broader public. Anthropological considerations are shown in the methodical basis of his philosophy, and to this end, he provides the following didactic instruction for the study of his work: "If somebody wishes to study philosophy on the basis of this guide, I would recommend that after studying natural philosophy, a strict study of logic should follow in order to peruse metaphysics and its applied teachings more rapidly, followed by a strict study of criticism, followed once again by a return to an even closer study of metaphysics and its applied teachings." -/- 3. Continuation of Fries' work through the Friesian School -/- Fries' ideas found general acceptance amongst scientists and mathematicians. A large part of the followers of the "Fries School of Thought" had a scientific or mathematical background. 
Amongst them were the biologist Matthias Jakob Schleiden, the mathematically and scientifically trained philosopher Ernst Friedrich Apelt, the zoologist Oscar Schmidt, and the mathematician Oscar Xavier Schlömilch. Between 1847 and 1849 appeared the treatises of the "Fries School of Thought", with which the publishers aimed to pursue philosophy according to the model of the natural sciences; in the Kant-Fries philosophy they saw the realisation of this ideal. The history of the "New Fries School of Thought" began in 1903, the year in which the philosopher Leonard Nelson gathered a small discussion circle in Göttingen. Amongst the founding members of this circle were A. Rüstow, C. Brinkmann and H. Goesch. In 1904, L. Nelson, A. Rüstow, H. Goesch and the student W. Mecklenburg travelled to Thuringia to find the missing Fries writings. In the same year, G. Hessenberg, K. Kaiser and Nelson published the first pamphlet of the first volume of the "Treatises of the Fries School of Thought, New Edition".

The school set out with the aim of searching for the missing Fries texts and re-publishing them with a view to re-opening discussion of Fries' brand of philosophy. The members of the circle met regularly for discussions, and larger conferences took place, mostly during the holidays. Featured as speakers were: Otto Apelt, Otto Berg, Paul Bernays, G. Fraenkel, K. Grelling, G. Hessenberg, A. Kronfeld, O. Meyerhof, L. Nelson and R. Otto. On the 1st of March 1913, the Jakob-Friedrich-Fries Society was founded. Whilst the Fries School of Thought devoted itself continuously to the advancement of the Kant-Fries philosophy, the main task of the members of the Jakob-Friedrich-Fries Society was the dissemination of the school's publications. In May/June 1914, the organisations took part in their last common conference before the gulf created by the outbreak of the First World War. Several members died during the war; others returned disabled.
The next conference took place in 1919, and a second followed in 1921; nevertheless, such intensive work as had been undertaken between 1903 and 1914 was no longer possible. Leonard Nelson died in October 1927. In the 1930s, the sixth and final volume of the "Treatises of the Fries School of Thought, New Edition" was published; Franz Oppenheimer, Otto Meyerhof, Minna Specht and Grete Hermann were involved in its publication.

4. About Mathematical Natural Philosophy

In 1822, Fries' "Mathematical Natural Philosophy" appeared. Fries rejects the speculative natural philosophy of his time, above all Schelling's natural philosophy. A natural science founded on speculative philosophy stops at the collection, arrangement and ordering of well-known facts; only a mathematical natural philosophy can deliver the necessary explanatory reasoning. The basic dictum of his mathematical natural philosophy is: "All natural theories must be definable using purely mathematically determinable reasons of explanation." Fries is of the opinion that science can attain completeness only by subordinating the empirical facts to the metaphysical categories and mathematical laws.

The crux of Fries' natural philosophy is the thought that mathematics must be made fertile for the natural sciences. Pure mathematics, however, displays solely empty abstraction; to be able to apply it to the sensory world, an intermediary connection is required: mathematics must be connected to metaphysics. Pure mechanics consists of three parts: a) a study of geometrical movement, which considers solely the direction of the movement; b) a study of kinematics, which considers velocity in addition; c) a study of dynamic movement, which incorporates mass and force as well as direction and velocity.

Fries' natural philosophy is of great interest in view of its methodology, particularly with regard to the doctrine of "leading maxims".
Fries calls these leading maxims "heuristic", "because they are principal rules for scientific invention".

Fries' philosophy found great recognition with Carl Friedrich Gauss, amongst others. Fries asked for Gauss's opinion on his work "An Attempt at a Criticism based on the Principles of the Probability Calculus" (1842), and Gauss also provided his opinions on the "Mathematical Natural Philosophy" (1822) and on Fries' "History of Philosophy". Gauss acknowledged Fries' philosophy and wrote in a letter to him: "I have always had a great predilection for philosophical speculation, and now I am all the more happy to have a reliable teacher in you in the study of the destinies of science, from the most ancient up to the latest times, as I have not always found the desired satisfaction in my own reading of the writings of some of the philosophers. In particular, the writings of several famous (maybe better, so-called famous) philosophers who have appeared since Kant have reminded me of the sieve of a goat-milker, or, to use a modern image instead of an old-fashioned one, of Münchhausen's plait, with which he pulled himself out of the water. These amateurs would not dare make such a confession before their masters; it would not happen were they to consider the case upon its merits. I have often regretted not living in your locality, so as to be able to glean much pleasurable entertainment from philosophical verbal discourse."

The starting point of the new adoption of Fries was Nelson's article "The Critical Method and the Relation of Psychology to Philosophy" (1904). Nelson dedicates special attention to Fries' re-interpretation of Kant's concept of deduction. Fries gives Kant's criticism an anthropological rationale: he is guided by the idea that one can examine in a psychological way which knowledge we have a priori, and how it arises, so that we can recognise our own a priori knowledge in an empirical way.
Fries understands deduction to mean an awareness "residing darkly in us", which is opened up to the basic metaphysical principles only through conscious reflection.

Nelson pointed to an analogy between Fries' deduction and modern metamathematics: just as, in the anthropological deduction, the contents of the critical investigation become the object of metaphysical inquiry, so the contents of mathematics become, in David Hilbert's view, the object of metamathematics.
In the 1920s, David Hilbert proposed a research program with the aim of providing mathematics with a secure foundation. This was to be accomplished by first formalizing logic and mathematics in their entirety, and then showing---using only so-called finitistic principles---that these formalizations are free of contradictions.

In the area of logic, the Hilbert school accomplished major advances both in introducing new systems of logic and in developing central metalogical notions, such as completeness and decidability. The analysis of unpublished material presented in Chapter 2 shows that a completeness proof for propositional logic was found by Hilbert and his assistant Paul Bernays already in 1917--18, and that Bernays's contribution was much greater than is commonly acknowledged. Aside from logic, the main technical contribution of Hilbert's Program is the development of formal mathematical theories and proof-theoretical investigations thereof, in particular consistency proofs. In this respect Wilhelm Ackermann's 1924 dissertation is a milestone both in the development of the Program and in proof theory in general. Ackermann gives a consistency proof for a second-order version of primitive recursive arithmetic which, surprisingly, explicitly uses a finitistic version of transfinite induction up to ω^ω^ω. He also gave a faulty consistency proof for a system of second-order arithmetic based on Hilbert's ε-substitution method. Detailed analyses of both proofs in Chapter 3 shed light on the development of finitism and proof theory in the 1920s as practiced in Hilbert's school.

In a series of papers, Charles Parsons has attempted to map out a notion of mathematical intuition which he also brings to bear on Hilbert's finitism. According to him, mathematical intuition fails to be able to underwrite the kind of intuitive knowledge Hilbert thought was attainable by the finitist.
It is argued in Chapter 4 that the extent of finitistic knowledge which intuition can provide is broader than Parsons supposes. According to another influential analysis of finitism, due to W. W. Tait, finitist reasoning coincides with primitive recursive reasoning. The acceptance of non-primitive recursive methods in Ackermann's dissertation, presented in Chapter 3, together with additional textual evidence presented in Chapter 4, shows that this identification is untenable as far as Hilbert's conception of finitism is concerned. Tait's conception differs from Hilbert's in important respects, yet it too is open to criticisms leading to the conclusion that finitism encompasses more than just primitive recursive reasoning.
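The boundary at issue in Tait's identification of finitism with primitive recursion is usually illustrated by the Ackermann function, the standard example of a total computable function that is not primitive recursive. A minimal sketch in Python (the naming is illustrative, not drawn from the dissertation under discussion):

```python
def ackermann(m: int, n: int) -> int:
    """Ackermann-Péter function: total and computable, but it grows
    faster than any primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Small values are tame; growth explodes in the first argument.
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

Each fixed first argument gives a primitive recursive function; it is the diagonal across all of them that escapes the primitive recursive hierarchy.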
1971. Discourse Grammars and the Structure of Mathematical Reasoning II: The Nature of a Correct Theory of Proof and Its Value, Journal of Structural Learning 3, #2, 1–16. REPRINTED 1976. Structural Learning II: Issues and Approaches, ed. J. Scandura, Gordon & Breach Science Publishers, New York, MR56#15263.

This is the second of a series of three articles dealing with the application of linguistics and logic to the study of mathematical reasoning, especially in the setting of a concern for the improvement of mathematical education. The present article presupposes the previous one. Herein we develop our ideas of the purposes of a theory of proof and the criterion of success to be applied to such theories. In addition, we speculate at length concerning the specific kinds of uses to which a successful theory of proof may be put vis-à-vis the improvement of various aspects of mathematical education. The final article will deal with the construction of such a theory. The first article is: 1971. Discourse Grammars and the Structure of Mathematical Reasoning I: Mathematical Reasoning and Stratification of Language, Journal of Structural Learning 3, #1, 55–74. https://www.academia.edu/s/fb081b1886?source=link .
We consider the argument that Tarski's classic definitions permit an intelligence---whether human or mechanistic---to admit finitary, evidence-based definitions of the satisfaction and truth of the atomic formulas of the first-order Peano Arithmetic PA over the domain N of the natural numbers in two hitherto unsuspected and essentially different ways: (1) in terms of classical algorithmic verifiability; and (2) in terms of finitary algorithmic computability. We then show that the two definitions correspond to two distinctly different assignments of satisfaction and truth to the compound formulas of PA over N---I_PA(N; SV) and I_PA(N; SC). We further show that the PA axioms are true over N, and that the PA rules of inference preserve truth over N, under both I_PA(N; SV) and I_PA(N; SC). We then show: (a) that if we assume the satisfaction and truth of the compound formulas of PA are always non-finitarily decidable under I_PA(N; SV), then this assignment corresponds to the classical non-finitary putative standard interpretation I_PA(N; S) of PA over the domain N; and (b) that the satisfaction and truth of the compound formulas of PA are always finitarily decidable under the assignment I_PA(N; SC), from which we may finitarily conclude that PA is consistent. We further conclude that the appropriate inference to be drawn from Goedel's 1931 paper on undecidable arithmetical propositions is that we can define PA formulas which---under interpretation---are algorithmically verifiable as always true over N, but not algorithmically computable as always true over N. We conclude from this that Lucas' Goedelian argument is validated if the assignment I_PA(N; SV) can be treated as circumscribing the ambit of human reasoning about `true' arithmetical propositions, and the assignment I_PA(N; SC) as circumscribing the ambit of mechanistic reasoning about `true' arithmetical propositions.
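The common starting point of both assignments, that satisfaction of an atomic PA formula under a given assignment of numerals is decided by direct finite computation, can be made concrete. A toy evaluator in Python (the term encoding and function names are my own illustration, not the paper's):

```python
def eval_term(t, env):
    """Evaluate a PA term over N. Terms: '0', a variable name,
    ('S', t) for successor, ('+', t1, t2), or ('*', t1, t2)."""
    if t == '0':
        return 0
    if isinstance(t, str):
        return env[t]
    op = t[0]
    if op == 'S':
        return eval_term(t[1], env) + 1
    if op == '+':
        return eval_term(t[1], env) + eval_term(t[2], env)
    if op == '*':
        return eval_term(t[1], env) * eval_term(t[2], env)
    raise ValueError(op)

def atomic_true(lhs, rhs, env):
    """Satisfaction of an atomic formula lhs = rhs under the
    assignment env: decidable by a finite computation."""
    return eval_term(lhs, env) == eval_term(rhs, env)

# x + S(0) = S(x) holds under any particular assignment we test:
print(atomic_true(('+', 'x', ('S', '0')), ('S', 'x'), {'x': 7}))  # True
```

The divergence the paper describes concerns only compound (quantified) formulas, where instance-by-instance verification and a single uniform algorithm come apart.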
What is so special and mysterious about the Continuum, this ancient, always topical, and, alongside the concept of integers, most intuitively transparent and omnipresent conceptual and formal medium for mathematical constructions and battlefield of mathematical inquiries? And why does it resist the century-long siege by the best mathematical minds of all times, committed to penetrating its set-theoretical enigma once and for all?

The double-edged purpose of the present study is to save from the transfinite deadlock of higher set theory the jewel of the mathematical Continuum, this genuine, even if mostly forgotten today, raison d'être of all set-theoretical enterprises to Infinity and beyond, from Georg Cantor to W. Hugh Woodin to Buzz Lightyear, by simultaneously exhibiting the limits and pitfalls of all old and new reductionist foundational approaches to mathematical truth: be it Cantor's or post-Cantorian Idealism, Brouwer's or post-Brouwerian Constructivism, Hilbert's or post-Hilbertian Formalism, Goedel's or post-Goedelian Platonism.

In the spirit of Zeno's paradoxes, but with the enormous historical advantage of hindsight, we claim that Cantor's set-theoretical methodology, powerful and rich in proof-theoretic and similar applications as it might be, is inherently limited by its epistemological framework of transfinite local causality, and neither can be held accountable for the properties of the Continuum already acquired through geometrical, analytical, and arithmetical studies, nor can it be used for an adequate, conceptually sensible, operationally workable, and axiomatically sustainable re-creation of the Continuum.
From a strictly mathematical point of view, this intrinsic limitation of the constative and explicative power of higher set theory finds its explanation in the ultimate phenomenological obstacle to Cantor's transfinite construction identified in this study, similar to topological obstacles in homotopy theory and theoretical physics: the entanglement capacity of the mathematical Continuum.
Methodology: A new hypothesis on the basic features characterizing the Foundations of Mathematics (FoM) is suggested.

Application of the method: By means of it, the several proposals, launched around the year 1900, for discovering the FoM are characterized. It is well known that the historical evolution of these proposals was marked by some notorious failures and conflicts. Particular attention is given to Cantor's programme and its improvements; its merits and insufficiencies are characterized in the light of the new conception of the FoM. After the failures of Frege's and Cantor's programmes, owing to the discoveries of an antinomy and internal contradictions respectively, the two remaining, more radical programmes, i.e. Hilbert's and Brouwer's, generated a great debate; the explanation given here is their mutual incommensurability, defined by means of the differences in their foundational features.

Results: Ignorance of this phenomenon explains the inconclusiveness of a century-long debate between the advocates of these two proposals, which, however, have been so greatly improved as to closely approach or even recognize some basic features of the FoM.

Discussion of the results: Yet no proposal has recognized the alternative to Hilbert's main basic feature, the deductive organization of a theory, although already half a century before the births of all the programmes this alternative was substantially instantiated by Lobachevsky's theory of parallel lines. Some conclusive considerations of a historical and philosophical nature are offered; in particular, the conclusive birth of a pluralism in the FoM is stressed.
Mathematical objects are divided into (1) those which are autonomous, i.e., not dependent for their existence upon mathematicians' conscious acts, and (2) intentional objects, which are so dependent. Platonist philosophy of mathematics holds that all objects belong to group (1); Brouwer's intuitionism holds that all belong to group (2). Here we attempt to develop a dualist ontology of mathematics (implicit in the work of, e.g., Hilbert), exploiting the theories of Meinong, Husserl and Ingarden on the relations between autonomous and intentional objects. In particular, we develop a phenomenology of mathematical works, which have the stratified intentional structure discovered by Ingarden in his study of the literary work.
Quantities like mass and temperature are properties that come in degrees, and those degrees (e.g. 5 kg) are properties called the magnitudes of the quantities. Some philosophers (e.g., Byrne 2003; Byrne & Hilbert 2003; Schroer 2010) talk about magnitudes of phenomenal qualities as if some of our phenomenal qualities are quantities. The goal of this essay is to explore the anti-physicalist implication of this apparently innocent way of conceptualizing phenomenal quantities. I will first argue for a metaphysical thesis about the nature of magnitudes based on Yablo's proportionality requirement on causation. Then I will show that, if some phenomenal qualities are indeed quantities, there can be no demonstrative concepts about some of our phenomenal feelings. That places a significant restriction on the way physicalists can account for the epistemic gap between the phenomenal and the physical; I illustrate the restriction by showing how it rules out a popular physicalist response to the Knowledge Argument.
JOHN CORCORAN AND WILLIAM FRANK. Surprises in logic. Bulletin of Symbolic Logic 19, p. 253. Some people, not just beginning students, are at first surprised to learn that the proposition "If zero is odd, then zero is not odd" is not self-contradictory. Some people are surprised to find out that there are logically equivalent false universal propositions that have no counterexamples in common, i.e., that no counterexample for one is a counterexample for the other. Some people would be surprised to find out that in normal first-order logic existential import is quite common: some universals "Everything that is S is P"—actually quite a few—imply their corresponding existentials "Something that is S is P". Anyway, perhaps contrary to its title, this paper is not a cataloguing of surprises in logic but rather about the mistakes that did, might have, or might still lead people to think that there are no surprises in logic. The paper cataloguing surprises in logic is on our "to-do" list.

► JOHN CORCORAN AND WILLIAM FRANK, Surprises in logic. Philosophy, University at Buffalo, Buffalo, NY 14260-4150, USA. E-mail: corcoran@buffalo.edu. There are many surprises in logic. Peirce gave us a few. Russell gave Frege one. Löwenheim gave Zermelo one. Gödel gave some to Hilbert. Tarski gave us several. When we get a surprise, we are often delighted, puzzled, or skeptical. Sometimes we feel or say "Nice!", "Wow, I didn't know that!", "Is that so?", or the like. Every surprise belongs to someone; there are no disembodied surprises. Saying there are surprises in logic means that logicians experience surprises doing logic—not that among logical propositions some are intrinsically or objectively "surprising". The expression "That isn't surprising" often denigrates logical results. Logicians often aim for surprises. In fact, [1] argues that logic's potential for surprises helps motivate its study and, indeed, helps justify logic's existence as a discipline.
Besides big surprises that change logicians' perspectives, the logician's daily life brings little surprises, e.g. that Gödel's induction axiom alone implies Robinson's axiom. Sometimes wild guesses succeed; sometimes promising ideas fail. Perhaps one of the least surprising things about logic is that it is full of surprises. Against all this stands Wittgenstein's surprising conclusion: "Hence there can never be surprises in logic". This paper unearths basic mistakes in [2] that might help to explain how Wittgenstein arrived at his false conclusion and why he never caught it. The mistakes include: unawareness that surprise is personal, confusing logicians' having certainty with propositions' having logical necessity, confusing definitions with criteria, and thinking that facts demonstrate truths. People demonstrate truths using their deductive know-how and their knowledge of facts: facts per se are epistemically inert. [1] JOHN CORCORAN, Hidden consequence and hidden independence. This Bulletin, vol. 16, p. 443. [2] LUDWIG WITTGENSTEIN, Tractatus Logico-Philosophicus, Kegan Paul, London, 1921.
Hilbert's choice operators τ and ε, when added to intuitionistic logic, strengthen it. In the presence of certain extensionality axioms they produce classical logic, while in the presence of weaker decidability conditions for terms they produce various superintuitionistic intermediate logics. In this thesis, I argue that there are important philosophical lessons to be learned from these results. To make the case, I begin with a historical discussion situating the development of Hilbert's operators in relation to his evolving program in the foundations of mathematics and in relation to the philosophical motivations leading to the development of intuitionistic logic. This sets the stage for a brief description of the relevant part of Dummett's program to recast debates in metaphysics, and in particular disputes about realism and anti-realism, as closely intertwined with issues in philosophical logic, with the acceptance of classical logic for a domain reflecting a commitment to realism for that domain. I then review extant results about what is provable and what is not when one adds epsilon to intuitionistic logic, largely due to Bell and DeVidi, and give several new proofs of intermediate logics from intuitionistic logic + ε without identity. With all this in hand, I turn to a discussion of the philosophical significance of choice operators.
Among the conclusions I defend are the following. First, these results provide a finer-grained basis for Dummett's contention that commitment to classically valid but intuitionistically invalid principles reflects metaphysical commitments, by showing those principles to be derivable from certain existence assumptions. Second, Dummett's framework is improved by these results: they show that questions of realism and anti-realism are not an "all or nothing" matter, but that there are plausibly metaphysical stances between the poles of anti-realism and realism, because different sorts of ontological assumptions yield intermediate rather than classical logic. Third, these intermediate positions between classical and intuitionistic logic link up in interesting ways with our intuitions about objectivity and reality, and do so usefully by connecting to intriguing everyday concepts such as "is smart", which I suggest involve a number of distinct dimensions that might themselves be objective, but which, because of their multivalent structure, are intermediate between being objective and not. Finally, I discuss the implications of these results for ongoing debates about the status of arbitrary and ideal objects in the foundations of logic, showing among other things that much of the discussion is flawed because it does not recognize the degree to which the claims being made depend on the presumption that one is working with a very strong logic.
Since the pioneering work of Birkhoff and von Neumann, quantum logic has been interpreted as the logic of (closed) subspaces of a Hilbert space. There is a progression from the usual Boolean logic of subsets to the "quantum logic" of subspaces of a general vector space, which is then specialized to the closed subspaces of a Hilbert space. But there is a "dual" progression. The notion of a partition (or quotient set or equivalence relation) is dual, in a category-theoretic sense, to the notion of a subset. Hence the Boolean logic of subsets has a dual logic of partitions. The dual progression is then from that logic of partitions to the quantum logic of direct-sum decompositions (i.e., the vector-space version of a set partition) of a general vector space, which can then be specialized to the direct-sum decompositions of a Hilbert space. This allows the logic to express measurement by any self-adjoint operators rather than just the projection operators associated with subspaces. In this introductory paper, the focus is on the quantum logic of direct-sum decompositions of a finite-dimensional vector space (including such a Hilbert space). The primary special case examined is finite vector spaces over ℤ₂, where the pedagogical model of quantum mechanics over sets (QM/Sets) is formulated. In the Appendix, the combinatorics of direct-sum decompositions of finite vector spaces over GF(q) is analyzed, with computations for the case of QM/Sets where q=2.
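The defining property of a direct-sum decomposition, that every vector splits uniquely into components from the summands, can be checked by brute force in the smallest interesting case: (ℤ₂)² as the sum of its two coordinate lines. A hedged sketch in Python (the encoding is my own, not the paper's QM/Sets formalism):

```python
from itertools import product

# Vectors of (Z_2)^2 as bit pairs; addition is componentwise XOR.
space = [(a, b) for a in (0, 1) for b in (0, 1)]

def add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

# Two one-dimensional subspaces (the coordinate "lines"):
U = [(0, 0), (1, 0)]
W = [(0, 0), (0, 1)]

# Direct sum: every vector of the space equals u + w for exactly
# one pair (u, w) with u in U and w in W.
decompositions = {v: [(u, w) for u, w in product(U, W) if add(u, w) == v]
                  for v in space}
print(all(len(ds) == 1 for ds in decompositions.values()))  # True
```

The same exhaustive check scales (slowly) to any finite vector space over GF(q), which is the setting of the Appendix's combinatorics.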
This paper is about Poincaré's view of the foundations of geometry. According to the established view, which has been inherited from the logical positivists, Poincaré, like Hilbert, held that axioms in geometry are schemata that provide implicit definitions of geometric terms, a view he expresses by stating that the axioms of geometry are "definitions in disguise". I argue that this view does not accord well with Poincaré's core commitment in the philosophy of geometry: the view that geometry is the study of groups of operations. In place of the established view I offer a revised one, according to which Poincaré held that axioms in geometry are in fact assertions about invariants of groups. Groups, as forms of the understanding, are prior in conception to the objects of geometry and afford the proper definition of those objects, according to Poincaré. Poincaré's view therefore contrasts sharply with Kant's foundation of geometry in a unique form of sensibility. On my interpretation, axioms are not definitions in disguise because they themselves implicitly define their terms, but rather because they disguise the definitions which imply them.
In their paper "Nothing but the Truth", Andreas Pietz and Umberto Rivieccio present Exactly True Logic (ETL), an interesting variation upon the four-valued logic for first-degree entailment FDE that was given by Belnap and Dunn in the 1970s. Pietz & Rivieccio provide this logic with a Hilbert-style axiomatisation and write that finding a nice sequent calculus for the logic will presumably not be easy. But a sequent calculus can be given, and in this paper we show that a calculus for the Belnap-Dunn logic we have defined earlier can in fact be reused for the purpose of characterising ETL, provided a small alteration is made: initial assignments of signs to the sentences of a sequent to be proved must be different from those used for characterising FDE. While Pietz & Rivieccio define ETL on the language of classical propositional logic, we also study its consequence relation on an extension of this language that is functionally complete for the underlying four truth values. On this extension the calculus acquires a multiple-tree character: two proof trees may be needed to establish one proof.
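The semantic difference between FDE and ETL is small enough to test by brute force: code the four Belnap-Dunn values as pairs of "told true"/"told false" bits, and vary only the designated values (t and b for FDE, exactly t for ETL). A sketch with my own encoding, not the authors' sequent calculi:

```python
from itertools import product

# Belnap-Dunn values as (told_true, told_false):
# t=(1,0), f=(0,1), b=(1,1) "both", n=(0,0) "neither".
VALUES = [(1, 0), (0, 1), (1, 1), (0, 0)]

def neg(v):                      # negation swaps the two reports
    return (v[1], v[0])

def conj(u, v):                  # meet in the truth order
    return (u[0] & v[0], u[1] | v[1])

def entails(premise, conclusion, designated):
    """premise, conclusion: functions of a valuation (p, q)."""
    return all(conclusion(p, q) in designated or premise(p, q) not in designated
               for p, q in product(VALUES, VALUES))

contradiction = lambda p, q: conj(p, neg(p))   # p AND not-p
q_alone = lambda p, q: q

FDE = {(1, 0), (1, 1)}           # t and b designated
ETL = {(1, 0)}                   # only t designated

print(entails(contradiction, q_alone, FDE))  # False: p = b is a counterexample
print(entails(contradiction, q_alone, ETL))  # True: p AND not-p is never exactly true
```

The check reproduces the motivating contrast of the paper: explosion fails in FDE but holds in ETL, since a contradiction can be "both" but never exactly true.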
Some of the most important developments of symbolic logic took place in the 1920s. Foremost among them are the distinction between syntax and semantics and the formulation of questions of completeness and decidability of logical systems. David Hilbert and his students played a very important part in these developments. Their contributions can be traced to unpublished lecture notes and other manuscripts by Hilbert and Bernays dating to the period 1917-1923. The aim of this paper is to describe these results, focussing primarily on propositional logic, and to put them in their historical context. It is argued that truth-value semantics, syntactic ("Post-") and semantic completeness, decidability, and other results were first obtained by Hilbert and Bernays in 1918, and that Bernays's role in their discovery and in the subsequent development of mathematical logic is much greater than has so far been acknowledged.
In spite of the many efforts made to clarify von Neumann's methodology of science, one crucial point seems to have been disregarded in recent literature: his closeness to Hilbert's spirit. In this paper I claim that the scientific methodology adopted by von Neumann in his later foundational reflections originates in the attempt to revaluate Hilbert's axiomatics in the light of Gödel's incompleteness theorems. Indeed, axiomatics continued to be pursued by the Hungarian mathematician in the spirit of Hilbert's school. I argue this point by examining four basic ideas embraced by von Neumann in his foundational considerations: a) the conservative attitude to assume in mathematics; b) the role that mathematics and the axiomatic approach have to play in all that is science; c) the notion of success as an alternative methodological criterion to follow in scientific research; d) the empirical and, at the same time, abstract nature of mathematical thought. Once these four basic ideas have been accepted, Hilbert's spirit in von Neumann's methodology of science becomes clear.
In this paper I discuss the Newman problem in the context of contemporary epistemic structural realism (ESR). I formulate Newman's objection in terms that apply to today's ESR and then evaluate a defence of ESR based on Carnap's use of Ramsey sentences and Hilbert's ε-operator. I show that this defence improves the situation by allowing a formal stipulation of non-structural constraints. However, it falls short of achieving object individuation in the context of satisfying the Ramsified form of a theory. Thus, while limiting the scope of Newman's argument, Carnap sentences do not fully solve the problem.
Reverse mathematics studies which subsystems of second-order arithmetic are equivalent to key theorems of ordinary, non-set-theoretic mathematics. The main philosophical application of reverse mathematics proposed thus far is foundational analysis, which explores the limits of different foundations for mathematics in a formally precise manner. This paper gives a detailed account of the motivations and methodology of foundational analysis, which have heretofore been largely left implicit in the practice. It then shows how this account can be fruitfully applied in the evaluation of major foundational approaches by a careful examination of two case studies: a partial realization of Hilbert's program due to Simpson [1988], and predicativism in the extended form due to Feferman and Schütte.

Shore [2010, 2013] proposes that equivalences in reverse mathematics be proved in the same way as inequivalences, namely by considering only omega-models of the systems in question. Shore refers to this approach as computational reverse mathematics. This paper shows that despite some attractive features, computational reverse mathematics is inappropriate for foundational analysis, for two major reasons. Firstly, the computable entailment relation employed in computational reverse mathematics does not preserve justification for the foundational programs above. Secondly, computable entailment is a Pi-1-1-complete relation, and hence employing it commits one to theoretical resources which outstrip those available within any foundational approach that is proof-theoretically weaker than Pi-1-1-CA0.
In previous work, I introduced a complete axiomatization of classical non-tautologies based essentially on Łukasiewicz’s rejection method. The present paper provides a new, Hilbert-type axiomatization (along with related systems to axiomatize classical contradictions, non-contradictions, contingencies and non-contingencies respectively). This new system is mathematically less elegant, but the format of the inferential rules and the structure of the completeness proof possess some intrinsic interest and suggests instructive comparisons with the logic of tautologies.
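The objects being axiomatized here, classical non-tautologies, are exactly the formulas falsified by some valuation, which for small formulas can be confirmed by exhaustive search over truth assignments. A hedged Python sketch (the formula encoding is my own illustration):

```python
from itertools import product

def is_tautology(formula, variables):
    """formula: a function from a truth-assignment dict to bool.
    Checks all 2^n classical valuations."""
    return all(formula(dict(zip(variables, vals)))
               for vals in product((False, True), repeat=len(variables)))

def imp(a, b):                    # material implication
    return (not a) or b

# p -> (q -> p) is a tautology; p -> q is not (falsified by p=T, q=F).
print(is_tautology(lambda v: imp(v['p'], imp(v['q'], v['p'])), ['p', 'q']))  # True
print(is_tautology(lambda v: imp(v['p'], v['q']), ['p', 'q']))               # False
```

A rejection-style axiomatization, by contrast, derives the falsified formulas syntactically rather than by this semantic search; the brute-force check only fixes which set of formulas is at stake.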
After sketching the main lines of Hilbert's program, certain well-known and influential interpretations of the program are critically evaluated, and an alternative interpretation is presented. Finally, some recent developments in logic related to Hilbert's program are reviewed.
In this survey, a recent computational methodology paying special attention to the separation of mathematical objects from the numeral systems involved in their representation is described. It has been introduced with the intention of allowing one to work with infinities and infinitesimals numerically in a unique computational framework in all situations requiring these notions. The methodology does not contradict Cantor’s or non-standard analysis views and is based on Euclid’s Common Notion no. 5, “The whole is greater than the part”, applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The methodology uses a computational device called the Infinity Computer (patented in the USA and EU) working numerically (recall that traditional theories work with infinities and infinitesimals only symbolically) with infinite and infinitesimal numbers that can be written in a positional numeral system with an infinite radix. It is argued that the numeral systems involved in computations limit our capabilities to compute and lead to ambiguities in theoretical assertions as well. The introduced methodology makes it possible to use the same numeral system for measuring infinite sets and for working with divergent series, probability, fractals, optimization problems, numerical differentiation, ODEs, etc. (recall that traditionally different numerals, such as the lemniscate and aleph-zero, are used in different situations related to infinity). Numerous numerical examples and theoretical illustrations are given. The accuracy of the achieved results is continuously compared with that obtained by traditional tools used to work with infinities and infinitesimals. In particular, it is shown that the new approach allows one to observe the mathematical objects involved in the Continuum Hypothesis and the Riemann zeta function with a higher accuracy than is done by traditional tools.
It is stressed that the hardness of both problems is not related to their nature but is a consequence of the weakness of the traditional numeral systems used to study them. It is shown that the introduced methodology and numeral system change our perception of the mathematical objects studied in the two problems.
In order to explain Wittgenstein’s account of the reality of completed infinity in mathematics, a brief overview of Cantor’s initial injection of the idea into set theory, its trajectory, and the philosophic implications he attributed to it will be presented. Subsequently, we will first expound Wittgenstein’s grammatical critique of the use of the term ‘infinity’ in common parlance and its conversion into a notion of an actually existing infinite ‘set’. Secondly, we will delve into Wittgenstein’s technical critique of the concept of ‘denumerability’ as it is presented in set theory, as well as his philosophic refutation of Cantor’s Diagonal Argument and the implications of such a refutation for the problems of the Continuum Hypothesis and Cantor’s Theorem. Throughout, the discussion will be placed within the historical and philosophical framework of the Grundlagenkrise der Mathematik and Hilbert’s problems.
The period from 1900 to 1935 was particularly fruitful and important for the development of logic and logical metatheory. This survey is organized along eight "itineraries" concentrating on historically and conceptually linked strands in this development. Itinerary I deals with the evolution of conceptions of axiomatics. Itinerary II centers on the logical work of Bertrand Russell. Itinerary III presents the development of set theory from Zermelo onward. Itinerary IV discusses the contributions of the algebra of logic tradition, in particular, Löwenheim and Skolem. Itinerary V surveys the work in logic connected to the Hilbert school, and Itinerary VI deals specifically with consistency proofs and metamathematics, including the incompleteness theorems. Itinerary VII traces the development of intuitionistic and many-valued logics. Itinerary VIII surveys the development of semantical notions from the early work on axiomatics up to Tarski's work on truth.
Gauss’s quadratic reciprocity theorem is among the most important results in the history of number theory. It’s also among the most mysterious: since its discovery in the late 18th century, mathematicians have regarded reciprocity as a deeply surprising fact in need of explanation. Intriguingly, though, there’s little agreement on how the theorem is best explained. Two quite different kinds of proof are most often praised as explanatory: an elementary argument that gives the theorem an intuitive geometric interpretation, due to Gauss and Eisenstein, and a sophisticated proof using algebraic number theory, due to Hilbert. Philosophers have yet to look carefully at such explanatory disagreements in mathematics. I do so here. According to the view I defend, there are two important explanatory virtues—depth and transparency—which different proofs (and other potential explanations) possess to different degrees. Although not mutually exclusive in principle, the packages of features associated with the two stand in some tension with one another, so that very deep explanations are rarely transparent, and vice versa. After developing the theory of depth and transparency and applying it to the case of quadratic reciprocity, I draw some morals about the nature of mathematical explanation.
This article presents modal versions of resource-conscious logics. We concentrate on extensions of variants of linear logic with one minimal non-normal modality. In earlier work, where we investigated agency in multi-agent systems, we have shown that the results scale up to logics with multiple non-minimal modalities. Here, we start with the language of propositional intuitionistic linear logic without the additive disjunction, to which we add a modality. We provide an interpretation of this language on a class of Kripke resource models extended with a neighbourhood function: modal Kripke resource models. We propose a Hilbert-style axiomatisation and a Gentzen-style sequent calculus. We show that the proof theories are sound and complete with respect to the class of modal Kripke resource models. We show that the sequent calculus admits cut elimination and that proof-search is in PSPACE. We then show how to extend the results when non-commutative connectives are added to the language. Finally, we put the l..
ABSTRACT: Topos quantum theory (TQT) is standardly portrayed as a kind of ‘neo-realist’ reformulation of quantum mechanics. In this article, I study the extent to which TQT can really be characterized as a realist formulation of the theory, and examine the question of whether the kind of realism that is provided by TQT satisfies the philosophical motivations that are usually associated with the search for a realist reformulation of quantum theory. Specifically, I show that the notion of the quantum state is problematic for those who view TQT as a realist reformulation of quantum theory. 1 Introduction 2 Topos Quantum Theory 2.1 Phase space 2.2 Hilbert space 2.3 Beyond Hilbert space 2.4 Defining realism 2.5 The spectral presheaf 2.6 The logic of topos quantum theory 3 Interpreting States in Topos Quantum Theory 4 Interpreting Truth Values and Clopen Subobjects in Topos Quantum Theory 4.1 Interpreting the truth values 4.2 Interpreting Subcl 5 Neo-realism 5.1 The covariant approach 6 Conclusion
Given the hard problem of consciousness (Chalmers, 1995), there are no brain electrophysiological correlates of subjective experience (the felt quality of redness or the redness of red, the experience of dark and light, the quality of depth in a visual field, the sound of a clarinet, the smell of mothballs, bodily sensations from pains to orgasms, mental images that are conjured up internally, the felt quality of emotion, the experience of a stream of conscious thought, or the phenomenology of thought). However, there are occipital and left-temporal brain electrophysiological correlates of subjective experience (Pereira, 2015). Notwithstanding, as an evoked signal, the change in event-related brain potential phase (frequency is the change in phase over time) is instantaneous; that is, the frequency will transiently be infinite: a transient peak in frequency (positive or negative), if any, is instantaneous in the electroencephalogram averaging or filtering that the event-related brain potentials require, and the underlying structure of the event-related brain potentials in the frequency domain cannot be accounted for by, for example, Wavelet Transform or Fast Fourier Transform analysis, because these require that frequency be derived by convolution rather than by differentiation. However, as I show in the current original research report, one suitable method for analysing the instantaneous change in event-related brain potential phase, and for accounting for a transient peak in frequency (positive or negative), if any, in the underlying structure of the event-related brain potentials, is Ensemble Empirical Mode Decomposition with post-processing (Xie et al., 2014).
This paper outlines the intellectual biography of Walter Dubislav. Besides being a leading member of the Berlin Group headed by Hans Reichenbach, Dubislav also played a defining role in the Society for Empirical/Scientific Philosophy in Berlin. A student of David Hilbert, Dubislav applied the axiomatic method to produce original work in logic and the formalist philosophy of mathematics. He also introduced the elements of a formalist philosophy of science and addressed more general problems concerning the substantiation of human knowledge. What set Dubislav apart from the other logical empiricists was his expertise in the history of logic and exact philosophy, which enabled him to elucidate and advance the thinking in both disciplines. In the realm of logic proper, Dubislav is best known for his pioneering work in the theory of definitions. What is more, he did original work on the so-called ‘quasi truth-tables’, which aided Reichenbach in developing his logic of probability. Dubislav also elaborated an influe..
Hyperboolean algebras are Boolean algebras with operators, constructed as algebras of complexes (or, power structures) of Boolean algebras. They provide an algebraic semantics for a modal logic (called here a {\em hyperboolean modal logic}) with a Kripke semantics accordingly based on frames in which the worlds are elements of Boolean algebras and the relations correspond to the Boolean operations. We introduce the hyperboolean modal logic, give a complete axiomatization of it, and show that it lacks the finite model property. The method of axiomatization hinges upon the fact that a "difference" operator is definable in hyperboolean algebras, and makes use of additional non-Hilbert-style rules. Finally, we discuss a number of open questions and directions for further research.
Our conscious minds exist in the Universe, therefore they should be identified with physical states that are subject to physical laws. In classical theories of mind, the mental states are identified with brain states that satisfy the deterministic laws of classical mechanics. This approach, however, leads to insurmountable paradoxes such as epiphenomenal minds and illusionary free will. Alternatively, one may identify mental states with quantum states realized within the brain and try to resolve the above paradoxes using the standard Hilbert space formalism of quantum mechanics. In this essay, we first show that identification of mind states with quantum states within the brain is biologically feasible, and then, elaborating on the mathematical proofs of two quantum mechanical no-go theorems, we explain why quantum theory might have profound implications for the scientific understanding of one's mental states, self-identity, beliefs and free will.
Hilbert's ε-calculus is based on an extension of the language of predicate logic by a term-forming operator εx. Two fundamental results about the ε-calculus, the first and second epsilon theorem, play a rôle similar to that which the cut-elimination theorem plays in sequent calculus. In particular, Herbrand's Theorem is a consequence of the epsilon theorems. The paper investigates the epsilon theorems and the complexity of the elimination procedure underlying their proof, as well as the length of Herbrand disjunctions of existential theorems obtained by this elimination procedure.
This book treats ancient logic: the logic that originated in Greece with Aristotle and the Stoics, mainly in the hundred-year period beginning about 350 BCE. Ancient logic was never completely ignored by modern logic from its Boolean origin in the middle 1800s: it was prominent in Boole’s writings and it was mentioned by Frege and by Hilbert. Nevertheless, the first century of mathematical logic did not take it seriously enough to study the ancient logic texts. A renaissance in ancient logic studies occurred in the early 1950s with the publication of the landmark Aristotle’s Syllogistic by Jan Łukasiewicz (Oxford UP 1951, 2nd ed. 1957). Despite its title, it treats the logic of the Stoics as well as that of Aristotle. Łukasiewicz was a distinguished mathematical logician. He had created many-valued logic and the parenthesis-free prefix notation known as Polish notation. He co-authored with Alfred Tarski an important paper on the metatheory of propositional logic, and he was one of Tarski’s three main teachers at the University of Warsaw. Łukasiewicz’s stature was just short of that of the giants: Aristotle, Boole, Frege, Tarski, and Gödel. No mathematical logician of his caliber had ever before quoted the actual teachings of ancient logicians. Not only did Łukasiewicz inject fresh hypotheses, new concepts, and imaginative modern perspectives into the field; his enormous prestige and that of the Warsaw School of Logic reflected on the whole field of ancient logic studies. Suddenly, this previously somewhat dormant and obscure field became active and gained in respectability and importance in the eyes of logicians, mathematicians, linguists, analytic philosophers, and historians. Next to Aristotle himself and perhaps the Stoic logician Chrysippus, Łukasiewicz is the most prominent figure in ancient logic studies. A huge literature traces its origins to Łukasiewicz.
This book, Ancient Logic and Its Modern Interpretations, is based on the 1973 Buffalo Symposium on Modernist Interpretations of Ancient Logic, the first conference devoted entirely to critical assessment of the state of ancient logic studies.
“Negative probability” in practice. Quantum communication: very small phase-space regions turn out to be thermodynamically analogous to those of superconductors. Macro-bodies or signals might exist in coherent or entangled states. Such physical objects, having unusual properties, could be the basis of quantum communication channels or even normal physical ones. Questions and a few answers about negative probability: Why does it appear in quantum mechanics? It appears in phase-space-formulated quantum mechanics; next, in quantum correlations; and in wave-particle dualism. Its meaning: mathematically, a ratio of two measures (of sets) which are not collinear; physically, the ratio of the measurements of two physical quantities which are not simultaneously measurable. The main innovation is in the mapping between phase space and Hilbert space, since both are sums. Phase space is a sum of cells, and Hilbert space is a sum of qubits. The mapping is reduced to the mapping of a cell into a qubit and vice versa. Negative probability helps quantum mechanics to be represented quasi-statistically by quasi-probabilistic distributions. Pure states of negative probability cannot exist, but, where the conditions for their expression exist, they decrease the summed probability of the integrally positive regions of the distributions. They reflect the immediate interaction (interference) of probabilities common in quantum mechanics.
Classical physics and quantum physics suggest two meta-physical types of reality: the classical notion of an objectively definite reality with properties "all the way down," and the quantum notion of an objectively indefinite type of reality. The problem of interpreting quantum mechanics (QM) is essentially the problem of making sense out of an objectively indefinite reality. These two types of reality can be respectively associated with the two mathematical concepts of subsets and quotient sets (or partitions), which are category-theoretically dual to one another and which are developed in two mathematical logics: the usual Boolean logic of subsets and the more recent logic of partitions. Our sense-making strategy is to "follow the math" by showing how the logic and mathematics of set partitions can be transported in a natural way to Hilbert spaces, where they yield the mathematical machinery of QM--which shows that the mathematical framework of QM is a type of logical system over ℂ. And then we show how the machinery of QM can be transported the other way down to the set-like vector spaces over ℤ₂, showing how the classical logical finite probability calculus (in a "non-commutative" version) is a type of "quantum mechanics" over ℤ₂, i.e., over sets. In this way, we try to make sense out of objective indefiniteness and thus to interpret quantum mechanics.
"The definitive clarification of the nature of the infinite has become necessary, not merely for the special interests of the individual sciences, but rather for the honour of the human understanding itself. The infinite has always stirred the emotions of mankind more deeply than any other question; the infinite has stimulated and fertilized reason as few other ideas have; but also the infinite, more than any other notion, is in need of clarification." (David Hilbert 1925).