A proof of Fermat’s last theorem is demonstrated. It is very brief, simple, elementary, and entirely arithmetical. The only premises necessary for the proof are: the three defining properties of the relation of equality (identity, symmetry, and transitivity), modus tollens, the axiom of induction, and the proof of Fermat’s last theorem in the case of.
The previous two parts of the paper demonstrate that the interpretation of Fermat’s last theorem (FLT) in Hilbert arithmetic, meant both in a narrow sense and in a wide sense, can suggest a proof by induction in Part I and by means of the Kochen-Specker theorem in Part II. The same interpretation can also serve for a proof of FLT based on Gleason’s theorem and partly similar to that in Part II. The concept of (probabilistic) measure of a subspace of Hilbert space, and especially its uniqueness, can be unambiguously linked to that of partial algebra or incommensurability, or interpreted as a relation of the two dual branches of Hilbert arithmetic in a wide sense. The investigation of the last relation allows FLT and Gleason’s theorem to be equated in a sense, as two dual counterparts, and the former to be inferred from the latter, as well as vice versa under an additional condition relevant to the Gödel incompleteness of arithmetic to set theory. The qubit Hilbert space itself can in turn be interpreted by the unity of FLT and Gleason’s theorem. The proof of such a fundamental result in number theory as FLT by means of Hilbert arithmetic in a wide sense can be generalized to an idea of “quantum number theory”, able to investigate mathematically the origin of Peano arithmetic from Hilbert arithmetic by mediation of the “nonstandard bijection” and its two dual branches, inherently linking it to information theory. Then, infinitesimal analysis and its revolutionary application to physics can also be realized anew in that wider context, for example, as an exploration of the way the physical quantity of time (respectively, the time derivative in any temporal process considered in physics) appears at all. Finally, the result admits a philosophical reflection on how any hierarchy arises or changes itself only thanks to its dual and idempotent counterpart.
In a previous paper, an elementary and thoroughly arithmetical proof of Fermat’s last theorem by induction has been demonstrated if the case “n = 3” is granted as proved only arithmetically (which has been a fact for a long time), furthermore in a way accessible to Fermat himself, though without being absolutely and precisely correct. The present paper elucidates the contemporary mathematical background from which an inductive proof of FLT can be inferred, since its proof for the case “n = 3” has been known for a long time. It needs “Hilbert mathematics”, which is inherently complete unlike the usual “Gödel mathematics”, and is based on “Hilbert arithmetic”, generalizing Peano arithmetic in a way that unifies it with the qubit Hilbert space of quantum information. An “epoché to infinity” (similar to Husserl’s “epoché to reality”) is necessary to map Hilbert arithmetic into Peano arithmetic in order to be relevant to Fermat’s age. Furthermore, the two linked semigroups originating from addition and multiplication, and in the final analysis from the Peano axioms, can be postulated algebraically as independent of each other in a “Hamilton” modification of arithmetic supposedly equivalent to Peano arithmetic. The inductive proof of FLT can be deduced absolutely precisely in that Hamilton arithmetic and then transferred as a corollary into the standard Peano arithmetic, furthermore in a way accessible in Fermat’s epoch and thus, in principle, to himself. A future, second part of the paper is outlined, directed toward an eventual proof of the case “n = 3” based on the qubit Hilbert space and the Kochen-Specker theorem inferable from it.
The paper is a continuation of another paper published as Part I. Now, the case of “n = 3” is inferred as a corollary from the Kochen and Specker theorem (1967): the eventual solutions of Fermat’s equation for “n = 3” would correspond to an admissible disjunctive division of a qubit into two absolutely independent parts, thus contradicting the contextuality of any qubit implied by the Kochen-Specker theorem. Incommensurability (implied by the absence of hidden variables) is considered as dual to quantum contextuality. The relevant mathematical structure is Hilbert arithmetic in a wide sense, in the framework of which Hilbert arithmetic in a narrow sense and the qubit Hilbert space are dual to each other. A few cases involving set theory are possible: (1) only within the case “n = 3” and, implicitly, within any next level of “n” in Fermat’s equation; (2) the identification of the case “n = 3” and the general case utilizing the axiom of choice rather than the axiom of induction. If the former is the case, the application of set theory and arithmetic can remain disjunctively divided: set theory, “locally”, within any level; and arithmetic, “globally”, across all levels. If the latter is the case, the proof is thoroughly within set theory. Thus, the relevance of Yablo’s paradox to the statement of Fermat’s last theorem is avoided in both cases. The idea of “arithmetic mechanics” is sketched: it might deduce the basic physical dimensions of mechanics (mass, time, distance) from the axioms of arithmetic after a relevant generalization. Furthermore, a future Part III of the paper is suggested: FLT, by mediation of Hilbert arithmetic in a wide sense, can be considered as another expression of Gleason’s theorem in quantum mechanics: the exclusions about “n = 1, 2” in both theorems, as well as the validity for all the remaining values of “n”, can be unified by the theory of quantum information.
The availability (respectively, non-availability) of solutions of Fermat’s equation can be proved to be equivalent to the non-availability (respectively, availability) of a single probabilistic measure according to Gleason’s theorem.
Conventional wisdom dictates that proofs of mathematical propositions should be treated as necessary, and sufficient, for entailing 'significant' mathematical truths only if the proofs are expressed in a (minimally, deemed consistent) formal mathematical theory in terms of: * Axioms/Axiom schemas * Rules of Deduction * Definitions * Lemmas * Theorems * Corollaries. Whilst Andrew Wiles' proof of Fermat's Last Theorem (FLT), which appeals essentially to geometrical properties of real and complex numbers, can be treated as meeting these criteria, it nevertheless leaves two questions unanswered: (i) Why is x^n + y^n = z^n solvable only for n < 3 if x, y, z, n are natural numbers? (ii) What technique might Fermat have used that led him to, even if only briefly, believe he had 'a truly marvellous demonstration' of FLT? Prevailing post-Wiles wisdom, leaving (i) essentially unaddressed, dismisses Fermat's claim as a conjecture without a plausible proof of FLT. However, we posit that providing evidence-based answers to both queries is necessary not only for treating FLT as significant, but also for understanding why FLT can be treated as a true arithmetical proposition. We thus argue that proving a theorem formally from explicit, and implicit, premises/axioms using rules of deduction, as currently accepted, is a meaningless game, of little scientific value, in the absence of evidence that has already established, unambiguously, why the premises/axioms and rules of deduction can be treated, and categorically communicated, as pre-formal truths in Marcus Pantsar's sense. Consequently, only evidence-based, pre-formal, truth can entail formal provability; and the formal proof of any significant mathematical theorem cannot entail its pre-formal truth as evidence-based. It can only identify the explicit/implicit premises that have been used to evidence the, already established, pre-formal truth of a mathematical proposition.
Hence visualising and understanding the evidence-based, pre-formal, truth of a mathematical proposition is the only raison d'être for subsequently seeking a formal proof of the proposition within a formal mathematical language (whether first-order or second-order set theory, arithmetic, geometry, etc.). By this yardstick Andrew Wiles' proof of FLT fails to meet the required, evidence-based, criteria for entailing a true arithmetical proposition. Moreover, we offer two scenarios as to why/how Fermat could have laconically concluded in his recorded marginal noting that FLT is a true arithmetical proposition, even though he either did not (or could not to his own satisfaction) succeed in cogently evidencing, and recording, why FLT can be treated as an evidence-based, pre-formal, arithmetical truth (presumably without appeal to properties of real and complex numbers). It is primarily such a putative, unrecorded, evidence-based reasoning underlying Fermat's laconic assertion which this investigation seeks to reconstruct; and to justify by appeal to a plausible resolution of some philosophical ambiguities concerning the relation between evidence-based, pre-formal, truth and formal provability.
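Question (i) above, why Fermat's equation is solvable only for n < 3, can at least be illustrated numerically. The brute-force search below is a sketch, not a proof; the function name and the search bound are arbitrary choices of this illustration. It finds abundant solutions for n = 1, 2 and none for n = 3:

```python
from itertools import product

def fermat_solutions(n, bound):
    """Exhaustively search x^n + y^n = z^n with 1 <= x <= y <= bound."""
    hits = []
    for x, y in product(range(1, bound + 1), repeat=2):
        if x > y:
            continue
        s = x**n + y**n
        z = round(s ** (1.0 / n))
        # test neighbours of the float root to dodge rounding error
        for cand in (z - 1, z, z + 1):
            if cand > 0 and cand**n == s:
                hits.append((x, y, cand))
    return hits

print(len(fermat_solutions(1, 50)))  # n = 1: trivially many solutions
print(len(fermat_solutions(2, 50)))  # n = 2: the Pythagorean triples
print(len(fermat_solutions(3, 50)))  # n = 3: none, as FLT asserts
```

No finite search can settle the theorem, of course; the sketch only makes the asymmetry between n < 3 and n = 3 tangible.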
Fermat’s Least Time Principle has a long history. The world’s foremost academies of the day, championed by their most prestigious philosophers, competed for the glory and prestige that went with the solution of the refraction problem of light. The controversy, known as the Descartes-Fermat controversy, was due to the contradictory views held by Descartes and Fermat regarding the relative speeds of light in different media. Descartes, with his mechanical philosophy, insisted that every natural phenomenon must be explained by mechanical principles. Fermat, on the other hand, insisted on an end purpose for every motion. For example, the least time of travel, and not the least distance of travel, is the end purpose for the motion of light. This implied a thinking nature, which was rejected by Descartes. Surprisingly, with contradictory assumptions regarding the relative speeds of light in different media, both Descartes and Fermat came to the same result: that the ratio of the sines of the angles of incidence and refraction is a constant. Fermat’s result came to be known as ‘Fermat’s least time principle’. We show in this article that Fermat’s least time principle violates a fundamental theorem in geometry, Ptolemy’s theorem. That leads to the invalidity of Fermat’s principle.
Non-commuting quantities and hidden parameters – Wave-corpuscular dualism and hidden parameters – Local or nonlocal hidden parameters – Phase space in quantum mechanics – Weyl, Wigner, and Moyal – Von Neumann’s theorem about the absence of hidden parameters in quantum mechanics and Hermann – Bell’s objection – Quantum-mechanical and mathematical incommensurability – Kochen-Specker’s idea about their equivalence – The notion of partial algebra – Embeddability of a qubit into a bit – A quantum computer is not a Turing machine – Is continuality universal? – Diffeomorphism and velocity – Einstein’s general principle of relativity – “Mach’s principle” – The Skolemian relativity of the discrete and the continuous – The counterexample in § 6 of their paper – About the classical tautology which is untrue being replaced by the statements about commensurable quantum-mechanical quantities – Logical hidden parameters – The undecidability of the hypothesis about hidden parameters – Wigner’s work and Weyl’s previous one – Lie groups, representations, and the psi-function – From a qualitative to a quantitative expression of relativity – The psi-function, or the discrete by the random – Bartlett’s approach – The psi-function as the characteristic function of a random quantity – Discrete and/or continual description – Quantity and its “digitalized projection” – The idea of “velocity-probability” – The notion of probability and the light speed postulate – Generalized probability and its physical interpretation – A quantum description of the macro-world – The period of the associated de Broglie wave and the length of now – Causality equivalently replaced by chance – The philosophy of quantum information and religion – Einstein’s thesis about “the consubstantiality of inertia and weight” – Again about the interpretation of complex velocity – The speed of time – Newton’s law of inertia and Lagrange’s formulation of mechanics – Force and effect – The theory of tachyons and general relativity – Riesz’s representation theorem – The notion of covariant world line – Encoding a world line by the psi-function – Spacetime and qubit – The psi-function by qubits – About the physical interpretation of both the complex axes of a qubit – The interpretation of the self-adjoint operators’ components – The world line of an arbitrary quantity – The invariance of the physical laws towards quantum object and apparatus – Hilbert space and that of Minkowski – The relationship between the coefficients of the psi-function and the qubits – World line = psi-function + self-adjoint operator – Reality and description – Does a “curved” Hilbert space exist? – The axiom of choice, or when is a flattening of Hilbert space possible? – But why not flatten pseudo-Riemannian space as well? – The commutator of conjugate quantities – Relative mass – The strokes of self-movement and its philosophical interpretation – The self-perfection of the universe – The generalization of quantity in quantum physics – An analogy of the Feynman formalism – Feynman and the many-world interpretation – The psi-function of various objects – Countable and uncountable basis – Generalized continuum and arithmetization – Field and entanglement – Function as coding – The idea of a “curved” Cartesian product – The environment of a function – Another view of the notion of velocity-probability – Reality and description – Hilbert space as a model both of object and description – The notion of holistic logic – Physical quantity as the information about it – Cross-temporal correlations – The forecasting of the future – Description in separable and inseparable Hilbert space – “Forces” or “miracles” – Velocity or time – The notion of non-finite set – Dasein or Dazeit – The trajectory of the whole – Ontological and onto-theological difference – An analogy of the Feynman and many-world interpretation – The psi-function as physical quantity – Things in the world and instances in time – The generation of the physical by the mathematical – The generalized notion of observer – Subjective or objective probability – Energy as the change of probability per unit of time – The generalized principle of least action from a new viewpoint – The exception of two dimensions and Fermat’s last theorem.
Andrew Wiles' analytic proof of Fermat's Last Theorem (FLT), which appeals to geometrical properties of real and complex numbers, leaves two questions unanswered: (i) What technique might Fermat have used that led him to, even if only briefly, believe he had 'a truly marvellous demonstration' of FLT? (ii) Why is x^n + y^n = z^n solvable only for n < 3? In this inter-disciplinary perspective, we offer insight into, and answers to, both queries; yielding a pre-formal proof of why FLT can be treated as a true arithmetical proposition (one which, moreover, might not be provable formally in the first-order Peano Arithmetic PA), where we admit only elementary (i.e., number-theoretic) reasoning, without appeal to analytic properties of real and complex numbers. We cogently argue, further, that any formal proof of FLT needs, as is implicitly suggested by Wiles' proof, to appeal essentially to formal geometrical properties of formal arithmetical propositions.
The paper introduces and utilizes a few new concepts: “nonstandard Peano arithmetic”, “complementary Peano arithmetic”, “Hilbert arithmetic”. They identify the foundations of both mathematics and physics, demonstrating the equivalence of the newly introduced Hilbert arithmetic and the separable complex Hilbert space of quantum mechanics, in turn underlying physics and all the world. That new, both mathematical and physical, ground can be recognized as information complemented and generalized by quantum information. A few fundamental mathematical problems of the present, such as Fermat’s last theorem, the four-color theorem as well as its newly formulated generalization as a “four-letter theorem”, Poincaré’s conjecture, and “P vs NP”, are considered over again, from and within the newly founded conceptual reference frame of information, as illustrations. Simple or crucially simplifying solutions and proofs are demonstrated. The link between the consistent completeness of the system mathematics-physics on the ground of information and all the great mathematical problems of the present (rather than only the enumerated ones) is suggested.
God's Dice. Vasil Penchev - 2015 - In S. Oms, J. Martínez, M. García-Carpintero & J. Díez (eds.), Actas: VIII Conference of the Spanish Society for Logic, Methodology, and Philosophy of Sciences. Barcelona: Universitat de Barcelona. pp. 297-303.
Einstein wrote his famous sentence "God does not play dice with the universe" in a letter to Max Born in 1926. All experiments have confirmed that quantum mechanics is neither wrong nor “incomplete”. One can say that God does play dice with the universe. Let quantum mechanics be granted as the rules generalizing all results of playing some imaginary God’s dice. If that is the case, one can ask what God’s dice should look like. God’s dice turns out to be a qubit, thus having the shape of a unit ball. Any item in the universe, as well as the universe itself, is both infinitely many rolls and a single roll of that dice, for it has infinitely many “sides”. Thus both the smooth motion of classical physics and the discrete motion introduced in addition by quantum mechanics can be described uniformly: correspondingly, as an infinite series converging to some limit, and as a quantum jump directly into that limit. The second, imaginary dimension of God’s dice corresponds to energy, i.e. to the velocity of information change between two probabilities in both series and jump.
In this multi-disciplinary investigation we show how an evidence-based perspective of quantification, in terms of algorithmic verifiability and algorithmic computability, admits evidence-based definitions of well-definedness and effective computability, which yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA, over the structure N of the natural numbers, that are complementary, not contradictory. The first yields the weak, standard, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary, interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation. We situate our investigation within a broad analysis of quantification vis-à-vis: * Hilbert's epsilon-calculus * Goedel's omega-consistency * The Law of the Excluded Middle * Hilbert's omega-Rule * An Algorithmic omega-Rule * Gentzen's Rule of Infinite Induction * Rosser's Rule C * Markov's Principle * The Church-Turing Thesis * Aristotle's particularisation * Wittgenstein's perspective of constructive mathematics * An evidence-based perspective of quantification. By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus; and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences for mathematics, mathematics education, philosophy, and the natural sciences, of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines.
My aim in this paper is to explain what Condorcet’s jury theorem is, and to examine its central assumptions, its significance to the epistemic theory of democracy, and its connection with Rousseau’s theory of the general will. In the first part of the paper I will analyze an epistemic theory of democracy and explain how its connection with Condorcet’s jury theorem is twofold: the theorem is at the same time a contributing historical source, and the model used by the authors to this day. In the second part I will specify the purposes of the theorem itself, and examine its underlying assumptions. The third part will be about an interpretation of Rousseau’s theory, given by Grofman and Feld relying on Condorcet’s jury theorem, and about criticisms of that interpretation. In the fourth, and last, part I will focus on one particular assumption of Condorcet’s theorem, which proves to be especially problematic if we would like to apply the theorem under real-life conditions; namely, the assumption that voters choose between two options only.
Leibniz proposed the ‘Most Determined Path Principle’ in the seventeenth century. According to it, ‘ease’ of travel is the end purpose of motion. Using this principle and his calculus method he demonstrated Snell’s laws of reflection and refraction. This method shows that light follows an extremal (local minimum or maximum) time path in going from one point to another, either directly along a straight line path or along a broken line path when it undergoes reflection or refraction at plane or spherical (concave or convex) surfaces. The extremal time path avoided the criticism that Fermat’s least time path was subjected to by Cartesians, who cited examples of reflections at spherical surfaces where light took the path of longest time. Thereby it became the standard method of demonstration of Snell’s laws. Ptolemy’s theorem is a fundamental theorem in geometry. A special case of it offers a method of finding the minimum sum of the two distances of a point from two given fixed points. We show in this paper that Leibniz’s calculus proof of Snell’s laws violates Ptolemy’s theorem, whereby Leibniz’s proof becomes invalid.
We generalize and extend the class of Sahlqvist formulae in arbitrary polyadic modal languages, to the class of so called inductive formulae. To introduce them we use a representation of modal polyadic languages in a combinatorial style and thus, in particular, develop what we believe to be a better syntactic approach to elementary canonical formulae altogether. By generalizing the method of minimal valuations à la Sahlqvist–van Benthem and the topological approach of Sambin and Vaccaro we prove that all inductive formulae are elementary canonical and thus extend Sahlqvist’s theorem over them. In particular, we give a simple example of an inductive formula which is not frame-equivalent to any Sahlqvist formula. Then, after a deeper analysis of the inductive formulae as set-theoretic operators in descriptive and Kripke frames, we establish a somewhat stronger model-theoretic characterization of these formulae in terms of a suitable equivalence to syntactically simpler formulae in the extension of the language with reversive modalities. Lastly, we study and characterize the elementary canonical formulae in reversive languages with nominals, where the relevant notion of persistence is with respect to discrete frames.
Two of the most important ideas in the philosophy of law are the “Coase Theorem” and the “Prisoner’s Dilemma.” In this paper, the authors explore the relation between these two influential models through a creative thought-experiment. Specifically, the paper presents a pure Coasean version of the Prisoner’s Dilemma, one in which property rights are well-defined and transaction costs are zero (i.e. the prisoners are allowed to openly communicate and bargain with each other), in order to test the truth value of the Coase Theorem. In addition, the paper explores what effect (a) uncertainty, (b) exponential discounting, and (c) elasticity have on the behavior of the prisoners in the Coasean version of the dilemma. Lastly, the paper considers the role of the prosecutor (and third parties generally) in the Prisoner’s Dilemma and closes with some parting thoughts about the complexity of the dilemma. The authors then conclude by identifying the conditions under which the Prisoner’s Dilemma refutes the Coase Theorem.
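For readers less familiar with the model, the strategic core of the Prisoner's Dilemma can be sketched in a few lines. The payoff numbers below are the standard textbook illustration, not figures taken from the paper:

```python
# Standard Prisoner's Dilemma payoffs (years in prison, negated so that
# higher is better). These particular numbers are illustrative only.
PAYOFF = {
    ("C", "C"): (-1, -1),   # both cooperate (stay silent)
    ("C", "D"): (-3,  0),   # row cooperates, column defects
    ("D", "C"): ( 0, -3),   # row defects, column cooperates
    ("D", "D"): (-2, -2),   # both defect (confess)
}

def best_reply(opponent_move):
    """Row player's best reply against a fixed column move."""
    return max(("C", "D"), key=lambda m: PAYOFF[(m, opponent_move)][0])

# Defection is a dominant strategy: it is the best reply to either move.
print(best_reply("C"), best_reply("D"))  # D D

# Yet (C, C) Pareto-dominates (D, D): both players are strictly better off.
print(PAYOFF[("C", "C")], PAYOFF[("D", "D")])
```

The Coasean twist the paper tests is precisely whether costless bargaining over well-defined rights can lift the players out of the (D, D) equilibrium.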
Olfactory perception provides a promising test case for enactivism, since smelling involves actively sampling our surrounding environment by sniffing. Smelling deploys implicit skillful knowledge of how our movement and the airflow around us yield olfactory experiences. The hybrid nature of olfactory experience makes it an ideal test case for enactivism, with its esteem for touch and theoretical roots in vision. Olfaction is like vision in facilitating the perception of distal objects, yet it requires us to breathe in and physically contact the sensory object in a manner similar to touch. The paper offers an analysis of the central theoretical components of enactivism, whose soundness and empirical viability are tested using the empirical literature on sniffing. It will be shown that even if sniffing is an essential component of olfaction, the motoric component is not necessary for perceiving smells, which is contrary to the most crucial tenet of enactivism. Thus, the paper concludes that the theory cannot account for olfactory perception.
This paper critically engages Philip Mirowski's essay, "The scientific dimensions of social knowledge and their distant echoes in 20th-century American philosophy of science." It argues that although the cold war context of anti-democratic elitism best suited for making decisions about engaging in nuclear war may seem to be politically and ideologically motivated, in fact we need to carefully consider the arguments underlying the new rational-choice-based political philosophies of the post-WWII era typified by Arrow's impossibility theorem. A distrust of democratic decision-making principles may be developed by social scientists whose leanings may be toward the left or right side of the spectrum of political practices.
The theory of evolution, which provides the conceptual framework for all modern research in organismal biology and informs research in molecular biology, has gone through several stages of expansion and refinement. Darwin and Wallace (1858) of course proposed the original idea, centering on the twin concepts of natural selection and common descent. Shortly thereafter, Wallace and August Weismann worked toward the complete elimination of any Lamarckian vestiges from the theory, leaning in particular on Weismann’s (1893) concept of the separation of soma and germ, resulting in what is sometimes referred to as “neo-Darwinism”.
The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of the strength of the theory. I exhibit certain strong counterexamples and establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental.
The determinism-free will debate is perhaps as old as philosophy itself and has been engaged in from a great variety of points of view including those of scientific, theological, and logical character. This chapter focuses on two arguments from logic. First, there is an argument in support of determinism that dates back to Aristotle, if not farther. It rests on acceptance of the Law of Excluded Middle, according to which every proposition is either true or false, no matter whether the proposition is about the past, present or future. In particular, the argument goes, whatever one does or does not do in the future is determined in the present by the truth or falsity of the corresponding proposition. The second argument coming from logic is much more modern and appeals to Gödel's incompleteness theorems to make the case against determinism and in favour of free will, insofar as that applies to the mathematical potentialities of human beings. The claim more precisely is that as a consequence of the incompleteness theorems, those potentialities cannot be exactly circumscribed by the output of any computing machine even allowing unlimited time and space for its work. The chapter concludes with some new considerations that may be in favour of a partial mechanist account of the mathematical mind.
A group of the last notebooks that Nietzsche wrote from 1888 to the final notebook of 1889. Translator: Daniel Fidel Ferrer. See: "Nietzsche's Notebooks in English: a Translator's Introduction and Afterward", pages 265-272. Total pages: 390. Translation done June 2012. These are Nietzsche's notebooks from the last productive year of his life, 1888: the unpublished writings called the Nachlass, notebooks (Notizheft) from the year 1888 up to early January 1889. Nietzsche stopped writing entirely after January 6, 1889. The German notebooks of Nietzsche included in these English translations: 12[1-2] Anfang 1888; 13[1-5] Anfang 1888 - Frühjahr 1888; 14[1-227] Frühjahr 1888 (the first note says: Nizza, den 25. März 1888); 15[1-120] Frühjahr 1888; 16[1-89] Frühjahr - Sommer 1888; 17[1-9] Mai - Juni 1888; 18[1-17] Juli - August 1888; 19[1-11] September 1888; 20[1-168] Sommer 1888; 21[1-8] Herbst 1888; 22[1-29] September - Oktober 1888; 23[1-14] Oktober 1888; 24[1-10] Oktober - November 1888; 25[1-21] Dezember 1888 - Januar 1889.
These are the 22 last notebooks of Nietzsche, from 1886-1889. Nietzsche stopped writing entirely around the 6th of January 1889. There are 1785 notes translated here. The group of notes translated in this book is not complete for the year 1886; there are at least two other notebooks that were done in that year. However, Nietzsche sometimes wrote in his notebooks from back to front, and currently the notebooks are only in a general chronological order. Refer to the German Nietzsche philology literature for more exact dating.
Condorcet's famous jury theorem reaches an optimistic conclusion on the correctness of majority decisions, based on two controversial premises about voters: they are competent and vote independently, in a technical sense. I carefully analyse these premises and show that: whether a premise is justified depends on the notion of probability considered; none of the notions renders both premises simultaneously justified. Under the perhaps most interesting notions, the independence assumption should be weakened.
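The theorem's optimistic conclusion is easy to reproduce once its two premises (individual competence p > 1/2 and independence) are granted: the probability of a correct majority is just a binomial tail, and it climbs toward certainty as the jury grows. The sketch below uses p = 0.6 as an arbitrary illustrative competence level:

```python
from math import comb

def majority_correct(p, n):
    """Probability that a simple majority of n independent voters,
    each correct with probability p, reaches the correct verdict (n odd)."""
    k = n // 2 + 1  # smallest majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With competence p > 1/2, majority accuracy rises with jury size;
# with p < 1/2 the theorem's pessimistic flip side holds instead.
for n in (1, 11, 101):
    print(n, round(majority_correct(0.6, n), 4))
print(round(majority_correct(0.4, 101), 4))
```

The abstract's point is precisely that both inputs to this calculation, the value of p and the independence that licenses the binomial model, are contested once one asks which notion of probability they invoke.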
During the 17th century a scientific controversy existed over the derivation of Snell's laws of reflection and refraction. Descartes gave a derivation of the laws, independent of the minimality of the travel time of a ray of light between two given points. Fermat and Leibniz gave a derivation of the laws, based on the minimality of the travel time of a ray of light between two given points. Leibniz's calculus method became the standard method of derivation of the two laws. We demonstrate in this article that Snell's law of reflection follows from simple results of geometry. We do not use the concept of motion or the time of travel in our demonstration. That is, time plays no role in our demonstration.
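The least-time derivation that the article contrasts with its geometric one can be checked numerically: minimizing the travel time of a ray crossing a flat interface recovers the constant ratio of sines. The speeds and geometry below are arbitrary illustrative assumptions, and ternary search is used only because the travel time is convex in the crossing point:

```python
from math import hypot

def travel_time(x, v1, v2, a=1.0, b=1.0, d=2.0):
    """Time from (0, a) above the interface y = 0 to (d, -b) below it,
    crossing the interface at (x, 0); speed v1 above, v2 below."""
    return hypot(x, a) / v1 + hypot(d - x, b) / v2

def least_time_crossing(v1, v2, a=1.0, b=1.0, d=2.0):
    """Minimize travel time over the crossing point x by ternary search
    (valid because travel_time is convex in x)."""
    lo, hi = 0.0, d
    for _ in range(200):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if travel_time(m1, v1, v2, a, b, d) < travel_time(m2, v1, v2, a, b, d):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

v1, v2 = 1.0, 0.75                        # illustrative speeds only
x = least_time_crossing(v1, v2)
sin_i = x / hypot(x, 1.0)                 # sine of the angle of incidence
sin_r = (2.0 - x) / hypot(2.0 - x, 1.0)   # sine of the angle of refraction
print(sin_i / sin_r, v1 / v2)             # the two ratios agree (Snell's law)
```

This reproduces the Fermat/Leibniz result; the article's claim is that the same law can be reached without invoking time at all.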
Suszko’s Thesis is a philosophical claim regarding the nature of many-valuedness. It was formulated by the Polish logician Roman Suszko during the mid-1970s and asserts the existence of "but two truth values". The thesis is a reaction against the notion of many-valuedness conceived by Jan Łukasiewicz. Reputed as one of the modern founders of many-valued logics, Łukasiewicz considered a third, undetermined value in addition to the traditional Fregean values of Truth and Falsehood. For Łukasiewicz, this third value could be seen as a step beyond the Aristotelian dichotomy of Being and non-Being. According to Suszko, Łukasiewicz's ideas rested on a confusion between algebraic values (what sentences describe/denote) and logical values (truth and falsity). Thus, Łukasiewicz's third, undetermined value is no more than an algebraic value, a possible denotation for a sentence, but not a genuine logical value. Suszko's Thesis is supported by a formal result baptized as Suszko's Reduction, a theorem stating that every Tarskian logic may be characterized by a two-valued semantics. The present study is intended as a thorough investigation of Suszko's Thesis and its implications. The first part is devoted to the historical roots of many-valuedness and introduces Suszko's main motivations in formulating the double character of truth-values by drawing the distinction between algebraic and logical values. The second part explores Suszko's Reduction and presents the developments achieved from it; the properties of two-valued semantics in comparison to many-valued semantics are also explored and discussed. Last but not least, the third part investigates the notion of logical values in the context of non-Tarskian notions of entailment, and the meaning of Suszko's Thesis within such frameworks is discussed. Moreover, the philosophical foundations for non-Tarskian notions of entailment are explored in the light of recent debates concerning logical pluralism.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow's theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow's theorem, our main aim is to identify the analogue of Arrow's theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
In this paper I uncover the identities of the interlocutors of Pierre Bayle's Entretiens de Maxime et de Themiste, and I show the significance of these identities for a proper understanding of the Entretiens and of Bayle's thought more generally. Maxime and Themiste represent the philosophers of late antiquity, Maximus of Tyre and Themistius. Bayle brought these philosophers into dialogue in order to suggest that the problem of evil, though insoluble by means of speculative reason, could be dissolved and thus avoided through mutual toleration. I conclude by comparing Bayle's "theodicy of toleration" with Kant's notion of authentic theodicy.
This paper will present an analysis of the relational aspect of Brentano’s last theory of intentionality. My main thesis is that Brentano, at the end of his life, considered relations (relatives) without existent terms to be genuine relations (relatives). Thus, intentionality is a non-reducible real relation (the thinking subject is a non-reducible real relative) regardless of whether or not the object exists. I will use unpublished texts from the Brentanian Nachlass to support my argument.
In this short survey article, I discuss Bell's theorem and some strategies that attempt to avoid the conclusion of non-locality. I focus on two that intersect with the philosophy of probability: (1) quantum probabilities and (2) superdeterminism. The issues they raise not only apply to a wide class of no-go theorems about quantum mechanics but are also of general philosophical interest.
Classical interpretations of Gödel's formal reasoning, and of his conclusions, implicitly imply that mathematical languages are essentially incomplete, in the sense that the truth of some arithmetical propositions of any formal mathematical language, under any interpretation, is both non-algorithmic and essentially unverifiable. However, a language of general scientific discourse, which intends to mathematically express, and unambiguously communicate, intuitive concepts that correspond to scientific investigations, cannot allow its mathematical propositions to be interpreted ambiguously. Such a language must, therefore, define mathematical truth verifiably. We consider a constructive interpretation of classical, Tarskian, truth, and of Gödel's reasoning, under which any formal system of Peano Arithmetic (classically accepted as the foundation of all our mathematical languages) is verifiably complete in the above sense. We show how some paradoxical concepts of quantum mechanics can then be expressed, and interpreted, naturally under a constructive definition of mathematical truth.
In the context of EPR-Bohm type experiments and spin detections confined to spacelike hypersurfaces, a local, deterministic and realistic model within a Friedmann-Robertson-Walker spacetime with constant spatial curvature (S^3) is presented that describes simultaneous measurements of the spins of two fermions emerging in a singlet state from the decay of a spinless boson. Exact agreement with the probabilistic predictions of quantum theory is achieved in the model without data rejection, remote contextuality, superdeterminism or backward causation. A singularity-free Clifford-algebraic representation of S^3 with vanishing spatial curvature and non-vanishing torsion is then employed to transform the model into a more elegant form. Several event-by-event numerical simulations of the model are presented, which confirm our analytical results with an accuracy of 4 parts in 10^4. Possible implications of our results for practical applications such as quantum security protocols and quantum computing are briefly discussed.
We note that a plural version of logicism about arithmetic is suggested by the standard reading of Hume's Principle in terms of 'the number of Fs/Gs'. We lay out the resources needed to prove a version of Frege's principle in plural, rather than second-order, logic. We sketch a proof of the theorem and comment philosophically on the result, which sits well with a metaphysics of natural numbers as plural properties.
In this paper I argue for an association between impurity and explanatory power in contemporary mathematics. This proposal is defended against the ancient and influential idea that purity and explanation go hand-in-hand (Aristotle, Bolzano) and recent suggestions that purity/impurity ascriptions and explanatory power are more or less distinct (Section 1). This is done by analyzing a central and deep result of additive number theory, Szemerédi’s theorem, and various of its proofs (Section 2). In particular, I focus upon the radically impure (ergodic) proof due to Furstenberg (Section 3). Furstenberg’s ergodic proof is striking because it utilizes intuitively foreign and infinitary resources to prove a finitary combinatorial result and does so in a perspicuous fashion. I claim that Furstenberg’s proof is explanatory in light of its clear expression of a crucial structural result, which provides the “reason why” Szemerédi’s theorem is true. This is, however, rather surprising: how can such intuitively different conceptual resources “get a grip on” the theorem to be proved? I account for this phenomenon by articulating a new construal of the content of a mathematical statement, which I call structural content (Section 4). I argue that the availability of structural content saves intuitive epistemic distinctions made in mathematical practice and simultaneously explicates the intervention of surprising and explanatorily rich conceptual resources. Structural content also disarms general arguments for thinking that impurity and explanatory power might come apart. Finally, I sketch a proposal that, once structural content is in hand, impure resources lead to explanatory proofs via suitably understood varieties of simplification and unification (Section 5).
The following essay reconsiders the ontological and logical issues around Frege's Basic Law (V). It focuses less on Russell's Paradox, as most treatments of Frege's Grundgesetze der Arithmetik (GGA) do, and more on the relation between Frege's Basic Law (V) and Cantor's Theorem (CT). For the most part, then, the inconsistency of Naïve Comprehension (in the context of standard Second Order Logic) will not concern us, but rather the ontological issues central to the conflict between (BLV) and (CT). These ontological issues are interesting in their own right. Only if ontological considerations make a strong case for something like (BLV) do we have to trouble ourselves with inconsistency and paraconsistency. These ontological issues also lead to a renewed methodological reflection on what to assume or recognize as an axiom.
Riker (1982) famously argued that Arrow’s impossibility theorem undermined the logical foundations of “populism”, the view that in a democracy, laws and policies ought to express “the will of the people”. In response, his critics have questioned the use of Arrow’s theorem on the grounds that not all configurations of preferences are likely to occur in practice; the critics allege, in particular, that majority preference cycles, whose possibility the theorem exploits, rarely happen. In this essay, I argue that the critics’ rejoinder to Riker misses the mark even if its factual claim about preferences is correct: Arrow’s theorem and related results threaten the populist’s principle of democratic legitimacy even if majority preference cycles never occur. In this particular context, the assumption of an unrestricted domain is justified irrespective of the preferences citizens are likely to have.
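The majority preference cycles at issue can be exhibited with the smallest possible example, the classic three-voter Condorcet profile (a standard textbook illustration, not taken from the essay): every pairwise majority vote has a clear winner, yet the winners chase each other in a circle.

```python
# Three voters with cyclic preferences over options A, B, C
# (the classic Condorcet profile).
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y, ballots):
    """True if a strict majority of ballots ranks x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# Pairwise majority comparisons form a cycle: A beats B, B beats C, C beats A,
# each by two votes to one, so "the will of the people" picks no option.
print(majority_prefers("A", "B", ballots),
      majority_prefers("B", "C", ballots),
      majority_prefers("C", "A", ballots))
```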
On the heels of Franzén's fine technical exposition of Gödel's incompleteness theorems and related topics (Franzén 2004) comes this survey of the incompleteness theorems aimed at a general audience. Gödel's Theorem: An Incomplete Guide to its Use and Abuse is an extended and self-contained exposition of the incompleteness theorems and a discussion of what informal consequences can, and in particular cannot, be drawn from them.
Bell’s theorem has fascinated physicists and philosophers since his 1964 paper, which was written in response to the 1935 paper of Einstein, Podolsky, and Rosen. Bell’s theorem and its many extensions have led to the claim that quantum mechanics and by inference nature herself are nonlocal in the sense that a measurement on a system by an observer at one location has an immediate effect on a distant entangled system. Einstein was repulsed by such “spooky action at a distance” and was led to question whether quantum mechanics could provide a complete description of physical reality. In this paper I argue that quantum mechanics does not require spooky action at a distance of any kind and yet it is entirely reasonable to question the assumption that quantum mechanics can provide a complete description of physical reality. The magic of entangled quantum states has little to do with entanglement and everything to do with superposition, a property of all quantum systems and a foundational tenet of quantum mechanics.
To counter a general belief that all the paradoxes stem from a kind of circularity (or involve some self-reference, or use a diagonal argument), Stephen Yablo designed a paradox in 1993 that seemingly avoided self-reference. We turn Yablo's paradox, the most challenging paradox of recent years, into a genuine mathematical theorem in Linear Temporal Logic (LTL). Indeed, Yablo's paradox comes in several varieties, and he showed in 2004 that there are other versions that are equally paradoxical. Formalizing these versions of Yablo's paradox, we prove some theorems in LTL. This is the first time that Yablo's paradox(es) become new(ly discovered) theorems in mathematics and logic.
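The Yablo sequence and the standard argument that it admits no consistent assignment of truth values can be stated compactly; this is the informal version of the paradox, not the paper's LTL formalization:

```latex
% Yablo's sequence: each sentence says that all later sentences are untrue.
S_n :\quad \text{``for every } k > n,\ S_k \text{ is untrue''}
\qquad (n = 0, 1, 2, \dots)

% Suppose some $S_n$ is true. Then every $S_k$ with $k > n$ is untrue;
% in particular $S_{n+1}$ is untrue, so some $S_k$ with $k > n+1$ is
% true, contradicting the truth of $S_n$. Hence every $S_n$ is untrue.
% But then, for any $n$, all $S_k$ with $k > n$ are untrue, which is
% exactly what $S_n$ says, so $S_n$ is true. Contradiction: there is no
% consistent assignment, although no sentence mentions itself.
```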
In this article, it is suggested that a pedagogical point of departure in the teaching of classical mechanics is the Liouville theorem. The theorem is interpreted as defining the condition that describes the conservation of information in classical mechanics. The Hamilton equations and the Hamilton principle of least action are derived from the Liouville theorem.
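For reference, the standard textbook form of the Liouville theorem for a phase-space density $\rho(q, p, t)$ under Hamiltonian flow (a conventional statement, not the article's own derivation, which runs in the opposite direction) is:

```latex
\frac{d\rho}{dt}
  = \frac{\partial \rho}{\partial t}
  + \sum_i \left(
      \frac{\partial \rho}{\partial q_i}\,\dot{q}_i
    + \frac{\partial \rho}{\partial p_i}\,\dot{p}_i
    \right)
  = 0,
\qquad
\dot{q}_i = \frac{\partial H}{\partial p_i},
\quad
\dot{p}_i = -\frac{\partial H}{\partial q_i}.
```

The second pair are the Hamilton equations; they make the phase-space flow divergence-free, which is what conserves the density along trajectories and hence, on the article's reading, conserves information.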
The program put forward in von Wright's last works defines deontic logic as "a study of conditions which must be satisfied in rational norm-giving activity" and thus introduces the perspective of logical pragmatics. In this paper a formal explication of von Wright's program is proposed within the framework of a set-theoretic approach and extended to a two-sets model which allows for the separate treatment of obligation-norms and permission-norms. Three translation functions connecting the language of deontic logic with the language of the extended set-theoretic approach are introduced and used in proving the correspondence between the deontic theorems, on one side, and the perfection properties of the norm-set and the "counter-set", on the other. In this way the possibility of reinterpreting standard deontic logic as the theory of perfection properties that ought to be achieved in norm-giving activity is formally proved. The extended set-theoretic approach is applied to the problem of the rationality of principles of completion of normative systems. The paper concludes with a plaidoyer for the logical-pragmatics turn envisaged in the late phase of von Wright's work in deontic logic.
It has been known for a few years that no more than Pi-1-1 comprehension is needed for the proof of "Frege's Theorem". One can at least imagine a view that would regard Pi-1-1 comprehension axioms as logical truths but deny that status to any that are more complex, a view that would, in particular, deny that full second-order logic deserves the name. Such a view would serve the purposes of neo-logicists. It is, in fact, no part of my view that, say, Delta-3-1 comprehension axioms are not logical truths. What I am going to suggest, however, is that there is a special case to be made on behalf of Pi-1-1 comprehension. Making the case involves investigating extensions of first-order logic that do not rely upon the presence of second-order quantifiers. A formal system for so-called "ancestral logic" is developed, and it is then extended to yield what I call "Arché logic".