A computational methodology called Grossone Infinity Computing, introduced with the intention of allowing one to work with infinities and infinitesimals numerically, has recently been applied to a number of problems in numerical mathematics (optimization, numerical differentiation, numerical algorithms for solving ODEs, etc.). The possibility of using a specially developed computational device called the Infinity Computer (patented in the USA and EU) for working with infinite and infinitesimal numbers numerically gives this approach an additional advantage over traditional methodologies, which study infinities and infinitesimals only symbolically. The grossone methodology takes Euclid's Common Notion no. 5, 'The whole is greater than the part', and applies it to finite, infinite, and infinitesimal quantities and to finite and infinite sets and processes. It does not contradict Cantor's and non-standard analysis views on infinity and can be considered an applied development of their ideas. In this paper we consider infinite series, and particular attention is devoted to divergent series with alternating signs. The Riemann series theorem states that conditionally convergent series can be rearranged in such a way that they either diverge or converge to an arbitrary real number. It is shown here that Riemann's result is a consequence of the fact that the symbol ∞ used traditionally does not allow us to express quantitatively the number of addends in the series; in other words, it just shows that the number of summands is infinite and does not allow us to count them. The grossone methodology allows us to see that (as happens in the case where the number of addends is finite) rearrangements do not change the result for any sum with a fixed infinite number of summands. Some traditional summation techniques are considered, such as Ramanujan summation, which assigns negative results to divergent series containing infinitely many positive integers. It is shown that careful counting of the number of addends in infinite series allows us to avoid results of this kind if grossone-based numerals are used.
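To make the rearrangement phenomenon concrete, here is a minimal sketch in Python (with an illustrative target value of our choosing, not an example taken from the paper) of the standard constructive proof behind the Riemann series theorem: rearrange the alternating harmonic series by greedily taking positive terms until the partial sum exceeds the target, then negative terms until it drops below, and so on.

    # Rearranging the conditionally convergent series 1 - 1/2 + 1/3 - ...
    # (which sums to ln 2 in its given order) so that its partial sums
    # approach an arbitrary target instead. The target 1.5 is illustrative.

    def rearranged_partial_sum(target, n_terms):
        positives = (1.0 / k for k in range(1, 10**7, 2))    # 1, 1/3, 1/5, ...
        negatives = (-1.0 / k for k in range(2, 10**7, 2))   # -1/2, -1/4, ...
        s = 0.0
        for _ in range(n_terms):
            s += next(positives) if s <= target else next(negatives)
        return s

    for n in (10, 100, 10_000, 1_000_000):
        print(n, rearranged_partial_sum(1.5, n))
    # The partial sums home in on 1.5 rather than ln 2 ≈ 0.693: the bare
    # symbol ∞ leaves the number of addends of each sign unspecified, and
    # that is exactly the degree of freedom the rearrangement exploits.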
Using Riemann’s Rearrangement Theorem, Øystein Linnebo (2020) argues that, if it were possible to apply an infinite positive weight and an infinite negative weight to a working scale, the resulting net weight could end up being any real number, depending on the procedure by which these weights are applied. Appealing to the First Postulate of Archimedes’ treatise on balance, I argue instead that the scale would always read 0 kg. Along the way, we stop to consider an infinitely jittery flea, an infinitely protracted border conflict, and an infinitely electric glass rod.
The work provides comprehensively definitive, unconditional proofs of Riemann's hypothesis, Goldbach's conjecture, the 'twin primes' conjecture, the Collatz conjecture, the Newcomb-Benford theorem, and the Quine-Putnam Indispensability thesis. The proofs validate holonomic metamathematics, meta-ontology, new number theory, new proof theory, new philosophy of logic, and unconditional disproof of the P/NP problem. The proofs, metatheory, and definitions are also confirmed and verified with graphic proof of intrinsic enabling and sustaining principles of reality.
The Four-Colour Theorem (4CT) proof, presented to the mathematical community in a pair of papers by Appel and Haken in the late 1970s, provoked a series of philosophical debates. Many conceptual points of these disputes still require some elucidation. After a brief presentation of the main ideas of Appel and Haken’s procedure for the proof and a reconstruction of Thomas Tymoczko’s argument for the novelty of 4CT’s proof, we shall formulate some questions regarding the connections between the points raised by Tymoczko and some Wittgensteinian topics in the philosophy of mathematics, such as the importance of surveyability as a criterion for distinguishing mathematical proofs from empirical experiments. Our aim is to show that the “characteristic Wittgensteinian invention” (Mühlhölzer 2006) – the strong distinction between proofs and experiments – can shed some light on the conceptual confusions surrounding the Four-Colour Theorem.
Carleson’s celebrated theorem of 1965 [1] asserts the pointwise convergence of the partial Fourier sums of square integrable functions. The Fourier transform has a formulation on each of the Euclidean groups R, Z, and T. Carleson’s original proof worked on T. Fefferman’s proof translates very easily to R. Máté [2] extended Carleson’s proof to Z. Each of the statements of the theorem can be stated in terms of a maximal Fourier multiplier theorem [5]. Inequalities for such operators can be transferred between these three Euclidean groups, as was done by P. Auscher and M. J. Carro [3]. But Carleson’s original proof and the other proofs are very long and very complicated. We give a very short and very “simple” proof of this fact. Our proof uses only the PNSA technique, developed in Part I, and does not use the complicated technical constructions that are unavoidable in the purely standard approach to the present problems. In contrast to Carleson’s method, which is based on profound properties of trigonometric series, the proposed approach is quite general and allows one to study a wide class of analogous problems for general orthogonal series.
Population axiology is the study of the conditions under which one state of affairs is better than another, when the states of affairs in question may differ over the numbers and the identities of the persons who ever live. Extant theories include totalism, averagism, variable value theories, critical level theories, and “person-affecting” theories. Each of these theories is open to objections that are at least prima facie serious. A series of impossibility theorems shows that this is no coincidence: it can be proved, for various sets of prima facie intuitively compelling desiderata, that no axiology can simultaneously satisfy all the desiderata on the list. One’s choice of population axiology appears to be a choice of which intuition one is least unwilling to give up.
Quantum mechanics was reformulated as an information theory involving a generalized kind of information, namely quantum information, at the end of the last century. Quantum mechanics is the most fundamental physical theory, referring to everything claiming to be physical. Any physical entity turns out to be quantum information in the final analysis. A quantum bit is the unit of quantum information, and it is a generalization of the unit of classical information, a bit, just as quantum information itself is a generalization of classical information. Classical information refers to finite series or sets, while quantum information refers to infinite ones. Quantum information, like classical information, is a dimensionless quantity. Quantum information can be considered as a “bridge” between the mathematical and the physical. The standard and common scientific epistemology takes for granted the gap between mathematical models and physical reality. The conception of truth as adequacy is what is able to carry “over” that gap. One should explain how quantum information, being a continuous transition between the physical and the mathematical, may refer to truth as adequacy and thus to the usual scientific epistemology and methodology. If it is the overall substance of anything claiming to be physical, one can ask how different, dimensional physical quantities appear. Quantum information can be discussed as the counterpart of action. Quantum information is what is conserved, action is what is changed, in virtue of the fundamental theorems of Emmy Noether (1918). The gap between mathematical models and physical reality, needing truth as adequacy to be overcome, is substituted by the openness of choice. That openness in turn can be interpreted as the openness of the present, suggesting a different concept of truth recollecting Heidegger’s one as “unconcealment” (ἀλήθεια). Quantum information as what is conserved can be thought of as the conservation of that openness.
In this text the ancient philosophical question of determinism (“Does every event have a cause?”) will be re-examined. In the philosophy of science and physics communities the orthodox position states that the physical world is indeterministic: quantum events would have no causes but happen by irreducible chance. Arguably the clearest theorem that leads to this conclusion is Bell’s theorem. The commonly accepted ‘solution’ to the theorem is ‘indeterminism’, in agreement with the Copenhagen interpretation. Here it is recalled that indeterminism is not really a physical but rather a philosophical hypothesis, and that it has counterintuitive and far-reaching implications. At the same time another solution to Bell’s theorem exists, often termed ‘superdeterminism’ or ‘total determinism’. Superdeterminism appears to be a philosophical position that is centuries and probably millennia old: it is, for instance, Spinoza’s determinism. If Bell’s theorem has both indeterministic and deterministic solutions, choosing between determinism and indeterminism is a philosophical question, not a matter of physical experimentation, as is widely believed. If it is impossible to use physics to decide between both positions, it is legitimate to ask which philosophical theories are of help. Here it is argued that probability theory – more precisely the interpretation of probability – is instrumental for advancing the debate. It appears that the hypothesis of determinism allows one to answer a series of precise questions from probability theory, while indeterminism remains silent on these questions. From this point of view determinism appears to be the more reasonable assumption, after all.
The latest draft (posted 05/14/22) of this short, concise work of proof, theory, and metatheory provides summary meta-proofs and verification of the work and results presented in the Theory and Metatheory of Atemporal Primacy and Riemann, Metatheory, and Proof. In this version, several new and revised definitions of terms were added to subsection SS.1, and many corrected equations, theorems, metatheorems, proofs, and explanations are included in the main text. The body of the text is approximately 18 pages, with four sections: sect. 1, an Introduction (with a 108-page listing of key terms & definitions); sect. 2, the Results; sect. 3, Discussion (commentary & predictions); and sect. 4, Works Cited. As much as possible, the style is intended for readability and understanding by very bright children (with some interest & knowledge of maths, etc.) and very interested nonprofessionals. The results of this project also enable upgrades of number theory, set theory, proof theory, metamathematics, the foundations of science, and quantum mechanics theory (etc.).
In a series of formal studies and less formal applications, Hong and Page offer a ‘diversity trumps ability’ result on the basis of a computational experiment accompanied by a mathematical theorem as explanatory background (Hong & Page 2004, 2009; Page 2007, 2011). “[W]e find that a random collection of agents drawn from a large set of limited-ability agents typically outperforms a collection of the very best agents from that same set” (2004, p. 16386). The result has been extremely influential as an epistemic justification for diversity policy initiatives. Here we show that the ‘diversity trumps ability’ result is tied to the particular random landscape used in Hong and Page’s simulation. We argue against interpreting results on that random landscape in terms of ‘ability’ or ‘expertise.’ These concepts are better modeled on smoother and more realistic landscapes; but, keeping other parameters the same, those are landscapes on which groups of the best-performing agents do better. Smoother landscapes seem to vindicate both the concept and the value of expertise. Change in other parameters, however, also vindicates diversity. With an increase in the pool of available heuristics, diverse groups again do better. Group dynamics makes a difference as well; simultaneous ‘tournament’ deliberation in a group, in place of the ‘relay’ deliberation in Hong and Page’s original model, further emphasizes an advantage for diversity. ‘Tournament’ dynamics particularly shows the advantage of mixed groups that include both experts and non-experts. As a whole, our modeling results suggest that, relative to problem characteristics and conceptual resources, the wisdom of crowds and the wisdom of the few each have a place. We regard ours as a step toward attempting to calibrate their relative virtues in different modelled contexts of intellectual exploration.
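As a point of reference, here is a minimal sketch of a Hong-Page-style experiment (our own simplification with illustrative parameters, not the authors' code): heuristics are lists of step sizes on a ring landscape, teams search in relay fashion, and the landscape can be left rugged or smoothed.

    import random

    # Simplified Hong-Page-style search (illustrative parameters, not the
    # original implementation). Agents are heuristics: lists of step sizes
    # on a ring landscape of size N. A team searches in relay fashion.

    random.seed(0)
    N = 200

    def smooth(values, passes=10):
        # Neighbour averaging yields the smoother, 'expertise-friendly' landscape.
        for _ in range(passes):
            values = [(values[i - 1] + values[i] + values[(i + 1) % N]) / 3
                      for i in range(N)]
        return values

    def climb(pos, heuristic, landscape):
        improved = True
        while improved:
            improved = False
            for step in heuristic:
                if landscape[(pos + step) % N] > landscape[pos]:
                    pos = (pos + step) % N
                    improved = True
        return pos

    def team_score(team, landscape):
        # Each member climbs from the previous member's stopping point.
        total = 0.0
        for start in range(N):
            pos = start
            for h in team:
                pos = climb(pos, h, landscape)
            total += landscape[pos]
        return total / N

    rugged = [random.random() for _ in range(N)]
    pool = [random.sample(range(1, 13), 3) for _ in range(400)]
    for name, scape in (('rugged', rugged), ('smoothed', smooth(rugged))):
        best = sorted(pool, key=lambda h: team_score([h], scape), reverse=True)[:9]
        diverse = random.sample(pool, 9)
        print(name, 'best-9:', round(team_score(best, scape), 3),
              'diverse-9:', round(team_score(diverse, scape), 3))
    # Which team wins depends on ruggedness, pool size, and group dynamics:
    # the dependence the paper explores.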
Rosenkranz devised two bimodal epistemic logics: an idealized one and a realistic one. The former is shown to be sound with respect to a class of neighborhood frames called i-frames. Rosenkranz designed a specific i-frame able to invalidate a series of undesired formulas, proving that these are not theorems of the idealized logic. Nonetheless, an unwanted formula and an unwanted rule of inference are not invalidated. Invalidating the former guarantees the distinction between the two modal operators characteristic of the logic, while invalidating the latter is crucial in order to deal with the problem of logical omniscience. In this paper, I present an i-frame able to invalidate all the undesired formulas already invalidated by Rosenkranz, together with the missing formula and rule of inference.
This is a series of lectures on formal decision theory held at the University of Bayreuth during the summer terms 2008 and 2009. It largely follows the book by Michael D. Resnik: Choices: An Introduction to Decision Theory, 5th ed., Minneapolis/London 2000, and covers the topics: decisions under ignorance and risk; probability calculus (Kolmogorov axioms, Bayes' theorem); philosophical interpretations of probability (R. v. Mises, Ramsey-De Finetti); von Neumann-Morgenstern utility theory; introductory game theory; and social choice theory (Sen's Paradox of Liberalism, Arrow's theorem).
In this survey, a recent computational methodology paying special attention to the separation of mathematical objects from the numeral systems involved in their representation is described. It has been introduced with the intention of allowing one to work with infinities and infinitesimals numerically in a unique computational framework in all the situations requiring these notions. The methodology does not contradict Cantor’s and non-standard analysis views and is based on Euclid’s Common Notion no. 5, “The whole is greater than the part”, applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The methodology uses a computational device called the Infinity Computer (patented in the USA and EU) working numerically (recall that traditional theories work with infinities and infinitesimals only symbolically) with infinite and infinitesimal numbers that can be written in a positional numeral system with an infinite radix. It is argued that numeral systems involved in computations limit our capabilities to compute and lead to ambiguities in theoretical assertions as well. The introduced methodology gives the possibility of using the same numeral system for measuring infinite sets, working with divergent series, probability, fractals, optimization problems, numerical differentiation, ODEs, etc. (recall that traditionally different numerals, such as the lemniscate ∞, Aleph zero, etc., are used in different situations related to infinity). Numerous numerical examples and theoretical illustrations are given. The accuracy of the achieved results is continuously compared with that obtained by traditional tools used to work with infinities and infinitesimals. In particular, it is shown that the new approach allows one to observe mathematical objects involved in the Continuum Hypothesis and the Riemann zeta function with a higher accuracy than is done by traditional tools. It is stressed that the hardness of both problems is not related to their nature but is a consequence of the weakness of traditional numeral systems used to study them. It is shown that the introduced methodology and numeral system change our perception of the mathematical objects studied in the two problems.
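To give a flavour of the numeral system (standard introductory examples from the grossone literature, not results specific to this survey): the grossone ① is defined as the number of elements of the set of natural numbers, and ordinary arithmetic extends to it, so that, for instance,

\[
① - 1 \;<\; ① \;<\; ① + 1, \qquad \frac{①}{2} = \text{the number of even naturals}, \qquad \sum_{k=1}^{①} k = \frac{①\,(①+1)}{2}.
\]

Because every sum then carries an explicit, fixed number of addends expressed in ①-based numerals, rearrangement questions of the kind raised by the Riemann series theorem become bookkeeping over a definite quantity of terms.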
The article is a plea for ethicists to regard probability as one of their most important concerns. It outlines a series of topics of central importance in ethical theory in which probability is implicated, often in a surprisingly deep way, and lists a number of open problems. Topics covered include: interpretations of probability in ethical contexts; the evaluative and normative significance of risk or uncertainty; uses and abuses of expected utility theory; veils of ignorance; Harsanyi’s aggregation theorem; population size problems; equality; fairness; giving priority to the worse off; continuity; incommensurability; nonexpected utility theory; evaluative measurement; aggregation; causal and evidential decision theory; act consequentialism; rule consequentialism; and deontology.
It is common to assume that the problem of induction arises only because of small sample sizes or unreliable data. In this paper, I argue that the piecemeal collection of data can also lead to underdetermination of theories by evidence, even if arbitrarily large amounts of completely reliable experimental and observational data are collected. Specifically, I focus on the construction of causal theories from the results of many studies (perhaps hundreds), including randomized controlled trials and observational studies, where the studies focus on overlapping, but not identical, sets of variables. Two theorems reveal that, for any collection of variables V, there exist fundamentally different causal theories over V that cannot be distinguished unless all variables are simultaneously measured. Underdetermination can result from piecemeal measurement, regardless of the quantity and quality of the data. Moreover, I generalize these results to show that, a priori, it is impossible to choose a series of small (in terms of number of variables) observational studies that will be most informative with respect to the causal theory describing the variables under investigation. This final result suggests that scientific institutions may need to play a larger role in coordinating differing research programs during inquiry.
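A minimal illustration of the kind of underdetermination at issue (a hypothetical three-variable example of ours, far simpler than the paper's theorems): the chain X → Y → Z and the collider X → Y ← Z disagree about the X-Z dependence, but two studies that measure only {X, Y} and {Y, Z} respectively can never observe it.

    import numpy as np

    # Two causal structures over {X, Y, Z}:
    #   chain:    X -> Y -> Z        collider: X -> Y <- Z
    # Study 1 measures only (X, Y); study 2 measures only (Y, Z).
    # Both studies report dependence under either structure, yet the
    # structures disagree about the never-measured X-Z relationship.

    rng = np.random.default_rng(0)
    n = 100_000

    x = rng.normal(size=n)
    y = x + rng.normal(size=n)
    z = y + rng.normal(size=n)                       # chain
    x2, z2 = rng.normal(size=n), rng.normal(size=n)
    y2 = x2 + z2 + rng.normal(size=n)                # collider

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    for name, (X, Y, Z) in (('chain   ', (x, y, z)), ('collider', (x2, y2, z2))):
        print(name, 'corr(X,Y)=%.2f  corr(Y,Z)=%.2f  [unmeasured corr(X,Z)=%.2f]'
              % (corr(X, Y), corr(Y, Z), corr(X, Z)))
    # Both studies see substantial X-Y and Y-Z dependence in both models;
    # only the correlation no study measures (about 0.58 vs 0.00) tells
    # the two causal theories apart.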
This paper presents a new kind of problem in the ethics of distribution. The problem takes the form of several “calibration dilemmas,” in which intuitively reasonable aversion to small-stakes inequalities requires leading theories of distribution to recommend intuitively unreasonable aversion to large-stakes inequalities. We first lay out a series of such dilemmas for prioritarian theories. We then consider a widely endorsed family of egalitarian views and show that they are subject to even more forceful calibration dilemmas than prioritarian theories. Finally, we show that our results challenge common utilitarian accounts of the badness of inequalities in resources.
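For orientation, the prioritarian form these dilemmas target is standardly written (this is the textbook formulation, not the paper's specific construction) as

\[
V(w_1,\dots,w_n) \;=\; \sum_{i=1}^{n} f(w_i),
\]

with $f$ strictly increasing and strictly concave in individual well-being $w_i$. The calibration point is that the curvature of $f$ needed to generate even mild aversion to small-stakes inequalities also constrains how $f$ behaves at large stakes, so modest small-stakes aversion propagates into extreme large-stakes aversion.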
While philosophers of science discuss General Relativity, mathematical physicists do not question it. Therefore, there is a conflict. From the theoretical point of view, “the question of precisely what Einstein discovered remains unanswered, for we have no consensus over the exact nature of the theory's foundations. Is this the theory that extends the relativity of motion from inertial motion to accelerated motion, as Einstein contended? Or is it just a theory that treats gravitation geometrically in the spacetime setting?” “The voices of dissent proclaim that Einstein was mistaken over the fundamental ideas of his own theory and that their basic principles are simply incompatible with this theory. Many newer texts make no mention of the principles Einstein listed as fundamental to his theory; they appear as neither axiom nor theorem. At best, they are recalled as ideas of purely historical importance in the theory's formation. The very name General Relativity is now routinely condemned as a misnomer and its use often zealously avoided in favour of, say, Einstein's theory of gravitation. What has complicated an easy resolution of the debate are the alterations of Einstein's own position on the foundations of his theory” (Norton, 1993). On the other hand, from the mathematical point of view, “General Relativity had been formulated as a messy set of partial differential equations in a single coordinate system. People were so pleased when they found a solution that they didn't care that it probably had no physical significance” (Hawking and Penrose, 1996). So, for a time, the declaration of quantum theorists – “I take the positivist viewpoint that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality. All that one can ask is that its predictions should be in agreement with observation.” (Hawking and Penrose, 1996) – seemed to solve the problem, but results recently achieved with the help of tightly and collectively synchronized clocks in orbit frontally contradict fundamental assumptions of the theory of Relativity. These observations disagree with the predictions of the theory of Relativity (Hatch, 2004a, 2004b, 2007). The mathematical model was developed first by Grossmann, who presented it, in 1913, as the mathematical part of the Entwurf theory, still referred to a curved Minkowski spacetime. Einstein completed the mathematical model, in 1915, formulated for Riemann's spacetimes. In this paper, we argue that of General Relativity only the mathematical model currently remains, darkened by the results of Hatch, and, of course, we conclude that an Einstein gravity theory does not exist.
Quantum complementarity is interpreted in terms of duality and opposition. Any two conjugates are considered both as dual and as opposite. Thus quantum mechanics introduces a mathematical model of them in an exact and experimental science. It is based on the complex Hilbert space, which coincides with its dual. The two dual Hilbert spaces model both duality and opposition in order to resolve, by unifying, the quantum and smooth motions. The model necessarily involves infinity, even in any finite-dimensional subspace of the complex Hilbert space, due to the complex basis. Furthermore, infinity is what unifies duality and opposition, universality and openness, completeness and incompleteness in it. The deduced core of quantum complementarity in terms of infinity, duality, and opposition allows for the resolution of a series of problems in different branches of philosophy: the common structure of incompleteness in Gödel’s (1931) theorems and Einstein, Podolsky, and Rosen’s argument (1935); infinity as both complete and incomplete; grounding and self-grounding; metaphor and representation between language and reality; choice and information; the totality and an observer; the basic idea of philosophical phenomenology. The main conclusion is: quantum complementarity unifies duality and opposition in a consistent way underlying the physical world.
The book is devoted to the contemporary stage of quantum mechanics – quantum information – and especially to its philosophical interpretation and comprehension: it is the first of a series of monographs about the philosophy of quantum information. The second will consider Bell's inequalities, their modified variants, and relations similar to them. The beginnings of quantum information lie in the thirties of the last century. Its rapid development has taken place over the last two decades. The main phenomenon is entanglement. The subareas are quantum computing, quantum communication (and teleportation), and quantum cryptography. The book offers the following main conceptions, theses, and hypotheses: dualistic Pythagoreanism as a new kind among the interpretations of quantum mechanics and information – an arithmetical, logical, and metamathematical one; Gödel's first incompleteness theorem is an undecidable proposition, and consequently the second one, too; a partial rehabilitation of Hilbert's program for the self-foundation of mathematics; the dual foundation of mathematics; Skolemian relativity between Cantor's kinds of infinity, finiteness and infinity, discreteness and continuity, completeness and incompleteness, etc.; information is a physical quantity representing the non-reducibility of a system to its parts, particularly nonadditivity; there exist pure relations «by themselves», which cannot be reduced to predications; energy conservation can and should be generalized; Einstein's «general covariance» or «principle of relativity» can and should be generalized to cover discrete morphisms where the notion of velocity does not make sense.
The quantum computer is considered as a generalization of the Turing machine: bits are substituted by qubits. In turn, a "qubit" is the generalization of "bit" referring to infinite sets or series. It extends the concept of calculation from finite processes and algorithms to infinite ones, impossible for any Turing machine (such as our computers). However, the concept of the quantum computer meets all the paradoxes of infinity, such as Gödel's incompleteness theorems (1931), etc. A philosophical reflection on how a quantum computer might implement the idea of "infinite calculation" is the main subject.
The problem of indeterminism in quantum mechanics, usually considered as a generalization of the determinism of classical mechanics and physics to the case of discrete (quantum) changes, is interpreted as a purely mathematical problem concerning the relation of a set of independent choices to a well-ordered series, and therefore governed by the equivalence of the axiom of choice and the well-ordering “theorem”. The former corresponds to quantum indeterminism, and the latter to classical determinism. No other premises (besides this purely mathematical equivalence) are necessary to explain how the probabilistic causation of quantum mechanics relates to the unambiguous determinism of classical physics. The same equivalence underlies the mathematical formalism of quantum mechanics. It merged the well-ordered components of the vectors of Heisenberg’s matrix mechanics and the non-ordered members of the wave functions of Schrödinger’s undulatory mechanics. The mathematical condition of that merging is just the equivalence of the axiom of choice and the well-ordering theorem, implying in turn Max Born’s probabilistic interpretation of quantum mechanics. In particular, energy conservation is justified differently than in classical physics: it is due to the equivalence at issue rather than to the principle of least action. One may involve two forms of energy conservation, corresponding respectively to the smooth changes of classical physics and to the discrete changes of quantum mechanics. Further, both kinds of changes can be equated to each other under a unified energy conservation, and the conditions for the violation of energy conservation can be investigated, thereby pointing toward a certain generalization of energy conservation.
The aim of this paper is to argue that the (alleged) indeterminism of quantum mechanics, claimed by adherents of the Copenhagen interpretation since Born (1926), can be proved from Chaitin's follow-up to Gödel's (first) incompleteness theorem. In comparison, Bell's (1964) theorem as well as the so-called free will theorem – originally due to Heywood and Redhead (1983) – left two loopholes for deterministic hidden variable theories, namely giving up either locality (more precisely: local contextuality, as in Bohmian mechanics) or free choice (i.e. uncorrelated measurement settings, as in 't Hooft's cellular automaton interpretation of quantum mechanics). The main point is that Bell and others did not exploit the full empirical content of quantum mechanics, which consists of long series of outcomes of repeated measurements (idealized as infinite binary sequences): their arguments only used the long-run relative frequencies derived from such series, and hence merely asked hidden variable theories to reproduce single-case Born probabilities defined by certain entangled bipartite states. If we idealize binary outcome strings of a fair quantum coin flip as infinite sequences, quantum mechanics predicts that these typically (i.e. almost surely) have a property called 1-randomness in logic, which is much stronger than uncomputability. This is the key to my claim, which is admittedly based on a stronger (yet compelling) notion of determinism than what is common in the literature on hidden variable theories.
God's Dice. Vasil Penchev - 2015 - In S. Oms, J. Martínez, M. García-Carpintero & J. Díez (eds.), Actas: VIII Conference of the Spanish Society for Logic, Methodology, and Philosophy of Sciences. Barcelona: Universitat de Barcelona. pp. 297-303.
Einstein wrote his famous sentence "God does not play dice with the universe" in a letter to Max Born in 1926. All experiments have confirmed that quantum mechanics is neither wrong nor “incomplete”. One can say that God does play dice with the universe. Let quantum mechanics be granted as the rules generalizing all results of playing some imaginary God’s dice. If that is the case, one can ask what God’s dice should look like. God’s dice turns out to be a qubit, and thus to have the shape of a unit ball. Any item in the universe, as well as the universe itself, is both infinitely many rolls and a single roll of that dice, for it has infinitely many “sides”. Thus both the smooth motion of classical physics and the discrete motion introduced in addition by quantum mechanics can be described uniformly, correspondingly, as an infinite series converging to some limit and as a quantum jump directly into that limit. The second, imaginary dimension of God’s dice corresponds to energy, i.e. to the velocity of information change between two probabilities in both series and jump.
In this short survey article, I discuss Bell’s theorem and some strategies that attempt to avoid the conclusion of non-locality. I focus on two that intersect with the philosophy of probability: (1) quantum probabilities and (2) superdeterminism. The issues they raise not only apply to a wide class of no-go theorems about quantum mechanics but are also of general philosophical interest.
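For reference, the theorem discussed is standardly presented in its CHSH form (a textbook statement, included here for orientation): every local hidden-variable model satisfies

\[
\bigl|\,E(a,b) + E(a,b') + E(a',b) - E(a',b')\,\bigr| \;\le\; 2,
\]

where $E(a,b)$ is the expected product of the two outcomes for detector settings $a$ and $b$, while quantum mechanics predicts values up to $2\sqrt{2}$ for suitable settings on an entangled pair. The strategies surveyed each reject one of the theorem's premises rather than accept non-locality.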
When does opinion formation within an interacting group lead to consensus, polarization or fragmentation? The article investigates various models for the dynamics of continuous opinions by analytical methods as well as by computer simulations. Section 2 develops within a unified framework the classical model of consensus formation, the variant of this model due to Friedkin and Johnsen, a time-dependent version and a nonlinear version with bounded confidence of the agents. Section 3 presents for all these models major analytical results. Section 4 gives an extensive exploration of the nonlinear model with bounded confidence by a series of computer simulations. An appendix supplies needed mathematical definitions, tools, and theorems.
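A minimal sketch of the bounded-confidence update behind the nonlinear model of Section 4 (our illustrative parameters; the article's simulations vary them systematically): each agent repeatedly moves to the average opinion of all agents within confidence bound eps of its own.

    # Bounded-confidence opinion dynamics (Hegselmann-Krause-style update).
    # Illustrative parameters only.

    def step(opinions, eps):
        new = []
        for x in opinions:
            peers = [y for y in opinions if abs(y - x) <= eps]
            new.append(sum(peers) / len(peers))
        return new

    n = 21
    start = [i / (n - 1) for i in range(n)]      # evenly spread on [0, 1]
    for eps in (0.05, 0.15, 0.30):
        xs = start
        for _ in range(50):
            xs = step(xs, eps)
        clusters = sorted({round(x, 3) for x in xs})
        print(f'eps={eps}: {len(clusters)} cluster(s) at {clusters}')
    # Small eps leaves many opinion clusters (fragmentation); as eps grows
    # the profile collapses into fewer clusters and finally consensus.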
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Jury theorems are mathematical theorems about the ability of collectives to make correct decisions. Several jury theorems carry the optimistic message that, in suitable circumstances, ‘crowds are wise’: many individuals together (using, for instance, majority voting) tend to make good decisions, outperforming fewer or just one individual. Jury theorems form the technical core of epistemic arguments for democracy, and provide probabilistic tools for reasoning about the epistemic quality of collective decisions. The popularity of jury theorems spans across various disciplines such as economics, political science, philosophy, and computer science. This entry reviews and critically assesses a variety of jury theorems. It first discusses Condorcet's initial jury theorem, and then progressively introduces jury theorems with more appropriate premises and conclusions. It explains the philosophical foundations, and relates jury theorems to diversity, deliberation, shared evidence, shared perspectives, and other phenomena. It finally connects jury theorems to their historical background and to democratic theory, social epistemology, and social choice theory.
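The entry's starting point, Condorcet's classic theorem, is easy to check numerically: if n voters decide independently and each is correct with probability p > 1/2, the probability that the majority is correct increases with n and tends to 1. A minimal sketch:

    from math import comb

    # Probability that a majority of n independent voters is correct,
    # when each voter is correct with probability p (n odd).

    def majority_correct(n, p):
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 11, 101, 1001):
        print(n, round(majority_correct(n, 0.6), 4))
    # With p = 0.6 the printed probabilities climb from 0.6 toward 1.0:
    # the optimistic asymptotic conclusion whose premises (independence,
    # competence) the entry goes on to scrutinize.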
A look at the dynamical concept of space and space-generating processes to be found in Kant, J.F. Herbart and the mathematician Bernhard Riemann's philosophical writings.
To counter a general belief that all the paradoxes stem from a kind of circularity (or involve some self-reference, or use a diagonal argument), Stephen Yablo designed a paradox in 1993 that seemingly avoided self-reference. We turn Yablo's paradox, the most challenging paradox of recent years, into a genuine mathematical theorem in Linear Temporal Logic (LTL). Indeed, Yablo's paradox comes in several varieties; and he showed in 2004 that there are other versions that are equally paradoxical. Formalizing these versions of Yablo's paradox, we prove some theorems in LTL. This is the first time that Yablo's paradox(es) become new(ly discovered) theorems in mathematics and logic.
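A compressed sketch of the move from paradox to theorem (our rendering of the general idea, not necessarily the paper's exact formalization): the Yablo scheme $Y_n \leftrightarrow \forall k > n\,\neg Y_k$ can be read temporally, with "next" ($\bigcirc$) and "always" ($\Box$) supplying the quantifier over later stages:

\[
\Box\bigl(Y \leftrightarrow \bigcirc\Box\,\neg Y\bigr).
\]

This formula is unsatisfiable: if $Y$ held at some instant $t$, then $\neg Y$ would hold at all later instants, so at $t+1$ the right-hand side $\bigcirc\Box\neg Y$ would be true while $Y$ is false; and if $Y$ held at no instant, the right-hand side would be true everywhere while $Y$ is false everywhere. Either way the biconditional fails somewhere, so the negation $\neg\Box(Y \leftrightarrow \bigcirc\Box\neg Y)$ is a theorem of LTL: the paradox has become a theorem.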
We give a review and critique of jury theorems from a social-epistemology perspective, covering Condorcet’s (1785) classic theorem and several later refinements and departures. We assess the plausibility of the conclusions and premises featuring in jury theorems and evaluate the potential of such theorems to serve as formal arguments for the ‘wisdom of crowds’. In particular, we argue (i) that there is a fundamental tension between voters’ independence and voters’ competence, hence between the two premises of most jury theorems; (ii) that the (asymptotic) conclusion that ‘huge groups are infallible’, reached by many jury theorems, is an artifact of unjustified premises; and (iii) that the (nonasymptotic) conclusion that ‘larger groups are more reliable’, also reached by many jury theorems, is not an artifact and should be regarded as the more adequate formal rendition of the ‘wisdom of crowds’.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow’s theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow’s theorem, our main aim is to identify the analogue of Arrow’s theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
Any intermediate propositional logic can be extended to a calculus with epsilon- and tau-operators and critical formulas. For classical logic, this results in Hilbert’s $\varepsilon$-calculus. The first and second $\varepsilon$-theorems for classical logic establish conservativity of the $\varepsilon$-calculus over its classical base logic. It is well known that the second $\varepsilon$-theorem fails for the intuitionistic $\varepsilon$-calculus, as prenexation is impossible. The paper investigates the effect of adding critical $\varepsilon$- and $\tau$-formulas and using the translation of quantifiers into $\varepsilon$- and $\tau$-terms to intermediate logics. It is shown that conservativity over the propositional base logic also holds for such intermediate ${\varepsilon\tau}$-calculi. The “extended” first $\varepsilon$-theorem holds if the base logic is finite-valued Gödel–Dummett logic, and fails otherwise, but holds for certain provable formulas in infinite-valued Gödel logic. The second $\varepsilon$-theorem also holds for finite-valued first-order Gödel logics. The methods used to prove the extended first $\varepsilon$-theorem for infinite-valued Gödel logic suggest applications to theories of arithmetic.
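For readers new to the machinery, the critical formulas and the quantifier translation at issue have the standard shape (textbook definitions, stated here for orientation):

\[
A(t) \rightarrow A(\varepsilon_x A(x)), \qquad A(\tau_x A(x)) \rightarrow A(t),
\]
\[
\exists x\, A(x) \;:\equiv\; A(\varepsilon_x A(x)), \qquad \forall x\, A(x) \;:\equiv\; A(\tau_x A(x)).
\]

Here $\varepsilon_x A(x)$ names a witness for $A$ if one exists, and $\tau_x A(x)$ names a generic instance such that if $A$ holds of it, $A$ holds of everything. The first $\varepsilon$-theorem then asserts, for classical logic, that any quantifier-free formula derivable using these additions is already derivable without them; the paper asks when such conservativity survives in intermediate logics.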
The standard representation theorem for expected utility theory tells us that if a subject’s preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility. However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being always probabilistically coherent with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board—that is, agents who are frequently irrational. The agents in question may be deductively fallible, have incoherent credences, limited representational capacities, and fail to maximise expected utility for all but a limited class of gambles.
Amalgamating evidence of different kinds for the same hypothesis into an overall confirmation is analogous, I argue, to amalgamating individuals’ preferences into a group preference. The latter faces well-known impossibility theorems, most famously “Arrow’s Theorem”. Once the analogy between amalgamating evidence and amalgamating preferences is tight, it is obvious that amalgamating evidence might face a theorem similar to Arrow’s. I prove that this is so, and end by discussing the plausibility of the axioms required for the theorem.
In this paper a symmetry argument against quantity absolutism is amended. Rather than arguing against the fundamentality of intrinsic quantities on the basis of transformations of basic quantities, e.g. mass doubling, a class of symmetries defined by the Π-theorem are used. This theorem is a fundamental result of dimensional analysis and shows that all unit-invariant equations which adequately represent physical systems can be put into the form of a function of dimensionless quantities. Quantity transformations that leave those dimensionless quantities invariant are empirical and dynamical symmetries. The proposed symmetries of the original argument fail to be both dynamical and empirical symmetries and are open to counterexamples. The amendment of the original argument requires consideration of the relationships between quantity dimensions, particularly the constraint of dimensional homogeneity on our physical equations. The discussion raises a pertinent issue: what is the modal status of the constants of nature which figure in the laws? Two positions, constant necessitism and constant contingentism, are introduced and their relationships to absolutism and comparativism undergo preliminary investigation. It is argued that the absolutist can only reject the amended symmetry argument by accepting constant necessitism, which has a costly outcome: unit transformations are no longer symmetries.
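A standard textbook instance of the Π-theorem (our illustration, not the paper's example): for the simple pendulum with period $T$, length $l$, mass $m$, and gravitational acceleration $g$, the only independent dimensionless combination is

\[
\Pi = T\sqrt{g/l},
\]

so any unit-invariant law must take the form $f(\Pi) = 0$, i.e. $T = C\sqrt{l/g}$ for a dimensionless constant $C$; the mass $m$ drops out because no dimensionless combination contains it. Transformations that leave $\Pi$ invariant (e.g. $l \mapsto 4l$, $T \mapsto 2T$, $g \mapsto g$) are the kind of symmetries the amended argument employs.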
The previous two parts of the paper demonstrate that the interpretation of Fermat’s last theorem (FLT) in Hilbert arithmetic, meant both in a narrow sense and in a wide sense, can suggest a proof by induction in Part I and by means of the Kochen-Specker theorem in Part II. The same interpretation can also serve for a proof of FLT based on Gleason’s theorem and partly similar to that in Part II. The concept of (probabilistic) measure of a subspace of Hilbert space, and especially its uniqueness, can be unambiguously linked to that of partial algebra or incommensurability, or interpreted as a relation of the two dual branches of Hilbert arithmetic in a wide sense. The investigation of the last relation allows FLT and Gleason’s theorem to be equated in a sense, as two dual counterparts, and the former to be inferred from the latter, as well as vice versa, under an additional condition relevant to the Gödel incompleteness of arithmetic to set theory. The qubit Hilbert space itself in turn can be interpreted by the unity of FLT and Gleason’s theorem. The proof of such a fundamental result in number theory as FLT by means of Hilbert arithmetic in a wide sense can be generalized to an idea about “quantum number theory”, able to research mathematically the origin of Peano arithmetic from Hilbert arithmetic by mediation of the “nonstandard bijection” and its two dual branches inherently linking it to information theory. Then, infinitesimal analysis and its revolutionary application to physics can also be re-realized in that wider context, for example, as an exploration of the way for the physical quantity of time (respectively, for the time derivative in any temporal process considered in physics) to appear at all. Finally, the result admits a philosophical reflection on how any hierarchy arises or changes itself only thanks to its dual and idempotent counterpart.
The aggregation of individual judgments over interrelated propositions is a newly arising field of social choice theory. I introduce several independence conditions on judgment aggregation rules, each of which protects against a specific type of manipulation by agenda setters or voters. I derive impossibility theorems whereby these independence conditions are incompatible with certain minimal requirements. Unlike earlier impossibility results, the main result here holds for any (non-trivial) agenda. However, independence conditions arguably undermine the logical structure of judgment aggregation. I therefore suggest restricting independence to premises, which leads to a generalised premise-based procedure. This procedure is proven to be possible if the premises are logically independent.
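The motivation for premise-based procedures is the classic "discursive dilemma"; a minimal example (standard in this literature, not specific to the paper) with premises $p$, $q$ and conclusion $p \wedge q$:

\[
\begin{array}{l|ccc}
 & p & q & p \wedge q \\
\hline
\text{Judge 1} & \text{yes} & \text{yes} & \text{yes} \\
\text{Judge 2} & \text{yes} & \text{no} & \text{no} \\
\text{Judge 3} & \text{no} & \text{yes} & \text{no} \\
\hline
\text{Majority} & \text{yes} & \text{yes} & \text{no}
\end{array}
\]

Propositionwise majority voting accepts $p$, accepts $q$, yet rejects $p \wedge q$: a logically inconsistent collective judgment set. The premise-based procedure votes only on $p$ and $q$ and lets logic settle the conclusion, which yields a consistent outcome precisely when the premises are logically independent, as the paper's possibility result requires.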
This paper begins with a puzzle regarding Lewis' theory of radical interpretation. On the one hand, Lewis convincingly argued that the facts about an agent's sensory evidence and choices will always underdetermine the facts about her beliefs and desires. On the other hand, we have several representation theorems—such as those of (Ramsey 1931) and (Savage 1954)—that are widely taken to show that if an agent's choices satisfy certain constraints, then those choices can suffice to determine her beliefs and desires. In this paper, I will argue that Lewis' conclusion is correct: choices radically underdetermine beliefs and desires, and representation theorems provide us with no good reasons to think otherwise. Any tension with those theorems is merely apparent, and relates ultimately to the difference between how 'choices' are understood within Lewis' theory and the problematic way that they're represented in the context of the representation theorems. For the purposes of radical interpretation, representation theorems like Ramsey's and Savage's just aren't very relevant after all.
A proof of Fermat’s last theorem is demonstrated. It is very brief, simple, elementary, and absolutely arithmetical. The necessary premises for the proof are only: the three definitive properties of the relation of equality (identity, symmetry, and transitivity), modus tollens, axiom of induction, the proof of Fermat’s last theorem in the case of.
Pettit (2012) presents a model of popular control over government, according to which it consists in the government being subject to those policy-making norms that everyone accepts. In this paper, I provide a formal statement of this interpretation of popular control, which illuminates its relationship to other interpretations of the idea with which it is easily conflated, and which gives rise to a theorem, similar to the famous Gibbard-Satterthwaite theorem. The theorem states that if government policy is subject to popular control, as Pettit interprets it, and policy responds positively to changes in citizens' normative attitudes, then there is a single individual whose normative attitudes unilaterally determine policy. I use the model and theorem as an illustrative example to discuss the role of mathematics in normative political theory.
Peer review is often taken to be the main form of quality control on academic research. Usually journals carry this out. However, parts of maths and physics appear to have a parallel, crowd-sourced model of peer review, where papers are posted on the arXiv to be publicly discussed. In this paper we argue that crowd-sourced peer review is likely to do better than journal-solicited peer review at sorting papers by quality. Our argument rests on two key claims. First, crowd-sourced peer review will lead on average to more reviewers per paper than journal-solicited peer review. Second, due to the wisdom of the crowds, more reviewers will tend to make better judgments than fewer. We make the second claim precise by looking at the Condorcet Jury Theorem as well as two related jury theorems developed specifically to apply to peer review.
This paper generalises the classical Condorcet jury theorem from majority voting over two options to plurality voting over multiple options. The paper further discusses the debate between epistemic and procedural democracy and situates its formal results in that debate. The paper finally compares a number of different social choice procedures for many-option choices in terms of their epistemic merits. An appendix explores the implications of some of the present mathematical results for the question of how probable majority cycles (as in Condorcet's paradox) are in large electorates.
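A minimal simulation of the generalized setting (illustrative parameters of our choosing; the paper's results are analytic, not simulated): with k > 2 options, each voter picks the correct option with probability p and otherwise errs uniformly, and the plurality winner tends to be correct as the electorate grows, even when p is well below 1/2, provided p exceeds each single error probability.

    import random
    from collections import Counter

    # Plurality-voting jury simulation: k options, correct option 0.
    # Each voter is right with probability p, else errs uniformly.

    def plurality_correct(n_voters, k, p, trials=2000, seed=1):
        rng = random.Random(seed)
        wins = 0
        for _ in range(trials):
            votes = [0 if rng.random() < p else rng.randrange(1, k)
                     for _ in range(n_voters)]
            winner, _ = Counter(votes).most_common(1)[0]
            wins += (winner == 0)
        return wins / trials

    for n in (11, 101, 1001):
        print(n, plurality_correct(n, k=4, p=0.35))
    # With p = 0.35 > (1 - p)/(k - 1) ≈ 0.217, the correct option need not
    # command a majority, yet its plurality win probability rises toward 1.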
This paper deals with propositional calculi with strong negation (N-logics) in which the Craig interpolation theorem holds. N-logics are defined to be axiomatic strengthenings of the intuitionistic calculus enriched with a unary connective called strong negation. There exists a continuum of N-logics, but the Craig interpolation theorem holds in only 14 of them.
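For reference, the property in question (the standard definition): a logic $L$ has the Craig interpolation property when every provable implication has an interpolant in the shared vocabulary, i.e.

\[
\vdash_L A \rightarrow B \quad\Longrightarrow\quad \text{there is } C \text{ with } \vdash_L A \rightarrow C \text{ and } \vdash_L C \rightarrow B,
\]

where $C$ contains only propositional variables occurring in both $A$ and $B$. The paper's striking result is that, of the continuum-many N-logics, exactly 14 enjoy this property.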
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
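The core of Lewis's original triviality result, in its simplest form (a compressed textbook sketch; the paper's point is that even weaker assumptions suffice): suppose $P(A \rightarrow B) = P(B \mid A)$ holds for every probability function $P$ in a class closed under conditionalization, and that $P(A \wedge B) > 0$ and $P(A \wedge \neg B) > 0$. Expanding by total probability and applying the hypothesis to the functions conditionalized on $B$ and on $\neg B$,

\[
P(A \rightarrow B) = P(A \rightarrow B \mid B)\,P(B) + P(A \rightarrow B \mid \neg B)\,P(\neg B)
= P(B \mid A \wedge B)\,P(B) + P(B \mid A \wedge \neg B)\,P(\neg B) = P(B),
\]

so $P(B \mid A) = P(B)$: the probability of the conditional collapses into unconditional independence for all such $A$ and $B$, which is absurd.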
Our conscious minds exist in the Universe; therefore they should be identified with physical states that are subject to physical laws. In classical theories of mind, the mental states are identified with brain states that satisfy the deterministic laws of classical mechanics. This approach, however, leads to insurmountable paradoxes such as epiphenomenal minds and illusionary free will. Alternatively, one may identify mental states with quantum states realized within the brain and try to resolve the above paradoxes using the standard Hilbert space formalism of quantum mechanics. In this essay, we first show that the identification of mind states with quantum states within the brain is biologically feasible, and then, elaborating on the mathematical proofs of two quantum mechanical no-go theorems, we explain why quantum theory might have profound implications for the scientific understanding of one's mental states, self-identity, beliefs and free will.