Standard approaches to proper names, based on Kripke's views, hold that the semantic values of expressions are (set-theoretic) functions from possible worlds to extensions and that names are rigid designators, i.e.\ that their values are \emph{constant} functions from worlds to entities. The difficulties with these approaches are well-known and in this paper we develop an alternative. Based on earlier work on a higher order logic that is \emph{truly intensional} in the sense that it does not validate the axiom scheme of Extensionality, we develop a simple theory of names in which Kripke's intuitions concerning rigidity are accounted for, but the more unpalatable consequences of standard implementations of his theory are avoided. The logic uses Frege's distinction between sense and reference and while it accepts the rigidity of names it rejects the view that names have direct reference. Names have constant denotations across possible worlds, but the semantic value of a name is not determined by its denotation.
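For orientation, rigidity in the possible-worlds setting can be displayed in a line (a standard formulation, not this paper's notation): a rigid name $n$ denoting $d$ has as its value the constant function $$\|n\| = \lambda w.\,d, \qquad \text{i.e.}\;\; \|n\|(w) = d \;\text{ for every world } w,$$ whereas a non-rigid description may take different values at different worlds. The theory sketched above keeps this constancy of denotation while denying that the constant function exhausts the name's semantic value.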
The naive theory of properties states that for every condition there is a property instantiated by exactly the things which satisfy that condition. The naive theory of properties is inconsistent in classical logic, but there are many ways to obtain consistent naive theories of properties in nonclassical logics. The naive theory of classes adds to the naive theory of properties an extensionality rule or axiom, which states roughly that if two classes have exactly the same members, they are identical. In this paper we examine the prospects for obtaining a satisfactory naive theory of classes. We start from a result by Ross Brady, which demonstrates the consistency of something resembling a naive theory of classes. We generalize Brady’s result somewhat and extend it to a recent system developed by Andrew Bacon. All of the theories we prove consistent contain an extensionality rule or axiom. But we argue that given the background logics, the relevant extensionality principles are too weak. For example, in some of these theories, there are universal classes which are not declared coextensive. We elucidate some very modest demands on extensionality, designed to rule out this kind of pathology. But we close by proving that even these modest demands cannot be jointly satisfied. In light of this new impossibility result, the prospects for a naive theory of classes are bleak.
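In standard notation (a textbook rendering, used only to fix terminology), the naive theory combines a comprehension scheme with an extensionality rule: $$\exists y\,\forall x\,(x \in y \leftrightarrow \varphi(x)) \qquad\qquad \frac{\forall x\,(x \in a \leftrightarrow x \in b)}{a = b}$$ Classically, taking $\varphi(x)$ to be $x \notin x$ in the scheme yields Russell's paradox; the nonclassical theories discussed above block that derivation while trying to keep both principles.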
I focus on three mereological principles: the Extensionality of Parthood (EP), the Uniqueness of Composition (UC), and the Extensionality of Composition (EC). These principles are not equivalent. Nonetheless, they are closely related (and often equated) as they all reflect the basic nominalistic dictum, No difference without a difference maker. And each one of them—individually or collectively—has been challenged on philosophical grounds. In the first part I argue that such challenges do not quite threaten EP insofar as they are either self-defeating or unsupported. In the second part I argue that they hardly undermine the tenability of EC and UC as well.
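In one natural regimentation (not necessarily the paper's own notation), with $\ll$ for proper parthood, EP says that composites with the same proper parts are identical: $$\forall x\,\forall y\,\big((\exists z\,(z \ll x) \wedge \exists z\,(z \ll y) \wedge \forall z\,(z \ll x \leftrightarrow z \ll y)) \rightarrow x = y\big),$$ while UC says that things composed of the same things are identical; in sufficiently weak mereologies the principles can come apart, which is why the paper treats them separately.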
In this paper we define intensional models for the classical theory of types, thus arriving at an intensional type logic ITL. Intensional models generalize Henkin's general models and have a natural definition. As a class they do not validate the axiom of Extensionality. We give a cut-free sequent calculus for type theory and show completeness of this calculus with respect to the class of intensional models via a model existence theorem. After this we turn our attention to applications. Firstly, it is argued that, since ITL is truly intensional, it can be used to model ascriptions of propositional attitude without predicting logical omniscience. In order to illustrate this a small fragment of English is defined and provided with an ITL semantics. Secondly, it is shown that ITL models contain certain objects that can be identified with possible worlds. Essential elements of modal logic become available within classical type theory once the axiom of Extensionality is given up.
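The axiom given up is the usual Extensionality scheme of classical type theory (standard formulation): $$\forall f_{\alpha\beta}\,\forall g_{\alpha\beta}\,\big(\forall x_{\beta}\,(fx = gx) \rightarrow f = g\big),$$ together with its propositional analogue $(p \leftrightarrow q) \rightarrow p = q$. In intensional models, functions or propositions that are everywhere equivalent may still be distinct objects, which is what blocks the inference from attitude ascriptions to logical omniscience.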
In the early 1900s, Russell began to recognize that he, and many other mathematicians, had been using assertions like the Axiom of Choice implicitly, and without explicitly proving them. In working with the Axioms of Choice, Infinity, and Reducibility, and his and Whitehead’s Multiplicative Axiom, Russell came to take the position that some axioms are necessary to recovering certain results of mathematics, but may not be proven to be true absolutely. The essay traces historical roots of, and motivations for, Russell’s method of analysis, which are intended to shed light on his view about the status of mathematical axioms. I describe the position Russell develops in consequence as “immanent logicism,” in contrast to what Irving (1989) describes as “epistemic logicism.” Immanent logicism allows Russell to avoid the logocentric predicament, and to propose a method for discovering structural relationships of dependence within mathematical theories.
The Principle of Ariadne, formulated in 1988 by Walter Carnielli and Carlos Di Prisco and later published in 1993, is an infinitary principle that is independent of the Axiom of Choice in ZF, although it can be consistently added to the remaining ZF axioms. The present paper surveys, and motivates, the foundational importance of the Principle of Ariadne and proposes the Ariadne Game, showing that the Principle of Ariadne corresponds precisely to a winning strategy for the Ariadne Game. Some relations to other alternative set-theoretical principles are also briefly discussed.
The aim of this paper is to show that every topological space gives rise to a wealth of topological models of the modal logic S4.1. The construction of these models is based on the fact that every space defines a Boolean closure algebra (to be called a McKinsey algebra) that neatly reflects the structure of the modal system S4.1. It is shown that the class of topological models based on McKinsey algebras contains a canonical model that can be used to prove a completeness theorem for S4.1. Further, it is shown that the McKinsey algebra MKX of a space X endowed with an alpha-topology satisfies Esakia's GRZ axiom.
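For reference, the relevant axioms in their standard formulations: S4.1 is S4 plus the McKinsey axiom (M), and the GRZ axiom mentioned at the end is Grzegorczyk's: $$\text{(M)}\;\; \Box\Diamond A \rightarrow \Diamond\Box A \qquad\qquad \text{(GRZ)}\;\; \Box(\Box(A \rightarrow \Box A) \rightarrow A) \rightarrow A$$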
We examine some of Connes’ criticisms of Robinson’s infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes’ own earlier work in functional analysis. Connes described the hyperreals as both a “virtual theory” and a “chimera”, yet acknowledged that his argument relies on the transfer principle. We analyze Connes’ “dart-throwing” thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are Lebesgue measurable, suggesting that Connes views a theory as being “virtual” if it is not definable in a suitable model of ZFC. If so, Connes’ claim that a theory of the hyperreals is “virtual” is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters aren’t definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 80s, and in Noncommutative Geometry, raising the question whether the latter may not be vulnerable to Connes’ criticism of virtuality. We analyze the philosophical underpinnings of Connes’ argument based on Gödel’s incompleteness theorem, and detect an apparent circularity in Connes’ logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace −∫ (featured on the front cover of Connes’ magnum opus) and the Hahn–Banach theorem, in Connes’ own framework. We also note an inaccuracy in Machover’s critique of infinitesimal-based pedagogy.
The principle of maximal entropy (further abbreviated as “MaxEnt”) can be founded on the formal mechanism by which the future transforms into the past through the mediation of the present. This allows MaxEnt to be investigated by the theory of quantum information. MaxEnt can be considered an inductive analog or generalization of “Occam’s razor”. It depends crucially on choice, and thus on information, just as all inductive methods of reasoning do. The essence shared by Occam’s razor and MaxEnt is that the relevant data known so far are postulated as a sufficient fundament of the conclusion. That axiom is the kind of choice grounding both principles. Popper’s falsifiability (1935) can be discussed as a complement to them: the axiom (or axiom scheme) is always a sufficient but never a necessary condition of the conclusion, thereby postulating the choice at the base of MaxEnt. Furthermore, the abstraction axiom (or axiom scheme) of set theory (e.g. the axiom scheme of specification in ZFC) involves choice analogously.
A principle according to which any scientific theory can be mathematized is investigated. The theory is presupposed to be a consistent text which can be exhaustively represented by a certain mathematical structure constructively. As thus used, the term “theory” includes all hypotheses, whether as yet unconfirmed or already rejected. Investigation of the sketch of a possible proof of the principle demonstrates that it should rather be accepted as a metamathematical axiom about the relation of mathematics and reality. Its investigation needs philosophical means. Husserl’s phenomenology is what is used, and the conception of “bracketing reality” is then modelled in order to generalize Peano arithmetic in its relation to set theory in the foundation of mathematics. The obtained model is equivalent to the generalization of Peano arithmetic by means of replacing the axiom of induction with that of transfinite induction. A comparison with Mach’s doctrine is used to reveal the fundamental philosophical reductionism of Husserl’s phenomenology, leading to a kind of Pythagoreanism in the final analysis. Accepting or rejecting the principle, two kinds of mathematics appear, differing from each other by their relation to reality. Accepting the principle, mathematics has to include reality within itself in a kind of Pythagoreanism. These two kinds are called in the paper, correspondingly, Hilbert mathematics and Gödel mathematics. The sketch of the proof of the principle demonstrates that the generalization of Peano arithmetic described above can be interpreted as a model of Hilbert mathematics within Gödel mathematics, thereby showing that the former is not less consistent than the latter, and that the principle is an independent axiom. An information interpretation of Hilbert mathematics is involved: it is a kind of ontology of information. The problem of which of the two mathematics is more relevant to our being is then discussed. An information interpretation of the Schrödinger equation is invoked to illustrate that problem.
Peano arithmetic cannot serve as the ground of mathematics, for it is inconsistent to infinity, and infinity is necessary for its foundation. Though Peano arithmetic cannot be complemented by any axiom of infinity, there exists at least one (logical) axiomatics consistent to infinity. What is meant here is nothing else than a new reading and comparative interpretation of Gödel’s papers (1930; 1931). Peano arithmetic anyway admits generalizations consistent to infinity and thus to some addable axiom(s) of infinity. The most utilized example of those generalizations is the complex Hilbert space. Any generalization of Peano arithmetic consistent to infinity, e.g. the complex Hilbert space, can serve as a foundation for mathematics to found itself and by itself.
The quantum information introduced by quantum mechanics is equivalent to the generalization of classical information from finite to infinite series or collections. The quantity of information is the quantity of choices measured in units of elementary choice. The qubit can be interpreted as that generalization of the bit which is a choice among a continuum of alternatives. The axiom of choice is necessary for quantum information. The coherent state is transformed into a well-ordered series of results in time after measurement. The quantity of quantum information is the ordinal corresponding to the infinite series in question.
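In standard notation (a textbook formulation), a qubit is a normed superposition of two basis states, $$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad \alpha, \beta \in \mathbb{C}, \quad |\alpha|^2 + |\beta|^2 = 1,$$ so that, unlike a bit, it ranges over a continuum of alternatives; this is the sense in which the qubit is called above a choice among a continuum.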
C. I. Lewis (1883–1964) was the first major figure in history and philosophy of logic—a field that has come to be recognized as a separate specialty after years of work by Ivor Grattan-Guinness and others (Dawson 2003, 257). Lewis was among the earliest to accept the challenges offered by this field; he was the first who had the philosophical and mathematical talent, the philosophical, logical, and historical background, and the patience and dedication to objectivity needed to excel. He was blessed with many fortunate circumstances, not least of which was entering the field when mathematical logic, after only six decades of toil, had just reaped one of its most important harvests with publication of the monumental Principia Mathematica. It was a time of joyful optimism which demanded an historical account and a sober philosophical critique. Lewis was one of the first to apply to mathematical logic the Aristotelian dictum that we do not understand a living institution until we see it growing from its birth.
Suppose there is a domain of discourse of English; then everything of which any predicate is true is a member of that domain. If English has a domain of discourse, then, since ‘is a domain of discourse of English’ is itself a predicate of English and true of that domain, that domain is a member of itself. But nothing is a member of itself. Thus English has no domain of discourse. We defend this argument and go on to argue to the same conclusion without relying on the supposition that English is a language which contains the predicate ‘is a domain of discourse of English’.
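Schematically, the opening argument runs as follows (one natural regimentation of the text above): let $D$ be a domain of discourse of English; (1) everything any English predicate is true of is a member of $D$; (2) 'is a domain of discourse of English' is an English predicate true of $D$, so $D \in D$; (3) nothing is a member of itself, so $D \notin D$; contradiction, hence there is no such $D$.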
The paper contains an overview of the most important results presented in the author's monograph "Teorie Języków Syntaktycznie-Kategorialnych" ("Theories of Syntactically-Categorial Languages", in Polish), PWN, Warszawa-Wrocław 1985. In the monograph four axiomatic systems of syntactically-categorial languages are presented. The first two refer to languages of expression-tokens; the others also take into consideration languages of expression-types. Generally, syntactically-categorial languages are languages built in accordance with principles of the theory of syntactic categories introduced by S. Leśniewski [1929, 1930]; they are connected with Ajdukiewicz’s work [1935], which was a continuation of Leśniewski’s idea, further developed and popularized in the research on categorial grammars by Y. Bar-Hillel [1950, 1953, 1964]. The main idea of the syntactically-categorial approach to language is to assign a suitable syntactic category to each word of the vocabulary. Compound expressions are built from the words of the vocabulary, and a suitable syntactic category is then assigned to each of them. A language built in this way should be decidable, which means that there should exist an algorithm deciding, for each expression of it, whether it is well-formed, i.e. syntactically connected. The traditional understanding of the syntactic category, originating from Husserl, confronts some difficulties. This notion is defined by abstraction, using the concept of affiliation of two expressions to the same syntactic category.
The link between higher-order metaphysics and abstraction, on the one hand, and choice in the foundation of set theory, on the other, can distinguish unambiguously the “good” principles of abstraction from the “bad” ones and thus resolve the “bad company problem” as it arises for set theory. It correspondingly implies a more precise definition of the relation between the axiom of choice and the “all company” of axioms in set theory concerning abstraction directly or indirectly: the principle of abstraction, the axiom of comprehension, the axiom scheme of specification, the axiom scheme of separation, the subset axiom scheme, the axiom scheme of replacement, the axiom of unrestricted comprehension, the axiom of extensionality, etc.
Arthur C. Clarke and Michael Kube-McDowell (“The Trigger”, 2000) suggested the sci-fi idea of the direct transformation of one chemical substance into another by the action of a new physical “Trigger” field. Karl Brohier, a Nobel Prize winner who is a dramatis persona in the novel, elaborates a new theory, re-reading and re-writing Pauling’s “The Nature of the Chemical Bond”; according to Brohier: “Information organizes and differentiates energy. It regularizes and stabilizes matter. Information propagates through matter-energy and mediates the interactions of matter-energy.” Dr Horton, his collaborator in the novel, replies: “If the universe consists of energy and information, then the Trigger somehow alters the information envelope of certain substances –“. “Alters it, scrambles it, overwhelms it, destabilizes it,” Brohier adds. There is a scientific debate whether, or how far, chemistry is fundamentally reducible to quantum mechanics. Nevertheless, the fact that many essential chemical properties and reactions are at least partly representable in terms of quantum mechanics is doubtless. Since quantum mechanics itself has been reformulated as a theory of a special kind of information, quantum information, chemistry might in turn be interpreted in the same terms. The wave function, the fundamental concept of quantum mechanics, can be equivalently defined as a series of qubits, possibly infinite. A qubit, defined as the normed superposition of two orthogonal subspaces of the complex Hilbert space, can be interpreted as a generalization of the standard bit of information to infinite sets or series. All “forces” in the Standard Model, which are furthermore essential for chemical transformations, are groups [U(1), SU(2), SU(3)] of transformations of the complex Hilbert space and thus of series of qubits. One can suggest that any chemical substances and changes are fundamentally representable as quantum information and its transformations. If entanglement is interpreted as a physical field, though no group above seems attachable to it, it might be identified as the “Trigger field”. It might cause a direct transformation of any chemical substance from a remote distance. Is this possible in principle?
Much of the ontology done in the analytic tradition of philosophy nowadays is founded on some of Quine’s proposals. His naturalism and the binding between existence and quantification are, respectively, two of his very influential metaphilosophical and methodological theses. Nevertheless, many of his specific claims are quite controversial and today have few followers. Some of them are: (a) his rejection of higher-order logic; (b) his resistance to accepting the intensionality of ontological commitments; (c) his rejection of first-order modal logic; and (d) his rejection of the distinction between analytic and synthetic statements. I intend to argue that these controversial negative claims are just interconnected consequences of those much more accepted and apparently less harmful metaphilosophical and methodological theses, and that the glue linking all these consequences to their causes is the notion of extensionality.
We discuss the philosophical implications of formal results showing the consequences of adding the epsilon operator to intuitionistic predicate logic. These results are related to Diaconescu’s theorem, a result originating in topos theory that, translated to constructive set theory, says that the axiom of choice (an “existence principle”) implies the law of excluded middle (which purports to be a logical principle). As a logical choice principle, epsilon allows us to translate that result to a logical setting, where one can get an analogue of Diaconescu’s result, but also can disentangle the roles of certain other assumptions that are hidden in mathematical presentations. It is our view that these results have not received the attention they deserve: logicians are unlikely to read a discussion because the results considered are “already well known,” while the results are simultaneously unknown to philosophers who do not specialize in what most philosophers will regard as esoteric logics. This is a problem, since these results have important implications for, and promise significant illumination of, contemporary debates in metaphysics. The point of this paper is to make the nature of the results clear in a way accessible to philosophers who do not specialize in logic, and in a way that makes clear their implications for contemporary philosophical discussions. To make the latter point, we will focus on Dummettian discussions of realism and anti-realism. Keywords: epsilon, axiom of choice, metaphysics, intuitionistic logic, Dummett, realism, antirealism.
We show how removing faith-based beliefs in current philosophies of classical and constructive mathematics admits formal, evidence-based, definitions of constructive mathematics; of a constructively well-defined logic of a formal mathematical language; and of a constructively well-defined model of such a language.

We argue that, from an evidence-based perspective, classical approaches which follow Hilbert's formal definitions of quantification can be labelled `theistic'; whilst constructive approaches based on Brouwer's philosophy of Intuitionism can be labelled `atheistic'.

We then adopt what may be labelled a finitary, evidence-based, `agnostic' perspective and argue that Brouwerian atheism is merely a restricted perspective within the finitary agnostic perspective, whilst Hilbertian theism contradicts the finitary agnostic perspective.

We then consider the argument that Tarski's classic definitions permit an intelligence—whether human or mechanistic—to admit finitary, evidence-based, definitions of the satisfaction and truth of the atomic formulas of the first-order Peano Arithmetic PA over the domain N of the natural numbers in two, hitherto unsuspected and essentially different, ways.

We show that the two definitions correspond to two distinctly different—not necessarily evidence-based but complementary—assignments of satisfaction and truth to the compound formulas of PA over N.

We further show that the PA axioms are true over N, and that the PA rules of inference preserve truth over N, under both of the complementary interpretations; and conclude some unsuspected constructive consequences of such complementarity for the foundations of mathematics, logic, philosophy, and the physical sciences.
The syllogistic figures and moods can be taken to be argument schemata, as can the rules of the Stoic propositional logic. Sentence schemata have been used in axiomatizations of logic only since the landmark 1927 von Neumann paper [31]. Modern philosophers know the role of schemata in explications of the semantic conception of truth through Tarski’s 1933 Convention T [42]. Mathematical logicians recognize the role of schemata in first-order number theory where Peano’s second-order Induction Axiom is approximated by Herbrand’s Induction-Axiom Schema [23]. Similarly, in first-order set theory, Zermelo’s second-order Separation Axiom is approximated by Fraenkel’s first-order Separation Schema [17]. In some of several closely related senses, a schema is a complex system having multiple components, one of which is a template-text or scheme-template, a syntactic string composed of one or more “blanks” and also possibly significant words and/or symbols. In accordance with a side condition the template-text of a schema is used as a “template” to specify a multitude, often infinite, of linguistic expressions such as phrases, sentences, or argument-texts, called instances of the schema. The side condition is a second component. The collection of instances may but need not be regarded as a third component. The instances are almost always considered to come from a previously identified language (whether formal or natural), which is often considered to be another component. This article reviews the often-conflicting uses of the expressions ‘schema’ and ‘scheme’ in the literature of logic. It discusses the different definitions presupposed by those uses. And it examines the ontological and epistemic presuppositions circumvented or mooted by the use of schemata, as well as the ontological and epistemic presuppositions engendered by their use. In short, this paper is an introduction to the history and philosophy of schemata.
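The two approximations mentioned can be displayed side by side (standard formulations). Peano's second-order Induction Axiom quantifies over all properties: $$\forall P\,\big((P0 \wedge \forall n\,(Pn \rightarrow P(n+1))) \rightarrow \forall n\,Pn\big).$$ The first-order schema replaces the property variable with a template containing a blank, yielding one instance $$(\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(n+1))) \rightarrow \forall n\,\varphi(n)$$ for each first-order formula $\varphi$: an infinite collection of axioms specified by a single template-text plus a side condition.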
Formalizing Euclid’s first axiom. Bulletin of Symbolic Logic 20 (2014) 404–5. (Coauthor: Daniel Novotný)

Euclid [fl. 300 BCE] divides his basic principles into what came to be called ‘postulates’ and ‘axioms’—two words that are synonyms today but which are commonly used to translate Greek words meant by Euclid as contrasting terms.

Euclid’s postulates are specifically geometric: they concern geometric magnitudes, shapes, figures, etc.—nothing else. The first: “to draw a line from any point to any point”; the last: the parallel postulate.

Euclid’s axioms are general principles of magnitude: they concern geometric magnitudes and magnitudes of other kinds as well, even numbers. The first is often translated “Things that equal the same thing equal one another”.

There are other differences that are or might become important.

Aristotle [fl. 350 BCE] meticulously separated his basic principles [archai, singular archê] according to subject matter: geometrical, arithmetic, astronomical, etc. However, he made no distinction that can be assimilated to Euclid’s postulate/axiom distinction.

Today we divide basic principles into non-logical [topic-specific] and logical [topic-neutral], but this division too is not the same as Euclid’s. In this regard it is important to be cognizant of the difference between equality and identity—a distinction often crudely ignored by modern logicians. Tarski is a rare exception. The four angles of a rectangle are equal to—not identical to—one another; the size of one angle of a rectangle is identical to the size of any other of its angles. No two angles are identical to each other.

The sentence ‘Things that equal the same thing equal one another’ contains no occurrence of the word ‘magnitude’. This paper considers the problem of formalizing the proposition Euclid intended as a principle of magnitudes while being faithful to the logical form and to its information content.
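A first pass at the formalization problem (one natural reading, not the paper's settled proposal): rendered with unrestricted variables and identity, the axiom becomes $$\forall x\,\forall y\,\forall z\,((x = z \wedge y = z) \rightarrow x = y),$$ a logical truth about identity that loses Euclid's content; rendered with a predicate $M$ ('is a magnitude') and an equality relation $\approx$ distinct from identity, $$\forall x\,\forall y\,\forall z\,((Mx \wedge My \wedge Mz \wedge x \approx z \wedge y \approx z) \rightarrow x \approx y),$$ it preserves the intended information but imports a word, 'magnitude', that the original sentence does not contain. The paper's problem is to negotiate exactly this trade-off.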
The Pigou-Dalton (PD) principle recommends a non-leaky, non-rank-switching transfer of goods from someone with more goods to someone with less. This Article defends the PD principle as an aspect of distributive justice—enabling the comparison of two distributions, neither completely equal, as more or less just. It shows how the PD principle flows from a particular view, adumbrated by Thomas Nagel, about the grounding of distributive justice in individuals’ “claims.” And it criticizes two competing frameworks for thinking about justice that less clearly support the principle: the veil-of-ignorance framework, and Larry Temkin’s proposal that fairer distributions are those concerning which individuals have fewer “complaints.”

The Article also clarifies the relation between the PD principle and prioritarianism. Prioritarians will surely endorse the PD principle (with the “good” being individual well-being), but they are also committed to a distinct axiom of separability: the moral value of someone’s well-being change does not depend upon her position relative to others. The PD principle neither implies separability, nor is implied by it. Although prioritarianism is very plausible, the case for the PD principle is yet more compelling than for the combination of that principle with separability. In discussing prioritarianism, we should differentiate between these two logically independent aspects of the view.
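A toy example makes the principle concrete (illustrative numbers only): from the two-person distribution $(10, 2)$, transferring 3 units from the better-off to the worse-off person yields $(7, 5)$; the transfer is non-leaky (the total of 12 is preserved) and non-rank-switching ($7 > 5$), so the PD principle ranks $(7, 5)$ as more just than $(10, 2)$. A prioritarian agrees, but on separability grounds: the gain to the worse-off person carries more moral weight than the equal-sized loss to the better-off person, regardless of their relative positions.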
The cognition of quantum processes raises a series of questions about ordering and information connecting the states of one and the same system before and after measurement. Quantum measurement, quantum in-variance, and the non-locality of quantum information are considered in the paper from an epistemological viewpoint. An adequate generalization of ‘measurement’ is discussed, involving the discrepancy, due to the fundamental Planck constant, between any quantum coherent state and its representation as a statistical ensemble after measurement. Quantum in-variance designates the relation of any quantum coherent state to the corresponding statistical ensemble of measured results. A set-theoretic corollary is a curious in-variance with respect to the axiom of choice: any coherent state excludes any well-ordering and thus excludes the axiom of choice; however, the above equivalence requires the coherent state to be equated to a well-ordered set after measurement, and thus requires the axiom of choice for that set to be obtainable. Quantum in-variance underlies quantum information and reveals it as the relation between an unordered quantum “much” (i.e. a coherent state) and a well-ordered “many” of measured results (i.e. a statistical ensemble). It opens up a new horizon, in which all physical processes and phenomena can be interpreted as quantum computations realizing relevant operations and algorithms on quantum information. All phenomena of entanglement can be described in terms of the so-defined quantum information. Quantum in-variance elucidates the link between general relativity and quantum mechanics and thus the problem of quantum gravity. The non-locality of quantum information unifies the exact position of any space-time point of a smooth trajectory and the common possibility of all space-time points due to a quantum leap. This is deduced from quantum in-variance. Epistemology involves the relation of ordering, and thus a generalized kind of information, the quantum kind, to explain the special features of cognition in quantum mechanics.
The text is a continuation of the article of the same name published in the previous issue of Philosophical Alternatives. The philosophical interpretations of the Kochen-Specker theorem (1967) are considered. Einstein's principle regarding the “consubstantiality of inertia and gravity” (1918) allows of a parallel between descriptions of a physical micro-entity in relation to the macro-apparatus, on the one hand, and of physical macro-entities in relation to astronomical mega-entities, on the other. The Bohmian interpretation (1952) of quantum mechanics proposes that all quantum systems be interpreted as dissipative ones and that the theorem be thus understood. The conclusion is that the continual representation of a system, by force or (gravitational) field between parts interacting by means of it, is equivalent to the mutual entanglement of those parts if the representation is discrete. Gravity (force field) and entanglement are two different, correspondingly continual and discrete, images of a single common essence. General relativity can be interpreted as a superluminal generalization of special relativity. There exists a postulate, in science and philosophy, of an alleged obligatory difference between a model and reality; it can also be deduced by interpreting a corollary of the theorem. On the other hand, quantum mechanics, on the basis of this theorem and of von Neumann's (1932), introduces the option that a model be entirely identified with the modeled reality and, therefore, that reality be recognized absolutely: this is a non-standard hypothesis in the epistemology of science. Thus true reality begins to be understood mathematically, i.e. in a Pythagorean manner, through its identification with its mathematical model. A few linked problems are highlighted: the role of the axiom of choice for correctly interpreting the theorem; whether the theorem can be considered an axiom; and whether the theorem can be considered equivalent to the negation of the axiom.
DEFINING OUR TERMS A “paradox" is an argumentation that appears to deduce a conclusion believed to be false from premises believed to be true. An “inconsistency proof for a theory" is an argumentation that actually deduces a negation of a theorem of the theory from premises that are all theorems of the theory. An “indirect proof of the negation of a hypothesis" is an argumentation that actually deduces a conclusion known to be false from the hypothesis alone or, more commonly, from the hypothesis augmented by a set of premises known to be true. A “direct proof of a hypothesis" is an argumentation that actually deduces the hypothesis itself from premises known to be true. Since `appears', `believes' and `knows' all make elliptical reference to a participant, it is clear that `paradox', `indirect proof' and `direct proof' are all participant-relative. PARTICIPANT RELATIVITY In normal mathematical writing the participant is presumed to be “the community of mathematicians" or some more or less well-defined subcommunity and, therefore, omission of explicit reference to the participant is often warranted. However, in historical, critical, or philosophical writing focused on emerging branches of mathematics such omission often invites confusion. One and the same argumentation has been a paradox for one mathematician, an inconsistency proof for another, and an indirect proof to a third. One and the same argumentation-text can appear to one mathematician to express an indirect proof while appearing to another mathematician to express a direct proof. WHAT IS A PARADOX’S SOLUTION? Of the above four sorts of argumentation only the paradox invites “solution" or “resolution", and ordinarily this is to be accomplished either by discovering a logical fallacy in the “reasoning" of the argumentation or by discovering that the conclusion is not really false or by discovering that one of the premises is not really true. Resolution of a paradox by a participant amounts to reclassifying a formerly paradoxical argumentation either as a “fallacy", as a direct proof of its conclusion, as an indirect proof of the negation of one of its premises, as an inconsistency proof, or as something else depending on the participant's state of knowledge or belief. This illustrates why an argumentation which is a paradox to a given mathematician at a given time may well not be a paradox to the same mathematician at a later time.

The present article considers several set-theoretic argumentations that appeared in the period 1903-1908. The year 1903 saw the publication of B. Russell's Principles of mathematics [Cambridge Univ. Press, Cambridge, 1903; Jbuch 34, 62]. The year 1908 saw the publication of Russell's article on type theory as well as Ernst Zermelo's two watershed articles on the axiom of choice and the foundations of set theory. The argumentations discussed concern “the largest cardinal", “the largest ordinal", the well-ordering principle, “the well-ordering of the continuum", denumerability of ordinals and denumerability of reals. The article shows that these argumentations were variously classified by various mathematicians and that the surrounding atmosphere was one of confusion and misunderstanding, partly as a result of failure to make or to heed distinctions similar to those made above. The article implies that historians have made the situation worse by not observing or not analysing the nature of the confusion.
RECOMMENDATION This well-written and well-documented article exemplifies the fact that clarification of history can be achieved through articulation of distinctions that had not been articulated (or were not being heeded) at the time. The article presupposes extensive knowledge of the history of mathematics, of mathematics itself (especially set theory), and of philosophy. It is therefore not to be recommended for casual reading. AFTERWORD: This review was written at the same time Corcoran was writing his signature “Argumentations and logic” [249], which covers much of the same ground in much more detail. https://www.academia.edu/14089432/Argumentations_and_Logic
The present crisis of foundations in Fundamental Science is manifested as a comprehensive conceptual crisis: a crisis of understanding, of interpretation and representation, of methodology, a loss of certainty. Fundamental Science "rested" on the understanding of matter, space, the nature of the "laws of nature", fundamental constants, number, time, information, consciousness. The question "What is fundamental?" pushes the mind to other questions → Is Fundamental Science fundamental? → What is the most fundamental in the Universum?.. Physics, do not be afraid of Metaphysics! Levels of fundamentality. Problem № 1 of Fundamental Science is the ontological justification (basification) of mathematics. To understand is to "grasp" Structure ("La Structure mère"). Key ontological ideas for emerging from the crisis of understanding: total unification of matter across all levels of the Universum, one ontological superaxiom, one ontological superprinciple. The ontological construction method of the knowledge basis (framework, carcass, foundation). The triune (absolute, ontological) space of eternal generation of new structures and meanings. The super-concept of the scientific world picture of the Information era: Ontological (structural, cosmic) memory as the "soul of matter", the measure of the being of the Universum as a holistic generating process. The result of the ontological construction of the knowledge basis: the primordial (absolute) generating structure is the most fundamental in the Universum.
According to dispositionalism about modality, a proposition <p> is possible just in case something has, or some things have, a power or disposition for its truth; and <p> is necessary just in case nothing has a power for its falsity. But are there enough powers to go around? In Yates (2015) I argued that in the case of mathematical truths such as <2+2=4>, nothing has the power to bring about their falsity or their truth, which means they come out both necessary and not possible. Combining this with axiom (T), it is easy to derive a contradiction. I suggested that dispositionalists ought to retreat a little and say that <p> is possible just in case either p, or there is a power to bring it about that p, grounding the possibility of mathematical propositions in their truth rather than in powers. Vetter (2015) has the resources to provide a response to my argument, and in her (2018) she explicitly addresses it by arguing for a plenitude of powers, based on the idea that dispositions come in degrees, with necessary properties a limiting case of dispositionality. On this view there is a power for <2+2=4> without there being a power to bring about its truth. In this paper I argue that Vetter’s case for plenitude does not work. However, I suggest, if we are prepared to accept metaphysical causation, a case can be made that there is indeed a power for <2+2=4>.
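The contradiction alluded to can be made explicit (a routine reconstruction): if nothing has a power for the truth of $p = \langle 2{+}2{=}4\rangle$, the dispositionalist clauses give $\neg\Diamond p$; if nothing has a power for its falsity, they give $\Box p$; and axiom (T), $\Box A \rightarrow A$, together with its dual $A \rightarrow \Diamond A$, takes us from $\Box p$ to $\Diamond p$, contradicting $\neg\Diamond p$.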
The conventional wisdom has it that between 1905 and 1919 Russell was critical of pragmatism. In particular, in two essays written in 1908–9, he sharply attacked the pragmatist theory of truth, emphasizing that truth is not relative to human practice. In fact, however, Russell was much more indebted to the pragmatists, in particular to William James, than is usually believed. For example, he borrowed from James two key concepts of his new epistemology: sense-data, and the distinction between knowledge by acquaintance and knowledge by description. A reasonable explanation of this is that, historically, Russell’s logical realism and James’s pragmatism have the same roots: the German philosopher Rudolph Hermann Lotze (1817–1881). In this paper we explore the fact that in 1905, under Lotze’s influence, Russell married propositions with beliefs. A few years later this step also made Russell prone to embrace the theory of truth-making that has its roots in James. In contrast to the concept of sense-data and to the distinction between knowledge by acquaintance and knowledge by description, however, the understanding that we believe propositions—and do not, for example, simply grasp them—was in tension with Russell’s Principle of Extensionality, according to which propositions can be logically connected with other propositions only as truth-functions. The point is that when we judge a mind-relation (for example, a relation of belief) to a proposition, the latter cannot be determined as true or false. The two most talented pupils of Russell, Wittgenstein and Ramsey, severely criticized the central place propositional attitudes play in Russell’s logic. Wittgenstein analyzed “A believes that p” as “ ‘p’ says p” (5.542). Ramsey criticized Russell’s belief in propositions the other way round: he stressed that belief is an ambiguous term that can be interpreted for the better in the sense of pragmatism. Prima facie surprisingly, he maintained that his “pragmatism is derived from Mr Russell” (1927: 51).
The paper surveys the currently available axiomatizations of common belief (CB) and common knowledge (CK) by means of modal propositional logics. (Throughout, knowledge, whether individual or common, is defined as true belief.) Section 1 introduces the formal method of axiomatization followed by epistemic logicians, especially the syntax-semantics distinction and the notion of a soundness and completeness theorem. Section 2 explains the syntactical concepts, while briefly discussing their motivations. Two standard semantic constructions, Kripke structures and neighbourhood structures, are introduced in Sections 3 and 4, respectively. It is recalled that Aumann's partitional model of CK is a particular case of a definition in terms of Kripke structures. The paper also restates the well-known fact that Kripke structures can be regarded as particular cases of neighbourhood structures. Section 3 reviews the soundness and completeness theorems proved w.r.t. the former structures by Fagin, Halpern, Moses and Vardi, as well as related results by Lismont. Section 4 reviews the corresponding theorems derived w.r.t. the latter structures by Lismont and Mongin. A general conclusion of the paper is that the axiomatization of CB does not require as strong systems of individual belief as was originally thought: only monotonicity has thus far proved indispensable. Section 5 explains another consequence of general relevance: despite the "infinitary" nature of CB, the axiom systems of this paper admit of effective decision procedures, i.e., they are decidable in the logician's sense.
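For orientation (the standard formulation in this literature, not specific to any one section): writing $E\varphi$ for 'everyone believes $\varphi$', common belief is usually axiomatized by a fixed-point axiom and an induction rule, $$CB\,\varphi \rightarrow E(\varphi \wedge CB\,\varphi) \qquad\qquad \frac{\psi \rightarrow E(\varphi \wedge \psi)}{\psi \rightarrow CB\,\varphi},$$ and the paper's conclusion is that such systems remain sound and complete even when individual belief is required to satisfy nothing beyond monotonicity.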
Spinoza's causal axiom is at the foundation of the Ethics. I motivate, develop and defend a new interpretation that I call the ‘causally restricted interpretation’. This interpretation solves several longstanding puzzles and helps us better understand Spinoza's arguments for some of his most famous doctrines, including his parallelism doctrine and his theory of sense perception. It also undermines a widespread view about the relationship between the three fundamental, undefined notions in Spinoza's metaphysics: causation, conception and inherence.
Girolamo Saccheri (1667--1733) was an Italian Jesuit priest, scholastic philosopher, and mathematician. He earned a permanent place in the history of mathematics by discovering and rigorously deducing an elaborate chain of consequences of an axiom-set for what is now known as hyperbolic (or Lobachevskian) plane geometry. Reviewer's remarks: (1) On two pages of this book Saccheri refers to his previous and equally original book Logica demonstrativa (Turin, 1697), to which 14 of the 16 pages of the editor's "Introduction" are devoted. At the time of the first edition, 1920, the editor was apparently not acquainted with the secondary literature on Logica demonstrativa which continued to grow in the period preceding the second edition \ref[see D. J. Struik, in Dictionary of scientific biography, Vol. 12, 55--57, Scribner's, New York, 1975]. Of special interest in this connection is a series of three articles by A. F. Emch [Scripta Math. 3 (1935), 51--60; Zbl 10, 386; ibid. 3 (1935), 143--152; Zbl 11, 193; ibid. 3 (1935), 221--333; Zbl 12, 98]. (2) It seems curious that modern writers believe that demonstration of the "nondeducibility" of the parallel postulate vindicates Euclid whereas at first Saccheri seems to have thought that demonstration of its "deducibility" is what would vindicate Euclid. Saccheri is perfectly clear in his commitment to the ancient (and now discredited) view that it is wrong to take as an "axiom" a proposition which is not a "primal verity", which is not "known through itself". So it would seem that Saccheri should think that he was convicting Euclid of error by deducing the parallel postulate. The resolution of this confusion is that Saccheri thought that he had proved, not merely that the parallel postulate was true, but that it was a "primal verity" and, thus, that Euclid was correct in taking it as an "axiom". As implausible as this claim about Saccheri may seem, the passage on p. 237, lines 3--15, seems to admit of no other interpretation. Indeed, Emch takes it this way. (3) As has been noted by many others, Saccheri was fascinated, if not obsessed, by what may be called "reflexive indirect deductions", indirect deductions which show that a conclusion follows from given premises by a chain of reasoning beginning with the given premises augmented by the denial of the desired conclusion and ending with the conclusion itself. It is obvious, of course, that this is simply a species of ordinary indirect deduction; a conclusion follows from given premises if a contradiction is deducible from those given premises augmented by the denial of the conclusion---and it is immaterial whether the contradiction involves one of the premises, the denial of the conclusion, or even, as often happens, intermediate propositions distinct from the given premises and the denial of the conclusion. Saccheri seemed to think that a proposition proved in this way was deduced from its own denial and, thus, that its denial was self-contradictory (p. 207). Inference from this mistake to the idea that propositions proved in this way are "primal verities" would involve yet another confusion. The reviewer gratefully acknowledges extensive communication with his former doctoral students J. Gasser and M. Scanlan. ADDED 14 March 2015: (1) Wikipedia reports that many of Saccheri's ideas have a precedent in the 11th Century Persian polymath Omar Khayyám's Discussion of Difficulties in Euclid, a fact ignored in most Western sources until recently.
It is unclear whether Saccheri had access to this work in translation, or developed his ideas independently. (2) This book is another exemplification of the huge difference between indirect deduction and indirect reduction. Indirect deduction requires making an assumption that is inconsistent with the premises previously adopted. This means that the reasoner must perform a certain mental act of assuming a certain proposition. In case the premises are all known truths, indirect deduction—which would then be indirect proof—requires the reasoner to assume a falsehood. This fact has been noted by several prominent mathematicians including Hardy, Hilbert, and Tarski. Indirect reduction requires no new assumption. Indirect reduction is simply a transformation of an argument in one form into another argument in a different form. In an indirect reduction, one proposition in the old premise set is replaced by the contradictory opposite of the old conclusion, and the new conclusion becomes the contradictory opposite of the replaced premise. Roughly and schematically, P,Q/R becomes P,~R/~Q or ~R,Q/~P. Saccheri’s work involved indirect deduction, not indirect reduction. (3) The distinction between indirect deduction and indirect reduction has largely slipped through the cracks, the cracks between medieval-oriented logic and modern-oriented logic. The medievalists have a heavy investment in reduction and, though they have heard of deduction, they think that deduction is a form of reduction, or vice versa, or in some cases they think that the word ‘deduction’ is the modern way of referring to reduction. The modernists have no interest in reduction, i.e. in the process of transforming one argument into another having exactly the same number of premises. Modern logicians, like Aristotle, are concerned with deducing a single proposition from a set of propositions. Some focus on deducing a single proposition from the null set—something difficult to relate to reduction.
The system R, or more precisely its pure implicational fragment R→, is considered by relevance logicians as the most important. Another central system of relevance logic has been the logic E of entailment, which was supposed to capture strict relevant implication. A further system of relevance logic is RM, or R-mingle. The question is whether adding the mingle axiom to R→ yields RM→, the pure implicational fragment of the system RM. As concerns the weak systems, there are at least two approaches to the problem. First, it is possible to restrict the validity of some theorems. On another approach, we can investigate even weaker logics which have no theorems and are characterized only by rules of deducibility.
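For reference, the mingle axiom in its standard formulation is $$A \rightarrow (A \rightarrow A),$$ and the question above is whether the implicational axioms of R→ plus mingle axiomatize RM→, the implicational fragment of RM.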
According to Cantor (Mathematische Annalen 21:545–586, 1883; Cantor’s letter to Dedekind, 1899) a set is any multitude which can be thought of as one (“jedes Viele, welches sich als Eines denken läßt”) without contradiction: a consistent multitude. Other multitudes are inconsistent or paradoxical. Set-theoretical paradoxes have a common root: lack of understanding of why some multitudes are not sets. Why can some multitudes of objects of thought not themselves be objects of thought? Moreover, it is a logical truth that such multitudes do exist. However, we do not understand this logical truth as well as we understand, for example, the logical truth $\forall x \, x = x$. In this paper we formulate a logical truth which we call the productivity principle. Russell (Proc Lond Math Soc 4(2):29–53, 1906) was the first to formulate this principle, but in a restricted form and with a different purpose. The principle explicates a logical mechanism that lies behind paradoxical multitudes, and is understandable as well as any simple logical truth. However, it does not explain the concept of set. It only sets logical bounds on the concept within the framework of the classical two-valued $\in$-language. The principle behaves as a logical regulator of any theory we formulate to explain and describe sets. It provides tools to identify paradoxical classes inside the theory. We show how the known paradoxical classes follow from the productivity principle and how the principle gives us a uniform way to generate new paradoxical classes. In the case of ZFC set theory, the productivity principle shows that the limitation-of-size principles are of a restrictive nature and that they do not explain which classes are sets. The productivity principle, as a logical regulator, can have a definite heuristic role in the development of a consistent set theory. We sketch such a theory: the cumulative cardinal theory of sets. The theory is based on the idea of cardinality of collecting objects into sets. Its development is guided by the productivity principle in such a way that its consistency seems plausible. Moreover, the theory inherits good properties from the cardinal conception and from the cumulative conception of sets. Because of the cardinality principle it can easily justify the replacement axiom, and because of the cumulative property it can easily justify the power set axiom and the union axiom. It would be possible to prove that the cumulative cardinal theory of sets is equivalent to the Morse–Kelley set theory. In this way we provide a natural and plausibly consistent axiomatization of the Morse–Kelley set theory.
An axiom of medical research ethics is that a protocol is moral only if it has a “favorable risk-benefit ratio”. This axiom is usually interpreted in the following way: a medical research protocol is moral only if it has a positive expected value, that is, if it is likely to do more good (to both subjects and society) than harm. I argue that, thus interpreted, the axiom has two problems. First, it is unusable, because it requires us to know more about the potential outcomes of research than we ever could. Second, it is false, because it conflicts with the so-called “soft paternalist” principles of liberal democracy. In place of this flawed rule I propose a new way of making risk-benefit assessments, one that does comport with the principles of liberalism. I argue that a protocol is moral only if it would be entered into by competent subjects who are informed about the protocol. The new rule thus eschews all pseudo-utilitarian calculation about the protocol’s likely harms and benefits.
We review a recent approach to the foundations of quantum mechanics inspired by quantum information theory. The approach is based on a general framework, which allows one to address a large class of physical theories which share basic information-theoretic features. We first illustrate two very primitive features, expressed by the axioms of causality and purity-preservation, which are satisfied by both classical and quantum theory. We then discuss the axiom of purification, which expresses a strong version of the Conservation of Information and captures the core of a vast number of protocols in quantum information. Purification is a highly non-classical feature and leads directly to the emergence of entanglement at the purely conceptual level, without any reference to the superposition principle. Supplemented by a few additional requirements, satisfied by classical and quantum theory, it provides a complete axiomatic characterization of quantum theory for finite dimensional systems.
I propose a relevance-based independence axiom on how to aggregate individual yes/no judgments on given propositions into collective judgments: the collective judgment on a proposition depends only on people’s judgments on propositions which are relevant to that proposition. This axiom contrasts with the classical independence axiom: the collective judgment on a proposition depends only on people’s judgments on the same proposition. I generalize the premise-based rule and the sequential-priority rule to an arbitrary priority order of the propositions, instead of a dichotomous premise/conclusion order resp. a linear priority order. I prove four impossibility theorems on relevance-based aggregation. One theorem simultaneously generalizes Arrow’s Theorem (in its general and indifference-free versions) and the well-known Arrow-like theorem in judgment aggregation.
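Schematically (notation introduced here for illustration): writing $R(p)$ for the set of propositions relevant to $p$ and $F$ for the aggregation rule, relevance-based independence requires that for all admissible profiles $$\big(J_i \cap R(p) = J_i' \cap R(p) \text{ for every individual } i\big) \;\Rightarrow\; F(J_1, \ldots, J_n)(p) = F(J_1', \ldots, J_n')(p),$$ which collapses to classical independence in the special case $R(p) = \{p\}$.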
As is well-known, the formal system in which Frege works in his Grundgesetze der Arithmetik is formally inconsistent, Russell's Paradox being derivable in it. This system is, except for minor differences, full second-order logic, augmented by a single non-logical axiom, Frege's Axiom V. It has been known for some time now that the first-order fragment of the theory is consistent. The present paper establishes that both the simple and the ramified predicative second-order fragments are consistent, and that Robinson arithmetic, Q, is relatively interpretable in the simple predicative fragment. The philosophical significance of the result is discussed.
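For reference, Axiom V in modern notation says that value-ranges are identical just in case the corresponding concepts are coextensive: $$\hat{\varepsilon}F = \hat{\varepsilon}G \;\leftrightarrow\; \forall x\,(Fx \leftrightarrow Gx).$$ It is the interaction of this axiom with impredicative second-order comprehension that yields Russell's Paradox, which is why restricting comprehension to predicative instances, as in the fragments above, can restore consistency.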
The purpose of this paper is to challenge some widespread assumptions about the role of the modal axiom 4 in a theory of vagueness. In the context of vagueness, axiom 4 usually appears as the principle ‘If it is clear (determinate, definite) that A, then it is clear (determinate, definite) that it is clear (determinate, definite) that A’, or, more formally, CA → CCA. We show how in the debate over axiom 4 two different notions of clarity are in play (Williamson-style “luminosity”, i.e. self-revealing clarity, and concealable clarity) and what their respective functions are in accounts of higher-order vagueness. On this basis, we argue first that, contrary to common opinion, higher-order vagueness and S4 are perfectly compatible. This is in response to claims like that by Williamson that, if vagueness is defined with the help of a clarity operator that obeys axiom 4, higher-order vagueness disappears. Second, we argue that, contrary to common opinion, (i) bivalence-preservers (e.g. epistemicists) can without contradiction condone axiom 4 (by adopting what elsewhere we call columnar higher-order vagueness), and (ii) bivalence-discarders (e.g. open-texture theorists, supervaluationists) can without contradiction reject axiom 4. Third, we rebut a number of arguments that have been produced by opponents of axiom 4, in particular those by Williamson. (The paper is pitched towards graduate students with basic knowledge of modal logic.)
Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating ontologies corresponding to multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their generation can be prevented during ontology generation. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of views of heterogeneous agents. Second, we prevent the generation of inconsistencies by letting the agents engage in a turn-based rational protocol about the axioms to be added to the integrated ontology. We instantiate the two approaches using real-world ontologies and compare them by measuring the levels of satisfaction of the agents w.r.t. the ontology obtained by the two procedures.
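As a toy illustration of the first route (aggregate by voting, then repair), here is a minimal Python sketch under strong simplifying assumptions: 'axioms' are bare propositional literals, inconsistency is nothing more than a complementary pair, and weakening is crudely approximated by dropping the less-supported axiom. None of this is the paper's actual machinery, which operates on description-logic ontologies; all names and thresholds here are invented for illustration.

from collections import Counter

def aggregate_and_repair(agent_ontologies, quorum=0.0):
    """Vote-based aggregation of literal 'axioms', then repair:
    for each complementary pair surviving the vote, drop the
    less-supported member (a crude stand-in for axiom weakening)."""
    n = len(agent_ontologies)
    support = Counter(ax for onto in agent_ontologies for ax in set(onto))
    merged = {ax for ax, c in support.items() if c / n > quorum}
    for ax in sorted(merged):  # sorted() snapshots, so mutating merged is safe
        neg = ax[4:] if ax.startswith("not ") else "not " + ax
        if neg in merged:
            merged.discard(ax if support[ax] <= support[neg] else neg)
    return merged

# Example: 'not B' survives the permissive vote but is dropped in repair,
# since B has more support across the agents.
agents = [{"A", "B"}, {"A", "B"}, {"A", "not B"}]
print(aggregate_and_repair(agents))  # {'A', 'B'} (set order may vary)

Genuine axiom weakening would replace a conflicting axiom with a logically weaker one rather than deleting it; the deletion step above merely marks where that refinement would go.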
A number of philosophers think that grounding is, in some sense, well-founded. This thesis, however, is not always articulated precisely, nor is there a consensus in the literature as to how it should be characterized. In what follows, I consider several principles that one might have in mind when asserting that grounding is well-founded, and I argue that one of these principles, which I call ‘full foundations’, best captures the relevant claim. My argument is by the process of elimination. For each of the inadequate principles, I illustrate its inadequacy by showing either that it excludes cases that should not be ruled out by a well-foundedness axiom for grounding, or that it admits cases that should be ruled out.
One of the most prominent myths in analytic philosophy is the so-called “Fregean Axiom”, according to which the reference of a sentence is a truth value. In contrast to this referential semantics, a use-based formal semantics will be constructed in which the logical value of a sentence is not its putative referent but the information it conveys. Let us call the corresponding formal semantics “Question Answer Semantics” (hereafter: QAS): a non-Fregean many-valued logic, where the meaning of any sentence is an ordered n-tuple of yes-no answers to corresponding questions. A sample of philosophical problems will be approached in order to justify the relevance of QAS. These include: (1) illocutionary forces, and the logical analysis of speech-acts; (2) the variety of logical negations, and their characterization in terms of restricted ranges of logical values; (3) change in meaning, and the use of dynamic oppositions for belief sets.
In the contemporary philosophy of set theory, discussion of new axioms that purport to resolve independence necessitates an explanation of how they come to be justified. Ordinarily, justification is divided into two broad kinds: intrinsic justification relates to how `intuitively plausible' an axiom is, whereas extrinsic justification supports an axiom by identifying certain `desirable' consequences. This paper puts pressure on how this distinction is formulated and construed. In particular, we argue that the distinction as often presented is neither well-demarcated nor sufficiently precise. Instead, we suggest that the process of justification in set theory should not be thought of as neatly divisible in this way, but should rather be understood as a conceptually indivisible notion linked to the goal of explanation.
I prove that invoking the univalence axiom is equivalent to arguing 'without loss of generality' (WLOG) within Propositional Univalent Foundations (PropUF), the fragment of Univalent Foundations (UF) in which all homotopy types are mere propositions. As a consequence, I argue that practicing mathematicians, in accepting WLOG as a valid form of argument, implicitly accept the univalence axiom and that UF rightly serves as a Foundation for Mathematical Practice. By contrast, ZFC is inconsistent with WLOG as it is applied, and therefore cannot serve as a foundation for practice.
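For reference, the univalence axiom in its standard form asserts that the canonical map from identities to equivalences, $$(A =_{\mathcal{U}} B) \rightarrow (A \simeq B),$$ is itself an equivalence. Restricted to mere propositions, as in PropUF, it amounts to propositional extensionality: $(P \leftrightarrow Q) \rightarrow (P = Q)$.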
This is a critical review of Roger Crisp's The Cosmos of Duty. The review praises the book but, among other things, takes issue with some of Crisp's criticisms of Sidgwick's view that resolution of the free will problem is of limited significance to ethics and with Crisp's claim that in Methods III.xiii Sidgwick defends an axiom of prudence that undergirds rational egoism.