Spinoza's causal axiom is at the foundation of the Ethics. I motivate, develop and defend a new interpretation that I call the ‘causally restricted interpretation’. This interpretation solves several longstanding puzzles and helps us better understand Spinoza's arguments for some of his most famous doctrines, including his parallelism doctrine and his theory of sense perception. It also undermines a widespread view about the relationship between the three fundamental, undefined notions in Spinoza's metaphysics: causation, conception and inherence.
Formalizing Euclid’s first axiom. Bulletin of Symbolic Logic. 20 (2014) 404–5. (Coauthor: Daniel Novotný)

Euclid [fl. 300 BCE] divides his basic principles into what came to be called ‘postulates’ and ‘axioms’—two words that are synonyms today but which are commonly used to translate Greek words meant by Euclid as contrasting terms.

Euclid’s postulates are specifically geometric: they concern geometric magnitudes, shapes, figures, etc.—nothing else. The first: “to draw a line from any point to any point”; the last: the parallel postulate.

Euclid’s axioms are general principles of magnitude: they concern geometric magnitudes and magnitudes of other kinds, even numbers. The first is often translated “Things that equal the same thing equal one another”.

There are other differences that are or might become important.

Aristotle [fl. 350 BCE] meticulously separated his basic principles [archai, singular archê] according to subject matter: geometrical, arithmetic, astronomical, etc. However, he made no distinction that can be assimilated to Euclid’s postulate/axiom distinction.

Today we divide basic principles into non-logical [topic-specific] and logical [topic-neutral], but this too is not the same as Euclid’s. In this regard it is important to be cognizant of the difference between equality and identity—a distinction often crudely ignored by modern logicians; Tarski is a rare exception. The four angles of a rectangle are equal to—not identical to—one another; the size of one angle of a rectangle is identical to the size of any other of its angles. No two angles are identical to each other.

The sentence ‘Things that equal the same thing equal one another’ contains no occurrence of the word ‘magnitude’. This paper considers the problem of formalizing the proposition Euclid intended as a principle of magnitudes while being faithful to its logical form and to its information content.
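The formalization problem described above can be made concrete. The following is a sketch of my own (symbols and sortal predicate are not the authors'): an unrestricted first-order rendering of the axiom, faithful to the sentence's surface form, next to a sorted rendering whose quantifiers are restricted to magnitudes, with ≈ read as equality (sameness of magnitude) rather than identity.

```latex
% Unrestricted reading: faithful to the wording, which never says 'magnitude'.
\forall x\,\forall y\,\forall z\,\bigl((x \approx z \wedge y \approx z) \rightarrow x \approx y\bigr)

% Sorted reading: quantifiers restricted by a magnitude predicate M,
% arguably faithful to the intended information content.
\forall x\,\forall y\,\forall z\,\bigl((M(x) \wedge M(y) \wedge M(z)
  \wedge x \approx z \wedge y \approx z) \rightarrow x \approx y\bigr)
```

The tension between the two readings mirrors the equality/identity distinction drawn in the abstract: replacing ≈ with identity would make both renderings logical truths and erase what the axiom says about magnitudes.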
Ontology engineering is a hard and error-prone task, in which small changes may lead to errors, or even produce an inconsistent ontology. As ontologies grow in size, the need for automated methods for repairing inconsistencies while preserving as much of the original knowledge as possible increases. Most previous approaches to this task are based on removing a few axioms from the ontology to regain consistency. We propose a new method based on weakening these axioms to make them less restrictive, employing refinement operators. We introduce the theoretical framework for weakening DL ontologies, propose algorithms to repair ontologies based on the framework, and provide an analysis of the computational complexity. Through an empirical analysis made over real-life ontologies, we show that our approach preserves significantly more of the original knowledge of the ontology than removing axioms.
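The contrast between repair by removal and repair by weakening can be illustrated with a deliberately tiny model: intervals standing in for axioms, with inconsistency as empty intersection. This is only a toy sketch of the general idea; the paper's actual algorithms operate on description-logic axioms via refinement operators, and every name below is mine.

```python
def intersect(axioms):
    """Joint content of interval 'axioms'; None means inconsistent."""
    lo = max(a for a, b in axioms)
    hi = min(b for a, b in axioms)
    return (lo, hi) if lo <= hi else None

def repair_by_removal(axioms):
    """Classical repair: discard whole axioms until consistency returns."""
    kept = list(axioms)
    while intersect(kept) is None:
        kept.pop()                      # an entire axiom is lost
    return kept

def repair_by_weakening(axioms, step=1):
    """Weakening-style repair: make the tightest axiom less restrictive."""
    kept = list(axioms)
    while intersect(kept) is None:
        i = max(range(len(kept)), key=lambda j: kept[j][0])
        a, b = kept[i]
        kept[i] = (a - step, b + step)  # enlarge the interval, keep the axiom
    return kept

axioms = [(0, 4), (3, 9), (6, 10)]      # jointly inconsistent: 6 > 4
print(repair_by_removal(axioms))        # a whole axiom is dropped
print(repair_by_weakening(axioms))      # all three survive, one weakened
```

Removal returns two axioms; weakening keeps all three, which is the sense in which it preserves more of the original knowledge.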
Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating ontologies corresponding to multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their generation can be prevented during ontology generation. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of views of heterogeneous agents. Second, we prevent the generation of inconsistencies by letting the agents engage in a turn-based rational protocol about the axioms to be added to the integrated ontology. We instantiate the two approaches using real-world ontologies and compare them by measuring the levels of satisfaction of the agents w.r.t. the ontology obtained by the two procedures.
The purpose of this paper is to challenge some widespread assumptions about the role of the modal axiom 4 in a theory of vagueness. In the context of vagueness, axiom 4 usually appears as the principle ‘If it is clear (determinate, definite) that A, then it is clear (determinate, definite) that it is clear (determinate, definite) that A’, or, more formally, CA → CCA. We show how in the debate over axiom 4 two different notions of clarity are in play (Williamson-style "luminosity" or self-revealing clarity, and concealable clarity) and what their respective functions are in accounts of higher-order vagueness. On this basis, we argue first that, contrary to common opinion, higher-order vagueness and S4 are perfectly compatible. This is in response to claims like that by Williamson that, if vagueness is defined with the help of a clarity operator that obeys axiom 4, higher-order vagueness disappears. Second, we argue that, contrary to common opinion, (i) bivalence-preservers (e.g. epistemicists) can without contradiction condone axiom 4 (by adopting what elsewhere we call columnar higher-order vagueness), and (ii) bivalence-discarders (e.g. open-texture theorists, supervaluationists) can without contradiction reject axiom 4. Third, we rebut a number of arguments that have been produced by opponents of axiom 4, in particular those by Williamson. (The paper is pitched towards graduate students with basic knowledge of modal logic.)
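Axiom 4's frame condition can be checked mechanically on small Kripke frames. The sketch below is my own code illustrating standard modal semantics, not anything specific to the paper: it reads C as a box operator and confirms that CA → CCA holds under every valuation on a transitive frame, but fails on a non-transitive one.

```python
from itertools import chain, combinations

def box(W, R, prop):
    """Worlds where 'clearly A' holds: every R-successor satisfies A."""
    return {w for w in W if all(v in prop for v in R.get(w, set()))}

def axiom4_holds(W, R, prop):
    """CA -> CCA at every world, for this particular valuation of A."""
    cA = box(W, R, prop)
    return cA <= box(W, R, cA)

def valid4(W, R):
    """Axiom 4 holds under every valuation on the frame (W, R)."""
    subsets = chain.from_iterable(
        combinations(sorted(W), r) for r in range(len(W) + 1))
    return all(axiom4_holds(W, R, set(s)) for s in subsets)

W = {0, 1, 2}
R_trans = {0: {1, 2}, 1: {2}, 2: set()}      # transitive
R_nontrans = {0: {1}, 1: {2}, 2: set()}      # 0R1 and 1R2, but not 0R2
print(valid4(W, R_trans), valid4(W, R_nontrans))   # True False
```

On the non-transitive frame, the valuation making A true at world 1 alone is a countermodel: CA holds at 0 (its only successor is 1) while CCA fails there, which is the semantic content of the frame correspondence for axiom 4.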
The naive theory of properties states that for every condition there is a property instantiated by exactly the things which satisfy that condition. The naive theory of properties is inconsistent in classical logic, but there are many ways to obtain consistent naive theories of properties in nonclassical logics. The naive theory of classes adds to the naive theory of properties an extensionality rule or axiom, which states roughly that if two classes have exactly the same members, they are identical. In this paper we examine the prospects for obtaining a satisfactory naive theory of classes. We start from a result by Ross Brady, which demonstrates the consistency of something resembling a naive theory of classes. We generalize Brady’s result somewhat and extend it to a recent system developed by Andrew Bacon. All of the theories we prove consistent contain an extensionality rule or axiom. But we argue that given the background logics, the relevant extensionality principles are too weak. For example, in some of these theories, there are universal classes which are not declared coextensive. We elucidate some very modest demands on extensionality, designed to rule out this kind of pathology. But we close by proving that even these modest demands cannot be jointly satisfied. In light of this new impossibility result, the prospects for a naive theory of classes are bleak.
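The classical inconsistency mentioned at the outset is the property-theoretic form of Russell's paradox, and it can be written out in two lines. Writing x η y for "x instantiates y" (my notation), naive comprehension applied to the condition ¬(x η x) yields a property r whose instantiation conditions are contradictory:

```latex
\exists r\,\forall x\,\bigl(x \mathbin{\eta} r \leftrightarrow \neg\,(x \mathbin{\eta} x)\bigr)
\quad\Longrightarrow\quad
r \mathbin{\eta} r \leftrightarrow \neg\,(r \mathbin{\eta} r)
```

Instantiating the universal quantifier with r itself gives the biconditional on the right, which is unsatisfiable in classical logic; the nonclassical theories surveyed in the paper block this step in various ways.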
The Principle of Ariadne, formulated in 1988 by Walter Carnielli and Carlos Di Prisco and later published in 1993, is an infinitary principle that is independent of the Axiom of Choice in ZF, although it can be consistently added to the remaining ZF axioms. The present paper surveys, and motivates, the foundational importance of the Principle of Ariadne and proposes the Ariadne Game, showing that the Principle of Ariadne corresponds precisely to a winning strategy for the Ariadne Game. Some relations to other alternative set-theoretical principles are also briefly discussed.
Standard approaches to proper names, based on Kripke's views, hold that the semantic values of expressions are (set-theoretic) functions from possible worlds to extensions and that names are rigid designators, i.e. that their values are constant functions from worlds to entities. The difficulties with these approaches are well-known and in this paper we develop an alternative. Based on earlier work on a higher order logic that is truly intensional in the sense that it does not validate the axiom scheme of Extensionality, we develop a simple theory of names in which Kripke's intuitions concerning rigidity are accounted for, but the more unpalatable consequences of standard implementations of his theory are avoided. The logic uses Frege's distinction between sense and reference and while it accepts the rigidity of names it rejects the view that names have direct reference. Names have constant denotations across possible worlds, but the semantic value of a name is not determined by its denotation.
The aim of this paper is to show that every topological space gives rise to a wealth of topological models of the modal logic S4.1. The construction of these models is based on the fact that every space defines a Boolean closure algebra (to be called a McKinsey algebra) that neatly reflects the structure of the modal system S4.1. It is shown that the class of topological models based on McKinsey algebras contains a canonical model that can be used to prove a completeness theorem for S4.1. Further, it is shown that the McKinsey algebra MKX of a space X endowed with an alpha-topology satisfies Esakia's GRZ axiom.
The system R, or more precisely its pure implicational fragment R→, is considered by the relevance logicians as the most important. Another central system of relevance logic has been the logic E of entailment, which was supposed to capture strict relevant implication. The next system of relevance logic is RM, or R-mingle. The question is whether adding the mingle axiom to R→ yields the pure implicational fragment RM→ of the system. As concerns the weak systems, there are at least two approaches to the problem. First of all, it is possible to restrict the validity of some theorems. In another approach we can investigate even weaker logics which have no theorems and are characterized only by rules of deducibility.
The text is a continuation of the article of the same name published in the previous issue of Philosophical Alternatives. The philosophical interpretations of the Kochen-Specker theorem (1967) are considered. Einstein's principle regarding the “consubstantiality of inertia and gravity” (1918) allows a parallel between descriptions of a physical micro-entity in relation to the macro-apparatus on the one hand, and of physical macro-entities in relation to the astronomical mega-entities on the other. The Bohmian interpretation (1952) of quantum mechanics proposes that all quantum systems be interpreted as dissipative ones and that the theorem be thus understood. The conclusion is that the continual representation of a system, by force or (gravitational) field between parts interacting by means of it, is equivalent to the mutual entanglement of those parts if the representation is discrete. Gravity (force field) and entanglement are two different, correspondingly continual and discrete, images of a single common essence. General relativity can be interpreted as a superluminal generalization of special relativity. The postulate exists of an alleged obligatory difference between a model and reality in science and philosophy. It can also be deduced by interpreting a corollary of the theorem. On the other hand, quantum mechanics, on the basis of this theorem and of von Neumann's (1932), introduces the option that a model be entirely identified as the modeled reality and, therefore, that absolute reality be recognized: this is a non-standard hypothesis in the epistemology of science. Thus the true reality begins to be understood mathematically, i.e. in a Pythagorean manner, through its identification with its mathematical model. A few linked problems are highlighted: the role of the axiom of choice for correctly interpreting the theorem; whether the theorem can be considered an axiom; whether the theorem can be considered equivalent to the negation of the axiom.
I prove that invoking the univalence axiom is equivalent to arguing 'without loss of generality' (WLOG) within Propositional Univalent Foundations (PropUF), the fragment of Univalent Foundations (UF) in which all homotopy types are mere propositions. As a consequence, I argue that practicing mathematicians, in accepting WLOG as a valid form of argument, implicitly accept the univalence axiom and that UF rightly serves as a Foundation for Mathematical Practice. By contrast, ZFC is inconsistent with WLOG as it is applied, and therefore cannot serve as a foundation for practice.
Peano arithmetic cannot serve as the ground of mathematics, for it is inconsistent to infinity, and infinity is necessary for its foundation. Though Peano arithmetic cannot be complemented by any axiom of infinity, there exists at least one (logical) axiomatics consistent to infinity. What is meant here is nothing else than a new reading and comparative interpretation of Gödel’s papers (1930; 1931). Peano arithmetic anyway admits generalizations consistent to infinity and thus to some addable axiom(s) of infinity. The most utilized example of those generalizations is the complex Hilbert space. Any generalization of Peano arithmetic consistent to infinity, e.g. the complex Hilbert space, can serve as a foundation for mathematics to found itself and by itself.
Intuitionistic logic provides an elegant solution to the Sorites Paradox. Its acceptance has been hampered by two factors. First, the lack of an accepted semantics for languages containing vague terms has led even philosophers sympathetic to intuitionism to complain that no explanation has been given of why intuitionistic logic is the correct logic for such languages. Second, switching from classical to intuitionistic logic, while it may help with the Sorites, does not appear to offer any advantages when dealing with the so-called paradoxes of higher-order vagueness. We offer a proposal that makes strides on both issues. We argue that the intuitionist’s characteristic rejection of any third alethic value alongside true and false is best elaborated by taking the normal modal system S4M to be the sentential logic of the operator ‘it is clearly the case that’. S4M opens the way to an account of higher-order vagueness which avoids the paradoxes that have been thought to infect the notion. S4M is one of the modal counterparts of the intuitionistic sentential calculus and we use this fact to explain why IPC is the correct sentential logic to use when reasoning with vague statements. We also show that our key results go through in an intuitionistic version of S4M. Finally, we deploy our analysis to reply to Timothy Williamson’s objections to intuitionistic treatments of vagueness.
We examine some of Connes’ criticisms of Robinson’s infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes’ own earlier work in functional analysis. Connes described the hyperreals as both a “virtual theory” and a “chimera”, yet acknowledged that his argument relies on the transfer principle. We analyze Connes’ “dart-throwing” thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are Lebesgue measurable, suggesting that Connes views a theory as being “virtual” if it is not definable in a suitable model of ZFC. If so, Connes’ claim that a theory of the hyperreals is “virtual” is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters aren’t definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 80s, and in Noncommutative Geometry, raising the question whether the latter may not be vulnerable to Connes’ criticism of virtuality. We analyze the philosophical underpinnings of Connes’ argument based on Gödel’s incompleteness theorem, and detect an apparent circularity in Connes’ logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace −∫ (featured on the front cover of Connes’ magnum opus) and the Hahn–Banach theorem, in Connes’ own framework. We also note an inaccuracy in Machover’s critique of infinitesimal-based pedagogy.
C. I. Lewis (1883–1964) was the first major figure in history and philosophy of logic—a field that has come to be recognized as a separate specialty after years of work by Ivor Grattan-Guinness and others (Dawson 2003, 257). Lewis was among the earliest to accept the challenges offered by this field; he was the first who had the philosophical and mathematical talent, the philosophical, logical, and historical background, and the patience and dedication to objectivity needed to excel. He was blessed with many fortunate circumstances, not least of which was entering the field when mathematical logic, after only six decades of toil, had just reaped one of its most important harvests with publication of the monumental Principia Mathematica. It was a time of joyful optimism which demanded an historical account and a sober philosophical critique. Lewis was one of the first to apply to mathematical logic the Aristotelian dictum that we do not understand a living institution until we see it growing from its birth.
In previous articles, it has been shown that the deductive system developed by Aristotle in his "second logic" is a natural deduction system and not an axiomatic system as previously had been thought. It was also stated that Aristotle's logic is self-sufficient in two senses: First, that it presupposed no other logical concepts, not even those of propositional logic; second, that it is (strongly) complete in the sense that every valid argument expressible in the language of the system is deducible by means of a formal deduction in the system. Review of the system makes the first point obvious. The purpose of the present article is to prove the second. Strong completeness is demonstrated for the Aristotelian system.
This paper intends to further the understanding of the formal properties of (higher-order) vagueness by connecting theories of (higher-order) vagueness with more recent work in topology. First, we provide a “translation” of Bobzien's account of columnar higher-order vagueness into the logic of topological spaces. Since columnar vagueness is an essential ingredient of her solution to the Sorites paradox, a central problem of any theory of vagueness comes into contact with the modern mathematical theory of topology. Second, Rumfitt’s recent topological reconstruction of Sainsbury’s theory of prototypically defined concepts is shown to lead to the same class of spaces that characterize Bobzien’s account of columnar vagueness, namely, weakly scattered spaces. Rumfitt calls these spaces polar spaces. They turn out to be closely related to Gärdenfors’ conceptual spaces, which have come to play an ever more important role in cognitive science and related disciplines. Finally, Williamson’s “logic of clarity” is explicated in terms of a generalized topology (“locology”) that can be considered an alternative to standard topology. Arguably, locology has some conceptual advantages over topology with respect to the conceptualization of a boundary and a borderline. Moreover, in Williamson’s logic of clarity, vague concepts with respect to a locologically inspired notion of a “slim boundary” are (stably) columnar. Thus, Williamson’s logic of clarity also exhibits a certain affinity for columnar vagueness. In sum, a topological perspective is useful for a conceptual elucidation and unification of central aspects of a variety of contemporary accounts of vagueness.
The aim of this paper is to present a general method for constructing natural tessellations of conceptual spaces that is based on their topological structure. This method works for a class of spaces that was defined some 80 years ago by the Russian mathematician Pavel Alexandroff. Alexandroff spaces, as they are called today, are distinguished from other topological spaces by the fact that they exhibit a 1-1 correspondence between their specialization orders and their topological structures. Recently, Ian Rumfitt (apparently not being aware of Alexandroff’s work) used a very special case of Alexandroff’s method to elucidate the logic of vague concepts in a new way. Elaborating his approach, the color circle’s conceptual space can be shown to define an atomistic Boolean algebra of regular open concepts. In a similar way Gärdenfors’ geometrical discretization of conceptual spaces by Voronoi tessellations also can be shown to be a kind of geometrical version of Alexandroff’s topological construction. More precisely, a discretization à la Gärdenfors is extensionally equivalent to a topological discretization constructed by Alexandroff’s method. Rumfitt’s and Gärdenfors’s constructions turn out to be special cases of an approach that works much more generally, namely, for Alexandroff spaces. For these spaces (X, OX) the Boolean algebras O*X of regular open sets are still atomistic and yield natural tessellations of X.
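The 1-1 correspondence the abstract describes is easy to exhibit on a finite carrier. In the sketch below (illustrative code of mine, not from the paper), the open sets of an Alexandroff topology are taken to be the up-sets of a preorder, and the specialization preorder (on one common convention: x ≤ y iff every open set containing x contains y) recovers exactly the order we started from.

```python
from itertools import chain, combinations

def up_sets(X, leq):
    """All up-closed subsets of (X, leq): these form an Alexandroff topology."""
    subsets = chain.from_iterable(
        combinations(sorted(X), r) for r in range(len(X) + 1))
    return {s for s in map(frozenset, subsets)
            if all(y in s for x in s for y in X if leq(x, y))}

def specialization(X, opens):
    """Recover the preorder: x <= y iff every open containing x contains y."""
    return {(x, y) for x in X for y in X
            if all(y in U for U in opens if x in U)}

X = {0, 1, 2}
leq = lambda x, y: x <= y                # a toy preorder: the chain 0 <= 1 <= 2
opens = up_sets(X, leq)
# The topology determines the order we started from:
print(specialization(X, opens) == {(x, y) for x in X for y in X if x <= y})
```

For the three-element chain the topology has exactly four opens (the empty set and the up-sets of each point), and round-tripping through `specialization` returns the original order, which is the finite shadow of Alexandroff's general correspondence.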
The paper contains an overview of the most important results presented in the author's monograph "Teorie Językow Syntaktycznie-Kategorialnych" ("Theories of Syntactically-Categorial Languages", in Polish), PWN, Warszawa-Wrocław 1985. In the monograph four axiomatic systems of syntactically-categorial languages are presented. The first two refer to languages of expression-tokens. The others also take into consideration languages of expression-types. Generally, syntactically-categorial languages are languages built in accordance with principles of the theory of syntactic categories introduced by S. Leśniewski [1929, 1930]; they are connected with Ajdukiewicz’s work [1935], which was a continuation of Leśniewski’s idea and was further developed and popularized in the research on categorial grammars by Y. Bar-Hillel [1950, 1953, 1964]. The main idea of the syntactically-categorial approach to language is to assign a suitable syntactic category to each word of the vocabulary. Compound expressions are built from the words of the vocabulary, and then a suitable syntactic category is assigned to each of them. A language built in this way should be decidable, which means that there should exist an algorithm for deciding, for each expression of it, whether it is well-formed, i.e. syntactically connected. The traditional understanding of the syntactic category, originating from Husserl, confronts some difficulties. This notion is defined by abstraction using the concept of affiliation of two expressions to the same syntactic category.
This paper offers an overview of various alternative formulations for Analysis, the theory of Integral and Differential Calculus, and its diverging conceptions of the topological structure of the continuum. We pay particular attention to Smooth Analysis, a proposal created by William Lawvere and Anders Kock based on Grothendieck’s work on a categorical algebraic geometry. The role of Heyting’s logic, common to all these alternatives, is emphasized.
The primary sense of the word ‘hypothesis’ in modern colloquial English includes “proposition not yet settled” or “open question”. Its opposite is ‘fact’ in the sense of “proposition widely known to be true”. People are amazed that Plato [1, p. 1684] and Aristotle [Post. An. I.2 72a14–24, quoted below] used the Greek form of the word for indemonstrable first principles [sc. axioms] in general or for certain kinds of axioms. These two facts create the paradoxical situation that in many cases it is impossible to translate the Greek form of the word using the English form: the primary sense of the word ‘hypothesis’ in modern colloquial English is diametrically opposed to one sense used by Plato and by his most accomplished student. Given current colloquial English usage, it is impossible to get the word ‘hypothesis’ to carry the connotation of “settled truth”, much less “axiomatic truth”. The ‘hypo-’ [under] in the Plato-Aristotle use of ‘hypothesis’ might carry the sense of “basis” or “foundational”, as opposed to “less than usual or normal”. This paradox parallels the one pointed out by Robin Smith: it is impossible for the English word ‘syllogism’ to carry the meaning of its Greek form that Aristotle intended. There are other cases as well: it is impossible for the English biological term ‘genus’ to carry the meaning of its Greek form: the Greek genos refers to family, as in our ‘genealogy’, not to “higher species” as in our ‘generic’.
The main purpose of this article is to tackle the problem of living together – as dignified human beings – in a certain territory in the field of social philosophy, on the theoretical grounding ensured by some remarkable exponents of the Austrian School, and by means of the praxeological method. Because political tools diminish the human nature not only of those who use them, but also of those who undergo their effects, people can live a life worthy of a human being only as members of some autarchic or self-governing communities. As a spontaneous order, every autarchic community is inherently democratic, inasmuch as it makes possible free involvement, peaceful coordination, free expression and the free reproduction of ideas. The members of autarchic communities are moral individuals who avoid aggression, practice self-control, seek dynamic efficiency and establish a democratic public discourse.
Suppose there is a domain of discourse of English, then everything of which any predicate is true is a member of that domain. If English has a domain of discourse, then, since ‘is a domain of discourse of English’ is itself a predicate of English and true of that domain, that domain is a member of itself. But nothing is a member of itself. Thus English has no domain of discourse. We defend this argument and go on to argue to the same conclusion without relying on the supposition that English is a language which contains the predicate ‘is a domain of discourse of English’.
Because formal systems of symbolic logic inherently express and represent the deductive inference model, formal proofs of theorem consequences can be understood to represent sound deductive inference to true conclusions, without any need for other representations such as model theory.
If the concept of “free will” is reduced to that of “choice”, the entire physical world shares the latter quality. Still, “free will” can be distinguished from “choice”: free will implicitly involves a certain goal, while choice is only the means by which the aim can be achieved, or not, by the one who determines the target. Thus, for example, an electron always has a choice but no free will, unlike a human, who possesses both. Consequently, and paradoxically, the determinism of classical physics is more subjective and more anthropomorphic than the indeterminism of quantum mechanics, for the former implicitly presupposes a certain deterministic goal, following the model of human free-will behavior. Quantum mechanics introduces choice into the fundament of the physical world, involving a generalized case of choice that can be called “subjectless”: there is a certain choice which originates from the transition of the future into the past. That kind of choice is thus shared by all that exists and does not need any subject: it can be considered a law of nature. There are a few theorems in quantum mechanics directly relevant to the topic: two of them are called “free will theorems” by their authors (Conway and Kochen 2006; 2009). Any quantum system, whether a human or an electron or whatever else, always has a choice: its behavior is not predetermined by its past. This is a physical law. It implies that a form of information, quantum information, underlies all that exists, for the unit of the quantity of information is an elementary choice: either a bit or a quantum bit (qubit).
God's Dice. Vasil Penchev - 2015 - In Actas: VIII Conference of the Spanish Society for Logic, Methodology, and Philosophy of Sciences (eds. J. Martínez, García-Carpintero, J. Díez, S. Oms). Barcelona: Universitat de Barcelona. pp. 297-303.
Einstein wrote his famous sentence "God does not play dice with the universe" in a letter to Max Born in 1926. All experiments have confirmed that quantum mechanics is neither wrong nor “incomplete”. One can say that God does play dice with the universe. Let quantum mechanics be granted as the rules generalizing all results of playing some imaginary God’s dice. If that is the case, one can ask what God’s dice should look like. God’s dice turns out to be a qubit, thus having the shape of a unit ball. Any item in the universe, as well as the universe itself, is both infinitely many rolls and a single roll of that dice, for it has infinitely many “sides”. Thus both the smooth motion of classical physics and the discrete motion introduced in addition by quantum mechanics can be described uniformly: correspondingly, as an infinite series converging to some limit and as a quantum jump directly into that limit. The second, imaginary dimension of God’s dice corresponds to energy, i.e. to the velocity of information change between two probabilities in both series and jump.
Trying to define nothingness has always been a challenge for philosophers. What exactly is it? Does it share properties similar to spaces? Can we treat it as a “thing”? We can say an object is inside nothingness, but how do we imagine that “containment”?
We give a concise presentation of the Univalent Foundations of mathematics outlining the main ideas, followed by a discussion of the UniMath library of formalized mathematics implementing the ideas of the Univalent Foundations (section 1), and the challenges one faces in attempting to design a large-scale library of formalized mathematics (section 2). This leads us to a general discussion about the links between architecture and mathematics where a meeting of minds is revealed between architects and mathematicians (section 3). On the way our odyssey from the foundations to the "horizon" of mathematics will lead us to meet the mathematicians David Hilbert and Nicolas Bourbaki as well as the architect Christopher Alexander.
The present crisis of foundations in Fundamental Science is manifested as a comprehensive conceptual crisis, crisis of understanding, crisis of interpretation and representation, crisis of methodology, loss of certainty. Fundamental Science "rested" on the understanding of matter, space, nature of the "laws of nature", fundamental constants, number, time, information, consciousness. The question "What is fundamental?" pushes the mind to other questions → Is Fundamental Science fundamental? → What is the most fundamental in the Universum?.. Physics, do not be afraid of Metaphysics! Levels of fundamentality. The problem №1 of Fundamental Science is the ontological justification (basification) of mathematics. To understand is to "grasp" Structure ("La Structure mère"). Key ontological ideas for emerging from the crisis of understanding: total unification of matter across all levels of the Universum, one ontological superaxiom, one ontological superprinciple. The ontological construction method of the knowledge basis (framework, carcass, foundation). The triune (absolute, ontological) space of eternal generation of new structures and meanings. Super concept of the scientific world picture of the Information era - Ontological (structural, cosmic) memory as "soul of matter", measure of the Universum being as the holistic generating process. The result of the ontological construction of the knowledge basis: primordial (absolute) generating structure is the most fundamental in the Universum.
Fundamental knowledge endures a deep conceptual crisis, manifested in a total crisis of understanding, a crisis of interpretation and representation, a loss of certainty, troubles with physics, and a crisis of methodology. The crisis of understanding in fundamental science generates a deep crisis of understanding in global society. What way should we choose for overcoming the total crisis of understanding in fundamental science? It should be the way of metaphysical construction of a new, comprehensive model of ideality on the basis of a "modified ontology". The result of a quarter-century of wanderings: a sum of ideas, concepts and eidoses, and a new understanding of space, time, and consciousness.
Total ontological unification of matter at all levels of reality as a whole, the "grasp" of its dialectical structure, space dimensionality and the structure of the language of nature, the "house of Being" [1], gives the opportunity to see the "place" and to understand the nature of information as a phenomenon of Ontological Memory: the measure of being of the whole, "the soul of matter", the qualitative quality of the absolute forms of existence of matter (absolute states). "Information" and "time" are multivalent phenomena of Ontological Memory (OntoMemory), substantiating the essential unity of the world on the "horizontal" and the "vertical". Ontological constructing of the dialectics of Logos self-motion, total unification of matter, and the "grasp" of the nature of information lead to the necessity of introducing a new unit of information expressing the ideas of dialectical formation and generation of new structures and meanings, namely the Delta-Logit (Δ-Logit): a qualitative quantum-prototecton, fundamental organizing, absolute existential-extreme. This simplest mathematical symbol represents the dialectical microprocessor of Nature. John A. Wheeler's ontological formula "It from Bit" [2] is "grasped" as the first dialectical link in the chain of ontological formulas → "It from Δ-Logit" → "It from OntoMemory" → "It from Logos, Logos into It". Ontological Memory is the core, the attractor, of the new conceptual structure of the world of the information age, which is based on the Absolute generating structure, the representant of the onto-genetic code of the Universe.
We show how removing faith-based beliefs in current philosophies of classical and constructive mathematics admits formal, evidence-based definitions of constructive mathematics; of a constructively well-defined logic of a formal mathematical language; and of a constructively well-defined model of such a language.

We argue that, from an evidence-based perspective, classical approaches which follow Hilbert's formal definitions of quantification can be labelled 'theistic', whilst constructive approaches based on Brouwer's philosophy of Intuitionism can be labelled 'atheistic'.

We then adopt what may be labelled a finitary, evidence-based, 'agnostic' perspective and argue that Brouwerian atheism is merely a restricted perspective within the finitary agnostic perspective, whilst Hilbertian theism contradicts the finitary agnostic perspective.

We then consider the argument that Tarski's classic definitions permit an intelligence, whether human or mechanistic, to admit finitary, evidence-based definitions of the satisfaction and truth of the atomic formulas of the first-order Peano Arithmetic PA over the domain N of the natural numbers in two hitherto unsuspected and essentially different ways.

We show that the two definitions correspond to two distinctly different, not necessarily evidence-based but complementary, assignments of satisfaction and truth to the compound formulas of PA over N.

We further show that the PA axioms are true over N, and that the PA rules of inference preserve truth over N, under both of the complementary interpretations; and we conclude some unsuspected constructive consequences of such complementarity for the foundations of mathematics, logic, philosophy, and the physical sciences.
The quantum computer is considered as a generalization of the Turing machine: bits are replaced by qubits. In turn, a "qubit" is the generalization of "bit" referring to infinite sets or series. It extends the concept of calculation from finite processes and algorithms to infinite ones, impossible for any Turing machine (such as our computers). However, the concept of the quantum computer meets all the paradoxes of infinity, such as Gödel's incompleteness theorems (1931), etc. A philosophical reflection on how a quantum computer might implement the idea of "infinite calculation" is the main subject.
People with the kind of preferences that give rise to the St. Petersburg paradox are problematic, but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
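The divergence behind the St. Petersburg paradox is easy to verify directly. In the following sketch (my illustration, not from the paper), the gamble pays 2^n if the first head appears on toss n, which happens with probability 2^-n; each term therefore contributes exactly 1 to the expected value, so the partial sums grow without bound.

```python
from fractions import Fraction

def partial_expected_value(n_terms):
    """Sum the first n_terms contributions to the St. Petersburg
    expectation: probability 1/2**n of winning payoff 2**n."""
    return sum(Fraction(1, 2**n) * 2**n for n in range(1, n_terms + 1))

# Each term adds exactly 1, so the partial sum after n terms is n:
print(partial_expected_value(10))
print(partial_expected_value(100))
```

Since the partial sums equal the number of terms, no finite value, and hence no value that any outcome could have, can serve as the gamble's expected utility.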
We prove a representation theorem for preference relations over countably infinite lotteries that satisfy a generalized form of the Independence axiom, without assuming Continuity. The representing space consists of lexicographically ordered transfinite sequences of bounded real numbers. This result is generalized to preference orders on abstract superconvex spaces.
A number of philosophers think that grounding is, in some sense, well-founded. This thesis, however, is not always articulated precisely, nor is there a consensus in the literature as to how it should be characterized. In what follows, I consider several principles that one might have in mind when asserting that grounding is well-founded, and I argue that one of these principles, which I call ‘full foundations’, best captures the relevant claim. My argument is by the process of elimination. For each of the inadequate principles, I illustrate its inadequacy by showing either that it excludes cases that should not be ruled out by a well-foundedness axiom for grounding, or that it admits cases that should be ruled out.
We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra- and interpersonal welfare comparisons are encoded in a single ‘individual preorder’. The theorems give axioms that uniquely determine a social preorder in terms of this individual preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms: completeness, continuity, and independence, at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi’s utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a ‘local expected utility’ condition popular in non-expected utility theory, then the social preorder has a ‘local expected total utility’ representation. Fourth, a wide range of non-expected utility theories nevertheless lead to social preorders of outcomes that have been seen as canonically egalitarian, such as rank-dependent social preorders. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.
There are at least three vaguely atomistic principles that have come up in the literature, two explicitly and one implicitly. First, standard atomism is the claim that everything is composed of atoms, and is very often how atomism is characterized in the literature. Second, superatomism is the claim that parthood is well-founded, which implies that every proper parthood chain terminates, and has been discussed as a stronger alternative to standard atomism. Third, there is a principle that lies between these two theses in terms of its relative strength: strong atomism, the claim that every maximal proper parthood chain terminates. Although strong atomism is equivalent to superatomism in classical extensional mereology, it is strictly weaker than it in strictly weaker systems in which parthood is a partial order. And it is strictly stronger than standard atomism in classical extensional mereology and, given the axiom of choice, in such strictly weaker systems as well. Though strong atomism has not, to my knowledge, been explicitly identified, Shiver appears to have it in mind, though it is unclear whether he recognizes that it is not equivalent to standard atomism in each of the mereologies he considers. I prove the logical relationships that hold amongst these three atomistic principles, and argue that, whether one adopts classical extensional mereology or a system strictly weaker than it in which parthood is a partial order, standard atomism is a more defensible addition to one's mereology than either of the other two principles, and that it should be regarded as the best formulation of the atomistic thesis.
Judgment aggregation theory, or rather, as we conceive of it here, logical aggregation theory generalizes social choice theory by having the aggregation rule bear on judgments of all kinds instead of merely preference judgments. It derives from Kornhauser and Sager’s doctrinal paradox and List and Pettit’s discursive dilemma, two problems that we distinguish emphatically here. The current theory has developed from the discursive dilemma, rather than the doctrinal paradox, and the final objective of the paper is to give the latter its own theoretical development along the line of recent work by Dietrich and Mongin. However, the paper also aims at reviewing logical aggregation theory as such, and it covers impossibility theorems by Dietrich, Dietrich and List, Dokow and Holzman, List and Pettit, Mongin, Nehring and Puppe, Pauly and van Hees, providing a uniform logical framework in which they can be compared with each other. The review goes through three historical stages: the initial paradox and dilemma, the scattered early results on the independence axiom, and the so-called canonical theorem, a collective achievement that provided the theory with its specific method of analysis. The paper goes some way towards philosophical logic, first by briefly connecting the aggregative framework of judgment with the modern philosophy of judgment, and second by thoroughly discussing and axiomatizing the ‘general logic’ built in this framework.
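The doctrinal paradox mentioned above can be reproduced in a few lines. In this sketch (my illustration with a standard textbook profile, not drawn from the paper), three judges vote on premises p and q and on the conclusion p∧q: majority voting on the premises endorses the conclusion, while majority voting on the conclusion itself rejects it.

```python
# Each judge's views on the two premises; the conclusion is p and q.
judges = [
    {"p": True,  "q": True},   # judge 1: accepts both premises
    {"p": True,  "q": False},  # judge 2: accepts p only
    {"p": False, "q": True},   # judge 3: accepts q only
]

def majority(votes):
    """True iff a strict majority of the votes are True."""
    return sum(votes) > len(votes) / 2

p_maj = majority([j["p"] for j in judges])                       # 2 of 3 accept p
q_maj = majority([j["q"] for j in judges])                       # 2 of 3 accept q
conclusion_maj = majority([j["p"] and j["q"] for j in judges])   # only 1 of 3 accepts p∧q

premise_based = p_maj and q_maj  # premise-based rule: derive the conclusion from majority premises
print(premise_based, conclusion_maj)
```

The premise-based rule accepts the conclusion while the conclusion-based (majority) rule rejects it, which is exactly the disagreement the two aggregation procedures exhibit in Kornhauser and Sager's example.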
Process philosophies tend to emphasise the value of continuous creation as the core of their discourse. For Bergson, Whitehead, Deleuze, and others the real is ultimately a creative becoming. Critics have argued that there is an irreducible element of (almost religious) belief in this re-evaluation of immanent creation. While I don’t think belief is necessarily a sign of philosophical and existential weakness, in this paper I will examine the possibility for the concept of universal creation to be a political and ethical axiom, the result of a global social contract rather than of a new spirituality. I argue here that a coherent way to fight against potentially totalitarian absolutes is to replace them with a virtual absolute that cannot territorialise without deterritorialising at the same time: the Creal principle.
In this paper, I present the results of an experimental study on intuitions about moral obligation (ought) and ability (can). Many philosophers accept as an axiom the principle known as “Ought Implies Can” (OIC). If the truth of OIC is intuitive, such that it is accepted by many philosophers as an axiom, then we would expect people to judge that agents who are unable to perform an action are not morally obligated to perform that action. The results of my experimental study show that participants were more inclined to judge that an agent ought to perform an action than that the agent can perform the action. Overall, participants said that an agent ought to perform an action even when they said that the agent cannot do it. I discuss the implications of these results for the debate over OIC.
According to dispositionalism about modality, a proposition <p> is possible just in case something has, or some things have, a power or disposition for its truth; and <p> is necessary just in case nothing has a power for its falsity. But are there enough powers to go around? In Yates (2015) I argued that in the case of mathematical truths such as <2+2=4>, nothing has the power to bring about their falsity or their truth, which means they come out both necessary and not possible. Combining this with axiom (T), it is easy to derive a contradiction. I suggested that dispositionalists ought to retreat a little and say that <p> is possible just in case either p, or there is a power to bring it about that p, grounding the possibility of mathematical propositions in their truth rather than in powers. Vetter’s (2015) account has the resources to provide a response to my argument, and in her (2018) she explicitly addresses it by arguing for a plenitude of powers, based on the idea that dispositions come in degrees, with necessary properties a limiting case of dispositionality. On this view there is a power for <2+2=4>, without there being a power to bring about its truth. In this paper I argue that Vetter’s case for plenitude does not work. However, I suggest, if we are prepared to accept metaphysical causation, a case can be made that there is indeed a power for <2+2=4>. (shrink)
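The contradiction alluded to can be spelled out in a short modal derivation (my reconstruction, assuming only the standard duality of the modal operators and axiom (T)):

```latex
% Suppose <2+2=4> (call it p) comes out both necessary and not possible:
\Box p \qquad \neg\Diamond p
% By the duality \Diamond \varphi \equiv \neg\Box\neg\varphi:
\neg\Diamond p \;\vdash\; \Box\neg p
% Applying axiom (T), \Box\varphi \rightarrow \varphi, to each boxed formula:
\Box p \;\vdash\; p \qquad\qquad \Box\neg p \;\vdash\; \neg p
% Hence the contradiction:
p \wedge \neg p
```

So any dispositionalist analysis on which a truth is necessary yet not possible collapses in any normal modal logic containing (T), which is why the retreat described in the abstract is needed.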
I propose a relevance-based independence axiom on how to aggregate individual yes/no judgments on given propositions into collective judgments: the collective judgment on a proposition depends only on people’s judgments on propositions which are relevant to that proposition. This axiom contrasts with the classical independence axiom: the collective judgment on a proposition depends only on people’s judgments on the same proposition. I generalize the premise-based rule and the sequential-priority rule to an arbitrary priority order of the propositions, instead of a dichotomous premise/conclusion order or a linear priority order, respectively. I prove four impossibility theorems on relevance-based aggregation. One theorem simultaneously generalizes Arrow’s Theorem (in its general and indifference-free versions) and the well-known Arrow-like theorem in judgment aggregation.