Standard approaches to proper names, based on Kripke's views, hold that the semantic values of expressions are (set-theoretic) functions from possible worlds to extensions and that names are rigid designators, i.e. that their values are \emph{constant} functions from worlds to entities. The difficulties with these approaches are well-known and in this paper we develop an alternative. Based on earlier work on a higher order logic that is \emph{truly intensional} in the sense that it does not validate the axiom scheme of Extensionality, we develop a simple theory of names in which Kripke's intuitions concerning rigidity are accounted for, but the more unpalatable consequences of standard implementations of his theory are avoided. The logic uses Frege's distinction between sense and reference and while it accepts the rigidity of names it rejects the view that names have direct reference. Names have constant denotations across possible worlds, but the semantic value of a name is not determined by its denotation.
In this paper we define intensional models for the classical theory of types, thus arriving at an intensional type logic ITL. Intensional models generalize Henkin's general models and have a natural definition. As a class they do not validate the axiom of Extensionality. We give a cut-free sequent calculus for type theory and show completeness of this calculus with respect to the class of intensional models via a model existence theorem. After this we turn our attention to applications. Firstly, it is argued that, since ITL is truly intensional, it can be used to model ascriptions of propositional attitude without predicting logical omniscience. In order to illustrate this a small fragment of English is defined and provided with an ITL semantics. Secondly, it is shown that ITL models contain certain objects that can be identified with possible worlds. Essential elements of modal logic become available within classical type theory once the axiom of Extensionality is given up.
The theory of granular partitions is designed to capture in a formal framework important aspects of the selective character of common-sense views of reality. It comprehends the ways in which we can view reality by conceiving its objects as gathered together not merely into sets, but also into wholes of various kinds, partitioned into parts at various levels of granularity. We here represent granular partitions as triples consisting of a rooted tree structure as first component, a domain satisfying the axioms of Extensional Mereology as second component, and a mapping (called ’projection’) of the first into the second as a third component. We define ordering relations among granular partitions; the resulting structures are called partition frames. We then introduce an axiomatic theory whose sentences are interpreted in partition frames.
The naive theory of properties states that for every condition there is a property instantiated by exactly the things which satisfy that condition. The naive theory of properties is inconsistent in classical logic, but there are many ways to obtain consistent naive theories of properties in nonclassical logics. The naive theory of classes adds to the naive theory of properties an extensionality rule or axiom, which states roughly that if two classes have exactly the same members, they are identical. In this paper we examine the prospects for obtaining a satisfactory naive theory of classes. We start from a result by Ross Brady, which demonstrates the consistency of something resembling a naive theory of classes. We generalize Brady’s result somewhat and extend it to a recent system developed by Andrew Bacon. All of the theories we prove consistent contain an extensionality rule or axiom. But we argue that given the background logics, the relevant extensionality principles are too weak. For example, in some of these theories, there are universal classes which are not declared coextensive. We elucidate some very modest demands on extensionality, designed to rule out this kind of pathology. But we close by proving that even these modest demands cannot be jointly satisfied. In light of this new impossibility result, the prospects for a naive theory of classes are bleak.
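The classical inconsistency of naive comprehension that this abstract starts from can be illustrated concretely. The following sketch (an informal illustration, not drawn from the paper; the toy universe is my own choice) encodes Russell's condition over a finite universe of sets and shows why no set can consistently be the class of all non-self-membered sets:

```python
# Toy illustration of why naive comprehension fails classically.
def russell_class(universe):
    # "the class of all sets in `universe` that are not members of themselves"
    return frozenset(x for x in universe if x not in x)

a = frozenset()           # the empty set
b = frozenset({a})        # the set containing only a
universe = {a, b}
r = russell_class(universe)

# Neither a nor b contains itself, so both satisfy Russell's condition.
assert r == frozenset({a, b})

# But r itself cannot consistently belong to any universe it is built over:
# if r were in the universe, then r in r iff r not in r -- a contradiction.
for x in universe:
    assert (x in r) == (x not in x)
```

No Python set can in fact be a member of itself, which is precisely the well-foundedness that naive comprehension fails to guarantee.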
Intensional evidence is any reason to accept a proposition that is not the truth values of the proposition accepted or, if it is a complex proposition, is not the truth values of its propositional contents. Extensional evidence is non-intensional evidence. Someone can accept a complex proposition, but deny its logical consequences when her acceptance is based on intensional evidence, while the logical consequences of the proposition presuppose the acceptance of extensional evidence, e.g., she can refuse the logical consequence of a proposition she accepts because she doesn’t know what the truth-values of its propositional contents are. This tension motivates counterexamples to the negation of conditionals, the propositional analysis of conditionals, hypothetical syllogism, contraposition and or-to-if. It is argued that these counterexamples are non-starters because they rely on a mix of intensionally based premises and extensionally based conclusions. Instead, a genuine counterexample to classical argumentative forms should present circumstances where an intuitively true and extensionally based premise leads to an intuitively false conclusion that is also extensionally based. The other point is that evidentiary concerns about intensionally based beliefs should be constrained by the truth conditions of propositions presented by classical logic, which are nothing more than coherence requirements in distributions of truth value. It is argued that this restriction also dissolves some known puzzles such as conditional stand-offs, Adams pair, the opt-out property, and the burglar’s puzzle.
The link between the high-order metaphysics and abstractions, on the one hand, and choice in the foundation of set theory, on the other hand, can distinguish unambiguously the “good” principles of abstraction from the “bad” ones and thus resolve the “bad company problem” as to set theory. Thus it implies correspondingly a more precise definition of the relation between the axiom of choice and “all company” of axioms in set theory concerning directly or indirectly abstraction: the principle of abstraction, axiom of comprehension, axiom scheme of specification, axiom scheme of separation, subset axiom scheme, axiom scheme of replacement, axiom of unrestricted comprehension, axiom of extensionality, etc.
Abstract. The aim of this paper is to present a topological method for constructing discretizations (tessellations) of conceptual spaces. The method works for a class of topological spaces that the Russian mathematician Pavel Alexandroff defined more than 80 years ago. Alexandroff spaces, as they are called today, have many interesting properties that distinguish them from other topological spaces. In particular, they exhibit a 1-1 correspondence between their specialization orders and their topological structures. Recently, a special type of Alexandroff spaces was used by Ian Rumfitt to elucidate the logic of vague concepts in a new way. According to his approach, conceptual spaces such as the color spectrum give rise to classical systems of concepts that have the structure of atomic Boolean algebras. More precisely, concepts are represented as regular open regions of an underlying conceptual space endowed with a topological structure. Something is subsumed under a concept iff it is represented by an element of the conceptual space that is maximally close to the prototypical element p that defines that concept. This topological representation of concepts comes along with a representation of the familiar logical connectives of Aristotelian syllogistics in terms of natural set-theoretical operations that characterize regular open interpretations of classical Boolean propositional logic. In the last 20 years, conceptual spaces have become a popular tool for dealing with a variety of problems in the fields of cognitive psychology, artificial intelligence, linguistics and philosophy, mainly due to the work of Peter Gärdenfors and his collaborators. By using prototypes and metrics of similarity spaces, one obtains geometrical discretizations of conceptual spaces by so-called Voronoi tessellations. These tessellations are extensionally equivalent to topological tessellations that can be constructed for Alexandroff spaces.
Thereby, Rumfitt’s and Gärdenfors’s constructions turn out to be special cases of an approach that works for a more general class of spaces, namely, for weakly scattered Alexandroff spaces. This class of spaces provides a convenient framework for conceptual spaces as used in epistemology and related disciplines in general. Alexandroff spaces are useful for elucidating problems related to the logic of vague concepts, in particular they offer a solution of the Sorites paradox (Rumfitt). Further, they provide a semantics for the logic of clearness (Bobzien) that overcomes certain problems of the concept of higher-order vagueness. Moreover, these spaces help find a natural place for classical syllogistics in the framework of conceptual spaces. The crucial role of order theory for Alexandroff spaces can be used to refine the all-or-nothing distinction between prototypical and non-prototypical stimuli in favor of a more fine-grained gradual distinction between more-or-less prototypical elements of conceptual spaces. The greater conceptual flexibility of the topological approach helps avoid some inherent inadequacies of the geometrical approach, for instance, the so-called “thickness problem” (Douven et al.) and problems of selecting a unique metric for similarity spaces. Finally, it is shown that only the Alexandroff account can deal with an issue that is gaining more and more importance for the theory of conceptual spaces, namely, the role that digital conceptual spaces play in the area of artificial intelligence, computer science and related disciplines. Keywords: Conceptual Spaces, Polar Spaces, Alexandroff Spaces, Prototypes, Topological Tessellations, Voronoi Tessellations, Digital Topology.
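The geometrical discretization the abstract attributes to Gärdenfors can be sketched in a few lines. The following toy example (prototype coordinates and concept names are illustrative assumptions, not taken from the paper) builds a Voronoi tessellation of a discretized two-dimensional similarity space by assigning each point to its nearest prototype:

```python
# Minimal sketch of a Voronoi tessellation via the nearest-prototype rule.
from math import dist

# Hypothetical prototypes in a 2-D similarity space (illustrative values).
prototypes = {"red": (0.9, 0.1), "green": (0.1, 0.9), "blue": (0.1, 0.1)}

def classify(point):
    # Nearest-prototype rule: the concept whose prototype is maximally close.
    return min(prototypes, key=lambda c: dist(prototypes[c], point))

# Discretize the unit square into an 11x11 grid and sort points into cells.
grid = [(x / 10, y / 10) for x in range(11) for y in range(11)]
cells = {c: [] for c in prototypes}
for p in grid:
    cells[classify(p)].append(p)

# The cells jointly exhaust the grid (ties are broken by min(), so the
# "tessellation" is a genuine partition of the discretized space).
assert sum(len(v) for v in cells.values()) == len(grid)
```

The topological construction for Alexandroff spaces discussed in the paper replaces the metric `dist` with a specialization order, but the resulting cell structure is, as the abstract notes, extensionally equivalent in the cases both approaches cover.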
There are two main ways in which the notion of mereological fusion is usually defined in the current literature in mereology which have been labelled ‘Leśniewski fusion’ and ‘Goodman fusion’. It is well-known that, with Minimal Mereology as the background theory, every Leśniewski fusion also qualifies as a Goodman fusion. However, the converse does not hold unless stronger mereological principles are assumed. In this paper I will discuss how the gap between the two notions can be filled, focussing in particular on two specific sets of principles that appear to be of particular philosophical interest. The first way to make the two notions equivalent can be used to shed some interesting light on the kind of intuition both notions seem to articulate. The second shows the importance of a little-known mereological principle which I will call ‘Mild Supplementation’. As I will show, the mereology obtained by adding Mild Supplementation to Minimal Mereology occupies an interesting position in the landscape of theories that are stronger than Minimal Mereology but weaker than what Achille Varzi and Roberto Casati have labelled ‘Extensional Mereology’.
The paper concentrates on the problem of adequate reflection of fragments of reality via expressions of language and inter-subjective knowledge about these fragments, called here, in brief, language adequacy. This problem is formulated in several aspects, the most important being: the compatibility of language syntax with its bi-level semantics: intensional and extensional. In this paper, various aspects of language adequacy find their logical explication on the ground of the formal-logical theory T of any categorial language L generated by the so-called classical categorial grammar, and also on the ground of its extension to the bi-level, intensional and extensional semantic-pragmatic theory ST for L. In T, according to the token-type distinction of Ch.S. Peirce, L is characterized first as a language of well-formed expression-tokens (wfe-tokens) - material, concrete objects - and then as a language of wfe-types - abstract objects, classes of wfe-tokens. In ST the semantic-pragmatic notions of meaning and interpretation for wfe-types of L of intensional semantics and the notion of denotation of extensional semantics for wfe-types and constituents of knowledge are formalized. These notions allow formulating a postulate (an axiom of categorial adequacy) from which follow all the most important conditions of the language adequacy, including the above, and a structural one connected with three principles of compositionality.
Spinoza scholars have claimed that we are faced with a dilemma: either Spinoza's definitions in his Ethics are real, in spite of indications to the contrary, or the definitions are nominal and the propositions derived from them are false. I argue that Spinoza did not recognize the distinction between real and nominal definitions. Rather, Spinoza classified definitions according to whether they require a priori or a posteriori justification, which is a classification distinct from either the real/nominal or the intensional/extensional classification. I argue that Spinoza uses both a priori and a posteriori definitions in the Ethics and that recognizing both types of definitions allows us to understand Spinoza's geometric method in a new way. We can now understand the geometric method as two methods, one resulting in propositions that Spinoza considers to be absolutely certain and another resulting in propositions that Spinoza does not consider certain. The latter method makes use of a posteriori definitions and postulates, whereas the former method uses only a priori definitions and axioms.
If one takes seriously the idea that a scientific language must be extensional, and accepts Quine’s notion of truth-value-related extensionality, and also recognizes that a scientific language must allow for singular terms that do not refer to existing objects, then there is a problem, since this combination of assumptions must be inconsistent. I will argue for a particular solution to the problem, namely, changing what is meant by the word ‘extensionality’, so that it would not be the truth-value that had to be preserved under the substitution of co-extensional expressions, but the state of affairs that the sentence described. The question is whether or not elementary sentences containing empty singular terms, such as ‘Vulcan rotates’, are extensional in the substitutivity sense. Five conditions are specified under which extensionality in the substitutivity sense of such sentences can be secured. It is demonstrated that such sentences are state-of-affairs-as-extension-related extensional. This implies that such sentences are also truth-value-related extensional in Quine’s sense, but not truth-value-as-extension-related extensional.
In this paper, I introduce the idea that some important parts of contemporary pure mathematics are moving away from what I call the extensional point of view. More specifically, these fields are based on criteria of identity that are not extensional. After presenting a few cases, I concentrate on homotopy theory where the situation is particularly clear. Moreover, homotopy types are arguably fundamental entities of geometry, thus of a large portion of mathematics, and potentially of all mathematics, at least according to some speculative research programs.
In the early 1900s, Russell began to recognize that he, and many other mathematicians, had been using assertions like the Axiom of Choice implicitly, and without explicitly proving them. In working with the Axioms of Choice, Infinity, and Reducibility, and his and Whitehead’s Multiplicative Axiom, Russell came to take the position that some axioms are necessary to recovering certain results of mathematics, but may not be proven to be true absolutely. The essay traces historical roots of, and motivations for, Russell’s method of analysis, which are intended to shed light on his view about the status of mathematical axioms. I describe the position Russell develops in consequence as “immanent logicism,” in contrast to what Irving (1989) describes as “epistemic logicism.” Immanent logicism allows Russell to avoid the logocentric predicament, and to propose a method for discovering structural relationships of dependence within mathematical theories.
I focus on three mereological principles: the Extensionality of Parthood (EP), the Uniqueness of Composition (UC), and the Extensionality of Composition (EC). These principles are not equivalent. Nonetheless, they are closely related (and often equated) as they all reflect the basic nominalistic dictum, No difference without a difference maker. And each one of them—individually or collectively—has been challenged on philosophical grounds. In the first part I argue that such challenges do not quite threaten EP insofar as they are either self-defeating or unsupported. In the second part I argue that they hardly undermine the tenability of EC and UC as well.
When people combine concepts the results are often characterised as “hybrid”, “impossible”, or “humorous”. However, when simply considering them in terms of extensional logic, the novel concepts understood as a conjunctive concept will often lack meaning having an empty extension (consider “a tooth that is a chair”, “a pet flower”, etc.). Still, people use different strategies to produce new non-empty concepts: additive or integrative combination of features, alignment of features, instantiation, etc. All these strategies involve the ability to deal with conflicting attributes and the creation of new (combinations of) properties. We here consider in particular the case where a Head concept has superior ‘asymmetric’ control over steering the resulting concept combination (or hybridisation) with a Modifier concept. Specifically, we propose a dialogical approach to concept combination and discuss an implementation based on axiom weakening, which models the cognitive and logical mechanics of this asymmetric form of hybridisation.
The basic axioms or formal conditions of decision theory, especially the ordering condition put on preferences and the axioms underlying the expected utility formula, are subject to a number of counter-examples, some of which can be endowed with normative value and thus fall within the ambit of a philosophical reflection on practical rationality. Against such counter-examples, a defensive strategy has been developed which consists in redescribing the outcomes of the available options in such a way that the threatened axioms or conditions continue to hold. We examine how this strategy performs in three major cases: Sen's counterexamples to the binariness property of preferences, the Allais paradox of EU theory under risk, and the Ellsberg paradox of EU theory under uncertainty. We find that the strategy typically proves to be lacking in several major respects, suffering from logical triviality, incompleteness, and theoretical insularity. To give the strategy more structure, philosophers have developed “principles of individuation”; but we observe that these do not address the aforementioned defects. Instead, we propose the method of checking whether the strategy can overcome its typical defects once it is given a proper theoretical expansion. We find that the strategy passes the test imperfectly in Sen's case and not at all in Allais's. In Ellsberg's case, however, it comes close to meeting our requirement. But even the analysis of this more promising application suggests that the strategy ought to address the decision problem as a whole, rather than just the outcomes, and that it should extend its revision process to the very statements it is meant to protect. Thus, by and large, the same cautionary tale against redescription practices runs through the analysis of all three cases. A more general lesson, simply put, is that there is no easy way out from the paradoxes of decision theory.
The incompleteness theorems concern the relation of (Peano) arithmetic and (ZFC) set theory, or philosophically, the relation of arithmetical finiteness and actual infinity. The same relation is managed in the framework of set theory by the axiom of choice (respectively, by the equivalent well-ordering ‘theorem’). One may discuss that incompleteness from the viewpoint of set theory and the axiom of choice rather than the usual viewpoint meant in the proof of the theorems. The logical corollaries of that ‘nonstandard’ viewpoint on the relation of set theory and arithmetic are demonstrated.
There are at least three vaguely atomistic principles that have come up in the literature, two explicitly and one implicitly. First, standard atomism is the claim that everything is composed of atoms, and is very often how atomism is characterized in the literature. Second, superatomism is the claim that parthood is well-founded, which implies that every proper parthood chain terminates, and has been discussed as a stronger alternative to standard atomism. Third, there is a principle that lies between these two theses in terms of its relative strength: strong atomism, the claim that every maximal proper parthood chain terminates. Although strong atomism is equivalent to superatomism in classical extensional mereology, it is strictly weaker than it in strictly weaker systems in which parthood is a partial order. And it is strictly stronger than standard atomism in classical extensional mereology and, given the axiom of choice, in such strictly weaker systems as well. Though strong atomism has not, to my knowledge, been explicitly identified, Shiver appears to have it in mind, though it is unclear whether he recognizes that it is not equivalent to standard atomism in each of the mereologies he considers. I prove these logical relationships which hold amongst these three atomistic principles, and argue that, whether one adopts classical extensional mereology or a system strictly weaker than it in which parthood is a partial order, standard atomism is a more defensible addition to one’s mereology than either of the other two principles, and it should be regarded as the best formulation of the atomistic thesis.
A description of consciousness leads to a contradiction with the postulation from special relativity that there can be no connections between simultaneous events. This contradiction points to consciousness involving quantum level mechanisms. The quantum level description of the universe is re-evaluated in the light of what is observed in consciousness, namely 4-dimensional objects. A new improved interpretation of quantum level observations is introduced. From this vantage point the following axioms of consciousness are presented. Consciousness consists of two distinct components, the observed U and the observer I. The observed U consists of all the events I is aware of. A vast majority of these occur simultaneously. Now if I were to be an entity within the space-time continuum, all of these events of U together with I would have to occur at one point in space-time. However, U is distributed over a definite region of space-time (region in brain). Thus, I is aware of a multitude of space-like separated events. It is seen that this awareness necessitates I to be an entity outside the space-time continuum. With I taken as such, a new concept called concept A is introduced. With the help of concept A a very important axiom of consciousness, namely Free Will, is explained. Libet's experiment, which was originally seen to contradict Free Will, in the light of concept A is shown to support it. A variation to Libet's experiment is suggested that will give conclusive proof for concept A and Free Will.
Extensional scientific realism is the view that each believable scientific theory is supported by the unique first-order evidence for it and that if we want to believe that it is true, we should rely on its unique first-order evidence. In contrast, intensional scientific realism is the view that all believable scientific theories have a common feature and that we should rely on it to determine whether a theory is believable or not. Fitzpatrick argues that extensional realism is immune, while intensional realism is not, to the pessimistic induction. I reply that if extensional realism overcomes the pessimistic induction at all, that is because it implicitly relies on the theoretical resource of intensional realism. I also argue that extensional realism, by nature, cannot embed a criterion for distinguishing between believable and unbelievable theories.
We present an elementary system of axioms for the geometry of Minkowski spacetime. It strikes a balance between a simple and streamlined set of axioms and the attempt to give a direct formalization in first-order logic of the standard account of Minkowski spacetime in [Maudlin 2012] and [Malament, unpublished]. It is intended for future use in the formalization of physical theories in Minkowski spacetime. The choice of primitives is in the spirit of [Tarski 1959]: a predicate of betweenness and a four-place predicate to compare the square of the relativistic intervals. Minkowski spacetime is described as a four-dimensional ‘vector space’ that can be decomposed everywhere into a spacelike hyperplane - which obeys the Euclidean axioms in [Tarski and Givant, 1999] - and an orthogonal timelike line. The lengths of other ‘vectors’ are calculated according to Pythagoras’s theorem. We conclude with a Representation Theorem relating models of our system that satisfy second order continuity to the mathematical structure called ‘Minkowski spacetime’ in physics textbooks.
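The "Pythagorean" computation of the interval that the abstract describes can be made concrete. The following sketch (my own notation and sign convention, not the paper's first-order formalism) computes the squared relativistic interval from a decomposition into a timelike component and spacelike components:

```python
# Illustrative sketch: the squared relativistic interval in Minkowski
# spacetime, with signature (+, -, -, -) and events given as (t, x, y, z).
def interval_sq(event_a, event_b, c=1.0):
    # "Pythagoras with a sign flip": s^2 = (c*dt)^2 - (dx^2 + dy^2 + dz^2),
    # i.e. the timelike leg minus the Euclidean length of the spacelike leg.
    dt, dx, dy, dz = (b - a for a, b in zip(event_a, event_b))
    return (c * dt) ** 2 - (dx ** 2 + dy ** 2 + dz ** 2)

# Lightlike separations have interval zero; timelike are positive,
# spacelike negative (under this sign convention).
assert interval_sq((0, 0, 0, 0), (1, 1, 0, 0)) == 0.0
assert interval_sq((0, 0, 0, 0), (2, 1, 0, 0)) > 0
assert interval_sq((0, 0, 0, 0), (1, 2, 0, 0)) < 0
```

The paper's four-place interval-comparison predicate can be read as comparing such squared intervals without ever naming their numerical values, which is what keeps the axiomatization first-order.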
Hylomorphically complex objects are things that change their parts or matter or that might have, or have had, different parts or matter. Often ontologists analyze such objects in terms of sets (or functions, understood set-theoretically) or other extensional entities such as mereological fusions or quantities of matter. I urge two reasons for being wary of any such analyses. First, being extensional, such things as sets are ill-suited to capture the characteristic modal and temporal flexibility of hylomorphically complex objects. Secondly, sets are often appealed to because they seem to contain their members. But the idea that sets do contain their members, in the ordinary sense of containment, is a substantive metaphysical position that makes analyses that rely on that idea for their plausibility much more metaphysically committing than is generally thought.
In this article I develop an elementary system of axioms for Euclidean geometry. On one hand, the system is based on the symmetry principles which express our a priori ignorant approach to space: all places are the same to us, all directions are the same to us and all units of length we use to create geometric figures are the same to us. On the other hand, through the process of algebraic simplification, this system of axioms directly provides Weyl’s system of axioms for Euclidean geometry. The system of axioms, together with its a priori interpretation, offers new views to philosophy and pedagogy of mathematics: it supports the thesis that Euclidean geometry is a priori, it supports the thesis that in modern mathematics Weyl’s system of axioms is dominant over Euclid’s system because it reflects the a priori underlying symmetries, and it gives a new and promising approach to learning geometry which, through Weyl’s system of axioms, leads from the essential geometric symmetry principles of a mathematical nature directly to modern mathematics.
Neo-Fregean approaches to set theory, following Frege, have it that sets are the extensions of concepts, where concepts are the values of second-order variables. The idea is that, given a second-order entity $X$, there may be an object $\varepsilon X$, which is the extension of $X$. Other writers have also claimed a similar relationship between second-order logic and set theory, where sets arise from pluralities. This paper considers two interpretations of second-order logic—as being either extensional or intensional—and whether either is more appropriate for this approach to the foundations of set theory. Although there seems to be a case for the extensional interpretation resulting from modal considerations, I show how there is no obstacle to starting with an intensional second-order logic. I do so by showing how the $\varepsilon$ operator can have the effect of “extensionalizing” intensional second-order entities.
Many believe that, if true, reason-statements of the form ‘that X is F is a reason to φ’ describe a ‘favouring-relation’ between the fact that X is F and the act of φing. This favouring-relation has been assumed to share many features of other, more concrete relations. This combination of views leads to immediate problems. Firstly, unlike statements about many other relations, reason-statements can be true even when the relata do not exist, i.e., when the relevant facts do not obtain and the relevant acts are not done. Secondly, the previous combination of views also makes it very difficult to draw the distinction between agent-relative and agent-neutral reasons. I argue that we should think that the predicate ‘is a reason to’ creates non-extensional contexts in the statements in which it is used. This would both solve the previous problems and avoid the awkward consequences of the so-called slingshot argument.
I discuss Frege's argument - later called the slingshot - that if a construction is extensional and preserves logical equivalence then it is truth-functional. I consider some simple apparent counterexamples and conclude that they are not sentence-embedding in the required way.
Spinoza's causal axiom is at the foundation of the Ethics. I motivate, develop and defend a new interpretation that I call the ‘causally restricted interpretation’. This interpretation solves several longstanding puzzles and helps us better understand Spinoza's arguments for some of his most famous doctrines, including his parallelism doctrine and his theory of sense perception. It also undermines a widespread view about the relationship between the three fundamental, undefined notions in Spinoza's metaphysics: causation, conception and inherence.
The Hyperuniverse Programme, introduced in Arrigoni and Friedman (2013), fosters the search for new set-theoretic axioms. In this paper, we present the procedure envisaged by the programme to find new axioms and the conceptual framework behind it. The procedure comes in several steps. Intrinsically motivated axioms are those statements which are suggested by the standard concept of set, i.e. the `maximal iterative concept', and the programme identifies higher-order statements motivated by the maximal iterative concept. The satisfaction of these statements (H-axioms) in countable transitive models, the collection of which constitutes the `hyperuniverse' (H), has remarkable first-order consequences, some of which we review in section 5.
The independence phenomenon in set theory, while pervasive, can be partially addressed through the use of large cardinal axioms. A commonly assumed idea is that large cardinal axioms are species of maximality principles. In this paper, I argue that whether or not large cardinal axioms count as maximality principles depends on prior commitments concerning the richness of the subset forming operation. In particular I argue that there is a conception of maximality through absoluteness, on which large cardinal axioms are restrictive. I argue, however, that large cardinals are still important axioms of set theory and can play many of their usual foundational roles.
Discussion of new axioms for set theory has often focused on conceptions of maximality, and how these might relate to the iterative conception of set. This paper provides critical appraisal of how certain maximality axioms behave on different conceptions of ontology concerning the iterative conception. In particular, we argue that forms of multiversism and actualism face complementary problems: the latter view is unable to use maximality axioms that make use of extensions, while the former has to contend with the existence of extensions violating maximality axioms. An analysis of two kinds of multiversism, a Zermelian form and a Skolemite form, leads to the conclusion that the kind of maximality captured by an axiom differs substantially according to background ontology.
Axiom weakening is a technique that allows for a fine-grained repair of inconsistent ontologies. Its main advantage is that it repairs ontologies by making axioms less restrictive rather than by deleting them, employing refinement operators. In this paper, we build on previously introduced axiom weakening for ALC and make it more widely applicable by extending its definitions to deal with SROIQ, the expressive and decidable description logic underlying OWL 2 DL. We extend the definitions of the refinement operator to deal with SROIQ constructs, in particular with role hierarchies, cardinality constraints and nominals, and illustrate its application. Finally, we discuss the problem of termination of an iterated weakening procedure.
Formalizing Euclid’s first axiom. Bulletin of Symbolic Logic 20 (2014) 404–5. (Coauthor: Daniel Novotný)

Euclid [fl. 300 BCE] divides his basic principles into what came to be called ‘postulates’ and ‘axioms’: two words that are synonyms today but which are commonly used to translate Greek words meant by Euclid as contrasting terms.

Euclid’s postulates are specifically geometric: they concern geometric magnitudes, shapes, figures, etc., and nothing else. The first: “to draw a line from any point to any point”; the last: the parallel postulate.

Euclid’s axioms are general principles of magnitude: they concern geometric magnitudes and magnitudes of other kinds as well, even numbers. The first is often translated “Things that equal the same thing equal one another”.

There are other differences that are or might become important.

Aristotle [fl. 350 BCE] meticulously separated his basic principles [archai, singular archê] according to subject matter: geometrical, arithmetic, astronomical, etc. However, he made no distinction that can be assimilated to Euclid’s postulate/axiom distinction.

Today we divide basic principles into non-logical [topic-specific] and logical [topic-neutral], but this too is not the same as Euclid’s distinction. In this regard it is important to be cognizant of the difference between equality and identity, a distinction often crudely ignored by modern logicians; Tarski is a rare exception. The four angles of a rectangle are equal to, not identical to, one another; the size of one angle of a rectangle is identical to the size of any other of its angles. No two angles are identical to each other.

The sentence ‘Things that equal the same thing equal one another’ contains no occurrence of the word ‘magnitude’. This paper considers the problem of formalizing the proposition Euclid intended as a principle of magnitudes while being faithful to its logical form and to its information content.
Future Logic is an original, wide-ranging treatise of formal logic. It deals with deduction and induction, with categorical and conditional propositions, involving the natural, temporal, extensional, and logical modalities. Traditional and modern logic have covered in detail only formal deduction from actual categoricals, or from logical conditionals (conjunctives, hypotheticals, and disjunctives). Deduction from modal categoricals has also been considered, though very vaguely and roughly; whereas deduction from natural, temporal and extensional forms of conditioning has been all but totally ignored. As for induction, apart from the elucidation of adductive processes (the scientific method), almost no formal work has been done. This is the first work ever to strictly formalize the inductive processes of generalization and particularization, through the novel methods of factorial analysis, factor selection and formula revision. It is also the first work ever to develop a formal logic of the natural, temporal and extensional types of conditioning (as distinct from logical conditioning), including their production from modal categorical premises. Future Logic contains a great many other new discoveries, organized into a unified, consistent and empirical system, with precise definitions of the various categories and types of modality (including logical modality), and full awareness of the epistemological and ontological issues involved. Though strictly formal, it uses ordinary language wherever symbols can be avoided. Among its other contributions: a full list of the valid modal syllogisms (which is more restrictive than previous lists); the main formalities of the logic of change (which introduces a dynamic instead of merely static approach to classification); the first formal definitions of the modal types of causality; a new theory of class logic, free of the Russell Paradox; as well as a critical review of modern metalogic. But it is impossible to list briefly all the innovations in logical science, and therefore in epistemology and ontology, that this book presents; it has to be read for its scope to be appreciated.
Much of the ontology done in the analytic tradition of philosophy nowadays is founded on some of Quine’s proposals. His naturalism and the binding of existence to quantification are, respectively, two of his most influential metaphilosophical and methodological theses. Nevertheless, many of his specific claims are quite controversial and today have few followers. Some of them are: (a) his rejection of higher-order logic; (b) his resistance to accepting the intensionality of ontological commitments; (c) his rejection of first-order modal logic; and (d) his rejection of the distinction between analytic and synthetic statements. I intend to argue that these controversial negative claims are just interconnected consequences of those much more accepted and apparently less harmful metaphilosophical and methodological theses, and that the glue linking all these consequences to their causes is the notion of extensionality.
In this paper, we outline an approach to giving extensional truth-theoretic semantics for what have traditionally been seen as opaque sentential contexts. We outline an approach to providing a compositional truth-theoretic semantics for opaque contexts which does not require quantifying over intensional entities of any kind, and meets standard objections to such accounts. The account we present aims to meet the following desiderata on a semantic theory T for opaque contexts: (D1) T can be formulated in a first-order extensional language; (D2) T does not require quantification over intensional entities (i.e., meanings, propositions, properties, relations, or the like) in its treatment of opaque contexts; (D3) T captures the entailment relations that hold in virtue of form between sentences in the language for which it is a theory; (D4) T has a finite number of axioms. If the approach outlined here is correct, it resolves a longstanding complex of problems in metaphysics, the philosophy of mind and the philosophy of language.
In quantum theory every state can be diagonalized, i.e. decomposed as a convex combination of perfectly distinguishable pure states. This elementary structure plays a ubiquitous role in quantum mechanics, quantum information theory, and quantum statistical mechanics, where it provides the foundation for the notions of majorization and entropy. A natural question then arises: can we reconstruct these notions from purely operational axioms? We address this question in the framework of general probabilistic theories, presenting a set of axioms that guarantee that every state can be diagonalized. The first axiom is Causality, which ensures that the marginal of a bipartite state is well defined. Then, Purity Preservation states that the set of pure transformations is closed under composition. The third axiom is Purification, which allows one to assign a pure state to the composition of a system with its environment. Finally, we introduce the axiom of Pure Sharpness, stating that for every system there exists at least one pure effect occurring with unit probability on some state. For theories satisfying our four axioms, we show a constructive algorithm for diagonalizing every given state. The diagonalization result allows us to formulate a majorization criterion that captures the convertibility of states in the operational resource theory of purity, where random reversible transformations are regarded as free operations.
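For readers who want the standard notation behind the abstract's opening claim, the diagonalization it refers to is the textbook spectral decomposition of a density matrix (general quantum theory, not a result specific to this paper):

\[
\rho \;=\; \sum_i p_i\, |\psi_i\rangle\langle\psi_i|,
\qquad \langle \psi_i | \psi_j \rangle = \delta_{ij},
\qquad p_i \ge 0, \quad \sum_i p_i = 1,
\]

where the orthonormal states \(|\psi_i\rangle\) are perfectly distinguishable and the eigenvalues \(p_i\) form the probability distribution on which majorization and the von Neumann entropy \(S(\rho) = -\sum_i p_i \log p_i\) are defined. The paper's question is which operational axioms recover this structure without assuming the Hilbert-space formalism.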
Need considerations play an important role in empirically informed theories of distributive justice. We propose a concept of need-based justice that is related to social participation and provide an ethical measurement of need-based justice. The β-ε-index satisfies the need principle, monotonicity, sensitivity, transfer and several ‘technical’ axioms. A numerical example is given.
In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).
Ontology engineering is a hard and error-prone task, in which small changes may lead to errors, or even produce an inconsistent ontology. As ontologies grow in size, the need for automated methods for repairing inconsistencies while preserving as much of the original knowledge as possible increases. Most previous approaches to this task are based on removing a few axioms from the ontology to regain consistency. We propose a new method based on weakening these axioms to make them less restrictive, employing refinement operators. We introduce the theoretical framework for weakening DL ontologies, propose algorithms to repair ontologies based on the framework, and provide an analysis of the computational complexity. Through an empirical analysis made over real-life ontologies, we show that our approach preserves significantly more of the original knowledge of the ontology than removing axioms.
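As a toy illustration of the idea behind the abstract above (a hypothetical sketch, not the paper's actual algorithm or data): weakening a subsumption axiom C ⊑ D replaces D with a more general concept D′ with D ⊑ D′, so the repaired axiom constrains fewer models instead of disappearing from the ontology entirely.

```python
# Toy sketch of axiom weakening by generalizing the right-hand side of
# a subsumption axiom. The class hierarchy and concept names below are
# invented for illustration only.

# Hypothetical hierarchy: each concept maps to its direct superclass.
HIERARCHY = {
    "Penguin": "Bird",
    "Bird": "Animal",
    "Animal": "Thing",
}

def superclasses(concept):
    """Yield increasingly general superclasses of a concept."""
    while concept in HIERARCHY:
        concept = HIERARCHY[concept]
        yield concept

def weaken(axiom):
    """Return weaker versions of a subsumption axiom (sub, sup).

    Each result keeps the same left-hand side but generalizes the
    right-hand side, so every result is logically weaker (it excludes
    fewer interpretations) than the original axiom.
    """
    sub, sup = axiom
    return [(sub, s) for s in superclasses(sup)]

# If "Penguin ⊑ Bird" participates in an inconsistency, it can be
# relaxed step by step rather than deleted outright:
print(weaken(("Penguin", "Bird")))
# [('Penguin', 'Animal'), ('Penguin', 'Thing')]
```

A real refinement operator over a description logic such as ALC or SROIQ also generalizes inside complex concept expressions (disjunctions, quantifiers, cardinalities), but the repair loop has the same shape: try successively weaker candidates until consistency is restored.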
Hilbert’s choice operators τ and ε, when added to intuitionistic logic, strengthen it. In the presence of certain extensionality axioms they produce classical logic, while in the presence of weaker decidability conditions for terms they produce various superintuitionistic intermediate logics. In this thesis, I argue that there are important philosophical lessons to be learned from these results. To make the case, I begin with a historical discussion situating the development of Hilbert’s operators in relation to his evolving program in the foundations of mathematics and in relation to philosophical motivations leading to the development of intuitionistic logic. This sets the stage for a brief description of the relevant part of Dummett’s program to recast debates in metaphysics, and in particular disputes about realism and anti-realism, as closely intertwined with issues in philosophical logic, with the acceptance of classical logic for a domain reflecting a commitment to realism for that domain. Then I review extant results about what is provable and what is not when one adds epsilon to intuitionistic logic, largely due to Bell and DeVidi, and I give several new proofs of intermediate logics from intuitionistic logic + ε without identity. With all this in hand, I turn to a discussion of the philosophical significance of choice operators.
Among the conclusions I defend are: that these results provide a finer-grained basis for Dummett’s contention that commitment to classically valid but intuitionistically invalid principles reflects metaphysical commitments, by showing those principles to be derivable from certain existence assumptions; that Dummett’s framework is improved by these results, since they show that questions of realism and anti-realism are not an “all or nothing” matter, there being plausibly metaphysical stances between the poles of anti-realism and realism, because different sorts of ontological assumptions yield intermediate rather than classical logic; and that these intermediate positions link up in interesting ways with our intuitions about objectivity and reality, for instance via intriguing everyday concepts such as “is smart”, which I suggest involve a number of distinct dimensions that might themselves be objective but whose multivalent structure places them between being objective and not. Finally, I discuss the implications of these results for ongoing debates about the status of arbitrary and ideal objects in the foundations of logic, showing among other things that much of the discussion is flawed because it does not recognize the degree to which the claims being made depend on the presumption that one is working with a very strong logic.
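For context, the characteristic schema governing Hilbert's ε-operator (standard in the literature, not specific to this thesis) is:

\[
A(t) \;\rightarrow\; A(\varepsilon x\, A(x)),
\]

so \(\varepsilon x\, A(x)\) denotes a witness for \(A\) whenever any term satisfies it. The extensionality principle the abstract alludes to is the schema

\[
\forall x \bigl(A(x) \leftrightarrow B(x)\bigr) \;\rightarrow\; \varepsilon x\, A(x) = \varepsilon x\, B(x),
\]

and the result reviewed above is that adding ε together with this extensionality schema to intuitionistic logic suffices to derive the law of excluded middle, collapsing the logic to classical.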
The purpose of this paper is to challenge some widespread assumptions about the role of the modal axiom 4 in a theory of vagueness. In the context of vagueness, axiom 4 usually appears as the principle ‘If it is clear (determinate, definite) that A, then it is clear (determinate, definite) that it is clear (determinate, definite) that A’, or, more formally, CA → CCA. We show how in the debate over axiom 4 two different notions of clarity are in play (Williamson-style “luminous” or self-revealing clarity, and concealable clarity) and what their respective functions are in accounts of higher-order vagueness. On this basis, we argue first that, contrary to common opinion, higher-order vagueness and S4 are perfectly compatible. This is in response to claims like that by Williamson that, if vagueness is defined with the help of a clarity operator that obeys axiom 4, higher-order vagueness disappears. Second, we argue that, contrary to common opinion, (i) bivalence-preservers (e.g. epistemicists) can without contradiction condone axiom 4 (by adopting what elsewhere we call columnar higher-order vagueness), and (ii) bivalence-discarders (e.g. open-texture theorists, supervaluationists) can without contradiction reject axiom 4. Third, we rebut a number of arguments that have been produced by opponents of axiom 4, in particular those by Williamson. (The paper is pitched towards graduate students with basic knowledge of modal logic.)
This article proposes a way of doing type theory informally, assuming a cubical style of reasoning. It can thus be viewed as a first step toward a cubical alternative to the program of informalization of type theory carried out in the homotopy type theory book for dependent type theory augmented with axioms for univalence and higher inductive types. We adopt a cartesian cubical type theory proposed by Angiuli, Brunerie, Coquand, Favonia, Harper, and Licata as the implicit foundation, confining our presentation to elementary results such as function extensionality, the derivation of weak connections and path induction, the groupoid structure of types, and the Eckmann–Hilton duality.
Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating ontologies corresponding to multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their generation can be prevented during ontology generation. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of views of heterogeneous agents. Second, we prevent the generation of inconsistencies by letting the agents engage in a turn-based rational protocol about the axioms to be added to the integrated ontology. We instantiate the two approaches using real-world ontologies and compare them by measuring the levels of satisfaction of the agents w.r.t. the ontology obtained by the two procedures.