The total ontological unification of matter at all levels of reality as a whole, the "grasp" of its dialectical structure, of the dimensionality of space, and of the structure of the language of nature, the "house of Being" [1], makes it possible to see the "place" of information and to understand its nature as a phenomenon of Ontological Memory: the measure of being of the whole, "the soul of matter", the qualitative quality of the absolute forms of existence of matter (absolute states). "Information" and "time" are multivalent phenomena of Ontological Memory (OntoMemory), substantiating the essential unity of the world along the "horizontal" and the "vertical". The ontological construction of the dialectics of the self-motion of the Logos, the total unification of matter, and the "grasp" of the nature of information lead to the necessity of introducing a new unit of information that expresses the ideas of dialectical formation and of the generation of new structures and meanings, namely the Delta-Logit (Δ-Logit): a qualitative quantum-prototecton, a fundamental organizer, an absolute existential-extremum. This simplest mathematical symbol represents the dialectical microprocessor of Nature. John A. Wheeler's ontological formula "It from Bit" [2] is "grasped" as the first dialectical link in the chain of ontological formulas → "It from Δ-Logit" → "It from OntoMemory" → "It from Logos, Logos into It". Ontological Memory is the core and attractor of the new conceptual structure of the world of the information age, based on the Absolute generating structure, the representant of the onto-genetic code of the Universe.
Kripke frames (and models) provide a suitable semantics for sub-classical logics; for example, intuitionistic logic (of Brouwer and Heyting) axiomatizes the reflexive and transitive Kripke frames (with persistent satisfaction relations), and the basic logic (of Visser) axiomatizes transitive Kripke frames (with persistent satisfaction relations). Here, we investigate whether Kripke frames/models could provide a semantics for fuzzy logics. For each axiom of the basic fuzzy logic, necessary and sufficient conditions are sought for Kripke frames/models that satisfy it. It turns out that the only fuzzy logics (logics containing the basic fuzzy logic) which are sound and complete with respect to a class of Kripke frames/models are the extensions of the Gödel logic (or the super-intuitionistic logic of Dummett); indeed, this logic is sound and strongly complete with respect to reflexive, transitive and connected (linear) Kripke frames (with persistent satisfaction relations). This provides a semantic characterization of the Gödel logic among (propositional) fuzzy logics.
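The frame conditions at issue can be illustrated with a small, self-contained sketch (a brute-force check over three-world frames; all function names here are illustrative, and this is not the paper's proof method). Dummett's linearity axiom (A → B) ∨ (B → A) holds on a connected (linear) frame with persistent valuations, but fails on a branching one:

```python
from itertools import product

WORLDS = [0, 1, 2]

def persistent(R, val):
    """A valuation is persistent if truth at u carries upward to every v with u R v."""
    return all(val[v] for u, v in product(WORLDS, WORLDS) if R(u, v) and val[u])

def forces_imp(R, val_a, val_b, w):
    """Kripke clause: w forces A -> B iff every v with w R v forcing A also forces B."""
    return all(val_b[v] for v in WORLDS if R(w, v) and val_a[v])

def dummett_valid(R):
    """Check (A -> B) v (B -> A) at every world, for all persistent valuations."""
    vals = [dict(zip(WORLDS, bits))
            for bits in product([False, True], repeat=len(WORLDS))]
    pvals = [v for v in vals if persistent(R, v)]
    return all(forces_imp(R, a, b, w) or forces_imp(R, b, a, w)
               for a in pvals for b in pvals for w in WORLDS)

# A reflexive, transitive, connected (linear) frame: 0 <= 1 <= 2.
linear = lambda u, v: u <= v
# A reflexive, transitive but branching frame: 0 sees 1 and 2, which are incomparable.
branching = lambda u, v: u == v or u == 0
```

Running `dummett_valid` on the two frames separates them: the linearity axiom is valid on the linear frame and refuted on the branching one, in line with the characterization of the Gödel logic stated above.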
The two-fold ontological character of linguistic objects, revealed by the distinction between "type" and "token" introduced by Ch. S. Peirce, can serve as the basis of a two-fold, both theoretical and axiomatic, approach to language. Referring to some ideas in A. A. Markov's work [1954] (in Russian) on the theory of algorithms and in some earlier papers of the author, the problem of formalizing the theories of concrete and abstract words raised by J. Słupecki is solved. The construction of the theories presented here has two levels. The axiomatic theory of label-tokens, material, physical linguistic objects, constitutes the first. Label-types, following the literature of the subject, are defined on the other level as equivalence classes of equiform label-tokens. Assuming the opposite point of view, one can accept that a theory of label-types, abstract labels, formalized on the first level, in which it is possible to define the notion of label-token as well as the derivative notions on the second level, should become the basis of the formalization of the theory of linguistic expressions and of the theory of language in general. The axioms and definitions of both theories of labels, Tk and Tp, representing the two approaches to the ontology of language, are included in the sequel of the abstract. The foundations of the theory of labels Tk, in which the primary assumption of the existence of label-types is superfluous, have been given on the basis of the author's monograph "Teorie Języków Syntaktycznie Kategorialnych" ("The Theories of Syntactically Categorial Languages"), PWN, Warszawa-Wrocław 1985. The basis of the theory of labels Tp, which takes the other position into account, is presented here for the first time. Some extended ideas of the paper are also presented in the author's paper "Logiczne podstawy ontologii składni języka" ("Logical Foundations of Language Syntax Ontology"), Studia Filozoficzne 6-7 (271-272), (1988), pp. 263-284.
This article concerns the explanation of the scientific enterprise. There are several philosophical circles whose approaches a philosopher must consider. Postmodern thinkers generally reject the universality of reason; they hold that experience cannot yield general knowledge, and they emphasize partial and plural knowledge. Every human being has his own knowledge and interpretation. The world is always becoming. Diversity is an inclusive epistemological principle. Naturally, in such a state, scientific activity is a senseless process: the postmodern world is a post-scientific world. Another collection of approaches to science belongs to analytical philosophy, the main part of which centers on language. Language is the center of the world; we can study the world through language, and language is the possibility of knowledge. If we overcome the language games, we can study the world accurately. Generally, at present, it seems that rational thought is a marginal point in a sea of irrationality; we can probably speak of an anarchistic epistemological situation. Another issue is the role of observation in forming scientific activity. Here the question is whether observation is a technique of gathering information or the final reference of original scientific analysis. Related to this issue, the confrontation of science with the families and categories of objects is a central problem. For more than two thousand years, the totalitarian heritage of Aristotelian logic has been the human being's guideline for categorization. In order to reach an inclusive and applicable result, we must categorize everything; but it is not really known whether the world is categorized in the same way. The accordance of conceptual categories with their extensions in the world is one of the main problems that has caused the scientific system to malfunction.
It would be better to say that it has deactivated science. Moreover, there are many interpretations of each event, based on the conditions of its occurrence. Consider a fever, for example: in diagnosing its cause, the most significant factor is the role of the physician. There is no sense in saying that observation has diagnosed the cause of the fever; it is the observer who plays the determinative role. Another problem is the dispute between realism and instrumentalism. Instrumentalists believe that, in forming a theory, concepts are merely tools of scientific research; realists, by contrast, hold that we confront the concepts in an original way. This chaos causes the scientific system to collapse. The question now is what the solution is. I would like to propose a scheme constructed on one main principle: there is no occurrence in the world unless we can trace the effect of consciousness in that occurrence. In other words, there is no accident in the world. One can object that this is precisely the problem: the effects of consciousness, if there is such a thing, are not traceable in the world. An immediate response is that this objection rests on an inefficient technique. Observation is a technique; it is not a method of interpretation. Some argue that the empirical approach cannot lead us to an authentic criterion for knowing the world, because there is no criterion for truth, and that if there were such a criterion, we could obtain it in several ways. The solution to the first problem is simple: every occurrence is a truth, even if we take it to be a partial truth; therefore we face a multi-facial world. This also solves the second problem: if consciousness is essentially the origin of the world and of all occurrences, we face consciousness as a unique truth, even if its creations contradict each other. But if there is no consciousness as a creator of the world, and we cannot trace it in all existents, what should we do?
If there is consciousness, why do its creations contradict each other? Since consciousness is a will, it is able to decide in any situation as it intends. The issue of contradictory intentions of consciousness has a teleological aspect, and we cannot judge its teleological substance at present. If we were to believe that there is no consciousness at all, this would contradict the main practical principle of our lives: every action we take is conjoined with consciousness. We learn many things from many sources. The substance of our instincts and emotions can be questioned. It is true that we cannot actually analyze our instincts and emotions, but we can discuss explicitly the rational elements of the instincts. What are these elements? They are the qualities of their appearance. The quality of the appearance of instinct differs among beings; the strength, depth and duration of these irrational qualities differ in all beings. What factor or factors cause the differences among these aspects? There is certainly a determinative factor: the same factor that determines the occurrence of any event. Even if we believe in partial knowledge, we must accept the confrontation of occurrences. Thus it is not possible to refuse occurrences as the reference unit of knowledge. Despite their differences, every occurrence, even as a single occurrence, happens, and we can regard it as the reference unit of knowledge. We can then consider every occurrence as an occurred object. When an occurrence happens, it is a determined object, and wherever there is a determination, there is certainly a determiner. Why do we not encounter a multi-determinative consciousness? Because there are only a few aspects of the unique super-consciousness: motion, growth, and the amount of constituent material in every being. All over the world, the difference in these factors creates all differences. Thus we do not encounter a plural consciousness.
Is the birth of the world accidental? Actually, there is no accident; the accident has no meaning in the world. As explained above, the constructing element of the world is the occurrence, and an occurrence can be analyzed. Therefore, we should not speak of accident merely because we are not yet able to analyze all phenomena. At present we can look at every subject from different points of view; thus there is theoretically no possibility of saying that a certain matter is the consequence of accident. The accident is a popular and inexact concept with no philosophical content. Moreover, defining consciousness by way of its contradiction causes the collapse of all epistemological axioms; we may ask why we must not define other logical principles against their contradictions. Thinkers who deny the role of logic have nevertheless not been able to deny its function in forming an epistemological system, because that is not an easy task. This article also discussed some other issues, such as taking restriction, instead of Descartes' cogito, as a stable and firm foundation of knowledge. In addition to these philosophical points, other issues were propounded, such as the role of the instrumental mechanism in forming the new science and the great alterations in the philosophy and history of Europe. Many issues threaten human civilization, and the only way to deal with such threats is certainly to use scientific facilities. Scientific possibilities undoubtedly come from a complete and inclusive scientific theory. The present article is an attempt to explain the scientific enterprise in order to advance human achievement, however modestly.
The present crisis of foundations in Fundamental Science manifests itself as a comprehensive conceptual crisis: a crisis of understanding, of interpretation and representation, of methodology, a loss of certainty. Fundamental Science has "rested" on the understanding of matter, space, the nature of the "laws of nature", the fundamental constants, number, time, information, consciousness. The question "What is fundamental?" pushes the mind to further questions → Is Fundamental Science fundamental? → What is the most fundamental in the Universum?.. Physics, do not be afraid of Metaphysics! Levels of fundamentality. Problem No. 1 of Fundamental Science is the ontological justification (basification) of mathematics. To understand is to "grasp" Structure ("La Structure mère"). Key ontological ideas for emerging from the crisis of understanding: the total unification of matter across all levels of the Universum, one ontological superaxiom, one ontological superprinciple. The ontological construction method of the knowledge basis (framework, carcass, foundation). The triune (absolute, ontological) space of the eternal generation of new structures and meanings. The superconcept of the scientific world picture of the Information era: Ontological (structural, cosmic) memory as the "soul of matter", the measure of the being of the Universum as a holistic generating process. The result of the ontological construction of the knowledge basis: the primordial (absolute) generating structure is the most fundamental in the Universum.
I will introduce and motivate eliminativist super-relationism: the conjunction of relationism about spacetime and eliminativism about material objects. According to the view, the universe is a big collection of spatio-temporal relations and natural properties, and no substance (material or spatio-temporal) exists in it. The view is original because eliminativism about material objects, when understood as including not only ordinary objects like tables or chairs but also physical particles, is generally taken to imply substantivalism about spacetime: if properties are directly instantiated by spacetime without the mediation of material objects, then, surely, spacetime has to be a substance. After briefly introducing the two debates about spacetime (§1) and material objects (§2), I will present Schaffer's super-substantivalism (§3), the conjunction of substantivalism about spacetime and eliminativism about material objects at the fundamental level. I shall then expose and discuss the assumption from which the implication from eliminativism to substantivalism is drawn, and discuss the compatibility of eliminativism with relationism: if spacetime is not a substance, and if material objects are not real, how are we to understand the instantiation of properties (§4)? And what are the relata of spatio-temporal relations (§5)? I then show that each argument in favor of super-substantivalism offered by Schaffer also holds for super-relationism (§6) and examine several metaphysical consequences of the view (§7). I conclude that both super-substantivalism and super-relationism are compatible with Schaffer's priority monism (§8).
This paper aims to build a bridge between two areas of philosophical research, the structure of kinds and metaphysical modality. Our central thesis is that kinds typically involve super-explanatory properties, and that these properties are therefore metaphysically essential to natural kinds. Philosophers of science who work on kinds tend to emphasize their complexity, and are generally resistant to any suggestion that they have "essences". The complexities are real enough, but they should not be allowed to obscure the way that kinds are typically unified by certain core properties. We shall show how this unifying role offers a natural account of why certain properties are metaphysically essential to kinds.
We argue that explanationist views in epistemology continue to face persistent challenges to both their necessity and their sufficiency. This is so despite arguments offered by Kevin McCain in a paper recently published in this journal which attempt to show otherwise. We highlight ways in which McCain’s attempted solutions to problems we had previously raised go awry, while also presenting a novel challenge for all contemporary explanationist views.
Spinoza's causal axiom is at the foundation of the Ethics. I motivate, develop and defend a new interpretation that I call the ‘causally restricted interpretation’. This interpretation solves several longstanding puzzles and helps us better understand Spinoza's arguments for some of his most famous doctrines, including his parallelism doctrine and his theory of sense perception. It also undermines a widespread view about the relationship between the three fundamental, undefined notions in Spinoza's metaphysics: causation, conception and inherence.
The Hyperuniverse Programme, introduced in Arrigoni and Friedman (2013), fosters the search for new set-theoretic axioms. In this paper, we present the procedure envisaged by the programme to find new axioms and the conceptual framework behind it. The procedure comes in several steps. Intrinsically motivated axioms are those statements which are suggested by the standard concept of set, i.e. the `maximal iterative concept', and the programme identifies higher-order statements motivated by the maximal iterative concept. The satisfaction of these statements (H-axioms) in countable transitive models, the collection of which constitutes the `hyperuniverse' (H), has remarkable first-order consequences, some of which we review in section 5.
Discussion of new axioms for set theory has often focused on conceptions of maximality, and how these might relate to the iterative conception of set. This paper provides critical appraisal of how certain maximality axioms behave on different conceptions of ontology concerning the iterative conception. In particular, we argue that forms of multiversism and actualism face complementary problems: the latter view is unable to use maximality axioms that make use of extensions, while the former has to contend with the existence of extensions violating maximality axioms. An analysis of two kinds of multiversism, a Zermelian form and a Skolemite form, leads to the conclusion that the kind of maximality captured by an axiom differs substantially according to background ontology.
Formalizing Euclid's first axiom. Bulletin of Symbolic Logic 20 (2014) 404–5. (Coauthor: Daniel Novotný) Euclid [fl. 300 BCE] divides his basic principles into what came to be called 'postulates' and 'axioms', two words that are synonyms today but which are commonly used to translate Greek words meant by Euclid as contrasting terms. Euclid's postulates are specifically geometric: they concern geometric magnitudes, shapes, figures, etc., and nothing else. The first: "to draw a line from any point to any point"; the last: the parallel postulate. Euclid's axioms are general principles of magnitude: they concern geometric magnitudes and magnitudes of other kinds as well, even numbers. The first is often translated "Things that equal the same thing equal one another". There are other differences that are or might become important. Aristotle [fl. 350 BCE] meticulously separated his basic principles [archai, singular archê] according to subject matter: geometrical, arithmetic, astronomical, etc. However, he made no distinction that can be assimilated to Euclid's postulate/axiom distinction. Today we divide basic principles into non-logical [topic-specific] and logical [topic-neutral], but this too is not the same as Euclid's distinction. In this regard it is important to be cognizant of the difference between equality and identity, a distinction often crudely ignored by modern logicians; Tarski is a rare exception. The four angles of a rectangle are equal to, not identical to, one another; the size of one angle of a rectangle is identical to the size of any other of its angles. No two angles are identical to each other. The sentence 'Things that equal the same thing equal one another' contains no occurrence of the word 'magnitude'. This paper considers the problem of formalizing the proposition Euclid intended as a principle of magnitudes while being faithful to its logical form and to its information content.
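As a first orientation, and not as the paper's own proposal, the axiom can be rendered in a many-sorted first-order language with a magnitude predicate M and a binary equality-in-magnitude relation ≈; whether such a rendering is faithful to Euclid's logical form and information content is exactly the question the paper raises:

```latex
% A candidate first-order rendering (illustrative only):
\forall x\,\forall y\,\forall z\,
  \bigl( M(x) \land M(y) \land M(z) \land x \approx z \land y \approx z
         \;\rightarrow\; x \approx y \bigr)
```

Note that this candidate smuggles in the predicate M, whereas the English sentence contains no occurrence of the word 'magnitude', which is one of the faithfulness worries at stake.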
In Consciousness and Fundamental Reality, Philip Goff argues that the case against physicalist views of consciousness turns on "Phenomenal Transparency", roughly the thesis that phenomenal concepts reveal the essential nature of phenomenal properties. This paper considers the argument that Goff offers for Phenomenal Transparency. The key premise is that our introspective judgments about current conscious experience are "Super Justified", in that these judgments enjoy an epistemic status comparable to that of simple mathematical judgments, and a better epistemic status than run-of-the-mill perceptual judgments. After presenting the key ideas in the "Super Justification Argument", I distinguish two Super Justification theses, which vary according to the kind of introspective judgments that they take to be Super Justified. I argue that Goff's case requires "Strong Super Justification", according to which a wide range of introspective judgments about conscious experience are Super Justified. Unfortunately, it turns out that Strong Super Justification is implausible and not well-supported by examples. In contrast, a weaker Super Justification thesis does not require anything like Phenomenal Transparency and, indeed, can be explained by physicalistic accounts of phenomenal concepts.
Recent advances in neuroscience widen the realm of philosophy to include the science of the Darwinian-evolved computational brain, our inner-world-producing organ, a non-recursive super-Turing machine combining 100 billion synapsing-neuron DNA-computers based on the genetic code. The whole system is a logos machine offering a world map for global context, essential for our intentional grasp of opportunities. We start from the observable contrast between the chaotic universe and our orderly inner world, the noumenal cosmos. So far, philosophy has been rehearsing our thoughts, our human-internal world, a grand painting of the outer world, how we subjectively comprehend our experience as worked up by the logos machine. But now we seek a wider horizon: how humans understand the world thanks to Darwinian evolution, adapting in response to the metaphysical gap, the chasm between the human animal and its environment, which shapes the organism so it can deal with its variable world. This new horizon embraces global context coded in neural structures that support the noumenal cosmos, our inner mental world, for us as denizens of the outer environment. Kant's inner and outer senses are fundamental ingredients of scientific philosophy. Several sections are devoted to Heidegger: his lizard is debunked, but his version of the metaphysical gap and his doctrine of the logos are praised. Rorty and others of the behaviorist school are also discussed.
The purpose of this paper is to challenge some widespread assumptions about the role of the modal axiom 4 in a theory of vagueness. In the context of vagueness, axiom 4 usually appears as the principle 'If it is clear (determinate, definite) that A, then it is clear (determinate, definite) that it is clear (determinate, definite) that A', or, more formally, CA → CCA. We show how in the debate over axiom 4 two different notions of clarity are in play (Williamson-style "luminosity" or self-revealing clarity, and concealable clarity) and what their respective functions are in accounts of higher-order vagueness. On this basis, we argue first that, contrary to common opinion, higher-order vagueness and S4 are perfectly compatible. This is in response to claims like that by Williamson that, if vagueness is defined with the help of a clarity operator that obeys axiom 4, higher-order vagueness disappears. Second, we argue that, contrary to common opinion, (i) bivalence-preservers (e.g. epistemicists) can without contradiction condone axiom 4 (by adopting what elsewhere we call columnar higher-order vagueness), and (ii) bivalence-discarders (e.g. open-texture theorists, supervaluationists) can without contradiction reject axiom 4. Third, we rebut a number of arguments that have been produced by opponents of axiom 4, in particular those by Williamson. (The paper is pitched towards graduate students with basic knowledge of modal logic.)
In this chapter I argue that emerging soldier enhancement technologies have the potential to transform the ethical character of the relationship between combatants in conflicts between 'Superpower' militaries, with the ability to deploy such technologies, and technologically disadvantaged 'Underdog' militaries. The reasons for this relate to Paul Kahn's claims about the paradox of riskless warfare. When an Underdog poses no threat to a Superpower, the standard just-war-theoretic justifications for the Superpower's combatants using lethal violence against their opponents break down. Therefore, Kahn argues, combatants in that position must approach their opponents in an ethical guise relevantly similar to 'policing'. I argue that the kind of disparities in risk and threat between opposing combatants that Kahn's analysis posits do not obtain in the context of face-to-face combat in the way they would need to in order to support his ethical conclusions about policing. But then I argue that soldier enhancement technologies have the potential to change this, in a way that reactivates the force of those conclusions.
The paper addresses the scientific, rather than ideological, problem of an eventual biological successor to mankind. The concept of the superhuman is usually linked to Nietzsche, to Heidegger's criticism, or even to the ideology of Nazism. However, the superhuman can also be viewed as the biological species that will eventually originate from humans in the course of evolution. While society has reached a natural limit of globalism, technology depends on the amount of utilized energy, and the mind is restricted by its carrier, i.e. by the brain, it is language that seems to be the frontier of any future development of humans or superhumans. Language is a symbolization of the world and thus its doubling into an ideal or virtual world, fruitful for creativity and for modeling the former. Consequently, the gap between the material and the ideal world is both produced by and productive for language.
In quantum theory every state can be diagonalized, i.e. decomposed as a convex combination of perfectly distinguishable pure states. This elementary structure plays a ubiquitous role in quantum mechanics, quantum information theory, and quantum statistical mechanics, where it provides the foundation for the notions of majorization and entropy. A natural question then arises: can we reconstruct these notions from purely operational axioms? We address this question in the framework of general probabilistic theories, presenting a set of axioms that guarantee that every state can be diagonalized. The first axiom is Causality, which ensures that the marginal of a bipartite state is well defined. Then, Purity Preservation states that the set of pure transformations is closed under composition. The third axiom is Purification, which allows one to assign a pure state to the composition of a system with its environment. Finally, we introduce the axiom of Pure Sharpness, stating that for every system there exists at least one pure effect occurring with unit probability on some state. For theories satisfying our four axioms, we exhibit a constructive algorithm for diagonalizing every given state. The diagonalization result allows us to formulate a majorization criterion that captures the convertibility of states in the operational resource theory of purity, where random reversible transformations are regarded as free operations.
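In the familiar quantum case, the notions of diagonalization, entropy and majorization can be made concrete with a minimal sketch (restricted to a real 2×2 density matrix for simplicity; the function names are illustrative and not from the paper):

```python
import math

def eigvals_qubit(a, c):
    """Eigenvalues of the real qubit density matrix [[a, c], [c, 1-a]].
    These are the weights of the diagonal decomposition into two
    perfectly distinguishable pure states."""
    r = math.sqrt((a - 0.5) ** 2 + c ** 2)
    return (0.5 + r, 0.5 - r)

def entropy(spectrum):
    """Von Neumann entropy in bits, computed from the diagonal weights."""
    return -sum(p * math.log2(p) for p in spectrum if p > 0)

def majorizes(p, q):
    """p majorizes q: partial sums of the sorted-descending spectra dominate.
    When random reversible operations are the free operations, a state with
    spectrum p can be converted into one with spectrum q iff p majorizes q."""
    p, q = sorted(p, reverse=True), sorted(q, reverse=True)
    sp = sq = 0.0
    for x, y in zip(p, q):
        sp += x
        sq += y
        if sp < sq - 1e-12:
            return False
    return True
```

For example, the maximally mixed qubit has spectrum (0.5, 0.5) and entropy 1 bit, and a pure state with spectrum (1, 0) majorizes it but not conversely, reflecting the fact that random reversible operations can only degrade purity.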
Metaphysical dualities divorce humankind from its natural environment, dualities that can precipitate environmental disaster. In Religion Is Not About God, Loyal Rue seeks to resolve the abstract modalities of religion and naturalism in a unified monistic ecocentric metaphysic characterized as religious naturalism. Rue puts forward proposals for a general naturalistic theory of religion, a theory that lays bare the structural and functional features of religious phenomena as the critical first step on the road to a badly needed religion-science realignment. Only then will humanity be equipped to address the environmental imperative.
In the early 1900s, Russell began to recognize that he, and many other mathematicians, had been using assertions like the Axiom of Choice implicitly and without explicitly proving them. In working with the Axioms of Choice, Infinity, and Reducibility, and his and Whitehead's Multiplicative Axiom, Russell came to take the position that some axioms are necessary for recovering certain results of mathematics, but may not be proven to be true absolutely. The essay traces the historical roots of, and motivations for, Russell's method of analysis, which are intended to shed light on his view about the status of mathematical axioms. I describe the position Russell develops in consequence as "immanent logicism", in contrast to what Irving (1989) describes as "epistemic logicism". Immanent logicism allows Russell to avoid the logocentric predicament, and to propose a method for discovering structural relationships of dependence within mathematical theories.
William Davis has maintained that a super pleasure helmet could in principle satisfy all human needs, but that such a machine is probably a practical impossibility. I argue that the super pleasure helmet is conceptually impossible, by arguing that a person's needs cannot be satisfied just by bringing about certain psychological states in that person.
Sometimes a proposition is 'opaque' to an agent: he does not know it, but he does know something about how coming to know it should affect his credence function. It is tempting to assume that a rational agent's credence function coheres in a certain way with his knowledge of these opaque propositions; I call this the 'Opaque Proposition Principle'. The principle is compelling but demonstrably false. I explain this incongruity by showing that the principle is ambiguous: the term 'know' as it appears in the principle can be interpreted in two different ways, as either basic-know or super-know. I use this distinction to construct a plausible version of the principle, and then similarly to construct plausible versions of the Reflection Principle and the Sure-Thing Principle.
In this article I develop an elementary system of axioms for Euclidean geometry. On the one hand, the system is based on the symmetry principles that express our a priori ignorant approach to space: all places are the same to us, all directions are the same to us, and all units of length we use to create geometric figures are the same to us. On the other hand, through a process of algebraic simplification, this system of axioms directly yields Weyl's system of axioms for Euclidean geometry. The system of axioms, together with its a priori interpretation, offers new perspectives on the philosophy and pedagogy of mathematics: it supports the thesis that Euclidean geometry is a priori; it supports the thesis that in modern mathematics Weyl's system of axioms is dominant over Euclid's system because it reflects the underlying a priori symmetries; and it offers a new and promising approach to learning geometry which, through Weyl's system of axioms, leads from the essential geometric symmetry principles of mathematical nature directly to modern mathematics.
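For orientation, Weyl's axiomatization (sketched here from standard textbook formulations, not from the article itself) treats Euclidean space as a set of points P acted on by a real inner-product space V:

```latex
% Weyl-style axioms for Euclidean geometry (standard formulation):
% a set of points P, a real vector space V with inner product
% \langle\cdot,\cdot\rangle, and a map P \times P \to V,
% written (A,B) \mapsto \vec{AB}, such that
\begin{align*}
&\text{(W1)}\quad \forall A \in P,\ \forall v \in V,\ \exists!\, B \in P:\ \vec{AB} = v;\\
&\text{(W2)}\quad \forall A, B, C \in P:\ \vec{AB} + \vec{BC} = \vec{AC}.
\end{align*}
```

Distances and angles are then defined from the inner product, e.g. $|AB| = \sqrt{\langle \vec{AB}, \vec{AB} \rangle}$, which is the sense in which the algebraic structure carries the whole geometry.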
A description of consciousness leads to a contradiction with the postulate of special relativity that there can be no connections between simultaneous events. This contradiction points to consciousness involving quantum-level mechanisms. The quantum-level description of the universe is re-evaluated in the light of what is observed in consciousness, namely four-dimensional objects. A new, improved interpretation of quantum-level observations is introduced. From this vantage point, the following axioms of consciousness are presented. Consciousness consists of two distinct components, the observed U and the observer I. The observed U consists of all the events I is aware of. A vast majority of these occur simultaneously. Now, if I were an entity within the space-time continuum, all of these events of U, together with I, would have to occur at one point in space-time. However, U is distributed over a definite region of space-time (a region in the brain). Thus, I is aware of a multitude of space-like separated events. It is seen that this awareness necessitates I to be an entity outside the space-time continuum. With I taken as such, a new concept, called Concept A, is introduced. With the help of Concept A, a very important axiom of consciousness, namely free will, is explained. Libet's experiment, which was originally seen to contradict free will, is shown in the light of Concept A to support it. A variation on Libet's experiment is suggested that would give conclusive proof of Concept A and free will.
Second-order Peano Arithmetic minus the Successor Axiom is developed from first principles through Quadratic Reciprocity and a proof of self-consistency. This paper combines four other papers of the author into a self-contained exposition.
Ontology engineering is a hard and error-prone task, in which small changes may lead to errors, or even produce an inconsistent ontology. As ontologies grow in size, the need increases for automated methods that repair inconsistencies while preserving as much of the original knowledge as possible. Most previous approaches to this task are based on removing a few axioms from the ontology to regain consistency. We propose a new method based on weakening these axioms to make them less restrictive, employing refinement operators. We introduce the theoretical framework for weakening DL ontologies, propose algorithms to repair ontologies based on the framework, and provide an analysis of the computational complexity. Through an empirical analysis of real-life ontologies, we show that our approach preserves significantly more of the original knowledge of the ontology than removing axioms does.
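The contrast between removing and weakening an axiom can be illustrated with a minimal, self-contained sketch. This is a toy propositional stand-in, not the paper's description-logic machinery or its refinement operators; the individuals, attribute names, and axiom functions below are all hypothetical.

```python
# Toy illustration of "repair by weakening" versus "repair by removal".
# Individuals are dicts of boolean attributes; axioms are predicates
# that every individual must satisfy.

tweety = {"penguin": True, "bird": True, "flies": False}

def birds_fly(x):                  # the axiom causing the inconsistency
    return not x["bird"] or x["flies"]

def penguins_dont_fly(x):
    return not x["penguin"] or not x["flies"]

def birds_fly_weakened(x):         # weakened: non-penguin birds fly
    return not (x["bird"] and not x["penguin"]) or x["flies"]

def consistent(axioms, individuals):
    return all(ax(x) for ax in axioms for x in individuals)

original = [birds_fly, penguins_dont_fly]
removed  = [penguins_dont_fly]                     # axiom dropped entirely
weakened = [birds_fly_weakened, penguins_dont_fly]

print(consistent(original, [tweety]))   # False: Tweety violates birds_fly
print(consistent(removed,  [tweety]))   # True, but "birds fly" is lost
print(consistent(weakened, [tweety]))   # True, and most knowledge is kept
```

The weakened ontology still entails that ordinary birds fly, which the removal-based repair does not; this is the sense in which weakening preserves more of the original knowledge.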
The independence phenomenon in set theory, while pervasive, can be partially addressed through the use of large cardinal axioms. A commonly assumed idea is that large cardinal axioms are species of maximality principles. In this paper, I argue that whether or not large cardinal axioms count as maximality principles depends on prior commitments concerning the richness of the subset-forming operation. In particular, I argue that there is a conception of maximality through absoluteness on which large cardinal axioms are restrictive. I argue, however, that large cardinals are still important axioms of set theory and can play many of their usual foundational roles.
We present an elementary system of axioms for the geometry of Minkowski spacetime. It strikes a balance between a simple and streamlined set of axioms and the attempt to give a direct formalization in first-order logic of the standard account of Minkowski spacetime in [Maudlin 2012] and [Malament, unpublished]. It is intended for future use in the formalization of physical theories in Minkowski spacetime. The choice of primitives is in the spirit of [Tarski 1959]: a predicate of betweenness and a four-place predicate to compare the squares of the relativistic intervals. Minkowski spacetime is described as a four-dimensional ‘vector space’ that can be decomposed everywhere into a spacelike hyperplane - which obeys the Euclidean axioms in [Tarski and Givant, 1999] - and an orthogonal timelike line. The lengths of other ‘vectors’ are calculated according to Pythagoras's theorem. We conclude with a Representation Theorem relating models of our system that satisfy second-order continuity to the mathematical structure called ‘Minkowski spacetime’ in physics textbooks.
In mathematics, if one starts with the infinite set of positive integers, P, and wants to compare the size of the subset of odd positives, O, with P, this is done by pairing off each positive with an odd, using a function such as O = 2P - 1. This puts the odds in a one-to-one correspondence with the positives, thereby showing that the subset of odds and the set of positives are the same size, or have the same cardinality. This counter-intuitive result ignores the “natural” relationship of one odd for every two positives in the sequence of positive integers; however, under the set of axioms that constitute mathematics, it is considered valid. In the physical universe, though, relationships between entities matter. For example, in biochemistry, if you start with an organism and you want to study the heart, you can do this by removing some heart cells from the organism and studying them in isolation in a cell culture system. But the results are often different from what occurs in the intact organism, because studying the cells in culture ignores the relationships in the intact body between the heart cells, the rest of the heart tissue, and the rest of the organism. In chemistry, if a copper atom were studied in isolation, it would never be known that copper atoms in bulk can conduct electricity because the atoms share their electrons. In physics, the relationships between inertial reference frames in relativity, and between observer and observed in quantum physics, can't be ignored. Furthermore, infinities cause numerous problems in theoretical physics, such as non-renormalizability. What this suggests is that the pairing-off method, and the mathematics of infinite sets based on it, are analogous to a cell culture system or to studying a copper atom in isolation if they are used to study the real, physical universe, because they ignore the inherent relationships between entities. In the real, physical world, the natural, or inherent, relationships between entities can't be ignored.
Said another way, the set of axioms which constitute abstract mathematics may be similar, but not identical, to the set of physical axioms by which the real, physical universe runs. This suggests that the results from abstract mathematics about infinities may not apply to physics, or should be modified before being used there.
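The pairing-off argument above can be made concrete in a few lines. This is a minimal sketch of the standard bijection between positives and odds, n ↦ 2n - 1, alongside the "natural" one-odd-per-two-positives density the abstract contrasts it with; the helper name `to_odd` is ours, not the paper's.

```python
def to_odd(n: int) -> int:
    """Bijection from positive integers to positive odd integers."""
    return 2 * n - 1

# Each positive pairs with exactly one odd: a one-to-one correspondence.
positives = range(1, 11)
odds = [to_odd(n) for n in positives]
print(odds)  # [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]

# The "natural" relationship: among any initial segment of the positives,
# only about half the numbers are odd.
N = 1000
density = sum(1 for k in range(1, N + 1) if k % 2 == 1) / N
print(density)  # 0.5
```

Both observations are correct; the abstract's point is that cardinality deliberately ignores the density relationship, which the author argues matters when the mathematics is applied to physical systems.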
The main goal of this paper is to compare how Thomas Aquinas expressed his doctrine of providence through secondary causes, making use of both Aristotelian and Neo-Platonic principles, in the seventh article of the third question of his Quaestiones Disputatae De Potentia Dei and in his Super Librum de Causis Expositio, in which he intends to solve the problem of the metaphysical mechanism by which God providentially guides creation. I will first present his arguments as they appear in the disputed questions, followed by a presentation of his thought on the matter in his commentary on the Liber de Causis, and concluding with my comparative analysis of Aquinas's solution to the issue of God's providential activity in nature.
In this article, a possible generalization of Löb's theorem is considered. The main result is: let κ be an inaccessible cardinal; then ¬Con(ZFC + ∃κ).
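For context, the classical result being generalized can be stated as follows; this is the standard formulation from provability logic, not the article's own notation:

```latex
% Löb's theorem: for any arithmetical sentence $P$,
\[
\text{if } \mathrm{PA} \vdash \Box P \to P, \text{ then } \mathrm{PA} \vdash P,
\]
% which is internalized by the axiom scheme of the provability logic GL:
\[
\Box(\Box P \to P) \to \Box P .
\]
```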
Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating ontologies corresponding to multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their introduction can be prevented while the ontology is being built. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of views of heterogeneous agents. Second, we prevent the generation of inconsistencies by letting the agents engage in a turn-based rational protocol about the axioms to be added to the integrated ontology. We instantiate the two approaches using real-world ontologies and compare them by measuring the levels of satisfaction of the agents w.r.t. the ontology obtained by the two procedures.
The basic axioms or formal conditions of decision theory, especially the ordering condition put on preferences and the axioms underlying the expected utility formula, are subject to a number of counter-examples, some of which can be endowed with normative value and thus fall within the ambit of a philosophical reflection on practical rationality. Against such counter-examples, a defensive strategy has been developed which consists in redescribing the outcomes of the available options in such a way that the threatened axioms or conditions continue to hold. We examine how this strategy performs in three major cases: Sen's counterexamples to the binariness property of preferences, the Allais paradox of EU theory under risk, and the Ellsberg paradox of EU theory under uncertainty. We find that the strategy typically proves to be lacking in several major respects, suffering from logical triviality, incompleteness, and theoretical insularity. To give the strategy more structure, philosophers have developed “principles of individuation”; but we observe that these do not address the aforementioned defects. Instead, we propose the method of checking whether the strategy can overcome its typical defects once it is given a proper theoretical expansion. We find that the strategy passes the test imperfectly in Sen's case and not at all in Allais's. In Ellsberg's case, however, it comes close to meeting our requirement. But even the analysis of this more promising application suggests that the strategy ought to address the decision problem as a whole, rather than just the outcomes, and that it should extend its revision process to the very statements it is meant to protect. Thus, by and large, the same cautionary tale against redescription practices runs through the analysis of all three cases. A more general lesson, simply put, is that there is no easy way out from the paradoxes of decision theory.
These incompleteness theorems concern the relation of (Peano) arithmetic and (ZFC) set theory, or, philosophically, the relation of arithmetical finiteness and actual infinity. The same relation is managed within set theory by the axiom of choice (equivalently, by the well-ordering theorem). One may discuss incompleteness from the viewpoint of set theory with the axiom of choice, rather than from the usual viewpoint assumed in the proofs of the theorems. The logical corollaries of that "nonstandard" viewpoint on the relation of set theory and arithmetic are demonstrated.
CAT4 is proposed as a general method for representing information, enabling a powerful programming method for large-scale information systems. It enables generalised machine learning, software automation and novel AI capabilities. It is based on a special type of relation called CAT4, which is interpreted to provide a semantic representation. This is Part 1 of a five-part introduction. The focus here is on defining the key mathematical structures first, and presenting the semantic-database application in subsequent Parts. We focus in Part 1 on general axioms for the structures, and introduce key concepts. Part 2 analyses the CAT2 sub-relation of CAT4 in more detail. The interpretation of fact networks is introduced in Part 3, where we turn to interpreting semantics. We start with examples of relational and graph databases, with methods to translate them into CAT3 networks, with the aim of retaining the meaning of information. The full application to semantic theory comes in Part 4, where we introduce general functions, including the language interpretation or linguistic functions. The representation of linear symbolic languages, including natural languages and formal symbolic languages, is a function that CAT4 is uniquely suited to. In Part 5, we turn to software design considerations, to show how files, indexes, functions and screens can be defined to implement a CAT4 system efficiently.
The naive theory of properties states that for every condition there is a property instantiated by exactly the things which satisfy that condition. The naive theory of properties is inconsistent in classical logic, but there are many ways to obtain consistent naive theories of properties in nonclassical logics. The naive theory of classes adds to the naive theory of properties an extensionality rule or axiom, which states roughly that if two classes have exactly the same members, they are identical. In this paper we examine the prospects for obtaining a satisfactory naive theory of classes. We start from a result by Ross Brady, which demonstrates the consistency of something resembling a naive theory of classes. We generalize Brady’s result somewhat and extend it to a recent system developed by Andrew Bacon. All of the theories we prove consistent contain an extensionality rule or axiom. But we argue that given the background logics, the relevant extensionality principles are too weak. For example, in some of these theories, there are universal classes which are not declared coextensive. We elucidate some very modest demands on extensionality, designed to rule out this kind of pathology. But we close by proving that even these modest demands cannot be jointly satisfied. In light of this new impossibility result, the prospects for a naive theory of classes are bleak.
We introduce translations between display calculus proofs and labeled calculus proofs in the context of tense logics. First, we show that every derivation in the display calculus for the minimal tense logic Kt extended with general path axioms can be effectively transformed into a derivation in the corresponding labeled calculus. Concerning the converse translation, we show that for Kt extended with path axioms, every derivation in the corresponding labeled calculus can be put into a special form that is translatable to a derivation in the associated display calculus. A key insight in this converse translation is a canonical representation of display sequents as labeled polytrees. Labeled polytrees, which represent equivalence classes of display sequents modulo display postulates, also shed light on related correspondence results for tense logics.
I use modal logic and transfinite set theory to define metaphysical foundations for a general theory of computation. A possible universe is a certain kind of situation; a situation is a set of facts. An algorithm is a certain kind of inductively defined property. A machine is a series of situations that instantiates an algorithm in a certain way. There are finite as well as transfinite algorithms and machines of any degree of complexity (e.g., Turing and super-Turing machines and more). There are physically and metaphysically possible machines. There is an iterative hierarchy of logically possible machines in the iterative hierarchy of sets. Some algorithms are such that machines that instantiate them are minds. So there is an iterative hierarchy of finitely and transfinitely complex minds.
“I am me”, but what does this mean? For centuries humans identified themselves as conscious beings with free will, beings that are important in the cosmos they live in. However, modern science has been trying to reduce us to unimportant pawns in a cold universe and to diminish our sense of consciousness into a mere illusion generated by lifeless matter. Our identity in the cosmos is nothing more than a deception, and all the scientific evidence seems to support this idea. Or does it? The goal of this paper is to discard the current underlying dogmatism (axioms taken for granted as "self-evident") of modern mind research and to show that consciousness seems to be the ultimate frontier that will cause a major change in the way the exact sciences think. If we want to re-discover our identity as luminous beings in the cosmos, we must first try to pinpoint our prejudices and discard them. Materialism is an obsolete philosophical dogma, and modern scientists should also try other premises as the foundation of their theories to approach the mysteries of the self. The exact sciences need to examine the world with a more open mind, accepting potentially different interpretations of existing experimental data in the fields of brain research, which are currently dismissed simply on the basis of a strong anti-spiritual dogmatism. Such interpretations can be compatible with the notion of an immaterial spirit proposed by religion for thousands of years. Mind, it seems, is not the by-product of matter but rather its master. No current materialistic theory can explain how matter may give rise to what we call the “self”, and only a drastic paradigm shift towards more idealistic theories will help us avoid rejecting our own nature.