Background independence begins life as an informal property that a physical theory might have, often glossed as 'doesn't posit a fixed spacetime background'. Interest in trying to offer a precise account of background independence has been sparked by the pronouncements of several theorists working on quantum gravity that background independence embodies in some sense an essential discovery of the General Theory of Relativity, and a feature we should strive to carry forward to future physical theories. This paper has two goals. The first is to investigate what a world must be like in order to be truly described by a background independent theory, given extant accounts of background independence. The second is to argue that there are no non-empirical reasons to be more confident in theories that satisfy extant accounts of background independence than in theories that don't. The paper concludes by drawing a general moral about a way in which focussing primarily on mathematical formulations of our physical theories can adversely affect debates in the metaphysics of physics.
Space-time intervals are the fundamental components of conscious experience, gravity, and a Theory of Everything. Space-time intervals are relationships that arise naturally between events. They have a general covariance (independence of coordinate systems, scale invariance), a physical constancy, that encompasses all frames of reference. There are three basic types of space-time intervals (light-like, time-like, space-like) which interact to create space-time and its properties. Human conscious experience is a four-dimensional space-time continuum created through the processing of space-time intervals by the brain; space-time intervals are the source of conscious experience (observed physical reality). Human conscious experience is modeled by Einstein’s special theory of relativity, a theory designed specifically from the general covariance of space-time intervals (for inertial frames of reference). General relativity is our most accurate description of gravity. In general relativity, the general covariance of space-time intervals is extended to all frames of reference (inertial and non-inertial), including gravitational reference frames; space-time intervals are the source of gravity in general relativity. The general covariance of space-time intervals is further extended to quantum mechanics; space-time intervals are the source of quantum gravity. The general covariance of space-time intervals seamlessly merges general relativity with quantum field theory (the two grand theories of the universe). Space-time intervals consequently are the basis of a Theory of Everything (a single all-encompassing coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe). This theoretical framework encompasses our observed physical reality (conscious experience) as well; space-time intervals link observed physical reality to actual physical reality.
This provides an accurate and reliable match between observed physical reality and the physical universe by which we can carry on our activity. The Minkowski metric, which defines generally covariant space-time intervals, may be considered an axiom (premise, postulate) for the Theory of Everything.
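For reference, the interval classification the abstract above invokes is the standard one. A sketch in the usual notation (with the common −+++ signature convention; sign conventions vary between texts):

```latex
% Minkowski interval between two events separated by (\Delta t, \Delta x, \Delta y, \Delta z):
\Delta s^2 = -c^2\,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2
% Classification of the interval:
%   \Delta s^2 < 0 : time-like  (connectable by a slower-than-light signal)
%   \Delta s^2 = 0 : light-like (null; connectable by a light signal)
%   \Delta s^2 > 0 : space-like (no causal connection possible)
% \Delta s^2 takes the same value in every inertial frame (Lorentz invariance),
% which is the frame-independence of intervals the abstract appeals to.
```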
Intuitively, a classical field theory is background-independent if the structure required to make sense of its equations is itself subject to dynamical evolution, rather than being imposed ab initio. The aim of this paper is to provide an explication of this intuitive notion. Background-independence is not a formal property of theories: the question whether a theory is background-independent depends upon how the theory is interpreted. Under the approach proposed here, a theory is fully background-independent relative to an interpretation if each physical possibility corresponds to a distinct spacetime geometry; and it falls short of full background-independence to the extent that this condition fails.
General Relativity generated various early philosophical interpretations. Its adherents have highlighted the "relativization of inertia" and the concept of simultaneity, Kantians and Neo-Kantians have underlined the approach of certain synthetic "intellectual forms" (especially the principle of general covariance), and logical empiricists have emphasized the philosophical-methodological significance of the theory. Reichenbach approached GR through the "relativity of geometry" thesis, trying to build a "constructive axiomatization" of relativity based on "elementary matters of fact" (Elementartatbestände) for the observable behavior of light rays, rods and clocks. The mathematician Hermann Weyl attempted a reconstruction of Einstein's theory based on the epistemology of a "pure infinitesimal geometry", an extended geometry with additional terms that he formally identified with the potentials of the electromagnetic field. DOI: 10.13140/RG.2.2.11641.93281.
This paper outlines a critique of the use of the genetic variance–covariance matrix (G), one of the central concepts in the modern study of natural selection and evolution. Specifically, I argue that for both conceptual and empirical reasons, studies of G cannot be used to elucidate so-called constraints on natural selection, nor can they be employed to detect or to measure past selection in natural populations – contrary to what is assumed by most practicing biologists. I suggest that the search for a general solution to the difficult problem of identifying causal structures given observed correlations has led evolutionary quantitative geneticists to substitute statistical modeling for the more difficult, but much more valuable, job of teasing apart the many possible causes underlying the action of natural selection. Hence, the entire evolutionary quantitative genetics research program may be in need of a fundamental reconsideration of its goals and how they correspond to the array of mathematical and experimental techniques normally employed by its practitioners.
While philosophers of science debate General Relativity, mathematical physicists do not question it; hence there is a conflict. From the theoretical point of view, "the question of precisely what Einstein discovered remains unanswered, for we have no consensus over the exact nature of the theory's foundations. Is this the theory that extends the relativity of motion from inertial motion to accelerated motion, as Einstein contended? Or is it just a theory that treats gravitation geometrically in the spacetime setting?". "The voices of dissent proclaim that Einstein was mistaken over the fundamental ideas of his own theory and that their basic principles are simply incompatible with this theory. Many newer texts make no mention of the principles Einstein listed as fundamental to his theory; they appear as neither axiom nor theorem. At best, they are recalled as ideas of purely historical importance in the theory's formation. The very name General Relativity is now routinely condemned as a misnomer and its use often zealously avoided in favour of, say, Einstein's theory of gravitation. What has complicated an easy resolution of the debate are the alterations of Einstein's own position on the foundations of his theory" (Norton, 1993). On the other hand, from the mathematical point of view, "General Relativity had been formulated as a messy set of partial differential equations in a single coordinate system. People were so pleased when they found a solution that they didn't care that it probably had no physical significance" (Hawking and Penrose, 1996). So, for a time, the declaration of quantum theorists: "I take the positivist viewpoint that a physical theory is just a mathematical model and that it is meaningless to ask whether it corresponds to reality.
All that one can ask is that its predictions should be in agreement with observation." (Hawking and Penrose, 1996) seemed to solve the problem, but results recently achieved with the help of tightly and collectively synchronized clocks in orbit frontally contradict fundamental assumptions of the theory of Relativity. These observations disagree with the predictions of the theory of Relativity (Hatch, 2004a, 2004b, 2007). The mathematical model was developed first by Grossmann, who presented it, in 1913, as the mathematical part of the Entwurf theory, still referred to a curved Minkowski spacetime. Einstein completed the mathematical model, in 1915, formulated for Riemannian spacetimes. In this paper, we argue that of General Relativity only the mathematical model currently remains, darkened by the results of Hatch, and, of course, we conclude that an Einsteinian theory of gravity does not exist.
Einstein structured the theoretical frame of his work on gravity under Special Relativity and Minkowski's spacetime using three guiding principles. The strong principle of equivalence establishes that acceleration and gravity are equivalent. Mach's principle explains the inertia of bodies and particles as completely determined by the total mass existing in the universe. And general covariance seeks to extend the principle of relativity from inertial motion to accelerated motion. Mach's principle was quickly abandoned; general covariance turned out to be a mathematical property of tensors; and the principle of equivalence proved inconsistent, applying only to point-like gravity, not to extended gravity. Also, the basic principle of Special Relativity, i.e., the constancy of the speed of the electromagnetic wave in the vacuum, was abandoned; the static Minkowski spacetime was replaced by the dynamic Lorentzian manifold; and what the main conceptual foundation of the theory, i.e. spacetime, actually is remains unknown. On the other hand, gravity was never conceptually defined, nor is there an answer to what the law of gravity is in general. However, the predictions arising from Einstein's equations are rigorously exact. Thus, the conclusion is that, on gravity, we have only the equations. In this work it is shown that the principle of equivalence really applies to both point-like and extended gravity; that gravity can be defined as an effect of a change of coordinates, although in the case of extended gravity with a change of geometry from Minkowski spacetime to a Lorentzian manifold; and that gravitational motion is geodesic motion, which can well be declared the general law of gravity.
On one popular view, the general covariance of gravity implies that change is relational in a strong sense, such that all it is for a physical degree of freedom to change is for it to vary with regard to a second physical degree of freedom. At a quantum level, this view of change as relative variation leads to a fundamentally timeless formalism for quantum gravity. Here, we will show how one may avoid this acute ‘problem of time’. Under our view, duration is still regarded as relative, but temporal succession is taken to be absolute. Following our approach, which is presented elsewhere in more formal terms, it is possible to conceive of a genuinely dynamical theory of quantum gravity within which time, in a substantive sense, remains.
1 Introduction
1.1 The problem of time
1.2 Our solution
2 Understanding Symmetry
2.1 Mechanics and representation
2.2 Freedom by degrees
2.3 Voluntary redundancy
3 Understanding Time
3.1 Change and order
3.2 Quantization and succession
4 Time and Gravitation
4.1 The two faces of classical gravity
4.2 Retaining succession in quantum gravity
5 Discussion
5.1 Related arguments
5.2 Concluding remarks
I discuss the ontological assumptions and implications of General Relativity. I maintain that General Relativity is a theory about gravitational fields, not about space-time. The latter is a more basic ontological category that emerges from physical relations among all existents. I also argue that there are no physical singularities in space-time. Singular space-time models do not belong to the ontology of the world: they are not things but concepts, i.e. defective solutions of Einstein’s field equations. I briefly discuss the actual implications of the so-called singularity theorems in General Relativity and some problems related to ontological assumptions of Quantum Gravity.
I argue that the best interpretation of the general theory of relativity (GTR) requires a causal entity (i.e., the gravitational field) and a causal structure that is not reducible to light-cone structure. I suggest that this causal interpretation of GTR helps defeat a key premise in one of the most popular arguments for causal reductionism, viz., the argument from physics.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/Ó hÉigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodionov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
An ontology of Leibnizian relationalism, consisting in distance relations among sparse matter points and their change only, is well recognized as a serious option in the context of classical mechanics. In this paper, we investigate how this ontology fares when it comes to general relativistic physics. Using a Humean strategy, we regard the gravitational field as a means to represent the overall change in the distance relations among point particles in a way that achieves the best combination of being simple and being informative.
In this paper I show that Einstein made essential use of aim-oriented empiricism in scientific practice in developing special and general relativity. I conclude by considering to what extent Einstein came explicitly to advocate aim-oriented empiricism in his later years.
Abstract. The theory-change epistemological model, tried on the Maxwellian revolution and the genesis of special relativity, is unfolded to apprehend the genesis of general relativity. It is shown that the dynamics of the construction of general relativity (GR) was largely governed by internal tensions between special relativity and Newton’s theory of gravitation. The encounter of research traditions engendered the construction of a hybrid domain, at first with an irregular set of theoretical models. However, step by step, by revealing and gradually eliminating the contradictions between the models involved, the hybrid set was put into order with the help of the equivalence principle. A hierarchy of theoretical models, starting from the crossbreeds and up to usual hybrids, was moulded. The claim put forward is that Einstein’s unification design could be successfully implemented because his programme embraced the ideas of the Nordström research programme, as well as the presuppositions of the programme of Max Abraham. By and large, Einstein’s victory over his rivals became possible because the core of his research strategy was formed by the equivalence principle comprehended in the light of Kantian epistemology. It is argued that the theories of Nordström and Abraham, contrived before November 25, 1915, were not merely scaffolds for constructing the basic model of GR; they are still a necessary part of the whole GR theory, necessary for its common use. Key words: Einstein, Nordström, Abraham, general relativity.
In this paper, I consider the basis for Kant's praise of Wolff's general logic as "the best we have." I argue that Wolff's logic was highly esteemed by Kant on account of its novel analysis of the three operations of the mind (tres operationes mentis), in the course of which Wolff formulates an argument for the priority of the understanding's activity of judging.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3#
- Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
- Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
- The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
- The path to more general artificial intelligence, Ted Goertzel, pages 343-354
- Limitations and risks of machine ethics, Miles Brundage, pages 355-372
- Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
- GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
- Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
- Bounding the impact of AGI, András Kornai, pages 417-438
- Ethics of brain emulations, Anders Sandberg, pages 439-457
Examining the significance of the General’s enlightenment in the Platform Sutra, this article clarifies the fundamental role that emotions play in the development of one’s spiritual understanding. In order to do so, this article emphasizes that the way to enlightenment implicit in the story of the General and the Master involves first granting negative emotions a means for productive expression. By acting as a preparatory measure for calming the mind and surrendering control over it, human passions become a necessary, antecedent condition to wisdom—a conclusion that this article argues is a major, and sometimes underappreciated, lesson embedded in the teachings of the Sixth Patriarch.
Oulis pointed out that there is a great deal of interest in specific mechanisms relating to mental disorders and that these mechanisms should play a role in classification. Although specific mechanisms are important, more attention should be given to general theories. The following example from Salmon illustrates the difference.
Several authors have claimed that prediction is essentially impossible in the general theory of relativity, the case being particularly strong, it is said, when one fully considers the epistemic predicament of the observer. Each of these claims rests on the support of an underdetermination argument and a particular interpretation of the concept of prediction. I argue that these underdetermination arguments fail and depend on an implausible explication of prediction in the theory. The technical results adduced in these arguments can be related to certain epistemic issues, but can only be misleadingly or mistakenly characterized as related to prediction.
This dissertation examines the conceptual and theoretical foundations of the most general and most widely used framework for understanding social evolution, W. D. Hamilton's theory of kin selection. While the core idea is intuitive enough (when organisms share genes, they sometimes have an evolutionary incentive to help one another), its apparent simplicity masks a host of conceptual subtleties, and the theory has proved a perennial source of controversy in evolutionary biology. To move towards a resolution of these controversies, we need a careful and rigorous analysis of the philosophical foundations of the theory. My aim in this work is to provide such an analysis. I begin with an examination of the concepts behavioural ecologists employ to describe and classify types of social behaviour. I stress the need to distinguish concepts that are often conflated: for example, we need to distinguish simple cooperation from collaboration in collective tasks, behaviours from strategies, and control from manipulation and coercion. I proceed from here to the formal representation of kin selection via George R. Price’s covariance selection mathematics. I address a number of interpretative issues the Price formalism raises, including the vexed question of whether kin selection theory is ‘formally equivalent’ to multi-level selection theory. In the second half of the dissertation, I assess the uses and limits of Hamilton’s rule for the evolution of social behaviour; I provide a precise statement of the conditions under which the rival neighbour-modulated fitness and inclusive fitness approaches in contemporary kin selection theory are equivalent (and describe cases in which they are not); and I criticize recent formal attempts to establish the controversial claim that kin selection leads to organisms behaving as if maximizing their inclusive fitness.
Important features of space and time are taken to be missing in quantum gravity, allegedly requiring an explanation of the emergence of spacetime from non-spatio-temporal theories. In this paper, we argue that the explanatory gap between general relativity and non-spatio-temporal quantum gravity theories might be significantly reduced with two moves. First, we point out that spacetime is already partially missing in the context of general relativity when understood from a dynamical perspective. Second, we argue that most approaches to quantum gravity already start with an in-built distinction between structures to which the asymmetry between space and time can be traced back.
Different anesthetics are known to modulate different types of membrane-bound receptors. Their common mechanism of action is expected to alter the mechanism for consciousness. Consciousness is hypothesized as the integral of all the units of internal sensations induced by reactivation of inter-postsynaptic membrane functional LINKs during mechanisms that lead to oscillating potentials. The thermodynamics of the spontaneous lateral curvature of lipid membranes induced by lipophilic anesthetics can lead to the formation of non-specific inter-postsynaptic membrane functional LINKs by different mechanisms. These include direct membrane contact by excluding the inter-membrane hydrophilic region and readily reversible partial membrane hemifusion. The constant reorganization of the lipid membranes at the lateral edges of the postsynaptic terminals (dendritic spines) resulting from AMPA receptor-subunit vesicle exocytosis and endocytosis can favor the effect of anesthetic molecules on lipid membranes at this location. Induction of a large number of non-specific LINKs can alter the conformation of the integral of the units of internal sensations that maintain consciousness. Anesthetic requirement is reduced in the presence of dopamine, which causes enlargement of dendritic spines. Externally applied pressure can transduce from the middle ear through the perilymph, cerebrospinal fluid, and the recently discovered glymphatic pathway to the extracellular matrix space, and finally to the paravenular space. The pressure gradient reduces solubility and displaces anesthetic molecules from the membranes into the paravenular space, explaining the pressure reversal of anesthesia. Changes in membrane composition and the conversion of membrane hemifusion to fusion due to defects in the checkpoint mechanisms can lead to cytoplasmic content mixing between neurons and cause neurodegenerative changes.
The common mechanism of anesthetics presented here can operate along with the known specific actions of different anesthetics.
There is no uniquely standard concept of an effectively decidable set of real numbers or real n-tuples. Here we consider three notions: decidability up to measure zero [M.W. Parker, Undecidability in Rn: Riddled basins, the KAM tori, and the stability of the solar system, Phil. Sci. 70(2) (2003) 359–382], which we abbreviate d.m.z.; recursive approximability [or r.a.; K.-I. Ko, Complexity Theory of Real Functions, Birkhäuser, Boston, 1991]; and decidability ignoring boundaries [d.i.b.; W.C. Myrvold, The decision problem for entanglement, in: R.S. Cohen et al. (Eds.), Potentiality, Entanglement, and Passion-at-a-Distance: Quantum Mechanical Studies for Abner Shimony, Vol. 2, Kluwer Academic Publishers, Great Britain, 1997, pp. 177–190]. Unlike some others in the literature, these notions apply not only to certain nice sets, but to general sets in Rn and other appropriate spaces. We consider some motivations for these concepts and the logical relations between them. It has been argued that d.m.z. is especially appropriate for physical applications, and on Rn with the standard measure, it is strictly stronger than r.a. [M.W. Parker, Undecidability in Rn: Riddled basins, the KAM tori, and the stability of the solar system, Phil. Sci. 70(2) (2003) 359–382]. Here we show that this is the only implication that holds among our three decidabilities in that setting. Under arbitrary measures, even this implication fails. Yet for intervals of non-zero length, and more generally, convex sets of non-zero measure, the three concepts are equivalent.
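The logical relations this abstract reports can be summarized schematically (restating only what the abstract itself asserts about the three notions):

```latex
% On \mathbb{R}^n with the standard (Lebesgue) measure:
\text{d.m.z.} \;\Rightarrow\; \text{r.a.}, \qquad
\text{r.a.} \;\not\Rightarrow\; \text{d.m.z.}, \qquad
\text{(no other implications hold among d.m.z., r.a., d.i.b.)}
% Under arbitrary measures, even d.m.z. \Rightarrow r.a. fails.
% For convex sets of non-zero measure (e.g. intervals of non-zero length):
\text{d.m.z.} \;\Leftrightarrow\; \text{r.a.} \;\Leftrightarrow\; \text{d.i.b.}
```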
This letter was rejected by International Knowledge Press because "we are unable to conclude that these findings would warrant publication in this journal." The letter is suggesting that dark energy, dark matter and universal expansion are intimately related. However, they aren't viewed as revolutions in cosmology which are essential to a complete understanding of the modern universe. They are instead viewed as properties which need to be added to the cosmos when Einstein's theory of gravity (General Relativity) is apparently still not thoroughly comprehended a little over a century since it was published.
Richard Peters argued for a general education based largely on the study of truth-seeking subjects for its own sake. His arguments have long been acknowledged as problematic. There are also difficulties with Paul Hirst's arguments for a liberal education, which in part overlap with Peters'. Where justification fails, can historical explanation illuminate? Peters was influenced by the prevailing idea that a secondary education should be based on traditional, largely knowledge-orientated subjects, pursued for intrinsic as well as practical ends. Does history reveal good reasons for this view? The view itself has roots going back to the 16th century and the educational tradition of radical Protestantism. Religious arguments to do with restoring the image of an omniscient God in man made good sense, within their own terms, of an encyclopaedic approach to education. As these faded in prominence after 1800, old curricular patterns persisted in the drive for ‘middle-class schools’, and new, less plausible justifications grew in salience. These were based first on faculty psychology and later on the psychology of individual differences. The essay relates the views of Peters and Hirst to these historical arguments, asking how far their writings show traces of the religious argument mentioned, and how their views on education and the development of mind relate to the psychological arguments.
Relativity Theory by Albert Einstein has been so far little considered by cognitive scientists, notwithstanding its undisputed scientific and philosophical moment. Unfortunately, we don't have a diary or notebook as cognitively useful as Faraday's. But physics historians and philosophers have done a great job that is relevant both for the study of the scientist's reasoning and the philosophy of science. I will try here to highlight the fertility of a `triangulation' using cognitive psychology, history of science and philosophy of science in starting to answer a clearly very complex question: why did Einstein discover Relativity Theory? Here we are not much concerned with the unending question of precisely what Einstein discovered, which still remains unanswered, for we have no consensus over the exact nature of the theory's foundations. We are mainly interested in starting to answer the `how question', and especially the following sub-question: what were his goals and strategies in his search? I will base my argument on fundamental publications of Einstein, aiming at pointing out a theory-specific heuristic, setting both a goal and a strategy: covariance/invariance. The result has significance in theory formation in science, especially in concept and model building. It also raises other questions that go beyond the aim of this paper: why was he so confident in such a heuristic? Why didn't many other scientists use it? Where did he get such a heuristic? Do we have any other examples of similar heuristic search in other scientific problem solving?
The singularities of general relativity resulting from solving Einstein's equations were and still are the subject of many scientific debates: Are there singularities in spacetime, or not? Was the Big Bang an initial singularity? If singularities exist, what is their ontology? Is the general theory of relativity a theory that has shown its limits in this case? In this essay I argue that there are singularities, and that the general theory of relativity, like any other scientific theory at present, is not valid for singularities. But that does not mean, as some scientists think, that it must be regarded as obsolete. After a brief presentation of the specific aspects of Newtonian classical theory and the special theory of relativity, and a brief presentation of the general theory of relativity, the chapter Ontology of General Relativity presents the ontological aspects of general relativity. The next chapter, Singularities, is dedicated to the presentation of the singularities arising in general relativity, the specific aspects of black holes and the event horizon, including the debate over the Big Bang as an original singularity, and arguments for the existence of singularities. In Singularity Ontology, I discuss the possibilities of ontological framing of singularities in general and black holes in particular, the hole argument highlighted by Einstein, and the arguments presented by scientists that there are no singularities and therefore that the general theory of relativity is in deadlock. In Conclusions I outline and briefly summarize the arguments that support my views above.
Although brain size and the concept of intelligence have been extensively used in comparative neuroscience to study cognition and its evolution, such coarse-grained traits may not be informative enough about important aspects of neurocognitive systems. By taking into account the different evolutionary trajectories and the selection pressures on neurophysiology across species, Logan and colleagues suggest that the cognitive abilities of an organism should be investigated by considering the fine-grained and species-specific phenotypic traits that characterize it. In such a way, we would avoid adopting human-oriented, coarse-grained traits, typical of the standard approach in cognitive neuroscience. We argue that this standard approach can fail in some cases, but can, however, work in others, by discussing two major topics in contemporary neuroscience as examples: general intelligence and brain asymmetries.
When matter is falling into a black hole, the associated information becomes unavailable to the black hole's exterior. If the black hole disappears by Hawking evaporation, the information seems to be lost in the singularity, leading to Hawking's information paradox: the unitary evolution seems to be broken, because a pure separate quantum state can evolve into a mixed one.
This article proposes a new interpretation of the black hole singularities, which restores information conservation. For the Schwarzschild black hole, it presents new coordinates, which move the singularity to the future infinity (although it can still be reached in finite proper time). For the evaporating black holes, this article shows that we can still cure the apparently destructive effects of the singularity on information conservation. For this, we propose to allow the metric to be degenerate at some points, and use singular semi-Riemannian geometry. This view, which results naturally from the Cauchy problem, repairs the incomplete geodesics.
The reinterpretation of singularities suggested here allows (in the context of standard general relativity) information conservation and unitary evolution to be restored, both for eternal and for evaporating black holes.
When matter falls into a black hole, the associated information becomes unavailable to the black hole's exterior. If the black hole disappears by Hawking evaporation, the information seems to be lost in the singularity, leading to Hawking's information paradox: unitary evolution seems to be broken, because a pure quantum state can evolve into a mixed one.
This article proposes a new interpretation of black hole singularities which restores information conservation. For the Schwarzschild black hole, it presents new coordinates which move the singularity to future infinity (although it can still be reached in finite proper time). For evaporating black holes, this article shows that we can still cure the apparently destructive effects of the singularity on information conservation. For this, we propose to allow the metric to be degenerate at some points and to use singular semi-Riemannian geometry. This view, which results naturally from Ashtekar's new variables formulation of Einstein's equation, repairs the incomplete geodesics.
The reinterpretation of singularities suggested here allows (in the context of standard general relativity) information conservation and unitary evolution to be restored, both for eternal and for evaporating black holes.
Entanglement is one of the most striking features of quantum mechanics, and yet it is not specifically quantum. More specific to quantum mechanics is the connection between entanglement and thermodynamics, which leads to an identification between entropies and measures of pure-state entanglement. Here we search for the roots of this connection, investigating the relation between entanglement and thermodynamics in the framework of general probabilistic theories. We first address the question of whether an entangled state can be transformed into another by means of local operations and classical communication. Under two operational requirements, we prove a general version of the Lo-Popescu theorem, which lies at the foundations of the theory of pure-state entanglement. We then consider a resource theory of purity where free operations are random reversible transformations, modelling the scenario where an agent has limited control over the dynamics of a closed system. Our key result is a duality between the resource theory of entanglement and the resource theory of purity, valid for every physical theory where all processes arise from pure states and reversible interactions at the fundamental level. As an application of the main result, we establish a one-to-one correspondence between entropies and measures of pure bipartite entanglement and exploit it to define entanglement measures in the general probabilistic framework. In addition, we show a duality between the task of information erasure and the task of entanglement generation, whereby the existence of entropy sinks (systems that can absorb arbitrary amounts of information) becomes equivalent to the existence of entanglement sources (correlated systems from which arbitrary amounts of entanglement can be extracted).
I propose a gentle reconciliation of Quantum Theory and General Relativity. It is possible to add small but unshackling constraints to the quantum fields, making them compatible with General Relativity. Not all solutions of the Schrödinger equation are needed. I show that the continuous and spatially separable solutions are sufficient for the nonlocal manifestations associated with entanglement and wavefunction collapse. After extending this idea to quantum fields, I show that Quantum Field Theory can be defined in terms of partitioned classical fields. One key element is the idea of integral interactions, which also helps clarify the quantum measurement and classical level problems. The unity of Quantum Theory and General Relativity can now be gained with the help of the partitioned fields' energy-momentum. A brief image of a General Relativistic Quantum Standard Model is outlined.
Lipsey and Lancaster's ``general theory of second best'' is widely thought to have significant implications for applied theorizing about the institutions and policies that most effectively implement abstract normative principles. It is also widely thought to have little significance for theorizing about which abstract normative principles we ought to implement. Contrary to this conventional wisdom, I show how the second best theorem can be extended to myriad domains beyond applied normative theorizing, and in particular to more abstract theorizing about the normative principles we should aim to implement. I start by separating the mathematical model used to prove the second best theorem from its familiar economic interpretation. I then develop an alternative normative-theoretic interpretation of the model, which yields a novel second best theorem for idealistic normative theory. My method for developing this interpretation provides a template for developing additional interpretations that can extend the reach of the second best theorem beyond normative theoretical domains. I also show how, within any domain, the implications of the second best theorem are more specific than is typically thought. I conclude with some brief remarks on the value of mathematical models for conceptual exploration.
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will generally not require agents to accept extreme gambles like Pascal's Mugging or the St. Petersburg game. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
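As an illustrative sketch only (the code, names, and numbers are not from the abstract), first-order stochastic dominance between two discrete gambles over the same outcome grid can be checked by comparing cumulative distribution functions: one gamble dominates another when its CDF lies everywhere at or below the other's, and strictly below somewhere.

```python
import numpy as np

def stochastically_dominates(p, q):
    """Check first-order stochastic dominance of distribution p over q.

    p and q are probability vectors over the same outcomes, listed from
    worst to best. p dominates q iff p's CDF is everywhere <= q's CDF,
    with strict inequality somewhere.
    """
    cdf_p = np.cumsum(p)
    cdf_q = np.cumsum(q)
    eps = 1e-12  # tolerance for floating-point comparison
    return bool(np.all(cdf_p <= cdf_q + eps) and np.any(cdf_p < cdf_q - eps))

# p shifts probability mass toward better outcomes relative to q
p = [0.1, 0.3, 0.6]
q = [0.3, 0.3, 0.4]
print(stochastically_dominates(p, q))  # True
print(stochastically_dominates(q, p))  # False
```

The CDF comparison explains why adding broad background uncertainty can create dominance where none existed "in a vacuum": smoothing both distributions can bring one CDF weakly below the other at every point.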
Agents are often assumed to have degrees of belief (“credences”) and also binary beliefs (“beliefs simpliciter”). How are these related to each other? A much-discussed answer asserts that it is rational to believe a proposition if and only if one has a high enough degree of belief in it. But this answer runs into the “lottery paradox”: the set of believed propositions may violate the key rationality conditions of consistency and deductive closure. In earlier work, we showed that this problem (...) generalizes: there exists no local function from degrees of belief to binary beliefs that satisfies some minimal conditions of rationality and non-triviality. “Locality” means that the binary belief in each proposition depends only on the degree of belief in that proposition, not on the degrees of belief in others. One might think that the impossibility can be avoided by dropping the assumption that binary beliefs are a function of degrees of belief. We prove that, even if we drop the “functionality” restriction, there still exists no local relation between degrees of belief and binary beliefs that satisfies some minimal conditions. Thus functionality is not the source of the impossibility; its source is the condition of locality. If there is any non-trivial relation between degrees of belief and binary beliefs at all, it must be a “holistic” one. We explore several concrete forms this “holistic” relation could take. (shrink)
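The lottery paradox the abstract mentions can be made concrete in a few lines (a toy sketch, not the authors' formal framework): a threshold rule licenses belief in each proposition "ticket i loses" while refusing belief in their conjunction, violating deductive closure.

```python
def believed(credence, threshold=0.9):
    """Threshold ('Lockean') rule: believe iff credence is high enough."""
    return credence >= threshold

n = 100  # a fair lottery with one winning ticket

# Credence that any particular ticket loses: 1 - 1/n = 0.99
believed_each_loses = all(believed(1 - 1 / n) for _ in range(n))

# Credence that every ticket loses: 0 (some ticket must win)
believed_all_lose = believed(0.0)

print(believed_each_loses)  # True: each "ticket i loses" is believed
print(believed_all_lose)    # False: yet their conjunction is not
```

Deductive closure would require believing the conjunction of all believed propositions, so the rule's output set is not closed, which is exactly the failure the locality results generalize.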
In this paper, I articulate and argue for a new truthmaker view of ontological commitment, which I call the "General Truthmaker View": when one affirms a sentence, one is ontologically committed to there being something that makes true the proposition expressed by the sentence. This view comes apart from Quinean orthodoxy in that we are not ontologically committed to the things over which we quantify, and it comes apart from extant truthmaker views of ontological commitment in that we are not ontologically committed to the truthmakers of our sentences.
A review of François Laruelle's General Theory of Victims, which places Laruelle's theory in the context of post-colonial theories of the subaltern subject after Gayatri Spivak and Edward Said. The review questions whether Laruelle's General Theory of Victims really allows the so-called victims to speak for themselves, or simply represents another attempt by Western (French?) intellectuals to speak to/through the victims, for their own political and theoretical purposes.
I assume that there exists a general phenomenon, the phenomenon of the explanatory gap, surrounding consciousness, normativity, intentionality, and more. Explanatory gaps are often thought to foreclose reductive possibilities wherever they appear. In response, reductivists who grant the existence of these gaps have offered countless local solutions. But typically such reductivist responses have had a serious shortcoming: because they appeal to essentially domain-specific features, they cannot be fully generalized, and in this sense these responses have been not just local but parochial. Here I do better. Taking for granted that the explanatory gap is a genuine phenomenon, I offer a fully general diagnosis that unifies these previously fragmented reductivist responses.
How can the propositional attitudes of several individuals be aggregated into overall collective propositional attitudes? Although there are large bodies of work on the aggregation of various special kinds of propositional attitudes, such as preferences, judgments, probabilities and utilities, the aggregation of propositional attitudes is seldom studied in full generality. In this paper, we seek to contribute to filling this gap in the literature. We sketch the ingredients of a general theory of propositional attitude aggregation and prove two new theorems. Our first theorem simultaneously characterizes some prominent aggregation rules in the cases of probability, judgment and preference aggregation, including linear opinion pooling and Arrovian dictatorships. Our second theorem abstracts even further from the specific kinds of attitudes in question and describes the properties of a large class of aggregation rules applicable to a variety of belief-like attitudes. Our approach integrates some previously disconnected areas of investigation.
Locke’s conception of substance in general, or substratum, has two relatively widespread interpretations. According to one, substance in general is the bearer of properties, a pure subject: something which sustains properties but itself has no properties. I will call this interpretation traditional, because it was already formulated by Leibniz. According to the other interpretation, substance in general is something like real essence: an underlying structure which is responsible for the fact that certain observable properties form stable, recurrent clusters. I will argue that both interpretations are partly right, and that what is good in them can be reconciled. The traditional interpretation captures the purpose and significance of the idea of substance in general, i.e. the reason why Locke says we have this idea. The real essence view is right about the real-world counterpart of the idea, i.e. what sort of entity the idea corresponds to. The paper starts with a review of the strengths and weaknesses of the rival interpretations (I, II). Then I examine which part of the traditional interpretation can be sustained in light of the problems it faces (III). Thereafter I show that the part of the traditional interpretation which can be sustained cannot stand on its own and needs to be supplemented at one point, and that the real essence view can provide what is needed. This, as it were, mixed interpretation will be supported by sketching an argument which is plausible within the context of Locke’s teachings and which explains how Locke could have arrived from the view which the traditional interpretation correctly attributes to him at the view which the real essence interpretation takes him to espouse (IV). The two problematic points in this argument will be taken up in the following two sections (V, VI). Finally, I provide some evidence from the Drafts for Locke’s identification of substance and essence (VII).
How can different individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events -- the agenda -- is a sigma-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. One might be interested in the probability of rain and that of an interest-rate increase, but not in the probability of rain or an interest-rate increase. We characterize linear pooling and neutral pooling for general agendas, with classic results as special cases for agendas that are sigma-algebras. As an illustrative application, we also consider probabilistic preference aggregation. Finally, we compare our results with existing results on binary judgment aggregation and Arrovian preference aggregation. This paper is the first of two self-contained, but technically related companion papers inspired by binary judgment-aggregation theory.
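Linear pooling, the rule the abstract characterizes, is simply an event-wise weighted average of the individuals' probabilities. A minimal sketch (the agenda, agent names, and weights below are invented for illustration, and the agenda need not be closed under union or intersection):

```python
def linear_pool(assignments, weights):
    """Pool individual probability assignments by a weighted average.

    assignments: list of dicts mapping each agenda event to a probability.
    weights: non-negative numbers summing to 1, one per individual.
    """
    events = assignments[0].keys()
    return {e: sum(w * a[e] for w, a in zip(weights, assignments))
            for e in events}

# A general agenda: "rain" and "rate_increase", but not their disjunction
agents = [
    {"rain": 0.7, "rate_increase": 0.2},
    {"rain": 0.5, "rate_increase": 0.4},
]
pooled = linear_pool(agents, [0.5, 0.5])
print(pooled["rain"])  # 0.6
```

Because the rule operates event by event, it is well defined on any agenda; the paper's contribution is characterizing when such event-wise rules are the only reasonable ones.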
Modularity theorists have challenged the claim that there are, or could be, general learning mechanisms that explain theory-of-mind development. In response, supporters of the ‘scientific theory-theory’ account of theory-of-mind development have appealed to children's use of auxiliary hypotheses and probabilistic causal modeling. This article argues that these general learning mechanisms are not sufficient to meet the modularist's challenge. The article then explores an alternative domain-general learning mechanism by proposing that children grasp the concept belief through the progressive alignment of relational structure that occurs as a result of structural comparison. The article also explores the implications of the proposed account for Fodor's puzzle of conceptual learning.
Philosophers have often claimed that general ideas or representations have their origin in abstraction, but it remains unclear exactly what abstraction as a psychological process consists in. We argue that the Lockean aspiration of using abstraction to explain the origins of all general representations cannot work and that at least some general representations have to be innate. We then offer an explicit framework for understanding abstraction, one that treats abstraction as a computational process that operates over an innate quality space of fine-grained general representations. We argue that this framework has important philosophical implications for the nativism-empiricism dispute, for questions about the acquisition of unstructured representations, and for questions about the relation between human and animal minds.
This is a short response to Aaron Cotnoir's 'Composition as General Identity', in which I suggest some further applications of his ideas, and try to press the question of why we should think of his 'general identity relation' as a genuine identity relation.
Based on the Ontology for General Medical Science, we propose definitions for biomarkers of various types. These definitions provide not only a complete formal representation of what biomarkers are according to the Institute of Medicine (IOM), but also remove the ambiguities and inconsistencies encountered in the documentation provided by the IOM.
This is the first volume of Equality and Justice, a six-volume collection of the most important articles of the twentieth century on the topic of justice and equality. This volume addresses the following three (only loosely related) issues: (1) What is the concept of justice? (2) Is justice primarily a demand on individuals or on societies? (3) What are the relative merits of conceptions of justice based on equality, based on priority for those who have less, and based on ensuring that everyone has a basic minimum of the relevant goods?
An examination of the role played by general rules in Hume's positive (nonskeptical) epistemology. General rules for Hume are roughly just general beliefs. The difference between justified and unjustified belief is a matter of the influence of good versus bad general rules, the good general rules being the "extensive" and "constant" ones.
Propositional logics in general, considered as sets of sentences, can be undecidable even if they have “nice” representations, e.g., are given by a calculus. Even decidable propositional logics can be computationally complex (e.g., already intuitionistic logic is PSPACE-complete). On the other hand, finite-valued logics are computationally relatively simple—at worst NP. Moreover, finite-valued semantics are simple, and general methods for theorem proving exist. This raises the question of to what extent, and under what circumstances, propositional logics represented in various ways can be approximated by finite-valued logics. It is shown that the minimal m-valued logic for which a given calculus is strongly sound can be calculated. It is also investigated under which conditions propositional logics can be characterized as the intersection of (effectively given) sequences of finite-valued logics.
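The claim that finite-valued semantics are computationally simple can be illustrated with a toy sketch (Łukasiewicz three-valued connectives are used here as one standard example; the encoding below is an assumption, not taken from the abstract): checking whether a formula always takes a designated value is a finite enumeration over valuations.

```python
from itertools import product

# Truth values: 0 = false, 1 = intermediate, 2 = true; designated value: 2
VALUES = (0, 1, 2)

def neg(a):
    return 2 - a

def disj(a, b):
    return max(a, b)

def imp(a, b):
    # Lukasiewicz implication on the 0..2 scale
    return min(2, 2 - a + b)

def is_tautology(formula, arity):
    """Brute-force check: is the formula designated under every valuation?"""
    return all(formula(*vs) == 2 for vs in product(VALUES, repeat=arity))

print(is_tautology(lambda a: imp(a, a), 1))           # True: a -> a holds
print(is_tautology(lambda a: disj(a, neg(a)), 1))     # False: fails at a = 1
```

The exhaustive check runs in time exponential only in the number of variables, which is why validity in any fixed finite-valued logic stays at worst in coNP (and satisfiability in NP), in contrast with PSPACE-complete intuitionistic logic.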
Metacognition is the capacity to evaluate the success of one's own cognitive processes in various domains, for example, memory and perception. It remains controversial whether metacognition relies on a domain-general resource that is applied to different tasks or whether self-evaluative processes are domain-specific. Here, we investigated this issue directly by examining the neural substrates engaged when metacognitive judgments were made by human participants of both sexes during perceptual and memory tasks matched for stimulus and performance characteristics. By comparing patterns of fMRI activity while subjects evaluated their performance, we revealed both domain-specific and domain-general metacognitive representations. Multivoxel activity patterns in anterior prefrontal cortex predicted levels of confidence in a domain-specific fashion, whereas domain-general signals predicting confidence and accuracy were found in a widespread network in the frontal and posterior midline. The demonstration of domain-specific metacognitive representations suggests the presence of a content-rich mechanism available to introspection and cognitive control.