Scientists and engineers seek to understand how real-world systems work and could work better. Any modeling method devised for such purposes must simplify reality. Ideally, however, the modeling method should be flexible as well as logically rigorous; it should permit model simplifications to be appropriately tailored for the specific purpose at hand. Flexibility and logical rigor have been the two key goals motivating the development of Agent-based Computational Economics (ACE), a completely agent-based modeling method characterized by seven specific modeling principles. This perspective provides an overview of ACE, a brief history of its development, and its role within a broader spectrum of experiment-based modeling methods.
When solving a complex problem in a group, should group members always choose the best available solution that they are aware of? In this paper, I build simulation models to show that, perhaps surprisingly, a group of agents who individually randomly follow a better available solution than their own can end up outperforming a group of agents who individually always follow the best available solution. This result has implications for the feminist philosophy of science and social epistemology.
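The dynamic described in this abstract can be illustrated with a minimal agent-based sketch. Everything concrete here is an assumption for illustration, not the paper's actual model: a random rugged landscape over bit-strings, a fully connected population, and a one-bit local-search fallback. The two copying rules contrast "always adopt the population's best solution" with "adopt a random solution better than one's own."

```python
import random

def make_landscape(n_bits, seed=0):
    # Hypothetical rugged landscape: an arbitrary fitness per solution.
    rng = random.Random(seed)
    return [rng.random() for _ in range(2 ** n_bits)]

def step(solutions, fitness, rule, rng, n_bits):
    new = []
    for own in solutions:
        better = [s for s in solutions if fitness[s] > fitness[own]]
        if better:
            if rule == "best":
                # Greedy rule: jump to the best solution anyone holds.
                new.append(max(better, key=lambda s: fitness[s]))
            else:
                # "Random-better" rule: copy any solution better than one's own.
                new.append(rng.choice(better))
        else:
            # No visible improvement: local search by flipping one random bit.
            cand = own ^ (1 << rng.randrange(n_bits))
            new.append(cand if fitness[cand] > fitness[own] else own)
    return new

def run(rule, seed=1, n_agents=20, n_bits=10, rounds=30):
    fitness = make_landscape(n_bits)
    rng = random.Random(seed)
    sols = [rng.randrange(2 ** n_bits) for _ in range(n_agents)]
    for _ in range(rounds):
        sols = step(sols, fitness, rule, rng, n_bits)
    return max(fitness[s] for s in sols)
```

Under the greedy rule the whole group collapses onto one candidate immediately, while the random-better rule keeps several search trajectories alive longer, which is the mechanism the abstract's surprising result turns on.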
The debates on the scientificity of the social sciences in general, and sociology in particular, are recurring. From the original Methodenstreit at the end of the 19th century to the contemporary controversy over the legitimacy of "regional epistemologies", the same set of questions reappears. Are the social sciences really scientific? And if so, are they sciences like the other sciences? How should we conceive "research programs" (Lakatos, 1978) or "research traditions" (Laudan, 1977) able to produce advances of knowledge in the field of social and human phenomena? Is the progress of knowledge in the social sciences similar to that generally observed in the natural sciences? Is it possible to evaluate the relative merits of each of these research programs?

These debates are important vectors of social and intellectual polarization. The historical divide between the positivist and hermeneutic poles prefigures the structure of the contemporary debate around the epistemic space of the social sciences. It is not only a question of renewing the opposition between a monist view of the sciences (e.g. McIntyre, 1996), a dualistic one (e.g. Geertz, 1973), or even a trialist one (e.g. Lepenies, 1985). It is also a question of asserting dichotomies that have hardened into frameworks (even when the aim is to move beyond them): nature-culture, nomothetic-idiographic, models-narrative, structure-history, cause-reason, explanation-comprehension.

In this short introduction, we provide, in Section 2, a first overview of this epistemological debate in the social sciences. Section 3 proposes a different standpoint on the same questions by introducing both ontological and methodological aspects into this basic epistemological debate. Namely, following Hollis (1994), oppositions of the explanation-understanding and causes-meaning types discussed in the first section are examined alongside oppositions of the structure-action and holism-individualism types. This allows us to discuss how multi-agent design, by integrating various dimensions and standpoints in the same framework (Phan and Amblard, 2007, Chapters 1, 5, 14), can help us shift these boundaries and bypass these oppositions. As model building and ontology design are at the core of this process (Phan and Amblard, 2007, Chapter 12), Section 4 discusses various issues of the art of modelling, starting from both economists' and sociologists' current standpoints.
Diversity of practice is widely recognized as crucial to scientific progress. If all scientists perform the same tests in their research, they might miss important insights that other tests would yield. If all scientists adhere to the same theories, they might fail to explore other options which, in turn, might be superior. But the mechanisms that lead to this sort of diversity can also generate epistemic harms when scientific communities fail to reach swift consensus on successful theories. In this paper, we draw on extant literature using network models to investigate diversity in science. We evaluate different mechanisms from the modeling literature that can promote transient diversity of practice, keeping in mind ethical and practical constraints posed by real epistemic communities. We ask: what are the best ways to promote an appropriate amount of diversity of practice in scientific communities?
Ken Forbus's Qualitative Process Theory (QPT) is a popular theory for reasoning about the physical aspects of the everyday world. Qualitative Process Theory Using Linguistic Variables by Bruce D'Ambrosio (Springer-Verlag, New York, 1989) is an attempt to fill some gaps in QPT.
I use network models to simulate social learning situations in which the dominant group ignores or devalues testimony from the marginalized group. I find that the marginalized group ends up with several epistemic advantages due to testimonial ignoration and devaluation. The results provide one possible explanation for a key claim of standpoint epistemology, the inversion thesis, by casting it as a consequence of another key claim of the theory, the unidirectional failure of testimonial reciprocity. Moreover, the results complicate the understanding and application of previously discovered network epistemology effects, notably the Zollman effect (Zollman 2007, 2010).
What separates the unique nature of human consciousness and that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is non-feasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they have to gaining some legal duties and protections insofar as they are sophisticated enough to display consciousness akin to humans. Should our species enable AIS to gain a digital body to inhabit (if we have not already done so), it is more pressing than ever that solid arguments be made as to how humanity can accept AIS as being cognizant of the same degree as we ourselves claim to be. By accepting the notion that AIS can and will be able to fool our senses into believing in their claim to possessing a will or ego, we may yet have a chance to address them as equals before some unforgivable travesty occurs betwixt ourselves and these super-computing beings.
John Stuart Mill advocated for increased interactions between individuals of dissenting opinions on the grounds that they would improve society. Whether Mill's argument, and similar arguments that advocate for opinion diversity, is valid depends on background assumptions about the psychology and sociality of individuals. The field of opinion dynamics is a burgeoning testing ground for how different combinations of sociological and psychological facts contribute to phenomena that affect opinion diversity, such as polarization. This paper applies some recent results from the opinion dynamics literature to assess the impacts of the Millian suggestion. The goal is to understand how the scope of the validity of Mill-style arguments depends on plausible assumptions that can be formalized using agent-based models, a common modeling approach in opinion dynamics. The most salient insight is that homophily (increased interactions between like-minded individuals) does not sufficiently explain decreased opinion diversity. Hence, decreasing homophily by increasing interactions between individuals of dissenting opinions is not the simple solution that a Millian-style argument may advocate.
The structure of communication networks can be more or less "democratic": networks are less democratic if (a) communication is more limited in terms of characteristic degree and (b) is more tightly channeled to a few specific nodes. Together those measures give us a two-dimensional landscape of more and less democratic networks. We track opinion volatility across that landscape: the extent to which random changes in a small percentage of binary opinions at network nodes result in wide changes across the network as a whole. If wide and frequent swings of popular opinion are taken as a mark of instability, democratic communication networks prove far more stable than anti-democratic ones. In a final section, we consider the democratic or anti-democratic character of networks that respond to volatility by rewiring at random, in a search for community, or in a search for a leader.
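The volatility measure sketched in this abstract can be made concrete in a toy model. Everything specific below is an illustrative assumption rather than the authors' actual setup: a majority-rule update, a handful of perturbed nodes, and two extreme topologies (a ring, where degree is limited, and a hub, where communication is channeled to one node).

```python
import random

def majority_update(opinions, neighbors):
    # Each node adopts the majority opinion among its neighbors;
    # ties keep the current opinion (an illustrative update rule).
    new = {}
    for node, op in opinions.items():
        votes = sum(opinions[n] for n in neighbors[node])
        half = len(neighbors[node]) / 2
        new[node] = 1 if votes > half else 0 if votes < half else op
    return new

def volatility(neighbors, flips=2, seed=0, steps=20):
    # Flip a few opinions at random, let the dynamics run, and report
    # what fraction of the network ends up changed.
    rng = random.Random(seed)
    nodes = list(neighbors)
    opinions = {n: 0 for n in nodes}
    for n in rng.sample(nodes, flips):
        opinions[n] = 1
    for _ in range(steps):
        opinions = majority_update(opinions, neighbors)
    return sum(opinions.values()) / len(nodes)

N = 20
# Ring: every node talks to 2 neighbors (limited characteristic degree).
ring = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
# Hub: all communication channeled through one central node.
hub = {0: list(range(1, N)), **{i: [0] for i in range(1, N)}}
```

Comparing `volatility(ring)` and `volatility(hub)` across many seeds and flip counts is the kind of sweep that would populate one point of the two-dimensional landscape the abstract describes.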
We are increasingly exposed to polarized media sources, with clear evidence that individuals choose those sources closest to their existing views. We also have a tradition of open face-to-face group discussion in town meetings, for example. There are a range of current proposals to revive the role of group meetings in democratic decision-making. Here, we build a simulation that instantiates aspects of reinforcement theory in a model of competing social influences. What can we expect in the interaction of polarized media with group interaction along the lines of town meetings? Some surprises are evident from a computational model that includes both. Deliberative group discussion can be expected to produce opinion convergence. That convergence may not, however, be a cure for extreme views polarized at opposite ends of the opinion spectrum. In a large class of cases, we show that adding the influence of group meetings in an environment of self-selected media produces not a moderate central consensus but opinion convergence at one of the extremes defined by polarized media.
How do conventions of communication emerge? How do sounds or gestures take on a semantic meaning, and how do pragmatic conventions emerge regarding the passing of adequate, reliable, and relevant information? My colleagues and I have attempted in earlier work to extend spatialized game theory to questions of semantics. Agent-based simulations indicate that simple signaling systems emerge fairly naturally on the basis of individual information maximization in environments of wandering food sources and predators. Simple signaling emerges by means of any of various forms of updating on the behavior of immediate neighbors: imitation, localized genetic algorithms, and partial training in neural nets. Here the goal is to apply similar techniques to questions of pragmatics. The motivating idea is the same: the idea that important aspects of pragmatics, like important aspects of semantics, may fall out as a natural result of information maximization in informational networks. The attempt below is to simulate fundamental elements of the Gricean picture: in particular, to show within networks of very simple agents the emergence of behavior in accord with the Gricean maxims. What these simulations suggest is that important features of pragmatics, like important aspects of semantics, don't have to be added in a theory of informational networks. They come for free.
In this paper we make a simple theoretical point using a practical issue as an example. The simple theoretical point is that robustness is not 'all or nothing': in asking whether a system is robust one has to ask 'robust with respect to what property?' and 'robust over what set of changes in the system?' The practical issue used to illustrate the point is an examination of degrees of linkage between sub-networks and a pointed contrast in robustness and fragility between the dynamics of (1) contact infection and (2) information transfer or belief change. Time to infection across linked sub-networks, it turns out, is fairly robust with regard to the degree of linkage between them. Time to infection is fragile and sensitive, however, with regard to the type of sub-network involved: total, ring, small world, random, or scale-free. Aspects of robustness and fragility are reversed where it is belief updating with reinforcement rather than infection that is at issue. In information dynamics, the pattern of time to consensus is robust across changes in network type but remarkably fragile with respect to degree of linkage between sub-networks. These results have important implications for public health interventions in realistic social networks, particularly with an eye to ethnic and socio-economic sub-communities, and in social networks with sub-communities changing in structure or linkage. 
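The contact-infection half of this comparison can be sketched in a few lines. The details are illustrative assumptions, not the paper's actual model: two ring sub-networks joined by a variable number of bridge edges, and a simple susceptible-infected process with a fixed per-edge transmission probability.

```python
import random

def linked_rings(n, links):
    # Two n-node ring sub-networks joined by `links` evenly spaced bridges.
    g = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    g.update({i + n: {(i - 1) % n + n, (i + 1) % n + n} for i in range(n)})
    spacing = n // max(links, 1)
    for k in range(links):
        a, b = k * spacing, k * spacing + n
        g[a].add(b)
        g[b].add(a)
    return g

def time_to_full_infection(neighbors, p=0.5, seed=0):
    # SI contagion from node 0: each infected node infects each susceptible
    # neighbor with probability p per step (p is an illustrative parameter).
    rng = random.Random(seed)
    infected = {0}
    t = 0
    while len(infected) < len(neighbors):
        t += 1
        newly = set()
        for i in infected:
            for j in neighbors[i]:
                if j not in infected and rng.random() < p:
                    newly.add(j)
        infected |= newly
        if t > 10_000:  # guard against disconnected graphs
            break
    return t
```

Averaging `time_to_full_infection(linked_rings(n, links))` over many seeds while varying `links` probes the robustness claim for infection; swapping the ring builder for other topologies probes the fragility claim. The belief-updating half would replace the SI rule with a reinforcement update.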
There are many social psychological theories regarding the nature of prejudice, but only one major theory of prejudice reduction: under the right circumstances, prejudice between groups will be reduced with increased contact. On the one hand, the contact hypothesis has a range of empirical support and has been a major force in social change. On the other hand, there are practical and ethical obstacles to any large-scale controlled test of the hypothesis in which relevant variables can be manipulated. Here we construct a spatialized model that tests the core hypothesis in a large array of game-theoretic agents. Robust results offer a new kind of support for the contact hypothesis: results in simulation do accord with a hypothesis of reduced prejudice with increased contact. The spatialized game-theoretic model also suggests a deeper explanation for at least some of the social psychological phenomena at issue.
What is it for a sound or gesture to have a meaning, and how does it come to have one? In this paper, a range of simulations are used to extend the tradition of theories of meaning as use. The authors work throughout with large spatialized arrays of sessile individuals in an environment of wandering food sources and predators. Individuals gain points by feeding and lose points when they are hit by a predator and are not hiding. They can also make sounds heard by immediate neighbours in the array, and can respond to sounds from immediate neighbours. No inherent meaning for these sounds is built into the simulation; under what circumstances they are sent, if any, and what the response to them is, if any, vary initially with the strategies randomized across the array. These sounds do take on a specific function for communities of individuals, however, with any of three forms of strategy change: direct imitation of strategies of successful neighbours, a localized genetic algorithm in which strategies are 'crossed' with those of successful neighbours, and neural net training on the behaviour of successful neighbours. Starting from an array randomized across a large number of strategies, and using any of these modes of strategy change, communities of 'communicators' emerge. Within these evolving communities the sounds heard from immediate neighbours, initially arbitrary across the array, come to be used for very specific communicative functions. 'Communicators' make a particular sound on feeding and respond to that same sound from neighbours by opening their mouths; they make a different sound when hit by a predator and respond to that sound by hiding. Robustly and persistently, even in simple computer models of communities of self-interested agents, something suggestively like signalling emerges and spreads. Keywords: meaning, communication, genetic algorithms, neural networks.
Information and communication technologies are changing the human being, who is increasingly immersed in the virtual world of the Internet. In spite of the apparent similarity of on-line and off-line life, the social laws governing them are different. Based on an analysis of games built around violations of the accepted laws of the off-line world, the censoring of such games, and cheating, we describe how social norms are formed and violated in virtual worlds. Although the creators of games have priority in standardizing the virtual world, society as well as players can influence it, for instance to reduce its realism. A player's violation of the prescribed rules is regarded as cheating and is subject to sanctions, but attitudes toward it are ambivalent, sometimes even positive. Some rules are formed as a result of interaction between players.
The study of a person's existence in Internet space is certainly a timely task, since the Internet is not only a source of innovation but also a cause of transformations in society and of the social and cultural problems that arise in connection with them. The computer network is global: it is used by people of different professions, ages, levels and kinds of education, living around the world and belonging to different cultures. This complicates the problem of developing common standards of behavior, a system of norms and rules that could be widely accepted by all users. On the other hand, Internet space can be viewed as a new form of existence in which physical laws do not operate and, in connection with this, social laws are often called into question. This paper focuses on how social norms regulate relations in Internet space. The authors present a typology of deviant behavior in the network. The empirical basis of the research is a sociological survey of senior students at the Institute of Computer Science and Technology of Peter the Great St. Petersburg Polytechnic University. The survey allows us to identify students' understanding of Internet space. The choice of students is motivated by the fact that IT professionals are considered simultaneously as ordinary users of the network and as future professionals in this field.
Abstract: In the future, it will be possible to create advanced simulations of ancestors in computers. A superintelligent AI could make these simulations very similar to the real past by creating a simulation of all of humanity. Such a simulation would use all available data about the past, including internet archives, DNA samples, advanced nanotech-based archeology, human memories, as well as texts, photos and videos. This means that currently living people will be recreated in such a simulation and, in some sense, "resurrected". Such a "resurrectional simulation" could be deliberately created just for this goal: to return to life all people who have ever lived. The main technical problem of such a simulation will be uncertainty about the past, which increases exponentially for more remote times. This problem could be partly addressed by "acausal trade" between different branches of the multiverse, which will create slightly different versions of the simulation using a quantum randomness generator. Such trade will result in the resurrection of all possible people (including those who existed in other branches). Ethical problems of such a resurrectional simulation include: a) possible resurrection of some people against their will; b) the simulation may create additional suffering; c) the simulation could be used by a hostile AI to return people to life and then torture them. In this work, I explore preliminary ideas about how to address these problems.
Real-world economies are open-ended dynamic systems consisting of heterogeneous interacting participants. Human participants are decision-makers who strategically take into account the past actions and potential future actions of other participants. All participants are forced to be locally constructive, meaning their actions at any given time must be based on their local states; and participant actions at any given time affect future local states. Taken together, these essential properties imply that real-world economies are locally-constructive sequential games. This paper discusses a modeling approach, Agent-based Computational Economics (ACE), that permits researchers to study economic systems from this point of view. ACE modeling principles and objectives are first concisely presented and explained. The remainder of the paper then highlights challenging issues and edgier explorations that ACE researchers are currently pursuing.
This study provides a basic introduction to agent-based modeling (ABM) as a powerful blend of classical and constructive mathematics, with a primary focus on its applicability for social science research. The typical goals of ABM social science researchers are discussed along with the culture-dish nature of their computer experiments. The applicability of ABM for science more generally is also considered, with special attention to physics. Finally, two distinct types of ABM applications are summarized in order to illustrate concretely the duality of ABM: Real-world systems can not only be simulated with verisimilitude using ABM; they can also be efficiently and robustly designed and constructed on the basis of ABM principles.
We apply spatialized game theory and multi-agent computational modeling as philosophical tools: (1) for assessing the primary social psychological hypothesis regarding prejudice reduction, and (2) for pursuing a deeper understanding of the basic mechanisms of prejudice reduction.
Since the 1990s, the social sciences have been undergoing their computational turn. This paper aims to clarify the epistemological meaning of this turn. To do this, we have to discriminate between the different epistemic functions of computation among the diverse uses of computers for modeling and simulating in the social sciences. Because of the introduction of a new, and often more user-friendly, way of formalizing and computing, the questions of the realism of formalisms and of the proof value of computational treatments reemerge. Faced with the spread of computational simulations across all disciplines, some enthusiastic observers claim that we are entering a new era of unity for the social sciences. Finally, the article shows that the conceptual and epistemological distinctions presented in the first sections lead to a more qualified position: the transdisciplinary computational turn is a great one, but it is of a methodological nature.
This paper introduces the concept of Adaptive Rooms, which are virtual environments able to dynamically adapt to users' needs, including 'physical' and cognitive workflow requirements, number of users, and differing cognitive abilities and skills. Adaptive rooms are collections of virtual objects, many of them self-transforming objects, housed in an architecturally active room with information spaces and tools. An ontology of objects used in adaptive rooms is presented. Virtual entities are classified as passive, reactive, active, and information entities, and their sub-categories. Only active objects can be self-transforming. Adaptive Rooms are meant to combine the insights of ubiquitous computing -- that computerization should be everywhere, transparently incorporated -- with the insights of augmented reality -- that everyday objects can be digitally enhanced to carry more information about their use. To display the special potential of adaptive rooms, concrete examples are given to show how the demands of cognitive workflow can be reduced.
Virtual environment landmarks are essential in wayfinding: they anchor routes through a region and provide memorable destinations to return to later. Current virtual environment browsers provide user interface menus that characterize available travel destinations via landmark textual descriptions or thumbnail images. Such characterizations lack the depth cues and context needed to reliably recognize 3D landmarks. This paper introduces a new user interface affordance that captures a 3D representation of a virtual environment landmark into a 3D thumbnail, called a worldlet. Each worldlet is a miniature virtual world fragment that may be interactively viewed in 3D, enabling a traveler to gain first-person experience with a travel destination. In a pilot study conducted to compare textual, image, and worldlet landmark representations within a wayfinding task, worldlet use significantly reduced the overall travel time and distance traversed, virtually eliminating unnecessary backtracking.
Dramatic advances in 3D Web technologies have recently led to widespread development of virtual world Web browsers and 3D content. A natural question is whether 3D thumbnails can be used to find one’s way about such 3D content the way that text and 2D thumbnail images are used to navigate 2D Web content. We have conducted an empirical experiment that shows interactive 3D thumbnails, which we call worldlets, improve travelers’ landmark knowledge and expedite wayfinding in virtual environments.
The goal of philosophy of information is to understand what information is, how it operates, and how to put it to work. But unlike "information" in the technical sense of information theory, what we are interested in is meaningful information. To understand the nature and dynamics of information in this sense we have to understand meaning. What we offer here are simple computational models that show the emergence of meaning and information transfer in randomized arrays of neural nets. These we take to be formal instantiations of a tradition of theories of meaning as use. What they offer, we propose, is a glimpse into the origin and dynamics of at least simple forms of meaning and information transfer as properties inherent in behavioral coordination across a community.
The simulation hypothesis has recently excited renewed interest, especially in the physics and philosophy communities. However, the hypothesis specifically concerns computers that simulate physical universes, which means that to properly investigate it we need to couple computer science theory with physics. Here I do this by exploiting the physical Church-Turing thesis. This allows me to introduce a preliminary investigation of some of the computer science theoretic aspects of the simulation hypothesis. In particular, building on Kleene's second recursion theorem, I prove that it is mathematically possible for us to be in a simulation that is being run on a computer by us. In such a case, there would be two identical instances of us; the question of which of those is "really us" is meaningless. I also show how Rice's theorem provides some interesting impossibility results concerning simulation and self-simulation; briefly describe the philosophical implications of fully homomorphic encryption for (self-)simulation; and briefly investigate the graphical structure of universes simulating universes simulating universes, among other issues. I end by describing some of the possible avenues for future research that this preliminary investigation reveals.
Do the hard problem of consciousness and the simulation argument potentially resolve each other? Here we will argue for four possible views: that consciousness may be possible only (a) outside of, (b) inside and/or outside of, (c) inside of, or (d) interfacing with simulations. The first two of these views have been explored by David Chalmers and are used as jumping off points to introduce the latter two views, which are underdeveloped. If any one of these views could be proven true, this would simultaneously both support a kind of account of properties of consciousness and also provide a kind of sign as to whether or not we are indeed living in a simulation. Given that none of these views are proven true but all are plausible, these considerations should tend to neutralize our credences that we are either simulated or not simulated, by themselves giving us no sign one way or the other.
In this essay, I present an alternative philosophical approach to meta-curating. While the debate surrounding the meta-curating of content often centers around technology like post-digital art, I prefer to take a broader perspective and examine its ontological implications. I consider the realist or anti-realist assumptions of meta-curating through Jean Baudrillard’s concept of seduction and Giorgio Agamben’s idea of spectrality. Both simulacrum and spectrality tend to support an anti-realist approach to meta-curating where the value of the object is made fragile when constantly predetermined by a superficially seductive or spectrally floating context. Against meta-curating as anti-realist, I argue that meta-curation is realist. As a case, the seductive and the spectral in Zaha Hadid’s Morpheus in Macao demonstrate that meta-curating does not completely disregard, but rather raises the question of how to establish an antifragile realism prompted by an architectural object.
La, V. P., & Vuong, Q. H. (2019). bayesvl: Visually learning the graphical structure of Bayesian networks and performing MCMC with ‘Stan’. The Comprehensive R Archive Network (CRAN).
According to the most common interpretation of the simulation argument, we are very likely to live in an ancestor simulation. It is interesting to ask whether some families of simulations are more likely than others inside the space of all simulations. We argue that a natural probability measure is given by computational complexity: easier simulations are more likely to be run. Remarkably, this allows us to extract experimental predictions from the fact that we live in a simulation. For instance, we show that it is very likely that humanity will not achieve interstellar travel and that humanity will not meet other intelligent species in the universe, in turn explaining Fermi's Paradox. Conversely, experimental falsification of any of these predictions would constitute evidence against our reality being a simulation.
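One natural way to make "easier simulations are more likely to be run" precise is a complexity-weighted prior. The following is purely an illustrative formalization on my part, in the style of a Solomonoff prior, and not necessarily the measure the authors construct; here $K(s)$ stands for some computational-complexity cost of running simulation $s$:

```latex
% Illustrative complexity-weighted prior over a space S of simulations:
P(s) \;=\; \frac{2^{-K(s)}}{\sum_{s' \in S} 2^{-K(s')}}, \qquad s \in S
```

Under any such weighting, hypotheses that require computationally expensive features (detailed interstellar travel, many interacting civilizations) receive exponentially less probability mass, which is the shape of the argument sketched in the abstract.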
I introduce the implantation argument, a new argument for the existence of God. Spatiotemporal extensions believed to exist outside of the mind, composing an external physical reality, cannot be composed either of atomlessness or of Democritean atoms; therefore the inner experience of an external reality containing spatiotemporal extensions believed to exist outside of the mind does not represent the external reality, and the mind is a mere cinematic-like mindscreen, implanted into the mind by a creator-God. It will be shown that only a creator-God can be the implanting creator of the mindscreen simulation, and that other simulation theories, such as Bostrom's famous account, that do not involve a creator-God as the mindscreen simulation creator involve a reification fallacy.
Most online platforms are becoming increasingly algorithmically personalized. The question is whether these practices simply satisfy users' preferences or whether something is lost in the process. This article focuses on how to reconcile personalization with the importance of being able to share cultural objects, including fiction, with others. In analyzing two concrete personalization examples from the streaming giant Netflix, several tendencies are observed. One is to isolate users and sometimes entirely eliminate shared-world aspects. Another tendency is to blur the boundary between shared cultural objects and personalized content, which can be misleading and disorienting. A further tendency is for personalization algorithms to be optimized to deceptively prey on desires for content that mirrors one's own lived experience. Some specific "clickbait" practices, often targeting minorities, have received public blowback. These practices show disregard both for honest labeling and for our desires to have access and representation in a shared world. The article concludes that personalization tendencies are moving towards increasingly isolating and disorienting interfaces, but that platforms could be redesigned to better support social world orientation.
Is simulation some new kind of science? We argue that, on the contrary, simulation fits smoothly into existing scientific practice, though it does so in several importantly different ways. Simulations in general, and computer simulations in particular, ought to be understood as techniques which, like many scientific techniques, can be employed in the service of various and diverse epistemic goals. We focus our attention on the way in which simulations can function as (i) explanatory and (ii) predictive tools. We argue that a wide variety of simulations, both computational and physical, are best conceived in terms of a set of common features: initial or input conditions, a mechanism or set of rules, and a set of results or output conditions. Studying simulations in these terms yields a new understanding of their character as well as a body of normative recommendations for the care and feeding of scientific simulations.
This article’s conclusion is that the theories of Einstein are generally correct and will still be relevant in the next century (though modifications will be necessary for the development of quantum gravity). Those Einsteinian theories are Special Relativity, General Relativity, and the subject of a paper he published in 1919, which asked whether gravitation plays a role in the composition of the elementary particles of matter. That paper was the bridge between General Relativity and the Unified Field Theory he sought during the last 25 years of his life. In an article published in the "Annals of Physics" in 1957, Charles Misner and John Wheeler claimed that Einstein's latest equations demonstrated the unified field theory, but Einstein himself felt he had not fully succeeded. The present article begins with Olbers’ paradox (why is the sky dark at night?). It then briefly proceeds to the subjects of Newtonian gravity, quantum entanglement, gravitational waves, E=mc^2, dark energy, dark matter, cosmic expansion, redshift, blueshift, the cosmic microwave background, the 1st Law of Thermodynamics, and an explanation of advanced waves travelling back in time. The section “vector-tensor-scalar geometry” touches on mass, quantum spin, the Higgs boson and Higgs field, stellar jets, the pervasiveness of photons and gravitons, and supersymmetry. Then come half a dozen paragraphs referring to the formation of planets, black holes, and the bosons of the weak and strong nuclear forces. They end with Descartes’ space-matter relation.
Also added are paragraphs about simply-connected mathematics, non-orientability, consciousness, the Law of Falling Bodies, the multiverse, space-time travel developed from maths’ Brouwer Fixed Point Theorem and from an experiment in electrical engineering performed at Yale University, the development from future space-time travel of human flight in the manner of fiction’s Superman and Supergirl, as well as downloaded band-gap implants in the brain that could deal with forms of matter. They could add or delete anything and everything we choose by emulating computers’ copy/paste function to add things, as well as their delete function to remove things. To complete my seemingly unusual ideas, six sections are added: 1) “Advanced and Retarded Waves” is extended to include dinosaurs, ageing, and photography; 2) there’s a bit about space-time warping and “imaginary” computers; 3) several paragraphs about restoring health (even gaining immortality) by using gravity; 4) a section about superconductivity and the electric or magnetic fields of planets (this section mentions Mercury, Planet 9 and precession); 5) a section titled EXPLAINING OCEAN TIDES WHEN GENERAL RELATIVITY SAYS GRAVITY IS A PUSH CAUSED BY THE CURVATURE OF SPACE-TIME (this has subsections about M-Sigma, Geysers on Saturn’s Moon Enceladus, and A Brief History of Gravity); plus 6) the potential of COVID-19 to create the Golden Rule, world peace, eternal life, and a non-economic world that doesn’t use any form of money (no cash, credit cards, digital currency, etc.). The final section is called DISTANT-FUTURE SCIENCE INTERPRETED BY RELIGIONS AS SUPERNATURAL and introduces an idea for becoming immortal in these physical bodies. If the Theory of Everything sought by physicists applies to all space-time, then every person’s brain must be entangled with the 22nd century (and far beyond that time too).
What is the status of a cat in a virtual reality environment? Is it a real object? Or part of a fiction? Virtual realism, as defended by D. J. Chalmers, takes it to be a virtual object that really exists, that has properties and is involved in real events. His preferred specification of virtual realism identifies the cat with a digital object. The project of this paper is to use a comparison between virtual reality environments and scientific computer simulations to critically engage with Chalmers’s position. I first argue that, if it is sound, his virtual realism should also be applied to objects that figure in scientific computer simulations, e.g. to simulated galaxies. This leads to a slippery slope because it implies an unreasonable proliferation of digital objects. A philosophical analysis of scientific computer simulations suggests an alternative picture: The cat and the galaxies are parts of fictional models for which the computer provides model descriptions. This result motivates a deeper analysis of the way in which Chalmers builds up his realism. I argue that he buys realism too cheap. For instance, he does not really specify what virtual objects are supposed to be. As a result, rhetoric aside, his virtual realism isn’t far from a sort of fictionalism.
Abstract. Boltzmann brains (BBs) are minds which randomly appear as a result of thermodynamic or quantum fluctuations. In this article, the question of whether we are BBs, and the observational consequences if we are, is explored. To address this problem, a typology of BBs is created, and the evidence is compared with the Simulation Argument. Based on this comparison, we conclude that while the existence of a “normal” BB is either unlikely or irrelevant, BBs with some ordering may have observable consequences. There are two types of such ordered BBs: Boltzmannian typewriters (including Boltzmannian simulations), and chains of observer moments. Notably, the existence or non-existence of BBs may have practical applications for measuring the size of the universe, achieving immortality, or even manipulating the observed probability of events. Disclaimer and trigger warning: some people have emotional breakdowns when thinking about the topics described in the article, especially the “flux universe”. However, everything eventually adds up to normality.
The article examines the phenomenon of virtual embodiment, which not only occupies an important place among the interests of the humanities but has also entered, as an element of everyday life, into the lives of a large part of contemporary humanity thanks to the Internet. Concepts combining virtuality and corporeality, and the key approaches to analyzing this combination, are considered. The subject of analysis is anonymous forums as a striking example of a configuration of the virtual body that differs radically from others because of its anonymous mode of representation. The information an individual makes publicly available is perceived as his or her embodiment in the literal sense of the word, and the digital format of that information shapes the characteristics of the being of the individual's virtual body.
I present an argument that, for any computer-simulated civilization we design, the mathematical knowledge recorded by that civilization has one of two limitations: it is either untrustworthy or weaker than our own mathematical knowledge. This is paradoxical because it seems that nothing prevents us from building in all sorts of advantages for the inhabitants of such a simulation.
Do we live in a computer simulation? I will present an argument that the results of a certain experiment constitute empirical evidence that we do not live in at least one type of simulation. The type of simulation ruled out is very specific. Perhaps that is the price one must pay to make any kind of Popperian progress.
In his famous “It from Bit” essay, John Wheeler contends that the stuff of the physical universe (“it”) arises from information (“bits”: encoded yes-or-no answers). Wheeler’s question and assumptions are re-examined from a post-Aspect-experiment perspective. Information is examined and discussed in terms of classical information and “quanglement” (nonlocal state sharing). An argument is made that the universe may arise from (or together with) quanglement, but not via classical yes/no information coding.
Nick Bostrom’s recently patched “simulation argument” (Bostrom in Philos Q 53:243–255, 2003; Bostrom and Kulczycki in Analysis 71:54–61, 2011) purports to demonstrate the probability that we “live” now in an “ancestor simulation”, that is, a simulation of a period prior to that in which a civilization more advanced than our own (“post-human”) becomes able to simulate such a state of affairs as ours. As the simulations under consideration resemble “brains in vats” (BIVs) and may appear open to similar objections, the paper begins by reviewing objections to BIV-type proposals, specifically those due to a presumed mad envatter. As a counterexample, we explore the motivating rationale behind current work in the development of psychologically realistic social simulations. Further concerns about rendering human cognition in a computational medium are confronted through a review of current dynamic systems models of cognitive agency. In these models, aspects of the human condition are reproduced that may in other forms be considered incomputable, i.e., political voice, predictive planning, and consciousness. The paper then argues that simulations afford a unique potential to secure a post-human future, and may be necessary for a pre-post-human civilization like our own to achieve and to maintain a post-human situation. Long-standing philosophical interest in tools of this nature, for Aristotle’s “statesman” and more recently for E.O. Wilson in the 1990s, is observed. Self-extinction-level threats at State and individual levels of organization are compared, and a likely dependence on large-scale psychologically realistic simulations to get past self-extinction-level threats is projected. In the end, Bostrom’s basic argument for the conviction that we exist now in a simulation is reaffirmed.
Both patched versions of the Bostrom/Kulczycki simulation argument contain serious objective errors, discovered while attempting to formalize them in predicate logic. The English glosses of both versions rely on badly misleading readings of vague magnitude terms, from which the arguments' impressiveness benefits. We fix the errors, prove optimal versions of the arguments, and argue that both are much less impressive than they originally appeared. Finally, we provide a guide for readers to evaluate the simulation argument for themselves, using well-justified settings of the argument parameters that have simple, accurate statements in English and are easier to understand and critique than the statements in the original paper.