The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We analyze whether, as the theory postulates, agents seek to escape a brutish "state of nature" by surrendering rights to an absolute sovereign in exchange for order and security. Our experiments unveil an alignment: Initially, agents engage in unrestrained conflict, mirroring Hobbes's depiction of the state of nature. However, as the simulation progresses, social contracts emerge, leading to the authorization of an absolute sovereign and the establishment of a peaceful commonwealth founded on mutual cooperation. This congruence between our LLM agent society's evolutionary trajectory and Hobbes's theoretical account indicates LLMs' capability to model intricate social dynamics and potentially replicate forces that shape human societies. By enabling such insights into group behavior and emergent societal phenomena, LLM-driven multi-agent simulations, while unable to simulate all the nuances of human behavior, may hold potential for advancing our understanding of social structures, group dynamics, and complex human systems.
The Simulation Hypothesis proposes that all of reality is in fact an artificial simulation, analogous to a computer simulation. Outlined here is a method for programming relativistic mass, space and time at the Planck level as applicable for use in the Planck Universe-as-a-Simulation Hypothesis. For the virtual universe the model uses a 4-axis hyper-sphere that expands in incremental steps (the simulation clock-rate). Virtual particles that oscillate between an electric wave-state and a mass point-state are mapped within this hyper-sphere, the oscillation driven by this expansion. Particles are assigned an N-S axis which determines the direction in which they are pulled along by the expansion, thus an independent particle motion may be dispensed with. Only in the mass point-state do particles have fixed hyper-sphere co-ordinates. The rate of expansion translates to the speed of light, and so in terms of the hyper-sphere co-ordinates all particles (and objects) travel at the speed of light; time (as the clock-rate) and velocity (as the rate of expansion) are therefore constant. However, photons, as the means of information exchange, are restricted to lateral movement across the hyper-sphere, thus giving the appearance of a 3-D space. Lorentz formulas are used to translate between this 3-D space and the hyper-sphere co-ordinates, relativity resembling the mathematics of perspective.
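The translation the abstract describes between hyper-sphere co-ordinates (everything moving at the expansion rate c) and apparent lateral motion in 3-D space reduces to the standard Lorentz factor. A minimal sketch of that correspondence, with illustrative function names not taken from the paper:

```python
import math

C = 299792458.0  # speed of light in m/s: the hyper-sphere expansion rate

def lorentz_gamma(v_lateral):
    """Gamma factor for an object whose apparent lateral 3-D speed is v_lateral.

    In the model every particle travels at c along the expansion axis, so an
    observed lateral velocity v corresponds to the usual time-dilation factor:
    relativity as the mathematics of perspective.
    """
    beta = v_lateral / C
    return 1.0 / math.sqrt(1.0 - beta * beta)

# An object moving laterally at 0.6c dilates by the familiar factor 1.25
print(lorentz_gamma(0.6 * C))  # 1.25
```

The sketch only illustrates the claimed equivalence; the paper's own mapping between the two co-ordinate systems is richer than a single factor.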
An orbital simulation program is described that uses a geometrical approach to modeling gravitational and atomic orbits at the Planck scale. Orbiting objects A, B, C... are sub-divided into points, each point representing 1 unit of Planck mass; for example, a 1 kg satellite would divide into 1 kg/Planck mass = 45940509 points. Each point in object A then forms a rotating orbital pair with every point in objects B, C..., resulting in a universe-wide, n-body network of rotating point-to-point orbital pairs. Each orbital pair rotates 1 unit of Planck length per unit of Planck time at velocity c in hypersphere space co-ordinates; the results are then summed and averaged to give the new co-ordinates, and the program repeats. When these rotations are mapped over time in a 3D space, objects A, B, C... appear to be orbiting each other. The basic simulation uses the fine structure constant alpha as an orbital constant to simulate gravitational orbit parameters. As each orbital comprises only 2 points, 1 at each orbital pole, information regarding the objects A, B, C... (momentum, size, center of mass, barycenter, etc.) is not required; instead only the start positions (co-ordinates) of each point are used as initial inputs. Each point, by having a mass of Planck mass, is itself a construct of multiple particles, and so we can also form particle-to-particle orbital pairs; transition energies for the H atom (an electron-proton orbital pair) are well documented and so can be used as reference. Atomic orbital pairs rotate slower than gravitational orbitals by a factor of alpha, 'gravity' therefore being stronger than the 'electric force' when measured at the Planck scale.
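The core loop the abstract describes (rotate every point-to-point pair, then sum and average each point's candidate positions) can be sketched in a few lines. The exact per-tick update rule is not specified in the abstract, so this sketch expresses the rotation increment as an angle about the pair's midpoint, which is an assumption for illustration only:

```python
import math

def rotate_pair(p1, p2, step_angle):
    """Rotate one point-to-point orbital pair about its midpoint.

    The abstract specifies 1 unit of Planck length per unit of Planck time;
    here the increment is expressed as an angle (an assumption of this sketch).
    """
    mx = (p1[0] + p2[0]) / 2.0
    my = (p1[1] + p2[1]) / 2.0
    cos_a, sin_a = math.cos(step_angle), math.sin(step_angle)

    def rot(p):
        dx, dy = p[0] - mx, p[1] - my
        return (mx + dx * cos_a - dy * sin_a, my + dx * sin_a + dy * cos_a)

    return rot(p1), rot(p2)

def step(points, step_angle):
    """One tick: rotate every pair, then average each point's positions
    over all the pairs it belongs to, giving the new co-ordinates."""
    n = len(points)
    acc = [(0.0, 0.0)] * n
    counts = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            a, b = rotate_pair(points[i], points[j], step_angle)
            acc[i] = (acc[i][0] + a[0], acc[i][1] + a[1]); counts[i] += 1
            acc[j] = (acc[j][0] + b[0], acc[j][1] + b[1]); counts[j] += 1
    return [(x / c, y / c) for (x, y), c in zip(acc, counts)]

# With only two points, the averaging degenerates to a pure rotation
# about their shared midpoint.
pts = [(1.0, 0.0), (-1.0, 0.0)]
pts = step(pts, math.pi / 2)
```

Iterating `step` and plotting the trajectories is what, per the abstract, makes the objects appear to orbit one another in 3-D space.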
Outlined here is a simulation hypothesis approach that uses an expanding (the simulation clock-rate measured in units of Planck time) 4-axis hyper-sphere and mathematical particles that oscillate between an electric wave-state and a mass (unit of Planck mass per unit of Planck time) point-state. Particles are assigned a spin axis which determines the direction in which they are pulled by this (hyper-sphere pilot wave) expansion, thus all particles travel at, and only at, the velocity of expansion (the origin of $c$); however, only the particle point-state has definable co-ordinates within the hyper-sphere. Photons are the mechanism of information exchange; as they lack a mass state they can only travel laterally (in hypersphere co-ordinate terms) between particles, and so this hypersphere expansion cannot be directly observed. Relativity then becomes the mathematics of perspective, translating between the absolute (hypersphere) and the relative motion (3D space) co-ordinate systems. A discrete 'pixel' lattice geometry is assigned as the gravitational space. Units of $\hbar c$ 'physically' link particles into orbital pairs. As these are direct particle-to-particle links, a gravitational force between macro objects is not required, the gravitational orbit being the sum of these individual orbiting pairs. A 14.6 billion year old hyper-sphere (the sum of Planck black-hole units) has similar parameters to the cosmic microwave background. The Casimir force is a measure of the background radiation density.
Both patched versions of the Bostrom/Kulczycki simulation argument contain serious objective errors, discovered while attempting to formalize them in predicate logic. The English glosses of both versions involve badly misleading meanings of vague magnitude terms, which their impressiveness benefits from. We fix the errors, prove optimal versions of the arguments, and argue that both are much less impressive than they originally appeared. Finally, we provide a guide for readers to evaluate the simulation argument for themselves, using well-justified settings of the argument parameters that have simple, accurate statements in English, which are easier to understand and critique than the statements in the original paper.
Forty-two years ago, Capra published "The Tao of Physics" (Capra, 1975). In this book (page 17) he writes: "The exploration of the atomic and subatomic world in the twentieth century has … necessitated a radical revision of many of our basic concepts" and that, unlike 'classical' physics, the sub-atomic and quantum "modern physics" shows resonances with Eastern thought and "leads us to a view of the world which is very similar to the views held by mystics of all ages and traditions." This article stresses an analogous situation in biology with respect to a new theoretical approach for studying living systems, Integral Biomathics (IB), which also exhibits some resonances with Eastern thought. Building on earlier research in cybernetics and theoretical biology, IB has been developed since 2011 by over 100 scientists from a number of disciplines who have been exploring a substantial set of theoretical frameworks. From that effort, the need for a robust core model utilizing advanced mathematics and computation adequate for understanding the behavior of organisms as dynamic wholes was identified. To this end, the authors of this article have proposed WLIMES (Ehresmann and Simeonov, 2012), a formal theory for modeling living systems integrating both the Memory Evolutive Systems (Ehresmann and Vanbremeersch, 2007) and the Wandering Logic Intelligence (Simeonov, 2002b). Its principles will be recalled here with respect to their resonances with Eastern thought.
Are acts of violence performed in virtual environments ever morally wrong, even when no other persons are affected? While some such acts surely reflect deficient moral character, I focus on the moral rightness or wrongness of acts. Typically it's thought that, on Kant's moral theory, an act of virtual violence is morally wrong (i.e., violates the Categorical Imperative) only if the act mistreats another person. But I argue that, on Kant's moral theory, some acts of virtual violence can be morally wrong, even when no other persons or their avatars are affected. First, I explain why many have thought that, in general on Kant's moral theory, virtual acts affecting no other persons or their avatars can't violate the Categorical Imperative. For there are real world acts that clearly do, but it seems that when we consider the same sorts of acts done alone in a virtual environment, they don't violate the Categorical Imperative, because no other persons were involved. But then, how could any virtual acts like these, that affect no other persons or their avatars, violate the Categorical Imperative? I then argue that there indeed can be such cases of morally wrong virtual acts—some due to an actor's having erroneous beliefs about morally relevant facts, and others due not to error, but to the actor's intention leaving out morally relevant facts while immersed in a virtual environment. I conclude by considering some implications of my arguments for both our present technological context as well as the future.
The commercial VR/AR marketplace is gaining ground and is becoming an ever larger and more significant component of the global economy. While much attention has been paid to the commercial promise of VR/AR, comparatively little attention has been given to the ethical issues that VR/AR technologies introduce. We here examine existing codes of ethics proposed by the ACM and IEEE and apply them to the unique ethical facets that VR/AR introduces. We propose a VR/AR code of ethics for developers and apply this code to several commercial applications.
This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the article tackles the question of whether including artificial intelligences that are entitled to moral consideration in virtual environments constitutes an immoral action on the part of human creators. To address this problem, the article draws on three conceptual lenses from the philosophical branch of ethics: the problem of parenthood and procreation, the question concerning the moral status of animals, and the classical problem of evil. Using a thought experiment, the concluding section proposes a contractualist answer to the question posed in this article. The same section also emphasizes the potential need to reframe our understanding of the design of virtual environments and their future stakeholders.
Emerging technologies such as cloud computing, augmented and virtual reality, artificial intelligence and robotics, among others, are transforming the field of manufacturing and industry as a whole in unprecedented ways. This fourth industrial revolution is consequentially changing how operators that have been crucial to industry success go about their practices in industrial environments. This short paper briefly introduces the notion of the Operator 4.0 as well as how this novel way of conceptualizing the human operator necessarily implicates human values in the technologies that constitute it. Similarly, the design methodology known as value sensitive design (VSD) is drawn upon to discuss how these Operator 4.0 technologies can be designed for human values and, conversely, how a potential value-sensitive Operator 4.0 can be used to strengthen the VSD methodology in developing novel technologies.
What is the status of a cat in a virtual reality environment? Is it a real object? Or part of a fiction? Virtual realism, as defended by D. J. Chalmers, takes it to be a virtual object that really exists, that has properties and is involved in real events. His preferred specification of virtual realism identifies the cat with a digital object. The project of this paper is to use a comparison between virtual reality environments and scientific computer simulations to critically engage with Chalmers's position. I first argue that, if it is sound, his virtual realism should also be applied to objects that figure in scientific computer simulations, e.g. to simulated galaxies. This leads to a slippery slope because it implies an unreasonable proliferation of digital objects. A philosophical analysis of scientific computer simulations suggests an alternative picture: The cat and the galaxies are parts of fictional models for which the computer provides model descriptions. This result motivates a deeper analysis of the way in which Chalmers builds up his realism. I argue that he buys realism too cheap. For instance, he does not really specify what virtual objects are supposed to be. As a result, rhetoric aside, his virtual realism isn't far from a sort of fictionalism.
La, V. P., & Vuong, Q. H. (2019). bayesvl: Visually learning the graphical structure of Bayesian networks and performing MCMC with ‘Stan’. The Comprehensive R Archive Network (CRAN).
This paper draws on the notion of the 'project,' as developed in the existential philosophy of Heidegger and Sartre, to articulate an understanding of the existential structure of engagement with virtual worlds. By this philosophical understanding, the individual's orientation towards a project structures a mechanism of self-determination, meaning that the project is understood essentially as the project to make oneself into a certain kind of being. Drawing on existing research from an existential-philosophical perspective on subjectivity in digital game environments, the notion of a 'virtual subjectivity' is proposed to refer to the subjective sense of being-in-the-virtual-world. The paper proposes an understanding of virtual subjectivity as standing in a nested relation to the individual's subjectivity in the actual world, and argues that it is this relation that allows virtual world experience to gain significance in the light of the individual's projectual existence. The arguments advanced in this paper pave the way for a comprehensive understanding of the transformative, self-transformative, and therapeutic possibilities and advantages afforded by virtual worlds.
Spatially situated opinions that can be held with different degrees of conviction lead to spatiotemporal patterns such as clustering (homophily), polarization, and deadlock. Our goal is to understand how sensitive these patterns are to changes in the local nature of interactions. We introduce two different mixing mechanisms, spatial relocation and nonlocal interaction ("telephoning"), to an earlier fully spatial model (no mixing). Interestingly, the mechanisms that create deadlock in the fully spatial model have the opposite effect when there is a sufficient amount of mixing. With telephoning, not only are polarization and deadlock broken up, but consensus is hastened. The effects of mixing by relocation are even more pronounced. Further insight into these dynamics is obtained for selected parameter regimes via comparison to the mean-field differential equations.
This book addresses key conceptual issues relating to the modern scientific and engineering use of computer simulations. It analyses a broad set of questions, from the nature of computer simulations to their epistemological power, including the many scientific, social and ethical implications of using computer simulations. The book is written in an easily accessible narrative, one that weaves together philosophical questions and scientific technicalities. It will thus appeal equally to all academic scientists, engineers, and researchers in industry interested in questions related to the general practice of computer simulations.
The Simulation Hypothesis proposes that all of reality, including the earth and the universe, is in fact an artificial simulation, analogous to a computer simulation, and as such our reality is an illusion. In this essay I describe a method for programming mass, length, time and charge (MLTA) as geometrical objects derived from the formula for a virtual electron; $f_e = 4\pi^2r^3$ ($r = 2^6 3 \pi^2 \alpha \Omega^5$) where the fine structure constant $\alpha$ = 137.03599... and $\Omega$ = 2.00713494... are mathematical constants and the MLTA geometries are; M = (1), T = ($2\pi$), L = ($2\pi^2\Omega^2$), A = ($4\pi \Omega)^3/\alpha$. As objects they are independent of any set of units and also of any numbering system, terrestrial or alien. As the geometries are interrelated according to $f_e$, we can replace designations such as ($kg, m, s, A$) with a rule set; mass = $u^{15}$, length = $u^{-13}$, time = $u^{-30}$, ampere = $u^{3}$. The formula $f_e$ is unit-less ($u^0$) and combines these geometries in the following ratios, M$^9$T$^{11}$/L$^{15}$ and (AL)$^3$/T; as such these ratios are unit-less. To translate MLTA to their respective SI Planck units requires an additional 2 unit-dependent scalars. We may thereby derive the CODATA 2014 physical constants via the 2 (fixed) mathematical constants ($\alpha, \Omega$), 2 dimensioned scalars and the rule set $u$. As all constants can be defined geometrically, the least precise constants ($G, h, e, m_e, k_B$...) can also be solved via the most precise ($c, \mu_0, R_\infty, \alpha$), numerical precision then limited by the precision of the fine structure constant $\alpha$.
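The claimed unit-lessness of the two ratios follows directly from the exponents in the rule set; a quick arithmetic check of the abstract's own numbers:

```python
# Exponents of u assigned by the abstract's rule set
M, L, T, A = 15, -13, -30, 3

# M^9 T^11 / L^15 should reduce to u^0 (unit-less):
# 9*15 + 11*(-30) - 15*(-13) = 135 - 330 + 195 = 0
assert 9 * M + 11 * T - 15 * L == 0

# (A L)^3 / T should likewise be unit-less:
# 3*(3 - 13) - (-30) = -30 + 30 = 0
assert 3 * (A + L) - T == 0

print("both ratios are unit-less (u^0)")
```

This only verifies the exponent bookkeeping stated in the abstract, not the physical claims built on it.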
According to some researchers, noise is typically conceived as a factor detrimental to cognitive performance, affecting perception, decision-making, and motor function. However, recent studies associate white noise with concentration and calm. This research therefore seeks to establish the impact of binaural white noise on working and short-term visual memory performance, alpha-beta brain activity, and attention-meditation, through the use of two auditory stimuli with frequency ranges of 100 to 450 Hz and 100 to 750 Hz. The study was conducted in the city of Montes Claros, Brazil, where seven participants (n = 7) were evaluated, with a mean age of 36.71±, in two age groups (GP1: 21 to 30, and GP2: 41 to 50) with secondary to university education. In the experimental process, short-term visual memory tests were carried out using the CogniFit™ general cognitive assessment battery (CAB), and brain activity was recorded using a monopolar electroencephalogram and the eSense™ algorithms. From the results obtained, and through the use of statistical tests, we can infer that binaural white noise with oscillations from 100 to 750 Hz contributed to short-term visual working memory performance.
In this paper I present a study of the state of the art of the so-called 'epistemology of computer simulations.' In particular, I focus on the various works of Eric Winsberg, who is one of the most prolific and systematic philosophers on this topic. In addition to analyzing Winsberg's work, and building on his writings and those of other philosophers, I will show that there are good reasons to think that the traditional epistemology of science is not sufficient for the analysis of computer simulations.
This article aims to develop a new account of scientific explanation for computer simulations. To this end, two questions are answered: what is the explanatory relation for computer simulations? And what kind of epistemic gain should be expected? For several reasons tailored to the benefits and needs of computer simulations, these questions are better answered within the unificationist model of scientific explanation. Unlike previous efforts in the literature, I submit that the explanatory relation is between the simulation model and the results of the simulation. I also argue that our epistemic gain goes beyond the unificationist account, encompassing a practical dimension as well.
Within dramatherapy and psychodrama, the term 'de-roling' indicates a set of activities that assist the subjects of therapy in 'disrobing' themselves from their fictional characters. Starting from the psychological needs and the therapeutic goals that 'de-roling' techniques address in dramatherapy and psychodrama, this text provides a broader understanding of procedures and exercises that define and ease transitional experiences across cultural practices such as religious rituals and spatial design. After this introductory section, we propose a tentative answer as to why game studies and virtual world research largely ignored processes of 'roling' and 'de-roling' that separate the lived experience of role-play from our everyday sense of the self. The concluding sections argue that de-roling techniques are likely to become more relevant, both academically and in terms of their practical applications, with the growing diffusion of virtual technologies in social practices. The relationships we can establish with ourselves and with our surroundings in digital virtual worlds are, we argue, only partially comparable with similar occurrences in pre-digital practices of subjectification. We propose a perspective according to which the accessibility and immersive phenomenological richness of virtual reality technologies are likely to exacerbate the potentially dissociative effects of virtual reality applications. This text constitutes an initial step towards framing specific socio-technical concerns and starting a timely conversation that binds together dramatherapy, psychodrama, game studies, and the design of digital virtual worlds.
Opinions are rarely binary; they can be held with different degrees of conviction, and this expanded attitude spectrum can affect the influence one opinion has on others. Our goal is to understand how different aspects of influence lead to recognizable spatio-temporal patterns of opinions and their strengths. To do this, we introduce a stochastic spatial agent-based model of opinion dynamics that includes a spectrum of opinion strengths and various possible rules for how the opinion strength of one individual affects the influence that this individual has on others. Through simulations, we find that even a small amount of amplification of opinion strength through interaction with like-minded neighbors can tip the scales in favor of polarization and deadlock.
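The class of model described here (binary opinions held with graded strengths, strength amplified by like-minded neighbors) can be sketched in a few lines. The specific update rules and parameters below are illustrative assumptions, not the paper's exact model:

```python
import random

def simulate(n=100, steps=20000, amplify=0.2, max_strength=3, seed=1):
    """Minimal 1-D lattice sketch of opinion dynamics with strengths.

    Each site holds (opinion, strength): opinion in {-1, +1}, strength in
    1..max_strength. Each event pairs a random site with a random neighbor:
    agreement amplifies the site's strength with probability `amplify`;
    disagreement weakens the weaker side, which flips when its strength
    reaches 0. (All rules here are this sketch's assumptions.)
    """
    rng = random.Random(seed)
    ops = [rng.choice([-1, 1]) for _ in range(n)]
    strg = [1] * n
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + rng.choice([-1, 1])) % n  # periodic lattice
        if ops[i] == ops[j]:
            if rng.random() < amplify:
                strg[i] = min(max_strength, strg[i] + 1)
        else:
            k = i if strg[i] <= strg[j] else j  # the weaker side yields
            strg[k] -= 1
            if strg[k] == 0:
                ops[k] = ops[j if k == i else i]
                strg[k] = 1
    return ops, strg

ops, strg = simulate()
print(sum(1 for o in ops if o == 1), "sites hold opinion +1")
```

Sweeping `amplify` and inspecting the spatial pattern of `ops` is the kind of experiment through which the paper observes the tip toward polarization and deadlock.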
Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era is facing is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.
Problems and questions originally raised by Robert Nozick in his famous thought experiment 'The Experience Machine' are frequently invoked in the current discourse concerning virtual worlds. Having conceptualized his Gedankenexperiment in the early seventies, Nozick could not fully anticipate the numerous and profound ways in which the diffusion of computer simulations and video games came to affect the Western world.

This article does not articulate whether or not the virtual worlds of video games, digital simulations, and virtual technologies currently actualize (or will actualize) Nozick's thought experiment. Instead, it proposes a philosophical reflection that focuses on human experiences in the upcoming age of their 'technical reproducibility'.

In pursuing that objective, this article integrates and supplements some of the interrogatives proposed in Robert Nozick's thought experiment. More specifically, through the lenses of existentialism and philosophy of technology, this article tackles the technical and cultural heritage of virtual reality, and unpacks its potential to function as a tool for self-discovery and self-construction. Ultimately, it provides an interpretation of virtual technologies as novel existential domains. Virtual worlds will not be understood as the contexts where human beings can find completion and satisfaction, but rather as instruments that enable us to embrace ourselves and negotiate with various aspects of our (individual as well as collective) existence in previously-unexperienced guises.
Making good decisions depends on having accurate information – quickly, and in a form in which it can be readily communicated and acted upon. Two features of medical practice can help: deliberation in groups and the use of scores and grades in evaluation. We study the contributions of these features using a multi-agent computer simulation of groups of physicians. One might expect individual differences in members' grading standards to reduce the capacity of the group to discover the facts on which well-informed decisions depend. Observations of the simulated groups suggest on the contrary that this kind of diversity can in fact be conducive to epistemic performance. Sometimes, it is adopting common standards that may be expected to result in poor decisions.
This article examines the phenomenon of virtual embodiment, which not only occupies an important place among the interests of the humanities but has also, thanks to the Internet, entered everyday life for a significant part of contemporary humanity. Concepts that combine virtuality and embodiment are considered, along with the key approaches to analyzing this combination. The subject of analysis is anonymous forums as a striking example of a configuration of the virtual body that differs radically from others owing to its anonymous mode of representation. The information an individual places in public access is perceived as their embodiment in the literal sense of the word, and the digital format of this information shapes the characteristics of the existence of the individual's virtual body.
An overview of my work arguing that peer-to-peer computer networking (the Peer-to-Peer Simulation Hypothesis) may be the best explanation of quantum phenomena and a number of perennial philosophical problems.
Many kinds of human activity cannot be separated from technology as a tool for carrying out tasks more effectively and efficiently, and technological breakthroughs in informatics have produced communication-based technologies that are not only effective but also enjoyable. Social media and online games are two of the many products of technological development in this field. In both media, humans experience a displacement of reality from the real world into a virtual world. The philosophy of technology formulated by Don Ihde is used as an analytical instrument to examine and understand this phenomenon. The study shows that the change in human perception brought about by these two technologies can significantly influence cultural development.
Could a person ever transcend what it is like to be in the world as a human being? Could we ever know what it is like to be other creatures? Questions about the overcoming of a human perspective are not uncommon in the history of philosophy. In the last century, those very interrogatives were notably raised by American philosopher Thomas Nagel in the context of philosophy of mind. In his 1974 essay What is it Like to Be a Bat?, Nagel offered reflections on human subjectivity and its constraints. Nagel's insights were elaborated before the social diffusion of computers and could not anticipate the cultural impact of technological artefacts capable of materializing interactive simulated worlds as well as disclosing virtual alternatives to the "self." In this sense, this article proposes an understanding of computers as epistemological and ontological instruments. The embracing of a phenomenological standpoint entails that philosophical issues are engaged and understood from a fundamentally practical perspective. In terms of philosophical praxis, or "applied philosophy," I explored the relationship between human phenomenologies and digital mediation through the design and the development of experimental video games. For instance, I have conceptualized the first-person action-adventure video game Haerfest (Technically Finished 2009) as a digital re-formulation of the questions posed in Nagel's famous essay. Experiencing a bat's perceptual equipment in Haerfest practically corroborates Nagel's conclusions: there is no way for humans to map, reproduce, or even experience the consciousness of an actual bat. Although unverifiable in its correspondence to that of bats, Haerfest still grants access to experiences and perceptions that, albeit still inescapably within the boundaries of human kinds of phenomenologies, were inaccessible to humans prior to the advent of computers.
Phenomenological alterations and virtual experiences disclosed by interactive digital media cannot take place without a shift in human kinds of ontologies, a shift which this study recognizes as the fundamental ground for the development of a new humanism (I deem it necessary to specify that I am not utilizing the term "humanism" in its common connotation, that is to say the one that emerged from the encounter between the Roman civilization and the late Hellenistic culture. According to this conventional acceptation, humanism indicates the realization of the human essence through "scholarship and training in good conduct" (Heidegger 1998, p. 244). However, Heidegger observed that this understanding of humanism does not truly cater to the original essence of human beings, but rather "is determined with regard to an already established interpretation of nature, history, world, and […] beings as a whole." (Heidegger 1998, p. 245) The German thinker found this way of embracing humanism reductive: a by-product of Western metaphysics. As Heidegger himself specified in his 1949 essay Letter on Humanism, his opposition to the traditional acceptation of the term humanism does not advocate for the "inhuman" or a return to the "barbaric" but stems instead from the belief that humanism can only be properly understood and restored in culture as a more original way of meditating and caring for humanity and understanding its relationship with Being.). Additionally, this study explicitly proposes and exemplifies the use of interactive digital technology as a medium for testing, developing and disseminating philosophical notions, problems and hypotheses in ways which are alternative to the traditional textual one. Presented as virtual experiences, philosophical concepts can be accessed without the filter of subjective imagination.
In a persistent, interactive, simulated environment, I claim that the crafting and the mediation of thought take on a novel, projective (In Martin Heidegger’s 1927 Being and Time, the term “projectivity” indicates the way a Being opens to the world in terms of its possibilities of being (Heidegger 1962, pp. 184–185, BT 145). Inspired by Heidegger’s and Vilém Flusser’s work in the field of philosophy of technology, as well as Helmuth Plessner’s anthropological position presented in his 1928 book Die Stufen des Organischen und der Mensch. Einleitung in die philosophische Anthropologie, this study understands the concept of projectivity as the innate openness of human beings to construct themselves and their world by means of technical artefacts. In this sense, this study proposes a fundamental understanding of technology as the materialization of mankind’s tendency to overcome its physical, perceptual and communicative limitations.) dimension which I propose to call “augmented ontology.”
So you’re leaving the cinema, having just been blown away by Inception, and your mind is buzzing. There is a buzz around you too. Everyone’s asking each other: “Does Cobb’s spinning top fall?” Throughout Inception, Cobb has been struggling to achieve two things: to get back home so he can see his kids again, and to keep a grip on reality in the process. What ends up happening to Cobb’s totem bears on both of these struggles. So most people who watch Inception think that the whole point of the movie hinges on whether or not Cobb’s top keeps spinning. Unfortunately, most of them have missed the point! The correct answer to “Does Cobb’s spinning top fall?” is: “Who cares!” The truth, and in my opinion the main point of Inception, is that reality doesn’t really matter.
With the rapidly growing amounts of information, visualization is becoming increasingly important, as it allows users to easily explore and understand large amounts of information. However, the field of information visualization currently lacks sufficient theoretical foundations. This article addresses foundational questions connecting information visualization with computing and philosophy studies. The idea of multiscale information granulation is described based on two fundamental concepts: information (structure) and computation (process). The new information-processing paradigm of Granular Computing enables a stepwise increase of granulation/aggregation of information on different levels of resolution, which makes possible dynamical viewing of data. Information produced by Google Earth is an illustration of visualization based on clustering (granulation) of information on a succession of layers. Depending on the level, specific emergent properties become visible as a result of different ways of aggregating data/information. As information visualization ultimately aims at amplifying cognition, we discuss the process of simulation and emulation in relation to cognition, and in particular visual cognition.
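The stepwise granulation described in this abstract can be made concrete. The following minimal Python sketch (my illustration, not code from the article; the point set and cell sizes are invented for the example) aggregates 2-D points into grid cells whose size grows at each level, so that finer granules merge into coarser ones, in the spirit of Google Earth's layered clustering:

```python
# Illustrative sketch of multiscale granulation: 2-D points are aggregated
# into grid cells, and each coarser level merges granules from the finer one.
from collections import Counter

def granulate(points, cell_size):
    """Aggregate points into square grid cells of the given size.

    Returns a Counter mapping each occupied cell index (ix, iy) to the
    number of points it contains -- one 'granule' per occupied cell.
    """
    return Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in points
    )

# Hypothetical data set: five points in the plane.
points = [(0.2, 0.3), (0.4, 0.1), (1.6, 1.7), (1.8, 1.9), (3.5, 0.2)]

# Stepwise coarsening: doubling the cell size lowers the resolution,
# so neighbouring granules merge and emergent cluster structure appears.
for cell_size in (1, 2, 4):
    granules = granulate(points, cell_size)
    print(f"cell size {cell_size}: {len(granules)} granules")
```

At cell size 1 the five points form three granules; at size 2 they collapse into two; at size 4 a single granule remains, mirroring how a zoomed-out map layer shows one aggregate where a zoomed-in layer shows many.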
The terms ‘verification’ and ‘validation’ are widely used in science, both in the natural and the social sciences. They are extensively used in simulation, often associated with the need to evaluate models in different stages of the simulation development process. Frequently, terminological ambiguities arise when researchers conflate, along the simulation development process, the technical meanings of both terms with other meanings found in the philosophy of science and the social sciences. This article considers the problem of verification and validation in social science simulation along five perspectives: the reasons to address terminological issues in simulation; the meaning of the terms in the philosophical sense of the problem of “truth”; the observation that some debates about these terms in simulation are inadvertently more terminological than epistemological; the meaning of the terms in the technical context of the simulation development process; and finally, a comprehensive outline of the relation between terminology used in simulation, different types of models used in the development process and different epistemological perspectives.
Now that complex agent-based models and computer simulations have spread across economics and the social sciences, as in most sciences of complex systems, epistemological puzzles (re)emerge. We introduce new epistemological tools so as to show to what precise extent each author is right when focusing on some empirical, instrumental or conceptual significance of his model or simulation. By distinguishing between models and simulations, between types of models, between types of computer simulations and between types of empiricity, section 2 gives conceptual tools to explain the rationale of the diverse epistemological positions presented in section 1. Finally, we claim that careful attention to the real multiplicity of denotational powers of the symbols at stake, and then to the implicit routes of reference operated by models and computer simulations, is necessary to determine, in each case, the proper epistemic status and credibility of a given model and/or simulation.
The formal and empirical-generative perspectives of computation are demonstrated to be inadequate to secure the goals of simulation in the social sciences. Simulation does not resemble formal demonstrations or generative mechanisms that deductively explain how certain models are sufficient to generate emergent macrostructures of interest. The description of scientific practice implies additional epistemic conceptions of scientific knowledge. Three kinds of knowledge that account for a comprehensive description of the discipline were identified: formal, empirical and intentional knowledge. The use of formal conceptions of computation for describing simulation is refuted; the roles of programming languages according to intentional accounts of computation are identified; and the roles of iconographic programming languages and aesthetic machines in simulation are characterized. The roles that simulation and intentional decision making may be able to play in a participative information society are also discussed.
In the Western tradition, at least since the 14th century, the philosophy of knowledge has been built around the idea of knowledge as a representation [Boulnois 1999]. The question of the evaluation of knowledge refers at the same time (1) to the object represented (what does one represent?), (2) to the process of knowledge formation, in particular the role of the knowing subject (who represents, and how does one represent it?), and finally (3) to the relationship between the representation and the represented object. Criteria of evaluation such as “validity”, “adequacy” or “truth”, as mentioned in chapter 4, make sense only with respect to these three dimensions. An evaluation can thus (1) depend on the ontological nature of the object of knowledge, (2) relate to the relationship between subject and object, including the structures (cognitive, social) which organize this relationship, or (3) relate to the relation of similarity between the object and its representation. The relevant criteria of evaluation thus depend on the points of view adopted on these questions. As there is indeed a plurality of points of view in this field, the goal of this appendix is to summarize, as briefly as possible, the various positions adopted by philosophers and to refer to the relevant texts of reference for more information. The first section introduces useful discussions about the philosophy of theoretical knowledge and general epistemology, from a quasi-historical perspective. Section two discusses the intermediary but central notion of models. Section three, more exploratory, introduces an approach to simulation as “concrete experiment”. It suggests that this frequent claim in the literature, when precisely evaluated, can, to some extent, renew both the representational and the linguistic views on simulation.
In the Western tradition, at least since the 14th century, the philosophy of knowledge has been built around the idea of knowledge as a representation [BOU 99]. The question of the evaluation of knowledge refers at the same time (1) to the object represented (what does one represent?), (2) to the process of knowledge formation, in particular the role of the knowing subject (who represents, and how does one represent it?), and finally (3) to the relationship between the representation and the represented object. Criteria of evaluation such as “validity”, “adequacy” or “truth”, as mentioned in [AMB 07], make sense only with respect to these three dimensions. An evaluation can thus (1) depend on the ontological nature of the object of knowledge, (2) relate to the relationship between subject and object, including the structures (cognitive, social) which organize this relationship, or (3) relate to the relation of similarity between the object and its representation. The relevant criteria of evaluation thus depend on the points of view adopted on these questions. As there is indeed a plurality of points of view in this field, the goal of this appendix is to summarize, as briefly as possible, the various positions adopted by philosophers and to refer to the relevant texts of reference for more information (for a first outline [CHA 76, SCH 01]). Section 1 introduces useful discussions about the philosophy of theoretical knowledge and general epistemology, from a quasi-historical perspective. Section 2 discusses the intermediary but central notion of models. Section 3, more exploratory, introduces an approach to simulation as “concrete experiment”. It suggests that this frequent claim in the literature, when precisely evaluated, can, to some extent, renew both the representational and the linguistic views on simulation.
The classical theory of computation does not represent an adequate model of reality for simulation in the social sciences. The aim of this paper is to construct a methodological perspective that is able to conciliate the formal and empirical logic of program verification in computer science with the interpretative and multiparadigmatic logic of the social sciences. We attempt to evaluate whether social simulation implies an additional perspective about the way one can understand the concepts of program and computation. We demonstrate that the logic of social simulation implies at least two distinct types of program verification that reflect an epistemological distinction in the kind of knowledge one can have about programs. Computer programs seem to possess a causal capability (Fetzer, 1999) and an intentional capability that scientific theories seem not to possess. This distinction is associated with two types of program verification, which we call empirical and intentional verification. We demonstrate, by this means, that computational phenomena are also intentional phenomena, and that such is particularly manifest in agent-based social simulation. Ascertaining the credibility of results in social simulation requires a focus on the identification of a new category of knowledge we can have about computer programs. This knowledge should be considered an outcome of an experimental exercise, albeit not empirical, acquired within a context of limited consensus. The perspective of intentional computation seems to be the only one possible to reflect the multiparadigmatic character of social science in terms of agent-based computational social science. We contribute, additionally, to the clarification of several questions that are found in the methodological perspectives of the discipline, such as the computational nature, the logic of program scalability, and the multiparadigmatic character of agent-based simulation in the social sciences.
The author suggests the subjugation of physical reality (IT) to a pair of self-supporting virtual realities (BIT from BIT), neither of which exists without the other.