Entropy is ubiquitous in physics, and it plays important roles in numerous other disciplines ranging from logic and statistics to biology and economics. However, a closer look reveals a complicated picture: entropy is defined differently in different contexts, and even within the same domain different notions of entropy are at work. Some of these are defined in terms of probabilities, others are not. The aim of this chapter is to arrive at an understanding of some of the most important notions of entropy and to clarify the relations between them. After setting the stage by introducing the thermodynamic entropy, we discuss notions of entropy in information theory, statistical mechanics, dynamical systems theory and fractal geometry.
The logical basis for information theory is the newly developed logic of partitions that is dual to the usual Boolean logic of subsets. The key concept is a "distinction" of a partition, an ordered pair of elements in distinct blocks of the partition. The logical concept of entropy based on partition logic is the normalized counting measure of the set of distinctions of a partition on a finite set--just as the usual logical notion of probability based on the Boolean logic of subsets is the normalized counting measure of the subsets (events). Thus logical entropy is a measure on the set of ordered pairs, and all the compound notions of entropy (joint entropy, conditional entropy, and mutual information) arise in the usual way from the measure (e.g., the inclusion-exclusion principle)--just like the corresponding notions of probability. The usual Shannon entropy of a partition is developed by replacing the normalized count of distinctions (dits) by the average number of binary partitions (bits) necessary to make all the distinctions of the partition.
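The dit-counting definition above lends itself to direct computation. The following is a minimal sketch (function names and the example partition are ours, not from the paper): logical entropy as the normalized count of distinctions, alongside the Shannon entropy of the same partition.

```python
from math import log2

def logical_entropy(blocks, n):
    # h(pi) = |dit(pi)| / n^2: the fraction of ordered pairs (x, y)
    # whose elements fall in distinct blocks of the partition.
    same_block_pairs = sum(len(b) ** 2 for b in blocks)
    return (n * n - same_block_pairs) / (n * n)

def shannon_entropy(blocks, n):
    # H(pi) = sum over blocks B of p_B * log2(1/p_B), with p_B = |B|/n.
    return sum((len(b) / n) * log2(n / len(b)) for b in blocks)

# Partition of {0, 1, 2, 3} into two blocks of two elements each.
blocks = [{0, 1}, {2, 3}]
print(logical_entropy(blocks, 4))  # 0.5: half of all ordered pairs are dits
print(shannon_entropy(blocks, 4))  # 1.0: one binary partition makes all distinctions
```

On the blob partition (a single block), both quantities vanish, since no pair of elements is distinguished.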
The notion of a partition on a set is mathematically dual to the notion of a subset of a set, so there is a logic of partitions dual to Boole's logic of subsets (Boolean logic is usually mis-specified as "propositional" logic). The notion of an element of a subset has as its dual the notion of a distinction of a partition (a pair of elements in different blocks). Boole developed finite logical probability as the normalized counting measure on elements of subsets, so there is a dual concept of logical entropy which is the normalized counting measure on distinctions of partitions. Thus the logical notion of information is a measure of distinctions. Classical logical entropy naturally extends to the notion of quantum logical entropy which provides a more natural and informative alternative to the usual von Neumann entropy in quantum information theory. The quantum logical entropy of a post-measurement density matrix has the simple interpretation as the probability that two independent measurements of the same state using the same observable will have different results. The main result of the paper is that the increase in quantum logical entropy due to a projective measurement of a pure state is the sum of the absolute squares of the off-diagonal entries ("coherences") of the pure state density matrix that are zeroed ("decohered") by the measurement, i.e., the measure of the distinctions ("decoherences") created by the measurement.
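For a density matrix ρ, the quantum logical entropy takes the form h(ρ) = 1 − tr(ρ²). A minimal numerical sketch of the main result described above, using an illustrative qubit state of our own choosing (not an example from the paper):

```python
import numpy as np

def quantum_logical_entropy(rho):
    # h(rho) = 1 - tr(rho^2): for a post-measurement density matrix, the
    # probability that two independent measurements of the same state with
    # the same observable yield different results.
    return 1.0 - np.trace(rho @ rho).real

# Pure state (|0> + |1>)/sqrt(2): its density matrix has
# off-diagonal "coherences" equal to 1/2.
pure = np.array([[0.5, 0.5], [0.5, 0.5]])
# A projective measurement in the computational basis zeroes
# the off-diagonal entries ("decoherence").
post = np.array([[0.5, 0.0], [0.0, 0.5]])

print(quantum_logical_entropy(pure))  # 0.0: pure states have zero logical entropy
print(quantum_logical_entropy(post))  # 0.5
# The increase, 0.5, equals the sum of absolute squares of the two
# zeroed off-diagonal entries: 2 * |0.5|^2 = 0.5.
```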
The physical singularity of life phenomena is analyzed by means of comparison with the driving concepts of theories of the inert. We outline conceptual analogies, transferals of methodologies and theoretical instruments between physics and biology, in addition to indicating significant differences and sometimes logical dualities. In order to make biological phenomenalities intelligible, we introduce theoretical extensions to certain physical theories. In this synthetic paper, we summarize and propose a unified conceptual framework for the main conclusions drawn from work spanning a book and several articles, quoted throughout.
This study has demonstrated that entropy is not a physical quantity, that is, the physical quantity called entropy does not exist. If the efficiency of a heat engine is defined as η = W/W1, and the reversible cycle is considered to be the Stirling cycle, then, given ∮dQ/T = 0, we can prove ∮dW/T = 0 and ∮dE/T = 0. If ∮dQ/T = 0, ∮dW/T = 0 and ∮dE/T = 0 are thought to define new system state variables, such definitions would be absurd. The fundamental error of entropy is that in any reversible process, the polytropic process function Q is not a single-valued function of T, and the key step from Σ[(ΔQ)/T] to ∫dQ/T does not hold; the P-V diagram should be a P-V-T diagram in thermodynamics. Similarly, ∮dQ/T = 0, ∮dW/T = 0 and ∮dE/T = 0 do not hold either. Since the absolute entropy of Boltzmann is used to explain Clausius entropy and the unit (J/K) of the former is transformed from the latter, the non-existence of Clausius entropy simultaneously denies Boltzmann entropy.
Metaphors establish connection. Root metaphors--patterns of relational imagery in the language and thought of a culture, in which a diverse group of tenors are related to a single identifiable class of vehicles--play an important role in organizing our thought, and in bringing a coherence to our vision of the world. This is a political function; root metaphors, as philosopher Stephen Pepper discusses them, are most often found in the works of philosophers remembered as political philosophers. The second law of thermodynamics--the law of entropy--holds that in any spontaneous process, usable energy becomes unusable energy. It also suggests that improbable order must succumb, through time, to more probable chaos. The law of entropy has enjoyed a popularity as metaphor unusual for such physics esoterica. In the works of Brooks Adams, Henry Adams, Nicholas Georgescu-Roegen, and Thomas Pynchon, the idea of entropy appears as the fundamental, organizing idea for an economic interpretation of history, a philosophy of history, an ecologically enlightened economic theory, and an encyclopedic novel that apotheosizes modern culture. Analysis of how the entropy metaphor is manifest in the works of these thinkers allows us to judge the strengths and weaknesses of entropy as root metaphor. Analysis of its contemporary popularity affords insight into the politics of the day. Ultimately, the entropy root metaphor serves as the foundation of a refurbished "generating substance" world hypothesis, but the root metaphor itself remains equivocal on the important issue of centralized versus decentralized political organization.
I assess the thesis that counterfactual asymmetries are explained by an asymmetry of the global entropy at the temporal boundaries of the universe, by developing a method of evaluating counterfactuals that includes, as a background assumption, the low entropy of the early universe. The resulting theory attempts to vindicate the common practice of holding the past mostly fixed under counterfactual supposition while at the same time allowing the counterfactual's antecedent to obtain by a natural physical development. Although the theory has some success in evaluating a wide variety of ordinary counterfactuals, it fails as an explanation of counterfactual asymmetry.
In previous work, we studied four well known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems.
The Kolmogorov-Sinai entropy is a fairly exotic mathematical concept which has recently aroused some interest on the philosophers’ part. The most salient trait of this concept is its working as a junction between such diverse ambits as statistical mechanics, information theory and algorithm theory. In this paper I argue that, in order to understand this very special feature of the Kolmogorov-Sinai entropy, it is essential to reconstruct its genealogy. Somewhat surprisingly, this story takes us as far back as the beginning of celestial mechanics and through some of the most exciting developments of mathematical physics of the 19th century.
“There’s Plenty of Room at the Bottom”, said the title of Richard Feynman’s seminal 1959 lecture at the California Institute of Technology. Fifty years on, nanotechnologies have led computer scientists to pay close attention to the links between physical reality and information processing. Not all the physical requirements of optimal computation are captured by traditional models—one still largely missing is reversibility. The dynamical laws of physics are reversible at the microphysical level: distinct initial states of a system lead to distinct final states. On the other hand, as von Neumann already conjectured, irreversible information processing is expensive: to erase a single bit of information costs ~3 × 10−21 joules at room temperature. Information entropy is a thermodynamic cost, to be paid in non-computational energy dissipation. This paper addresses the problem drawing on Edward Fredkin’s Finite Nature hypothesis: the ultimate nature of the universe is discrete and finite, satisfying the axioms of classical, atomistic mereology. The chosen model is a cellular automaton with reversible dynamics, capable of retaining memory of the information present at the beginning of the universe. Such a CA can implement the Boolean logical operations and the other building blocks of computation: it can develop and host all-purpose computers. The model is a candidate for the realization of computational systems, capable of exploiting the resources of the physical world in an efficient way, for they can host logical circuits with negligible internal energy dissipation.
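The erasure cost quoted above is Landauer's bound, k_B·T·ln 2 per bit. A quick check of the cited figure, taking 300 K as a conventional room temperature (our choice of value):

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T_ROOM = 300.0      # a conventional room temperature, in kelvin

# Landauer's bound: minimum heat dissipated when one bit is erased.
e_bit = K_B * T_ROOM * log(2)
print(f"{e_bit:.2e} J per bit")  # 2.87e-21 J, i.e. the ~3 x 10^-21 J cited above
```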
From the epistemological posture that we present in this work we defend the following theses: that as subjects we constitute the world we live in through one of the possible conceptual frameworks; and that our cognitive and social practices construct the world in a certain manner, which makes us responsible for the way this world is constituted.
Introduction & Objectives: Norwich’s Entropy Theory of Perception (1975 [1]–present) stands alone. It explains many firing-rate behaviors and psychophysical laws from bare theory. To do so, it demands a unique sort of interaction between receptor and brain, one that Norwich never substantiated. Can it now be confirmed, given the accumulation of empirical sensory neuroscience? Background: Norwich conjoined sensation and a mathematical model of communication, Shannon’s Information Theory, as follows: “In the entropic view of sensation, magnitude of sensation is regarded as a measure of the entropy or uncertainty of the stimulus signal” [2]. “To be uncertain about the outcome of an event, one must first be aware of a set of alternative outcomes” [3]. “The entropy-establishing process begins with the generation of a [internal] sensory signal by the stimulus generator. This is followed by receipt of the [external] stimulus by the sensory receptor, transmission of action potentials by the sensory neurons, and finally recapture of the [response to the internal] signal by the generator” [4]. The latter “recapture” differentiates external from internal stimuli. The hypothetical “stimulus generators” are internal emitters that generate photons in vision, audible sounds in audition (to Norwich, the spontaneous otoacoustic emissions [SOAEs]), “temperatures in excess of local skin temperature” in skin temperature sensation [4], etc. Method (1): Several decades of empirical sensory physiology literature was scrutinized for internal “stimulus generators”. Results (1): Spontaneous photopigment isomerization (“dark light”) does not involve visible light. SOAEs are electromechanical basilar-membrane artefacts that rarely produce audible tones. The skin’s temperature sensors do not raise skin temperature, etc. Method (2): The putative action of the brain-and-sensory-receptor loop was carefully reexamined.
Results (2): The sensory receptor allegedly “perceives”, experiences “awareness”, possesses “memory”, and has a “mind”. But those traits describe the whole human. The receptor, thus anthropomorphized, must therefore contain its own perceptual loop, containing a receptor, containing a perceptual loop, etc. Summary & Conclusions: The Entropy Theory demands sensory awareness of alternatives, through an imagined brain-and-sensory-receptor loop containing internal “stimulus generators”. But (1) no internal “stimulus generators” seem to exist and (2) the loop would be the outermost of an infinite nesting of identical loops.
This paper strengthens and defends the pluralistic implications of Einstein's successful, quantitative predictions of Brownian motion for a philosophical dispute about the nature of scientific advance that began between two prominent philosophers of science in the second half of the twentieth century (Thomas Kuhn and Paul Feyerabend). Kuhn promoted a monistic phase-model of scientific advance, according to which a paradigm-driven `normal science' gives rise to its own anomalies, which then lead to a crisis and eventually a scientific revolution. Feyerabend stressed the importance of pluralism for scientific progress. He rejected Kuhn's model, arguing that it fails to recognize the role that alternative theories can play in identifying exactly which phenomena are anomalous in the first place. On Feyerabend's account, Einstein's predictions allow for a crucial experiment between two incommensurable theories, and are an example of an anomaly that could refute the reigning paradigm only after the development of a competitor. Using Kuhn's specification of a disciplinary matrix to illustrate the incommensurability between the two paradigms, we examine the different research strategies available in this peculiar case. On the basis of our reconstruction, we conclude by rebutting some critics of Feyerabend's argument.
We propose here to clarify some of the relations existing between information and meaning by showing how meaningful information can be generated by a system subjected to a constraint. We build up definitions and properties for meaningful information, a meaning generator system, and the domain of efficiency of a meaning (to cover cases of meaningful information transmission). Basic notions of information processing are used.
Purpose – The purpose of this paper is to ask whether a first-order-cybernetics concept, Shannon’s Information Theory, actually allows a far-reaching mathematics of perception allegedly derived from it, Norwich et al.’s “Entropy Theory of Perception”. Design/methodology/approach – All of The Entropy Theory, 35 years of publications, was scrutinized for its characterization of what underlies Shannon Information Theory: Shannon’s “general communication system”. There, “events” are passed by a “source” to a “transmitter”, thence through a “noisy channel” to a “receiver” that passes “outcomes” (received events) to a “destination”. Findings – In the entropy theory, “events” were sometimes interactions with the stimulus, but could be microscopic stimulus conditions. “Outcomes” often went unnamed; sometimes, the stimulus, or the interaction with it, or the resulting sensation, were “outcomes”. A “source” was often implied to be a “transmitter”, which frequently was a primary afferent neuron; elsewhere, the stimulus was the “transmitter” and perhaps also the “source”. “Channel” was rarely named; once, it was the whole eye; once, the incident photons; elsewhere, the primary or secondary afferent. “Receiver” was usually the sensory receptor, but could be an afferent. “Destination” went unmentioned. In sum, the entropy theory’s idea of Shannon’s “general communication system” was entirely ambiguous. Research limitations/implications – The ambiguities indicate that, contrary to claim, the entropy theory cannot be an “information theoretical description of the process of perception”. Originality/value – Scrutiny of the entropy theory’s use of information theory was overdue and reveals incompatibilities that force a reconsideration of information theory’s possible role in perception models. A second-order-cybernetics approach is suggested.
This paper reveals errors within Norwich et al.’s Entropy Theory of Perception, errors that have broad implications for our understanding of perception. What Norwich and coauthors dubbed their “informational theory of neural coding” is based on cybernetics, that is, control and communication in man and machine. The Entropy Theory uses information theory to interpret human performance in absolute judgments. There, the continuum of the intensity of a sensory stimulus is cut into categories and the subject is shown exemplar stimuli of each category. The subject must then identify individual exemplars by category. The identifications are recorded in the Garner-Hake version of the Shannon “confusion matrix”. The matrix yields “H”, the entropy (degree of uncertainty) about what stimulus was presented. Hypothetically, uncertainty drops as a stimulus lengthens, i.e. a plot of H vs. stimulus duration should fall monotonically. Such “adaptation” is known for both sensation and firing rate. Hence, because “the physiological adaptation curve has the same general shape as the psychophysical adaptation curve”, Norwich et al. assumed that both have the same time course; sensation and firing rate were thus both declared proportional to H. However, a closer look reveals insurmountable contradictions. First, the peripheral neuron hypothetically cannot fire in response to a stimulus of a given intensity until after somehow computing H from its responses to stimuli of various intensities. Thus no sensation occurs until firing rate adapts, i.e. attains its spontaneous rate. But hypothetically, once adaptation is complete, certainty is reached and perception ends. Altogether, then, perception cannot occur until perception is over. Secondly, sensations, firing rates, and H’s are empirically not synchronous, contrary to assumption.
In sum, the core concept of the cybernetics-based Entropy Theory of Perception, that is, that uncertainty reduction is the basis for perception, is irrational.
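For concreteness, the quantity the Garner-Hake confusion matrix yields can be sketched as the Shannon information transmitted from stimulus categories to identification responses, T = H(stimulus) + H(response) − H(joint). This is a generic illustration of that computation, not code from the paper; the function name and toy matrix are ours.

```python
from math import log2

def transmitted_information(counts):
    # counts[i][j]: number of trials on which stimulus category i
    # drew identification response j (an absolute-judgment experiment).
    # Garner-Hake estimate: T = H(stimulus) + H(response) - H(joint).
    n = sum(sum(row) for row in counts)
    def H(ps):
        return -sum(p * log2(p) for p in ps if p > 0)
    joint = [c / n for row in counts for c in row]
    stim = [sum(row) / n for row in counts]
    resp = [sum(col) / n for col in zip(*counts)]
    return H(stim) + H(resp) - H(joint)

# Perfect identification of 4 equiprobable categories transmits 2 bits.
perfect = [[5, 0, 0, 0], [0, 5, 0, 0], [0, 0, 5, 0], [0, 0, 0, 5]]
print(transmitted_information(perfect))  # 2.0
```

When the subject confuses categories, off-diagonal counts grow and T falls toward zero, which is the uncertainty-reduction quantity at stake in the theory under review.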
One finds, in Maxwell's writings on thermodynamics and statistical physics, a conception of the nature of these subjects that differs in interesting ways from the way that they are usually conceived. In particular, though—in agreement with the currently accepted view—Maxwell maintains that the second law of thermodynamics, as originally conceived, cannot be strictly true, the replacement he proposes is different from the version accepted by most physicists today. The modification of the second law accepted by most physicists is a probabilistic one: although statistical fluctuations will result in occasional spontaneous differences in temperature or pressure, there is no way to predictably and reliably harness these to produce large violations of the original version of the second law. Maxwell advocates a version of the second law that is strictly weaker; even this probabilistic version is of limited scope, holding only for situations in which we are dealing with large numbers of molecules en masse and have no ability to manipulate individual molecules. Connected with this is his conception of the thermodynamic concepts of heat, work, and entropy: on the Maxwellian view, these are concepts that must be relativized to the means we have available for gathering information about and manipulating physical systems. The Maxwellian view is one that deserves serious consideration in discussions of the foundations of statistical mechanics. It has relevance for the project of recovering thermodynamics from statistical mechanics because, in such a project, it matters which version of the second law we are trying to recover.
In this text, we revisit part of the analysis of anti-entropy in Bailly and Longo (2009) and develop further theoretical reflections. In particular, we analyze how randomness, an essential component of biological variability, is associated with the growth of biological organization, both in ontogenesis and in evolution. This approach focuses on the role of global entropy production and provides a tool for a mathematical understanding of some fundamental observations by Gould on the increasing phenotypic complexity over the course of evolution. Lastly, we analyze the situation in terms of theoretical symmetries, in order to further specify the biological meaning of anti-entropy as well as its strong link with randomness.
The concept of time is examined using the second law of thermodynamics, which was recently formulated as an equation of motion. According to the statistical notion of increasing entropy, flows of energy diminish differences between energy densities that form space. The flow of energy is identified with the flow of time. The non-Euclidean energy landscape, i.e. the curved space–time, is in evolution when energy is flowing down along gradients and levelling the density differences. The flows along the steepest descents, i.e. geodesics, are obtained from the principle of least action for mechanics, electrodynamics and quantum mechanics. The arrow of time, associated with the expansion of the Universe, is identified with the grand dispersal of energy, when high-energy densities transform by various mechanisms to lower densities in energy and eventually to ever-diluting electromagnetic radiation. Likewise, time in a quantum system takes an increment forwards in the detection-associated dissipative transformation, when the stationary-state system begins to evolve, pictured as the wave-function collapse. The energy dispersal is understood to underlie causality, so that an energy gradient is a cause and the resulting energy flow is an effect. This account of causality in terms of physical concepts does not imply determinism; on the contrary, the evolution of space–time as a causal chain of events is non-deterministic.
This paper reviews the complex, overlapping ideas of two prominent Italian philosophers, Lorenzo Magnani and Luciano Floridi, with the aim of facilitating the nonviolent transformation of self and world, and with a focus on information technologies in mediating this process. In Floridi’s information ethics, problems of consistency arise between self-poiesis, anagnorisis, entropy, evil, and the narrative structure of the world. Solutions come from Magnani’s work in distributed morality, moral mediators, moral bubbles and moral disengagement. Finally, two examples of information technology, one ancient and one new, a Socratic narrative and an information processing model of moral cognition, are offered as mediators for the nonviolent transformation of self and world respectively, while avoiding the tragic requirements inherent in Floridi’s proposal.
The central motivating idea behind the development of this work is the concept of prespace, a hypothetical structure that is postulated by some physicists to underlie the fabric of space or space-time. I consider how such a structure could relate to space and space-time, and the rest of reality as we know it, and the implications of the existence of this structure for quantum theory. Understanding how this structure could relate to space and to the rest of reality requires, I believe, that we consider how space itself relates to reality, and how other so-called "spaces" used in physics relate to reality. In chapter 2, I compare space and space-time to other spaces used in physics, such as configuration space, phase space and Hilbert space. I support what is known as the "property view" of space, opposing both the traditional views of space and space-time, substantivalism and relationism. I argue that all these spaces are property spaces. After examining the relationships of these spaces to causality, I argue that configuration space has, due to its role in quantum mechanics, a special status in the microscopic world similar to the status of position space in the macroscopic world. In chapter 3, prespace itself is considered. One way of approaching this structure is through the comparison of the prespace structure with a computational system, in particular to a cellular automaton, in which space or space-time and all other physical quantities are broken down into discrete units. I suggest that one way open for a prespace metaphysics can be found if physics is made fully discrete in this way. I suggest as a heuristic principle that the physical laws of our world are such that the computational cost of implementing those laws on an arbitrary computational system is minimized, adapting a heuristic principle of this type proposed by Feynman.
In chapter 4, some of the ideas of the previous chapters are applied in an examination of the physics and metaphysics of quantum theory. I first discuss the "measurement problem" of quantum mechanics: this problem and its proposed solution are the primary subjects of chapter 4. It turns out that considering how quantum theory could be made fully discrete leads naturally to a suggestion of how standard linear quantum mechanics could be modified to give rise to a solution to the measurement problem. The computational heuristic principle reinforces the same solution. I call the modified quantum mechanics Critical Complexity Quantum Mechanics (CCQM). I compare CCQM with some of the other proposed solutions to the measurement problem, in particular the spontaneous localization model of Ghirardi, Rimini and Weber. Finally, in chapters 5 and 6, I argue that the measure of complexity of quantum mechanical states I introduce in CCQM also provides a new definition of entropy for quantum mechanics, and suggests a solution to the problem of providing an objective foundation for statistical mechanics, thermodynamics, and the arrow of time.
How can McTaggart's A-series notion of time be incorporated into physics while retaining the B-series notion? It may be that the A-series 'now' can be construed as ontologically private. How is that modeled? Could a definition of a combined AB-series entropy help with the Past Hypothesis problem? What if the increase in entropy as a system goes from earlier times to later times is canceled by the decrease in entropy as a system goes from future, to present, to past?
Development has been the main strategy for addressing the problem of sustainability since at least the mid-1980s. The results of this strategy have been mixed, if not disappointing. In their objections to this approach, critics frequently invoke constraints imposed by physical reality, of which the most important is entropy production. They question the belief that technological innovations are capable of solving the problem of sustainability. Is development the right response to this problem, and is the current course capable of attaining sustainability? The article closely examines and critiques the principal theoretical objection to sustainable development that emphasizes physical constraints, and more specifically entropy production. It also offers a critique of the current approach to sustainable development. The article advocates a systems approach as a way to anchor a broad consensus in the ongoing sustainability debates.
In a preceding publication a fundamentally oriented and irreversible world was shown to be derivable from the important principle of least action. A consequence of such a paradigm change is avoidance of paradoxes within a “dynamic” quantum physics. This becomes essentially possible because fundamental irreversibility allows consideration of the “entropy” concept in elementary processes. For this reason, and for a compensation of entropy in the spread-out energy of the wave, the duality of particle and wave has to be mediated via an information self-image of matter. In this publication considerations are extended to irreversible thermodynamics, to gravitation and cosmology with its dependence on quantum interpretations. The information self-image of matter around particles could be identified with gravitation. Because information can also impose an always constant light velocity, there is no need any more to attribute such a property to empty space, as done in relativity theory. In addition, the possibility is recognized to consider entropy generation by expanding photon fields in the universe. Via a continuous activation of information on matter, photons can generate entropy and release small energy packages without interacting with matter. This facilitates a new interpretation of galactic redshift, emphasizes an information link between quantum and cosmological phenomena, and evidences an information-triggered origin of the universe. Self-organized processes approach maximum entropy production within their constraints. In a far-from-equilibrium world also information, with its energy content, can self-organize to a higher hierarchy of computation. It is here identified with consciousness. This appears to explain evolution of spirit and intelligence on a materialistic basis.
Also gravitation, here identified as information on matter, could, under special conditions, self-organize to act as a super-gravitation, offering an alternative to dark matter. Time is not an illusion, but has to be understood as flux of action, which is the ultimate reality of change. The concept of an irreversible physical world opens a route towards a rational understanding of complex contexts in nature.
We refer to the remarkable thought of Erwin Schrödinger expressed in his book “What is Life?” regarding the connection between life and a decrease of entropy realized via feeding (eating). This thought is “transferred” into the field of human psychology, explaining hooligan behavior (e.g. the “days of violence”) as a natural human response to improper (in its content or form) “informational feeding” that does not allow one to normally treat (“digest”) the received information, i.e. to make one's thoughts simpler in their logical structure. Delivering information without pedagogical, psychological, and (when children are involved) neurological supervision or assistance is supposed to be the cause of the hooliganism that more and more often takes a dangerous organized form.
This paper considers questions about continuity and discontinuity between life and mind. It begins by examining such questions from the perspective of the free energy principle (FEP). The FEP is becoming increasingly influential in neuroscience and cognitive science. It says that organisms act to maintain themselves in their expected biological and cognitive states, and that they can do so only by minimizing their free energy given that the long-term average of free energy is entropy. The paper then argues that there is no singular interpretation of the FEP for thinking about the relation between life and mind. Some FEP formulations express what we call an independence view of life and mind. One independence view is a cognitivist view of the FEP. It turns on information processing with semantic content, thus restricting the range of systems capable of exhibiting mentality. Other independence views exemplify what we call an overly generous non-cognitivist view of the FEP, and these appear to go in the opposite direction. That is, they imply that mentality is nearly everywhere. The paper proceeds to argue that non-cognitivist FEP, and its implications for thinking about the relation between life and mind, can be usefully constrained by key ideas in recent enactive approaches to cognitive science. We conclude that the most compelling account of the relationship between life and mind treats them as strongly continuous, and that this continuity is based on particular concepts of life (autopoiesis and adaptivity) and mind (basic and non-semantic).
Many believe that a suitably programmed computer could act for its own goals and experience feelings. I challenge this view and argue that agency, mental causation and qualia are all founded in the unique, homeostatic nature of living matter. The theory was formulated for coherence with the concept of an agent, neuroscientific data and laws of physics. By this method, I infer that a successful action is homeostatic for its agent and can be caused by a feeling, which does not motivate as a force, but as a control signal. From brain research and the locality principle of physics, I surmise that qualia are a fundamental, biological form of energy generated in specialized neurons. Subjectivity is explained as thermodynamically necessary on the supposition that, by converting action potentials to feelings, the neural cells avert damage from the electrochemical pulses. In exchange for this entropic benefit, phenomenal energy is spent as and where it is produced, which precludes the objective observation of qualia.
This article provides an answer to the question: What is the function of cognition? By answering this question it becomes possible to investigate what are the simplest cognitive systems. It addresses the question by treating cognition as a solution to a design problem. It defines a nested sequence of design problems: (1) How can a system persist? (2) How can a system affect its environment to improve its persistence? (3) How can a system utilize better information from the environment to select better actions? And, (4) How can a system reduce its inherent informational limitations to achieve more successful behavior? This provides a corresponding nested sequence of system classes: (1) autonomous systems, (2) (re)active autonomous systems, (3) informationally controlled autonomous systems (autonomous agents), and (4) cognitive systems.

This article provides the following characterization of cognition: The cognitive system is the set of mechanisms of an autonomous agent that (1) allow increase of the correlation and integration between the environment and the information system of the agent, so that (2) the agent can improve the selection of actions and thereby produce more successful behavior.

Finally, it shows that common cognitive capacities satisfy the characterization: learning, memory, representation, decision making, reasoning, attention, and communication.
Statistical physics cannot explain why a thermodynamic arrow of time exists, unless one postulates very special and unnatural initial conditions. Yet, we argue that statistical physics can explain why the thermodynamic arrow of time is universal, i.e., why the arrow points in the same direction everywhere. Namely, if two subsystems have opposite arrow-directions at a particular time, the interaction between them makes the configuration statistically unstable and causes a decay towards a system with a universal direction of the arrow of time. We present general qualitative arguments for that claim and support them by a detailed analysis of a toy model based on the baker’s map.
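The baker's map lends itself to a quick numerical illustration of entropy growth. The sketch below is a minimal toy, not the paper's two-subsystem model; the bin count, point count, and initial cell are illustrative assumptions. A cloud of points starts in one small region of the unit square (a low-entropy macrostate) and a coarse-grained entropy is tracked as the map is iterated.

```python
import math
import random

def bakers_map(x, y):
    """One step of the baker's map on the unit square."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

def coarse_entropy(points, bins=4):
    """Boltzmann-style entropy of a point cloud under a bins x bins
    coarse-graining of the unit square."""
    counts = {}
    for x, y in points:
        cell = (min(int(x * bins), bins - 1), min(int(y * bins), bins - 1))
        counts[cell] = counts.get(cell, 0) + 1
    n = len(points)
    return -sum(c / n * math.log(c / n) for c in counts.values())

random.seed(0)
# Low-entropy initial condition: all points in one coarse-graining cell.
pts = [(random.uniform(0, 0.25), random.uniform(0, 0.25)) for _ in range(2000)]
for step in range(7):
    print(step, round(coarse_entropy(pts), 3))
    pts = [bakers_map(x, y) for x, y in pts]
```

The printed entropy starts at zero and climbs toward the uniform maximum log(16) as the map stretches and folds the initial cell across the square.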
Synthetic biology aims at reconstructing life to put to the test the limits of our understanding. It is based on premises similar to those which permitted the invention of computers, where a machine, which reproduces over time, runs a program, which replicates. The underlying heuristics explored here is that an authentic category of reality, information, must be coupled with the standard categories (matter, energy, space and time) to account for what life is. The use of this still elusive category permits us to interact with reality via construction of self-consistent models producing predictions which can be instantiated into experiments. While the present theory of information has much to say about the program, with the creative properties of recursivity at its heart, we almost entirely lack a theory of the information supporting the machine. We suggest that the program of life codes for processes meant to trap information which comes from the context provided by the environment of the machine.
Automated reasoning about uncertain knowledge has many applications. One difficulty when developing such systems is the lack of a completely satisfactory integration of logic and probability. We address this problem directly. Expressive languages like higher-order logic are ideally suited for representing and reasoning about structured knowledge. Uncertain knowledge can be modeled by using graded probabilities rather than binary truth-values. The main technical problem studied in this paper is the following: Given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list, among others, is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We translate this wish-list into technical requirements for a prior probability and show that probabilities satisfying all our criteria exist. We also give explicit constructions and several general characterizations of probabilities that satisfy some or all of the criteria and various (counter) examples. We also derive necessary and sufficient conditions for extending beliefs about finitely many sentences to suitable probabilities over all sentences, and in particular least dogmatic or least biased ones. We conclude with a brief outlook on how the developed theory might be used and approximated in autonomous reasoning agents. Our theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic.
Why is our knowledge of the past so much more ‘expansive’ (to pick a suitably vague term) than our knowledge of the future, and what is the best way to capture the difference(s) (i.e., in what sense is knowledge of the past more ‘expansive’)? One could reasonably approach these questions by giving necessary conditions for different kinds of knowledge, and showing how some were satisfied by certain propositions about the past, and not by corresponding propositions about the future. I take it that such is the approach of Chapter 6 of Time and Chance (T&C). Here is another such proposal, similar to, but significantly different from, that of T&C; my purpose in this section is to highlight the differences, by showing how this account fails.
Losses in channel flows are usually determined using a frictional head loss parameter. Fluid friction is, however, not the only source of loss in channel flows with heat transfer. For such flow problems, thermal energy degradation, in addition to mechanical energy degradation, adds to the total loss in thermodynamic head. To assess the total loss in a channel with combined convection and radiation heat transfer, the conventional frictional head loss parameter is extended in this study. The analysis is applied to a 3D turbulent channel flow and identifies the critical locations in the flow domain where the losses are concentrated. The influence of the Boltzmann number is discussed, and the best channel geometry for flows with combined heat transfer modes is also determined.
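For orientation, the conventional frictional head loss parameter referred to here is typically the Darcy–Weisbach relation. The sketch below computes it for illustrative values; the friction factor, channel dimensions, and velocity are assumptions for the example, not data from the study.

```python
def darcy_head_loss(f, length, d_h, velocity, g=9.81):
    """Conventional frictional head loss (Darcy-Weisbach):
    h_f = f * (L / D_h) * V^2 / (2 g), in metres of head."""
    return f * (length / d_h) * velocity ** 2 / (2 * g)

# Illustrative: a 10 m channel, hydraulic diameter 0.1 m,
# mean velocity 2 m/s, Darcy friction factor 0.02.
print(round(darcy_head_loss(0.02, 10.0, 0.1, 2.0), 4))
```

The study's contribution is to extend such a purely mechanical parameter so that thermal energy degradation from convection and radiation is also counted in the total loss of thermodynamic head.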
We review some of the main implications of the free-energy principle (FEP) for the study of the self-organization of living systems – and how the FEP can help us to understand (and model) biotic self-organization across the many temporal and spatial scales over which life exists. In order to maintain its integrity as a bounded system, any biological system - from single cells to complex organisms and societies - has to limit the disorder or dispersion (i.e., the long-run entropy) of its constituent states. We review how this can be achieved by living systems that minimize their variational free energy. Variational free energy is an information theoretic construct, originally introduced into theoretical neuroscience and biology to explain perception, action, and learning. It has since been extended to explain the evolution, development, form, and function of entire organisms, providing a principled model of biotic self-organization and autopoiesis. It has provided insights into biological systems across spatiotemporal scales, ranging from microscales (e.g., sub- and multicellular dynamics), to intermediate scales (e.g., groups of interacting animals and culture), through to macroscale phenomena (the evolution of entire species). A crucial corollary of the FEP is that an organism just is (i.e., embodies or entails) an implicit model of its environment. As such, organisms come to embody causal relationships of their ecological niche, which, in turn, is influenced by their resulting behaviors. Crucially, free-energy minimization can be shown to be equivalent to the maximization of Bayesian model evidence. This allows us to cast natural selection in terms of Bayesian model selection, providing a robust theoretical account of how organisms come to match or accommodate the spatiotemporal complexity of their surrounding niche.
In line with the theme of this volume, namely biological complexity and self-organization, this chapter will examine a variational approach to self-organization across multiple dynamical scales.
By drawing on the philosophy of Bernard Stiegler, the phenomenon of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers equivocally describe as “artificial stupidity”, which has been identified as a new direction in the future of computer science and machine problem solving, as well as a new difficulty to be overcome. I weave together a web of “artificial stupidity”, which denotes the mechanic (1), the human (2), or the global (3). With regard to machine intelligence, artificial stupidity refers to: 1a) weak A.I., or a rhetorical inversion designating contemporary practices of narrow task-based procedures by algorithms in opposition to “True A.I.”; 1b) the restriction or employment of constraints that weaken the effectiveness of A.I., which is to say a “dumbing-down” of A.I. by programmers intentionally introducing mistakes for safety and human-interaction purposes; 1c) the failure of machines to perform designated tasks; 1d) a lack of a noetic capacity, which is a lack of moral and ethical discretion; 1e) a lack of causal reasoning (true intelligence) as opposed to statistical associative “curve fitting”; or 2) the phenomenon of increasing human “stupidity” or drive-based behaviors, considered as the degradation of human intelligence and/or “intelligent human behavior” through technics; and finally, 3) the global phenomenon of increasing entropy due to a black-box economy of closed systems and/or industry consolidation.
The development of technology is unbelievably rapid. From limited local networks to high-speed Internet, from crude computing machines to powerful semiconductors, the world has changed drastically compared to just a few decades ago. In the constantly renewing process of adapting to such an unnaturally high-entropy setting, innovations, as well as entirely new concepts, were often born. In the business world, one such phenomenon was the creation of a new type of entrepreneurship. This paper proposes a new academic discipline of computational entrepreneurship, which centers on: (i) exponentially growing (and less expensive) computing power, to the extent that almost everybody in a modern society can own and use it; (ii) omnipresent high-speed Internet connectivity, wired or wireless, representing our modern day’s economic connectomics; (iii) growing concern of exploiting “serendipity” for a strategic commercial advantage; and (iv) growing capabilities of lay people in performing calculations for their informed decisions in taking fast-moving entrepreneurial opportunities. Computational entrepreneurship has slowly become a new mode of operation for business ventures and will likely bring the academic discipline of entrepreneurship back to mainstream economics.
While drawing from the philosophy of Bernard Stiegler throughout the paper, I commence by highlighting Zoltan Istvan’s representation of transhumanism in the light of its role in politics. I continue by elaborating on the notion of the promise of eternal life. After that I differentiate between subjects that are proper for philosophy (such as the mind or whether life is worth living) and science (measurable and replicable). The arguments mostly concern mind-uploading and at the same time I elaborate on a simple critique of mind-body dualism, which is one of the key imagined orders exploitable by technologies in the narratives of transhumanism present in popular culture. This is reframed as a problem of action. The focus of this article is on the claim that certain transhumanisms are dangerous forms of Neo-Darwinism. It comes from a critical assessment of capital and the exploitation of bodies through market forces. Entropy is a process of growing disorder, while neganthropy is an anthropological struggle against exploitation, not only of bodies, but of all ecosystems of the Earth. The arguments of Stiegler from a collection of lectures are recapitulated, and his claims are presented through the prism of transhuman narrative, with a particular focus on Christian Salmon's position in the book Storytelling: Bewitching the Modern Mind.
Satisfaction or contentment is deficient in our intelligent world, for entropy is at its prodigality, accompanying the egoistic human mind. Lesser beings are content with what is provided and seem more beholden for being created, unlike the selfish, unsatisfied human, whose desire to gain has no limit, leaving the body unsatisfied and its own soul and existence, and those of others, to deteriorate. The cause of entropy is human intelligence and the falsified superiority of human consciousness, which leaves the body unsatisfied and the soul to writhe; for it creates a resistance in the flow of consciousness that makes the human lose the value of life and all that resides within. Egoism enhances superiority and promotes an indestructible feeling; it denies the consciousness that flows within and across the system; a loss of reverence for the cosmic bridge of consciousness that links the body, mind and soul to the universe. Satisfaction can be attained when one clearly differentiates the subjectivity from the objectivity of consciousness; for the subjectivity of consciousness cannot be taken for granted over its trailing objectivity that ceaselessly deceives.
Classical logic is usually interpreted as the logic of propositions. But from Boole's original development up to modern categorical logic, there has always been the alternative interpretation of classical logic as the logic of subsets of any given (nonempty) universe set. Partitions on a universe set are dual to subsets of a universe set in the sense of the reverse-the-arrows category-theoretic duality--which is reflected in the duality between quotient objects and subobjects throughout algebra. Hence the idea arises of a dual logic of partitions. That dual logic is described here. Partition logic is at the same mathematical level as subset logic since models for both are constructed from (partitions on or subsets of) arbitrary unstructured sets with no ordering relations, compatibility or accessibility relations, or topologies on the sets. Just as Boole developed logical finite probability theory as a quantitative treatment of subset logic, applying the analogous mathematical steps to partition logic yields a logical notion of entropy so that information theory can be refounded on partition logic. But the biggest application is that when partition logic and the accompanying logical information theory are "lifted" to complex vector spaces, then the mathematical framework of quantum mechanics is obtained. Partition logic models indefiniteness (i.e., numerical attributes on a set become more definite as the inverse-image partition becomes more refined) while subset logic models the definiteness of classical physics (an entity either definitely has a property or definitely does not). Hence partition logic provides the backstory so the old idea of "objective indefiniteness" in QM can be fleshed out to a full interpretation of quantum mechanics.
In a quantum universe with a strong arrow of time, it is standard to postulate that the initial wave function started in a particular macrostate: the special low-entropy macrostate selected by the Past Hypothesis. Moreover, there is an additional postulate about statistical mechanical probabilities according to which the initial wave function is a "typical" choice in the macrostate. Together, they support a probabilistic version of the Second Law of Thermodynamics: typical initial wave functions will increase in entropy. Hence, there are two sources of randomness in such a universe: the quantum-mechanical probabilities of the Born rule and the statistical mechanical probabilities of the Statistical Postulate. I propose a new way to understand time's arrow in a quantum universe. It is based on what I call the Thermodynamic Theories of Quantum Mechanics. According to this perspective, there is a natural choice for the initial quantum state of the universe, which is given by not a wave function but by a density matrix. The density matrix plays a microscopic role: it appears in the fundamental dynamical equations of those theories. The density matrix also plays a macroscopic / thermodynamic role: it is exactly the projection operator onto the Past Hypothesis subspace. Thus, given an initial subspace, we obtain a unique choice of the initial density matrix. I call this property "the conditional uniqueness" of the initial quantum state. The conditional uniqueness provides a new and general strategy to eliminate statistical mechanical probabilities in the fundamental physical theories, by which we can reduce the two sources of randomness to only the quantum mechanical one. I also explore the idea of an absolutely unique initial quantum state, in a way that might realize Penrose's idea of a strongly deterministic universe.
In a quantum universe with a strong arrow of time, we postulate a low-entropy boundary condition to account for the temporal asymmetry. In this paper, I show that the Past Hypothesis also contains enough information to simplify the quantum ontology and define a unique initial condition in such a world. First, I introduce Density Matrix Realism, the thesis that the quantum universe is described by a fundamental density matrix that represents something objective. This stands in sharp contrast to Wave Function Realism, the thesis that the quantum universe is described by a wave function that represents something objective. Second, I suggest that the Past Hypothesis is sufficient to determine a unique and simple density matrix. This is achieved by what I call the Initial Projection Hypothesis: the initial density matrix of the universe is the normalized projection onto the special low-dimensional Hilbert space. Third, because the initial quantum state is unique and simple, we have a strong case for the Nomological Thesis: the initial quantum state of the universe is on a par with laws of nature. This new package of ideas has several interesting implications, concerning the harmony between statistical mechanics and quantum mechanics, the dynamic unity of the universe and its subsystems, and the alleged conflict between Humean supervenience and quantum entanglement.
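The Initial Projection Hypothesis has a simple linear-algebra form: the initial density matrix is the projection onto the Past Hypothesis subspace, normalized by its trace. A minimal numerical sketch, where the total dimension 8 and subspace dimension 3 are arbitrary illustrative choices:

```python
import numpy as np

# Toy Hilbert space of dimension 8; the "Past Hypothesis subspace" is
# spanned by the first 3 basis vectors (dimensions are illustrative).
dim, k = 8, 3
P = np.zeros((dim, dim))
P[:k, :k] = np.eye(k)       # projection onto the subspace

rho = P / np.trace(P)       # Initial Projection Hypothesis: normalized projection

assert np.isclose(np.trace(rho), 1.0)       # a valid density matrix
assert np.allclose(rho @ rho * k, rho)      # proportional to a projection

# Its von Neumann entropy is log k: maximally mixed over the subspace,
# with no further "typicality" choice left to make.
evals = np.linalg.eigvalsh(rho)
S = -sum(p * np.log(p) for p in evals if p > 1e-12)
print(S, np.log(k))
```

The point of the sketch is the uniqueness claim: once the subspace is fixed, there is exactly one such state, unlike a wave function, for which any of infinitely many vectors in the subspace would do.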
This article describes the results of conceptualizing the idea of mind at the stage of maturity. It delineates the acquisition by the energy system (the mind) of stable morphological characteristics, associated with such a pivotal formation as discourse. A qualitative structural and ontological sign of the system's transition to this stage is the transformation of the verbal morphology of the mind into a discursive one. The analysis of the poststructuralist understanding of discourse in the context of the dispersion of meanings (Foucault) made it possible to formulate a notion of it as a meaning constituted by the relation between discursive practice and the worldview, regarded as a meta-discourse or a global discursive formation. In consequence of this relationship, a discrete and simultaneous scattering of meanings arises, whose procedural side is a concrete discourse and whose productive aspect is linked with the creation of a local discursive formation. Based on this view, a logical formula of discourse is proposed, which takes into account the entropy of the language and the entropy of the worldview as particular manifestations of the entropy of the mind. Using this formula, and considering the reactive nature of discourse, a classification was developed that includes reactive, suggestive, synthetic, and creative types of discourse. In turn, these types of discourse are correlated with the specific characteristics of certain activities as a psychological category. The article also considers the translation of the structure of discourse dissipation from the cognitive plane into the affective sphere, through which a hierarchy of significances is formed that performs the sense-forming function.
The inverse influence of the hierarchy of significances on the structure of the dispersion of meanings was analyzed, and to account for it a conditional coefficient of the value deviation of the significance of meanings was introduced. This parameter reflects the sense correction of a meaning that occurs as discourse emerges from discursive practice. Thus, discourse is presented as a complex dynamic formation of the mind, arising at the maturity stage of the system as the combined effect of the entropic dispersion of meanings and the value deviation of their significances.
Categorical logic has shown that modern logic is essentially the logic of subsets (or "subobjects"). Partitions are dual to subsets so there is a dual logic of partitions where a "distinction" [an ordered pair of distinct elements (u,u′) from the universe U ] is dual to an "element". An element being in a subset is analogous to a partition π on U making a distinction, i.e., if u and u′ were in different blocks of π. Subset logic leads to finite probability theory by taking the (Laplacian) probability as the normalized size of each subset-event of a finite universe. The analogous step in the logic of partitions is to assign to a partition the number of distinctions made by the partition normalized by the total number of ordered pairs |U|² from the finite universe. That yields a notion of "logical entropy" for partitions and a "logical information theory." The logical theory directly counts the (normalized) number of distinctions in a partition while Shannon's theory gives the average number of binary partitions needed to make those same distinctions. Thus the logical theory is seen as providing a conceptual underpinning for Shannon's theory based on the logical notion of "distinctions."
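The normalized count of distinctions described here is straightforward to compute directly. A minimal sketch, where the four-element universe and its partitions are illustrative:

```python
from itertools import product

def logical_entropy(partition, universe):
    """h(pi) = |dit(pi)| / |U|^2: the number of distinctions (ordered pairs
    of elements in distinct blocks of pi), normalized by all ordered pairs."""
    block_of = {u: i for i, block in enumerate(partition) for u in block}
    dits = sum(1 for u, v in product(universe, repeat=2)
               if block_of[u] != block_of[v])
    return dits / len(universe) ** 2

U = [0, 1, 2, 3]
print(logical_entropy([{0, 1}, {2, 3}], U))  # 8 distinctions / 16 pairs = 0.5
print(logical_entropy([{u} for u in U], U))  # discrete partition: 12/16 = 0.75
print(logical_entropy([set(U)], U))          # indiscrete partition: 0/16 = 0.0
```

Equivalently, h(π) = 1 − Σ_B (|B|/|U|)² over the blocks B: the probability that two independent equiprobable draws from U fall in different blocks, i.e., that the partition distinguishes them.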
Boltzmannian statistical mechanics partitions the phase space of a system into macro-regions, and the largest of these is identified with equilibrium. What justifies this identification? Common answers focus on Boltzmann’s combinatorial argument, the Maxwell-Boltzmann distribution, and maximum entropy considerations. We argue that they fail and present a new answer. We characterise equilibrium as the macrostate in which a system spends most of its time and prove a new theorem establishing that equilibrium thus defined corresponds to the largest macro-region. Our derivation is completely general in that it does not rely on assumptions about a system’s dynamics or internal interactions.
The integration of embodied and computational approaches to cognition requires that non-neural body parts be described as parts of a computing system, which realizes cognitive processing. In this paper, based on research about morphological computations and the ecology of vision, I argue that non-neural body parts could be described as parts of a computational system, but they do not realize computation autonomously, only in connection with some kind of—even in the simplest form—central control system. Finally, I integrate the proposal defended in the paper with the contemporary mechanistic approach to wide computation.
The principle of Information Conservation or Determinism is a governing assumption of physical theory. Determinism has counterfactual consequences. It entails that if the present were different, then the future would be different. But determinism is temporally symmetric: it entails that if the present were different, the past would also have to be different. This runs contrary to our commonsense intuition that what happens in the future depends on the past in a way the past does not depend on the future. To understand how this can be so we observe that while the truth of some counterfactuals is guaranteed by the laws of logic or the laws of nature, some are not. It is among the latter, contingent counterfactuals that we find temporal asymmetry. It is this asymmetry that gives causation a temporal direction. The temporal asymmetry of these counterfactuals is explained by the fact that the dynamical laws of nature are logically irreversible functions from partial states of the world onto other partial states. (Logical reversibility is not to be confused, though it too often is, with time-reversal invariance.) Though these irreversible laws are locally indeterministic, they can sum to give a globally deterministic description of the world. This combination of global determinism and local indeterminism gives rise to contingent counterfactual dependence and gives that dependence a direction. That direction is independent of the direction of entropy. The direction of contingent counterfactual dependence is time's arrow.
In this paper, the role of the environment and physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon’s perspective, the embodied view on cognition seems natural but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that it is notoriously over-appreciated in the current debate. The new mechanistic view on explanation suggests that even if it is critical to situate a mechanism in its environment and study its physical composition, or realization, it is also stressed that not all detail counts, and that some bodily features of cognitive systems should be left out from explanations.