Integrated Information Theory (IIT) is a leading scientific theory of consciousness, which implies a kind of panpsychism. In this paper, I consider whether IIT is compatible with a particular kind of panpsychism, known as Russellian panpsychism, which purports to avoid the main problems of both physicalism and dualism. I will first show that if IIT were compatible with Russellian panpsychism, it would contribute to solving Russellian panpsychism's combination problem, which threatens to show that the view does not avoid the main problems of physicalism and dualism after all. I then show that the theories are not compatible as they currently stand, in view of what I call the coarse-graining problem. After I explain the coarse-graining problem, I will offer two possible solutions, each involving a small modification of IIT. Given either of these modifications, IIT and Russellian panpsychism may be fully compatible after all, and jointly enable significant progress on the mind–body problem.
Chaitin’s incompleteness result related to random reals and the halting probability has been advertised as the ultimate and the strongest possible version of the incompleteness and undecidability theorems. It is argued that such claims are exaggerations.
It's not clear what integrated information theorists (Koch, Tononi) are saying. And their view lacks the resources to explain even very rudimentary facts about experiences.
The Integrated Information Theory of consciousness (IIT) claims that consciousness is identical to maximal integrated information, or maximal Φ. One objection to IIT is based on what may be called the intrinsicality problem: consciousness is an intrinsic property, but maximal Φ is an extrinsic property; therefore, they cannot be identical. In this paper, I show that this problem is not unique to IIT, but rather derives from a trilemma that confronts almost any theory of consciousness. Given most theories of consciousness, the following three claims are inconsistent. INTRINSICALITY: Consciousness is intrinsic. NON-OVERLAP: Conscious systems do not overlap with other conscious systems (à la Unger's problem of the many). REDUCTIONISM: Consciousness is constituted by more fundamental properties (as per standard versions of physicalism and Russellian monism). In view of this, I will consider whether rejecting INTRINSICALITY is necessarily less plausible than rejecting NON-OVERLAP or REDUCTIONISM. I will also consider whether IIT is necessarily committed to rejecting INTRINSICALITY or whether it could also accept solutions that reject NON-OVERLAP or REDUCTIONISM instead. I will suggest that the best option for IIT may be a solution that rejects REDUCTIONISM rather than INTRINSICALITY or NON-OVERLAP.
Categorical logic has shown that modern logic is essentially the logic of subsets (or "subobjects"). Partitions are dual to subsets, so there is a dual logic of partitions where a "distinction" [an ordered pair of distinct elements (u,u′) from the universe U] is dual to an "element". An element being in a subset is analogous to a partition π on U making a distinction, i.e., u and u′ being in different blocks of π. Subset logic leads to finite probability theory by taking the (Laplacian) probability as the normalized size of each subset-event of a finite universe. The analogous step in the logic of partitions is to assign to a partition the number of distinctions made by the partition normalized by the total number of ordered pairs |U|² from the finite universe. That yields a notion of "logical entropy" for partitions and a "logical information theory." The logical theory directly counts the (normalized) number of distinctions in a partition while Shannon's theory gives the average number of binary partitions needed to make those same distinctions. Thus the logical theory is seen as providing a conceptual underpinning for Shannon's theory based on the logical notion of "distinctions".
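To make the counting concrete, here is a minimal Python sketch (with a made-up four-element universe and partition, not an example from the paper) that computes logical entropy as the normalized number of distinctions and checks it against the equivalent closed form 1 − Σᵢ pᵢ², where pᵢ are the block probabilities:

```python
from itertools import product

def logical_entropy(partition, universe):
    """Logical entropy of a partition: the number of distinctions
    (ordered pairs of elements in different blocks) normalized by |U|^2."""
    block_of = {u: i for i, block in enumerate(partition) for u in block}
    distinctions = sum(1 for u, v in product(universe, universe)
                       if block_of[u] != block_of[v])
    return distinctions / len(universe) ** 2

# Made-up universe and partition, purely for illustration.
U = ['a', 'b', 'c', 'd']
pi = [{'a', 'b'}, {'c'}, {'d'}]

h = logical_entropy(pi, U)
# Equivalent closed form: one minus the sum of squared block probabilities.
h_closed = 1 - sum((len(B) / len(U)) ** 2 for B in pi)
assert abs(h - h_closed) < 1e-12
print(h)  # 0.625
```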
Logical information theory is the quantitative version of the logic of partitions just as logical probability theory is the quantitative version of the dual Boolean logic of subsets. The resulting notion of information is about distinctions, differences and distinguishability and is formalized using the distinctions of a partition. All the definitions of simple, joint, conditional and mutual entropy of Shannon information theory are derived by a uniform transformation from the corresponding definitions at the logical level. The purpose of this paper is to give the direct generalization to quantum logical information theory that similarly focuses on the pairs of eigenstates distinguished by an observable, i.e., qudits of an observable. The fundamental theorem for quantum logical entropy and measurement establishes a direct quantitative connection between the increase in quantum logical entropy due to a projective measurement and the eigenstates that are distinguished by the measurement. Both the classical and quantum versions of logical entropy have simple interpretations as "two-draw" probabilities for distinctions. The conclusion is that quantum logical entropy is the simple and natural notion of information for quantum information theory focusing on the distinguishing of quantum states.
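For reference, the standard formulas behind the "two-draw" reading are as follows; the abstract does not display them, so this is a reconstruction from the usual definitions rather than a quotation:

```latex
% Classical logical entropy of a probability distribution p = (p_1, ..., p_n):
% the probability that two independent draws land in different blocks.
h(p) = \sum_i p_i (1 - p_i) = 1 - \sum_i p_i^{2}

% Quantum logical entropy of a density matrix \rho:
% the analogous "two-draw" distinction probability for quantum states.
h(\rho) = 1 - \operatorname{tr}(\rho^{2})
```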
In Cybernetics (1961 Edition), Professor Norbert Wiener noted that "The role of information and the technique of measuring and transmitting information constitute a whole discipline for the engineer, for the neuroscientist, for the psychologist, and for the sociologist". Sociology aside, the neuroscientists and the psychologists inferred "information transmitted" using the discrete summations from Shannon Information Theory. The present author has since scrutinized the psychologists' approach in depth, and found it wrong. The neuroscientists' approach is highly related, but remains unexamined. Neuroscientists quantified "the ability of [physiological sensory] receptors (or other signal-processing elements) to transmit information about stimulus parameters". Such parameters could vary along a single continuum (e.g., intensity), or along multiple dimensions that altogether provide a Gestalt – such as a face. Here, unprecedented scrutiny is given to how 23 neuroscience papers computed "information transmitted" in terms of stimulus parameters and the evoked neuronal spikes. The computations relied upon Shannon's "confusion matrix", which quantifies the fidelity of a "general communication system". Shannon's matrix is square, with the same labels for columns and for rows. Nonetheless, neuroscientists labelled the columns by "stimulus category" and the rows by "spike-count category". The resulting "information transmitted" is spurious, unless the evoked spike-counts are worked backwards to infer the hypothetical evoking stimuli. The latter task is probabilistic and, regardless, requires that the confusion matrix be square. Was it? For these 23 significant papers, the answer is No.
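For readers who want to see what the disputed computation amounts to, the following Python sketch (with made-up counts, not data from any of the 23 papers) computes "information transmitted" as the mutual information of a joint count matrix; the abstract's complaint is that this measures the fidelity of Shannon's system only when rows and columns carry the same labels and the matrix is genuinely square:

```python
import numpy as np

def information_transmitted(counts):
    """Shannon 'information transmitted' (mutual information, in bits)
    computed from a matrix of joint counts: columns = sent categories,
    rows = received categories."""
    p = counts / counts.sum()                  # joint probabilities
    p_rows = p.sum(axis=1, keepdims=True)      # marginal over received categories
    p_cols = p.sum(axis=0, keepdims=True)      # marginal over sent categories
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (p_rows * p_cols))
    return float(np.nansum(terms))             # zero-count cells contribute nothing

# Illustrative counts only: columns labelled by "stimulus category",
# rows labelled by "spike-count category", as in the papers under scrutiny.
counts = np.array([[30,  5,  0],
                   [10, 25,  5],
                   [ 0, 10, 15]])
print(information_transmitted(counts))
```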
What does it feel like to be a bat? Is conscious experience of echolocation closer to that of vision or audition? Or do bats process echolocation nonconsciously, such that they do not feel anything about echolocation? This famous question of bats' experience, posed by the philosopher Thomas Nagel in 1974, clarifies the difficult nature of the mind–body problem. Why a particular sense, such as vision, has to feel like vision, but not like audition, is totally puzzling. This is especially so given that any conscious experience is supported by neuronal activity. Activity of a single neuron appears fairly uniform across modalities and even similar to that for non-conscious processing. Without any explanation of why a particular sense has to feel the way it does, researchers cannot approach the question of the bats' experience. Is there any theory that gives us hope for such an explanation? Currently, probably none, except for one. Integrated information theory (IIT) has the potential to offer a plausible explanation. IIT essentially claims that any system that is composed of causally interacting mechanisms can have conscious experience. And precisely how the system feels is determined by the way the mechanisms influence each other in a holistic way. In this article, I will give a brief explanation of the essence of IIT. Further, I will briefly provide a potential scientific pathway to approach bats' conscious experience and its philosophical implications. If IIT, or its improved or related versions, is validated enough, the theory will gain credibility. When it matures enough, predictions from the theory, including the nature of bats' experience, will have to be accepted. I argue that a seemingly impossible question about bats' consciousness will drive empirical and theoretical consciousness research to make big breakthroughs, in a similar way to how an impossible question about the age of the universe has driven modern cosmology.
This paper investigates the degree to which information theory, and the derived uses that make it work as a metaphor of our age, can be helpful in thinking about God's immanence and transcendence. We ask when it is possible to say that a consciousness has to be behind the information we encounter. If God is to be thought about as a communicator of information, we need to ask whether a communication system has to pre-exist the divine and impose itself on God. If we want God to be Creator, and not someone who would work like a human being, 'creating' will mean sustaining in being as much the channel, the material system, as the message. Is information control? It seems that God's actions are not going to be informational control of everything. To clarify the issue, we attempt to distinguish two kinds of 'genialities' in nature, as a way to evaluate the likelihood of God from nature. We investigate concepts and images of God, in terms of the history of ideas but also in terms of philosophical theology, metaphysics, and religious ontology.
Integrated Information Theory (IIT) is one of the most influential theories of consciousness, mainly due to its claim of mathematically formalizing consciousness in a measurable way. However, the theory, as it is formulated, does not account for contextual observations that are crucial for understanding consciousness. Here we put forth three possible difficulties for its current version, which could be interpreted as a trilemma. Either consciousness is contextual or not. If contextual, either IIT needs revisions to its axioms to include contextuality, or it is inconsistent. If consciousness is not contextual, then IIT faces an empirical challenge. Therefore, we argue that IIT in its current version is inadequate.
A generalized information theory is proposed as a natural extension of Shannon's information theory. It proposes that information comes from forecasts. The more precise and the more unexpected a forecast is, the more information it conveys. If subjective forecasts always conform to objective facts, then the generalized information measure will be equivalent to Shannon's information measure. The generalized communication model is consistent with K. R. Popper's model of knowledge evolution. The mathematical foundations of the new information theory, the generalized communication model, information measures for semantic information and sensory information, and the coding meanings of generalized entropy and generalized mutual information are introduced. Assessments and optimizations of pattern recognition, predictions, and detection with the generalized information criterion are discussed. For economization of communication, a revised version of rate-distortion theory is proposed: rate-of-keeping-precision theory, which is a theory for data compression and also a theory for matching objective channels with the subjective understanding of information receivers. Applications include stock market forecasting and video image presentation.
Information Theory, Evolution and The Origin of Life: The Origin and Evolution of Life as a Digital Message: How Life Resembles a Computer, Second Edition. Hubert P. Yockey, 2005, Cambridge University Press, Cambridge: 400 pages, index; hardcover, US $60.00; ISBN: 0-521-80293-8. The reason that there are principles of biology that cannot be derived from the laws of physics and chemistry lies simply in the fact that the genetic information content of the genome for constructing even the simplest organisms is much larger than the information content of these laws. – Yockey in his previous book (1992, 335). In this new book, Information Theory, Evolution and The Origin of Life, Hubert Yockey points out that the digital, segregated, and linear character of the genetic information system has a fundamental significance. If inheritance blended rather than segregated, Darwinian evolution would not occur. If inheritance were analog instead of digital, evolution would also be impossible, because it would be impossible to remove the effect of noise. In this way, life is guided by information, and so information is a central concept in molecular biology. The author presents a picture of how the main concepts of the genetic code were developed. He was able to show that, despite Francis Crick's belief that the Central Dogma is only a hypothesis, the Central Dogma of Francis Crick is a mathematical consequence of the redundant nature of the genetic code. The redundancy arises from the fact that the DNA and mRNA alphabet is formed by triplets of 4 nucleotides, and so the number of letters (triplets) is 64, whereas the proteome alphabet has only 20 letters (20 amino acids), and so the translation from the larger alphabet to the smaller one is necessarily redundant. Except for Tryptophan and Methionine, all amino acids are coded by more than one triplet; therefore, it is undecidable which source letter (codon) was actually sent from the mRNA. This proof has a corollary stating that there are no such mathematical constraints for protein-protein communication. With this clarification, Yockey contributes to diminishing the widespread confusion related to such a central concept as the Central Dogma. Thus the Central Dogma prohibits the origin of life "proteins first." Proteins cannot be generated by "self-organization." Understanding this property of the Central Dogma will have a serious impact on research on the origin of life.
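The 64-to-20 redundancy is easy to see with a toy back-translation; the sketch below (Python, using a small excerpt of the standard genetic code rather than the full table) shows why the sent codon is recoverable for Tryptophan and Methionine but undecidable for a residue such as Leucine:

```python
# A few entries from the standard genetic code (codon -> amino acid),
# enough to illustrate the 64-to-20 redundancy discussed above.
CODON_TABLE = {
    "UUA": "Leu", "UUG": "Leu", "CUU": "Leu", "CUC": "Leu",
    "CUA": "Leu", "CUG": "Leu",              # six codons, one amino acid
    "UGG": "Trp",                             # Tryptophan: a single codon
    "AUG": "Met",                             # Methionine: a single codon
}

def codons_for(amino_acid):
    """Back-translation: every codon that could have produced this residue."""
    return [c for c, aa in CODON_TABLE.items() if aa == amino_acid]

print(codons_for("Leu"))  # six candidates -> the sent codon is undecidable
print(codons_for("Trp"))  # one candidate  -> uniquely recoverable
```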
Shannon's information theory has been a popular component of first-order cybernetics. It quantifies information transmitted in terms of the number of times a sent symbol is received as itself, or as another possible symbol. Sent symbols were events and received symbols were outcomes. Garner and Hake reinterpreted Shannon, describing events and outcomes as categories of a stimulus attribute, so as to quantify the information transmitted in the psychologist's category (or absolute judgment) experiment. There, categories are represented by specific stimuli, and the human subject must assign those stimuli, singly and in random order, to the categories that they represent. Hundreds of computations of information transmitted, and of its alleged asymptote, the sensory channel capacity, ensued. The present paper critically re-examines those estimates. It also reviews estimates of memory capacity from memory experiments. It concludes that absolute judgment is memory-limited and that channel capacities are actually memory capacities. In particular, there are factors that affect absolute judgment that are not explainable within Shannon's theory, factors such as feedback, practice, motivation, and stimulus range, as well as the anchor effect, sequential dependences, the rise in information transmitted with the increase in number of stimulus dimensions, and the phenomena of masking and stimulus duration dependence. It is recommended that absolute judgments be abandoned, because there are already many direct estimates of memory capacity.
This report reviews what quantum physics and information theory have to tell us about the age-old question, How come existence? No escape is evident from four conclusions: (1) The world cannot be a giant machine, ruled by any preestablished continuum physical law. (2) There is no such thing at the microscopic level as space or time or spacetime continuum. (3) The familiar probability function or functional, and wave equation or functional wave equation, of standard quantum theory provide mere continuum idealizations and by reason of this circumstance conceal the information-theoretic source from which they derive. (4) No element in the description of physics shows itself as closer to primordial than the elementary quantum phenomenon, that is, the elementary device-intermediated act of posing a yes-no physical question and eliciting an answer or, in brief, the elementary act of observer-participancy. Otherwise stated, every physical quantity, every it, derives its ultimate significance from bits, binary yes-or-no indications, a conclusion which we epitomize in the phrase, it from bit.
Measures and theories of information abound, but there are few formalised methods for treating the contextuality that can manifest in different information systems. Quantum theory provides one possible formalism for treating information in context. This paper introduces a quantum-inspired model of the human mental lexicon. This model is currently being experimentally investigated and we present a preliminary set of pilot data suggesting that concept combinations can indeed behave non-separably.
Information flow in a system is a core cybernetics concept. It has been used frequently in Sensory Psychology since 1951. There, Shannon Information Theory was used to calculate "information transmitted" in "absolute identification" experiments involving human subjects. Originally, in Shannon's "system", any symbol received ("outcome") is among the symbols sent ("events"). Not all symbols are received as transmitted, hence an indirect noise measure is calculated, "information transmitted", which requires knowing the confusion matrix, its columns labeled by "event" and its rows labeled by "outcome". Each matrix entry is dependent upon the frequency with which a particular outcome corresponds to a particular event. However, for the sensory psychologist, stimulus intensities are "events"; the experimenter partitions the intensity continuum into ranges called "stimulus categories" and "response categories", such that each confusion-matrix entry represents the frequency with which a stimulus from a stimulus category falls within a particular response category. Of course, a stimulus evokes a sensation, and the subject's immediate memory of it is compared to the memories of sensations learned during practice, to make a categorization. Categorizing thus introduces "false noise", which is only removed if categorizations can be converted back to their hypothetical evoking stimuli. But sensations and categorizations are both statistically distributed, and the stimulus that corresponds to a given mean categorization cannot be known from only the latter; the relation of intensity to mean sensation, and of mean sensation to mean categorization, are needed. Neither, however, is presently knowable. This is a quandary, which arose because sensory psychologists ignored a ubiquitous component of Shannon's "system", the uninvolved observer, who calculates "information transmitted". Human sensory systems, however, are within de facto observers, making "false noise" inevitable.
Bohm and Hiley suggest that a certain new type of active information plays a key objective role in quantum processes. This paper discusses the implications of this suggestion for our understanding of the relation between the mental and the physical aspects of reality.
Software application ontologies have the potential to become the keystone in state-of-the-art information management techniques. It is expected that these ontologies will support the sort of reasoning power required to navigate large and complex terminologies correctly and efficiently. Yet, there is one problem in particular that continues to stand in our way. As these terminological structures increase in size and complexity, and the drive to integrate them inevitably swells, it is clear that the level of consistency required for such navigation will become correspondingly difficult to maintain. While descriptive semantic representations are certainly a necessary component to any adequate ontology-based system, so long as ontology engineers rely solely on semantic information, without a sound ontological theory informing their modeling decisions, this goal will surely remain out of reach. In this paper we describe how Language and Computing nv (L&C), along with The Institute for Formal Ontology and Medical Information Sciences (IFOMIS), are working towards developing and implementing just such a theory, combining the open software architecture of L&C's LinkSuite™ with the philosophical rigor of IFOMIS's Basic Formal Ontology. In this way we aim to move beyond the more or less simple controlled vocabularies that have dominated the industry to date.
The way in which quantum information can unify quantum mechanics (and therefore the standard model) and general relativity is investigated. Quantum information is defined as the generalization of the concept of information to the choice among infinite sets of alternatives. Relevantly, the axiom of choice is necessary in general. The unit of quantum information, a qubit, is interpreted as a relevant elementary choice among an infinite set of alternatives, generalizing that of a bit. The invariance to the axiom of choice shared by quantum mechanics is introduced: it constitutes quantum information as the relation of any state unorderable in principle (e.g. any coherent quantum state before measurement) and the same state already well-ordered (e.g. the well-ordered statistical ensemble of the measurement of the quantum system at issue). This allows classical and quantum time to be equated, correspondingly, as the well-ordering of any physical quantity or quantities and their coherent superposition. That equating is interpretable as the isomorphism of Minkowski space and Hilbert space. Quantum information is the structure interpretable in both ways and thus underlying their unification. Its deformation is representable correspondingly as gravitation in the deformed pseudo-Riemannian space of general relativity and as the entanglement of two or more quantum systems. The standard model studies a single quantum system and thus privileges a single reference frame, which turns out to be inertial for the generalized symmetry U(1)×SU(2)×SU(3) "gauging" the standard model. As the standard model refers to a single quantum system, it is necessarily linear and thus the corresponding privileged reference frame is necessarily inertial. The Higgs mechanism U(1) → U(1)×SU(2), already sufficiently confirmed experimentally, describes exactly the choice of the initial position of a privileged reference frame as the corresponding breaking of the symmetry. The standard model defines 'mass at rest' linearly and absolutely, but general relativity does so non-linearly and relatively. The "Big Bang" hypothesis additionally interprets that position as that of the "Big Bang". It also serves to reconcile the linear standard model in the singularity of the "Big Bang" with the observed nonlinearity of the further expansion of the universe, which is described very well by general relativity. Quantum information links the standard model and general relativity in another way, by mediation of entanglement. The linearity and absoluteness of the former and the nonlinearity and relativeness of the latter can be considered as the relation of a whole and the same whole divided into parts entangled in general.
An important problem with machine learning is that when label number n>2, it is very difficult to construct and optimize a group of learning functions, and we wish that optimized learning functions are still useful when the prior distribution P(x) (where x is an instance) is changed. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory consists of a group of truth functions or membership functions. In comparison with likelihood functions, Bayesian posteriors, and logistic functions used by popular methods, membership functions can be more conveniently used as learning functions without the above problem. In Logical Bayesian Inference (LBI), every label's learning is independent. For multilabel learning, we can directly obtain a group of optimized membership functions from a big enough sample with labels, without preparing different samples for different labels. A group of Channel Matching (CM) algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions on a two-dimensional feature space, 2-3 iterations can make mutual information between three classes and three labels surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved and becomes the CM-EM algorithm, which can outperform the EM algorithm when mixture ratios are imbalanced, or local convergence exists. The CM iteration algorithm needs to combine neural networks for MMI classifications on high-dimensional feature spaces. LBI needs further studies for the unification of statistics and logic.
In the April 2002 edition of JCS I outlined the conscious electromagnetic information field theory, claiming that consciousness is that component of the brain's electromagnetic field that is downloaded to motor neurons and is thereby capable of communicating its informational content to the outside world. In this paper I demonstrate that the theory is robust to criticisms. I further explore implications of the theory, particularly as regards the relationship between electromagnetic fields, information, the phenomenology of consciousness and the meaning of free will. Using cemi field theory I propose a working hypothesis that shows, among other things, that awareness and information represent the same phenomenon viewed from different reference frames.
This paper articulates an account of causation as a collection of information-theoretic relationships between patterns instantiated in the causal nexus. I draw on Dennett's account of real patterns to characterize potential causal relata as patterns with specific identification criteria and noise tolerance levels, and actual causal relata as those patterns instantiated at some spatiotemporal location in the rich causal nexus as originally developed by Salmon. I develop a representation framework using phase space to precisely characterize causal relata, including their degree of counterfactual robustness, causal profiles, causal connectivity, and privileged grain size. By doing so, I show how the philosophical notion of causation can be rendered in a format that is amenable to direct application of mathematical techniques from information theory, such that the resulting informational measures are causal informational measures. This account provides a metaphysics of causation that supports interventionist semantics and causal modeling and discovery techniques.
Information-theoretic approaches to formal logic analyse the "common intuitive" concept of propositional implication (or argumental validity) in terms of information content of propositions and sets of propositions: one given proposition implies a second if the former contains all of the information contained by the latter; an argument is valid if the conclusion contains no information beyond that of the premise-set. This paper locates information-theoretic approaches historically, philosophically and pragmatically. Advantages and disadvantages are identified by examining such approaches in themselves and by contrasting them with standard transformation-theoretic approaches. Transformation-theoretic approaches analyse validity (and thus implication) in terms of transformations that map one argument onto another: a given argument is valid if no transformation carries it onto an argument with all true premises and false conclusion. Model-theoretic, set-theoretic, and substitution-theoretic approaches, which dominate current literature, can be construed as transformation-theoretic, as can the so-called possible-worlds approaches. Ontic and epistemic presuppositions of both types of approaches are considered. Attention is given to the question of whether our historically cumulative experience applying logic is better explained from a purely information-theoretic perspective or from a purely transformation-theoretic perspective or whether apparent conflicts between the two types of approaches need to be reconciled in order to forge a new type of approach that recognizes their basic complementarity.
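One way to render these informal characterizations symbolically (this symbolization is ours, not quoted from the paper) is to treat information content as an unanalyzed operator info(·) and write:

```latex
% Information-theoretic implication and validity:
P \text{ implies } Q \iff \mathrm{info}(Q) \subseteq \mathrm{info}(P)
\qquad
P_1,\dots,P_n \vDash C \iff \mathrm{info}(C) \subseteq \mathrm{info}(\{P_1,\dots,P_n\})

% Transformation-theoretic validity, by contrast:
% P_1,...,P_n entail C iff no admissible transformation \tau makes
% \tau(P_1),...,\tau(P_n) all true while \tau(C) is false.
```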
In a recent paper I proposed a novel relativity theory termed Information Relativity (IR). Unlike Einstein's relativity, which dictates as force majeure that relativity is a true state of nature, Information Relativity assumes that relativity results from differences in information about nature between observers who are in motion relative to each other. The theory is based on two axioms: 1. the laws of physics are the same in all inertial frames of reference (Special Relativity's first axiom); 2. all translations of information from one frame of reference to another are carried by light or by another carrier with equal velocity (information-carrier axiom). For the case of constant relative velocities, I showed in the aforementioned paper that IR accounts successfully for a class of relativistic time results, including the Michelson-Morley "null" result, the Sagnac effect, and the neutrino velocities reported by OPERA and other collaborations. Here I apply the theory, with no alteration, to cosmology. I show that the theory is successful in accounting for several cosmological findings, including the pattern of recession velocity predicted by inflationary theories, the GZK energy suppression phenomenon at redshift z ≈ 1.6, and the amounts of matter and dark energy reported in recent ΛCDM cosmologies.
An information recovery problem is the problem of constructing a proposition containing the information dropped in going from a given premise to a given conclusion that follows. The proposition(s) to be constructed can be required to satisfy other conditions as well, e.g. being independent of the conclusion, or being "informationally unconnected" with the conclusion, or some other condition dictated by the context. This paper discusses various types of such problems, presents techniques and principles useful in solving them, and develops algorithmic methods for certain classes of such problems. The results are then applied to classical number theory, in particular, to questions concerning possible refinements of the 1931 Gödel Axiom Set, e.g. whether any of its axioms can be analyzed into "informational atoms". Two propositions are "informationally unconnected" [with each other] if no informative (nontautological) consequence of one also follows from the other. A proposition is an "informational atom" if it is informative but no information can be dropped from it without rendering it uninformative (tautological). Presentation, employment, and investigation of these two new concepts are prominent features of this paper.
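The two new concepts admit a natural symbolization in standard consequence notation; the rendering below is our reconstruction of the verbal definitions above, not a formula taken from the paper:

```latex
% P and Q are informationally unconnected: any common consequence is tautological.
\mathrm{Unconn}(P,Q) \iff \forall C\,\bigl[(P \vDash C \wedge Q \vDash C) \rightarrow\; \vDash C\bigr]

% P is an informational atom: P is informative, and any strictly weaker
% consequence of P is uninformative (tautological).
\mathrm{Atom}(P) \iff \nvDash P \;\wedge\; \forall Q\,\bigl[(P \vDash Q \wedge Q \nvDash P) \rightarrow\; \vDash Q\bigr]
```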
Argues that information, in the animal behaviour or evolutionary context, is correlation/covariation. The alternation of red and green traffic lights is information because it is (quite strictly) correlated with the times when it is safe to drive through the intersection; thus driving in accordance with the lights is adaptive (causative of survival). Daylength is usefully, though less strictly, correlated with the optimal time to breed. Information in the sense of covariance implies what is adaptive; if an animal can infer what the information implies, it increases its chances of survival.
The dominant approach in privacy theory defines information privacy as some form of control over personal information. In this essay, I argue that the control approach is mistaken, but for different reasons than those offered by its other critics. I claim that information privacy involves the drawing of epistemic boundaries—boundaries between what others should and shouldn't know about us. While controlling what information others have about us is one strategy we use to draw such boundaries, it is not the only one. We conceal information about ourselves and we reveal it. And since the meaning of information is not self-evident, we also work to shape how others contextualize and interpret the information about us that they have. Information privacy is thus about more than controlling information; it involves the constant work of producing and managing public identities, what I call "social self-authorship." In the second part of the essay, I argue that thinking about information privacy in terms of social self-authorship helps us see ways that information technology threatens privacy, which the control approach misses. Namely, information technology makes social self-authorship invisible and unnecessary, by making it difficult for us to know when others are forming impressions about us, and by providing them with tools for making assumptions about who we are which obviate the need for our involvement in the process.
By bringing together Dretske's theory of knowledge, Shannon's theory of information, and the conceptual framework of statistical physics, this paper explores some of the metaphysical challenges posed by a naturalistic notion of semantical information. It is argued that Dretske's theory cannot be said to be naturalistically grounded in the world described by classical physics and that Dretske information is not consistent with Shannon information. A possible route to reconciling Dretske's insights with Shannon's theory is proposed. Along the way, an attempt is made to clarify several points of possible confusion about the relationships between Dretske information, Shannon information and statistical physics.
This paper is an attempt to lay out foundations for a general theory of coincidence in information spaces such as the World Wide Web, expanding on existing work on bursty structures in document streams and information cascades. We elaborate on the hypothesis that every resource that is published in an information space enters a temporary interaction with another resource once a unique explicit or implicit reference between the two is found. This thought is motivated by Erwin Schrödinger's notion of entanglement between quantum systems. We present a generic information cascade model that exploits only the temporal order of information sharing activities, combined with inherent properties of the shared information resources. The approach was applied to data from the world's largest online citizen science platform, Zooniverse, and we report findings of this case study.
Information-theoretic approaches to formal logic analyze the "common intuitive" concepts of implication, consequence, and validity in terms of information content of propositions and sets of propositions: one given proposition implies a second if the former contains all of the information contained by the latter; one given proposition is a consequence of a second if the latter contains all of the information contained by the former; an argument is valid if the conclusion contains no information beyond that of the premise-set. This paper locates information-theoretic approaches historically, philosophically, and pragmatically. Advantages and disadvantages are identified by examining such approaches in themselves and by contrasting them with standard transformation-theoretic approaches. Transformation-theoretic approaches analyze validity (and thus implication) in terms of transformations that map one argument onto another: a given argument is valid if no transformation carries it onto an argument with all true premises and false conclusion. Model-theoretic, set-theoretic, and substitution-theoretic approaches, which dominate current literature, can be construed as transformation-theoretic, as can the so-called possible-worlds approaches. Ontic and epistemic presuppositions of both types of approaches are considered. Attention is given to the question of whether our historically cumulative experience applying logic is better explained from a purely information-theoretic perspective or from a purely transformation-theoretic perspective or whether apparent conflicts between the two types of approaches need to be reconciled in order to forge a new type of approach that recognizes their basic complementarity.
Purpose – For half a century, neuroscientists have used Shannon Information Theory to calculate "information transmitted," a hypothetical measure of how well neurons "discriminate" amongst stimuli. Neuroscientists' computations, however, fail to meet even the technical requirements for credibility. Ultimately, the reasons must be conceptual. That conclusion is confirmed here, with crucial implications for neuroscience. The paper aims to discuss these issues. Design/methodology/approach – Shannon Information Theory depends upon a physical model, Shannon's "general communication system." Neuroscientists' interpretation of that model is scrutinized here. Findings – In Shannon's system, a recipient receives a message composed of symbols. The symbols received, the symbols sent, and their hypothetical occurrence probabilities altogether allow calculation of "information transmitted." Significantly, Shannon's system's "reception" (decoding) side physically mirrors its "transmission" (encoding) side. However, neurons lack the "reception" side; neuroscientists nonetheless insisted that decoding must happen. They turned to Homunculus, an internal humanoid who infers stimuli from neuronal firing. However, Homunculus must contain a Homunculus, and so on ad infinitum – unless it is super-human. But any need for Homunculi, as in "theories of consciousness," is obviated if consciousness proves to be "emergent." Research limitations/implications – Neuroscientists' "information transmitted" indicates, at best, how well neuroscientists themselves can use neuronal firing to discriminate amongst the stimuli given to the research animal. Originality/value – A long-overdue examination unmasks a hidden element in neuroscientists' use of Shannon Information Theory, namely, Homunculus. Almost 50 years' worth of computations are recognized as irrelevant, mandating fresh approaches to understanding "discriminability".
Purpose – The purpose of this paper is to examine the popular "information transmitted" interpretation of absolute judgments, and to provide an alternative interpretation if one is needed. Design/methodology/approach – The psychologists Garner and Hake and their successors used Shannon's Information Theory to quantify information transmitted in absolute judgments of sensory stimuli. Here, information theory is briefly reviewed, followed by a description of the absolute judgment experiment, and its information theory analysis. Empirical channel capacities are scrutinized. A remarkable coincidence, the similarity of maximum information transmitted to human memory capacity, is described. Over 60 representative psychology papers on "information transmitted" are inspected for evidence of memory involvement in absolute judgment. Finally, memory is conceptually integrated into absolute judgment through a novel qualitative model that correctly predicts how judgments change with increase in the number of judged stimuli. Findings – Garner and Hake gave conflicting accounts of how absolute judgments represent information transmission. Further, "channel capacity" is an illusion caused by sampling bias and wishful thinking; information transmitted actually peaks and then declines, the peak coinciding with memory capacity. Absolute judgments themselves have numerous idiosyncrasies that are incompatible with a Shannon general communication system but which clearly imply memory dependence. Research limitations/implications – Memory capacity limits the correctness of absolute judgments. Memory capacity is already well measured by other means, making redundant the informational analysis of absolute judgments. Originality/value – This paper presents a long-overdue comprehensive critical review of the established interpretation of absolute judgments in terms of "information transmitted". An inevitable conclusion is reached: that published measurements of information transmitted actually measure memory capacity. A new, qualitative model is offered for the role of memory in absolute judgments. The model is well supported by recently revealed empirical properties of absolute judgments.
The Foundational Model of Anatomy (FMA) is a map of the human body. Like maps of other sorts – including the map-like representations we find in familiar anatomical atlases – it is a representation of a certain portion of spatial reality as it exists at a certain (idealized) instant of time. But unlike other maps, the FMA comes in the form of a sophisticated ontology of its object domain, comprising some 1.5 million statements of anatomical relations among some 70,000 anatomical kinds. It is further distinguished from other maps in that it represents not some specific portion of spatial reality (say: Leeds in 1996), but rather the generalized or idealized spatial reality associated with a generalized or idealized human being at some generalized or idealized instant of time. It will be our concern in what follows to outline the approach to ontology that is represented by the FMA and to argue that it can serve as the basis for a new type of anatomical information science. We also draw some implications for our understanding of spatial reasoning and spatial ontologies in general.
Synthetic biology aims at reconstructing life to put to the test the limits of our understanding. It is based on premises similar to those which permitted the invention of computers, where a machine, which reproduces over time, runs a program, which replicates. The underlying heuristics explored here is that an authentic category of reality, information, must be coupled with the standard categories, matter, energy, space and time, to account for what life is. The use of this still elusive category permits us to interact with reality via construction of self-consistent models producing predictions which can be instantiated into experiments. While the present theory of information has much to say about the program, with the creative properties of recursivity at its heart, we almost entirely lack a theory of the information supporting the machine. We suggest that the program of life codes for processes meant to trap information which comes from the context provided by the environment of the machine.
We review a recent approach to the foundations of quantum mechanics inspired by quantum information theory. The approach is based on a general framework, which allows one to address a large class of physical theories which share basic information-theoretic features. We first illustrate two very primitive features, expressed by the axioms of causality and purity-preservation, which are satisfied by both classical and quantum theory. We then discuss the axiom of purification, which expresses a strong version of the Conservation of Information and captures the core of a vast number of protocols in quantum information. Purification is a highly non-classical feature and leads directly to the emergence of entanglement at the purely conceptual level, without any reference to the superposition principle. Supplemented by a few additional requirements, satisfied by classical and quantum theory, it provides a complete axiomatic characterization of quantum theory for finite dimensional systems.
The outlines of a novel, fully naturalistic theory of perception are provided, one that can explain perception of an object X by organism Z in terms of reflexive causality. On the reflexive view proposed, organism Z perceives object or property X just in case X causes Z to acquire causal dispositions reflexively directed back upon X itself. This broadly functionalist theory is potentially capable of explaining both perceptual representation and perceptual content in purely causal terms, making no use of informational concepts. However, such a reflexive, naturalistic causal theory must compete with well-entrenched, supposedly equally naturalistic theories of perception that are based on some concept of information, so the paper also includes some basic logical, naturalistic and explanatory criticisms of such informational views.
There are (at least) three approaches to quantifying information. The first, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it [1]. The second, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number of alternate events that have been ruled out [2]. The third, statistical learning theory, has introduced measures of capacity that control (in part) the expected risk of classifiers [3]. These capacities quantify the expectations regarding future data that learning algorithms embed into classifiers. Solomonoff and Hutter have applied algorithmic information to prove remarkable results on universal induction. Shannon information provides the mathematical foundation for communication and coding theory. However, both approaches have shortcomings. Algorithmic information is not computable, severely limiting its practical usefulness. Shannon information refers to ensembles rather than actual events: it makes no sense to compute the Shannon information of a single string – or rather, there are many answers to this question depending on how a related ensemble is constructed. Although there are asymptotic results linking algorithmic and Shannon information, it is unsatisfying that there is such a large gap – a difference in kind – between the two measures. This note describes a new method of quantifying information, effective information, that links algorithmic information to Shannon information, and also links both to capacities arising in statistical learning theory [4, 5]. After introducing the measure, we show that it provides a non-universal analog of Kolmogorov complexity. We then apply it to derive basic capacities in statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. A nice byproduct of our approach is an interpretation of the explanatory power of a learning algorithm in terms of the number of hypotheses it falsifies [6], counted in two different ways for the two capacities. We also discuss how effective information relates to information gain, Shannon and mutual information.
This essay is a sustained attempt to bring new light to some of the perennial problems in philosophy of mind surrounding phenomenal consciousness and introspection through developing an account of sensory and phenomenal concepts. Building on the information-theoretic framework of Dretske (1981), we present an informational psychosemantics as it applies to what we call sensory concepts, concepts that apply, roughly, to so-called secondary qualities of objects. We show that these concepts have a special informational character and semantic structure that closely tie them to the brain states realizing conscious qualitative experiences. We then develop an account of introspection which exploits this special nature of sensory concepts. The result is a new class of concepts, which, following recent terminology, we call phenomenal concepts: these concepts refer to phenomenal experience itself and are the vehicles used in introspection. On our account, the connection between sensory and phenomenal concepts is very tight: it consists in different semantic uses of the same cognitive structures underlying the sensory concepts, such as the concept of red. Contrary to widespread opinion, we show that information theory contains all the resources to satisfy internalist intuitions about phenomenal consciousness, while not offending externalist ones. A consequence of this account is that it explains and predicts the so-called conceivability arguments against physicalism on the basis of the special nature of sensory and phenomenal concepts. Thus we not only show why physicalism is not threatened by such arguments, but also demonstrate its strength in virtue of its ability to predict and explain away such arguments in a principled way. However, we take the main contribution of this work to be what it provides in addition to a response to those conceivability arguments, namely, a substantive account of the interface between sensory and conceptual systems and the mechanisms of introspection as based on the special nature of the information flow between them.
Logical Probability (LP) is strictly distinguished from Statistical Probability (SP). To measure semantic information or confirm hypotheses, we need to use a sampling distribution (conditional SP function) to test or confirm a fuzzy truth function (conditional LP function). The Semantic Information Measure (SIM) proposed is compatible with Shannon's information theory and Fisher's likelihood method. It can ensure that the less the LP of a predicate is and the larger the truth value of the proposition is, the more information there is. So the SIM can be used as Popper's information criterion for falsification or test. The SIM also allows us to optimize the truth values of counterexamples or degrees of disbelief in a hypothesis to get the optimized degree of belief, i.e. Degree of Confirmation (DOC). To explain confirmation, this paper 1) provides the calculation method of the DOC of universal hypotheses; 2) discusses how to resolve the Raven Paradox with the new DOC and its increment; 3) derives the DOC of rapid HIV tests: DOC of "+" = 1 − (1 − specificity)/sensitivity, which is similar to the Likelihood Ratio (= sensitivity/(1 − specificity)) but has the upper limit 1; 4) discusses negative DOC for excessive affirmations, wrong hypotheses, or lies; and 5) discusses the DOC of general hypotheses, with GPS as an example.
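The claimed contrast between the DOC of a positive result and the classical likelihood ratio is easy to check numerically; here is a minimal Python sketch with illustrative sensitivity and specificity values (not figures from the paper):

```python
def degree_of_confirmation_positive(sensitivity, specificity):
    """Degree of Confirmation of a positive test result, as given in the
    abstract: DOC('+') = 1 - (1 - specificity) / sensitivity."""
    return 1 - (1 - specificity) / sensitivity

def likelihood_ratio_positive(sensitivity, specificity):
    """Classical positive likelihood ratio, for comparison (unbounded above)."""
    return sensitivity / (1 - specificity)

# Hypothetical test characteristics, purely for illustration.
sens, spec = 0.99, 0.98
print(degree_of_confirmation_positive(sens, spec))  # ~0.9798, bounded by 1
print(likelihood_ratio_positive(sens, spec))        # 49.5, can grow without bound
```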
This article explores the usefulness of interdisciplinarity as a method of enquiry by proposing an investigation of the concept of information in the light of semiotics. This is because, as Kull, Deacon, Emmeche, Hoffmeyer and Stjernfelt state, information is an implicitly semiotic term (Biological Theory 4(2):167–173, 2009: 169), but the logical relation between semiosis and information has not been sufficiently clarified yet. Across the history of cybernetics, the concept of information undergoes an uneven development; that is, information is an 'objective' entity in first order cybernetics, and becomes a 'subjective' entity in second order cybernetics. This contradiction relegates the status of information to that of a 'true' or 'false' formal logic problem. The present study proposes that a solution to this contradiction can be found in Deely's reconfiguration of Peirce's 'object' (as found in his triadic model of semiosis) into 'thing' and 'object' (Deely 1981). This ontology allows one to argue that information is neither 'true' nor 'false', and to suggest that, when considered in light of its workability, information can be both true and false, and as such it constitutes an organism's purely objective reality (Deely 2009b). It is stated that in the process of building such a reality, information is 'motivated' by environmental, physiological, emotional (including past feelings and expectations) constraints which are, in turn, framed by observership. Information is therefore found in the irreducible cybersemiotic process that links at once all these conditions and that is simultaneously constrained by them. The integration of cybernetics' and semiotics' understanding of information shows that history is the analytical principle that grants scientific rigour to interdisciplinary investigations. As such, in any attempt to clarify its epistemological stance (e.g. the semiotic aspect of information), it is argued that biosemiotics needs not only to acknowledge semiotics (as it does), but also cybernetics in its interdisciplinary heritage.
Many philosophers are skeptical about the scientific value of the concept of biological information. However, several have recently proposed a more positive view of ascribing information as an exercise in scientific modeling. I argue for an alternative role: guiding empirical data collection for the sake of theorizing about the evolution of semantics. I clarify and expand on Bergstrom and Rosvall's suggestion of taking a "diagnostic" approach that defines biological information operationally as a procedure for collecting empirical cases. The more recent modeling-based accounts still perpetuate a theory-centric view of scientific concepts, which motivated philosophers' misplaced skepticism in the first place.
Policy-makers must sometimes choose between an alternative which has somewhat lower expected value for each person, but which will substantially improve the outcomes of the worst off, and an alternative which has somewhat higher expected value for each person, but which will leave those who end up worst off substantially less well off. The popular ex ante Pareto principle requires the choice of the alternative with higher expected utility for each. We argue that ex ante Pareto ought to be rejected because it conflicts with the requirement that, when possible, one ought to decide as one would with full information. We apply our argument in an analysis of US policy on screening for breast cancer.
Nineteen fifty-eight was an extraordinary year for cultural innovation, especially in English literature. It was also a year in which several boldly revisionary positions were first articulated in analytic philosophy. And it was a crucial year for the establishment of structural linguistics, of structuralist anthropology, and of cognitive psychology. Taken together these developments had a radical effect on our conceptions of individual creativity and of the inheritance of tradition. The present essay attempts to illuminate the relationships among these developments, and to explain the foundational role played by mathematical, logical and information theory in all of them.
Within theoretical and empirical enquiries, many different meanings associated with consciousness have appeared, leaving the term itself quite vague. This makes formulating an abstract and unifying version of the concept of consciousness – the main aim of this article – into an urgent theoretical imperative. It is argued that consciousness, characterized as dually accessible (cognized from the inside and the outside), hierarchically referential (semantically ordered), bodily determined (embedded in the working structures of an organism or conscious system), and useful in action (pragmatically functional), is a graded rather than an all-or-none phenomenon. A gradational approach, however, despite its explanatory advantages, can lead to some counterintuitive consequences and theoretical problems. In most such conceptions consciousness is extended globally (attached to primitive organisms or artificial systems), but also locally (connected to certain lower-level neuronal and bodily processes). For example, according to information integration theory (as introduced recently by Tononi and Koch, 2014), even such simple artificial systems as photodiodes possess minuscule amounts of consciousness. The major challenge for this article, then, is to establish reasonable, empirically justified constraints on how extended the range of a graded consciousness could be. It is argued that conscious systems are limited globally by the ability to individuate information (where individuated information is understood as evolutionarily embedded, socially altered, and private), whereas local limitations should be determined on the basis of a hypothesis about the action-oriented nature of the processes that select states of consciousness. Using these constraints, an abstract concept of consciousness is arrived at, hopefully contributing to a more unified state of play within consciousness studies itself.
Understanding computation as “a process of the dynamic change of information” leads us to look at the different types of computation and information. Computation of information does not exist by itself; it is to be considered as part of a system that uses it for some given purpose. Information can be meaningless, like thunderstorm noise, or meaningful, like an alert signal or the representation of a desired food. Thunderstorm noise contributes to the generation of meaningful information about coming rain. An alert signal has a meaning in that it allows a safety constraint to be satisfied. The representation of a desired food contributes to the satisfaction of metabolic constraints for the organism. Computations on information and representations will differ in nature and in complexity because the systems that link them have different constraints to satisfy. Animals have survival constraints to satisfy. Humans have many additional, specific constraints. And computers compute what the designer and programmer ask for. We propose to analyze the different relations between information, meaning, and representation by taking an evolutionary approach to the systems that link them. Such a bottom-up approach allows us to start with simple organisms and avoids an implicit focus on humans, the most complex and difficult case. To provide a common background usable for the many different cases, we use a systemic tool that defines the generation of meaningful information by and for a system submitted to a constraint [Menant, 2003]. This systemic tool allows us to position information, meaning, and representations for systems relative to environmental entities in an evolutionary perspective. We begin by positioning the notions of information, meaning, and representation, and recall the characteristics of the Meaning Generator System (MGS), which links a system submitted to a constraint to its environment. We then use the MGS for animals and highlight the network nature of the interrelated meanings about an entity of the environment. This brings us to define the representation of an item for an agent as the network of meanings relative to the item for the agent. Such meaningful representations embed the agents in their environments and are far from representations of the Good Old-Fashioned Artificial Intelligence type. The MGS approach is then used for humans, with a limitation resulting from the unknown nature of human consciousness. Applying the MGS to artificial systems leads us to look for compatibilities with different approaches to Artificial Intelligence (AI), such as embodied-situated AI, the Guidance Theory of Representations, and enactive AI. Concerns relating to different types of autonomy and to organic or artificial constraints are highlighted. We finish by summarizing the points addressed and by proposing some continuations.
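As a rough aid to reading the abstract above, here is a toy rendering of the idea of a Meaning Generator System: a system submitted to a constraint receives information and generates a meaning when that information bears on the constraint's satisfaction. The class, method names, and relevance test are my own illustrative assumptions, not Menant's formalism.

```python
# A toy rendering of the Meaning Generator System (MGS) idea sketched in the
# abstract: a system submitted to a constraint receives information from its
# environment and generates a "meaning" when that information is relevant to
# the satisfaction of the constraint. All names and the relevance test are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Meaning:
    information: str      # the received information that triggered the meaning
    constraint: str       # the constraint the meaning is generated for
    action: str           # the action the meaning calls for

class MeaningGeneratorSystem:
    def __init__(self, constraint, relevance, response):
        self.constraint = constraint   # e.g. "maintain viable pH"
        self.relevance = relevance     # info -> bool: does it bear on the constraint?
        self.response = response       # info -> action aimed at satisfying the constraint

    def receive(self, information):
        """Return a Meaning if the information matters for the constraint, else None."""
        if self.relevance(information):
            return Meaning(information, self.constraint, self.response(information))
        return None  # meaningless for this system, like thunderstorm noise for a stone

# A simple-organism-like agent whose constraint is to avoid acid environments.
agent = MeaningGeneratorSystem(
    constraint="maintain viable pH",
    relevance=lambda info: info == "acid detected",
    response=lambda info: "move away",
)

print(agent.receive("acid detected"))   # meaningful: a constraint is at stake
print(agent.receive("random noise"))    # None: no constraint is at stake
```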
Ever since the hard problem of consciousness (Chalmers, 1996, 1995) first entered the scene in the debate over consciousness, many have taken it to show the limitations of a scientific or naturalist explanation of consciousness. The hard problem is the problem of explaining why there is any experience associated with certain physical processes, that is, why there is anything it is like to undergo such physical processes. The character of one's experience doesn't seem to be entailed by physical processes, and so an explanation which can overcome such a worry must (1) explain how physical processes give rise to experience (explain the entailment), (2) give an explanation which doesn't rely on such physical processes, or (3) show why the hard problem is misguided in some sense. Recently, a rather ambitious and novel theory of consciousness has entered the scene – the Integrated Information Theory (IIT) of Consciousness (Oizumi et al., 2014; Tononi, 2008; Tononi et al., 2016) – which proposes that consciousness is the result of a specific type of information processing, what those developing the theory call integrated information. The central aim of this dissertation is to philosophically investigate IIT and see whether it has the ability to overcome the hard problem and related worries. I then aim to use this philosophical investigation to answer a set of related questions which guide this dissertation: Is it possible to give an information-theoretic explanation of consciousness? What would the nature of such an explanation be, and would it result in a novel metaphysics of consciousness? In this dissertation, I begin in chapter one by setting up the hard problem and related arguments against the backdrop of IIT (Mindt, 2017). I show that, given a certain understanding of structural and dynamical properties, IIT fails to overcome the hard problem of consciousness. I go on in chapter two to argue that a deflationary account of causation is the best view for IIT to overcome the causal exclusion problem (Baxendale and Mindt, 2018). In chapter three, I explain IIT's account of how the qualitative character of our experience (qualia) arises and what view of intentionality (the directedness of our mental states) IIT advocates. I then move on in chapter four to show why the hard problem mischaracterizes structural and dynamical properties and misses important nuances that may shed light on giving a naturalized explanation of consciousness. In the last and fifth chapter, I outline a sketch of a novel metaphysics of consciousness that takes the conjunction of Neutral Monism and Information-Theoretic Structural Realism to give what I call Information-Theoretic Neutral-Structuralism.
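For readers unfamiliar with the informal idea behind integrated information, the following toy computation contrasts the information a whole system's present state carries about its past with what its parts carry about their own pasts in isolation. It is a deliberately simplified whole-versus-parts comparison, not the Φ measure defined in IIT 3.0, and the two-node "copy each other" dynamics is an invented example.

```python
# A toy whole-versus-parts comparison in the spirit of "integrated
# information": how much information does the whole system's present state
# carry about its past, beyond what the parts carry about their own pasts?
# This is an illustrative simplification, not the Phi algorithm of IIT 3.0.

from itertools import product
from math import log2
from collections import defaultdict

def step(state):
    """Two binary nodes that each copy the other's previous state."""
    a, b = state
    return (b, a)

states = list(product([0, 1], repeat=2))
p_past = 1 / len(states)   # uniform distribution over past states

def mutual_information(past_of, future_of):
    """I(past; future) for the given projections of the global state."""
    joint, p_x, p_y = defaultdict(float), defaultdict(float), defaultdict(float)
    for s in states:
        x, y = past_of(s), future_of(step(s))
        joint[(x, y)] += p_past
        p_x[x] += p_past
        p_y[y] += p_past
    return sum(p * log2(p / (p_x[x] * p_y[y])) for (x, y), p in joint.items() if p > 0)

whole = mutual_information(lambda s: s, lambda s: s)
part_a = mutual_information(lambda s: s[0], lambda s: s[0])
part_b = mutual_information(lambda s: s[1], lambda s: s[1])

print("whole system:", whole, "bits")          # 2.0: the mapping is a bijection
print("parts on their own:", part_a + part_b)  # 0.0: each node's future ignores its own past
print("integration (whole - parts):", whole - (part_a + part_b))
```

In this invented example all of the information about the past is generated by the system as a whole and none by the nodes taken separately, which is the intuitive sense in which the information is "integrated."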
A platitude that took hold with Kuhn is that there can be several equally good ways of balancing theoretical virtues for theory choice. Okasha recently modelled theory choice using technical apparatus from the domain of social choice: famously, Arrow showed that no method of social choice can jointly satisfy four desiderata, and each of the desiderata in social choice has an analogue in theory choice. Okasha suggested that one can avoid the Arrow analogue for theory choice by employing a strategy used by Sen in social choice, namely, to enhance the information made available to the choice algorithms. I argue here that, despite Okasha’s claims to the contrary, the information-enhancing strategy is not compelling in the domain of theory choice.
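The aggregation worry behind the Arrow analogue can be illustrated with an invented example: three theoretical virtues each rank three candidate theories, and pairwise-majority aggregation of those rankings yields a cycle, so no overall best theory emerges. The virtues, theories, and rankings below are hypothetical and chosen to produce a classic Condorcet profile.

```python
# An invented illustration of the aggregation problem behind the Arrow
# analogue for theory choice: three theoretical virtues each rank three
# theories, and aggregating by pairwise majority produces a cycle.
# Virtues, theories, and rankings are hypothetical.

from itertools import combinations

# Each virtue ranks the theories from best to worst (a Condorcet profile).
rankings = {
    "simplicity":    ["T1", "T2", "T3"],
    "empirical fit": ["T2", "T3", "T1"],
    "scope":         ["T3", "T1", "T2"],
}

def prefers(ranking, x, y):
    """True if the ranking places theory x above theory y."""
    return ranking.index(x) < ranking.index(y)

theories = ["T1", "T2", "T3"]
for x, y in combinations(theories, 2):
    votes_x = sum(prefers(r, x, y) for r in rankings.values())
    votes_y = len(rankings) - votes_x
    winner = x if votes_x > votes_y else y
    print(f"{x} vs {y}: majority of virtues prefer {winner}")

# Output: T1 beats T2, T2 beats T3, and T3 beats T1 -- a cycle, so
# pairwise-majority aggregation of virtues yields no stable best theory.
```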