Chaitin’s incompleteness result related to random reals and the halting probability has been advertised as the ultimate and the strongest possible version of the incompleteness and undecidability theorems. It is argued that such claims are exaggerations.
There are writers in both metaphysics and algorithmic information theory (AIT) who seem to think that the latter could provide a formal theory of the former. This paper is intended as a step in that direction. It demonstrates how AIT might be used to define basic metaphysical notions such as *object* and *property* for a simple, idealized world. The extent to which these definitions capture intuitions about the metaphysics of the simple world, times the extent to which we think the simple world is analogous to our own, will determine a lower bound for basing a metaphysics for *our* world on AIT.
An important problem with machine learning is that, when the number of labels n>2, it is very difficult to construct and optimize a group of learning functions, and we wish that the optimized learning functions remain useful when the prior distribution P(x) (where x is an instance) is changed. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory consists of a group of truth functions or membership functions. In comparison with likelihood functions, Bayesian posteriors, and logistic functions used by popular methods, membership functions can be more conveniently used as learning functions without the above problem. In LBI, every label's learning is independent. For multilabel learning, we can directly obtain a group of optimized membership functions from a big enough sample with labels, without preparing different samples for different labels. A group of CM algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions on a two-dimensional feature space, 2-3 iterations can make the mutual information between the three classes and three labels surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved and becomes the CM-EM algorithm, which can outperform the EM algorithm when mixture ratios are imbalanced or local convergence exists. The CM iteration algorithm needs to be combined with neural networks for MMI classification on high-dimensional feature spaces. LBI needs further studies for the unification of statistics and logic.
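As an illustrative aside (this is not the paper's CM algorithm), the quantity that MMI classification drives upward can be estimated empirically: sample three hypothetical two-dimensional Gaussian classes, assign labels by some initial partition, and compute the mutual information between true classes and assigned labels from the empirical joint distribution. All distribution parameters below are invented for the sketch.

```python
# Illustrative sketch (not the paper's CM algorithm): estimate the mutual
# information I(class; label) that Maximum Mutual Information (MMI)
# classification tries to maximize, for three hypothetical 2-D Gaussian classes.
import numpy as np

rng = np.random.default_rng(0)
means = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([1.5, 2.5])]
n_per_class = 2000

# Sample the three Gaussian classes.
X = np.vstack([rng.normal(m, 1.0, size=(n_per_class, 2)) for m in means])
y = np.repeat(np.arange(3), n_per_class)          # true classes

# A simple initial partition: assign each point to the nearest class mean.
dists = np.stack([np.linalg.norm(X - m, axis=1) for m in means], axis=1)
labels = dists.argmin(axis=1)                     # assigned labels

def mutual_information(a, b, k=3):
    """Empirical mutual information (in bits) between two discrete arrays."""
    joint = np.zeros((k, k))
    for i, j in zip(a, b):
        joint[i, j] += 1
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

print(f"I(class; label) ≈ {mutual_information(y, labels):.3f} bits")
```

The abstract reports that 2-3 CM iterations push this value past 99% of the MMI for most initial partitions; the sketch only shows how the value itself is measured, not the iteration.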
Argumentation theory underwent a significant development in the Fifties and Sixties: its revival is usually connected to Perelman's criticism of formal logic and to the development of informal logic. Interestingly enough, it was during this same period that Artificial Intelligence was developed, a field committed to the following thesis (from now on referred to as the AI-thesis): human reasoning can be emulated by machines. The paper suggests a reconstruction of the opposition between formal and informal logic as a move against a premise of an argument for the AI-thesis, and suggests making a distinction between a broad and a narrow notion of algorithm that might be used to reformulate the question as a foundational problem for argumentation theory.
In Darwin’s Dangerous Idea, Daniel Dennett claims that evolution is algorithmic. On Dennett’s analysis, evolutionary processes are trivially algorithmic because he assumes that all natural processes are algorithmic. I will argue that there are more robust ways to understand algorithmic processes that make the claim that evolution is algorithmic empirical and not conceptual. While laws of nature can be seen as compression algorithms of information about the world, it does not follow logically that they are implemented as algorithms by physical processes. For that to be true, the processes have to be part of computational systems. The basic difference between mere simulation and real computing is having proper causal structure. I will show what kind of requirements this poses for natural evolutionary processes if they are to be computational.
An information recovery problem is the problem of constructing a proposition containing the information dropped in going from a given premise to a given conclusion that follows. The proposition(s) to be constructed can be required to satisfy other conditions as well, e.g. being independent of the conclusion, or being “informationally unconnected” with the conclusion, or some other condition dictated by the context. This paper discusses various types of such problems, presents techniques and principles useful in solving them, and develops algorithmic methods for certain classes of such problems. The results are then applied to classical number theory, in particular to questions concerning possible refinements of the 1931 Gödel Axiom Set, e.g. whether any of its axioms can be analyzed into “informational atoms”. Two propositions are “informationally unconnected” [with each other] if no informative (nontautological) consequence of one also follows from the other. A proposition is an “informational atom” if it is informative but no information can be dropped from it without rendering it uninformative (tautological). Presentation, employment, and investigation of these two new concepts are prominent features of this paper.
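As a toy propositional illustration of the recovery problem (not the paper's number-theoretic setting), take the premise $P \wedge Q$ and the conclusion $P$:

$$\text{Premise: } P \wedge Q, \qquad \text{Conclusion: } P, \qquad \text{Recovered proposition: } R := Q, \quad \text{since } P \wedge R \;\dashv\vdash\; P \wedge Q.$$

Whether such an $R$ also meets the further conditions mentioned above (independence from, or informational unconnectedness with, the conclusion) depends on the background logic, and settling that is part of what the paper's techniques are for.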
According to a recent survey by the HR Research Institute, as the presence of artificial intelligence (AI) becomes increasingly common in the workplace, HR professionals are worried that the use of recruitment algorithms will lead to a “dehumanization” of the hiring process. Our main goals in this paper are threefold: i) to bring attention to this neglected issue, ii) to clarify what exactly this concern about dehumanization might amount to, and iii) to sketch an argument for why dehumanizing the hiring process is ethically suspect. After distinguishing the use of the term “dehumanization” in this context (i.e., removing the human presence) from its more common meaning in the interdisciplinary field of dehumanization studies (i.e., conceiving of other humans as subhuman), we argue that the use of hiring algorithms may negatively impact the employee-employer relationship. We argue that there are good independent reasons to accept a substantive employee-employer relationship, as well as an applicant-employer relationship, both of which are consistent with a stakeholder theory of corporate obligations. We further argue that dehumanizing the hiring process may negatively impact these relationships because of the difference between the values of human recruiters and the values embedded in recruitment algorithms. Drawing on Nguyen’s (2021) critique of how Twitter “gamifies communication”, we argue that replacing human recruiters with algorithms imports artificial values into the hiring process. We close by briefly considering some ways to potentially mitigate the problems posed by recruitment algorithms, along with the possibility that some difficult trade-offs will need to be made.
There are (at least) three approaches to quantifying information. The first, algorithmic information or Kolmogorov complexity, takes events as strings and, given a universal Turing machine, quantifies the information content of a string as the length of the shortest program producing it [1]. The second, Shannon information, takes events as belonging to ensembles and quantifies the information resulting from observing the given event in terms of the number of alternate events that have been ruled out [2]. The third, statistical learning theory, has introduced measures of capacity that control (in part) the expected risk of classifiers [3]. These capacities quantify the expectations regarding future data that learning algorithms embed into classifiers. Solomonoff and Hutter have applied algorithmic information to prove remarkable results on universal induction. Shannon information provides the mathematical foundation for communication and coding theory. However, both approaches have shortcomings. Algorithmic information is not computable, severely limiting its practical usefulness. Shannon information refers to ensembles rather than actual events: it makes no sense to compute the Shannon information of a single string – or rather, there are many answers to this question depending on how a related ensemble is constructed. Although there are asymptotic results linking algorithmic and Shannon information, it is unsatisfying that there is such a large gap – a difference in kind – between the two measures. This note describes a new method of quantifying information, effective information, that links algorithmic information to Shannon information, and also links both to capacities arising in statistical learning theory [4, 5]. After introducing the measure, we show that it provides a non-universal analog of Kolmogorov complexity. We then apply it to derive basic capacities in statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. A nice byproduct of our approach is an interpretation of the explanatory power of a learning algorithm in terms of the number of hypotheses it falsifies [6], counted in two different ways for the two capacities. We also discuss how effective information relates to information gain, Shannon and mutual information.
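A small sketch of the contrast drawn above between the first two measures (illustrative only; it is not the note's "effective information"): a general-purpose compressor gives a crude, computable upper bound on the algorithmic information of a single string, while the Shannon value attaches to an assumed ensemble rather than to the string itself.

```python
# Illustrative contrast: algorithmic information concerns single strings,
# Shannon information concerns ensembles. A compressor upper-bounds the
# former; the entropy formula applies only once an ensemble is specified.
import math
import zlib

s_structured = b"ab" * 500          # a single, highly regular string
s_less_regular = bytes(range(256)) * 4

# Crude, computable upper bound on Kolmogorov complexity (in bytes).
for name, s in [("structured", s_structured), ("less regular", s_less_regular)]:
    print(name, "compressed length:", len(zlib.compress(s)), "bytes")

# Shannon entropy needs an ensemble: here, i.i.d. symbols with p = (0.5, 0.5).
p = [0.5, 0.5]
H = -sum(pi * math.log2(pi) for pi in p)
print("Entropy of the assumed ensemble:", H, "bits per symbol")
# The same string "abab..." gets different Shannon values under different
# assumed ensembles -- the ambiguity the note points to.
```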
\Complexity" is a catchword of certain extremely popular and rapidly developing interdisciplinary new sciences, often called accordingly the sciences of complexity1. It is often closely associated with another notably popular but ambiguous word, \information" information, in turn, may be justly called the central new concept in the whole 20th century science. Moreover, the notion of information is regularly coupled with a key concept of thermodynamics, viz. entropy. And like this was not enough, it is quite usual to (...) add one more, at present extraordinarily popular notion, namely chaos, and wed it with the above-mentioned concepts. (shrink)
Perhaps nowhere better than in "On the Names of God" can readers discern Laclau's appreciation of theology, specifically, negative theology, and the radical potencies of political theology. // It is Laclau's close attention to Eckhart and Dionysius in this essay that reveals a core theological strategy to be learned by populist reasons or social logics and applied in politics or democracies to come. // This mode of algorithmically informed negative political theology is not mathematically inert. It aspires to relate a fraction or ratio to a series ... It strains to reduce the decided determinateness of such seriality ever condemned to the naive metaphysics of bad infinity. // It is worth considering that it is the specific 'number' of Dionysius in differential identification with an ineffable god (and, as such, a singular becoming between theology and numbers) that is floating in at least two dimensions [of signification] (be it political Demand on the horizontal dimension or theological Desire on [a] floating dimension) that cannot but *perform the link that relinks* names of god with any political life, populist reason, social justice, or radical democracy straining toward peace.
This article investigates religious ideals persistent in the datafication of information society. Its nodal point is Thomas Bayes, after whom Laplace named the primal probability algorithm. It reconsiders their mathematical innovations alongside Laplace's providential deism and Bayes' singular theological treatise. Conceptions of divine justice one finds among probability theorists play no small part in the algorithmic data-mining and microtargeting of Cambridge Analytica. Theological traces within mathematical computation are emphasized as the vantage over large numbers shifts to weights beyond enumeration in probability theory. Collateral secularizations of predestination and theodicy emerge as probability optimizes into Bayesian prediction and machine learning. The paper revisits the semiotics and theism of Peirce and a given beyond the probable in Whitehead to recontextualize the critiques of providence by Agamben and Foucault. It reconsiders datafication problems alongside Nietzschean valuations. Religiosity likely remains encoded within the very algorithms presumed purified by technoscientific secularity or mathematical dispassion.
Synthetic biology aims at reconstructing life to put to the test the limits of our understanding. It is based on premises similar to those which permitted the invention of computers, where a machine, which reproduces over time, runs a program, which replicates. The underlying heuristic explored here is that an authentic category of reality, information, must be coupled with the standard categories of matter, energy, space and time to account for what life is. The use of this still elusive category permits us to interact with reality via the construction of self-consistent models producing predictions which can be instantiated into experiments. While the present theory of information has much to say about the program, with the creative properties of recursivity at its heart, we almost entirely lack a theory of the information supporting the machine. We suggest that the program of life codes for processes meant to trap information which comes from the context provided by the environment of the machine.
A platitude that took hold with Kuhn is that there can be several equally good ways of balancing theoretical virtues for theory choice. Okasha recently modelled theory choice using technical apparatus from the domain of social choice: famously, Arrow showed that no method of social choice can jointly satisfy four desiderata, and each of the desiderata in social choice has an analogue in theory choice. Okasha suggested that one can avoid the Arrow analogue for theory choice by employing a strategy used by Sen in social choice, namely, to enhance the information made available to the choice algorithms. I argue here that, despite Okasha’s claims to the contrary, the information-enhancing strategy is not compelling in the domain of theory choice.
When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
Despite recent breakthroughs in the field of artificial intelligence (AI) – or more specifically machine learning (ML) algorithms for object recognition and natural language processing – it seems to be the majority view that current AI approaches are still no real match for natural intelligence (NI). More importantly, philosophers have collected a long catalogue of features which imply that NI works differently from current AI not only in a gradual sense, but in a more substantial way: NI is closely related to consciousness, intentionality and experiential features like qualia (the subjective contents of mental states), and allows for understanding (e.g., gaining insight into causal relationships instead of ‘blindly’ relying on correlations), as well as for aesthetic and ethical judgement beyond what we can put into (explicit or data-induced implicit) rules to program machines with. Additionally, psychologists find NI to range from unconscious psychological processes to focused information processing, and from embodied and implicit cognition to ‘true’ agency and creativity. NI thus seems to transcend any neurobiological functionalism by operating on ‘bits of meaning’ instead of information in the sense of data, quite unlike both the ‘good old-fashioned’, symbolic AI of the past and the current wave of deep-neural-network-based, ‘sub-symbolic’ AI, which share the idea of thinking as (only) information processing: in symbolic AI, the name explicitly refers to its formal-system-based, i.e. essentially rule-based, nature, but sub-symbolic AI is also (implicitly) rule-based, only now via globally parametrized, nested functions. In the following I propose an alternative view of NI as information processing plus ‘bundle pushing’, discuss an example which illustrates how bundle pushing can cut information processing short, and suggest first ideas for scientific experiments in neurobiology and information theory as further investigations.
The main objective of this dissertation is to philosophically assess how the use of informational concepts in the field of classical thermostatistical physics has historically evolved from the late 1940s to the present day. I will first analyze in depth the main notions that form the conceptual basis on which 'informational physics' historically unfolded, encompassing (i) different entropy, probability and information notions, (ii) their multiple interpretative variations, and (iii) the formal, numerical and semantic-interpretative relationships among them. In the following, I will assess the history of informational thermophysics during the second half of the twentieth century. Firstly, I analyse the intellectual factors that gave rise to this current in the late forties (e.g., the popularization of Shannon's theory, interest in a naturalized epistemology of science, etc.), then study its consolidation in the Brillouinian and Jaynesian programs, and finally show how Carnap (1977) and his disciples tried to criticize this tendency within the scientific community. Then, I evaluate how informational physics became a predominant intellectual current in the scientific community in the nineties, made possible by the convergence of Jaynesianism and Brillouinism in proposals such as those of Tribus and McIrvine (1971) or Bekenstein (1973) and by the application of algorithmic information theory to the thermophysical domain. As a sign of its radicality at this historical stage, I explore the main proposals to include information as part of our physical reality, such as Wheeler’s (1990), Stonier’s (1990) or Landauer’s (1991), detailing the main philosophical arguments (e.g., Timpson, 2013; Lombardi et al. 2016a) against those inflationary attitudes towards information. Following this historical assessment, I systematically analyze whether the descriptive exploitation of informational concepts has historically contributed to providing us with knowledge of thermophysical reality via (i) explaining thermal processes such as the approach to equilibrium, (ii) advantageously predicting thermal phenomena, or (iii) enabling understanding of thermal properties such as thermodynamic entropy. I argue that these epistemic shortcomings would make it impossible to draw ontological conclusions in a justified way about the physical nature of information. In conclusion, I will argue that the historical exploitation of informational concepts has not contributed significantly to the epistemic progress of thermophysics. This would lead to characterizing informational proposals as 'degenerate science' (à la Lakatos 1978a) with regard to classical thermostatistical physics, or as theoretically underdeveloped with regard to the study of the cognitive dynamics of scientists in this physical domain.
The mind-body problem is analyzed in a physicalist perspective. By combining the concepts of emergence and algorithmic information theory in a thought experiment employing a basic nonlinear process, it is argued that epistemically strongly emergent properties may develop in a physical system. A comparison with the significantly more complex neural network of the brain shows that consciousness, too, is epistemically emergent in a strong sense. Thus a reductionist understanding of consciousness appears not to be possible; the mind-body problem does not have a reductionist solution. The ontologically emergent character of consciousness is then identified from a combinatorial analysis relating to system limits set by quantum mechanics, implying that consciousness is fundamentally irreducible to low-level phenomena. In the perspective of a modified definition of free will, the character of the physical interactions of the brain's neural system is subsequently studied. As an ontologically open system, it is asserted that its future states are undeterminable in principle. We argue that this leads to freedom of the will.
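The abstract does not specify which nonlinear process the thought experiment uses; as a stand-in, the logistic map below shows the generic feature such arguments appeal to: two states indistinguishable at any fixed measurement precision diverge, so predicting the system requires an ever-growing description of the initial condition.

```python
# Stand-in for the "basic nonlinear process" (the paper's own choice is not
# reproduced here): the logistic map at r = 4. Two trajectories starting
# within rounding error of each other diverge, illustrating the epistemic
# limit on prediction that the emergence argument appeals to.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a, x_b = 0.4, 0.4 + 1e-12   # initial conditions differing below "measurement" precision
for _ in range(60):
    x_a, x_b = logistic(x_a), logistic(x_b)
print(f"after 60 steps: {x_a:.6f} vs {x_b:.6f} (difference {abs(x_a - x_b):.3f})")
```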
This paper deals with Gärdenfors’ theory of conceptual spaces. Let \({\mathcal {S}}\) be a conceptual space consisting of 2-type fuzzy sets equipped with several kinds of metrics. Let a finite set of prototypes \(\tilde{P}_1,\ldots,\tilde{P}_n\in \mathcal {S}\) be given. Our main result is the construction of a classification algorithm. That is, given an element \({\tilde{A}}\in \mathcal {S},\) our algorithm classifies it into the conceptual field determined by one of the given prototypes \(\tilde{P}_i.\) The construction of our algorithm uses some physical analogies, and the Newton potential plays a significant role here. Importantly, the resulting conceptual fields are not convex in the Euclidean sense, which we believe is a reasonable departure from the assumptions of Gärdenfors’ original definition of the conceptual space. A partitioning algorithm of the space \(\mathcal {S}\) is also considered in the paper. In the application section, we test our classification algorithm on real data and obtain very satisfactory results. Moreover, the example we consider is another argument against requiring convexity of conceptual fields.
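A highly simplified, purely Euclidean sketch of the classification idea (the paper works with fuzzy sets under several metrics, and its actual algorithm is not reproduced here): assign an element to the prototype whose Newton-style potential, weight divided by distance, is largest. The prototype coordinates and weights below are hypothetical.

```python
# Simplified Euclidean stand-in for a Newton-potential classifier:
# assign a point x to the prototype P_i maximizing m_i / d(x, P_i).
import numpy as np

prototypes = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])  # hypothetical P_i
weights = np.array([1.0, 2.0, 1.0])                          # hypothetical "masses"

def classify(x, eps=1e-9):
    d = np.linalg.norm(prototypes - x, axis=1)
    potential = weights / (d + eps)   # Newton-potential analogy
    return int(potential.argmax())

# This point is nearest to prototype 0 but is captured by the heavier,
# more distant prototype 1, so the induced fields need not be convex.
print(classify(np.array([1.5, 0.0])))
```

The possibility of a heavier prototype's field reaching past a lighter one's is exactly the kind of non-convexity the abstract presents as a departure from Gärdenfors' original definition.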
This paper investigates the seeming incompatibility of reductionism and non-reductionism in the context of the complexity sciences. I review algorithmic information theory for this purpose. I offer two physical metaphors to form a better understanding of algorithmic complexity, and I briefly discuss its advantages, shortcomings and applications. Then, I revisit the non-reductionist approaches in the philosophy of mind, which are often arguments from ignorance meant to counter physicalism. A new approach called mild non-reductionism is proposed, which reconciles the necessity of acknowledging the irreducibility found in complex systems with the maintenance of physicalism.
The cognition of quantum processes raises a series of questions about ordering and information connecting the states of one and the same system before and after measurement. Quantum measurement, quantum invariance and the non-locality of quantum information are considered in the paper from an epistemological viewpoint. The adequate generalization of ‘measurement’ is discussed so as to involve the discrepancy, due to the fundamental Planck constant, between any quantum coherent state and its statistical representation as a statistical ensemble after measurement. Quantum invariance designates the relation of any quantum coherent state to the corresponding statistical ensemble of measured results. A set-theoretic corollary is the curious invariance with respect to the axiom of choice: any coherent state excludes any well-ordering and thus also excludes the axiom of choice. However, the above equivalence requires the coherent state to be equated to a well-ordered set after measurement, and thus requires the axiom of choice in order to be obtained. Quantum invariance underlies quantum information and reveals it as the relation of an unordered quantum “much” (i.e. a coherent state) and a well-ordered “many” of measured results (i.e. a statistical ensemble). It opens up a new horizon, in which all physical processes and phenomena can be interpreted as quantum computations realizing relevant operations and algorithms on quantum information. All phenomena of entanglement can be described in terms of the so-defined quantum information. Quantum invariance elucidates the link between general relativity and quantum mechanics and thus the problem of quantum gravity. The non-locality of quantum information unifies the exact position of any space-time point of a smooth trajectory and the common possibility of all space-time points due to a quantum leap. This is deduced from quantum invariance. Epistemology involves the relation of ordering, and thus a generalized kind of information, the quantum one, to explain the special features of cognition in quantum mechanics.
Let f(1)=2, f(2)=4, and let f(n+1)=f(n)! for every integer n≥2. Edmund Landau's conjecture states that the set P(n^2+1) of primes of the form n^2+1 is infinite. Landau's conjecture implies the following unproven statement Φ: card(P(n^2+1))<ω ⇒ P(n^2+1)⊆[2,f(7)]. Let B denote the system of equations: {x_i!=x_k: i,k∈{1,...,9}}∪{x_i⋅x_j=x_k: i,j,k∈{1,...,9}}. The system of equations {x_1!=x_1, x_1⋅x_1=x_2, x_2!=x_3, x_3!=x_4, x_4!=x_5, x_5!=x_6, x_6!=x_7, x_7!=x_8, x_8!=x_9} has exactly two solutions in positive integers x_1,...,x_9, namely (1,...,1) and (f(1),...,f(9)). No known system S⊆B with a finite number of solutions in positive integers x_1,...,x_9 has a solution (x_1,...,x_9)∈(N\{0})^9 satisfying max(x_1,...,x_9)>f(9). For every known system S⊆B, if the finiteness/infiniteness of the set {(x_1,...,x_9)∈(N\{0})^9: (x_1,...,x_9) solves S} is unknown, then the statement ∃ x_1,...,x_9∈N\{0} ((x_1,...,x_9) solves S)∧(max(x_1,...,x_9)>f(9)) remains unproven. Let Λ denote the statement: if the system of equations {x_2!=x_3, x_3!=x_4, x_5!=x_6, x_8!=x_9, x_1⋅x_1=x_2, x_3⋅x_5=x_6, x_4⋅x_8=x_9, x_5⋅x_7=x_8} has at most finitely many solutions in positive integers x_1,...,x_9, then each such solution (x_1,...,x_9) satisfies x_1,...,x_9≤f(9). The statement Λ is equivalent to the statement Φ. It heuristically justifies the statement Φ. This justification does not yield the finiteness/infiniteness of P(n^2+1). We present a new heuristic argument for the infiniteness of P(n^2+1), which is not based on the statement Φ. Algorithms always terminate. We explain the distinction between existing algorithms (i.e. algorithms whose existence is provable in ZFC) and known algorithms (i.e. algorithms whose definition is constructive and currently known). Assuming that the infiniteness of a set X⊆N is false or unproven, we define which elements of X are classified as known. No known set X⊆N satisfies Conditions (1)-(4) and is widely known in number theory or naturally defined, where this term has only an informal meaning. *** (1) A known algorithm with no input returns an integer n satisfying card(X)<ω ⇒ X⊆(-∞,n]. (2) A known algorithm decides, for every k∈N, whether or not k∈X. (3) No known algorithm with no input returns the logical value of the statement card(X)=ω. (4) There are many elements of X and it is conjectured, though so far unproven, that X is infinite. (5) X is naturally defined. The infiniteness of X is false or unproven. X has the simplest definition among known sets Y⊆N with the same set of known elements. *** Conditions (2)-(5) hold for X=P(n^2+1). The statement Φ implies Condition (1) for X=P(n^2+1). The set X={n∈N: the interval [-1,n] contains more than 29.5+(11!/(3n+1))⋅sin(n) primes of the form k!+1} satisfies Conditions (1)-(5) except the requirement that X is naturally defined. 501893∈X. Condition (1) holds with n=501893. card(X∩[0,501893])=159827. X∩[501894,∞)={n∈N: the interval [-1,n] contains at least 30 primes of the form k!+1}. We present a table that shows the satisfiable conjunctions of the form #(Condition 1) ∧ (Condition 2) ∧ #(Condition 3) ∧ (Condition 4) ∧ #(Condition 5), where # denotes the negation ¬ or the absence of any symbol. No set X⊆N will satisfy Conditions (1)-(4) forever, if for every algorithm with no input, at some future day, a computer will be able to execute this algorithm in 1 second or less. The physical limits of computation disprove this assumption.
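Two of the computational ingredients above can be made concrete in a short sketch: the tower function f, and a known algorithm in the sense of Condition (2) that decides membership in P(n^2+1). (The values f(5), f(6), ... are far too large to write out, which is why a statement like Φ cannot be checked by brute force.)

```python
# Sketch: the tower function f and a decision procedure for P(n^2+1),
# the set of primes of the form n^2+1.
from math import factorial, isqrt

def f(n):
    """f(1)=2, f(2)=4, f(n+1)=f(n)! for n>=2. Values explode past f(4)=24!."""
    if n == 1:
        return 2
    v = 4
    for _ in range(n - 2):
        v = factorial(v)
    return v

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, isqrt(k) + 1))

def in_P(k):
    """Decide k ∈ P(n^2+1): k must be prime and k-1 a perfect square."""
    if k < 2:
        return False
    r = isqrt(k - 1)
    return r * r == k - 1 and is_prime(k)

print([n * n + 1 for n in range(1, 60) if in_P(n * n + 1)][:10])
print(f(3), f(4))   # 24 and 24!; f(5) onward cannot be written out explicitly
```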
The aim of this paper is to comprehensively question the validity of the standard way of interpreting Chaitin's famous incompleteness theorem, which says that for every formalized theory of arithmetic there is a finite constant c such that the theory in question cannot prove any particular number to have Kolmogorov complexity larger than c. The received interpretation of the theorem claims that the limiting constant is determined by the complexity of the theory itself, which is assumed to be a good measure of the strength of the theory. I exhibit certain strong counterexamples and establish conclusively that the received view is false. Moreover, I show that the limiting constants provided by the theorem do not in any way reflect the power of formalized theories, but that the values of these constants are actually determined by the chosen coding of Turing machines, and are thus quite accidental.
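For reference, the theorem under discussion can be put as follows (a standard formulation; the exact soundness and consistency assumptions vary across presentations): for a consistently axiomatized, sufficiently strong theory $T$ of arithmetic there is a constant $c_T$ such that

$$T \nvdash K(w) > c_T \quad \text{for every string } w,$$

even though $K(w) > c_T$ in fact holds for all but finitely many $w$. The dispute described above concerns what, if anything, the value of $c_T$ tracks about $T$.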
Informational theories of semantic content have been recently gaining prominence in the debate on the notion of mental representation. In this paper we examine new-wave informational theories which have a special focus on cognitive science. In particular, we argue that these theories face four important difficulties: they do not fully solve the problem of error, fall prey to the wrong distality attribution problem, have serious difficulties accounting for ambiguous and redundant representations and fail to deliver a metasemantic theory of representation. Furthermore, we argue that these difficulties derive from their exclusive reliance on the notion of information, so we suggest that pure informational accounts should be complemented with functional approaches.
Integrated Information Theory (IIT) identifies consciousness with having a maximum amount of integrated information. But a thing’s having the maximum amount of anything cannot be intrinsic to it, for that depends on how that thing compares to certain other things. IIT’s consciousness, then, is not intrinsic. A mereological argument elaborates this consequence: IIT implies that one physical system can be conscious while a physical duplicate of it is not conscious. Thus, by a common and reasonable conception of intrinsicality, IIT’s consciousness is not intrinsic. It is then argued that to avoid the implication that consciousness is not intrinsic, IIT must abandon its Exclusion Postulate, which prohibits overlapping conscious systems. Indeed, theories of consciousness that attribute consciousness to physical systems should embrace the view that some conscious systems overlap. A discussion of the admittedly counterintuitive nature of this solution, along with some medical and neuroscientific realities that would seem to support it, is included.
The Integrated Information Theory is a leading scientific theory of consciousness, which implies a kind of panpsychism. In this paper, I consider whether IIT is compatible with a particular kind of panpsychism, known as Russellian panpsychism, which purports to avoid the main problems of both physicalism and dualism. I will first show that if IIT were compatible with Russellian panpsychism, it would contribute to solving Russellian panpsychism’s combination problem, which threatens to show that the view does not avoid the main problems of physicalism and dualism after all. I then show that the theories are not compatible as they currently stand, in view of what I call the coarse-graining problem. After I explain the coarse-graining problem, I will offer two possible solutions, each involving a small modification of IIT. Given either of these modifications, IIT and Russellian panpsychism may be fully compatible after all, and jointly enable significant progress on the mind–body problem.
The causal and simulation theories are often presented as very distinct views about declarative memory, their major difference lying in the causal condition. The causal theory states that remembering involves an accurate representation causally connected to an earlier experience. In the simulation theory, remembering involves an accurate representation generated by a reliable memory process. I investigate how to construe detailed versions of these theories that correctly classify memory errors as misremembering or confabulation. Neither causalists nor simulationists have paid attention to memory-conjunction errors, which is unfortunate because both theories have problems with these cases. The source of the difficulty is the background assumption that an act of remembering has one target. I fix these theories for those cases. The resulting versions are closely related when implemented using the tools of information theory, differing only on how memory transmits information about the past. The implementation provides us with insights about the distinction between confabulatory and non-confabulatory memory, where memory-conjunction errors have a privileged position.
The mind-body problem is analyzed in a physicalist perspective. By combining the concepts of emergence and algorithmic information theory in a thought experiment employing a basic nonlinear process, it is shown that epistemically strongly emergent properties may develop in a physical system. Turning to the significantly more complex neural network of the brain, it is subsequently argued that consciousness is epistemically emergent. Thus a reductionist understanding of consciousness appears not to be possible; the mind-body problem does not have a reductionist solution. The ontologically emergent character of consciousness is then identified from a combinatorial analysis relating to universal limits set by quantum mechanics, implying that consciousness is fundamentally irreducible to low-level phenomena.
In this essay we discuss recent attempts to analyse the notion of representation, as it is employed in cognitive science, in purely informational terms. In particular, we argue that recent informational theories cannot accommodate the existence of metarepresentations. Since metarepresentations play a central role in the explanation of many cognitive abilities, this is a serious shortcoming of these proposals.
Backtracking counterfactuals are problem cases for the standard, similarity-based theories of counterfactuals, e.g., Lewis's. These theories usually need to employ extra assumptions to deal with those cases. Hiddleston (2005, 632–657) proposes a causal theory of counterfactuals that, supposedly, deals well with backtracking. The main advantage of the causal theory is that it provides a unified account of backtracking and non-backtracking counterfactuals. In this paper, I present a backtracking counterfactual that is a problem case for Hiddleston’s account. Then I propose an informational theory of counterfactuals, which deals well with this problem case while maintaining the main advantage of Hiddleston’s account. In addition, the informational theory offers a general theory of backtracking that provides clues for the semantics and epistemology of counterfactuals. I propose that backtracking is reasonable when the state of affairs expressed in the antecedent of a counterfactual transmits less information about an event in the past than the actual state of affairs does.
In 1948, Claude Shannon introduced his version of a concept that was core to Norbert Wiener's cybernetics, namely, information theory. Shannon's formalisms include a physical framework, namely a general communication system having six unique elements. Under this framework, Shannon information theory offers two particularly useful statistics, channel capacity and information transmitted. Remarkably, hundreds of neuroscience laboratories subsequently reported such numbers. But how (and why) did neuroscientists adapt a communications-engineering framework? Surprisingly, the literature offers no clear answers. Therefore, to first answer "how", 115 authoritative peer-reviewed papers, proceedings, books and book chapters were scrutinized for neuroscientists' characterizations of the elements of Shannon's general communication system. Evidently, many neuroscientists attempted no identification of the system's elements. Others identified only a few of Shannon's system's elements. Indeed, the available neuroscience interpretations show a stunning incoherence, both within and across studies. The interpretational gamut implies hundreds, perhaps thousands, of different possible neuronal versions of Shannon's general communication system. The obvious lack of a definitive, credible interpretation makes neuroscience calculations of channel capacity and information transmitted meaningless. To now answer why Shannon's system was ever adapted for neuroscience, three common features of the neuroscience literature were examined: ignorance of the role of the observer, the presumption of "decoding" of neuronal voltage-spike trains, and the pursuit of ingrained analogies such as information, computation, and machine. Each of these factors facilitated a plethora of interpretations of Shannon's system elements. Finally, let us not ignore the impact of these "informational misadventures" on society at large. It is the same impact as that of scientific fraud.
As historically acknowledged in the Reasoning about Actions and Change community, the intuitiveness of a logical domain description cannot be fully automated. Moreover, like any other logical theory, action theories may also evolve, and thus knowledge engineers need revision methods to help in accommodating new incoming information about the behavior of actions in an adequate manner. The present work is about changing action domain descriptions in multimodal logic. Its contribution is threefold: first we revisit the semantics of action theory contraction proposed in previous work, giving more robust operators that express minimal change based on a notion of distance between Kripke models. Second we give algorithms for syntactical action theory contraction and establish their correctness with respect to our semantics for those action theories that satisfy a principle of modularity investigated in previous work. Since modularity can be ensured for every action theory and, as we show here, needs to be computed at most once during the evolution of a domain description, it does not represent a limitation at all to the method studied here. Finally we state AGM-like postulates for action theory contraction and assess the behavior of our operators with respect to them. Moreover, we also address the revision counterpart of action theory change, showing that it benefits from our semantics for contraction.
The paper explicates the stages of the author’s philosophical evolution in the light of Kopnin’s ideas and heritage. Starting from Kopnin’s understanding of dialectical materialism, the author argues that the categorial transformations of physics have moved from the conceptualization of immutability to mutability, and then to interaction, evolvement and emergence. He has connected the problem of the universals of physical cognition with the elaboration of a specific system of tools and methods for identifying, individuating and distinguishing objects from a scientific theory's domain. The role of the vacuum conception and of the idea of types of existence (actual and potential, observable and nonobservable, virtual and hidden) were analyzed. In collaboration with S. Crymski, the heuristic and regulative functions of the categories of substance and of the world as a whole, as well as the postulates of relativity and absoluteness and the anthropic and self-development principles, were singled out. Elaborating Kopnin’s view of scientific theories as practically effective and relatively true mappings of their domains, the author, in collaboration with M. Burgin, has originated the unified structure-nominative reconstruction (model) of a scientific theory as a knowledge system. According to it, every scientific knowledge system includes hierarchically organized and complex subsystems that have been studied partially and separately by the standard, structuralist, operationalist, problem-solving, axiological and other directions of the current philosophy of science. 1) The logico-linguistic subsystem represents and normalizes, by means of different languages (including mathematical ones) and logical calculi, the knowledge available on the objects under study. 2) The model-representing subsystem comprises the ways of modeling and understanding those objects that are peculiar to the knowledge system. 3) The pragmatic-procedural subsystem contains operations, methods, procedures, algorithms and programs, both general and unique to the knowledge system. 4) From the viewpoint of the problem-heuristic subsystem, the knowledge system is a unique way of setting and resolving questions, problems, puzzles and tasks of cognition of the objects in question. It also includes various heuristics and estimations (truth, consistency, beauty, efficacy, adequacy, heuristicity, etc.) of the components and structures of the knowledge system. 5) The subsystem of links fixes the interrelations between the above-mentioned components, structures and subsystems of the knowledge system. The structure-nominative reconstruction has been used in philosophical and comparative case studies of mathematical, physical, economic, legal, political, pedagogical, social, and sociological theories. It has enlarged the collection of knowledge structures, connected, for instance, with a multitude of levels of theoreticity and with the application of numerous mathematical languages. It has deepened the comprehension of the relations between the main directions of current philosophy of science, which are interpreted as dealing mainly with isolated subsystems of scientific theory. This reconstruction has disclosed a variety of undetected knowledge structures, associated also, for instance, with principles of symmetry and supersymmetry and with laws of various levels and degrees. In cooperation with the physicist Olexander Gabovich, a modified structure-nominative reconstruction is in the process of development and justification. Ideas and concepts were also at the center of Kopnin’s cognitive activity.
The author has suggested and elaborated the triplet model of concepts. According to it, any scientific concept is a dynamical, multifunctional state of the scientist's thinking that depends on the cognitive situation and the available knowledge system. A concept is modeled as consisting of three interrelated structures. 1) The concept base characterizes the objects falling under the concept as well as their properties and relations. In terms of volume and content, logical modeling reveals only the concept base, and only partially. 2) The concept-representing part includes the structures and means (names, statements, abstract properties, quantitative values of object properties and relations, mathematical equations and their systems, theoretical models, etc.) of object representation in the appropriate knowledge system. 3) The linkage unites the structures and procedures that connect components of the above-mentioned structures. Partial cases of the triplet model are the logical, information, two-tiered, standard, exemplar, prototype, knowledge-dependent and other concept models. A triplet classification has been introduced that comprises several hundred concept types. Different kinds of fuzziness are distinguished: even the most precise and exact concepts are fuzzy in some triplet aspect. The notions of relations between real scientific concepts are essentially extended; for example, definitions and strict analyses of such relations between concepts as formalization, quantification, mathematization, generalization, fuzzification, and various kinds of identity are proposed. The concepts «PLANET» and «ELEMENTARY PARTICLE» and some of their metamorphoses were analyzed in triplet terms. Kopnin’s methodology and epistemology of cognition was used to create a conception of the philosophy of law as the elaboration of the understanding, justification, estimation and criticism of a legal system. The basic information on the major directions in current Western philosophy of law (legal realism, feminism, criticism, postmodernism, economic analysis of law, etc.) is introduced to the Ukrainian audience for the first time. A classification of more than fifty directions in modern legal philosophy is suggested. Some results of historical, linguistic, scientometric and philosophic-legal studies of the present state of Ukrainian academic science are given.
How do conventions of communication emerge? How do sounds or gestures take on a semantic meaning, and how do pragmatic conventions emerge regarding the passing of adequate, reliable, and relevant information? My colleagues and I have attempted in earlier work to extend spatialized game theory to questions of semantics. Agent-based simulations indicate that simple signaling systems emerge fairly naturally on the basis of individual information maximization in environments of wandering food sources and predators. Simple signaling emerges by means of any of various forms of updating on the behavior of immediate neighbors: imitation, localized genetic algorithms, and partial training in neural nets. Here the goal is to apply similar techniques to questions of pragmatics. The motivating idea is the same: the idea that important aspects of pragmatics, like important aspects of semantics, may fall out as a natural result of information maximization in informational networks. The attempt below is to simulate fundamental elements of the Gricean picture: in particular, to show within networks of very simple agents the emergence of behavior in accord with the Gricean maxims. What these simulations suggest is that important features of pragmatics, like important aspects of semantics, don't have to be added to a theory of informational networks. They come for free.
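For readers who want a concrete picture, here is a minimal, non-spatial stand-in for the kind of simulation described above. The paper's models use lattices of agents updated by imitation, localized genetic algorithms, or neural-net training; the sketch below replaces all of that with a single sender-receiver pair learning a two-state Lewis signaling game by simple reinforcement, which is enough to show a signaling convention crystallizing out of repeated success.

```python
# Minimal Lewis signaling game learned by simple reinforcement
# (a common simplification; not the spatialized models of the paper).
import random

random.seed(1)
states = [0, 1]
send_w = [[1.0, 1.0] for _ in states]    # sender weights: state -> signal
recv_w = [[1.0, 1.0] for _ in states]    # receiver weights: signal -> act

def draw(weights):
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(20000):
    s = random.choice(states)            # nature picks a state
    m = draw(send_w[s])                  # sender picks a signal
    a = draw(recv_w[m])                  # receiver picks an act
    if a == s:                           # success: reinforce the used pair
        send_w[s][m] += 1.0
        recv_w[m][a] += 1.0

conventionality = sum(max(w) / sum(w) for w in send_w) / len(send_w)
print(f"sender signal use is ~{conventionality:.0%} conventionalized")
```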
Richard Dawkins has popularized an argument that he thinks sound for showing that there is almost certainly no God. It rests on the assumptions (1) that complex and statistically improbable things are more difficult to explain than those that are not and (2) that an explanatory mechanism must show how this complexity can be built up from simpler means. But what justifies claims about the designer’s own complexity? One comes to a different understanding of order and of simplicity when one considers the psychological counterpart of information. In assessing his treatment of biological organisms as either self-programmed machines or algorithms, I show how self-generated organized complexity does not fit well with our knowledge of abduction and of information theory as applied to genetics. I also review some philosophical proposals for explaining how the complexity of the world could be externally controlled if one wanted to uphold a traditional understanding of divine simplicity.
In the first instance, IIT is formulated as a theory of the physical basis of the 'degree' or ‘level’ or ‘amount’ of consciousness in a system. I raise a series of questions about the central explanatory target, the 'degree' or ‘level’ or ‘amount’ of consciousness. I suggest it is not at all clear what scientists and philosophers are talking about when they talk about consciousness as gradable. This point is developed in more detail in my paper "What Is the Integrated Information Theory of Consciousness?", Journal of Consciousness Studies 26 (1–2), 2019.
In Cybernetics (1961 Edition), Professor Norbert Wiener noted that “The role of information and the technique of measuring and transmitting information constitute a whole discipline for the engineer, for the neuroscientist, for the psychologist, and for the sociologist”. Sociology aside, the neuroscientists and the psychologists inferred “information transmitted” using the discrete summations from Shannon Information Theory. The present author has since scrutinized the psychologists’ approach in depth, and found it wrong. The neuroscientists’ approach is highly related, but remains unexamined. Neuroscientists quantified “the ability of [physiological sensory] receptors (or other signal-processing elements) to transmit information about stimulus parameters”. Such parameters could vary along a single continuum (e.g., intensity), or along multiple dimensions that altogether provide a Gestalt – such as a face. Here, unprecedented scrutiny is given to how 23 neuroscience papers computed “information transmitted” in terms of stimulus parameters and the evoked neuronal spikes. The computations relied upon Shannon’s “confusion matrix”, which quantifies the fidelity of a “general communication system”. Shannon’s matrix is square, with the same labels for columns and for rows. Nonetheless, neuroscientists labelled the columns by “stimulus category” and the rows by “spike-count category”. The resulting “information transmitted” is spurious, unless the evoked spike-counts are worked backwards to infer the hypothetical evoking stimuli. The latter task is probabilistic and, regardless, requires that the confusion matrix be square. Was it? For these 23 significant papers, the answer is No.
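For concreteness, the calculation at issue looks like this when applied to a hypothetical contingency table with stimulus categories as columns and spike-count categories as rows. The counts below are invented; the paper's argument is that, without working the spike counts back to inferred stimuli, the resulting number does not describe a Shannon communication system, however routinely it was reported.

```python
# Mutual information ("information transmitted") computed from a hypothetical
# stimulus-by-spike-count contingency table, as in the surveyed papers.
import numpy as np

counts = np.array([[30,  5,  2],      # rows: spike-count categories
                   [ 8, 25,  6],      # columns: stimulus categories
                   [ 2, 10, 32]], dtype=float)

joint = counts / counts.sum()
p_row = joint.sum(axis=1, keepdims=True)
p_col = joint.sum(axis=0, keepdims=True)
nz = joint > 0
mi_bits = float((joint[nz] * np.log2(joint[nz] / (p_row @ p_col)[nz])).sum())
print(f"'information transmitted' = {mi_bits:.3f} bits per trial")
```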
Contemporary philosophy and theoretical psychology are dominated by an acceptance of content-externalism: the view that the contents of one's mental states are constitutively, as opposed to causally, dependent on facts about the external world. In the present work, it is shown that content-externalism involves a failure to distinguish between semantics and pre-semantics---between, on the one hand, the literal meanings of expressions and, on the other hand, the information that one must exploit in order to ascertain their literal meanings. It is further shown that, given the falsity of content-externalism, the falsity of the Computational Theory of Mind (CTM) follows. It is also shown that CTM involves a misunderstanding of terms such as "computation," "syntax," "algorithm," and "formal truth." Novel analyses of the concepts expressed by these terms are put forth. These analyses yield clear, intuition-friendly, and extensionally correct answers to the questions "what are propositions?", "what is it for a proposition to be true?", and "what are the logical and psychological differences between conceptual (propositional) and non-conceptual (non-propositional) content?" Naively taking literal meaning to be in lockstep with cognitive content, Burge, Salmon, Falvey, and other semantic externalists have wrongly taken Kripke's correct semantic views to justify drastic and otherwise contraindicated revisions of commonsense. (Salmon: What is non-existent exists; at a given time, one can rationally accept a proposition and its negation. Burge: Somebody who is having a thought may be psychologically indistinguishable from somebody who is thinking nothing. Falvey: somebody who rightly believes himself to be thinking about water is psychologically indistinguishable from somebody who wrongly thinks himself to be doing so and who, indeed, isn't thinking about anything.) Given a few truisms concerning the differences between thought-borne and sentence-borne information, the data is easily modeled without conceding any legitimacy to any one of these rationality-dismantling atrocities. (It thus turns out, ironically, that no one has done more to undermine Kripke's correct semantic points than Kripke's own followers!).
Integrated Information Theory (IIT) is one of the most influential theories of consciousness, mainly due to its claim of mathematically formalizing consciousness in a measurable way. However, the theory, as it is formulated, does not account for contextual observations that are crucial for understanding consciousness. Here we put forth three possible difficulties for its current version, which could be interpreted as a trilemma. Either consciousness is contextual or not. If contextual, either IIT needs revisions to its axioms to include contextuality, or it is inconsistent. If consciousness is not contextual, then IIT faces an empirical challenge. Therefore, we argue that IIT in its current version is inadequate.
The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book of Hutter (2005), an exciting, sound and complete mathematical model for a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis of Legg (2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in the JAIR paper by Veness et al. (2011). This practical breakthrough has resulted in some impressive applications, finally muting earlier critique that UAI is only a theory. For the first time, without providing any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being provided the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
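For reference, the AIXI agent mentioned above is standardly defined by an expectimax expression of roughly the following form (a sketch of the usual presentation; see Hutter 2005 for the precise conditions on the horizon $m$ and the chronological universal Turing machine $U$):

$$a_k := \arg\max_{a_k}\sum_{o_k r_k}\ \cdots\ \max_{a_m}\sum_{o_m r_m}\ \bigl[r_k+\cdots+r_m\bigr] \sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

that is, at cycle $k$ the agent picks the action maximizing total reward up to the horizon, with environments weighted by the universal prior $2^{-\ell(q)}$ over programs $q$ consistent with the interaction history.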
I argue for patternism, a new answer to the question of when some objects compose a whole. None of the standard principles of composition comfortably capture our natural judgments, such as that my cat exists and my table exists, but there is nothing wholly composed of them. Patternism holds, very roughly, that some things compose a whole whenever together they form a “real pattern”. Plausibly we are inclined to acknowledge the existence of my cat and my table but not of their fusion, because the first two have a kind of internal organizational coherence that their putative fusion lacks. Kolmogorov complexity theory supplies the needed rigorous sense of “internal organizational coherence”.
We propose that measures of information integration can be more straightforwardly interpreted as measures of agency rather than of consciousness. This may be useful to the goals of consciousness research, given how agency and consciousness are “duals” in many (although not all) respects.
This paper investigates the degree to which information theory, and the derived uses that make it work as a metaphor of our age, can be helpful in thinking about God’s immanence and transcendence. We ask when it is possible to say that a consciousness has to be behind the information we encounter. If God is to be thought about as a communicator of information, we need to ask whether a communication system has to pre-exist the divine and impose itself on God. If we want God to be Creator, and not someone who would work like a human being, ‘creating’ will mean sustaining in being the channel and the material system as much as the message. Is information control? It seems that God’s actions are not going to be informational control of everything. To clarify the issue, we attempt to distinguish two kinds of ‘genialities’ in nature, as a way to evaluate the likelihood of God from nature. We investigate concepts and images of God, in terms of the history of ideas but also in terms of philosophical theology, metaphysics, and religious ontology.
In the first instance, Integrated Information Theory (IIT) is formulated as a theory of the physical basis of the ‘degree’ or ‘level’ or ‘amount’ of consciousness in a system. In addition, integrated information theorists have tried to provide a systematic theory of how physical states determine the specific qualitative contents of episodes of consciousness: for instance, an experience as of a red and round thing rather than a green and square thing. I raise a series of questions about the central explanatory target, the ‘degree’ or ‘level’ or ‘amount’ of consciousness. I suggest it is not at all clear what scientists and philosophers are talking about when they talk about consciousness as gradable. I also raise some questions about the explanation of qualitative content.
Information Theory, Evolution and The Origin of Life: The Origin and Evolution of Life as a Digital Message: How Life Resembles a Computer, Second Edition. Hubert P. Yockey, 2005, Cambridge University Press, Cambridge: 400 pages, index; hardcover, US $60.00; ISBN: 0-521-80293-8. The reason that there are principles of biology that cannot be derived from the laws of physics and chemistry lies simply in the fact that the genetic information content of the genome for constructing even the simplest organisms is much larger than the information content of these laws (Yockey in his previous book, 1992, 335). In this new book, Information Theory, Evolution and The Origin of Life, Hubert Yockey points out that the digital, segregated, and linear character of the genetic information system has a fundamental significance. If inheritance blended rather than segregated, Darwinian evolution would not occur. If inheritance were analog instead of digital, evolution would also be impossible, because it would be impossible to remove the effect of noise. In this way, life is guided by information, and so information is a central concept in molecular biology. The author presents a picture of how the main concepts of the genetic code were developed. He was able to show that, despite Francis Crick's belief that the Central Dogma is only a hypothesis, the Central Dogma is a mathematical consequence of the redundant nature of the genetic code. The redundancy arises from the fact that the DNA and mRNA alphabet is formed by triplets of 4 nucleotides, so the number of letters (triplets) is 64, whereas the proteome alphabet has only 20 letters (20 amino acids), and so the translation from the larger alphabet to the smaller one is necessarily redundant. Except for Tryptophan and Methionine, all amino acids are coded by more than one triplet; therefore, it is undecidable which source code letter was actually sent from mRNA. This proof has a corollary stating that there are no such mathematical constraints for protein-protein communication. With this clarification, Yockey contributes to diminishing the widespread confusion related to such a central concept as the Central Dogma. Thus the Central Dogma prohibits the origin of life "proteins first." Proteins cannot be generated by "self-organization." Understanding this property of the Central Dogma will have a serious impact on research on the origin of life.
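The arithmetic behind Yockey's redundancy argument can be checked directly from the standard genetic code (NCBI translation table 1). The sketch below builds the 64-codon table, groups codons by amino acid, and confirms that only Methionine and Tryptophan are coded unambiguously, so the codon-to-amino-acid map has no well-defined inverse. The code is an illustration, not material from the book.

```python
# Sketch of the arithmetic behind Yockey's point: the standard genetic code maps
# 64 codons onto 20 amino acids (plus stop), so codon -> amino acid is many-to-one
# and reverse translation is not a function. Uses the standard code table
# (NCBI translation table 1), bases in the order T, C, A, G.
from collections import defaultdict

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

codon_to_aa = {
    b1 + b2 + b3: AMINO[16 * i + 4 * j + k]
    for i, b1 in enumerate(BASES)
    for j, b2 in enumerate(BASES)
    for k, b3 in enumerate(BASES)
}

aa_to_codons = defaultdict(list)
for codon, aa in codon_to_aa.items():
    aa_to_codons[aa].append(codon)

print(len(codon_to_aa))                                  # 64 codons
print(len([a for a in aa_to_codons if a != "*"]))        # 20 amino acids
# Only Met and Trp are coded unambiguously; every other amino acid has several codons.
print(sorted(a for a, cs in aa_to_codons.items() if a != "*" and len(cs) == 1))  # ['M', 'W']
```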
It is often said that the best system account (BSA) of laws needs supplementing with a theory of perfectly natural properties. The ‘strength’ and ‘simplicity’ of a system are language-relative, and without a fixed vocabulary it is impossible to compare rival systems. Recently a number of philosophers have attempted to reformulate the BSA in an effort to avoid commitment to natural properties. I assess these proposals and argue that they are problematic as they stand. Nonetheless, I agree with their aim, and show that if simplicity is interpreted as ‘compression’, algorithmic information theory provides a framework for system comparison without the need for natural properties.
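As a toy illustration of "simplicity as compression", the following sketch scores a candidate system by a two-part, MDL-style description length: the compressed statement of the system plus the compressed residue the system leaves unexplained. The "systems" and data are invented, and the scoring is only a caricature of the framework the paper develops.

```python
# Toy MDL-style sketch of "simplicity as compression": a candidate system is scored
# by (compressed length of its statement) + (compressed length of the data's residue
# once the system is applied). The "systems" and data are invented toys.
import zlib, random

# Data produced by a short underlying rule (a seeded PRNG) but with no local
# regularity that a general-purpose compressor can exploit.
data = bytes(random.Random(42).randrange(256) for _ in range(2000))

def cost(system_statement: bytes, residue: bytes) -> int:
    """Two-part code length: statement of the system + data left unexplained by it."""
    return len(zlib.compress(system_statement, 9)) + len(zlib.compress(residue, 9))

# Rival "systems": one states the generating rule (leaving no residue),
# the other states nothing and leaves all the data as residue.
rule_based = cost(b"bytes of random.Random(42), 2000 draws from 0..255", b"")
brute_list = cost(b"", data)

print(rule_based, brute_list)   # the rule-based system has a far shorter total description
```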
The Kolmogorov-Sinai entropy is a fairly exotic mathematical concept which has recently aroused some interest on the philosophers’ part. The most salient trait of this concept is its working as a junction between such diverse ambits as statistical mechanics, information theory, and algorithm theory. In this paper I argue that, in order to understand this very special feature of the Kolmogorov-Sinai entropy, it is essential to reconstruct its genealogy. Somewhat surprisingly, this story takes us as far back as the beginning of celestial mechanics and through some of the most exciting developments of mathematical physics of the 19th century.
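For orientation, the Kolmogorov-Sinai entropy of a map can be estimated numerically from block entropies of a coarse-grained orbit. The sketch below does this for the logistic map at r = 4, whose KS entropy is known to be ln 2 ≈ 0.693 nats per iteration; it is a standard textbook estimate and not part of the paper's historical reconstruction.

```python
# Numerical sketch: estimating the Kolmogorov-Sinai entropy of the logistic map
# x -> 4x(1-x) from block entropies of a binary coarse-graining (0 if x < 0.5, else 1).
# The known value is ln 2 ~ 0.693 nats per iteration.
from math import log
from collections import Counter

def symbolic_orbit(x0=0.3, n=200_000):
    x, symbols = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        symbols.append(0 if x < 0.5 else 1)
    return symbols

def block_entropy(symbols, m):
    """Shannon entropy (nats) of the empirical distribution of length-m blocks."""
    blocks = Counter(tuple(symbols[i:i + m]) for i in range(len(symbols) - m + 1))
    total = sum(blocks.values())
    return -sum(c / total * log(c / total) for c in blocks.values())

s = symbolic_orbit()
for m in (6, 8, 10):
    # Entropy-rate estimate H(m) - H(m-1) approaches the KS entropy as m grows.
    print(m, round(block_entropy(s, m) - block_entropy(s, m - 1), 3))   # ~0.69
```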
The way in which quantum information can unify quantum mechanics (and therefore the standard model) and general relativity is investigated. Quantum information is defined as the generalization of the concept of information to the choice among infinite sets of alternatives; relevantly, the axiom of choice is necessary in general. The unit of quantum information, a qubit, is interpreted as a relevant elementary choice among an infinite set of alternatives, generalizing that of a bit. The invariance to the axiom of choice shared by quantum mechanics is introduced: it constitutes quantum information as the relation of any state unorderable in principle (e.g. any coherent quantum state before measurement) and the same state already well-ordered (e.g. the well-ordered statistical ensemble of the measurement of the quantum system at issue). This allows equating the classical and quantum time correspondingly as the well-ordering of any physical quantity or quantities and their coherent superposition. That equating is interpretable as the isomorphism of Minkowski space and Hilbert space. Quantum information is the structure interpretable in both ways and thus underlying their unification. Its deformation is representable correspondingly as gravitation in the deformed pseudo-Riemannian space of general relativity and as the entanglement of two or more quantum systems. The standard model studies a single quantum system and thus privileges a single reference frame, which turns out to be inertial for the generalized symmetry U(1)×SU(2)×SU(3) “gauging” the standard model. As the standard model refers to a single quantum system, it is necessarily linear, and thus the corresponding privileged reference frame is necessarily inertial. The Higgs mechanism U(1) → U(1)×SU(2), already sufficiently confirmed experimentally, describes exactly the choice of the initial position of a privileged reference frame as the corresponding breaking of the symmetry. The standard model defines ‘mass at rest’ linearly and absolutely, but general relativity does so non-linearly and relatively. The “Big Bang” hypothesis is an additional one, interpreting that position as that of the “Big Bang”. It also serves to reconcile the linear standard model in the singularity of the “Big Bang” with the observed nonlinearity of the further expansion of the universe, which is described very well by general relativity. Quantum information links the standard model and general relativity in another way, by mediation of entanglement. The linearity and absoluteness of the former and the nonlinearity and relativity of the latter can be considered as the relation of a whole and the same whole divided into parts that are in general entangled.
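As background for the abstract's starting point, the following minimal sketch shows the textbook qubit: a state ranging over a continuum of superpositions that nevertheless yields a single classical bit on measurement, with Born-rule probabilities. It illustrates only standard quantum mechanics and none of the paper's claims about the axiom of choice, entanglement, or gravitation.

```python
# Minimal textbook sketch of the qubit the abstract generalizes from: a state
# a|0> + b|1> ranges over a continuum of superpositions, yet a measurement returns
# a single classical bit with Born-rule probabilities |a|^2 and |b|^2.
from math import cos, sin
import cmath, random

def qubit(theta, phi):
    """State cos(theta/2)|0> + e^{i phi} sin(theta/2)|1> on the Bloch sphere."""
    return (cos(theta / 2), cmath.exp(1j * phi) * sin(theta / 2))

def measure(state, rng=random.Random(0), shots=10_000):
    """Simulate repeated measurements in the computational basis; return frequency of 1."""
    _, b = state
    p1 = abs(b) ** 2
    return sum(rng.random() < p1 for _ in range(shots)) / shots

state = qubit(theta=2.0, phi=0.7)        # one point of the continuum of possible states
print(round(abs(state[1]) ** 2, 3))      # Born-rule probability of reading 1
print(measure(state))                    # empirical frequency over 10,000 "measurements"
```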
In this paper, we take a meta-theoretical stance and aim to compare and assess two conceptual frameworks that endeavor to explain phenomenal experience. In particular, we compare Feinberg & Mallatt’s Neurobiological Naturalism (NN) and Tononi and colleagues’ Integrated Information Theory (IIT), given that the former’s authors pointed out some similarities between the two theories (Feinberg & Mallatt 2016c-d). To probe their similarity, we first give a general introduction to both frameworks. Next, we expound a ground plan for carrying out our analysis. We move on to articulate a philosophical profile of NN and IIT, addressing their ontological commitments and epistemological foundations. Finally, we compare the two point-by-point, also discussing how they stand on the issue of artificial consciousness.