According to Marr's theory of vision, computational processes of early vision rely for their success on certain "natural constraints" in the physical environment. I examine the implications of this feature of Marr's theory for the question of whether psychological states supervene on neural states. It is reasonable to hold that Marr's theory is nonindividualistic in that, given the role of natural constraints, distinct computational theories of the same neural processes may be justified in different environments. But to avoid trivializing computational explanations, theories must respect methodological solipsism in the sense that within a theory there cannot be differences in content without a corresponding difference in neural states.
This article aims to develop a new account of scientific explanation for computer simulations. To this end, two questions are answered: what is the explanatory relation for computer simulations? And what kind of epistemic gain should be expected? For several reasons tailored to the benefits and needs of computer simulations, these questions are better answered within the unificationist model of scientific explanation. Unlike previous efforts in the literature, I submit that the explanatory relation is between the simulation model and the results of the simulation. I also argue that our epistemic gain goes beyond the unificationist account, encompassing a practical dimension as well.
Physical Computation is the summation of Piccinini’s work on computation and mechanistic explanation over the past decade. It draws together material from papers published during that time, but also provides additional clarifications and restructuring that make this the definitive presentation of his mechanistic account of physical computation. This review will first give a brief summary of the account that Piccinini defends, followed by a chapter-by-chapter overview of the book, before finally discussing one aspect of the account in more critical detail.
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay; it becomes stronger every year.
Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
This paper argues that the idea of a computer is unique. Calculators and analog computers are not different ideas about computers, and nature does not compute by itself. Computers, once clearly defined in all their terms and mechanisms, rather than enumerated by behavioral examples, can be more than instrumental tools in science, and more than a source of analogies and taxonomies in philosophy. They can help us understand semantic content and its relation to form. This can be achieved because they have the potential to do more than calculators, which are computers that are designed not to learn. Today’s computers are not designed to learn; rather, they are designed to support learning; therefore, any theory of content tested by computers that currently exist must be of an empirical, rather than a formal nature. If they are designed someday to learn, we will see a change in roles, requiring an empirical theory about the Turing architecture’s content, using the primitives of learning machines. This way of thinking, which I call the intensional view of computers, avoids the problems of analogies between minds and computers. It focuses on the constitutive properties of computers, such as showing clearly how they can help us avoid the infinite regress in interpretation, and how we can clarify the terms of the suggested mechanisms to facilitate a useful debate. Within the intensional view, syntax and content in the context of computers become two ends of physically realizing correspondence problems in various domains.
In most accounts of realization of computational processes by physical mechanisms, it is presupposed that there is a one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically. In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
Philosophers of psychology debate, among other things, which psychological models, if any, are (or provide) mechanistic explanations. This should seem a little strange given that there is rough consensus on the following two claims: 1) a mechanism is an organized collection of entities and activities that produces, underlies, or maintains a phenomenon, and 2) a mechanistic explanation describes, represents, or provides information about the mechanism producing, underlying, or maintaining the phenomenon to be explained (i.e. the explanandum phenomenon) (Bechtel and Abrahamsen 2005; Craver 2007). If there is a rough consensus on what mechanisms are and that mechanistic explanations describe, represent, or provide information about them, then how is there no consensus on which psychological models are (or provide) mechanistic explanations? Surely the psychological models that are mechanistic explanations are the models that describe, represent, or provide information about mechanisms. That is true, of course; the trouble arises when determining what exactly that involves. Philosophical disagreement over which psychological models are mechanistic explanations is often disagreement about what it means to describe, represent, or provide information about a mechanism, among other things (Hochstein 2016; Levy 2013). In addition, one's position in this debate depends on a host of other seemingly arcane metaphysical issues, such as the nature of mechanisms, computational and functional properties (Piccinini 2016), and realization (Piccinini and Maley 2014), as well as the relation between models, methodologies, and explanations (Craver 2014; Levy 2013; Zednik 2015).
Although I inevitably advocate a position, my primary aim in this chapter is to spell out all these relationships and canvass the positions that have been taken (or could be taken) with respect to mechanistic explanation in psychology, using dynamical systems models and cognitive models (or functional analyses) as examples.
There is a prevalent notion among cognitive scientists and philosophers of mind that computers are merely formal symbol manipulators, performing the actions they do solely on the basis of the syntactic properties of the symbols they manipulate. This view of computers has allowed some philosophers to divorce semantics from computational explanations. Semantic content, then, becomes something one adds to computational explanations to get psychological explanations. Other philosophers, such as Stephen Stich, have taken a stronger view, advocating doing away with semantics entirely. This paper argues that a correct account of computation requires us to attribute content to computational processes in order to explain which functions are being computed. This entails that computational psychology must countenance mental representations. Since anti-semantic positions are incompatible with computational psychology thus construed, they ought to be rejected. Lastly, I argue that in an important sense, computers are not formal symbol manipulators.
This chapter draws an analogy between computing mechanisms and autopoietic systems, focusing on the non-representational status of both kinds of system (computational and autopoietic). It will be argued that the role played by input and output components in a computing mechanism closely resembles the relationship between an autopoietic system and its environment, and in this sense differs from the classical understanding of inputs and outputs. The analogy helps to make sense of why we should think of computing mechanisms as non-representational, and might also facilitate reconciliation between computational and autopoietic/enactive approaches to the study of cognition.
This work addresses a broad range of questions which belong to four fields: computation theory, general philosophy of science, philosophy of cognitive science, and philosophy of mind. Dynamical system theory provides the framework for a unified treatment of these questions. The main goal of this dissertation is to propose a new view of the aims and methods of cognitive science--the dynamical approach. According to this view, the object of cognitive science is a particular set of dynamical systems, which I call "cognitive systems". The goal of a cognitive study is to specify a dynamical model of a cognitive system, and then use this model to produce a detailed account of the specific cognitive abilities of that system. The dynamical approach does not limit a priori the form of the dynamical models which cognitive science may consider. In particular, this approach is compatible with both computational and connectionist modeling, for both computational systems and connectionist networks are special types of dynamical systems. To substantiate these methodological claims about cognitive science, I deal first with two questions in two different fields: What is a computational system? What is a dynamical explanation of a deterministic process? Intuitively, a computational system is a deterministic system which evolves in discrete time steps, and which can be described in an effective way. In chapter 1, I give a formal definition of this concept which employs the notions of isomorphism between dynamical systems, and of Turing computable function. In chapter 2, I propose a more comprehensive analysis which is based on a natural generalization of the concept of Turing machine. The goal of chapter 3 is to develop a theory of the dynamical explanation of a deterministic process. By a "dynamical explanation" I mean the specification of a dynamical model of the system or process which we want to explain.
I start from the analysis of a specific type of explanandum--dynamical phenomena--and I then use this analysis to shed light on the general form of a dynamical explanation. Finally, I analyze the structure of those theories which generate explanations of this form, namely dynamical theories.
We overview logical and computational explanations of the notion of tractability as applied in cognitive science. We start by introducing the basics of mathematical theories of complexity: computability theory, computational complexity theory, and descriptive complexity theory. Computational philosophy of mind often identifies mental algorithms with computable functions. However, with the development of programming practice it has become apparent that for some computable problems finding effective algorithms is hardly possible. Some problems need too many computational resources, e.g., time or memory, to be practically computable. Computational complexity theory is concerned with the amount of resources required for the execution of algorithms and, hence, the inherent difficulty of computational problems. An important goal of computational complexity theory is to categorize computational problems via complexity classes, and in particular, to identify efficiently solvable problems and draw a line between tractability and intractability. We survey how complexity can be used to study the computational plausibility of cognitive theories. We especially emphasize the methodological and mathematical assumptions behind applying complexity theory in cognitive science. We pay special attention to examples of applying the logical and computational complexity toolbox in different domains of cognitive science. We focus mostly on theoretical and experimental research in psycholinguistics and social cognition.
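The tractability line discussed in this abstract can be made concrete with a toy sketch (a hypothetical illustration, not drawn from the surveyed work): subset sum, a classic NP-complete problem, solved first by exponential brute force over all 2^n subsets and then by a pseudo-polynomial dynamic program running in O(n · target) steps.

```python
from itertools import combinations

def subset_sum_brute(nums, target):
    """Exponential search: inspects all 2^n subsets."""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

def subset_sum_dp(nums, target):
    """Pseudo-polynomial dynamic programming: O(n * target) sums tracked."""
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

nums = [3, 7, 12, 5, 8]
print(subset_sum_brute(nums, 20))  # True (e.g., 12 + 8)
print(subset_sum_dp(nums, 20))     # True
```

Both functions agree on every input, but only the second remains feasible as the list grows, which is the practical sense of "tractable" that complexity classes make precise.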
This volume offers very selected papers from the 2014 conference of the “International Association for Computing and Philosophy” (IACAP) - a conference tradition of 28 years.
Table of Contents
0 Vincent C. Müller: Editorial
1) Philosophy of computing
1 Cem Bozşahin: What is a computational constraint?
2 Joe Dewhurst: Computing Mechanisms and Autopoietic Systems
3 Vincenzo Fano, Pierluigi Graziani, Roberto Macrelli and Gino Tarozzi: Are Gandy Machines really local?
4 Doukas Kapantais: A refutation of the Church-Turing thesis according to some interpretation of what the thesis says
5 Paul Schweizer: In What Sense Does the Brain Compute?
2) Philosophy of computer science & discovery
6 Mark Addis, Peter Sozou, Peter C. R. Lane and Fernand Gobet: Computational Scientific Discovery and Cognitive Science Theories
7 Nicola Angius and Petros Stefaneas: Discovering Empirical Theories of Modular Software Systems. An Algebraic Approach.
8 Selmer Bringsjord, John Licato, Daniel Arista, Naveen Sundar Govindarajulu and Paul Bello: Introducing the Doxastically Centered Approach to Formalizing Relevance Bonds in Conditionals
9 Orly Stettiner: From Silico to Vitro: Computational Models of Complex Biological Systems Reveal Real-world Emergent Phenomena
3) Philosophy of cognition & intelligence
10 Douglas Campbell: Why We Shouldn’t Reason Classically, and the Implications for Artificial Intelligence
11 Stefano Franchi: Cognition as Higher Order Regulation
12 Marcello Guarini: Eliminativisms, Languages of Thought, & the Philosophy of Computational Cognitive Modeling
13 Marcin Miłkowski: A Mechanistic Account of Computational Explanation in Cognitive Science and Computational Neuroscience
14 Alex Tillas: Internal supervision & clustering: A new lesson from ‘old’ findings?
4) Computing & society
15 Vasileios Galanos: Floridi/Flusser: Parallel Lives in Hyper/Posthistory
16 Paul Bello: Machine Ethics and Modal Psychology
17 Marty J. Wolf and Nir Fresco: My Liver Is Broken, Can You Print Me a New One?
18 Marty J. Wolf, Frances Grodzinsky and Keith W. Miller: Robots, Ethics and Software – FOSS vs. Proprietary Licenses.
Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for computing explanatory factors with respect to a given context, and demonstrate its flexibility and competitive performance against state-of-the-art alternatives on various tasks.
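The necessity/sufficiency idea at the heart of this abstract can be illustrated with a toy sketch, loosely in the spirit of Pearl-style probabilities of necessity and sufficiency (the AND model, the uniform context distribution, and all names below are hypothetical, not taken from the article's algorithm):

```python
from itertools import product

def model(x1, x2):
    """Toy classifier: fires only when both features are on."""
    return int(x1 and x2)

# Necessity of x1=1 for output 1: among uniformly drawn inputs where
# x1 = 1 and the output is 1, how often does flipping x1 to 0 flip
# the output to 0?
pn_cases = [(x1, x2) for x1, x2 in product([0, 1], repeat=2)
            if x1 == 1 and model(x1, x2) == 1]
pn = sum(model(0, x2) == 0 for _, x2 in pn_cases) / len(pn_cases)

# Sufficiency of x1=1 for output 1: among inputs where x1 = 0 and the
# output is 0, how often does setting x1 to 1 produce output 1?
ps_cases = [(x1, x2) for x1, x2 in product([0, 1], repeat=2)
            if x1 == 0 and model(x1, x2) == 0]
ps = sum(model(1, x2) == 1 for _, x2 in ps_cases) / len(ps_cases)

print(pn, ps)  # 1.0 0.5 for this AND model
```

For the AND model, x1=1 is fully necessary (the output cannot be 1 without it) but only partially sufficient (it yields output 1 only in contexts where x2 is also 1), which is the asymmetry the formal framework is meant to track.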
In this paper, the role of the environment and physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon’s perspective, the embodied view on cognition seems natural but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that it is notoriously over-appreciated in the current debate. The new mechanistic view on explanation suggests that even if it is critical to situate a mechanism in its environment and study its physical composition, or realization, it is also stressed that not all detail counts, and that some bodily features of cognitive systems should be left out from explanations.
As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state-of-the-art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the impossibility of causal explanations from attention layers over text data. We then introduce NLP researchers to contemporary philosophy of science theories that allow robust yet non-causal reasoning in explanation, giving computer scientists a vocabulary for future research.
A common kind of explanation in cognitive neuroscience might be called function-theoretic: with some target cognitive capacity in view, the theorist hypothesizes that the system computes a well-defined function (in the mathematical sense) and explains how computing this function constitutes (in the system’s normal environment) the exercise of the cognitive capacity. Recently, proponents of the so-called ‘new mechanist’ approach in philosophy of science have argued that a model of a cognitive capacity is explanatory only to the extent that it reveals the causal structure of the mechanism underlying the capacity. If they are right, then a cognitive model that resists a transparent mapping to known neural mechanisms fails to be explanatory. I argue that a function-theoretic characterization of a cognitive capacity can be genuinely explanatory even absent an account of how the capacity is realized in neural hardware.
In this article, after presenting the basic idea of causal accounts of implementation and the problems they are supposed to solve, I sketch the model of computation preferred by Chalmers and argue that it is too limited to do full justice to computational theories in cognitive science. I also argue that it does not suffice to replace Chalmers’ favorite model with a better abstract model of computation; it is necessary to acknowledge the causal structure of physical computers that is not accommodated by the models used in computability theory. Additionally, an alternative mechanistic proposal is outlined.
One particularly successful approach to modeling within cognitive science is computational psychology. Computational psychology explores psychological processes by building and testing computational models with human data. In this paper, it is argued that a specific approach to understanding computation, what is called the ‘narrow conception’, has problematically limited the kinds of models, theories, and explanations that are offered within computational psychology. After raising two problems for the narrow conception, an alternative, ‘wide approach’ to computational psychology is proposed.
Despite their success in describing and predicting cognitive behavior, the plausibility of so-called ‘rational explanations’ is often contested on the grounds of computational intractability. Several cognitive scientists have argued that such intractability is an orthogonal pseudoproblem, however, since rational explanations account for the ‘why’ of cognition but are agnostic about the ‘how’. Their central premise is that humans do not actually perform the rational calculations posited by their models, but only act as if they do. Whether or not the problem of intractability is solved by recourse to ‘as if’ explanations critically depends, inter alia, on the semantics of the ‘as if’ connective. We examine the five most sensible explications in the literature, and conclude that none of them circumvents the problem. As a result, rational ‘as if’ explanations must obey the minimal computational constraint of tractability.
The sensorimotor theory of vision and visual consciousness is often described as a radical alternative to the computational and connectionist orthodoxy in the study of visual perception. However, it is far from clear whether the theory represents a significant departure from orthodox approaches or whether it is an enrichment of them. In this study, I tackle this issue by focusing on the explanatory structure of the sensorimotor theory. I argue that the standard formulation of the theory subscribes to the same theses as the dynamical hypothesis and that it affords covering-law explanations. This however exposes the theory to the mere description worry and generates a puzzle about the role of representations. I then argue that the sensorimotor theory is compatible with a mechanistic framework, and show how this can overcome the mere description worry and solve the problem of the explanatory role of representations. By doing so, it will be shown that the theory should be understood as an enrichment of the orthodoxy, rather than an alternative.
Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
This paper addresses a fundamental line of research in neuroscience: the identification of a putative neural processing core of the cerebral cortex, often claimed to be “canonical”. This “canonical” core would be shared by the entire cortex, and would explain why it is so powerful and diversified in tasks and functions, yet so uniform in architecture. The purpose of this paper is to analyze the search for canonical explanations over the past 40 years, discussing the theoretical frameworks informing this research. It will highlight a bias that, in my opinion, has limited the success of this research project, that of overlooking the dimension of cortical development. The earliest explanation of the cerebral cortex as canonical was attempted by David Marr, deriving putative cortical circuits from general mathematical laws, loosely following a deductive-nomological account. Although Marr’s theory turned out to be incorrect, one of its merits was to have put the issue of cortical circuit development at the top of his agenda. This aspect has been largely neglected in much of the research on canonical models that has followed. Models proposed in the 1980s were conceived as mechanistic. They identified a small number of components that interacted as a basic circuit, with each component defined as a function. More recent models have been presented as idealized canonical computations, distinct from mechanistic explanations, due to the lack of identifiable cortical components. Currently, the entire enterprise of coming up with a single canonical explanation has been criticized as being misguided, and the premise of the uniformity of the cortex has been strongly challenged. This debate is analyzed here. The legacy of the canonical circuit concept is reflected in both positive and negative ways in recent large-scale brain projects, such as the Human Brain Project.
One positive aspect is that these projects might achieve the aim of producing detailed simulations of cortical electrical activity; a negative one is whether they will be able to find ways of simulating how circuits actually develop.
Review of: "Computation, Information, Cognition: The Nexus and the Liminal", Ed. Susan Stuart & Gordana Dodig Crnkovic, Newcastle: Cambridge Scholars Publishing, September 2007, xxiv+340pp, ISBN: 9781847180902, Hardback: £39.99, $79.99 ---- Are you a computer? Is your cat a computer? A single biological cell in your stomach, perhaps? And your desk? You do not think so? Well, the authors of this book suggest that you think again. They propose a computational turn, a turn towards computational explanation and towards the explanation of computation itself. The explanation of computation is the core of the present volume, but the computational turn to regard a wide variety of systems as computational is a potentially very wide-ranging project.
Computer simulations constitute a significant scientific tool for promoting scientific understanding of natural phenomena and dynamic processes. Substantial leaps in computational force and software engineering methodologies now allow the design and development of large-scale biological models, which – when combined with advanced graphics tools – may produce realistic biological scenarios that reveal new scientific explanations and knowledge about real life phenomena. A state-of-the-art simulation system termed Reactive Animation (RA) will serve as a case study to examine the contemporary philosophical debate on the scientific value of simulations, as we demonstrate its ability to form a scientific explanation of natural phenomena and to generate new emergent behaviors, making possible a prediction or hypothesis about the equivalent real-life phenomena.
In the last couple of years, a few seemingly independent debates on scientific explanation have emerged, with several key questions that take different forms in different areas. For example, the questions of what makes an explanation distinctly mathematical and whether there are any non-causal explanations in the sciences (i.e., explanations that don’t cite causes in the explanans) sometimes take the form of the question of what makes mathematical models explanatory, especially whether highly idealized models in science can be explanatory and in virtue of what they are explanatory. These questions raise further issues about counterfactuals, modality, and explanatory asymmetries: i.e., do mathematical and non-causal explanations support counterfactuals, and how ought we to understand explanatory asymmetries in non-causal explanations? Even though these are very common issues in the philosophy of physics and mathematics, they can be found in different guises in the philosophy of biology, where there is the statistical interpretation of the Modern Synthesis theory of evolution, according to which the post-Darwinian theory of natural selection explains evolutionary change by citing statistical properties of populations and not the causes of changes. These questions also arise in the philosophy of ecology or neuroscience in regard to the nature of topological explanations. The question here is whether the mathematical (or, more precisely, topological) properties in network models in biology, ecology, neuroscience, and computer science can be explanatory of physical phenomena, or whether they are just different ways to represent causal structures. The aim of this special issue is to unify all these debates around several overlapping questions.
These questions are: are there genuinely or distinctively mathematical and non-causal explanations? Are all distinctively mathematical explanations also non-causal, and in virtue of what are they explanatory? Does the instantiation, implementation, or, in general, applicability of mathematical structures to a variety of phenomena and systems play any explanatory role? This special issue provides a platform for unifying the debates around several key issues and thus opens up avenues for a better understanding of mathematical and non-causal explanations in general, but it will also enable an even better understanding of key issues within each of the debates.
In this paper, I show how semantic factors constrain the understanding of the computational phenomena to be explained so that they help build better mechanistic models. In particular, understanding what cognitive systems may refer to is important in building better models of cognitive processes. For that purpose, a recent study of some phenomena in rats that are capable of ‘entertaining’ future paths (Pfeiffer and Foster 2013) is analyzed. The case shows that the mechanistic account of physical computation may be complemented with semantic considerations, and in many cases, it actually should.
Replicability and reproducibility of computational models has been somewhat understudied by “the replication movement.” In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using original code and data, and model reproducibility, or independent researchers' ability to recreate a model without original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors' omitting to provide crucial information in scientific papers and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only relevant bits of code.
In my 2013 article, “A New Theory of Free Will”, I argued that several serious hypotheses in philosophy and modern physics jointly entail that our reality is structurally identical to a peer-to-peer (P2P) networked computer simulation. The present paper outlines how quantum phenomena emerge naturally from the computational structure of a P2P simulation. §1 explains the P2P Hypothesis. §2 then sketches how the structure of any P2P simulation realizes quantum superposition and wave-function collapse (§2.1.), quantum indeterminacy (§2.2.), wave-particle duality (§2.3.), and quantum entanglement (§2.4.). Finally, §3 argues that although this is by no means a philosophical proof that our reality is a P2P simulation, it provides ample reasons to investigate the hypothesis further using the methods of computer science, physics, philosophy, and mathematics.
In this paper, I revisit Frege's theory of sense and reference in the constructive setting of the meaning explanations of type theory, extending and sharpening a program–value analysis of sense and reference proposed by Martin-Löf building on previous work of Dummett. I propose a computational identity criterion for senses and argue that it validates what I see as the most plausible interpretation of Frege's equipollence principle for both sentences and singular terms. Before doing so, I examine Frege's implementation of his theory of sense and reference in the logical framework of Grundgesetze, his doctrine of truth values, and views on sameness of sense as equipollence of assertions.
The philosophical conception of mechanistic explanation is grounded on a limited number of canonical examples. These examples provide an overly narrow view of contemporary scientific practice, because they do not reflect the extent to which the heuristic strategies and descriptive practices that contribute to mechanistic explanation have evolved beyond the well-known methods of decomposition, localization, and pictorial representation. Recent examples from evolutionary robotics and network approaches to biology and neuroscience demonstrate the increasingly important role played by computer simulations and mathematical representations in the epistemic practices of mechanism discovery and mechanism description. These examples also indicate that the scope of mechanistic explanation must be re-examined: with new and increasingly powerful methods of discovery and description comes the possibility of describing mechanisms far more complex than traditionally assumed.
A consistent finding in research on conditional reasoning is that individuals are more likely to endorse the valid modus ponens (MP) inference than the equally valid modus tollens (MT) inference. This pattern holds for both abstract and probabilistic tasks. The existing explanation for this phenomenon within a Bayesian framework (e.g., Oaksford & Chater, 2008) accounts for this asymmetry by assuming separate probability distributions for MP and MT. We propose a novel explanation within a computational-level Bayesian account of reasoning according to which “argumentation is learning”. We show that the asymmetry must appear for certain prior probability distributions, under the assumption that the conditional inference provides the agent with new information that is integrated into the existing knowledge by minimizing the Kullback-Leibler divergence between the posterior and prior probability distributions. We also show under which conditions we would expect the opposite pattern, an MT-MP asymmetry.
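The conditional-probability reading of endorsement described above can be made concrete with a toy joint prior (the numbers are invented for illustration; this is a simple conditionalization sketch, not the authors' full model — though for hard evidence, conditionalization coincides with minimizing the KL divergence from the prior):

```python
# Toy illustration (invented numbers, not the authors' model): on a Bayesian
# reading, MP endorsement is P(q | p) and MT endorsement is P(not-p | not-q).

# Joint prior over the truth values of (p, q).
prior = {
    (True, True): 0.50,   # p and q
    (True, False): 0.05,  # exceptions to "if p then q"
    (False, True): 0.25,
    (False, False): 0.20,
}

def cond(joint, event, given):
    """P(event | given) under the joint distribution."""
    num = sum(pr for tv, pr in joint.items() if event(tv) and given(tv))
    den = sum(pr for tv, pr in joint.items() if given(tv))
    return num / den

mp = cond(prior, lambda tv: tv[1], lambda tv: tv[0])          # P(q | p)
mt = cond(prior, lambda tv: not tv[0], lambda tv: not tv[1])  # P(not-p | not-q)
print(f"MP: {mp:.3f}  MT: {mt:.3f}")  # MP ≈ 0.909 > MT = 0.800
```

For this prior the MP-MT asymmetry emerges; shifting probability mass between the cells produces priors on which the pattern reverses, which is the kind of condition the paper characterizes.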
Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper, it is called artificial evil (AE) and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE's concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent's action to be morally good or evil can be determined even in the absence of biologically sentient participants, and thus allows artificial agents not only to perpetrate evil (and for that matter good) but conversely to `receive' or `suffer from' it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner.
The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective (though not vice versa). In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics.
Here is my thesis (and the outline of this paper). Increasingly secret, complex and inscrutable computational systems are being used to intensify existing power relations, and to create new ones (Section II). To be all-things-considered morally permissible, new, or newly intense, power relations must in general meet standards of procedural legitimacy and proper authority (Section III). Legitimacy and authority constitutively depend, in turn, on a publicity requirement: reasonably competent members of the political community in which power is being exercised must be able to determine that power is being exercised legitimately and with proper authority (Section IV). The publicity requirement can be satisfied only if the powerful can explain their decision-making—including the computational tools that they use to support it—to members of their political community. Section V applies these ideas to opaque computational systems. Section VI addresses objections; Section VII concludes.
This essay examines the possibility that phenomenological laws might be implemented by a computational mechanism by carefully analyzing key passages from the Prolegomena to Pure Logic. Part I examines the famous Denkmaschine passage as evidence for the view that intuitions of evidence are causally produced by computational means. Part II connects the less famous criticism of Avenarius & Mach on thought-economy with Husserl's 1891 essay 'On the Logic of Signs (Semiotic).' Husserl is shown to reaffirm his earlier opposition to associationist (Humean) and adaptationist (Darwinian) explanations of our thought-machine on the ground that they cannot reconstruct the notion of truth and its syntactic mental preservation in symbolic thought-trains. Part III reveals Husserl's interesting commitment to the idea that descriptive sciences necessarily transform into explanatory sciences on pain of contradicting the essence of science as a rational ideal. Since explanation in relation to phenomenology means causal explanation, and causal explanation in the form of the Denkmaschine accounts for phenomenological intuition, it is inferred that the rationally compelling ideal of explanatory science requires a computationalist reading of the subsequent logical investigations.
This paper is intended as a critical examination of the question of when and under what conditions the use of computer simulations is beneficial to scientific explanations. This objective is pursued in two steps: First, I try to establish clear criteria that simulations must meet in order to be explanatory. Basically, a simulation has explanatory power only if it includes all causally relevant factors of a given empirical configuration and if the simulation delivers stable results within the measurement inaccuracies of the input parameters.

In the second step, I examine a few examples of Axelrod-style simulations as they have been used to understand the evolution of cooperation (Axelrod, Schüßler). These simulations do not meet the criteria for explanatory validity, and it can be shown, as I believe, that they lead us astray from the scientific problems they were meant to solve.
This is a presentation about the impact of Logic and the Theory of Computation. It starts with some explanation of the Theory of Computation and its relations to other subjects in science. Then we offer some explanation of paradoxes and some historical points. In continuation, we present some of the most important paradoxes. Next, five subjects concerning the relations between Logic and the Theory of Computation are introduced. Finally, we present a new approach to solving the P vs NP problem via paradoxes. (Presentation, 6/20/2022, in Persian; online seminar of the Iranian Association for Logic (IAL).)
Climatology is a paradigmatic complex systems science. Understanding the global climate involves tackling problems in physics, chemistry, economics, and many other disciplines. I argue that complex systems like the global climate are characterized by certain dynamical features that explain how those systems change over time. A complex system's dynamics are shaped by the interaction of many different components operating at many different temporal and spatial scales. Examining the multidisciplinary and holistic methods of climatology can help us better understand the nature of complex systems in general.

Questions surrounding climate science can be divided into three rough categories: foundational, methodological, and evaluative questions. "How do we know that we can trust science?" is a paradigmatic foundational question (and a surprisingly difficult one to answer). Because the global climate is so complex, questions like "what makes a system complex?" also fall into this category. There are a number of existing definitions of 'complexity,' and while all of them capture some aspects of what makes intuitively complex systems distinctive, none is entirely satisfactory. Most existing accounts of complexity have been developed to work with information-theoretic objects (signals, for instance) rather than the physical and social systems studied by scientists.

Dynamical complexity, a concept articulated in detail in the first third of the dissertation, is designed to bridge the gap between the mathematics of contemporary complexity theory (in particular the formalism of "effective complexity" developed by Gell-Mann and Lloyd [2003]) and a more general account of the structure of science generally. Dynamical complexity provides a physical interpretation of the formal tools of mathematical complexity theory, and thus can be used as a framework for thinking about general problems in the philosophy of science, including theories, explanation, and lawhood.
Methodological questions include questions about how climate science constructs its models, on what basis we trust those models, and how we might improve those models. In order to answer questions about climate modeling, it's important to understand what climate models look like and how they are constructed. Climate model families are significantly more diverse than are the model families of most other sciences (even sciences that study other complex systems). Existing climate models range from basic models that can be solved on paper to staggeringly complicated models that can only be analyzed using the most advanced supercomputers in the world. I introduce some of the central concepts in climatology by demonstrating how one of the most basic climate models might be constructed. I begin with the assumption that the Earth is a simple featureless blackbody which receives energy from the sun and releases it into space, and show how to model that assumption formally. I then gradually add other factors (e.g. albedo and the greenhouse effect) to the model, and show how each addition brings the model's prediction closer to agreement with observation. After constructing this basic model, I describe the so-called "complexity hierarchy" of the rest of climate models, and argue that the sense of "complexity" used in the climate modeling community is related to dynamical complexity.

With a clear understanding of the basics of climate modeling in hand, I then argue that foundational issues discussed early in the dissertation suggest that computation plays an irrevocably central role in climate modeling. "Science by simulation" is essential given the complexity of the global climate, but features of the climate system--the presence of non-linearities, feedback loops, and chaotic dynamics--put principled limits on the effectiveness of computational models.
This tension is at the root of the staggering pluralism of the climate model hierarchy, and suggests that such pluralism is here to stay, rather than an artifact of our ignorance. Rather than attempting to converge on a single "best fit" climate model, we ought to embrace the diversity of climate models, and view each as a specialized tool designed to predict and explain a rather narrow range of phenomena. Understanding the climate system as a whole requires examining a number of different models, and correlating their outputs. This is the most significant methodological challenge of climatology.

Climatology's role in contemporary political discourse raises an unusually high number of evaluative questions for a physical science. The two leading approaches to crafting policy surrounding climate change center on mitigation (i.e. stopping the changes from occurring) and adaptation (making post hoc changes to ameliorate the harm caused by those changes). Crafting an effective socio-political response to the threat of anthropogenic climate change, however, requires us to integrate multiple perspectives and values: the proper response will be just as diverse and pluralistic as the climate models themselves, and will incorporate aspects of both approaches. I conclude by offering some concrete recommendations about how to integrate this value pluralism into our socio-political decision making framework.
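The basic energy-balance model the dissertation constructs (featureless blackbody, then albedo, then a greenhouse correction) can be sketched in a few lines. This is a minimal illustration with standard textbook constants; the emissivity value is a crude stand-in for the greenhouse effect, not a parameter taken from the dissertation:

```python
# Zero-dimensional energy-balance sketch: equilibrium when absorbed
# shortwave S0*(1 - albedo)/4 equals emitted longwave emissivity*sigma*T^4.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2

def equilibrium_temp(albedo=0.0, emissivity=1.0):
    """Equilibrium surface temperature in kelvin."""
    absorbed = S0 * (1.0 - albedo) / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(equilibrium_temp())                              # bare blackbody, ~278 K
print(equilibrium_temp(albedo=0.3))                    # add reflectivity: colder, ~255 K
print(equilibrium_temp(albedo=0.3, emissivity=0.61))   # crude greenhouse: ~288 K
```

Each added factor moves the prediction: albedo alone makes the model too cold, and the greenhouse correction brings it close to Earth's observed mean surface temperature of roughly 288 K, which is the pattern of successive refinement described above.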
Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress reached in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (in cases of both successful and unsuccessful output). This is due to the fact that the classical problem of opacity in artificial neural networks (ANNs) explodes with the adoption of current Deep Learning techniques [LeCun, Bengio, Hinton, 2015]. In this paper we argue that the explanatory deficit of such techniques represents an important problem that limits their adoption in the cognitive modelling and computational cognitive science arena. In particular, we will show how the current attempts at providing explanations of deep nets' behaviour (see e.g. [Ritter et al., 2017]) are not satisfactory. As a possible way out of this problem, we present two different research strategies. The first strategy aims at dealing with the opacity problem by providing a more abstract interpretation of neural mechanisms and representations. This approach is adopted, for example, by the biologically inspired SPAUN architecture [Eliasmith et al., 2012] and by other proposals suggesting, for example, the interpretation of neural networks in terms of the Conceptual Spaces framework [Gärdenfors 2000; Lieto, Chella and Frixione, 2017]. All such proposals presuppose that the neural level of representation can be considered somehow irrelevant for attacking the problem of explanation [Lieto, Lebiere and Oltramari, 2017]. In our opinion, pursuing this research direction can still preserve the use of deep learning techniques in artificial cognitive models, provided that novel and additional results in terms of “transparency” are obtained.
The second strategy is somewhat at odds with the previous one and tries to address the explanatory issue by avoiding directly solving the “opacity” problem. In this case, the idea is to resort to pre-compiled plausible explanatory models of the world used in combination with deep nets (see e.g. [Augello et al. 2017]). We argue that this research agenda, even if it does not directly fit the explanatory needs of Computational Cognitive Science, can still be useful to provide results in the area of applied AI, aiming at shedding light on the models of interaction between low-level and high-level tasks (e.g. between perceptual categorization and explanation) in artificial systems.
The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability, and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem dealt with through the typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on field work, which should go beyond the in-vitro analysis of UX to examine in-vivo problems emerging in the field. Our methodology is also comparative, as it chooses to steer away from the almost exclusive focus on ML to compare its challenges with those faced by more vintage algorithms. Finally, it is also philosophical, as we defend the relevance of the philosophical literature to define the epistemic desiderata of a good explanation. This study was conducted in collaboration with Etalab, a Task Force of the French Prime Minister in charge of Open Data & Open Government Policies, dealing in particular with the enforcement of the right to an explanation. In order to illustrate and refine our methodology before going up to scale, we conduct a preliminary work of case studies on the main types of algorithms used by the French administration: computation, matching algorithms and ML. We study the merits and drawbacks of a recent approach to explanation, which we baptize input-output black box reasoning, or BBR for short. We begin by presenting a conceptual framework including the distinctions necessary to a study of pedagogical explainability. We proceed to algorithmic case studies, and draw model-specific and model-agnostic lessons and conjectures.
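The input-output style of reasoning the paper examines can be made concrete with a toy probe (our own construction for illustration, not Etalab's tooling or the paper's cases): vary one input of an opaque scoring function at a time and record how the output responds.

```python
# Toy black-box probe (illustrative only): `opaque_score` stands in for an
# administrative algorithm whose internals we cannot inspect.

def opaque_score(income, age, postcode):
    """Hypothetical opaque scoring rule; postcode is silently ignored."""
    return 2 * income + (1 if age >= 25 else 0)

def sensitivity(box, baseline, deltas):
    """For each input, report the output change when only it is perturbed."""
    base_out = box(**baseline)
    report = {}
    for name, delta in deltas.items():
        probed = dict(baseline)
        probed[name] += delta
        report[name] = box(**probed) - base_out
    return report

report = sensitivity(opaque_score,
                     baseline={'income': 100, 'age': 30, 'postcode': 75001},
                     deltas={'income': 1, 'age': 10, 'postcode': 1})
print(report)  # the postcode perturbation has no effect
```

The probe also shows a pedagogical limit of pure input-output reasoning: the age threshold at 25 is invisible from this baseline, so a lay user probing locally would wrongly conclude age is irrelevant — the kind of drawback a comparative, field-based study can surface.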
Simulation models of the Reiterated Prisoner's Dilemma have been popular for studying the evolution of cooperation for more than 30 years now. However, there have been practically no successful instances of empirical application of any of these models. At the same time, this lack of empirical testing and confirmation has almost entirely been ignored by the modelers' community. In this paper, I examine some of the typical narratives and standard arguments with which these models are justified by their authors despite the lack of empirical validation. I find that most of the narratives and arguments are not at all compelling. Nonetheless, they seem to serve an important function in keeping the simulation business running despite its empirical shortcomings.
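The kind of Reiterated Prisoner's Dilemma simulation at issue can be sketched in a few lines (a bare-bones textbook tournament with the standard T=5, R=3, P=1, S=0 payoffs and canonical strategies, not any particular model from the literature):

```python
# Minimal iterated Prisoner's Dilemma: (my move, your move) -> (my payoff, your payoff).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=200):
    """Play `rounds` rounds; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))  # TFT is exploited once, then matches defection
```

The very ease of producing such results is part of the paper's point: nothing in the simulation itself tells us whether any empirical system of cooperating agents is described by it.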
Throughout this paper, we try to show how and why our mathematical framework seems inappropriate for solving problems in the Theory of Computation. More exactly, the concept of turning back in time in paradoxes causes inconsistency in the modeling of the concept of time in some semantic situations. As we see in the first chapter, by introducing a version of the “Unexpected Hanging Paradox”, we first attempt to open a new explanation for some paradoxes. In the second step, by applying this paradox, it is demonstrated that any formalized system for the Theory of Computation based on Classical Logic and the Turing Model of Computation leads us to a contradiction. We conclude that our mathematical framework is inappropriate for the Theory of Computation. Furthermore, the result provides us a reason why many problems in Complexity Theory resist solution. (This work was completed on 2017-05-02, posted to viXra on 2017-05-14, and presented at UNILOG 2018, Vichy.)
The paper defends the claim that the mechanistic explanation of information processing is the fundamental kind of explanation in cognitive science. These mechanisms are complex organized systems whose functioning depends on the orchestrated interaction of their component parts and processes. A constitutive explanation of every mechanism must include appeal both to its environment and to the role the mechanism plays in it. This role has traditionally been dubbed competence. To fully explain how this role is played, it is necessary to explain the information processing inside the mechanism embedded in the environment. The most usual explanation on this level has the form of a computational model, for example a software program or a trained artificial neural network. However, this is not the end of the explanatory chain. What is left to be explained is how the program is realized (or what processes are responsible for information processing in the artificial neural network). By using two dramatically different examples from the history of cognitive science, I show the multi-level structure of explanations in cognitive science. These examples are (1) the explanation of human problem solving as proposed by A. Newell & H. Simon; (2) the explanation of cricket phonotaxis via robotic models by B. Webb.
This essay is divided into two parts. In the first part (§2), I introduce the idea of practical meaning by looking at a certain kind of procedural systems — the motor system — that play a central role in computational explanations of motor behavior. I argue that in order to give a satisfactory account of the content of the representations computed by motor systems (motor commands), we need to appeal to a distinctively practical kind of meaning. Defending the explanatory relevance of semantic properties in a computationalist explanation of motor behavior, my argument concludes that practical meanings play a central role in an adequate psychological theory of motor skill. In the second part of this essay (§3), I generalize and clarify the notion of practical meaning, and I defend the intelligibility of practical meanings against an important objection.
The claim defended in the paper is that the mechanistic account of explanation can easily embrace idealization in big-scale brain simulations, and that only causally relevant detail should be present in explanatory models. The claim is illustrated with two methodologically different models: Blue Brain, used for particular simulations of the cortical column in hybrid models, and Eliasmith's SPAUN model, which is both biologically realistic and able to explain eight different tasks. By drawing on the mechanistic theory of computational explanation, I argue that large-scale simulations require that the explanandum phenomenon be identified; otherwise, the explanatory value of such explanations is difficult to establish, and testing the model empirically by comparing its behavior with the explanandum remains practically impossible. The completeness of the explanation, and hence the explanatory value of the explanatory model, is to be assessed vis-à-vis the explanandum phenomenon, which is not to be conflated with raw observational data and may be idealized. I argue that idealizations, which include building models of a single phenomenon displayed by multi-functional mechanisms, lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration, are indispensable for complex systems such as brains; otherwise, the model may be as complex as the explanandum phenomenon, which would make it prone to the so-called Bonini paradox. I conclude by enumerating dimensions of empirical validation of explanatory models according to new mechanism, given in the form of a “checklist” for a modeler.
We provide two programmatic frameworks for integrating philosophical research on understanding with complementary work in computer science, psychology, and neuroscience. First, philosophical theories of understanding have consequences about how agents should reason if they are to understand; these consequences can then be evaluated empirically by their concordance with findings in scientific studies of reasoning. Second, these studies use a multitude of explanations, and a philosophical theory of understanding is well suited to integrating these explanations in illuminating ways.
In this article, I argue that the artificial components of hybrid bionic systems do not play a direct explanatory role, i.e., in simulative terms, in the overall context of the systems in which they are embedded. More precisely, I claim that the internal procedures determining the output of such artificial devices, replacing biological tissues and connected to other biological tissues, cannot be used to directly explain the corresponding mechanisms of the biological component(s) they substitute (and therefore cannot be used to explain the local mechanisms determining an overall biological or cognitive function replicated by such bionic models). I ground this analysis on the use of the Minimal Cognitive Grid (MCG), a novel framework proposed in Lieto (Cognitive Design for Artificial Minds, 2021) to rank the epistemological and explanatory status of biologically and cognitively inspired artificial systems. Despite the lack of such a direct mechanistic explanation from the artificial component, however, I also argue that hybrid bionic systems can have an indirect explanatory role similar to the one played by some AI systems built using an overall structural design approach (but including the partial adoption of functional components). In particular, the artificial replacement of part(s) of a biological system can provide (i) a local functional account of those part(s) in the context of the overall functioning of the hybrid biological–artificial system and (ii) global insights about the structural mechanisms of the biological elements connected to such artificial devices.
We outline a framework of multilevel neurocognitive mechanisms that incorporates representation and computation. We argue that paradigmatic explanations in cognitive neuroscience fit this framework and thus that cognitive neuroscience constitutes a revolutionary break from traditional cognitive science. Whereas traditional cognitive scientific explanations were supposed to be distinct and autonomous from mechanistic explanations, neurocognitive explanations aim to be mechanistic through and through. Neurocognitive explanations aim to integrate computational and representational functions and structures across multiple levels of organization in order to explain cognition. To a large extent, practicing cognitive neuroscientists have already accepted this shift, but philosophical theory has not fully acknowledged and appreciated its significance. As a result, the explanatory framework underlying cognitive neuroscience has remained largely implicit. We explicate this framework and demonstrate its contrast with previous approaches.
We overview the main historical and technological elements characterising the rise, the fall and the recent renaissance of the cognitive approaches to Artificial Intelligence and provide some insights and suggestions about the future directions and challenges that, in our opinion, this discipline needs to face in the next years.