Intelligent Tutoring Systems (ITS) have a wide influence on education, health, training, and educational programs. In this paper we describe an intelligent tutoring system that helps students study computer networks. The system provides an intelligent presentation of educational content appropriate for each student, taking into account the student's degree of knowledge, desired level of detail, assessment results, level, and familiarity with the subject. Our intelligent tutoring system was developed using the ITSB authoring tool for building ITSs. A preliminary evaluation of the ITS was carried out by a group of students and teachers, and the results were acceptable.
The relationship between abstract formal procedures and the activities of actual physical systems has proved to be surprisingly subtle and controversial, and there are a number of competing accounts of when a physical system can be properly said to implement a mathematical formalism and hence perform a computation. I defend an account wherein computational descriptions of physical systems are high-level normative interpretations motivated by our pragmatic concerns. Furthermore, the criteria of utility and success vary according to our diverse purposes and pragmatic goals. Hence there is no independent or uniform fact of the matter, and I advance the ‘anti-realist’ conclusion that computational descriptions of physical systems are not founded upon deep ontological distinctions, but rather upon interest-relative human conventions. Hence physical computation is a ‘conventional’ rather than a ‘natural’ kind.
The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature in distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is extrinsic to formal procedures as such, and the intended interpretation of syntax makes no difference to the execution of an algorithm. So the unique 'content' postulated by RTM is superfluous to the formal procedures of CTM. And once these procedures are implemented in a physical mechanism, it is exclusively the causal properties of the physical mechanism that are responsible for all aspects of the system's behaviour. So once again, postulated content is rendered superfluous. To the extent that semantic content may appear to play a role in behaviour, it must be syntactically encoded within the system, and just as in a standard computational artefact, so too with the human mind/brain - it's pure syntax all the way down to the level of physical implementation. Hence 'content' is at most a convenient meta-level gloss, projected from the outside by human theorists, which itself can play no role in cognitive processing.
This book addresses key conceptual issues relating to the modern scientific and engineering use of computer simulations. It analyses a broad set of questions, from the nature of computer simulations to their epistemological power, including the many scientific, social and ethical implications of using computer simulations. The book is written in an easily accessible narrative, one that weaves together philosophical questions and scientific technicalities. It will thus appeal to academic scientists, engineers, and researchers in industry interested in questions related to the general practice of computer simulations.
Physical Computation is the summation of Piccinini’s work on computation and mechanistic explanation over the past decade. It draws together material from papers published during that time, but also provides additional clarifications and restructuring that make this the definitive presentation of his mechanistic account of physical computation. This review will first give a brief summary of the account that Piccinini defends, followed by a chapter-by-chapter overview of the book, before finally discussing one aspect of the account in more critical detail.
The development of technology is unbelievably rapid. From limited local networks to high-speed Internet, from crude computing machines to powerful semiconductors, the world has changed drastically compared to just a few decades ago. In the constantly renewing process of adapting to such an unnaturally high-entropy setting, innovations, as well as entirely new concepts, were often born. In the business world, one such phenomenon was the creation of a new type of entrepreneurship. This paper proposes a new academic discipline of computational entrepreneurship, which centers on: (i) exponentially growing (and less expensive) computing power, to the extent that almost everybody in a modern society can own and use it; (ii) omnipresent high-speed Internet connectivity, wired or wireless, representing our modern day’s economic connectomics; (iii) a growing concern with exploiting “serendipity” for strategic commercial advantage; and (iv) the growing capability of lay people to perform the calculations needed for informed decisions when seizing fast-moving entrepreneurial opportunities. Computational entrepreneurship has slowly become a new mode of operation for business ventures and will likely bring the academic discipline of entrepreneurship back to mainstream economics.
In an attempt to determine the epistemic status of computer simulation results, philosophers of science have recently explored the similarities and differences between computer simulations and experiments. One question that arises is whether, and if so when, simulation results constitute novel empirical data. It is often supposed that computer simulation results could never be empirical or novel because simulations never interact with their targets, and cannot go beyond their programming. This paper argues against this position by examining whether, and under what conditions, the features of empiricality and novelty could be displayed by computer simulation data. I show that, to the extent that certain familiar measurement results have these features, so can some computer simulation results.
Is the mathematical function being computed by a given physical system determined by the system’s dynamics? This question is at the heart of the phenomenon of the indeterminacy of computation (Fresco et al. [unpublished]). A paradigmatic example is a conventional electrical AND-gate that is often said to compute conjunction, but which can just as well be used to compute disjunction. Despite the pervasiveness of this phenomenon in physical computational systems, it has been discussed in the philosophical literature only indirectly, mostly with reference to the debate over realism about physical computation and computationalism. A welcome exception is Dewhurst’s ([2018]) recent analysis of computational individuation under the mechanistic framework. He rejects the idea of appealing to semantic properties for determining the computational identity of a physical system. But Dewhurst seems too quick to pay the price of giving up the notion of computational equivalence. We aim to show that the mechanist need not pay this price. The mechanistic framework can, in principle, preserve the idea of computational equivalence even between two sufficiently different kinds of physical systems, say, electrical and hydraulic ones.
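As a hedged illustration of the indeterminacy the abstract describes (my own sketch, not taken from the paper), the Python snippet below models a single gate whose physical behaviour is fixed (its output is high exactly when both inputs are high) and shows that under one mapping of voltage levels to truth values the gate computes conjunction, while under the complementary mapping it computes disjunction.

```python
# Hypothetical illustration of computational indeterminacy: one physical gate,
# two equally legitimate logical descriptions. Names and mappings are my own.

def gate(v1: str, v2: str) -> str:
    """Fixed physical behaviour: output is 'high' iff both inputs are 'high'."""
    return "high" if v1 == "high" and v2 == "high" else "low"

# Interpretation A: high voltage stands for True, low for False.
to_bool_A = {"high": True, "low": False}
# Interpretation B: the complementary convention, high stands for False.
to_bool_B = {"high": False, "low": True}

def computed_function(interpretation):
    """Return the truth table the gate realises under a given interpretation."""
    from_bool = {v: k for k, v in interpretation.items()}
    return {(a, b): interpretation[gate(from_bool[a], from_bool[b])]
            for a in (True, False) for b in (True, False)}

print(computed_function(to_bool_A))  # the truth table of AND (conjunction)
print(computed_function(to_bool_B))  # the truth table of OR (disjunction)
```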
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay; it grows stronger every year.
Computer-Assisted Argument Mapping (CAAM) is a new way of understanding arguments. While still embryonic in its development and application, CAAM is being used increasingly as a training and development tool in the professions and government. Inroads are also being made in its application within education. CAAM claims to be helpful in an educational context, as a tool for students in responding to assessment tasks. However, to date there is little evidence from students that this is the case. This paper outlines the use of CAAM as an educational tool within an Economics and Commerce Faculty in a major Australian research university. Evaluation results are provided from students in a CAAM pilot within an upper-level Economics subject. Results indicate promising support for the use of CAAM and its potential for transferability across the disciplines. If further studies show it to be valuable, CAAM could be included in capstone subjects, allowing computer technology to be utilised in the service of generic skill development.
This paper explores how the Leviathan that projects power through nuclear arms exercises a unique nuclearized sovereignty. In the case of nuclear superpowers, this sovereignty extends to wielding the power to destroy human civilization as we know it across the globe. Nuclearized sovereignty depends on a hybrid form of power encompassing human decision-makers in a hierarchical chain of command, and all of the technical and computerized functions necessary to maintain command and control at every moment of the sovereign's existence: this sovereign power cannot sleep. This article analyzes how the form of rationality that informs this hybrid exercise of power historically developed to be computable. By definition, computable rationality must be able to function without any intelligible grasp of the context or the comprehensive significance of decision-making outcomes. Thus, to maintain nuclearized sovereignty, the system must be able to execute momentous life and death decisions without the type of sentience we usually associate with ethical individual and collective decisions.
This paper argues that the idea of a computer is unique. Calculators and analog computers are not different ideas about computers, and nature does not compute by itself. Computers, once clearly defined in all their terms and mechanisms, rather than enumerated by behavioral examples, can be more than instrumental tools in science, and more than a source of analogies and taxonomies in philosophy. They can help us understand semantic content and its relation to form. This can be achieved because they have the potential to do more than calculators, which are computers that are designed not to learn. Today’s computers are not designed to learn; rather, they are designed to support learning; therefore, any theory of content tested by computers that currently exist must be of an empirical, rather than a formal, nature. If they are someday designed to learn, we will see a change in roles, requiring an empirical theory about the Turing architecture’s content, using the primitives of learning machines. This way of thinking, which I call the intensional view of computers, avoids the problems of analogies between minds and computers. It focuses on the constitutive properties of computers, showing clearly how they can help us avoid the infinite regress in interpretation, and how we can clarify the terms of the suggested mechanisms to facilitate a useful debate. Within the intensional view, syntax and content in the context of computers become two ends of physically realizing correspondence problems in various domains.
This paper presents a theoretical study of the binary oppositions underlying the mechanisms of natural computation understood as dynamical processes on natural information morphologies. Of special interest are the oppositions of discrete vs. continuous, structure vs. process, and differentiation vs. integration. The framework used is that of computing nature, where all natural processes at different levels of organisation are computations over informational structures. The interactions at different levels of granularity/organisation in nature, and the character of the phenomena that unfold through those interactions, are modeled from the perspective of an observing agent. This brings us to the movement from binary oppositions to dynamic networks built upon mutually related binary oppositions, where each node has several properties.
In most accounts of the realization of computational processes by physical mechanisms, it is presupposed that there is a one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically. In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
The purpose of this paper is to argue against the claim that morphological computation is substantially different from other kinds of physical computation. I show that some (but not all) purported cases of morphological computation do not count as specifically computational, and that those that do are solely physical computational systems. These latter cases are not, however, specific enough: all computational systems, not only morphological ones, may (and sometimes should) be studied in various ways, including their energy efficiency, cost, reliability, and durability. Second, I critically analyze the notion of “offloading” computation to the morphology of an agent or robot, by showing that, literally, computation is sometimes not offloaded but simply avoided. Third, I point out that while the morphology of any agent is indicative of the environment that it is adapted to, or informative about that environment, it does not follow that every agent has access to its morphology as the model of its environment.
The segregation of image parts into foreground and background is an important aspect of the neural computation of 3D scene perception. To achieve such segregation, the brain needs information about border ownership; that is, the belongingness of a contour to a specific surface represented in the image. This article presents psychophysical data derived from 3D percepts of figure and ground that were generated by presenting 2D images composed of spatially disjoint shapes that pointed inward or outward relative to the continuous boundaries that they induced along their collinear edges. The shapes in some images had the same contrast (black or white) with respect to the background gray. Other images included opposite contrasts along each induced continuous boundary. Psychophysical results demonstrate conditions under which figure-ground judgment probabilities in response to these ambiguous displays are determined by the orientation of contrasts only, not by their relative contrasts, despite the fact that many border ownership cells in cortical area V2 respond to a preferred relative contrast. Studies are also reviewed in which both polarity-specific and polarity-invariant properties obtain perceptual figure-ground grouping results. The FACADE and 3D LAMINART models are used to explain these data. Keywords: figure-ground separation, border ownership, perceptual grouping, surface filling-in, V2, V4, FACADE Theory.
Since the sixties, computational modeling has become increasingly important in both the physical and the social sciences, particularly in physics, theoretical biology, sociology, and economics. Since the eighties, philosophers too have begun to apply computational modeling to questions in logic, epistemology, philosophy of science, philosophy of mind, philosophy of language, philosophy of biology, ethics, and social and political philosophy. This chapter analyzes a selection of interesting examples in some of those areas.
Argument mapping is a way of diagramming the logical structure of an argument to explicitly and concisely represent reasoning. The use of argument mapping in critical thinking instruction has increased dramatically in recent decades. This paper overviews the innovation and provides a procedural approach for new teachers wanting to use argument mapping in the classroom. A brief history of argument mapping is provided at the end of this paper.
Very plausibly, nothing can be a genuine computing system unless it meets an input-sensitivity requirement. Otherwise all sorts of objects, such as rocks or pails of water, can count as performing computations, even such as might suffice for mentality—thus threatening computationalism about the mind with panpsychism. Maudlin (J Philos 86:407–432, 1989) and Bishop (2002a, b) have argued, however, that such a requirement creates difficulties for computationalism about conscious experience, putting it in conflict with the very intuitive thesis that conscious experience supervenes on physical activity. Klein (Synthese 165:141–153, 2008) proposes a way for computationalists about experience to avoid panpsychism while still respecting the supervenience of experience on activity. I argue that his attempt to save computational theories of experience from Maudlin’s and Bishop’s critique fails.
The paper analyses six ethical challenges posed by cloud computing, concerning ownership, safety, fairness, responsibility, accountability and privacy. The first part defines cloud computing on the basis of a resource-oriented approach, and outlines the main features that characterise such technology. Following these clarifications, the second part argues that cloud computing reshapes some classic problems often debated in information and computer ethics. To begin with, cloud computing makes possible a complete decoupling of ownership, possession and use of data, and this helps to explain the problems occurring when different providers of cloud computing retain or relinquish the right to use or own users' data. The problem of safety in cloud computing is coupled to that of reliability, insofar as users have to trust providers to preserve their data, applications and content in a reliable manner. It is argued that, in this context, data insurance could play an important role. Regarding fairness, the paper argues that cloud computing is already reshaping the nature of the Digital. Responsibility, accountability and privacy close the ethical analysis of cloud computing. In this case, the thesis is that the necessity to account for the actions of cloud computing users imposes delicate trade-offs between users' privacy and the traceability of their operations.
European Computing and Philosophy conference, 2–4 July, Barcelona. The Seventh ECAP (European Computing and Philosophy) conference was organized by Jordi Vallverdu at the Autonomous University of Barcelona. The conference started with the IACAP (The International Association for Computing and Philosophy) presidential address by Luciano Floridi, focusing on mechanisms of knowledge production in informational networks. The first keynote, delivered by Klaus Mainzer, set the frame for the rest of the conference by elucidating the fundamental role of the complexity of informational structures, which can be analyzed at different levels of organization, giving rise to a variety of possible approaches that converge in this cross-disciplinary and multi-disciplinary research field. Keynotes by Kevin Warwick on the re-embodiment of rats’ neurons into robots, Raymond Turner on syntax and semantics in programming languages, Roderic Guigo on biocomputing sciences, and Francesco Subirada on the past and future of supercomputing presented different philosophical as well as practical aspects of computing. Conference tracks included: Philosophy of Information (Patrick Allo), Philosophy of Computer Science (Raymond Turner), Computer and Information Ethics (Johnny Søraker and Alison Adam), Computational Approaches to the Mind (Ruth Hagengruber), IT and Cultural Diversity (Jutta Weber and Charles Ess), Crossroads (David Casacuberta), Robotics, AI & Ambient Intelligence (Thomas Roth-Berghofer), Biocomputing, Evolutionary and Complex Systems (Gordana Dodig Crnkovic and Søren Brier), E-learning, E-science and Computer-Supported Cooperative Work (Annamaria Carusi), and Technological Singularity and Acceleration Studies (Amnon Eden).
The scope of Platonism is extended by introducing the concept of a “Platonic computer” which is incorporated in metacomputics. The theoretical framework of metacomputics postulates that a Platonic computer exists in the realm of Forms and is made by, of, with, and from metaconsciousness. Metaconsciousness is defined as the “power to conceive, to perceive, and to be self-aware” and is the formless, contentless infinite potentiality. Metacomputics models how metaconsciousness generates the perceived actualities including abstract entities and physical and nonphysical realities. It is postulated that this is achieved via digital computation using the Platonic computer. The introduction of a Platonic computer into the realm of Forms thus bridges the “inverse explanatory gap” and therefore solves the “inverse hard problem of consciousness” in the philosophy of mind.
Computers can mimic human intelligence, sometimes quite impressively. This has led some to claim that (a) computers can actually acquire intelligence, and/or (b) the human mind may be thought of as a very sophisticated computer. In this paper I argue that neither of these inferences is sound. The human mind and computers, I argue, operate on radically different principles.
Context: At present, we lack a common understanding of both the process of cognition in living organisms and the construction of knowledge in embodied, embedded cognizing agents in general, including future artifactual cognitive agents under development, such as cognitive robots and softbots. Purpose: This paper aims to show how the info-computational approach (IC) can reinforce constructivist ideas about the nature of cognition and knowledge and, conversely, how constructivist insights (such as that the process of cognition is the process of life) can inspire new models of computing. Method: The info-computational constructive framework is presented for the modeling of cognitive processes in cognizing agents. Parallels are drawn with other constructivist approaches to cognition and knowledge generation. We describe how cognition as a process of life itself functions based on info-computation and how the process of knowledge generation proceeds through interactions with the environment and among agents. Results: Cognition and knowledge generation in a cognizing agent is understood as interaction with the world (potential information), which by processes of natural computation becomes actual information. That actual information, after integration, becomes knowledge for the agent. Heinz von Foerster is identified as a precursor of natural computing, in particular biocomputing. Implications: IC provides a framework for the unified study of cognition in living organisms (from the simplest ones, such as bacteria, to the most complex ones) as well as in artifactual cognitive systems. Constructivist content: It supports the constructivist view that knowledge is actively constructed by cognizing agents and shared in a process of social cognition. IC argues that this process can be modeled as info-computation.
Traditionally, computational theory (CT) and dynamical systems theory (DST) have presented themselves as opposed and incompatible paradigms in cognitive science. There have been some efforts to reconcile these paradigms, mainly by assimilating DST to CT at the expense of its anti-representationalist commitments. In this paper, building on Piccinini’s mechanistic account of computation and the notion of functional closure, we explore an alternative conciliatory strategy. We try to assimilate CT to DST by dropping its representationalist commitments, and by inviting CT to recognize the functionally closed nature of some computational systems.
A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
Any computer can create a model of reality. The hypothesis that a quantum computer can generate such a model, designated as quantum, which coincides with the modeled reality, is discussed. Its grounds are the theorems about the absence of “hidden variables” in quantum mechanics. Quantum modeling requires the axiom of choice. The following conclusions are deduced from the hypothesis. A quantum model, unlike a classical model, can coincide with reality. Reality can be interpreted as a quantum computer. Physical processes represent computations of the quantum computer. Quantum information is the real fundament of the world. The conception of the quantum computer unifies physics and mathematics and thus the material and the ideal world. The quantum computer is in principle a non-Turing machine. Any quantum computation can be interpreted as an infinite classical computational process of a Turing machine. The quantum computer introduces the notion of an “actually infinite computational process”. The discussed hypothesis is consistent with all of quantum mechanics. The conclusions address a form of neo-Pythagoreanism: unifying the mathematical and the physical, the quantum computer is situated in an intermediate domain of their mutual transformation.
This paper is in two parts. Part I outlines three traditional approaches to the teaching of critical thinking: the normative, cognitive psychology, and educational approaches. Each of these approaches is discussed in relation to the influences of various methods of critical thinking instruction. The paper contrasts these approaches with what I call the “visualisation” approach. This approach is explained with reference to computer-aided argument mapping (CAAM), which uses dedicated computer software to represent inferences between premises and conclusions. The paper presents a detailed account of the CAAM methodology, and a theoretical justification for its use, illustrating this with the argument mapping software Rationale™. A number of Rationale™ design conventions and logical principles are outlined, including the principle of abstraction, the MECE principle, and the “Holding Hands” and “Rabbit Rule” heuristics. Part II of this paper outlines the growing empirical evidence for the effectiveness of CAAM as a method of teaching critical thinking.
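To make the flavour of such conventions concrete, here is a toy sketch (my own illustration, not Rationale's implementation) of the kind of check the “Rabbit Rule” heuristic suggests: every significant term in the conclusion should already appear in at least one premise, so that nothing is pulled out of a hat.

```python
# Hypothetical toy check inspired by the "Rabbit Rule" heuristic described above.
# A naive word-overlap test; real argument-mapping software is far more nuanced.

STOPWORDS = {"a", "an", "the", "is", "are", "to", "of", "and", "therefore"}

def significant_terms(sentence: str) -> set[str]:
    return {w.strip(".,").lower() for w in sentence.split()} - STOPWORDS

def rabbit_rule_violations(premises: list[str], conclusion: str) -> set[str]:
    """Conclusion terms that occur in no premise (candidate 'rabbits')."""
    premise_terms = set().union(*(significant_terms(p) for p in premises))
    return significant_terms(conclusion) - premise_terms

premises = ["Argument mapping makes reasoning explicit.",
            "Explicit reasoning is easier to assess."]
conclusion = "Argument mapping is easier to assess."
print(rabbit_rule_violations(premises, conclusion))  # set() -> no rabbits here
```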
This chapter draws an analogy between computing mechanisms and autopoietic systems, focusing on the non-representational status of both kinds of system (computational and autopoietic). It will be argued that the role played by input and output components in a computing mechanism closely resembles the relationship between an autopoietic system and its environment, and in this sense differs from the classical understanding of inputs and outputs. The analogy helps to make sense of why we should think of computing mechanisms as non-representational, and might also facilitate reconciliation between computational and autopoietic/enactive approaches to the study of cognition.
This paper connects information with computation and cognition via the concept of agents that appear at a variety of levels of organization of physical/chemical/cognitive systems – from elementary particles to atoms, molecules, life-like chemical systems, to cognitive systems starting with living cells, up to organisms and ecologies. In order to obtain this generalized framework, the concepts of information, computation and cognition are generalized. In this framework, nature can be seen as an informational structure with computational dynamics, where an (info-computational) agent is needed for the potential information of the world to actualize. Starting from the definition of information as the difference in one physical system that makes a difference in another physical system – which combines Bateson's and Hewitt's definitions – the argument is advanced for natural computation as a computational model of the dynamics of the physical world, where information processing is constantly going on at a variety of levels of organization. This setting helps us to elucidate the relationships between computation, information, agency and cognition within a common conceptual framework, with special relevance for biology and robotics.
This dissertation examines aspects of the interplay between computing and scientific practice. The appropriate foundational framework for such an endeavour is real computability rather than classical computability theory. This is so because the physical sciences, engineering, and applied mathematics mostly employ functions defined on continuous domains. But, contrary to the case of computation over the natural numbers, there is no universally accepted framework for real computation; rather, there are two incompatible approaches (computable analysis and the BSS model), both claiming to formalise algorithmic computation and to offer foundations for scientific computing. The dissertation consists of three parts. In the first part, we examine what notion of 'algorithmic computation' underlies each approach and how it is respectively formalised. It is argued that the very existence of the two rival frameworks indicates that 'algorithm' is not one unique concept in mathematics, but is used in more than one way. We test this hypothesis for consistency with mathematical practice as well as with key foundational works that aim to define the term. As a result, new connections between certain subfields of mathematics and computer science are drawn, and a distinction between 'algorithms' and 'effective procedures' is proposed. In the second part, we focus on the second goal of the two rival approaches to real computation, namely, to provide foundations for scientific computing. We examine both frameworks in detail, what idealisations they employ, and how they relate to the floating-point arithmetic systems used in real computers. We explore the limitations and advantages of both frameworks, and answer questions about which one is preferable for computational modelling and which one for addressing general computability issues. In the third part, analog computing and its relation to analogue (physical) modelling in science are investigated. Based on some paradigmatic cases of the former, a certain view about the nature of computation is defended, and the indispensable role of representation in it is emphasized and accounted for. We also propose a novel account of the distinction between analog and digital computation and, based on it, we compare analog computational modelling to physical modelling. It is concluded that the two practices, despite their apparent similarities, are orthogonal.
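One small, concrete illustration of the gap between idealised real computation and the floating-point arithmetic mentioned above (my own example, not one from the dissertation): machine floats are finite binary approximations of reals, so elementary identities that hold exactly in computable analysis or in the BSS model can fail on an actual computer.

```python
# Hypothetical illustration: real-number identities versus IEEE 754 doubles.
a = 0.1 + 0.2
print(a == 0.3)             # False: 0.1 and 0.2 have no exact binary representation
print(f"{a:.20f}")          # 0.30000000000000004441...

# Associativity of addition also fails at machine precision.
print((1e16 + 1.0) - 1e16)  # 0.0, although the exact real-number result is 1.0
```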
We present a simple example that disproves the universality principle. Unlike previous counter-examples to computational universality, it does not rely on extraneous phenomena, such as the availability of input variables that are time varying, computational complexity that changes with time or order of execution, physical variables that interact with each other, uncertain deadlines, or mathematical conditions among the variables that must be obeyed throughout the computation. In the most basic case of the new example, all that is used is a single pre-existing global variable whose value is modified by the computation itself. In addition, our example offers a new dimension for separating the computable from the uncomputable, while illustrating the power of parallelism in computation.
This work addresses a broad range of questions which belong to four fields: computation theory, general philosophy of science, philosophy of cognitive science, and philosophy of mind. Dynamical system theory provides the framework for a unified treatment of these questions. The main goal of this dissertation is to propose a new view of the aims and methods of cognitive science--the dynamical approach. According to this view, the object of cognitive science is a particular set of dynamical systems, which I call "cognitive systems". The goal of a cognitive study is to specify a dynamical model of a cognitive system, and then use this model to produce a detailed account of the specific cognitive abilities of that system. The dynamical approach does not limit a priori the form of the dynamical models which cognitive science may consider. In particular, this approach is compatible with both computational and connectionist modeling, for both computational systems and connectionist networks are special types of dynamical systems. To substantiate these methodological claims about cognitive science, I deal first with two questions in two different fields: What is a computational system? What is a dynamical explanation of a deterministic process? Intuitively, a computational system is a deterministic system which evolves in discrete time steps, and which can be described in an effective way. In chapter 1, I give a formal definition of this concept which employs the notions of isomorphism between dynamical systems, and of Turing computable function. In chapter 2, I propose a more comprehensive analysis which is based on a natural generalization of the concept of Turing machine. The goal of chapter 3 is to develop a theory of the dynamical explanation of a deterministic process. By a "dynamical explanation" I mean the specification of a dynamical model of the system or process which we want to explain. I start from the analysis of a specific type of explanandum--dynamical phenomena--and I then use this analysis to shed light on the general form of a dynamical explanation. Finally, I analyze the structure of those theories which generate explanations of this form, namely dynamical theories.
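For readers who want the intuition behind "a deterministic system which evolves in discrete time steps and can be described in an effective way", here is a hedged formal sketch along the lines the abstract indicates; it is my paraphrase of the idea, not the dissertation's own definition.

```latex
% Sketch of a discrete-time deterministic dynamical system and of when it
% might count as computational (an assumed paraphrase, not the original text).
\[
  DS = (M,\, g), \qquad g : M \to M, \qquad x_{n+1} = g(x_n).
\]
% DS is computational if it is isomorphic to a system whose states are finite
% strings and whose step function is Turing computable: there exist a decidable
% set of strings S, a computable h : S -> S, and a bijection f : M -> S with
\[
  f\bigl(g(x)\bigr) = h\bigl(f(x)\bigr) \quad \text{for all } x \in M .
\]
```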
In this paper a possible general framework for the representation of concepts in cognitive artificial systems and cognitive architectures is proposed. The framework is inspired by the so-called proxytype theory of concepts and combines it with the heterogeneity approach to concept representations, according to which concepts do not constitute a unitary phenomenon. The contribution of the paper is twofold: on the one hand, it aims at providing a novel theoretical hypothesis for the debate about concepts in the cognitive sciences by providing unexplored connections between different theories; on the other hand, it aims at sketching a computational characterization of the problem of concept representation in cognitively inspired artificial systems and in cognitive architectures.
A complete axiomatic system CTL$_{rp}$ is introduced for a temporal logic for finitely branching $\omega^+$-trees in a temporal language extended with so-called reference pointers. Syntactic and semantic interpretations are constructed for the branching-time computation tree logic CTL* into CTL$_{rp}$. In particular, that yields a complete axiomatization for the translations of all valid CTL*-formulae. Thus, the temporal logic with reference pointers is brought forward as a simpler (with no path quantifiers), but in a way more expressive, medium for reasoning about branching time.
In this paper, building on these previous works, we propose to go deeper into the understanding of crowd behavior by proposing an approach which integrates ontological models of crowd behavior and dedicated computer vision algorithms, with the aim of recognizing some targeted complex events happening in the playground from the observation of the spectator crowd behavior. In order to do that, we first propose an ontology encoding available knowledge on spectator crowd behavior, built as a specialization of the DOLCE foundational ontology, which allows the representation of categories belonging both to the physical and to the social realms. We then propose a simplified and tractable version of such ontology in a new temporal extension of a description logic, which is used for temporally coupling events happening on the playground and spectator crowd behavior. At last, computer vision algorithms provide the input information concerning what is observed on the stands and ontological reasoning delivers the output necessary to perform complex event recognition.
The articles in this volume present a selection of works from the Symposium on Natural/Unconventional Computing at the AISB/IACAP (British Society for the Study of Artificial Intelligence and the Simulation of Behaviour and The International Association for Computing and Philosophy) World Congress 2012, held at the University of Birmingham to celebrate the Turing centenary. This book is about nature considered as the totality of physical existence, the universe. By physical we mean all phenomena - objects and processes - that are possible to detect either directly by our senses or via instruments. Historically, there have been many ways of describing the universe (cosmic egg, cosmic tree, theistic universe, mechanistic universe), and a particularly prominent contemporary approach is the computational universe.
Critical in the computationalist account of the mind is the phenomenon called computational or computer simulation of human thinking, which is used to establish the theses that human thinking is a computational process and that computing machines are thinking systems. Accordingly, if human thinking can be simulated computationally then human thinking is a computational process; and if human thinking is a computational process then its computational simulation is itself a thinking process. This paper shows that the said phenomenon—the computational simulation of human thinking—is ill-conceived, and that, as a consequence, the theses that it intends to establish are problematic. It is argued that what is simulated computationally is not human thinking as such but merely its behavioral manifestations; and that a computational simulation of these behavioral manifestations does not necessarily establish that human thinking is computational, as it is logically possible for a non-computational system to exhibit behaviors that lend themselves to a computational simulation.
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s 1988 monograph Representation & Reality (Bradford Books, Cambridge), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automaton”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks etc.) must instantiate conscious experience and hence that disembodied minds lurk everywhere.
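The core of the Putnam-style construction can be made vivid with a small sketch (my own hedged illustration, not the paper's proof): given any physical system that passes through distinct states over an interval, one can define a mapping from those physical states to the state trace of a chosen machine Q on input x, so that the physical evolution trivially "mirrors" the computation.

```python
# Hypothetical illustration of a Putnam-style implementation mapping.
# 'physical_trace' stands in for any sequence of distinct physical states
# (e.g. successive microstates of a rock warming in the sun); machine Q and
# its state trace are invented here purely for the sake of the example.

def machine_Q_trace(x: int) -> list[str]:
    """Toy machine Q: the sequence of states it visits on input x (assumed)."""
    return ["q0"] + [f"q{i % 3 + 1}" for i in range(x)] + ["halt"]

def implementation_mapping(physical_trace, computational_trace):
    """Map the i-th physical state onto the i-th computational state."""
    assert len(physical_trace) >= len(computational_trace)
    return dict(zip(physical_trace, computational_trace))

physical_trace = [f"microstate_{t}" for t in range(10)]   # any distinct states
mapping = implementation_mapping(physical_trace, machine_Q_trace(4))
for phys, comp in mapping.items():
    print(f"{phys} -> {comp}")
# Under this post-hoc mapping the physical system "implements" Q on input 4,
# which is the sense in which "everything implements the specific machine Q
# on a particular input set (x)", the weaker claim the paper defends.
```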
To study climate change, scientists employ computer models, which approximate target systems with various levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. Expanding on Staley’s (2004) distinction between evidential strength and security, and Lloyd’s (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models and epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of evidence supporting climatological inferences, including the finding that global warming is occurring and that its primary causes are anthropogenic.
In the last decades a growing body of literature in Artificial Intelligence (AI) and Cognitive Science (CS) has approached the problem of narrative understanding by means of computational systems. Narrative, in fact, is a ubiquitous element in our everyday activity, and the ability to generate and understand stories, and their structures, is a crucial cue of our intelligence. However, despite the fact that - from a historical standpoint - narrative (and narrative structures) have been an important topic of investigation in both these areas, a more comprehensive approach coupling them with narratology, digital humanities and literary studies was still lacking. With the aim of filling this gap, in the last years a multidisciplinary effort has been made to create an international meeting open to computer scientists, psychologists, digital humanists, linguists, narratologists, etc. This event has been named CMN (for Computational Models of Narrative) and was launched in 2009 by the MIT scholars Mark A. Finlayson and Patrick H. Winston.
Cloud computing is rapidly gaining traction in business. It offers businesses online services on demand (such as Gmail, iCloud and Salesforce) and allows them to cut costs on hardware and IT support. This is the first paper in business ethics dealing with this new technology. It analyzes the informational duties of hosting companies that own and operate cloud computing datacenters (e.g., Amazon). It considers the cloud services providers leasing ‘space in the cloud’ from hosting companies (e.g., Dropbox, Salesforce). And it examines the business and private ‘clouders’ using these services. The first part of the paper argues that hosting companies, services providers and clouders have mutual informational (epistemic) obligations to provide and seek information about relevant issues such as consumer privacy, reliability of services, data mining and data ownership. The concept of interlucency is developed as an epistemic virtue governing ethically effective communication. The second part considers potential forms of government restrictions on or proscriptions against the development and use of cloud computing technology. Referring to the concept of technology neutrality, it argues that interference with hosting companies and cloud services providers is hardly ever necessary or justified. It is argued, too, however, that businesses using cloud services (e.g., banks, law firms, and hospitals storing client data in the cloud) will have to follow rather more stringent regulations.
Arguments for extended cognition and the extended mind are typically directed at human-centred forms of cognitive extension—forms of cognitive extension in which the cognitive/mental states/processes of a given human individual are subject to a form of extended or wide realization. The same is true of debates and discussions pertaining to the possibility of Web-extended minds and Internet-based forms of cognitive extension. In this case, the focus of attention concerns the extent to which the informational and technological elements of the online environment form part of the machinery of the (individual) human mind. In this paper, we direct attention to a somewhat different form of cognitive extension. In particular, we suggest that the Web allows human individuals to be incorporated into the computational/cognitive routines of online systems. These forms of computational/cognitive extension highlight the potential of the Web and Internet to support bidirectional forms of computational/cognitive incorporation. The analysis of such bidirectional forms of incorporation broadens the scope of philosophical debates in this area, with potentially important implications for our understanding of the foundational notions of extended cognition and the extended mind.
The paper provides a critical review of the debate on the foundations of Computer Ethics (CE). Starting from a discussion of Moor's classic interpretation of the need for CE caused by a policy and conceptual vacuum, five positions in the literature are identified and discussed: the "no resolution approach", according to which CE can have no foundation; the professional approach, according to which CE is solely a professional ethics; the radical approach, according to which CE deals with absolutely unique issues, in need of a unique approach; the conservative approach, according to which CE is only a particular applied ethics, discussing new species of traditional moral issues; and the innovative approach, according to which theoretical CE can expand the metaethical discourse with a substantially new perspective. In the course of the analysis, it is argued that, although CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the adoption of standard macroethics, such as Utilitarianism and Deontologism, as the foundation of CE and hence to prompt the search for a robust ethical theory. Information Ethics (IE) is proposed for that theory, as the satisfactory foundation for CE. IE is characterised as a biologically unbiased extension of environmental ethics, based on the concepts of information object/infosphere/entropy rather than life/ecosystem/pain. In light of the discussion provided in this paper, it is suggested that CE is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, IE.
Detractors of Searle’s Chinese Room Argument have arrived at a virtual consensus that the mental properties of the Man performing the computations stipulated by the argument are irrelevant to whether computational cognitive science is true. This paper challenges this virtual consensus to argue for the first of the two main theses of the persons reply, namely, that the mental properties of the Man are what matter. It does this by challenging many of the arguments and conceptions put forth by the systems and logical replies to the Chinese Room, either reducing them to absurdity or showing how they lead, on the contrary, to conclusions the persons reply endorses. The paper bases its position on the Chinese Room Argument on additional philosophical considerations, the foundations of the theory of computation, and theoretical and experimental psychology. The paper purports to show how all these dimensions tend to support the proposed thesis of the persons reply.
Multiple realizability (MR) is traditionally conceived of as a feature of computational systems, and has been used to argue for the irreducibility of higher-level theories. I will show that there are several ways a computational system may be seen to display MR. These ways correspond to (at least) five ways one can conceive of the function of the physical computational system. However, they do not match common intuitions about MR. I show that MR is deeply interest-related, and for this reason difficult to pin down exactly. I claim that MR is of little importance for defending computationalism, and argue that computationalism should rather appeal to the organizational invariance or substrate neutrality of computation, which are much more intuitive notions but cannot support strong antireductionist arguments.
Interactive theorem provers might seem particularly impractical in the history of philosophy. Journal articles in this discipline are generally not formalized. Interactive theorem provers involve a learning curve for which the payoffs might seem minimal. In this article I argue that interactive theorem provers have already demonstrated their potential as a useful tool for historians of philosophy; I do this by highlighting examples of work where this has already been done. Further, I argue that interactive theorem provers can continue to be useful tools for historians of philosophy in the future; this claim is defended through a more conceptual analysis of what historians of philosophy do that identifies argument reconstruction as a core activity of such practitioners. It is then shown that interactive theorem provers can assist in this core practice by a description of what interactive theorem provers are and can do. If this is right, then computer verification for historians of philosophy is in the offing.
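As a minimal, hypothetical illustration of what machine-checked argument reconstruction looks like (not an example drawn from the article), a textbook syllogism can be stated and verified in a few lines of Lean:

```lean
-- Hypothetical sketch: reconstructing a classic syllogism so that the proof
-- assistant certifies that the conclusion follows from the stated premises.
variable (Person : Type) (Human Mortal : Person → Prop) (socrates : Person)

theorem socrates_is_mortal
    (h1 : ∀ x, Human x → Mortal x)  -- premise 1: all humans are mortal
    (h2 : Human socrates)           -- premise 2: Socrates is human
    : Mortal socrates :=            -- conclusion, checked by the kernel
  h1 socrates h2
```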
Computer and information ethics, as well as other fields of applied ethics, need ethical theories which coherently unify deontological and consequentialist aspects of ethical analysis. The proposed theory of just consequentialism emphasizes consequences of policies within the constraints of justice. This makes just consequentialism a practical and theoretically sound approach to ethical problems of computer and information ethics.