The purpose of this paper is to argue against the claim that morphological computation is substantially different from other kinds of physical computation. First, I show that some (but not all) purported cases of morphological computation do not count as specifically computational, and that those that do are solely physical computational systems. These latter cases are not, however, specific enough: all computational systems, not only morphological ones, may (and sometimes should) be studied in various ways, including their energy efficiency, cost, reliability, and durability. Second, I critically analyze the notion of “offloading” computation to the morphology of an agent or robot, by showing that, literally, computation is sometimes not offloaded but simply avoided. Third, I point out that while the morphology of any agent is indicative of the environment that it is adapted to, or informative about that environment, it does not follow that every agent has access to its morphology as the model of its environment.
The emerging contemporary natural philosophy provides a common ground for an integrative view of natural, artificial, and human-social knowledge and practices. The learning process is central for acquiring, maintaining, and managing knowledge, both theoretical and practical. This paper explores the relationships between the present advances in understanding of learning in the sciences of the artificial, natural sciences, and philosophy. The question is what, at this stage of development, inspiration from nature, specifically its computational models such as info-computation through morphological computing, can contribute to machine learning and artificial intelligence, and how much, on the other hand, models and experiments in machine learning and robotics can motivate, justify, and inform research in computational cognitive science, neurosciences, and computing nature. We propose that one contribution can be an understanding of the mechanisms of ‘learning to learn’, as a step towards deep learning with a symbolic layer of computation/information processing in a framework linking connectionism with symbolism. As all natural systems possessing intelligence are cognitive systems, we describe the evolutionary arguments for the necessity of learning to learn for a system to reach human-level intelligence through evolution and development. The paper thus presents a contribution to the epistemology of the contemporary philosophy of nature.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as ‘reservoir computing’, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: the question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
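Since ‘reservoir computing’ is the one case this abstract counts as morphological computation proper, a minimal sketch may help fix ideas. The following Python example is an illustration under assumed sizes and constants, not the authors' setup: a fixed random recurrent network, the "reservoir" standing in for a physical body, does all the temporal processing, and only a linear readout is trained.

```python
import numpy as np

# Minimal echo state network: the fixed random "reservoir" plays the role
# that a compliant body plays in physical reservoir computing; only the
# linear readout is trained. All sizes and constants are illustrative.
rng = np.random.default_rng(0)
n_res, n_steps, washout = 100, 500, 100

W_in = rng.uniform(-0.5, 0.5, size=n_res)        # fixed input weights
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))  # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

u = np.sin(np.linspace(0.0, 20.0, n_steps))      # input signal
target = np.roll(u, 5)                           # task: reproduce a delayed input

x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])             # untrained reservoir dynamics
    states[t] = x

# Train the readout alone by least squares on the collected states.
W_out, *_ = np.linalg.lstsq(states[washout:], target[washout:], rcond=None)
mse = np.mean((states[washout:] @ W_out - target[washout:]) ** 2)
print(f"readout mean squared error: {mse:.2e}")
```

The point carries over directly: the nonlinear temporal processing happens in dynamics that are never trained or programmed, and "programming" reduces to reading those dynamics out.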
Morphological computation is based on the observation that biological systems seem to carry out relevant computations with their morphology (physical body) in order to successfully interact with their environments. This can be observed in a whole range of systems and at many different scales. It has been studied in animals – e.g., while running, the functionality of coping with impact and slight unevenness in the ground is "delivered" by the shape of the legs and the damped elasticity of the muscle-tendon system – and in plants, but it has also been observed at the cellular and even at the molecular level – as seen, for example, in spontaneous self-assembly. The concept of morphological computation has served as an inspirational resource for building bio-inspired robots, designing novel approaches for support systems in health care, and implementing computation with natural systems, but also in art and architecture. As a consequence, the field is highly interdisciplinary, which is also nicely reflected in the wide range of authors featured in this e-book. We have contributions from robotics, mechanical engineering, health, architecture, biology, philosophy, and others.
“Morphological computation” is an increasingly important concept in robotics, artificial intelligence, and the philosophy of mind. It is used to understand how the body contributes to the cognition and control of behavior. Its understanding in terms of "offloading" computation from the brain to the body has been criticized as misleading, and it has been suggested that the use of the concept conflates three classes of distinct processes. In fact, these criticisms implicitly hang on accepting a semantic definition of what constitutes computation. Here, I argue that an alternative, mechanistic view on computation offers a significantly different understanding of what morphological computation is. These theoretical considerations are then used to analyze the existing research program in developmental biology, which understands morphogenesis, the process of development of shape in biological systems, as a computational process. This important line of research shows that cognition and intelligence can be found across all scales of life, as the proponents of the basal cognition research program propose. Hence, clarifying the connection between morphological computation and morphogenesis allows for strengthening the role of the former concept in this emerging research field.
The integration of embodied and computational approaches to cognition requires that non-neural body parts be described as parts of a computing system that realizes cognitive processing. In this paper, based on research on morphological computation and the ecology of vision, I argue that non-neural body parts can be described as parts of a computational system, but they do not realize computation autonomously, only in connection with some kind of—even in the simplest form—central control system. Finally, I integrate the proposal defended in the paper with the contemporary mechanistic approach to wide computation.
This paper presents a theoretical study of the binary oppositions underlying the mechanisms of natural computation understood as dynamical processes on natural information morphologies. Of special interest are the oppositions of discrete vs. continuous, structure vs. process, and differentiation vs. integration. The framework used is that of computing nature, where all natural processes at different levels of organisation are computations over informational structures. The interactions at different levels of granularity/organisation in nature, and the character of the phenomena that unfold through those interactions, are modeled from the perspective of an observing agent. This brings us to the movement from binary oppositions to dynamic networks built upon mutually related binary oppositions, where each node has several properties.
This paper presents a view of nature as a network of infocomputational agents organized in a dynamical hierarchy of levels. It provides a framework for the unification of currently disparate understandings of natural, formal, technical, behavioral and social phenomena, based on information as a structure (differences in one system that cause the differences in another system) and computation as its dynamics, i.e. the physical process of morphological change in the informational structure. We address some of the frequent misunderstandings regarding the natural/morphological computational models and their relationships to physical systems, especially cognitive systems such as living beings. Natural morphological infocomputation as a conceptual framework necessitates the generalization of models of computation beyond the traditional Turing machine model of symbol manipulation, and requires agent-based, concurrent, resource-sensitive models of computation in order to cover the whole range of phenomena from physics to cognition. The central role of agency, particularly material vs. cognitive agency, is highlighted.
Engineers fine-tune the design of robot bodies for control purposes; however, a methodology or set of tools for this is largely absent, and the optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or "soft" bodies. These carry substantial potential regarding their exploitation for control—sometimes referred to as "morphological computation". In this article, we briefly review different notions of computation by physical systems and propose the dynamical systems framework as the most useful in the context of describing and eventually designing the interactions of controllers and bodies. Then, we look at the pros and cons of simple vs. complex bodies, critically reviewing the attractive notion of "soft" bodies automatically taking over control tasks. We address another key dimension of the design space—whether model-based control should be used and to what extent it is feasible to develop faithful models for different morphologies.
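As a toy illustration of the dynamical systems view advocated here, consider a passive mass-spring-damper "leg" (a sketch with illustrative parameters, not a model from the article): it rejects an impact disturbance with no controller in the loop, which is the sense in which a compliant body can "take over" a control task.

```python
# Passive mass-spring-damper "leg" rejecting an impact disturbance with no
# controller acting; all parameters are illustrative assumptions.
m, k, c = 1.0, 100.0, 5.0      # mass [kg], stiffness [N/m], damping [N*s/m]
dt, t_end = 0.001, 2.0         # time step and duration [s]

x, v = 0.0, 1.0                # impact modeled as an initial velocity of 1 m/s
for _ in range(int(t_end / dt)):
    a = (-k * x - c * v) / m   # spring and damper forces only: pure morphology
    v += a * dt                # explicit Euler integration
    x += v * dt

print(f"residual deflection after {t_end} s: {x:+.6f} m")  # decays toward zero
```

A feedback controller achieving the same disturbance rejection would have to sense, compute, and actuate at every step; here that work is simply absorbed by the body's dynamics.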
Context: At present, we lack a common understanding of both the process of cognition in living organisms and the construction of knowledge in embodied, embedded cognizing agents in general, including future artifactual cognitive agents under development, such as cognitive robots and softbots. Purpose: This paper aims to show how the info-computational approach (IC) can reinforce constructivist ideas about the nature of cognition and knowledge and, conversely, how constructivist insights (such as that the process of cognition is the process of life) can inspire new models of computing. Method: The info-computational constructive framework is presented for the modeling of cognitive processes in cognizing agents. Parallels are drawn with other constructivist approaches to cognition and knowledge generation. We describe how cognition as a process of life itself functions based on info-computation and how the process of knowledge generation proceeds through interactions with the environment and among agents. Results: Cognition and knowledge generation in a cognizing agent is understood as interaction with the world (potential information), which by processes of natural computation becomes actual information. That actual information after integration becomes knowledge for the agent. Heinz von Foerster is identified as a precursor of natural computing, in particular biocomputing. Implications: IC provides a framework for the unified study of cognition in living organisms (from the simplest ones, such as bacteria, to the most complex ones) as well as in artifactual cognitive systems. Constructivist content: It supports the constructivist view that knowledge is actively constructed by cognizing agents and shared in a process of social cognition. IC argues that this process can be modeled as info-computation.
The electric activities of cortical pyramidal neurons are supported by structurally stable, morphologically complex axo-dendritic trees. Anatomical differences between axons and dendrites in regard to their length or caliber reflect the underlying functional specializations, for input or output of neural information, respectively. For a proper assessment of the computational capacity of pyramidal neurons, we have analyzed an extensive dataset of three-dimensional digital reconstructions from the NeuroMorpho.Org database, and quantified basic dendritic and axonal morphometric measures in different regions and layers of the mouse, rat or human cerebral cortex. Physical estimates of the total number and type of ions involved in neuronal electric spiking, based on the obtained morphometric data, combined with the energetics of neurotransmitter release and signaling fueled by glucose consumed by the active brain, support highly efficient cerebral computation performed at the thermodynamically allowed Landauer limit for the implementation of irreversible logical operations. Individual proton tunneling events in voltage-sensing S4 protein alpha-helices of Na+, K+ or Ca2+ ion channels are ideally suited to serve as single Landauer elementary logical operations that are then amplified by selective ionic currents traversing the open channel pores. This miniaturization of computational gating allows the execution of over 1.2 zetta logical operations per second in the human cerebral cortex without combusting the brain by the released heat.
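The claimed operation rate can be sanity-checked against the Landauer bound with a few lines of arithmetic. The sketch below uses a ~20 W whole-brain power budget, a commonly cited assumption rather than a figure from the paper, and shows that 1.2 zetta irreversible operations per second dissipate only a few watts at body temperature:

```python
import math

# Sanity check of the Landauer-limit claim; the 20 W budget is an assumed,
# commonly cited estimate for whole-brain power consumption.
k_B = 1.380649e-23                       # Boltzmann constant [J/K]
T = 310.0                                # human body temperature [K]
E_min = k_B * T * math.log(2)            # Landauer cost of one bit erasure [J]

ops_per_s = 1.2e21                       # the abstract's 1.2 zetta operations/s
print(f"Landauer energy per operation: {E_min:.3e} J")             # ~2.97e-21 J
print(f"dissipation at 1.2e21 ops/s:   {ops_per_s * E_min:.2f} W")  # ~3.6 W < 20 W
```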
According to the computational theory of mind (CTM), to think is to compute. But what is meant by the word 'compute'? The generally given answer is this: every case of computing is a case of manipulating symbols, but not vice versa; a manipulation of symbols must be driven exclusively by the formal properties of those symbols if it is to qualify as a computation. In this paper, I will present the following argument. Words like 'form' and 'formal' are ambiguous, as they can refer to form in either the syntactic or the morphological sense. CTM fails on each disambiguation, and the arguments for CTM immediately cease to be compelling once we register that ambiguity. The terms 'mechanical' and 'automatic' are comparably ambiguous. Once these ambiguities are exposed, it turns out that there is no possibility of mechanizing thought, even if we confine ourselves to domains where all problems can be settled through decision-procedures. The impossibility of mechanizing thought thus has nothing to do with recherché mathematical theorems, such as those proven by Gödel and Rosser. A related point is that CTM involves, and is guilty of reinforcing, a misunderstanding of the concept of an algorithm.
I want to suggest that the major influence of classical arguments for embodiment like "The Embodied Mind" by Varela, Thompson & Rosch (1991) has been a changing of positions rather than a refutation: cognitivism has found ways to retreat and regroup at positions that have better fortification, especially where theses about artificial intelligence or artificial cognitive systems are concerned. For example: a) 'agent-based cognitivism', which understands humans as taking in representations of the world, doing rule-based processing and then acting on them (sense-plan-act), is often limited to conscious decision processes; and b) purely syntactic cognition is compatible with embodiment, or supplemented by embodiment (e.g. for 'grounding'). While the empirical thesis of embodied cognition ('embodied cognitive science') is true and the practical engineering thesis ('morphological computation', 'cheap design') is often true, the conceptual thesis ('embodiment is necessary for cognition') is likely false: syntax is often enough for cognition, unless grounding is really necessary. I conclude that it has become more sensible to integrate embodiment with traditional approaches rather than to "fight for embodiment" or "against cognitivism".
A series of representations must be semantics-driven if the members of that series are to combine into a single thought: where semantics is not operative, there is at most a series of disjoint representations that add up to nothing true or false, and therefore do not constitute a thought at all. A consequence is that there is necessarily a gulf between simulating thought, on the one hand, and actually thinking, on the other. A related point is that a popular doctrine, the so-called 'computational theory of mind' (CTM), is based on a confusion. CTM is the view that thought-processes consist in 'computations', where a computation is defined as a 'form-driven' operation on symbols. The expression 'form-driven operation' is ambiguous, as it may refer either to syntax-driven operations or to morphology-driven operations. Syntax-driven operations presuppose the existence of operations that are driven by semantic and extra-semantic knowledge. So CTM is false if the terms 'computation' and 'form-driven operation' are taken to refer to syntax-driven operations. Thus, if CTM is to work, those expressions must be taken to refer to morphology-driven operations. But an operation must be semantics-driven, not merely morphology-driven, if it is to qualify as a thought. CTM therefore fails on each possible disambiguation of the expressions 'formal operation' and 'computation', and it is therefore false.
Contextual vocabulary acquisition (CVA) is the deliberate acquisition of a meaning for a word in a text by reasoning from context, where “context” includes: (1) the reader’s “internalization” of the surrounding text, i.e., the reader’s “mental model” of the word’s “textual context” (hereafter, “co-text” [3]), integrated with (2) the reader’s prior knowledge (PK), but excludes (3) external sources such as dictionaries or people. CVA is what you do when you come across an unfamiliar word in your reading, realize that you don’t know what it means, decide that you need to know what it means in order to understand the passage, but there is no one around to ask, and it is not in the dictionary (or you are too lazy to look it up). In such a case, you can try to figure out its meaning “from context”, i.e., from clues in the co-text together with your prior knowledge. Our computational theory of CVA—implemented in the SNePS knowledge representation and reasoning system [28]—begins with a stored knowledge base containing SNePS representations of relevant PK, inputs SNePS representations of a passage containing an unfamiliar word, and draws inferences from these two (integrated) information sources. When asked to define the word, definition algorithms deductively search the resulting network for information of the sort that might be found in a dictionary definition, outputting a definition frame whose slots are the kinds of features that a definition might contain (e.g., class membership, properties, actions, spatio-temporal information, etc.) and whose slot-fillers contain information gleaned from the network [6–8,20,23,24]. We are investigating ways to make our system more robust, to embed it in a natural-language-processing system, and to incorporate morphological information. Our research group, including reading educators, is also applying our methods to the develop-.
This paper subjects Dan Brown’s most recent novel Origin to a philosophical reading. Origin is regarded as a literary window into contemporary technoscience, inviting us to explore its transformative momentum and disruptive impact, focusing on the cultural significance of artificial intelligence and computer science: on the way in which established world-views are challenged by the incessant wave of scientific discoveries made possible by super-computation. While initially focusing on the tension between science and religion, the novel’s attention gradually shifts to the increased dependence of human beings on smart technologies and artificial intelligence. Origin’s message, I will argue, reverberates with Oswald Spengler’s The Decline of the West, which aims to outline a morphology of world civilizations. Although the novel starts with a series of oppositions, most notably between religion and science, the eventual tendency is towards convergence, synthesis and sublation, exemplified by the Sagrada Família as a monumental symptom of this transition. Three instances of convergence will be highlighted, namely the convergence between science and religion, between humanity and technology, and between the natural sciences and the humanities.
Text-guided image generation models can be prompted to generate images using nonce words adversarially designed to robustly evoke specific visual concepts. Two approaches for such generation are introduced: macaronic prompting, which involves designing cryptic hybrid words by concatenating subword units from different languages; and evocative prompting, which involves designing nonce words whose broad morphological features are similar enough to those of existing words to trigger robust visual associations. The two methods can also be combined to generate images associated with more specific visual concepts. The implications of these techniques for the circumvention of existing approaches to content moderation, and particularly the generation of offensive or harmful images, are discussed.
Since the early days of physics, space has called for means to represent, experiment with, and reason about it. Apart from physicists, the concept of space has intrigued philosophers, mathematicians and, more recently, computer scientists. This longstanding interest has left us with a plethora of mathematical tools developed to represent and work with space. Here we take a special look at this evolution by considering the perspective of Logic. From the initial axiomatic efforts of Euclid, we revisit the major milestones in the logical representation of space and investigate current trends. In doing so, we not only consider classical logic, but also indulge ourselves with modal logics. These present themselves naturally by providing simple axiomatizations of different geometries, topologies, space-time causality, and vector spaces.
“Emergence” – the notion of novel, unpredictable and irreducible properties developing out of complex organisational entities – is itself a complex, multi-dimensional concept. To date there is no single, generally agreed upon “theory of emergence”, but instead a number of different approaches and perspectives. Neither is there a common conceptual or meta-theoretical framework by which to systematically identify, exemplify and compare different “theories”. Building upon earlier work done by sociologist Kenneth Bailey, this article presents a method for creating such a framework, and outlines the conditions for a collaborative effort in order to carry out such a task. A brief historical and theoretical background is given both to the concept of “emergence” and to the non-quantified modelling method General Morphological Analysis (GMA).
The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature in distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is extrinsic to formal procedures as such, and the intended interpretation of syntax makes no difference to the execution of an algorithm. So the unique 'content' postulated by RTM is superfluous to the formal procedures of CTM. And once these procedures are implemented in a physical mechanism, it is exclusively the causal properties of the physical mechanism that are responsible for all aspects of the system's behaviour. So once again, postulated content is rendered superfluous. To the extent that semantic content may appear to play a role in behaviour, it must be syntactically encoded within the system; and just as in a standard computational artefact, so too with the human mind/brain - it's pure syntax all the way down to the level of physical implementation. Hence 'content' is at most a convenient meta-level gloss, projected from the outside by human theorists, which itself can play no role in cognitive processing.
The relationship between abstract formal procedures and the activities of actual physical systems has proved to be surprisingly subtle and controversial, and there are a number of competing accounts of when a physical system can properly be said to implement a mathematical formalism and hence perform a computation. I defend an account wherein computational descriptions of physical systems are high-level normative interpretations motivated by our pragmatic concerns. Furthermore, the criteria of utility and success vary according to our diverse purposes and pragmatic goals. Hence there is no independent or uniform fact of the matter, and I advance the ‘anti-realist’ conclusion that computational descriptions of physical systems are not founded upon deep ontological distinctions, but rather upon interest-relative human conventions. Hence physical computation is a ‘conventional’ rather than a ‘natural’ kind.
A morphological and morphometric study of the umbilical cords of 50 newborn babies was carried out during January to December 1998 at Bangabandhu Sheikh Mujib Medical University, Dhaka, to expand the knowledge of the gross anatomy of the umbilical cord in Bangladesh. The length of the cords, irrespective of sex, ranged from 28 to 93 cm with a mean (±SD) of 55.6 (±10.78) cm. The umbilical cords of males were significantly longer than those of females (P<0.001). The diameter of the cords, irrespective of sex, varied from 1 to 1.9 cm with a mean (±SD) of 1.45 ± 0.31 cm. The mean circumference length percentage ratio index of the umbilical cord was 8.31. Thirty-three (66%) cords were inserted eccentrically, all being paracentral in position. The rest were inserted centrally. False knots were more frequent (47; 94%). Only one (2%) cord showed a true knot in addition to a false knot. In 2 (4%) cases the cord had neither a true nor a false knot. It is concluded that the gross morphological and morphometric features of the umbilical cord in Bangladesh appear to be similar to those described in the western literature.
This book addresses key conceptual issues relating to the modern scientific and engineering use of computer simulations. It analyses a broad set of questions, from the nature of computer simulations to their epistemological power, including the many scientific, social and ethical implications of using computer simulations. The book is written in an easily accessible narrative, one that weaves together philosophical questions and scientific technicalities. It will thus appeal equally to academic scientists, engineers, and researchers in industry interested in questions related to the general practice of computer simulations.
Physical Computation is the summation of Piccinini’s work on computation and mechanistic explanation over the past decade. It draws together material from papers published during that time, but also provides additional clarifications and restructuring that make this the definitive presentation of his mechanistic account of physical computation. This review will first give a brief summary of the account that Piccinini defends, followed by a chapter-by-chapter overview of the book, before finally discussing one aspect of the account in more critical detail.
Is the mathematical function being computed by a given physical system determined by the system’s dynamics? This question is at the heart of the indeterminacy of computation phenomenon (Fresco et al. [unpublished]). A paradigmatic example is a conventional electrical AND-gate that is often said to compute conjunction, but it can just as well be used to compute disjunction. Despite the pervasiveness of this phenomenon in physical computational systems, it has been discussed in the philosophical literature only indirectly, mostly with reference to the debate over realism about physical computation and computationalism. A welcome exception is Dewhurst’s ([2018]) recent analysis of computational individuation under the mechanistic framework. He rejects the idea of appealing to semantic properties for determining the computational identity of a physical system. But Dewhurst seems to be too quick to pay the price of giving up the notion of computational equivalence. We aim to show that the mechanist need not pay this price. The mechanistic framework can, in principle, preserve the idea of computational equivalence even between two different enough kinds of physical systems, say, electrical and hydraulic ones.
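The AND/OR example can be made concrete in a few lines of code (a sketch; the labelling conventions are assumptions for illustration): one and the same physical input-output behaviour computes conjunction under one voltage-to-truth mapping and disjunction under the inverted mapping.

```python
def gate(a: int, b: int) -> int:
    """Fixed physical behaviour: output voltage is high iff both inputs are high."""
    return 1 if (a == 1 and b == 1) else 0

# Reading 1: high voltage = True  -> the device computes AND.
# Reading 2: high voltage = False -> the very same device computes OR.
for a in (0, 1):
    for b in (0, 1):
        out = gate(a, b)
        and_reading = (bool(a), bool(b), bool(out))   # high voltage read as True
        or_reading = (not a, not b, not out)          # high voltage read as False
        assert (and_reading[0] and and_reading[1]) == and_reading[2]
        assert (or_reading[0] or or_reading[1]) == or_reading[2]
        print(f"volts {a},{b} -> {out} | AND: {and_reading} | OR: {or_reading}")
```

Nothing in the device's dynamics favours one reading over the other; only the external labelling convention does, which is exactly the indeterminacy at issue.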
The development of technology is unbelievably rapid. From limited local networks to high-speed Internet, from crude computing machines to powerful semiconductors, the world has changed drastically compared to just a few decades ago. In the constantly renewing process of adapting to such an unnaturally high-entropy setting, innovations, as well as entirely new concepts, were often born. In the business world, one such phenomenon was the creation of a new type of entrepreneurship. This paper proposes a new academic discipline of computational entrepreneurship, which centers on: (i) an exponentially growing (and less expensive) computing power, to the extent that almost everybody in a modern society can own and use it; (ii) omnipresent high-speed Internet connectivity, wired or wireless, representing our modern day’s economic connectomics; (iii) a growing concern with exploiting “serendipity” for strategic commercial advantage; and (iv) growing capabilities of lay people in performing calculations for informed decisions in taking fast-moving entrepreneurial opportunities. Computational entrepreneurship has slowly become a new mode of operation for business ventures and will likely bring the academic discipline of entrepreneurship back to mainstream economics.
In an attempt to determine the epistemic status of computer simulation results, philosophers of science have recently explored the similarities and differences between computer simulations and experiments. One question that arises is whether and, if so, when, simulation results constitute novel empirical data. It is often supposed that computer simulation results could never be empirical or novel because simulations never interact with their targets, and cannot go beyond their programming. This paper argues against this position by examining whether, and under what conditions, the features of empiricality and novelty could be displayed by computer simulation data. I show that, to the extent that certain familiar measurement results have these features, so can some computer simulation results.
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay; it becomes stronger every year.
Intelligent Tutoring Systems (ITS) have a wide influence on the exchange rate, education, health, training, and educational programs. In this paper we describe an intelligent tutoring system that helps students study computer networks. The current ITS provides intelligent presentation of educational content appropriate for students, taking into account the degree of knowledge, the desired level of detail, assessment, student level, and familiarity with the subject. Our intelligent tutoring system was developed using the ITSB authoring tool for building ITSs. A preliminary evaluation of the ITS was done by a group of students and teachers. The results were acceptable.
Computer-Assisted Argument Mapping (CAAM) is a new way of understanding arguments. While still embryonic in its development and application, CAAM is being used increasingly as a training and development tool in the professions and government. Inroads are also being made in its application within education. CAAM claims to be helpful in an educational context, as a tool for students in responding to assessment tasks. However, to date there is little evidence from students that this is the case. This paper outlines the use of CAAM as an educational tool within an Economics and Commerce Faculty in a major Australian research university. Evaluation results are provided from students from a CAAM pilot within an upper-level Economics subject. Results indicate promising support for the use of CAAM and its potential for transferability within the disciplines. If shown to be valuable with further studies, CAAM could be included in capstone subjects, allowing computer technology to be utilised in the service of generic skill development.
This paper explores how the Leviathan that projects power through nuclear arms exercises a unique nuclearized sovereignty. In the case of nuclear superpowers, this sovereignty extends to wielding the power to destroy human civilization as we know it across the globe. Nuclearized sovereignty depends on a hybrid form of power encompassing human decision-makers in a hierarchical chain of command, and all of the technical and computerized functions necessary to maintain command and control at every moment of the sovereign's existence: this sovereign power cannot sleep. This article analyzes how the form of rationality that informs this hybrid exercise of power historically developed to be computable. By definition, computable rationality must be able to function without any intelligible grasp of the context or the comprehensive significance of decision-making outcomes. Thus, maintaining nuclearized sovereignty requires the ability to execute momentous life-and-death decisions without the type of sentience we usually associate with ethical individual and collective decisions.
This paper argues that the idea of a computer is unique. Calculators and analog computers are not different ideas about computers, and nature does not compute by itself. Computers, once clearly defined in all their terms and mechanisms, rather than enumerated by behavioral examples, can be more than instrumental tools in science, and more than a source of analogies and taxonomies in philosophy. They can help us understand semantic content and its relation to form. This can be achieved because they have the potential to do more than calculators, which are computers that are designed not to learn. Today's computers are not designed to learn; rather, they are designed to support learning; therefore, any theory of content tested by computers that currently exist must be of an empirical, rather than a formal, nature. If they are someday designed to learn, we will see a change in roles, requiring an empirical theory about the Turing architecture's content, using the primitives of learning machines. This way of thinking, which I call the intensional view of computers, avoids the problems of analogies between minds and computers. It focuses on the constitutive properties of computers, showing clearly how they can help us avoid the infinite regress in interpretation, and how we can clarify the terms of the suggested mechanisms to facilitate a useful debate. Within the intensional view, syntax and content in the context of computers become two ends of physically realizing correspondence problems in various domains.
In most accounts of the realization of computational processes by physical mechanisms, it is presupposed that there is a one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically. In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
Argument mapping is a way of diagramming the logical structure of an argument to explicitly and concisely represent reasoning. The use of argument mapping in critical thinking instruction has increased dramatically in recent decades. This paper overviews the innovation and provides a procedural approach for new teachers wanting to use argument mapping in the classroom. A brief history of argument mapping is provided at the end of this paper.
The segregation of image parts into foreground and background is an important aspect of the neural computation of 3D scene perception. To achieve such segregation, the brain needs information about border ownership; that is, the belongingness of a contour to a specific surface represented in the image. This article presents psychophysical data derived from 3D percepts of figure and ground that were generated by presenting 2D images composed of spatially disjoint shapes that pointed inward or outward relative to the continuous boundaries that they induced along their collinear edges. The shapes in some images had the same contrast (black or white) with respect to the background gray. Other images included opposite contrasts along each induced continuous boundary. Psychophysical results demonstrate conditions under which figure-ground judgment probabilities in response to these ambiguous displays are determined by the orientation of contrasts only, not by their relative contrasts, despite the fact that many border ownership cells in cortical area V2 respond to a preferred relative contrast. Studies are also reviewed in which both polarity-specific and polarity-invariant properties obtain perceptual figure-ground grouping results. The FACADE and 3D LAMINART models are used to explain these data. Keywords: figure-ground separation, border ownership, perceptual grouping, surface filling-in, V2, V4, FACADE Theory.
Very plausibly, nothing can be a genuine computing system unless it meets an input-sensitivity requirement. Otherwise all sorts of objects, such as rocks or pails of water, can count as performing computations, even such as might suffice for mentality—thus threatening computationalism about the mind with panpsychism. Maudlin (J Philos 86:407–432, 1989) and Bishop (2002a, b) have argued, however, that such a requirement creates difficulties for computationalism about conscious experience, putting it in conflict with the very intuitive thesis that conscious experience supervenes on physical activity. Klein (Synthese 165:141–153, 2008) proposes a way for computationalists about experience to avoid panpsychism while still respecting the supervenience of experience on activity. I argue that his attempt to save computational theories of experience from Maudlin’s and Bishop’s critique fails.
Traditionally, computational theory (CT) and dynamical systems theory (DST) have presented themselves as opposed and incompatible paradigms in cognitive science. There have been some efforts to reconcile these paradigms, mainly by assimilating DST to CT at the expense of its anti-representationalist commitments. In this paper, building on Piccinini’s mechanistic account of computation and the notion of functional closure, we explore an alternative conciliatory strategy. We try to assimilate CT to DST by dropping its representationalist commitments, and by inviting CT to recognize the functionally closed nature of some computational systems.
The paper analyses six ethical challenges posed by cloud computing, concerning ownership, safety, fairness, responsibility, accountability and privacy. The first part defines cloud computing on the basis of a resource-oriented approach, and outlines the main features that characterise such technology. Following these clarifications, the second part argues that cloud computing reshapes some classic problems often debated in information and computer ethics. To begin with, cloud computing makes possible a complete decoupling of ownership, possession and use of data, and this helps to explain the problems occurring when different providers of cloud computing retain or relinquish the right to use or own users' data. The problem of safety in cloud computing is coupled to that of reliability, insofar as users have to trust providers to preserve their data, applications and content in a reliable manner. It is argued that, in this context, data insurance could play an important role. Regarding fairness, the paper argues that cloud computing is already reshaping the nature of the Digital. Responsibility, accountability and privacy close the ethical analysis of cloud computing. In this case, the thesis is that the necessity to account for the actions of cloud computing users imposes delicate trade-offs between users' privacy and the traceability of their operations.
Since the sixties, computational modeling has become increasingly important in both the physical and the social sciences, particularly in physics, theoretical biology, sociology, and economics. Since the eighties, philosophers too have begun to apply computational modeling to questions in logic, epistemology, philosophy of science, philosophy of mind, philosophy of language, philosophy of biology, ethics, and social and political philosophy. This chapter analyzes a selection of interesting examples in some of those areas.
European Computing and Philosophy conference, 2–4 July, Barcelona. The Seventh ECAP (European Computing and Philosophy) conference was organized by Jordi Vallverdu at the Autonomous University of Barcelona. The conference started with the IACAP (The International Association for CAP) presidential address by Luciano Floridi, focusing on mechanisms of knowledge production in informational networks. The first keynote, delivered by Klaus Mainzer, set a frame for the rest of the conference by elucidating the fundamental role of the complexity of informational structures that can be analyzed on different levels of organization, giving room for a variety of possible approaches that converge in this cross-disciplinary and multi-disciplinary research field. Keynotes by Kevin Warwick on the re-embodiment of rats’ neurons into robots, Raymond Turner on syntax and semantics in programming languages, Roderic Guigo on Biocomputing Sciences and Francesco Subirada on the past and future of supercomputing presented different philosophical as well as practical aspects of computing. Conference tracks included: Philosophy of Information (Patrick Allo), Philosophy of Computer Science (Raymond Turner), Computer and Information Ethics (Johnny Søraker and Alison Adam), Computational Approaches to the Mind (Ruth Hagengruber), IT and Cultural Diversity (Jutta Weber and Charles Ess), Crossroads (David Casacuberta), Robotics, AI & Ambient Intelligence (Thomas Roth-Berghofer), Biocomputing, Evolutionary and Complex Systems (Gordana Dodig Crnkovic and Søren Brier), E-learning, E-science and Computer-Supported Cooperative Work (Annamaria Carusi) and Technological Singularity and Acceleration Studies (Amnon Eden).
Computers can mimic human intelligence, sometimes quite impressively. This has led some to claim that (a) computers can actually acquire intelligence, and/or (b) the human mind may be thought of as a very sophisticated computer. In this paper I argue that neither of these inferences is sound. The human mind and computers, I argue, operate on radically different principles.
The scope of Platonism is extended by introducing the concept of a “Platonic computer”, which is incorporated in metacomputics. The theoretical framework of metacomputics postulates that a Platonic computer exists in the realm of Forms and is made by, of, with, and from metaconsciousness. Metaconsciousness is defined as the “power to conceive, to perceive, and to be self-aware” and is the formless, contentless infinite potentiality. Metacomputics models how metaconsciousness generates the perceived actualities, including abstract entities and physical and nonphysical realities. It is postulated that this is achieved via digital computation using the Platonic computer. The introduction of a Platonic computer into the realm of Forms thus bridges the “inverse explanatory gap” and therefore solves the “inverse hard problem of consciousness” in the philosophy of mind.
A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
Any computer can create a model of reality. The hypothesis that a quantum computer can generate such a model, designated as quantum, which coincides with the modeled reality, is discussed. Its grounds are the theorems about the absence of “hidden variables” in quantum mechanics. Quantum modeling requires the axiom of choice. The following conclusions are deduced from the hypothesis. A quantum model, unlike a classical model, can coincide with reality. Reality can be interpreted as a quantum computer. Physical processes represent computations of the quantum computer. Quantum information is the real fundament of the world. The conception of the quantum computer unifies physics and mathematics and thus the material and the ideal world. The quantum computer is a non-Turing machine in principle. Any quantum computing can be interpreted as an infinite classical computational process of a Turing machine. The quantum computer introduces the notion of an “actually infinite computational process”. The discussed hypothesis is consistent with all of quantum mechanics. The conclusions address a form of neo-Pythagoreanism: unifying the mathematical and the physical, the quantum computer is situated in an intermediate domain of their mutual transformation.
This paper is in two parts. Part I outlines three traditional approaches to the teaching of critical thinking: the normative, cognitive psychology, and educational approaches. Each of these approaches is discussed in relation to the influences of various methods of critical thinking instruction. The paper contrasts these approaches with what I call the “visualisation” approach. This approach is explained with reference to computer-aided argument mapping (CAAM), which uses dedicated computer software to represent inferences between premises and conclusions. The paper presents a detailed account of the CAAM methodology, and a theoretical justification for its use, illustrating this with the argument mapping software Rationale™. A number of Rationale™ design conventions and logical principles are outlined, including the principle of abstraction, the MECE principle, and the “Holding Hands” and “Rabbit Rule” heuristics. Part II of this paper outlines the growing empirical evidence for the effectiveness of CAAM as a method of teaching critical thinking.
This chapter draws an analogy between computing mechanisms and autopoietic systems, focusing on the non-representational status of both kinds of system (computational and autopoietic). It will be argued that the role played by input and output components in a computing mechanism closely resembles the relationship between an autopoietic system and its environment, and in this sense differs from the classical understanding of inputs and outputs. The analogy helps to make sense of why we should think of computing mechanisms as non-representational, and might also facilitate reconciliation between computational and autopoietic/enactive approaches to the study of cognition.
This paper connects information with computation and cognition via the concept of agents that appear at a variety of levels of organization of physical/chemical/cognitive systems – from elementary particles to atoms, molecules, life-like chemical systems, to cognitive systems starting with living cells, up to organisms and ecologies. In order to obtain this generalized framework, the concepts of information, computation and cognition are generalized. In this framework, nature can be seen as an informational structure with computational dynamics, where an (info-computational) agent is needed for the potential information of the world to actualize. Starting from the definition of information as the difference in one physical system that makes a difference in another physical system – which combines Bateson’s and Hewitt’s definitions – the argument is advanced for natural computation as a computational model of the dynamics of the physical world, where information processing is constantly going on, on a variety of levels of organization. This setting helps us to elucidate the relationships between computation, information, agency and cognition within a common conceptual framework, with special relevance for biology and robotics.