References
  • Minds, Brains, and Programs.John Searle - 2003 - In John Heil (ed.), Philosophy of Mind: A Guide and Anthology. New York: Oxford University Press.
  • An essay on the psychology of invention in the mathematical field.Jacques Hadamard - 1946 - Les Etudes Philosophiques 1 (3):252-253.
  • Parapsychology: Science of the anomalous or search for the soul?James E. Alcock - 1987 - Behavioral and Brain Sciences 10 (4):553.
  • The Failures of Computationalism.John R. Searle - 2001.
    Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let's begin by pondering the implications of the Chinese Room. The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese (...)
  • Theory of mind in nonhuman primates.C. M. Heyes - 1998 - Behavioral and Brain Sciences 21 (1):101-114.
    Since the BBS article in which Premack and Woodruff (1978) asked “Does the chimpanzee have a theory of mind?,” it has been repeatedly claimed that there is observational and experimental evidence that apes have mental state concepts, such as “want” and “know.” Unlike research on the development of theory of mind in childhood, however, no substantial progress has been made through this work with nonhuman primates. A survey of empirical studies of imitation, self-recognition, social relationships, deception, role-taking, and perspective-taking suggests (...)
  • Cognitive Science as Reverse Engineering.Daniel C. Dennett - unknown
    The vivid terms, "Top-down" and "Bottom-up" have become popular in several different contexts in cognitive science. My task today is to sort out some different meanings and comment on the relations between them, and their implications for cognitive science.
  • Artificial life: Synthetic versus virtual.Stevan Harnad - 1993 - In Chris Langton (ed.), Santa Fe Institute Studies in the Sciences of Complexity. Volume XVI: 539ff. Addison-Wesley.
    Artificial life can take two forms: synthetic and virtual. In principle, the materials and properties of synthetic living systems could differ radically from those of natural living systems yet still resemble them enough to be really alive if they are grounded in the relevant causal interactions with the real world. Virtual (purely computational) "living" systems, in contrast, are just ungrounded symbol systems that are systematically interpretable as if they were alive; in reality they are no more alive than a virtual (...)
  • Grounding symbols in the analog world with neural nets.Stevan Harnad - 1993 - Think (misc) 2 (1):12-78.
    Harnad's main argument can be roughly summarised as follows: due to Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because the symbols are not grounded in the real world, hence without meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the starting point for the (...)
  • Against computational hermeneutics.Stevan Harnad - 1990 - Social Epistemology 4:167-172.
    Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols.
  • The identity of indiscernibles.Max Black - 1952 - Mind 61 (242):153-164.
  • Turing indistinguishability and the blind watchmaker.Stevan Harnad - 2002 - In James H. Fetzer (ed.), Consciousness Evolving. John Benjamins. pp. 3-18.
    Many special problems crop up when evolutionary theory turns, quite naturally, to the question of the adaptive value and causal role of consciousness in human and nonhuman organisms. One problem is that -- unless we are to be dualists, treating it as an independent nonphysical force -- consciousness could not have had an independent adaptive function of its own, over and above whatever behavioral and physiological functions it "supervenes" on, because evolution is completely blind to the difference between a conscious (...)
  • Consciousness: An afterthought.Stevan Harnad - 1982 - Cognition and Brain Theory 5:29-47.
    There are many possible approaches to the mind/brain problem. One of the most prominent, and perhaps the most practical, is to ignore it.
  • Computation and cognition: Issues in the foundation of cognitive science.Zenon W. Pylyshyn - 1980 - Behavioral and Brain Sciences 3 (1):111-32.
    The computational view of mind rests on certain intuitions regarding the fundamental similarity between computation and cognition. We examine some of these intuitions and suggest that they derive from the fact that computers and human organisms are both physical systems whose behavior is correctly described as being governed by rules acting on symbolic representations. Some of the implications of this view are discussed. It is suggested that a fundamental hypothesis of this approach is that there is a natural domain of (...)
  • Computation and Cognition: Toward a Foundation for Cognitive Science.Zenon W. Pylyshyn - 1984 - Cambridge: MIT Press.
    This systematic investigation of computation and mental phenomena by a noted psychologist and computer scientist argues that cognition is a form of computation, that the semantic contents of mental states are encoded in the same general way as computer representations are encoded. It is a rich and sustained investigation of the assumptions underlying the directions cognitive science research is taking. 1 The Explanatory Vocabulary of Cognition 2 The Explanatory Role of Representations 3 The Relevance of Computation 4 The Psychological Reality (...)
  • Computation is just interpretable symbol manipulation; cognition isn't.Stevan Harnad - 1994 - Minds and Machines 4 (4):379-90.
    Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. But even after computers and computation have been successfully distinguished (...)
  • The Robot's Dilemma: The Frame Problem in Artificial Intelligence.Zenon W. Pylyshyn (ed.) - 1987 - Ablex.
    Each of the chapters in this volume devotes considerable attention to defining and elaborating the notion of the frame problem-one of the hard problems of artificial intelligence. Not only do the chapters clarify the problems at hand, they shed light on the different approaches taken by those in artificial intelligence and by certain philosophers who have been concerned with related problems in their field. The book should therefore not be read merely as a discussion of the frame problem narrowly conceived, (...)
  • Connecting object to symbol in modeling cognition.Stevan Harnad - 1992 - In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer Verlag. pp. 75--90.
    Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990, Fodor 1975, Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988) or, as some have hopefully dubbed it, "subsymbolic" (Smolensky 1988). This paper will examine what is and is not (...)
  • Physical symbol systems.Allen Newell - 1980 - Cognitive Science 4 (2):135-83.
    On the occasion of a first conference on Cognitive Science, it seems appropriate to review the basis of common understanding between the various disciplines. In my estimate, the most fundamental contribution so far of artificial intelligence and computer science to the joint enterprise of cognitive science has been the notion of a physical symbol system, i.e., the concept of a broad class of systems capable of having and manipulating symbols, yet realizable in the physical universe. The notion of symbol so (...)
  • The symbol grounding problem.Stevan Harnad - 1990 - Physica D 42:335-346.
    There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem : How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded (...)
  • Minds, brains, and programs.John Searle - 1980 - Behavioral and Brain Sciences 3 (3):417-57.
    What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI. According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. (...)
  • Minds, Brains and Science.John R. Searle - 1984 - Cambridge: Harvard University Press.
  • Minds, machines and Searle.Stevan Harnad - 1989 - Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.
    Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic model of the mind. Nonsymbolic modeling turns (...)
  • Virtual symposium on virtual mind.Patrick Hayes, Stevan Harnad, Donald Perlis & Ned Block - 1992 - Minds and Machines 2 (3):217-238.
    When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual (...)
  • Computing machinery and intelligence.Alan M. Turing - 1950 - Mind 59 (October):433-60.
    I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous, If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to (...)
  • Naive psychology and the inverted Turing test.S. Watt - 1996 - Psycoloquy 7 (14).
    This target article argues that the Turing test implicitly rests on a "naive psychology," a naturally evolved psychological faculty which is used to predict and understand the behaviour of others in complex societies. This natural faculty is an important and implicit bias in the observer's tendency to ascribe mentality to the system in the test. The paper analyses the effects of this naive psychology on the Turing test, both from the side of the system and the side of the observer, (...)
  • Lessons from a restricted Turing test.Stuart M. Shieber - 1994 - Communications of the Association for Computing Machinery 37:70-82.
  • The truly total Turing test.Paul Schweizer - 1998 - Minds and Machines 8 (2):263-272.
    The paper examines the nature of the behavioral evidence underlying attributions of intelligence in the case of human beings, and how this might be extended to other kinds of cognitive system, in the spirit of the original Turing Test. I consider Harnad's Total Turing Test, which involves successful performance of both linguistic and robotic behavior, and which is often thought to incorporate the very same range of empirical data that is available in the human case. However, I argue that the (...)
  • Subcognition and the limits of the Turing test.Robert M. French - 1990 - Mind 99 (393):53-66.
  • Other bodies, other minds: A machine incarnation of an old philosophical problem. [REVIEW]Stevan Harnad - 1991 - Minds and Machines 1 (1):43-54.
    Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is (...)
  • The Turing Test and the Frame Problem: AI's Mistaken Understanding of Intelligence.Larry Crockett - 1994 - Ablex.
    I have discussed the frame problem and the Turing test at length, but I have not attempted to spell out what I think the implications of the frame problem ...
  • Can machines think?Daniel C. Dennett - 1984 - In Michael G. Shafto (ed.), How We Know. Harper & Row.
  • Consciousness, explanatory inversion and cognitive science.John R. Searle - 1990 - Behavioral and Brain Sciences 13 (1):585-642.
    Cognitive science typically postulates unconscious mental phenomena, computational or otherwise, to explain cognitive capacities. The mental phenomena in question are supposed to be inaccessible in principle to consciousness. I try to show that this is a mistake, because all unconscious intentionality must be accessible in principle to consciousness; we have no notion of intrinsic intentionality except in terms of its accessibility to consciousness. I call this claim the Connection Principle. The argument for it proceeds in six steps. The essential point is that (...)
  • Why and how we are not zombies.Stevan Harnad - 1994 - Journal of Consciousness Studies 1 (2):164-67.
    A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.
  • What is it like to be a bat?Thomas Nagel - 1974 - Philosophical Review 83 (October):435-50.
  • The Verbal Icon.W. K. Wimsatt - 1955 - Journal of Aesthetics and Art Criticism 13 (3):414-414.
  • [Book Chapter].Stevan Harnad - 1987
  • Primate theory of mind is a Turing test.Robert W. Mitchell & James R. Anderson - 1998 - Behavioral and Brain Sciences 21 (1):127-128.
    Heyes's literature review of deception, imitation, and self-recognition is inadequate, misleading, and erroneous. The anaesthetic artifact hypothesis of self-recognition is unsupported by the data she herself examines. Her proposed experiment is tantalizing, indicating that theory of mind is simply a Turing test.
  • Neoconstructivism: A unifying constraint for the cognitive sciences.Stevan Harnad - 1982 - In Thomas W. Simon & Robert J. Scholes (eds.), [Book Chapter]. Lawrence Erlbaum. pp. 1-11.
    Behavioral scientists studied behavior; cognitive scientists study what generates behavior. Cognitive science is hence theoretical behaviorism (or behaviorism is experimental cognitivism). Behavior is data for a cognitive theorist. What counts as a theory of behavior? In this paper, a methodological constraint on theory construction -- "neoconstructivism" -- will be proposed (by analogy with constructivism in mathematics): Cognitive theory must be computable; given an encoding of the input to a behaving system, a theory must be able to compute (an encoding of) (...)
  • The Turing test is not a trick: Turing indistinguishability is a scientific criterion.Stevan Harnad - 1992 - SIGART Bulletin 3 (4):9-10.
    It is important to understand that the Turing Test is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: It sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish (...)
  • What is it like to be a bat?Thomas Nagel - 2004 - In Tim Crane & Katalin Farkas (eds.), Metaphysics: a guide and anthology. New York: Oxford University Press.
  • Discussion (passim).Stevan Harnad - 1987 - In [Book Chapter].
  • The origin of words: A psychophysical hypothesis.Stevan Harnad - 1987 - In [Book Chapter].
    It is hypothesized that words originated as the names of perceptual categories and that two forms of representation underlying perceptual categorization -- iconic and categorical representations -- served to ground a third, symbolic, form of representation. The third form of representation made it possible to name and describe our environment, chiefly in terms of categories, their memberships, and their invariant features. Symbolic representations can be shared because they are intertranslatable. Both categorization and translation are approximate rather than exact, but the (...)
  • Does mind piggyback on robotic and symbolic capacity?Stevan Harnad - 1995 - In Harold J. Morowitz & Jerome L. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison-Wesley.
    Cognitive science is a form of "reverse engineering" (as Dennett has dubbed it). We are trying to explain the mind by building (or explaining the functional principles of) systems that have minds. A "Turing" hierarchy of empirical constraints can be applied to this task, from t1, toy models that capture only an arbitrary fragment of our performance capacity, to T2, the standard "pen-pal" Turing Test (total symbolic capacity), to T3, the Total Turing Test (total symbolic plus robotic capacity), to T4 (...)
  • Reaping the whirlwind. [REVIEW]L. Hauser - 1993 - Minds and Machines 3 (2):219-237.
    Harnad's proposed "robotic upgrade" of Turing's Test, from a test of linguistic capacity alone to a Total Turing Test of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought because there is "no evidence" [p.45] of consciousness besides "private experience" [p.52]. Intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from "as if" thought on the (...)
  • Lost in the hermeneutic hall of mirrors.Stevan Harnad - 1990 - Journal of Experimental and Theoretical Artificial Intelligence 2:321-27.
    Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols. Projecting an interpretation onto a symbol system results in its being reflected back, in a spuriously self-confirming way.
  • Correlation vs. causality: How/why the mind-body problem is hard.Stevan Harnad - 2000 - Journal of Consciousness Studies 7 (4):54-61.
    The Mind/Body Problem is about causation not correlation. And its solution will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it. Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the (...)